
ai-assistant-manager
This repository provides tools and services to manage OpenAI Assistants, including creating, listing, and deleting assistants, as well as handling vector stores and retrieval files.
AI Assistant Manager is an open-source tool designed to simplify the management of OpenAI Assistants. It provides a suite of tools and services for creating, listing, and deleting assistants, as well as handling vector stores and retrieval files. The project includes both end-to-end and unit tests, leveraging the Hatch build system for environment management and testing.
By automating the management of AI assistants and their associated resources, AI Assistant Manager streamlines workflows for developers working with OpenAI's API. It reduces the complexity involved in assistant lifecycle management, vector store handling, and file operations, allowing developers to focus on building intelligent applications without getting bogged down in infrastructure details.
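For a quick sense of the workflow, here is a condensed sketch that strings together the classes used in the full examples later in this README. The class and method names are taken from those examples; treat it as an illustration rather than a canonical quick start.

from ai_assistant_manager.assistants.assistant_service import AssistantService
from ai_assistant_manager.chats.chat import Chat
from ai_assistant_manager.clients.openai_api import OpenAIClient, build_openai_client
from ai_assistant_manager.env_variables import set_env_variables
from ai_assistant_manager.prompts.prompt import get_prompt

# Load settings such as OPENAI_API_KEY from the .env file
set_env_variables()

client = OpenAIClient(build_openai_client())
service = AssistantService(client, prompt=get_prompt())

# Look up the assistant's ID (the full examples suggest this creates the assistant when missing)
assistant_id = service.get_assistant_id()

chat = Chat(client, assistant_id)
chat.start()
response = chat.send_user_message("What can you do?")
print(response.message)

# Remove the assistant when you are done with it
service.delete_assistant()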
You can hook into requires_action tool calls from OpenAI, enabling dynamic responses based on assistant actions.
AI Assistant Manager is available on PyPI and can be installed using pip:
pip install ai-assistant-manager
For more details, visit the PyPI project page.
Clone the repository:
git clone https://github.com/DEV3L/ai-assistant-manager
cd ai-assistant-manager
Set up environment variables:
Copy the env.local file to .env and replace the placeholders with your actual OpenAI API key:
cp env.local .env
Edit .env to add your OPENAI_API_KEY:
OPENAI_API_KEY=your_openai_api_key
Set up a virtual environment:
Install Hatch (if not already installed):
pip install hatch
Create and activate the virtual environment:
hatch env create
hatch shell
Configure the following environment variables in your .env file:
OPENAI_API_KEY: Your OpenAI API key.
OPENAI_MODEL: The model to use (default: gpt-4o-2024-08-06).
ASSISTANT_DESCRIPTION: Description of the assistant (default: AI Assistant Manager).
ASSISTANT_NAME: Name of the assistant (default: AI Assistant Manager).
BIN_DIR: Directory for binaries (default: bin).
DATA_DIR: Directory for data files (default: data).
DATA_FILE_PREFIX: Prefix for data files (default: AI Assistant Manager).
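For reference, a .env file that keeps all of the defaults listed above might look like this (the API key value is a placeholder):

OPENAI_API_KEY=your_openai_api_key
OPENAI_MODEL=gpt-4o-2024-08-06
ASSISTANT_DESCRIPTION=AI Assistant Manager
ASSISTANT_NAME=AI Assistant Manager
BIN_DIR=bin
DATA_DIR=data
DATA_FILE_PREFIX=AI Assistant Manager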
To see AI Assistant Manager in action, you can run the provided example scripts:
from loguru import logger
from ai_assistant_manager.assistants.assistant_service import AssistantService
from ai_assistant_manager.chats.chat import Chat
from ai_assistant_manager.clients.openai_api import OpenAIClient, build_openai_client
from ai_assistant_manager.env_variables import set_env_variables
from ai_assistant_manager.exporters.directory.directory_exporter import DirectoryExporter
from ai_assistant_manager.exporters.files.files_exporter import FilesExporter
from ai_assistant_manager.prompts.prompt import get_prompt
def main():
    # Export the sample data used by the assistant
    DirectoryExporter("directory").export()
    FilesExporter("about.txt").export()

    assistant_name = "AI-Assistant-Manager-Test"
    logger.info(f"Building {assistant_name}")

    client = OpenAIClient(build_openai_client())
    service = AssistantService(client, prompt=get_prompt())

    # Start from a clean slate before creating the assistant
    logger.info("Removing existing assistant and category files")
    service.delete_assistant()

    assistant_id = service.get_assistant_id()
    logger.info(f"Assistant ID: {assistant_id}")

    chat = Chat(client, assistant_id)
    chat.start()

    message = "What is the AI Assistant Manager?"
    print(f"\nMessage:\n{message}")

    chat_response = chat.send_user_message(message)
    print(f"\n{service.assistant_name}:\n{chat_response.message}")
    print(f"\nTokens: {chat_response.token_count}")

    # Clean up the assistant created for this run
    service.delete_assistant()


if __name__ == "__main__":
    try:
        set_env_variables()
        main()
    except Exception as e:
        logger.info(f"Error: {e}")
To use the new feature of hooking into requires_action tool calls, run the enhanced example script:
from loguru import logger
from ai_assistant_manager.assistants.assistant_service import (
RETRIEVAL_TOOLS,
AssistantService,
)
from ai_assistant_manager.chats.chat import Chat, RequiresActionException
from ai_assistant_manager.clients.openai_api import OpenAIClient, build_openai_client
from ai_assistant_manager.env_variables import ENV_VARIABLES, set_env_variables
from ai_assistant_manager.exporters.directory.directory_exporter import DirectoryExporter
from ai_assistant_manager.exporters.files.files_exporter import FilesExporter
from ai_assistant_manager.prompts.prompt import SAMPLE_PROMPT_PATH_WITH_TOOLS, get_prompt
from ai_assistant_manager.tools.tools import get_tools
from ai_assistant_manager.tools.weather import get_weather
assistant_name = "AI-Assistant-Manager-Tool-Test"
def main():
DirectoryExporter("directory").export()
FilesExporter("about.txt").export()
logger.info(f"Building {assistant_name}")
tools_from_file = get_tools()
tools_from_file.extend(RETRIEVAL_TOOLS)
client = OpenAIClient(build_openai_client())
service = AssistantService(client, prompt=get_prompt(prompt_path=SAMPLE_PROMPT_PATH_WITH_TOOLS), tools=tools_from_file)
logger.info("Removing existing assistant and category files")
service.delete_assistant()
assistant_id = service.get_assistant_id()
logger.info(f"Assistant ID: {assistant_id}")
chat = Chat(client, assistant_id)
chat.start()
message = "What is the weather like today?"
print(f"\nMessage:\n{message}")
try:
chat_response = chat.send_user_message(message)
assert False
except RequiresActionException as e:
print(f"\n{service.assistant_name}:\nTOOL_CALL: {e.data}")
weather_result = get_weather(e.data.arguments["location"])
print(weather_result)
chat_response = chat.submit_tool_outputs(e.data.run_id, e.data.tool_call_id, weather_result)
print(f"\n{service.assistant_name}:\n{chat_response.message}")
print(f"\nTokens: {chat_response.token_count}")
service.delete_assistant()
if __name__ == "__main__":
try:
set_env_variables()
ENV_VARIABLES.assistant_name = assistant_name
main()
except Exception as e:
logger.info(f"Error: {e}")
To run the first example script:
python run_end_to_end.py
This script will export the sample data, create the assistant, start a chat, send a message and print the response, and then delete the assistant.
To run the enhanced example with tools:
hatch run e2e_with_tools
This script demonstrates the new feature by loading tool definitions and handling requires_action tool calls.
Run End-to-End Test:
hatch run e2e
Run End-to-End Test with Tools:
hatch run e2e_with_tools
Run Unit Tests:
hatch run test
Publish Package to PyPI:
hatch run publish
Note: These scripts are defined in pyproject.toml under [tool.hatch.envs.default.scripts].
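As an illustration, that script table likely maps the names above to the example and test entry points roughly as follows; the exact commands are assumptions rather than copies from the project:

[tool.hatch.envs.default.scripts]
e2e = "python run_end_to_end.py"
e2e_with_tools = "python run_end_to_end_with_tools.py"
test = "pytest --cov=ai_assistant_manager"

(The publish script is defined in the same table; its exact command is not reproduced here.)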
Run the end-to-end test to ensure the tool works as expected:
hatch run e2e
To test the new requires_action tool call feature:
hatch run e2e_with_tools
To run unit tests:
hatch run test
Coverage reports are generated using pytest-cov.
To monitor code coverage in VSCode:
Install the Coverage Gutters extension.
Run:
Command + Shift + P => Coverage Gutters: Watch
ai-assistant-manager/
├── ai_assistant_manager/
│   ├── assistants/
│   │   └── assistant_service.py
│   ├── chats/
│   │   ├── chat.py
│   │   └── chat_response.py
│   ├── clients/
│   │   └── openai_api.py
│   ├── exporters/
│   │   ├── directory/
│   │   │   └── directory_exporter.py
│   │   ├── files/
│   │   │   └── files_exporter.py
│   │   └── exporter.py
│   ├── prompts/
│   │   ├── sample_prompt.md
│   │   ├── sample_prompt_with_tool_call.md
│   │   └── prompt.py
│   ├── tools/
│   │   ├── tools.py
│   │   └── weather.py
│   ├── content_data.py
│   ├── env_variables.py
│   └── encoding.py
├── tests/
│   ├── assistants/
│   │   └── assistant_service_test.py
│   ├── chats/
│   │   ├── chat_test.py
│   │   └── chat_response_test.py
│   ├── clients/
│   │   └── openai_api_test.py
│   ├── exporters/
│   │   ├── directory/
│   │   │   └── directory_exporter_test.py
│   │   ├── files/
│   │   │   └── files_exporter_test.py
│   │   └── exporter_test.py
│   ├── prompts/
│   │   └── prompt_test.py
│   ├── tools/
│   │   └── tools_test.py
│   ├── env_variables_test.py
│   └── timer_test.py
├── .env.default
├── pyproject.toml
├── README.md
├── run_end_to_end.py
├── run_end_to_end_with_tools.py
└── LICENSE
We welcome contributions! Please follow these steps:
Fork the repository on GitHub.
Create a new branch for your feature or bugfix:
git checkout -b feature/your-feature-name
Make your changes and commit them with clear messages.
Run tests to ensure nothing is broken:
hatch run test
Push to your fork and submit a pull request to the main branch.
By participating in this project, you agree to abide by these contribution guidelines.
This project is licensed under the MIT License. See the LICENSE file for details.