
ai-assistant-manager
This repository provides tools and services to manage OpenAI Assistants, including creating, listing, and deleting assistants, as well as handling vector stores and retrieval files.
AI Assistant Manager is an open-source tool designed to simplify the management of OpenAI Assistants. It provides a suite of tools and services for creating, listing, and deleting assistants, as well as handling vector stores and retrieval files. The project includes both end-to-end and unit tests, leveraging the Hatch build system for environment management and testing.
By automating the management of AI assistants and their associated resources, AI Assistant Manager streamlines workflows for developers working with OpenAI's API. It reduces the complexity involved in assistant lifecycle management, vector store handling, and file operations, allowing developers to focus on building intelligent applications without getting bogged down in infrastructure details.
AI Assistant Manager is available on PyPI and can be installed using pip:
pip install ai-assistant-manager
For more details, visit the PyPI project page.
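After installing, a quick standard-library check can confirm the package is visible to your interpreter (purely illustrative):

```python
# Query the installed version via importlib.metadata (Python 3.8+).
from importlib import metadata

try:
    version = metadata.version("ai-assistant-manager")
except metadata.PackageNotFoundError:
    version = None

print(version or "ai-assistant-manager is not installed")
```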
Clone the repository:
git clone https://github.com/DEV3L/ai-assistant-manager
cd ai-assistant-manager
Set up environment variables:
Copy the env.local file to .env and replace the placeholders with your actual OpenAI API key:
cp env.local .env
Edit .env to add your OPENAI_API_KEY:
OPENAI_API_KEY=your_openai_api_key
Set up a virtual environment:
Install Hatch (if not already installed):
pip install hatch
Create and activate the virtual environment:
hatch env create
hatch shell
Configure the following environment variables in your .env file:

- OPENAI_API_KEY: Your OpenAI API key.
- OPENAI_MODEL: The model to use (default: gpt-4o-2024-08-06).
- ASSISTANT_DESCRIPTION: Description of the assistant (default: AI Assistant Manager).
- ASSISTANT_NAME: Name of the assistant (default: AI Assistant Manager).
- BIN_DIR: Directory for binaries (default: bin).
- DATA_DIR: Directory for data files (default: data).
- DATA_FILE_PREFIX: Prefix for data files (default: AI Assistant Manager).

To see AI Assistant Manager in action, you can run the provided example script:
from loguru import logger

from ai_assistant_manager.assistants.assistant_service import AssistantService
from ai_assistant_manager.chats.chat import Chat
from ai_assistant_manager.clients.openai_api import OpenAIClient, build_openai_client
from ai_assistant_manager.env_variables import set_env_variables
from ai_assistant_manager.exporters.directory.directory_exporter import DirectoryExporter
from ai_assistant_manager.exporters.files.files_exporter import FilesExporter
from ai_assistant_manager.prompts.prompt import get_prompt


def main():
    DirectoryExporter("directory").export()
    FilesExporter("about.txt").export()

    assistant_name = "AI-Assistant-Manager-Test"
    logger.info(f"Building {assistant_name}")

    client = OpenAIClient(build_openai_client())
    service = AssistantService(client, get_prompt())

    logger.info("Removing existing assistant and category files")
    service.delete_assistant()

    assistant_id = service.get_assistant_id()
    logger.info(f"Assistant ID: {assistant_id}")

    chat = Chat(client, assistant_id)
    chat.start()

    message = "What is the AI Assistant Manager?"
    print(f"\nMessage:\n{message}")

    chat_response = chat.send_user_message(message)
    print(f"\n{service.assistant_name}:\n{chat_response.message}")
    print(f"\nTokens: {chat_response.token_count}")

    service.delete_assistant()


if __name__ == "__main__":
    try:
        set_env_variables()
        main()
    except Exception as e:
        logger.info(f"Error: {e}")
Run the script:

python run_end_to_end.py

This script will export the sample data, remove any existing assistant, create a new one, start a chat, send a test message, print the response and token count, and delete the assistant when finished.
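The create-use-delete lifecycle in the example can be made more robust with try/finally, so the assistant is cleaned up even when the chat raises. A generic sketch using a stand-in service object (all names here are hypothetical, not part of the package's API):

```python
from contextlib import contextmanager


class StubAssistantService:
    """Stand-in for AssistantService; exists only to make the sketch runnable."""

    def __init__(self):
        self.deleted = False

    def get_assistant_id(self):
        return "asst_stub_123"

    def delete_assistant(self):
        self.deleted = True


@contextmanager
def managed_assistant(service):
    """Yield an assistant ID and guarantee cleanup, even if the body raises."""
    try:
        yield service.get_assistant_id()
    finally:
        service.delete_assistant()


service = StubAssistantService()
with managed_assistant(service) as assistant_id:
    print(f"Chatting with {assistant_id}")

print(f"Assistant deleted: {service.deleted}")
```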
Run End-to-End Test:
hatch run e2e
Run Unit Tests:
hatch run test
Publish Package to PyPI:
hatch run publish
Note: These scripts are defined in pyproject.toml under [tool.hatch.envs.default.scripts].
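A scripts table of roughly this shape would produce the commands above (the exact entries here are assumptions for illustration, not copied from the project):

```toml
[tool.hatch.envs.default.scripts]
e2e = "python run_end_to_end.py"
test = "pytest --cov=ai_assistant_manager"
publish = "hatch build && hatch publish"
```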
Run the end-to-end test to ensure the tool works as expected:
hatch run e2e
To run unit tests:
hatch run test
Coverage reports are generated using pytest-cov.
To monitor code coverage in VSCode:
Install the Coverage Gutters extension.
Run:
Command + Shift + P => Coverage Gutters: Watch
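Coverage Gutters watches for a coverage file in the workspace; pytest-cov can be configured to emit one through pyproject.toml with a fragment like this (the options shown are assumptions, not the project's actual configuration):

```toml
[tool.pytest.ini_options]
addopts = "--cov=ai_assistant_manager --cov-report=xml --cov-report=term"
```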
ai-assistant-manager/
├── ai_assistant_manager/
│ ├── assistants/
│ │ └── assistant_service.py
│ ├── chats/
│ │ ├── chat.py
│ │ └── chat_response.py
│ ├── clients/
│ │ └── openai_api.py
│ ├── exporters/
│ │ ├── directory/
│ │ │ └── directory_exporter.py
│ │ ├── files/
│ │ │ └── files_exporter.py
│ │ └── exporter.py
│ ├── prompts/
│ │ ├── sample_prompt.md
│ │ └── prompt.py
│ ├── content_data.py
│ ├── env_variables.py
│ └── encoding.py
├── tests/
│ ├── assistants/
│ │ └── assistant_service_test.py
│ ├── chats/
│ │ ├── chat_test.py
│ │ └── chat_response_test.py
│ ├── clients/
│ │ └── openai_api_test.py
│ ├── exporters/
│ │ ├── directory/
│ │ │ └── directory_exporter_test.py
│ │ ├── files/
│ │ │ └── files_exporter_test.py
│ │ └── exporter_test.py
│ ├── prompts/
│ │ └── prompt_test.py
│ ├── env_variables_test.py
│ └── timer_test.py
├── .env.default
├── pyproject.toml
├── README.md
├── run_end_to_end.py
├── LICENSE
We welcome contributions! Please follow these steps:
Fork the repository on GitHub.
Create a new branch for your feature or bugfix:
git checkout -b feature/your-feature-name
Make your changes and commit them with clear messages.
Run tests to ensure nothing is broken:
hatch run test
Push to your fork and submit a pull request to the main branch.
By participating in this project, you agree to abide by the project's contribution guidelines.
This project is licensed under the MIT License. See the LICENSE file for details.