English | 中文 | 日本語
AgentScope
Start building LLM-empowered multi-agent applications in an easier way.


- If you find our work helpful, please kindly cite our paper.
- Visit our workstation to build multi-agent applications with drag and drop.
- Welcome to join our community.
News
- [2024-07-15] AgentScope has implemented the Mixture-of-Agents algorithm. Refer to our MoA example for more details.
- [2024-06-14] A new prompt tuning module is available in AgentScope to help developers generate and optimize their agents' system prompts! Refer to our tutorial for more details!
- [2024-06-11] RAG functionality is now available for agents in AgentScope! A quick introduction to RAG in AgentScope can help you equip your agent with external knowledge!
- [2024-06-09] We have released AgentScope v0.0.5! In this new version, AgentScope Workstation (the online version runs on agentscope.io) is open-sourced together with the refactored AgentScope Studio!
Full News
- [2024-05-24] We are pleased to announce that features related to the AgentScope Workstation will soon be open-sourced! The online website service is temporarily offline while it is upgraded and will be back online shortly. Stay tuned...
- [2024-05-15] A new parser module for formatted responses has been added to AgentScope! Refer to our tutorial for more details. The DictDialogAgent and the werewolf game example have been updated accordingly. Demo video: https://github.com/qbc2016/AgentScope/assets/22984042/22d45aee-3470-4923-850f-348a5b0faaa7
- [2024-05-14] Dear AgentScope users, we are conducting a survey on the AgentScope Workstation & Copilot user experience. We need your valuable feedback to help us improve the drag-and-drop multi-agent application development experience and the Copilot. The survey takes about 3-5 minutes; please click URL to participate. Thank you very much for your support and contribution!
- [2024-05-14] AgentScope now supports gpt-4o as well as other OpenAI vision models! Try gpt-4o with its model configuration and the new example Conversation with gpt-4o!
- [2024-04-30] We have released AgentScope v0.0.4!
- [2024-04-27] AgentScope Workstation is now online! You are welcome to try building your multi-agent application with our drag-and-drop platform and to ask our copilot questions about AgentScope!
- [2024-04-19] AgentScope now supports Llama3! We provide scripts and an example model configuration for quick set-up. Feel free to try Llama3 in our examples!
- [2024-04-06] We have released AgentScope v0.0.3!
- [2024-04-06] New examples Gomoku, Conversation with ReAct Agent, Conversation with RAG Agent, and Distributed Parallel Optimization are now available!
- [2024-03-19] We have released AgentScope v0.0.2! In this new version, AgentScope supports ollama (a local CPU inference engine), as well as the DashScope and Google Gemini APIs.
- [2024-03-19] New examples "Autonomous Conversation with Mentions" and "Basic Conversation with LangChain library" are now available!
- [2024-03-19] The Chinese tutorial of AgentScope is now online!
- [2024-02-27] We have released AgentScope v0.0.1, which is also available on PyPI!
- [2024-02-14] Our paper "AgentScope: A Flexible yet Robust Multi-Agent Platform" is now available on arXiv!
What's AgentScope?
AgentScope is an innovative multi-agent platform designed to empower developers
to build multi-agent applications with large-scale models.
It features three high-level capabilities:
- 🤝 Easy-to-Use: Designed for developers, with rich components, comprehensive documentation, and broad compatibility. In addition, AgentScope Workstation provides a drag-and-drop programming platform and a copilot for AgentScope beginners!
- ✅ High Robustness: Supporting customized fault-tolerance controls and retry mechanisms to enhance application stability.
- 🚀 Actor-Based Distribution: Building distributed multi-agent applications in a centralized programming manner for streamlined development (see the sketch after this list).
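To make the actor-based distribution concrete, here is a minimal sketch of converting an ordinary agent into a distributed one. It assumes agents expose a to_dist() conversion as described in the distribution tutorial; the agent and configuration names are placeholders, and the exact API may differ across versions.

```python
# A minimal sketch of actor-based distribution, assuming agents expose a
# to_dist() conversion as described in the distribution tutorial; the exact
# API may differ across versions, and the names below are placeholders.
from agentscope.agents import DialogAgent
import agentscope

agentscope.init(model_configs="./model_configs.json")

# Convert the agent into a distributed actor; invocations are dispatched to
# a separate process while the orchestration code stays centralized.
worker = DialogAgent(
    name="worker",
    model_config_name="my_openai_config",
).to_dist()

reply = worker(None)  # called exactly like a local agent
```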
Supported Model Libraries
AgentScope provides a list of ModelWrapper classes to support both local model services and third-party model APIs.
| API | Task | Model Wrapper | Configuration | Some Supported Models |
|---|---|---|---|---|
| OpenAI API | Chat | OpenAIChatWrapper | guidance template | gpt-4o, gpt-4, gpt-3.5-turbo, ... |
| | Embedding | OpenAIEmbeddingWrapper | guidance template | text-embedding-ada-002, ... |
| | DALL·E | OpenAIDALLEWrapper | guidance template | dall-e-2, dall-e-3 |
| DashScope API | Chat | DashScopeChatWrapper | guidance template | qwen-plus, qwen-max, ... |
| | Image Synthesis | DashScopeImageSynthesisWrapper | guidance template | wanx-v1 |
| | Text Embedding | DashScopeTextEmbeddingWrapper | guidance template | text-embedding-v1, text-embedding-v2, ... |
| | Multimodal | DashScopeMultiModalWrapper | guidance template | qwen-vl-max, qwen-vl-chat-v1, qwen-audio-chat |
| Gemini API | Chat | GeminiChatWrapper | guidance template | gemini-pro, ... |
| | Embedding | GeminiEmbeddingWrapper | guidance template | models/embedding-001, ... |
| ZhipuAI API | Chat | ZhipuAIChatWrapper | guidance template | glm-4, ... |
| | Embedding | ZhipuAIEmbeddingWrapper | guidance template | embedding-2, ... |
| ollama | Chat | OllamaChatWrapper | guidance template | llama3, llama2, Mistral, ... |
| | Embedding | OllamaEmbeddingWrapper | guidance template | llama2, Mistral, ... |
| | Generation | OllamaGenerationWrapper | guidance template | llama2, Mistral, ... |
| LiteLLM API | Chat | LiteLLMChatWrapper | guidance template | models supported by litellm, ... |
| Yi API | Chat | YiChatWrapper | guidance template | yi-large, yi-medium, ... |
| Post Request based API | - | PostAPIModelWrapper | guidance template | - |
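As a quick illustration of how a wrapper is selected, the sketch below shows a model configuration targeting the ollama chat wrapper. The model_type string "ollama_chat" is an assumption inferred from the wrapper naming convention; check the corresponding guidance template for the exact value in your version.

```python
# A hypothetical configuration for the ollama chat wrapper.
# Assumption: the wrapper is selected via model_type="ollama_chat"; verify
# the exact string against the configuration guidance for your version.
ollama_model_config = {
    "config_name": "my_ollama_config",  # name that agents reference
    "model_type": "ollama_chat",        # assumed to map to OllamaChatWrapper
    "model_name": "llama3",             # a model pulled locally via ollama
}
```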
Supported Local Model Deployment
AgentScope enables developers to rapidly deploy local model services using
the following libraries.
Supported Services
- Web Search
- Data Query
- Retrieval
- Code Execution
- File Operation
- Text Processing
- Multi Modality
- Wikipedia Search and Retrieval
- TripAdvisor Search
- Web Browser Control
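As a hedged illustration of how the services listed above are invoked, the sketch below calls the code-execution service directly. The function name execute_python_code and the ServiceResponse fields used here are assumptions based on the service module's documentation and may differ in your version.

```python
# A minimal sketch of calling a built-in service function directly.
# Assumption: agentscope.service exposes execute_python_code and returns a
# ServiceResponse with status and content fields; verify against your version.
from agentscope.service import execute_python_code

response = execute_python_code(code="print(1 + 1)")
print(response.status)   # execution status of the snippet
print(response.content)  # captured output
```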
Example Applications
- Model
- Conversation
- Game
- Distribution
More models, services and examples are coming soon!
Installation
AgentScope requires Python 3.9 or higher.
Note: This project is under active development; installing AgentScope from source is recommended.
From source
- Install AgentScope in editable mode:
```bash
git clone https://github.com/modelscope/agentscope.git
cd agentscope
pip install -e .
```
Using pip
- Install AgentScope from pip:
```bash
pip install agentscope
```
To support different deployment scenarios, AgentScope provides several optional dependencies. For the full list of optional dependencies, refer to the tutorial.
Taking distribution mode as an example, you can install its dependencies as follows:
On Windows
```bash
# From source
pip install -e .[distribute]
# From pip
pip install agentscope[distribute]
```
On Mac & Linux
```bash
# From source
pip install -e .\[distribute\]
# From pip
pip install agentscope\[distribute\]
```
Quick Start
Configuration
In AgentScope, model deployment and invocation are decoupled by ModelWrapper. To use these model wrappers, you need to prepare a model configuration as follows.
```python
model_config = {
    "config_name": "{your_config_name}",
    "model_type": "{model_type}",
}
```
Taking OpenAI Chat API as an example, the model configuration is as follows:
```python
openai_model_config = {
    "config_name": "my_openai_config",
    "model_type": "openai_chat",
    "model_name": "gpt-4",
    "api_key": "xxx",
    "organization": "xxx",
}
```
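Configurations are registered when initializing AgentScope. The snippet below assumes agentscope.init also accepts in-memory configurations (a dict or a list of dicts) in addition to the path to a JSON file shown in the next section; treat that as an assumption to verify against the tutorial.

```python
import agentscope

# Assumption: model_configs accepts a list of configuration dicts as well as
# a path to a JSON file; the file-path form is shown in the next section.
agentscope.init(model_configs=[openai_model_config])
```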
More details about how to set up local model services and prepare model configurations can be found in our tutorial.
Create Agents
Create built-in user and assistant agents as follows.
```python
from agentscope.agents import DialogAgent, UserAgent
import agentscope

# Load the model configurations
agentscope.init(model_configs="./model_configs.json")

dialog_agent = DialogAgent(name="assistant", model_config_name="my_openai_config")
user_agent = UserAgent()
```
Construct Conversation
In AgentScope, the message is the bridge among agents. A message is a dict that contains two required fields, name and content, and an optional field url pointing to local files (image, video, or audio) or a website.
```python
from agentscope.message import Msg

x = Msg(name="Alice", content="Hi!")
x = Msg("Bob", "What about this picture I took?", url="/path/to/picture.jpg")
```
Start a conversation between two agents (e.g. dialog_agent and user_agent)
with the following code:
```python
x = None
while True:
    x = dialog_agent(x)
    x = user_agent(x)
    if x.content == "exit":
        break
```
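For longer agent chains, the same exchange can be written more compactly with AgentScope's pipeline helpers. The sketch below assumes a sequentialpipeline helper under agentscope.pipelines.functional that passes the message through the listed agents in order; treat the import path and signature as assumptions that may differ by version.

```python
# A minimal sketch using a sequential pipeline instead of an explicit loop.
# Assumption: sequentialpipeline lives in agentscope.pipelines.functional and
# takes (agents, x); verify the import path and signature for your version.
from agentscope.pipelines.functional import sequentialpipeline

x = None
while x is None or x.content != "exit":
    x = sequentialpipeline([dialog_agent, user_agent], x)
```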
AgentScope Studio
AgentScope provides an easy-to-use runtime user interface capable of
displaying multimodal output on the front end, including text, images,
audio and video.
Refer to our tutorial for more details.
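As a hedged illustration, the snippet below sketches how an application might connect to a locally running Studio instance so that its messages show up in the web UI. The studio_url parameter and the address used here are assumptions drawn from the tutorial and may differ in your version.

```python
import agentscope

# Assumption: agentscope.init accepts a studio_url pointing at a running
# AgentScope Studio instance; the address below is a hypothetical default.
agentscope.init(
    model_configs="./model_configs.json",
    studio_url="http://127.0.0.1:5000",
)
```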
Tutorial
License
AgentScope is released under Apache License 2.0.
Contributing
Contributions are always welcome!
We provide a developer version with additional pre-commit hooks that perform extra checks compared to the official version:
```bash
# On Windows
pip install -e .[dev]
# On Mac & Linux
pip install -e .\[dev\]

pre-commit install
```
Please refer to our Contribution Guide for more details.
Publications
If you find our work helpful for your research or application, please cite our papers.
-
AgentScope: A Flexible yet Robust Multi-Agent Platform
```
@article{agentscope,
  author  = {Dawei Gao and
             Zitao Li and
             Xuchen Pan and
             Weirui Kuang and
             Zhijian Ma and
             Bingchen Qian and
             Fei Wei and
             Wenhao Zhang and
             Yuexiang Xie and
             Daoyuan Chen and
             Liuyi Yao and
             Hongyi Peng and
             Ze Yu Zhang and
             Lin Zhu and
             Chen Cheng and
             Hongzhu Shi and
             Yaliang Li and
             Bolin Ding and
             Jingren Zhou},
  title   = {AgentScope: A Flexible yet Robust Multi-Agent Platform},
  journal = {CoRR},
  volume  = {abs/2402.14034},
  year    = {2024},
}
```