LangAgent is a versatile multi-agent system designed to streamline complex tasks such as research, automated code generation, logical reasoning, data analysis, and dynamic reporting. Powered by advanced language models, it integrates seamlessly with external APIs, databases, and a variety of data formats, enabling developers, data analysts, and business professionals to automate workflows, extract insights, and generate professional-grade reports.
You can install the `langagent` package via pip:

```shell
pip install langagent==2.1.6
```
To start using the langagent library, you need to import the agents from their respective teams. Below is a detailed guide explaining each agent, its arguments, and how to effectively use them in your projects.
The Research Team consists of three agents: Researcher, Coder, and Weather. Each agent is designed to assist with gathering research, generating code, or fetching weather data.
Researcher Agent
Arguments:

- `llm`: The language model to be used (e.g., `ChatOpenAI`).
- `messages`: A list of `HumanMessage` objects representing the research query.

Example:
```python
import os

import yaml
from langagent.research_team.agents import researcher
from langchain_core.messages import HumanMessage
from langchain_openai import ChatOpenAI

llm = ChatOpenAI()
os.environ['TAVILY_API_KEY'] = yaml.safe_load(open('credentials.yml'))['online']

search = {
    'api_key': os.environ.get("TAVILY_API_KEY"),
    'max_results': 5,
    'search_depth': "advanced"
}

researcher_agent = researcher.researcher(llm=llm, tavily_search=search)

# Invoke the agent with a query
result = researcher_agent.invoke({"messages": [HumanMessage(content="Who is Messi?")]})
print(result['output'])
```
Example of Researcher Workflow
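The researcher example reads the Tavily key out of a `credentials.yml` file. An alternative, sketched below with only the standard library (this helper is hypothetical, not part of the langagent API), is to take the key straight from the environment and fail early when it is missing; the variable name `TAVILY_API_KEY` matches the one the example sets:

```python
import os

# Sketch: fetch the Tavily key from the environment instead of credentials.yml,
# raising a clear error if it has not been set.
def get_tavily_key() -> str:
    key = os.environ.get("TAVILY_API_KEY")
    if not key:
        raise RuntimeError("Set TAVILY_API_KEY before creating the researcher agent")
    return key

os.environ["TAVILY_API_KEY"] = "demo-key"  # placeholder value for illustration
print(get_tavily_key())  # demo-key
```

This keeps secrets out of files checked into version control, at the cost of requiring the variable to be exported before the script runs.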
Coder Agent
Arguments:

- `llm`: The language model you want to use.
- `messages`: A list of `HumanMessage` objects representing the code generation task.

Example:
```python
from langagent.research_team.agents import coder
from langchain_core.messages import HumanMessage
from langchain_openai import ChatOpenAI

llm = ChatOpenAI()
coder_agent = coder(llm)

# Ask the agent to generate code
result = coder_agent.invoke({"messages": [HumanMessage(content="Write a Python function to calculate the factorial of a number.")]})
print(result['output'])
```
Example of Coder Workflow
Weather Agent
Arguments:

- `llm`: The language model to be used.
- `OPENWEATHERMAP_API_KEY`: Your OpenWeatherMap API key.
- `messages`: A list of `HumanMessage` objects representing the weather query.

Example:
```python
from langagent.research_team.agents import weather
from langchain_core.messages import HumanMessage
from langchain_openai import ChatOpenAI

llm = ChatOpenAI()
weather_agent = weather(llm, OPENWEATHERMAP_API_KEY="your-api-key-here")

# Ask for weather details
result = weather_agent.invoke({"messages": [HumanMessage(content="What is the weather today in New York?")]})
print(result['output'])
```
Example of Weather Workflow
The Logic Team consists of two agents: Reasoner and Calculator, both designed to solve logical and mathematical problems efficiently.
Reasoner Agent
Arguments:

- `llm`: The language model to be used.
- `messages`: A list of `HumanMessage` objects describing the logic problem.

Example:
```python
from langagent.logic_team.agents import reasoner
from langchain_core.messages import HumanMessage
from langchain_openai import ChatOpenAI

llm = ChatOpenAI()
reasoner_agent = reasoner(llm)

# Solve a logic problem
result = reasoner_agent.invoke({"messages": [HumanMessage(content="I have a 7 in the tens place. I have an even number in the ones place. I am lower than 74. What number am I?")]})
print(result['output'])
```
Example of Reasoner Workflow
Calculator Agent
Arguments:

- `llm`: The language model to be used.
- `messages`: A list of `HumanMessage` objects containing the mathematical query.

Example:
```python
from langagent.logic_team.agents import calculator
from langchain_core.messages import HumanMessage
from langchain_openai import ChatOpenAI

llm = ChatOpenAI()
calculator_agent = calculator(llm)

# Solve a mathematical problem
result = calculator_agent.invoke({"messages": [HumanMessage(content="Calculate the square root of 16")]})
print(result['output'])
```
Example of Calculator Workflow
The Analysis Team consists of two agents: Topic Generator and SQL Databaser, both designed to provide insights from data, either through text clustering or SQL-based analysis.
Topic Generator Agent
Arguments:

- `llm`: The language model to be used.
- `path`: The path to the CSV file containing text data.
- `text_column`: The column in the CSV that contains the text data.
- `user_topics`: Optional, user-provided topics for clustering.

Example:
```python
from langagent.analysis_team.agents import topic_generator
from langchain_openai import ChatOpenAI

llm = ChatOpenAI()
topic_generator_agent = topic_generator(llm)

inputs = {
    'path': '../Comments.csv',
    'text_column': 'Comments',
    'user_topics': None
}

result = topic_generator_agent.invoke(inputs)
result['summary_df']  # View the summarized data
result['fig'].show()  # Show the visualization
```
Example of Topic Generator Workflow
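Conceptually, the topic generator buckets free-text rows from the CSV into topics. The toy sketch below (purely illustrative, not the langagent implementation, which uses a language model) shows the same input/output shape with keyword matching and only the standard library; the `TOPICS` keywords and sample data are invented for the example:

```python
import csv
import io
from collections import defaultdict

# Toy stand-in for the topic generator: bucket comments by keyword match.
TOPICS = {"pricing": ["price", "cost"], "support": ["help", "support"]}

def assign_topics(rows, text_column):
    buckets = defaultdict(list)
    for row in rows:
        text = row[text_column].lower()
        # First topic whose keywords appear in the text, else "other".
        topic = next((t for t, kws in TOPICS.items() if any(k in text for k in kws)), "other")
        buckets[topic].append(row[text_column])
    return dict(buckets)

data = io.StringIO("Comments\nThe price is too high\nGreat support team\n")
rows = list(csv.DictReader(data))
print(assign_topics(rows, "Comments"))
# {'pricing': ['The price is too high'], 'support': ['Great support team']}
```

The real agent replaces the keyword lookup with LLM-driven clustering, but the contract is the same: rows of text in, a topic-to-texts mapping out.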
SQL Databaser Agent
Arguments:

- `db_path`: The path to your SQLite or SQL database.
- `llm`: The language model.
- `user_question`: The question that drives the SQL query and chart generation.

Example:
```python
from langagent.analysis_team.agents import sql_databaser
from langchain_experimental.utilities import PythonREPL
from langchain_openai import ChatOpenAI

llm = ChatOpenAI()
PATH_DB = "sqlite:///database/leads_scored.db"

sql_databaser_agent = sql_databaser(db_path=PATH_DB, llm=llm)

question = """
What are the total sales by month-year?
Use suggested price as a proxy for revenue for each transaction and a quantity of 1.
Make a line chart of sales over time.
"""

initial_input = {
    "user_question": question
}

# Invoke the agent with the input
result = sql_databaser_agent.invoke(initial_input)
print(result['sql_query'])  # SQL query
print(result['summary'])    # Summary of results

# Execute the Python code for chart generation
repl = PythonREPL()
repl.run(result['chart_plotly_code'])
```
Example of SQL Databaser Workflow
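The last step above hands LLM-generated plotting code to `PythonREPL` for execution. To make that step concrete, here is a minimal stdlib-only stand-in (a sketch of the idea, not the langchain utility) that runs a generated code string and captures whatever it prints:

```python
import contextlib
import io

# Conceptual stand-in for PythonREPL: exec a generated code string
# in an empty namespace and capture its stdout.
def run_generated(code: str) -> str:
    buf = io.StringIO()
    with contextlib.redirect_stdout(buf):
        exec(code, {})
    return buf.getvalue()

out = run_generated("print(sum(range(5)))")
print(out)  # prints 10
```

Note that executing model-generated code is inherently risky; in production you would sandbox or review `result['chart_plotly_code']` before running it.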
The Reporting Team consists of two agents: Interpreter and Summarizer, both designed to provide insights and summaries of outputs from various sources such as data, charts, or documents.
Interpreter Agent
Arguments:

- `llm`: The language model.
- `code_output`: The result or output that needs to be interpreted.

Example:
```python
from langagent.reporting_team.agents import interpreter
from langchain_openai import ChatOpenAI

llm = ChatOpenAI()
interpreter_agent = interpreter(llm)

result = interpreter_agent.invoke({"code_output": "Bar chart showing sales data."})
print(result['output'])
```
Example of Interpreter Workflow
Summarizer Agent
Arguments:

- `llm`: The language model.
- `input_data`: The path to the document or text file that needs to be summarized.

Example:
```python
from langagent.reporting_team.agents import summarizer
from langchain_openai import ChatOpenAI

llm = ChatOpenAI()
summarizer_agent = summarizer(llm)

inputs = {
    'input_data': '../Comments.csv'
}

result = summarizer_agent.invoke(inputs)
print(result['summary'])
```
Example of Summarizer Workflow
The Supervisor Chain manages the workflow between multiple agents. It selects the next agent based on predefined criteria or completion conditions. This is useful when you want to automate multi-step tasks and route tasks to different agents based on the flow of the conversation or task requirements.
Arguments:

- `subagent_names`: A list of subagents (workers) that will perform tasks.
- `llm`: The language model that powers decision-making.
- `subagent_descriptions`: A dictionary mapping subagent names to descriptions of their roles (optional).
- `completion_criteria`: A string representing the criteria that signals completion of the workflow.
- `history_depth`: The number of previous messages to consider when making routing decisions (optional).

Example:
```python
from langagent.supervisor import supervisor_chain
from langchain_openai import ChatOpenAI

llm = ChatOpenAI()

# Define the subagents and their descriptions
subagent_names = ["researcher", "coder", "calculator"]
subagent_descriptions = {
    "researcher": "Finds relevant research and gathers data.",
    "coder": "Generates, debugs, and optimizes code.",
    "calculator": "Solves mathematical problems and queries."
}

# Create a supervisor chain to manage the workflow
supervisor = supervisor_chain(
    subagent_names=subagent_names,
    llm=llm,
    subagent_descriptions=subagent_descriptions,
    completion_criteria="FINISH"
)
```
Example of Supervision Workflow
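To illustrate the routing idea behind the supervisor (this is a conceptual sketch in plain Python, not the langagent API: `run_supervisor`, `route`, and the toy subagents are all invented for the example), a supervisor loop repeatedly asks a routing function which worker should act next, and stops when the completion token comes back:

```python
# Conceptual sketch of supervisor-style routing between named subagents.
def run_supervisor(route, subagents, task, completion="FINISH", max_steps=10):
    """route(task, history) returns a subagent name or the completion token."""
    history = []
    for _ in range(max_steps):
        choice = route(task, history)
        if choice == completion:
            break
        history.append((choice, subagents[choice](task)))
    return history

# Toy subagents and a keyword-based router standing in for the LLM decision.
subagents = {
    "calculator": lambda t: eval(t.split("compute ")[1]),  # toy only; eval is unsafe
    "researcher": lambda t: f"notes on: {t}",
}

def route(task, history):
    if history:  # one step completed -> signal completion
        return "FINISH"
    return "calculator" if "compute" in task else "researcher"

steps = run_supervisor(route, subagents, "compute 2 + 2")
print(steps)  # [('calculator', 4)]
```

In LangAgent the routing decision is made by the `llm` using the `subagent_descriptions`, and `completion_criteria` plays the role of the `"FINISH"` token above.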
LangAgent relies on several core libraries, listed in `requirements.txt`. Install the required dependencies using:

```shell
pip install -r requirements.txt
```
We welcome contributions to LangAgent! If you'd like to contribute:

1. Create a feature branch (`git checkout -b feature/new-feature`).
2. Commit your changes (`git commit -m "Add new feature"`).
3. Push the branch (`git push origin feature/new-feature`).
4. Open a pull request.

This project is licensed under the MIT License. See the LICENSE file for details.
For any inquiries or issues, please contact: