Langroid is an intuitive, lightweight, extensible and principled Python framework to easily build LLM-powered applications, from CMU and UW-Madison researchers. You set up Agents, equip them with optional components (LLM, vector-store and tools/functions), assign them tasks, and have them collaboratively solve a problem by exchanging messages. This Multi-Agent paradigm is inspired by the Actor Framework (but you do not need to know anything about this!).

Langroid is a fresh take on LLM app-development, where considerable thought has gone into simplifying the developer experience; it does not use Langchain, or any other LLM framework.
:fire: Read the (WIP) overview of the langroid architecture
📢 Companies are using/adapting Langroid in production. Here is a quote:
Nullify uses AI Agents for secure software development. It finds, prioritizes and fixes vulnerabilities. We have internally adapted Langroid's multi-agent orchestration framework in production, after evaluating CrewAI, Autogen, LangChain, Langflow, etc. We found Langroid to be far superior to those frameworks in terms of ease of setup and flexibility. Langroid's Agent and Task abstractions are intuitive, well thought out, and provide a great developer experience. We wanted the quickest way to get something in production. With other frameworks it would have taken us weeks, but with Langroid we got to good results in minutes. Highly recommended!
-- Jacky Wong, Head of AI at Nullify.
:fire: See this Intro to Langroid blog post from the LanceDB team
:fire: Just published in ML for Healthcare (2024): a Langroid-based Multi-Agent RAG system for pharmacovigilance, see blog post
We welcome contributions: See the contributions document for ideas on what to contribute.
Are you building LLM Applications, or want help with Langroid for your company, or want to prioritize Langroid features for your company use-cases? Prasad Chalasani is available for consulting (advisory/development): pchalasani at gmail dot com.
Sponsorship is also accepted via GitHub Sponsors
Questions, Feedback, Ideas? Join us on Discord!
This is just a teaser; there's much more, like function-calling/tools, Multi-Agent Collaboration, Structured Information Extraction, DocChatAgent (RAG), SQLChatAgent, non-OpenAI local/remote LLMs, etc. Scroll down or see docs for more. See the Langroid Quick-Start Colab that builds up to a 2-agent information-extraction example using the OpenAI ChatCompletion API. See also this version that uses the OpenAI Assistants API instead.
:fire: just released! Example script showing how you can use Langroid multi-agents and tools to extract structured information from a document using only a local LLM (Mistral-7b-instruct-v0.2).
```python
import langroid as lr
import langroid.language_models as lm

# set up LLM
llm_cfg = lm.OpenAIGPTConfig( # or OpenAIAssistant to use Assistant API
  # any model served via an OpenAI-compatible API
  chat_model=lm.OpenAIChatModel.GPT4o, # or, e.g., "ollama/mistral"
)

# use LLM directly
mdl = lm.OpenAIGPT(llm_cfg)
response = mdl.chat("What is the capital of Ontario?", max_tokens=10)

# use LLM in an Agent
agent_cfg = lr.ChatAgentConfig(llm=llm_cfg)
agent = lr.ChatAgent(agent_cfg)
agent.llm_response("What is the capital of China?")
response = agent.llm_response("And India?") # maintains conversation state

# wrap Agent in a Task to run interactive loop with user (or other agents)
task = lr.Task(agent, name="Bot", system_message="You are a helpful assistant")
task.run("Hello") # kick off with user saying "Hello"

# 2-Agent chat loop: Teacher Agent asks questions to Student Agent
teacher_agent = lr.ChatAgent(agent_cfg)
teacher_task = lr.Task(
  teacher_agent, name="Teacher",
  system_message="""
    Ask your student concise numbers questions, and give feedback.
    Start with a question.
    """
)
student_agent = lr.ChatAgent(agent_cfg)
student_task = lr.Task(
  student_agent, name="Student",
  system_message="Concisely answer the teacher's questions.",
  single_round=True,
)

teacher_task.add_sub_task(student_task)
teacher_task.run()
```
Updates/Releases:

- `Agent`s with strict JSON schema output format on compatible LLMs, and strict mode for the OpenAI tools API.
- Support for LLMs (e.g. `Qwen2.5-Coder-32b-Instruct`) hosted on glhf.chat.
- Support for `o1-mini` and `o1-preview` models.
- `DocChatAgent` uses Reciprocal Rank Fusion (RRF) to rank chunks retrieved by different methods.
- `run_batch_task` new option `stop_on_first_result`: allows termination of the batch as soon as any task returns a result.
- `RewindTool`, which lets an agent "rewind and redo" a past message (and all dependent messages are cleared out, thanks to the lineage tracking). Read notes here.
- New optional dependency groups: `doc-chat`, `db` (for database-related dependencies). See updated install instructions below and in the docs.
- Improved tool examples: previously, you could supply a list of `examples`, and a random example from this list would be used to generate a 1-shot example for the LLM. This has been improved so you can now supply a list of examples where each example is either a tool instance, or a tuple of (description, tool instance), where the description is a "thought" that leads the LLM to use the tool (see example in the docs). In some scenarios this can improve LLM tool-generation accuracy. Also, instead of a random example, ALL examples are now used to generate few-shot examples.
- Infinite-loop detection, configurable in `TaskConfig`. Only detects exact loops, rather than approximate loops where the entities are saying essentially similar (but not exactly the same) things repeatedly.
- A simpler alternative to the `RecipientTool` mechanism, with the tradeoff that, since it's not a tool, there's no way to enforce/remind the LLM to explicitly specify an addressee (in scenarios where this is important).
- Updates to `DocChatAgent`.
- `gpt-4o` is now the default LLM throughout; tests and examples updated to work with this LLM; use the tokenizer corresponding to the LLM.
- `gemini 1.5 pro` support via `litellm`.
- `QdrantDB`: updated to support learned sparse embeddings.
- groq support, e.g. `chat_model="groq/llama3-8b-8192"`. See tutorial.
- `Task.run()`, `Task.run_async()`, and `run_batch_tasks` have `max_cost` and `max_tokens` params to exit when tokens or cost exceed a limit. The result `ChatDocument.metadata` now includes a `status` field, which is a code indicating the reason for task completion. Also, `task.run()` etc. can be invoked with an explicit `session_id` field, which is used as a key to look up various settings in the Redis cache. Currently this is only used to look up "kill status": it allows killing a running task, either by `task.kill()` or by the classmethod `Task.kill_session(session_id)`. For example usage, see `test_task_kill` in `tests/main/test_task.py`.
- Batch tests for `DocChatAgent`: see `test_doc_chat_agent.py`, in particular `test_doc_chat_batch()`.
- New task-run utility: `run_batch_task_gen`, where a task generator can be specified, to generate one task per input.
- Image-pdf support (`DocChatAgent` will now work with image-pdfs).
- `DocChatAgent` context-window fixes.
- `URLLoader`: detect file type from the header when a URL doesn't end with a recognizable suffix like `.pdf`, `.docx`, etc.
- HuggingFace embeddings are used only if the `sentence_transformer` module is available.
- Slimmer install: removed dependencies `unstructured`, `haystack`, `chromadb`, `mkdocs`, `huggingface-hub`, `sentence-transformers`.
- Simplified imports: `import langroid as lr`.
- Ollama support, e.g. `chat_model="ollama/mistral"`. See release notes.
- `Neo4jChatAgent` to chat with a Neo4j knowledge-graph (analogous to how `SQLChatAgent` works). See the example script using this Agent to answer questions about Python pkg dependencies.
- `.doc` file parsing (in addition to `.docx`).
- Specify `formatter` param in `OpenAIGPTConfig` to ensure accurate chat formatting for local LLMs.
- `DocChatAgentConfig` has a new param `add_fields_to_content`, to specify additional document fields to insert into the main `content` field, to help improve retrieval.
- `DocChatAgent`: ingest Pandas dataframes, with filtering.
- `LanceDocChatAgent` leverages the `LanceDB` vector-db for efficient vector search, as well as full-text search and filtering.
- `LanceRAGTaskCreator` to create a 2-agent system consisting of a `LanceFilterAgent` that decides a filter and rephrased query to send to a RAG agent.
- `Task` initialization with a default `ChatAgent`.
- 0.1.126: `OpenAIAssistant` agent: caching support.
- 0.1.117: Support for OpenAI Assistant API tools: function-calling, code-interpreter, and retrieval (RAG), file uploads. These work seamlessly with Langroid's task-orchestration. Until docs are ready, it's best to see these usage examples.
- 0.1.112: `OpenAIAssistant` is a subclass of `ChatAgent` that leverages the new OpenAI Assistant API. It can be used as a drop-in replacement for `ChatAgent`, relies on the Assistant API to maintain conversation state, and leverages persistent threads and assistants to reconnect to them if needed. Examples: `test_openai_assistant.py`, `test_openai_assistant_async.py`.
- 0.1.111: Support for the latest OpenAI model `GPT4_TURBO` (see `test_llm.py` for example usage).
- 0.1.110: Upgrade from OpenAI v0.x to v1.1.1 (in preparation for the Assistants API and more); `litellm` temporarily disabled due to an OpenAI version conflict.
- `DocChatAgent` re-rankers: `rank_with_diversity`, `rank_to_periphery` (lost in the middle).
- `DocChatAgentConfig.n_neighbor_chunks > 0` allows returning context chunks around a match.
- `DocChatAgent` uses `RelevanceExtractorAgent` to have the LLM extract relevant portions of a chunk using sentence-numbering, resulting in a huge speed-up and cost reduction compared to the naive "sentence-parroting" approach (writing out relevant whole sentences in full) which LangChain uses in their `LLMChainExtractor`.
- Simplified top-level import: `import langroid as lr`. See the documentation for usage.
- Support for `.docx` files (preliminary).
- `SQLChatAgent` efficiently retrieves relevant schema info when translating natural language to SQL.
- `GoogleSearchTool` to enable Agents (i.e. their LLM) to do Google searches via function-calling/tools. See this chat example for how easy it is to add this tool to an agent.
- `SQLChatAgent` -- thanks to our latest contributor Rithwik Babu!
- `TableChatAgent` to chat with tabular datasets (dataframes, files, URLs): the LLM generates Pandas code, and the code is executed using Langroid's tool/function-call mechanism.
- `DocChatAgent` now accepts PDF files or URLs.

Suppose you want to extract structured information about the key terms of a commercial lease document. You can easily do this with Langroid using a two-agent system, as we show in the langroid-examples repo. (See this script for a version with the same functionality using a local Mistral-7b model.) The demo showcases just a few of the many features of Langroid, such as:
- The `LeaseExtractor` is in charge of the task, and its LLM (GPT4) generates questions to be answered by the `DocAgent`.
- The `DocAgent` LLM (GPT4) uses retrieval from a vector-store to answer the `LeaseExtractor`'s questions, citing the specific excerpt supporting the answer.
- The `LeaseExtractor` LLM presents the information in a structured format using a Function-call.

Here is what it looks like in action (a pausable mp4 video is here).
(For a more up-to-date list see the Updates/Releases section above)
- The `Task.run()` method has the same type-signature as an Agent's responder methods, and this is key to how a task of an agent can delegate to other sub-tasks: from the point of view of a Task, sub-tasks are simply additional responders, to be used in a round-robin fashion after the agent's own responders.
- The `Agent` and `Task` abstractions allow users to design Agents with specific skills, wrap them in Tasks, and combine tasks in a flexible way.
- A `ToolMessage` mechanism which works with any LLM, not just OpenAI's. Function calling and tools have the same developer-facing interface, implemented using Pydantic, which makes it very easy to define tools/functions and enable agents to use them. Benefits of using Pydantic are that you never have to write complex JSON specs for function calling, and when the LLM hallucinates malformed JSON, the Pydantic error message is sent back to the LLM so it can fix it.

Install `langroid`
Langroid requires Python 3.11+. We recommend using a virtual environment.
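If you don't already have one, you can create and activate a virtual environment with standard tooling (illustrative commands):

```bash
python3 -m venv .venv
. ./.venv/bin/activate
```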
Use `pip` to install a bare-bones slim version of `langroid` (from PyPi) to your virtual environment:

```bash
pip install langroid
```
The core Langroid package lets you use OpenAI Embeddings models via their API. If you instead want to use the `sentence-transformers` embedding models from HuggingFace, install Langroid like this:

```bash
pip install "langroid[hf-embeddings]"
```
For many practical scenarios, you may need additional optional dependencies:

- To chat with documents, install the `doc-chat` extra:
  ```bash
  pip install "langroid[doc-chat]"
  ```
- For database-related dependencies, install the `db` extra:
  ```bash
  pip install "langroid[db]"
  ```
- You can also combine multiple extras:
  ```bash
  pip install "langroid[doc-chat,db]"
  ```
- To install all optional dependencies, use the `all` extra (but note that this will result in longer load/startup times and a larger install size):
  ```bash
  pip install "langroid[all]"
  ```
If you are using `SQLChatAgent` (e.g. the script `examples/data-qa/sql-chat/sql_chat.py`) with a postgres db, you will need to:

- Install PostgreSQL dev libraries for your platform, e.g. `sudo apt-get install libpq-dev` on Ubuntu, `brew install postgresql` on Mac, etc.
- Install langroid with the postgres extra, e.g. `pip install "langroid[postgres]"` or `poetry add langroid[postgres]` or `poetry install -E postgres`. If this gives you an error, try `pip install psycopg2-binary` in your virtualenv.

:memo: If you get strange errors involving `mysqlclient`, try doing `pip uninstall mysqlclient` followed by `pip install mysqlclient`.
To get started, all you need is an OpenAI API Key. If you don't have one, see this OpenAI Page. (Note that while this is the simplest way to get started, Langroid works with practically any LLM, not just those from OpenAI. See the guides to using Open/Local LLMs, and other non-OpenAI proprietary LLMs.)
In the root of the repo, copy the `.env-template` file to a new file `.env`:

```bash
cp .env-template .env
```
Then insert your OpenAI API Key. Your `.env` file should look like this (the organization is optional but may be required in some scenarios):

```bash
OPENAI_API_KEY=your-key-here-without-quotes
OPENAI_ORGANIZATION=optionally-your-organization-id
```
Alternatively, you can set this as an environment variable in your shell (you will need to do this every time you open a new shell):

```bash
export OPENAI_API_KEY=your-key-here-without-quotes
```
All of the following environment variable settings are optional, and some are only needed to use specific features (as noted below).

- **Momento** serverless caching of LLM API responses (as an alternative to Redis): store your Momento token in the `.env` file, as the value of `MOMENTO_AUTH_TOKEN` (see example file below), and in the `.env` file set `CACHE_TYPE=momento` (instead of `CACHE_TYPE=redis`, which is the default).
- **`GoogleSearchTool`**: To use Google Search as an LLM Tool/Plugin/function-call, you'll need to set up a Google API key, then set up a Google Custom Search Engine (CSE) and get the CSE ID. (Documentation for these can be challenging; we suggest asking GPT4 for a step-by-step guide.) After obtaining these credentials, store them as values of `GOOGLE_API_KEY` and `GOOGLE_CSE_ID` in your `.env` file. Full documentation on using this (and other such "stateless" tools) is coming soon, but in the meantime take a peek at this chat example, which shows how you can easily equip an Agent with a `GoogleSearchTool`; a minimal sketch follows below.
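As a minimal sketch (assuming `GOOGLE_API_KEY` and `GOOGLE_CSE_ID` are set in your `.env` file, and assuming the import path used in the examples repo):

```python
import langroid as lr
# assumed import path, as used in the langroid-examples scripts
from langroid.agent.tools.google_search_tool import GoogleSearchTool

agent = lr.ChatAgent(lr.ChatAgentConfig(name="Searcher"))
agent.enable_message(GoogleSearchTool)  # the LLM can now emit Google-search tool calls
task = lr.Task(agent, name="Searcher")
task.run("What is the latest stable version of Python?")
```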
If you add all of these optional variables, your `.env` file should look like this:

```bash
OPENAI_API_KEY=your-key-here-without-quotes
GITHUB_ACCESS_TOKEN=your-personal-access-token-no-quotes
CACHE_TYPE=redis # or momento
REDIS_PASSWORD=your-redis-password-no-quotes
REDIS_HOST=your-redis-hostname-no-quotes
REDIS_PORT=your-redis-port-no-quotes
MOMENTO_AUTH_TOKEN=your-momento-token-no-quotes # instead of REDIS* variables
QDRANT_API_KEY=your-key
QDRANT_API_URL=https://your.url.here:6333 # note port number must be included
GOOGLE_API_KEY=your-key
GOOGLE_CSE_ID=your-cse-id
```
When using Azure OpenAI, additional environment variables are required in the `.env` file. The page Microsoft Azure OpenAI provides more information, and you can set each environment variable as follows:

- `AZURE_OPENAI_API_KEY`, from the value of `API_KEY`.
- `AZURE_OPENAI_API_BASE`, from the value of `ENDPOINT`; typically looks like `https://your.domain.azure.com`.
- For `AZURE_OPENAI_API_VERSION`, you can use the default value in `.env-template`, and the latest version can be found here.
- `AZURE_OPENAI_DEPLOYMENT_NAME` is the name of the deployed model, which is defined by the user during the model setup.
- `AZURE_OPENAI_MODEL_NAME`: Azure OpenAI allows specific model names when you select the model for your deployment. You need to put precisely the exact model name that was selected. For example, for GPT-4 this should be `gpt-4-32k` or `gpt-4`.
- `AZURE_OPENAI_MODEL_VERSION` is required if `AZURE_OPENAI_MODEL_NAME = gpt-4`, and will assist Langroid in determining the cost of the model.
We provide a containerized version of the `langroid-examples` repository via this Docker Image. All you need to do is set up environment variables in the `.env` file. Please follow these steps to set up the container:
```bash
# get the .env file template from `langroid` repo
wget -O .env https://raw.githubusercontent.com/langroid/langroid/main/.env-template

# Edit the .env file with your favorite editor (here nano),
# and remove any unused settings. E.g. there are "dummy" values like
# "your-redis-port" etc -- if you are not using them, you MUST remove them.
nano .env

# launch the container
docker run -it --rm -v ./.env:/langroid/.env langroid/langroid

# Use this command to run any of the scripts in the `examples` directory
python examples/<Path/To/Example.py>
```
These are quick teasers to give a glimpse of what you can do with Langroid and how your code would look.
:warning: The code snippets below are intended to give a flavor of the code
and they are not complete runnable examples! For that we encourage you to
consult the langroid-examples
repository.
:information_source: The various LLM prompts and instructions in Langroid have been tested to work well with GPT-4 (and to some extent GPT-4o). Switching to other LLMs (local/open and proprietary) is easy (see guides mentioned above), and may suffice for some applications, but in general you may see inferior results unless you adjust the prompts and/or the multi-agent setup.
:book: Also see the
Getting Started Guide
for a detailed tutorial.
Click to expand any of the code examples below. All of these can be run in a Colab notebook:
```python
import langroid.language_models as lm

mdl = lm.OpenAIGPT()

messages = [
  lm.LLMMessage(content="You are a helpful assistant", role=lm.Role.SYSTEM),
  lm.LLMMessage(content="What is the capital of Ontario?", role=lm.Role.USER),
]
response = mdl.chat(messages, max_tokens=200)
print(response.message)
```
```python
cfg = lm.OpenAIGPTConfig(
  chat_model="local/localhost:8000",
  chat_context_length=4096,
)
mdl = lm.OpenAIGPT(cfg)
# now interact with it as above, or create an Agent + Task as shown below.
```
If the model is supported by liteLLM, there is no need to launch the proxy server. Just set the `chat_model` param above to `litellm/[provider]/[model]`, e.g. `litellm/anthropic/claude-instant-1`, and use the config object as above, as sketched below.
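For example, a minimal sketch (the model string is the example above; the context length is an illustrative assumption):

```python
import langroid.language_models as lm

# same config object as before, pointing at a liteLLM-supported model
cfg = lm.OpenAIGPTConfig(
    chat_model="litellm/anthropic/claude-instant-1",
    chat_context_length=4096,  # illustrative; set to match your model
)
mdl = lm.OpenAIGPT(cfg)
response = mdl.chat("What is the capital of Ontario?", max_tokens=10)
```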
Note that to use `litellm` you need to install langroid with the `litellm` extra: either `pip install "langroid[litellm]"` in your virtual env, or, if you are developing within the `langroid` repo, `poetry install -E litellm`.

For remote models, you will typically need to set API Keys etc. as environment variables. You can set those based on the LiteLLM docs. If any required environment variables are missing, Langroid gives a helpful error message indicating which ones are needed.
```python
import langroid as lr

agent = lr.ChatAgent()

# get response from agent's LLM, and put this in an interactive loop...
# answer = agent.llm_response("What is the capital of Ontario?")
# ... OR instead, set up a task (which has a built-in loop) and run it
task = lr.Task(agent, name="Bot")
task.run() # ... a loop seeking response from LLM or User at each turn
```
A toy numbers game, where, when given a number `n`:

- `repeater_task`'s LLM simply returns `n`,
- `even_task`'s LLM returns `n/2` if `n` is even, else says "DO-NOT-KNOW",
- `odd_task`'s LLM returns `3*n+1` if `n` is odd, else says "DO-NOT-KNOW".

Each of these `Task`s automatically configures a default `ChatAgent`.
```python
import langroid as lr
from langroid.utils.constants import NO_ANSWER

repeater_task = lr.Task(
    name = "Repeater",
    system_message="""
    Your job is to repeat whatever number you receive.
    """,
    llm_delegate=True, # LLM takes charge of task
    single_round=False,
)

even_task = lr.Task(
    name = "EvenHandler",
    system_message=f"""
    You will be given a number.
    If it is even, divide by 2 and say the result, nothing else.
    If it is odd, say {NO_ANSWER}
    """,
    single_round=True,  # task done after 1 step() with valid response
)

odd_task = lr.Task(
    name = "OddHandler",
    system_message=f"""
    You will be given a number n.
    If it is odd, return (n*3+1), say nothing else.
    If it is even, say {NO_ANSWER}
    """,
    single_round=True,  # task done after 1 step() with valid response
)
```
Then add the `even_task` and `odd_task` as sub-tasks of `repeater_task`, and run the `repeater_task`, kicking it off with a number as input:

```python
repeater_task.add_sub_task([even_task, odd_task])
repeater_task.run("3")
```
Langroid leverages Pydantic to support OpenAI's Function-calling API as well as its own native tools. The benefits are that you don't have to write any JSON to specify the schema, and also if the LLM hallucinates a malformed tool syntax, Langroid sends the Pydantic validation error (suitably sanitized) to the LLM so it can fix it!
Simple example: Say the agent has a secret list of numbers,
and we want the LLM to find the smallest number in the list.
We want to give the LLM a probe
tool/function which takes a
single number n
as argument. The tool handler method in the agent
returns how many numbers in its list are at most n
.
First define the tool using Langroid's `ToolMessage` class:

```python
import langroid as lr

class ProbeTool(lr.agent.ToolMessage):
    request: str = "probe"  # specifies which agent method handles this tool
    purpose: str = """
    To find how many numbers in my list are less than or equal to
    the <number> you specify.
    """  # description used to instruct the LLM on when/how to use the tool
    number: int  # required argument to the tool
```
Then define a `SpyGameAgent` as a subclass of `ChatAgent`, with a method `probe` that handles this tool:

```python
class SpyGameAgent(lr.ChatAgent):
    def __init__(self, config: lr.ChatAgentConfig):
        super().__init__(config)
        self.numbers = [3, 4, 8, 11, 15, 25, 40, 80, 90]

    def probe(self, msg: ProbeTool) -> str:
        # return how many numbers in self.numbers are less or equal to msg.number
        return str(len([n for n in self.numbers if n <= msg.number]))
```
We then instantiate the agent and enable it to use and respond to the tool:

```python
spy_game_agent = SpyGameAgent(
    lr.ChatAgentConfig(
        name="Spy",
        vecdb=None,
        use_tools=False,         # don't use Langroid native tool
        use_functions_api=True,  # use OpenAI function-call API
    )
)
spy_game_agent.enable_message(ProbeTool)
```
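To exercise the tool, you could then wrap the agent in a Task and run it; a minimal sketch (the system message here is illustrative, not from the original example):

```python
# run the game: the LLM is instructed to use the `probe` tool repeatedly
task = lr.Task(
    spy_game_agent,
    system_message="""
    I have a list of numbers between 1 and 100.
    Find the smallest of them, by using the `probe` tool repeatedly.
    """,
)
task.run()
```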
For a full working example see the
chat-agent-tool.py
script in the langroid-examples
repo.
Suppose you want an agent to extract the key terms of a lease, from a lease document, as a nested JSON structure. First define the desired structure via Pydantic models:
```python
from pydantic import BaseModel

class LeasePeriod(BaseModel):
    start_date: str
    end_date: str

class LeaseFinancials(BaseModel):
    monthly_rent: str
    deposit: str

class Lease(BaseModel):
    period: LeasePeriod
    financials: LeaseFinancials
    address: str
```

Then define the `LeaseMessage` tool as a subclass of Langroid's `ToolMessage`. Note the tool has a required argument `terms` of type `Lease`:
```python
import langroid as lr

class LeaseMessage(lr.agent.ToolMessage):
    request: str = "lease_info"
    purpose: str = """
    Collect information about a Commercial Lease.
    """
    terms: Lease
```
Then define a LeaseExtractorAgent
with a method lease_info
that handles this tool,
instantiate the agent, and enable it to use and respond to this tool:
class LeaseExtractorAgent(lr.ChatAgent):
def lease_info(self, message: LeaseMessage) -> str:
print(
f"""
DONE! Successfully extracted Lease Info:
{message.terms}
"""
)
return json.dumps(message.terms.dict())
lease_extractor_agent = LeaseExtractorAgent()
lease_extractor_agent.enable_message(LeaseMessage)
See the chat_multi_extract.py
script in the langroid-examples
repo for a full working example.
Langroid provides a specialized agent class DocChatAgent
for this purpose.
It incorporates document sharding, embedding, storage in a vector-DB,
and retrieval-augmented query-answer generation.
Using this class to chat with a collection of documents is easy.
First create a DocChatAgentConfig
instance, with a
doc_paths
field that specifies the documents to chat with.
```python
import langroid as lr
from langroid.agent.special import DocChatAgentConfig, DocChatAgent

config = DocChatAgentConfig(
    doc_paths = [
        "https://en.wikipedia.org/wiki/Language_model",
        "https://en.wikipedia.org/wiki/N-gram_language_model",
        "/path/to/my/notes-on-language-models.txt",
    ],
    vecdb=lr.vector_store.QdrantDBConfig(),
)
```
Then instantiate the `DocChatAgent` (this ingests the docs into the vector-store):

```python
agent = DocChatAgent(config)
```

Then we can either ask the agent one-off questions:

```python
agent.llm_response("What is a language model?")
```

or wrap it in a `Task` and run an interactive loop with the user:

```python
task = lr.Task(agent)
task.run()
```
See full working scripts in the
docqa
folder of the langroid-examples
repo.
Using Langroid you can set up a TableChatAgent
with a dataset (file path, URL or dataframe),
and query it. The Agent's LLM generates Pandas code to answer the query,
via function-calling (or tool/plugin), and the Agent's function-handling method
executes the code and returns the answer.
Here is how you can do this:
```python
import langroid as lr
from langroid.agent.special import TableChatAgent, TableChatAgentConfig
```
Set up a `TableChatAgent` for a data file, URL or dataframe (ensure the data table has a header row; the delimiter/separator is auto-detected):

```python
dataset = "https://archive.ics.uci.edu/ml/machine-learning-databases/wine-quality/winequality-red.csv"
# or dataset = "/path/to/my/data.csv"
# or dataset = pd.read_csv("/path/to/my/data.csv")
agent = TableChatAgent(
    config=TableChatAgentConfig(
        data=dataset,
    )
)
```
Set up a task, and ask one-off questions like this:

```python
task = lr.Task(
    agent,
    name = "DataAssistant",
    default_human_response="",  # to avoid waiting for user input
)
result = task.run(
    "What is the average alcohol content of wines with a quality rating above 7?",
    turns=2  # return after user question, LLM fun-call/tool response, Agent code-exec result
)
print(result.content)
```
Or alternatively, set up a task and run it in an interactive loop with the user:

```python
task = lr.Task(agent, name="DataAssistant")
task.run()
```
For a full working example see the
table_chat.py
script in the langroid-examples
repo.
If you like this project, please give it a star ⭐ and 📢 spread the word in your network or social media:
Your support will help build Langroid's momentum and community.