Data Theft Repackaged: A Case Study in Malicious Wrapper Packages on npm
The Socket Research Team breaks down a malicious wrapper package that uses obfuscation to harvest credentials and exfiltrate sensitive data.
pip install agentjo instead of pip install taskgen-ai
from agentjo import * instead of from taskgen import *
Happy to share that the task-based agentic framework I have been working on - TaskGen - is largely complete!
Notable features include: strict_json added.

I am quite sure that this is the best open-source agentic framework for task-based execution out there! Existing frameworks like AutoGen rely too much on conversational text, which is lengthy and not targeted. TaskGen uses StrictJSON (a JSON parser with type checking and more!) as its core, and agents are efficient and able to do Chain of Thought natively using JSON keys and descriptions as a guide.
I can't wait to see what this new framework can do for you!
pip install taskgen-ai
Import the required functions from taskgen and use them!

Create an agent by entering your agent's name and description
Agents are task-based, so they will help generate subtasks to fulfil your main task
Agents are made to be non-verbose, so they will focus only on the task instruction (much more efficient compared to conversation-based agentic frameworks like AutoGen)
The Agent's interactions will be stored in subtasks_completed by default, which will serve as a memory buffer for future interactions
Inputs for Agent:
Agent Internal Parameters:
Task Running
Give User Output
reply_user(query) replies the query to the user. If query is not given, then it replies based on the current task the agent is doing. If stateful is True, this query and reply are saved into subtasks_completed
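The stateful behaviour described above can be pictured with a minimal pure-Python sketch. This is a hypothetical stand-in to illustrate the semantics, not TaskGen's internal implementation:

```python
# Hypothetical sketch of reply_user's stateful bookkeeping (not TaskGen source code)
subtasks_completed = {}

def reply_user(query: str = '', stateful: bool = True, current_task: str = 'the current task') -> str:
    # If no query is given, reply based on the current task the agent is doing
    topic = query if query else current_task
    reply = f'Reply about: {topic}'
    if stateful:
        # Save this query and reply into subtasks_completed as a memory buffer
        subtasks_completed[topic] = reply
    return reply

print(reply_user('What rhymes with cool?'))  # Reply about: What rhymes with cool?
```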
Check status of Agent:
my_agent = Agent('Helpful assistant', 'You are a generalist agent', llm = llm)
output = my_agent.run('Give me 5 words rhyming with cool, and make a 4-sentence poem using them')
Subtask identified: Find 5 words that rhyme with 'cool'
Getting LLM to perform the following task: Find 5 words that rhyme with 'cool'
pool, rule, fool, tool, school
Subtask identified: Compose a 4-sentence poem using the words 'pool', 'rule', 'fool', 'tool', and 'school'
Getting LLM to perform the following task: Compose a 4-sentence poem using the words 'pool', 'rule', 'fool', 'tool', and 'school'
In the school, the golden rule is to never be a fool. Use your mind as a tool, and always follow the pool.
Task completed successfully!
my_agent.status()
Agent Name: Helpful assistant
Agent Description: You are a generalist agent
Available Functions: ['use_llm', 'end_task']
Task: Give me 5 words rhyming with cool, and make a 4-sentence poem using them
Subtasks Completed:
Subtask: Find 5 words that rhyme with 'cool'
pool, rule, fool, tool, school
Subtask: Compose a 4-sentence poem using the words 'pool', 'rule', 'fool', 'tool', and 'school'
In the school, the golden rule is to never be a fool. Use your mind as a tool, and always follow the pool.
Is Task Completed: True
output = my_agent.reply_user()
Here are 5 words that rhyme with "cool": pool, rule, fool, tool, school. Here is a 4-sentence poem using these words: "In the school, the golden rule is to never be a fool. Use your mind as a tool, and always follow the pool."
Define your functions using the Function class (see Tutorial 0), or just any Python function with input and output types defined in the signature and with a docstring
Use assign_functions to assign a list of functions of class Function, or general Python functions (which will be converted to Function)
Run the Agent with run()
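To see why typed signatures and docstrings matter here, the sketch below uses Python's inspect module to read them off a function. This is conceptually how a framework can derive a function schema; it is an illustration only, not TaskGen's actual mechanism:

```python
import inspect

def binary_to_decimal(binary_number: str) -> int:
    '''Converts binary_number to integer of base 10'''
    return int(str(binary_number), 2)

# Read the input/output types and docstring off the function itself
sig = inspect.signature(binary_to_decimal)
param_types = {name: p.annotation for name, p in sig.parameters.items()}
print(param_types)                        # {'binary_number': <class 'str'>}
print(sig.return_annotation)              # <class 'int'>
print(inspect.getdoc(binary_to_decimal))  # Converts binary_number to integer of base 10
```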
# This is an example of an LLM-based function (see Tutorial 0)
sentence_style = Function(fn_description = 'Output a sentence with words <obj> and <entity> in the style of <emotion>',
                          output_format = {'output': 'sentence'},
                          fn_name = 'sentence_with_objects_entities_emotion',
                          llm = llm)
# This is an example of an external user-defined function (see Tutorial 0)
def binary_to_decimal(binary_number: str) -> int:
    '''Converts binary_number to integer of base 10'''
    return int(str(binary_number), 2)
# Initialise your Agent
my_agent = Agent('Helpful assistant', 'You are a generalist agent')
# Assign the functions
my_agent.assign_functions([sentence_style, binary_to_decimal])
# Run the Agent
output = my_agent.run('First convert binary string 1001 to a number, then generate me a happy sentence with that number and a ball')
Subtask identified: Convert the binary number 1001 to decimal
Calling function binary_to_decimal with parameters {'x': '1001'}
{'output1': 9}
Subtask identified: Generate a happy sentence with the decimal number and a ball
Calling function sentence_with_objects_entities_emotion with parameters {'obj': '9', 'entity': 'ball', 'emotion': 'happy'}
{'output': 'I am so happy with my 9 balls.'}
Task completed successfully!
Approach 1: Automatically Run your agent using run()
Approach 2: Manually select and use functions for your task
Manually call a chosen function by specifying function_name with function_params. stateful controls whether the output of this function will be saved to subtasks_completed under the key of subtask
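The bookkeeping this describes can be sketched in plain Python. This is a hypothetical illustration of the stateful/subtask behaviour, not TaskGen's actual implementation of manual function calls:

```python
# Hypothetical sketch of manually calling a function and saving its output (not TaskGen internals)
subtasks_completed = {}

def call_function(fn, function_params: dict, subtask: str, stateful: bool = True):
    output = fn(**function_params)
    if stateful:
        # Save the output to subtasks_completed under the key of subtask
        subtasks_completed[subtask] = output
    return output

result = call_function(lambda binary_number: int(binary_number, 2),
                       {'binary_number': '1001'},
                       subtask='Convert binary 1001 to decimal')
print(result)  # 9
```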
Assign/Remove Functions:
Show Functions:
AsyncAgent works the same way as Agent, only much faster due to parallelisation of tasks
Use assign_functions to assign a list of functions of class AsyncFunction, or general Python functions (which will be converted to AsyncFunction)
For AsyncFunction, you should define the fn_name as well if it is not an External Function
Add the await keyword to any function that you run with the AsyncAgent
async def llm_async(system_prompt: str, user_prompt: str):
    ''' Here, we use OpenAI for illustration, you can change it to your own LLM '''
    # ensure your LLM imports are all within this function
    from openai import AsyncOpenAI

    # define your own LLM here
    client = AsyncOpenAI()
    response = await client.chat.completions.create(
        model='gpt-4o-mini',
        temperature = 0,
        messages=[
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": user_prompt}
        ]
    )
    return response.choices[0].message.content
# This is an example of an LLM-based function (see Tutorial 0)
sentence_style = AsyncFunction(fn_description = 'Output a sentence with words <obj> and <entity> in the style of <emotion>',
                               output_format = {'output': 'sentence'},
                               fn_name = 'sentence_with_objects_entities_emotion', # you must define fn_name for LLM-based functions
                               llm = llm_async) # use an async LLM function
# This is an example of an external user-defined function (see Tutorial 0)
def binary_to_decimal(binary_number: str) -> int:
    '''Converts binary_number to integer of base 10'''
    return int(str(binary_number), 2)
# Initialise your Agent
my_agent = AsyncAgent('Helpful assistant', 'You are a generalist agent')
# Assign the functions
my_agent.assign_functions([sentence_style, binary_to_decimal])
# Run the Agent
output = await my_agent.run('Generate me a happy sentence with a number and a ball. The number is b1001 converted to decimal')
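The parallelisation speed-up mentioned above can be illustrated with a self-contained asyncio sketch, where dummy sleep-based coroutines stand in for awaited LLM calls (this is not TaskGen code):

```python
import asyncio
import time

async def fake_llm_call(name: str, delay: float = 0.1) -> str:
    # Stand-in for an awaited LLM call
    await asyncio.sleep(delay)
    return f'{name} done'

async def main():
    start = time.perf_counter()
    # Five 0.1s tasks run concurrently, finishing in roughly 0.1s rather than 0.5s
    results = await asyncio.gather(*(fake_llm_call(f'task{i}') for i in range(5)))
    return results, time.perf_counter() - start

results, elapsed = asyncio.run(main())
print(results)
```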
"Because text is not enough" - Anonymous
shared_variables is a dictionary that is initialised in Agent (default empty dictionary), and can be referenced by any function of the agent (including Inner Agents and their functions)
This is useful for variables we do not want to output into subtasks_completed directly
Functions that use shared_variables take shared_variables as the first input variable, from which you can access and modify shared_variables directly
The agent itself is stored in shared_variables['agent'], so you can change the agent's internal parameters via shared_variables
If a function only writes to shared_variables and returns nothing, the default return value will be {'Status': 'Completed'}
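A minimal standalone sketch of this convention, using a hypothetical function (not from TaskGen):

```python
# Hypothetical external function following the shared_variables convention above
def add_note(shared_variables, note: str):
    ''' Adds note to the shared Notes list; returns nothing '''
    shared_variables.setdefault('Notes', []).append(note)
    # No return value: the framework would report {'Status': 'Completed'} by default

shared_variables = {}
result = add_note(shared_variables, 'hello') or {'Status': 'Completed'}
print(shared_variables)  # {'Notes': ['hello']}
print(result)            # {'Status': 'Completed'}
```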
Example External Function using shared_variables
# Use shared_variables as input to your external function to access and modify the shared variables
def generate_quotes(shared_variables, number_of_quotes: int, category: str):
    ''' Generates number_of_quotes quotes about category '''
    # Retrieve from shared variables
    my_quote_list = shared_variables['Quote List']

    # Generate the quotes
    res = strict_json(system_prompt = f'''Generate {number_of_quotes} sentences about {category}.
Do them in the format "<Quote> - <Person>", e.g. "The way to get started is to quit talking and begin doing. - Walt Disney"
Ensure your quotes contain only ' within the quote, and are enclosed by " ''',
                      user_prompt = '',
                      output_format = {'Quote List': f'list of {number_of_quotes} quotes, type: List[str]'},
                      llm = llm)
    my_quote_list.extend([f'Category: {category}. ' + x for x in res['Quote List']])

    # Store back to shared variables
    shared_variables['Quote List'] = my_quote_list
Global Context
is a very powerful feature in TaskGen, as it allows the Agent to be updated with the latest environmental state before every decision it makes
It also allows for learnings in shared_variables to be carried across tasks, making the Agent teachable and able to learn through experience
A recommended practice is to always store the learnings of the Agent during the External Function call, and reset the Agent after each task, so that subtasks_completed will be as short as possible to avoid confusing the Agent
There are two ways to use Global Context, and both can be used concurrently:
Approach 1: global_context
If you simply want to display the value of a variable in shared_variables without any modification to it, then you can use global_context
global_context is a string with <shared_variables_name> enclosed with <>. These <> will be replaced with the actual variable in shared_variables

Approach 2: get_global_context
get_global_context is a function that takes in the agent's internal parameters (self) and outputs a string to the LLM to append to the prompts of any LLM-based calls internally, e.g. get_next_subtask, use_llm, reply_to_user
You have the flexibility to process shared_variables as required and configure a global prompt to the agent

Example Agent using global_context: Inventory Manager
We use Global Context to keep track of the inventory state, and the functions add_item_to_inventory and remove_item_from_inventory to modify the shared_variable named Inventory, which is shown to the Agent via Global Context
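Conceptually, the <> substitution can be pictured with a small regex sketch. This illustrates the idea of rendering shared_variables into the context string; it is not TaskGen's actual implementation:

```python
import re

shared_variables = {'Inventory': ['apple', 'sword']}
global_context = 'Inventory: <Inventory>'

def render_global_context(template: str, shared: dict) -> str:
    # Replace each <name> placeholder with the matching value from shared_variables
    return re.sub(r'<(\w+)>', lambda m: str(shared.get(m.group(1), m.group(0))), template)

print(render_global_context(global_context, shared_variables))
# Inventory: ['apple', 'sword']
```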
def add_item_to_inventory(shared_variables, item: str) -> str:
    ''' Adds item to inventory, and returns outcome of action '''
    shared_variables['Inventory'].append(item)
    return f'{item} successfully added to Inventory'

def remove_item_from_inventory(shared_variables, item: str) -> str:
    ''' Removes item from inventory and returns outcome of action '''
    if item in shared_variables['Inventory']:
        shared_variables['Inventory'].remove(item)
        return f'{item} successfully removed from Inventory'
    else:
        return f'{item} not found in Inventory, unable to remove'

agent = Agent('Inventory Manager',
              'Adds and removes items in Inventory. Only able to remove items if present in Inventory',
              shared_variables = {'Inventory': []},
              global_context = 'Inventory: <Inventory>', # Add in Global Context here with shared_variables Inventory
              llm = llm).assign_functions([add_item_to_inventory, remove_item_from_inventory])
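The two inventory functions can also be exercised standalone, without an agent or LLM, to confirm both the success and failure branches (definitions repeated here so the snippet is self-contained):

```python
def add_item_to_inventory(shared_variables, item: str) -> str:
    ''' Adds item to inventory, and returns outcome of action '''
    shared_variables['Inventory'].append(item)
    return f'{item} successfully added to Inventory'

def remove_item_from_inventory(shared_variables, item: str) -> str:
    ''' Removes item from inventory and returns outcome of action '''
    if item in shared_variables['Inventory']:
        shared_variables['Inventory'].remove(item)
        return f'{item} successfully removed from Inventory'
    else:
        return f'{item} not found in Inventory, unable to remove'

shared_variables = {'Inventory': []}
print(add_item_to_inventory(shared_variables, 'apple'))       # apple successfully added to Inventory
print(remove_item_from_inventory(shared_variables, 'sword'))  # sword not found in Inventory, unable to remove
print(shared_variables['Inventory'])                          # ['apple']
```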
gpt-3.5-turbo is not that great with mathematical functions for Agents. Use gpt-4o-mini or better for more consistent results
gpt-3.5-turbo is not that great with Memory (Tutorial 3). Use gpt-4o-mini or better for more consistent results
A Task-based agentic framework building on StrictJSON outputs by LLM agents
We found that taskgen-ai demonstrated a healthy version release cadence and project activity because the last version was released less than a year ago. It has 1 open source maintainer collaborating on the project.