ai-tool-agent (WIP)
AI Agent Script is a framework for defining AI Agents, their properties, and behaviors for interactive conversations. This document provides an overview of the script structure, functions, and event handling mechanisms used in AI Agent Scripts.
The AI Tool Agent employs Large Language Models (LLMs) to execute targeted tasks.
The base class manages all agents and abstracts agent functionality.
AIScript
The Lightweight Intelligent Agent Script Engine
AIScript executes workflows based on script parameters and instructions in a YAML document, involving calls to large model tools, template replacements, and result pipeline processing.
- exec(data?: any): Returns the execution result.
- run(data?: any): Returns the runtime after execution, with the result at runtime.result.
The Lightweight Intelligent Agent Script file extension is .ai.yaml or .ai.yml.
Front-matter Initialization
You can initialize configuration parameters such as model parameters (parameters) and prompt variables (prompt).
---
autoRunLLMIfPromptAvailable: true
forceJson: null
disableLlmRequest: false
completion_delimiter: ''
prompt:
  add_generation_prompt: true
  messages:
    - role: system
      content: Carefully think about the intent of the following CONVERSATION provided by the user. Output a JSON object with the intent category and reason.
    - role: user
      content: |-
        The CONVERSATION:
        {{ conversation }}
input:
  conversation: "messages[1].content"
output:
  type: "object"
  properties:
    intent:
      type: "string"
    reason:
      type: "string"
  required: ["intent", "reason"]
parameters:
  continueOnLengthLimit: true
  timeout: 120000
  max_tokens: 5
  temperature: 0.7
  top_k: 40
  top_p: 0.95
  min_p: 0.05
  seed: 4294967295
  tfs_z: 1
  typical_p: 1
  repeat_last_n: 64
  repeat_penalty: 1
  presence_penalty: 0
  frequency_penalty: 0
  response_format:
    type: json_object
---
Adding Prompt Messages
Each line, either a string or a single-key object, adds a prompt message. A string represents a user (human) message; an object specifies a role, in the form [role]: message.
- "hi, my assistant."
- assistant: "hi, {{user}}"
Prompt messages can also be defined as templates, e.g., {{user}}. Template data is specified in the prompt parameters, either with $prompt or directly in the front-matter:
---
prompt:
  add_generation_prompt: true
  user: Mike
---
- "hi, my assistant."
- $prompt:
    user: Mike
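The substitution mechanics can be sketched in plain JavaScript. This is only a minimal stand-in for illustration; the real engine is Jinja2-compatible and supports far more than bare variable lookup.

```javascript
// Minimal sketch of {{ var }} substitution in prompt messages.
// Only simple variable lookup is handled here.
function renderTemplate(template, vars) {
  return template.replace(/\{\{\s*(\w+)\s*\}\}/g, (match, name) =>
    name in vars ? String(vars[name]) : match // unknown vars are left as-is
  );
}

const rendered = renderTemplate("hi, {{user}}", { user: "Mike" });
// rendered === "hi, Mike"
```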
Functions
Execution follows the order of the array.
Defining Functions
Define functions using the !fn
custom tag.
!fn |-
  function func1 ({arg1, arg2}) {
  }

The function keyword can be omitted:

!fn |-
  func1 ({arg1, arg2}) {
  }
async require(moduleFilename) can be used in functions. Within a function, this accesses the current script's runtime and its methods.
Defining Template Functions
Define template functions using the !fn# custom tag. These can be used within Jinja templates.
---
content:
  a: 1
  b: 2
---
!fn# |-
  function toString(value) {
    return JSON.stringify(value)
  }
$format: "{{toString(content)}}"
Formatting Strings
The $format function uses Jinja2 templates to format strings. This is particularly useful when you need to generate dynamic content based on variables.
---
content: hello world
---
$format: "{{content}}"
Executing External AI Agent Scripts with $exec
The $exec function allows you to call external scripts and pass arguments to them.
$exec:
  id: 'script id'
  filename: 'script filename'
  args:
Variable Operations
Set and get variables using $set and $get.
$set:
  testVar: 124
  var2: !fn (key) { return key + ' hi' }
$get:
  - testVar
  - var2
Expressions
Use ?=<expression> for inline expressions.
- $echo: ?=23+5
Event Handling
Handle events using $on, $once, $emit, and $off.
- $on: Registers an event listener that is called every time the event is emitted.
- $once: Registers an event listener that is called only the first time the event is emitted.
- $emit: Emits an event, triggering all registered listeners for that event.
- $off: Removes an event listener.
$on and $once Event Listening Functions
Arguments:
- event: Event name
- callback: Callback function or expression
Callback function:
!fn |-
  onTest (event, arg1) { return {...arg1, event: event.type} }
$on:
  event: test
  callback: onTest
$once:
  event: test
  callback: !fn |-
    (event, arg1) { return {...arg1, event: event.type} }
$emit:
  event: test
  args:
    a: 1
    b: 2
$off:
  event: test
  callback: onTest
Known Events
beforeCall: Triggered before a function call.
- Callback: (event, name, params, fn) => void|params
- The return value modifies the parameters.
afterCall: Triggered before returning the result of a function call.
- Callback: (event, name, params, result, fn) => void|result
- The return value modifies the result.
llmParams: Triggered before the LLM is called; can be used to modify the parameters passed to the LLM.
- Callback: (event, params: {value: AIMessage[], options?: any, model?: string, count?: number}) => void|result<{value: AIMessage[], options?: any, model?: string, count?: number}>
  - value: The messages to be sent to the LLM.
  - options: The options passed to the LLM.
  - model: The LLM name to be used.
  - count: The retry count, if any.
llmBefore: Triggered before the LLM is called; cannot modify the parameters and serves only as a notification.
- Callback: (event, params: any) => void
llm: Triggered before the LLM returns its result; used to modify the LLM result.
- Callback: (event, result: string) => void|result<string>
llmStream: Triggered when the LLM returns results as a stream.
- Callback: (event, chunk: AIResult, content: string, retryCount: number) => void
llmRequest: Triggered when an LLM result is needed; used to call the LLM and get results.
- Callback: (event, messages: AIChatMessage[], options?) => void|result<string>
- Can be disabled with disableLlmRequest: true.
ready: Triggered when the script interaction is ready.
- Callback: (event, isReady: boolean) => void
load-chats: Triggered when loading chat history.
- Callback: (event, filename: string) => AIChatMessage[]|void
save-chats: Triggered when saving chat history.
- Callback: (event, messages: AIChatMessage[], filename?: string) => void
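The modify-by-return-value convention used by beforeCall and afterCall can be sketched with a hypothetical wrapper; none of these helper names are the library's actual internals.

```javascript
// Sketch: a listener that returns a value overrides params/result;
// a listener that returns undefined leaves them unchanged.
function callWithHooks(name, params, fn, hooks = {}) {
  const before = hooks.beforeCall?.({ type: "beforeCall" }, name, params, fn);
  if (before !== undefined) params = before; // return value modifies parameters

  let result = fn(params);

  const after = hooks.afterCall?.({ type: "afterCall" }, name, params, result, fn);
  if (after !== undefined) result = after;   // return value modifies the result
  return result;
}

const out = callWithHooks("double", { x: 2 }, (p) => p.x * 2, {
  beforeCall: (ev, name, params) => ({ ...params, x: params.x + 1 }), // x -> 3
  afterCall: (ev, name, params, result) => result + 10,               // 6 -> 16
});
// out === 16
```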
Conditional Statements
Use $if for conditional logic execution. You can define then and else blocks to specify actions based on the evaluation of the condition.
$set:
  a: 1
- $if: "a == 1"
  then:
    $echo: Ok
  else:
    $echo: Not OK

!fn |-
  isOk(ok) {return ok}
- $if:
    $isOk: true
  then:
    $echo: Ok
  else:
    $echo: Not OK
$prompt
Use $prompt to define prompt parameters for template usage, or define them in the front-matter.
- $prompt:
    add_generation_prompt: true
    user: Mike
Model $parameters
Set model parameters with $parameters, or define them in the front-matter.
---
parameters:
  max_tokens: 512
  temperature: 0.01
---
- $parameters:
    temperature: 0.01
Tools
Invoke registered tools with the $tool tag.
LLM (Large Language Model) Tool
$AI is a quick shortcut for directly calling the large model tool. By default, it appends the response to prompt.messages, unless shouldAppendResponse: false is set.
$AI:
  max_tokens: 512
  temperature: 0.7
  pushMessage: true
  shouldAppendResponse: null
  block-llm-evt: true
  block-llmParams-evt: true
  block-llmBefore-evt: true
  block-all-evt: true

The equivalent call through the tool tag:

$tool:
  name: llm
  ...

Or with the llm: $tool shorthand:

- llm: $tool
  max_tokens: 512
  temperature: 0.7
Model parameters can also be configured in the front-matter:
---
output:
  type: "object"
  properties:
    intent:
      type: "string"
    categories:
      type: "array"
      items:
        type: "string"
    reason:
      type: "string"
  required: ["intent", "categories", "reason"]
parameters:
  max_tokens: 512
  continueOnLengthLimit: true
  maxRetry: 7
  stream: true
  timeout: 30000
  response_format:
    type: json_object
  minTailRepeatCount: 7
llmStream: true
---
- $AI
Streaming output is supported. When llmStream (or stream: true in the call parameters) is enabled, the model returns a streaming response, which triggers the llmStream event. The event handler receives (event, part: AIResult, content: string) as parameters, where part is the current model response chunk and content is the content accumulated from the model response so far.
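The relationship between part and the accumulated content can be sketched as simple accumulation. The chunk shape below is a simplification of AIResult, used only for illustration.

```javascript
// Sketch: each streamed part is appended to `content`, and the handler
// receives (part, accumulated content), mirroring the llmStream event.
function consumeStream(parts, onStream) {
  let content = "";
  for (const part of parts) {
    content += part.content;
    onStream(part, content);
  }
  return content;
}

const seen = [];
const full = consumeStream(
  [{ content: "Hel" }, { content: "lo" }],
  (part, content) => seen.push(content)
);
// full === "Hello"; seen === ["Hel", "Hello"]
```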
If prompt.messages exists in the initial data and the script doesn't call $AI explicitly, it is called automatically at the end. This can be disabled by setting autoRunLLMIfPromptAvailable: false.
If response_format.type is "json_object" and output exists, the returned result will be the JSON object content matching output, not the model's direct response. You can force-disable this with forceJson: false.
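A rough sketch of that extraction step, simplified for illustration (the engine's actual parsing and validation are more involved):

```javascript
// Sketch: parse the model's JSON text and keep only the properties
// declared in the output schema.
function extractOutput(responseText, outputSchema) {
  const obj = JSON.parse(responseText);
  const picked = {};
  for (const key of Object.keys(outputSchema.properties)) {
    if (key in obj) picked[key] = obj[key];
  }
  return picked;
}

const schema = { type: "object", properties: { intent: {}, reason: {} } };
const result = extractOutput(
  '{"intent": "greeting", "reason": "the user says hi", "extra": 1}',
  schema
);
// result only contains the declared properties: intent and reason
```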
New feature: if the last message is incomplete, add_generation_prompt isn't set, and there's no response template replacement in the last message, the model's response completes the last message instead of appending a new assistant message. This can be disabled by setting shouldAppendResponse: true.
If no output variable is defined, the default output is "RESPONSE" in the prompt.
You can define tool output replacements within messages:
- "Greet Jacky:[[GREETINGS]]\n"
- $AI:
    stop_word: '.'
    aborter: ?= new AbortController()
...
This defines GREETINGS in the prompt, and the tool's result is placed there. With logLevel set to info, the message results are displayed:

Greet Jacky: Hi there Jacky! It's nice to meet you.
🚀 [info]: { role: "user", content: "a simple joke without new line: [[JOKE]] Haha." }
🚀 [info]: a simple joke without new line: Why don't scientists trust atoms?
Because they make up everything. Haha. { role: "user" }
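The placeholder mechanics can be sketched as a simple string replacement. This is a simplification: the engine also records the value under the placeholder's name in the prompt data.

```javascript
// Sketch: a [[NAME]] placeholder in a message is replaced with the
// tool's result text.
function fillPlaceholder(message, name, value) {
  return message.split("[[" + name + "]]").join(value);
}

const filled = fillPlaceholder(
  "Greet Jacky:[[GREETINGS]]",
  "GREETINGS",
  " Hi there Jacky! It's nice to meet you."
);
// filled === "Greet Jacky: Hi there Jacky! It's nice to meet you."
```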
Sometimes the response doesn't follow instructions and the result needs preprocessing, for instance replacing \n with ' '. This can be done via events: the llm tool triggers an llm event upon completion, allowing the result to be modified:
---
prompt:
  add_generation_prompt: true
parameters:
  max_tokens: 5
  continueOnLengthLimit: true
---
!fn |-
  trimLLMResult(event, result) {
    return result.content.replace(/[\n\r]+/g, ' ')
  }
"a simple joke without new line: [[JOKE]] Haha."
$on:
  event: llm
  callback: $trimLLMResult
$tool: llm
Pipelines
$pipe passes the previous result to the next step, supporting the shorthand $|func.
- toolId: $tool
- $func1
- $|func1
- $|print

- llm: $tool
- $|func1
- $|print
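Pipeline semantics reduce to threading each step's result into the next, roughly:

```javascript
// Sketch: run a pipeline where every step receives the previous result,
// as $pipe / $|func steps do in a script.
function runPipe(initial, steps) {
  return steps.reduce((result, step) => step(result), initial);
}

const piped = runPipe("joke", [
  (r) => r.toUpperCase(),   // a $|func1-style step
  (r) => r + "!",           // another piped step
]);
// piped === "JOKE!"
```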
ID
It consists of two parts: <name>[|<additional name|default>]. The second part, the additional name, is optional and is generally related to the large language model; default marks the entry that does not match any specific large model.
AIScriptServer
The AIScriptServer provides methods to load and manage scripts and chat histories.
- AIScriptServer.load(): Loads and compiles the source script file.
- Chat history is automatically loaded ($loadChats(filename?: string)) and saved ($saveChats(filename?: string)) if the chatsDir and id parameters are set.
AI Character Agent Script Type
An AI Character Agent Script defines AI characters, their properties, and behaviors. Set type to char to indicate an AI character script.
---
type: char
---
Characters can store both character and user information. Use isBot to differentiate between users and AI characters.