ai-tool-agent (WIP)

AI Agent Script is a framework for defining AI Agents, their properties, and behaviors for interactive conversations. This document provides an overview of the script structure, functions, and event handling mechanisms used in AI Agent Scripts.

The AI Tool Agent employs Large Language Models (LLMs) to execute targeted tasks.

The base class manages all agents and abstracts agent functionality.

AIScript

The Lightweight Intelligent Agent Script Engine

AIScript executes workflows based on script parameters and instructions in a YAML document, involving calls to large model tools, template replacements, and result pipeline processing.

  • exec(data?: any): Executes the script and returns the result.
  • run(data?: any): Executes the script and returns the runtime; the result is available at runtime.result.

Lightweight Intelligent Agent Script files use the .ai.yaml or .ai.yml extension.

Front-matter Initialization

You can initialize configuration parameters such as model parameters (parameters) and prompt variables (prompt).

---
autoRunLLMIfPromptAvailable: true # Default true. Automatically call the LLM when the script exits if there is a prompt message and the LLM has never been called.
forceJson: null                   # Default `undefined`. When `undefined`/`null`, it is determined automatically from `output` and `response_format.type`.
disableLlmRequest: false          # Default false. Whether to disable the `llm-request` event.
completion_delimiter: ''          # Optional. Marks the end of output in the prompt; if set, the delimiter is automatically added to stop_words. Default: none.

prompt:
  add_generation_prompt: true
  messages:
    - role: system
      content: Carefully Think about the intent of following The CONVERSATION user provided. Output the json object with the Intent Category and Reason.
    - role: user
      content: |-
        The CONVERSATION:
        {{ conversation }}
input:
  conversation: "messages[1].content"
output:
  type: "object"
  properties:
    intent:
      type: "string"
    reason:
      type: "string"
  required: ["intent", "reason"]
parameters:
  continueOnLengthLimit: true
  timeout: 120000      # LLM max timeout, defaults to 120000ms(2min)
  max_tokens: 5        # Indicates the model will generate at most 5 tokens as output.
  temperature: 0.7     # The higher the value, the more creative and diverse the generated text will be.
  top_k: 40            # Selects the top k most likely words from a set, then randomly picks one from these k words. Default is 40.
  top_p: 0.95          # Sets a probability threshold for the model to select only from the most probable vocabulary until the cumulative probability reaches the set threshold. This ensures the generated text is both reasonable and varied.
  min_p: 0.05          # Sets a minimum probability threshold to ensure the model considers words with probabilities no lower than this threshold, maintaining the quality and diversity of the generated text.
  seed: 4294967295     # A seed to ensure the model generates the same results each time the same seed is used, facilitating reproducibility and debugging.
  tfs_z: 1             # Tail-free sampling parameter (z); helps the model focus on high-quality word choices while maintaining some diversity.
  typical_p: 1         # Sets a standard for the model to choose words that are both common and reasonable, ensuring both text quality and diversity.
  repeat_last_n: 64    # Remembers the last 64 tokens to avoid repetition, preventing excessive repetition in the generated text.
  repeat_penalty: 1    # Implements a penalty mechanism to discourage the model from reusing the same words, making the generated text richer and more diverse.
  presence_penalty: 0  # Instructs the model to use a word less if it has already appeared, increasing diversity and reducing repetition.
  frequency_penalty: 0 # Instructs the model to use a word less if it has been used frequently, reducing repetition and increasing the diversity of the generated text.
  response_format:
    type: json_object # Forces text to JSON object if used with output.
---

Adding Prompt Messages

Each line adds a prompt message and is either a string or a single-key object. A string represents a user (human) message; an object specifies the role as [role]: message.

- "hi, my assistant." # Represents a user message, equivalent to `user: "hi, my assistant."`
- assistant: "hi, {{user}}" # The key 'assistant' denotes the role, and the value is the role's message.

Prompt messages can also be defined as templates, e.g., {{user}}. Template data is specified in prompt parameters, either with $prompt or directly in the FRONT-MATTER:

---
prompt:
  add_generation_prompt: true # Defaults to true, adding an assistant prompt if the last message's role is not `assistant`.
  user: Mike
---
- "hi, my assistant."
- $prompt:
    user: Mike

Functions

Functions are executed in array order.

Defining Functions

Define functions using the !fn custom tag.

!fn |-
  function func1 ({arg1, arg2}) {
  }
#  Function without the `function` keyword:
!fn |-
  func1 ({arg1, arg2}) {
  }
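
Once defined, a function can be invoked by name with a `$` prefix, as the `$if`/`$isOk` and pipeline examples later in this document do. A minimal sketch of this calling convention (the `add` function is hypothetical):

!fn |-
  add ({a, b}) { return a + b }
- $add:
    a: 1
    b: 2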

async require(moduleFilename) can be used in functions.

In functions, this can access the current script's runtime and its methods.
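
A sketch combining both notes; the module filename and the `util.format` helper are hypothetical, and reading `this.result` is illustrative only:

!fn |-
  async formatResult () {
    // `require` returns a Promise inside functions; './util.js' is a hypothetical module filename
    const util = await require('./util.js')
    // `this` is the current script's runtime; `this.result` mirrors `runtime.result` (see `run()` above)
    return util.format(this.result)
  }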

Defining Template Functions

Define template functions using the !fn# custom tag. These can be used within Jinja templates.

---
content:
  a: 1
  b: 2
---
!fn# |-
  function toString(value) {
    return JSON.stringify(value)
  }
$format: "{{toString(content)}}"
Formatting string

The $format function uses Jinja2 templates to format strings. This is particularly useful when you need to generate dynamic content based on variables.

---
content: hello world
---
$format: "{{content}}"
Executing External AI Agent Script with $exec

The $exec function allows you to call external scripts and pass arguments to them.

$exec:
  id: 'script id'
  filename: 'script filename'
  args: # arguments passed to the script (data)

Variable Operations

Set and get variables using $set and $get.

$set:
  testVar: 124
  var2: !fn (key) { return key + ' hi' }
$get:
  - testVar
  - var2

Expressions

Use ?=<expression> for inline expressions.

- $echo: ?=23+5

Event Handling

Handle events using $on, $once, $emit, and $off.

  • $on: Registers an event listener that will be called every time the event is emitted.
  • $once: Registers an event listener that will be called only the first time the event is emitted.
  • $emit: Emits an event, triggering all registered listeners for that event.
  • $off: Removes an event listener.

$on and $once Event Listening Functions

Arguments:

  • event: Event name
  • callback: Callback function or expression

Callback function:

!fn |-
  onTest (event, arg1) { return {...arg1, event: event.type}}

$on:
  event: test
  callback: onTest

$once:
  event: test
  callback: !fn |-
    (event, arg1) { return {...arg1, event: event.type}}

$emit:
  event: test
  args:
    a: 1
    b: 2

$off:
  event: test
  callback: onTest

Known Events
  • beforeCall: Triggered before a function call (see the sketch after this list).
    • Callback: (event, name, params, fn) => void|params
    • Return value modifies the parameters.
  • afterCall: Triggered before returning the result of a function call.
    • Callback: (event, name, params, result, fn) => void|result
    • Return value modifies the result.
  • llmParams: Triggered before the LLM is called; can be used to modify the parameters passed to the LLM.
    • Callback: (event, params: {value: AIMessage[], options?: any, model?: string, count?: number}) => void|result<{value: AIMessage[], options?: any, model?: string, count?: number}>
    • value: The messages to be sent to the LLM.
    • options: The options passed to the LLM.
    • model: The LLM name to be used.
    • count: The retry count, if any.
  • llmBefore: Triggered before the LLM is called; cannot modify the parameters and is used only as a notification.
    • Callback: (event, params: any) => void
  • llm: Triggered before the LLM returns results, used to modify LLM results.
    • Callback: (event, result: string) => void|result<string>
  • llmStream: Triggered when the LLM returns results in a stream.
    • Callback: (event, chunk: AIResult, content: string, retryCount: number) => void
  • llmRequest: Triggered when an LLM result is needed, used to call the LLM and get results.
    • Callback: (event, messages: AIChatMessage[], options?) => void|result<string>
    • Can be disabled with disableLlmRequest: true.
  • ready: Triggered when the script interaction is ready.
    • Callback: (event, isReady: boolean) => void
  • load-chats: Triggered when loading chat history.
    • Callback: (event, filename: string) => AIChatMessage[]|void
  • save-chats: Triggered when saving chat history.
    • Callback: (event, messages: AIChatMessage[], filename?: string) => void
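
As referenced above, a minimal sketch of a beforeCall listener that rewrites a function's parameters, using the documented callback signature; the traceCall function and the traced field are hypothetical:

!fn |-
  traceCall (event, name, params, fn) {
    // Returning a value replaces the parameters passed to the call
    return { ...params, traced: true }
  }
$on:
  event: beforeCall
  callback: traceCall
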
Conditional Statements

Use $if for conditional logic execution. You can define then and else blocks to specify actions based on the evaluation of the condition.

- $set:
    a: 1
- $if: "a == 1"
  then:
    $echo: Ok
  else:
    $echo: Not OK

# You can also use custom functions for conditions.
!fn |-
  isOk(ok) {return ok}
- $if:
    $isOk: true
  then:
    $echo: Ok
  else:
    $echo: Not OK

$Prompt

Use $prompt to define prompt parameters for template usage or define them in the FRONT-MATTER.

- $prompt:
    add_generation_prompt: true
    user: Mike

Model $Parameters

Set model parameters with $parameters or define them in the FRONT-MATTER.

---
parameters:
  max_tokens: 512
  temperature: 0.01
---
- $parameters:
    temperature: 0.01

Tools

Invoke registered tools with the $tool tag.

LLM (Large Language Model) Tool

$AI is a quick shortcut for directly calling the large model tool. By default, it appends the response to prompt.messages, unless shouldAppendResponse: false is set.

$AI:
  max_tokens: 512
  temperature: 0.7
  pushMessage: true # Defaults to true, appending the model's response to prompt.messages.
  shouldAppendResponse: null # Only relevant when pushMessage is true. When undefined/null, a new message is appended if matchedResponse or add_generation_prompt is set, or if the last message has no content; otherwise the last message's content is replaced.
  block-llm-evt: true # Blocks the `llm` event from being emitted.
  block-llmParams-evt: true # Blocks the `llmParams` event from being emitted.
  block-llmBefore-evt: true # Blocks the `llmBefore` event from being emitted.
  block-all-evt: true # Blocks all llm events (except `llmStream`) from being emitted.
$tool:
  name: llm # A shorthand alias could be: !llm
  ...       # Other named parameters

llm: $tool  # Or define like this?
|- max_tokens: 512    # Without the line indicator '- ', must use '|-' to indicate connection to the previous object.
|- temperature: 0.7

- llm: $tool  # Or define like this?
  max_tokens: 512
  temperature: 0.7

Model parameters can also be configured in the front-matter:

---
output:
  type: "object"
  properties:
    intent:
      type: "string"
    categories:
      type: "array"
      items:
        type: "string"
    reason:
      type: "string"
  required: ["intent", "categories", "reason"]
parameters:
  max_tokens: 512 # Maximum number of tokens the model may return in one response; 512 is suggested (neither too big nor too small). Default is 2048.
  continueOnLengthLimit: true
  maxRetry: 7 # Retries the LLM if the response is incomplete due to the max_tokens limit; defaults to 7 retries.
  stream: true # Enables streaming responses by default; takes precedence over `llmStream`.
  timeout: 30000 # Sets the response timeout to 30 seconds (in ms). Default is 120 seconds if not set.
  response_format:
    type: json_object
  minTailRepeatCount: 7 # Minimum tail repeat count, default 7. Streaming mode only: stops the response when the model's trailing sequence repeats this many times. Set to 0 to disable detection.
llmStream: true # Enables streaming responses by default.
---
- $AI # Executes the large model, optional if messages exist and LLM hasn't been called before script end. Set `autoRunLLMIfPromptAvailable: false` to disable this feature.

Streaming output is supported. When llmStream (or stream: true in the call parameters) is enabled, the model returns a streaming response and the llmStream event is triggered. The event handler receives (event, chunk: AIResult, content: string), where chunk is the current model response part and content is the accumulated content so far.

If prompt.messages exist in the initial data and the script doesn't manually call $AI, it will automatically call at the end. This can be disabled by setting autoRunLLMIfPromptAvailable: false.
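
A minimal sketch of this behavior, assuming default settings: the script below never calls $AI explicitly, so the LLM runs once automatically at script end.

---
prompt:
  messages:
    - role: user
      content: "Say hello in one word."
---
# No script body: because prompt messages exist and
# `autoRunLLMIfPromptAvailable` is left at its default (true),
# the LLM is called automatically on exit.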

If response_format.type is "json_object" and output exists, the returned result will be the JSON object content described by output, not the model's raw response. You can force-disable this with forceJson: false.

New feature: if the last message is incomplete, add_generation_prompt isn't set, and the last message contains no response template replacement, the model's response completes the last message instead of appending a new assistant message. This can be disabled by setting shouldAppendResponse: true.

If no output variable is defined, the default output variable in prompt is "RESPONSE".

You can define tool output replacements within messages:

- "Greet Jacky:[[GREETINGS]]\n"
- $AI: # Automatically called if [[]] is detected, unless `autoRunLLMIfPromptAvailable: false`.
  stop_word: '.'
  aborter: ?= new AbortController() # If not set, uses the system's AbortController. Can be stopped anytime with $abort.
  ... # Other named parameters

This defines GREETINGS in prompt, and the tool's result is placed there. With logLevel set to info, message results are displayed:

Greet Jacky:Hi there Jacky! It's nice to meet you.
🚀 [info]: { role: "user", content: "a simple joke without new line: [[JOKE]] Haha." }
🚀 [info]: a simple joke without new line: Why don't scientists trust atoms?

Because they make up everything. Haha. { role: "user" }

Sometimes the response doesn't follow instructions and needs preprocessing, for instance replacing \n with ' '. This can be done via events: the llm tool triggers an llm event upon completion, allowing the result to be modified:

---
prompt:
  add_generation_prompt: true
parameters:
  max_tokens: 5
  continueOnLengthLimit: true
---
!fn |-
  trimLLMResult(event, result) {
    return result.content.replace(/[\n\r]+/g, ' ')
  }
"a simple joke without new line: [[JOKE]] Haha."
$on:
  event: llm
  callback: $trimLLMResult
$tool: llm

Pipelines

$pipe passes the previous result to the next step, supporting shorthand $|func.

- toolId: $tool
# The previous function's result is passed to 'func1|print'. If a pipe has no arguments, it passes to the next array element. If the next element is an object, it merges.
- |
- $func1
- $|func1
- $|print
- llm: $tool
- $|func1
- $|print

ID

An ID consists of two parts: <name>[|<additional name|default>]. The second part, the additional name, is optional and is generally related to the large language model; the literal default marks the script as the fallback to use when no large model matches.

AIScriptServer

The AIScriptServer provides methods to load and manage scripts and chat histories.

  • AIScriptServer.load(): Loads and compiles the source script file.
  • Chat history is automatically loaded ($loadChats(filename?: string)) and saved ($saveChats(filename?: string)) if the chatsDir and id parameters are set.

AI Character Agent Script Type

An AI Character Agent Script defines AI characters, their properties, and behaviors. Set type to char to indicate an AI character script.

---
type: char
---

Characters can store both character and user information. Use isBot to differentiate between users and AI characters.
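
A minimal sketch of a character script; the name field and the front-matter placement of isBot are assumptions for illustration:

---
type: char
# Hypothetical character fields; `isBot: true` marks an AI character (vs a user).
name: Dobby
isBot: true
---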
