๐ Promptbook: Invisible AI Agents
Create persistent AI agents that turn your company's scattered knowledge into action - powered by the Agents Server

๐ Quick deploy

โ Warning: This is a pre-release version of the library. It is not yet ready for production use. Please look at latest stable release.
๐ฆ Package @promptbook/utils
To install this package, run:
npm i ptbk
npm install @promptbook/utils
Comprehensive utility functions for text processing, validation, normalization, and LLM input/output handling in the Promptbook ecosystem.
๐ฏ Purpose and Motivation
The utils package provides a rich collection of utility functions that are essential for working with LLM inputs and outputs. It handles common tasks like text normalization, parameter templating, validation, and postprocessing, eliminating the need to implement these utilities from scratch in every promptbook application.
๐ง High-Level Functionality
This package offers utilities across multiple domains:
- Text Processing: Counting, splitting, and analyzing text content
- Template System: Secure parameter substitution and prompt formatting
- Normalization: Converting text to various naming conventions and formats
- Validation: Comprehensive validation for URLs, emails, file paths, and more
- Serialization: JSON handling, deep cloning, and object manipulation
- Environment Detection: Runtime environment identification utilities
- Format Parsing: Support for CSV, JSON, XML validation and parsing
โจ Key Features
- ๐ Secure Templating - Prompt injection protection with template functions
- ๐ Text Analysis - Count words, sentences, paragraphs, pages, and characters
- ๐ Case Conversion - Support for kebab-case, camelCase, PascalCase, SCREAMING_CASE
- โ
Comprehensive Validation - Email, URL, file path, UUID, and format validators
- ๐งน Text Cleaning - Remove emojis, quotes, diacritics, and normalize whitespace
- ๐ฆ Serialization Tools - Deep cloning, JSON export, and serialization checking
- ๐ Environment Aware - Detect browser, Node.js, Jest, and Web Worker environments
- ๐ฏ LLM Optimized - Functions specifically designed for LLM input/output processing
Simple templating
The prompt template tag function helps format prompt strings for LLM interactions. It handles string interpolation and maintains consistent formatting for multiline strings and lists and also handles a security to avoid prompt injection.
import { prompt } from '@promptbook/utils';
const promptString = prompt`
Correct the following sentence:
> ${unsecureUserInput}
`;
The prompt name could be overloaded by multiple things in your code. If you want to use the promptTemplate which is alias for prompt:
import { promptTemplate } from '@promptbook/utils';
const promptString = promptTemplate`
Correct the following sentence:
> ${unsecureUserInput}
`;
Advanced templating
There is a function templateParameters which is used to replace the parameters in given template optimized to LLM prompt templates.
import { templateParameters } from '@promptbook/utils';
templateParameters('Hello, {name}!', { name: 'world' });
And also multiline templates with blockquotes
import { templateParameters, spaceTrim } from '@promptbook/utils';
templateParameters(
spaceTrim(`
Hello, {name}!
> {answer}
`),
{
name: 'world',
answer: spaceTrim(`
I'm fine,
thank you!
And you?
`),
},
);
Counting
These functions are useful to count stats about the input/output in human-like terms not tokens and bytes, you can use
countCharacters, countLines, countPages, countParagraphs, countSentences, countWords
import { countWords } from '@promptbook/utils';
console.log(countWords('Hello, world!'));
Splitting
Splitting functions are similar to counting but they return the split parts of the input/output, you can use
splitIntoCharacters, splitIntoLines, splitIntoPages, splitIntoParagraphs, splitIntoSentences, splitIntoWords
import { splitIntoWords } from '@promptbook/utils';
console.log(splitIntoWords('Hello, world!'));
Normalization
Normalization functions are used to put the string into a normalized form, you can use
kebab-case
PascalCase
SCREAMING_CASE
snake_case
kebab-case
import { normalizeTo } from '@promptbook/utils';
console.log(normalizeTo['kebab-case']('Hello, world!'));
- There are more normalization functions like
capitalize, decapitalize, removeDiacritics,...
- These can be also used as postprocessing functions in the
POSTPROCESS command in promptbook
Postprocessing
Sometimes you need to postprocess the output of the LLM model, every postprocessing function that is available through POSTPROCESS command in promptbook is exported from @promptbook/utils. You can use:
Very often you will use unwrapResult, which is used to extract the result you need from output with some additional information:
import { unwrapResult } from '@promptbook/utils';
unwrapResult('Best greeting for the user is "Hi Pavol!"');
๐ฆ Exported Entities
Version Information
BOOK_LANGUAGE_VERSION - Current book language version
PROMPTBOOK_ENGINE_VERSION - Current engine version
Configuration Constants
VALUE_STRINGS - Standard value strings
SMALL_NUMBER - Small number constant
Visualization
renderPromptbookMermaid - Render promptbook as Mermaid diagram
Error Handling
deserializeError - Deserialize error objects
serializeError - Serialize error objects
Async Utilities
forEachAsync - Async forEach implementation
Format Validation
isValidCsvString - Validate CSV string format
isValidJsonString - Validate JSON string format
jsonParse - Safe JSON parsing
isValidXmlString - Validate XML string format
Template Functions
prompt - Template tag for secure prompt formatting
promptTemplate - Alias for prompt template tag
Environment Detection
$getCurrentDate - Get current date (side effect)
$isRunningInBrowser - Check if running in browser
$isRunningInJest - Check if running in Jest
$isRunningInNode - Check if running in Node.js
$isRunningInWebWorker - Check if running in Web Worker
Text Counting and Analysis
CHARACTERS_PER_STANDARD_LINE - Characters per standard line constant
LINES_PER_STANDARD_PAGE - Lines per standard page constant
countCharacters - Count characters in text
countLines - Count lines in text
countPages - Count pages in text
countParagraphs - Count paragraphs in text
splitIntoSentences - Split text into sentences
countSentences - Count sentences in text
countWords - Count words in text
CountUtils - Utility object with all counting functions
Text Normalization
capitalize - Capitalize first letter
decapitalize - Decapitalize first letter
DIACRITIC_VARIANTS_LETTERS - Diacritic variants mapping
string_keyword - Keyword string type (type)
Keywords - Keywords type (type)
isValidKeyword - Validate keyword format
nameToUriPart - Convert name to URI part
nameToUriParts - Convert name to URI parts
string_kebab_case - Kebab case string type (type)
normalizeToKebabCase - Convert to kebab-case
string_camelCase - Camel case string type (type)
normalizeTo_camelCase - Convert to camelCase
string_PascalCase - Pascal case string type (type)
normalizeTo_PascalCase - Convert to PascalCase
string_SCREAMING_CASE - Screaming case string type (type)
normalizeTo_SCREAMING_CASE - Convert to SCREAMING_CASE
normalizeTo_snake_case - Convert to snake_case
normalizeWhitespaces - Normalize whitespace characters
orderJson - Order JSON object properties
parseKeywords - Parse keywords from input
parseKeywordsFromString - Parse keywords from string
removeDiacritics - Remove diacritic marks
searchKeywords - Search within keywords
suffixUrl - Add suffix to URL
titleToName - Convert title to name format
Text Organization
spaceTrim - Trim spaces while preserving structure
Parameter Processing
extractParameterNames - Extract parameter names from template
numberToString - Convert number to string
templateParameters - Replace template parameters
valueToString - Convert value to string
Parsing Utilities
parseNumber - Parse number from string
Text Processing
removeEmojis - Remove emoji characters
removeQuotes - Remove quote characters
Serialization
$deepFreeze - Deep freeze object (side effect)
checkSerializableAsJson - Check if serializable as JSON
clonePipeline - Clone pipeline object
deepClone - Deep clone object
exportJson - Export object as JSON
isSerializableAsJson - Check if object is JSON serializable
jsonStringsToJsons - Convert JSON strings to objects
Set Operations
difference - Set difference operation
intersection - Set intersection operation
union - Set union operation
Code Processing
trimCodeBlock - Trim code block formatting
trimEndOfCodeBlock - Trim end of code block
unwrapResult - Extract result from wrapped output
Validation
isValidEmail - Validate email address format
isRootPath - Check if path is root path
isValidFilePath - Validate file path format
isValidJavascriptName - Validate JavaScript identifier
isValidPromptbookVersion - Validate promptbook version
isValidSemanticVersion - Validate semantic version
isHostnameOnPrivateNetwork - Check if hostname is on private network
isUrlOnPrivateNetwork - Check if URL is on private network
isValidPipelineUrl - Validate pipeline URL format
isValidUrl - Validate URL format
isValidUuid - Validate UUID format
๐ก This package provides utility functions for promptbook applications. For the core functionality, see @promptbook/core or install all packages with npm i ptbk
Rest of the documentation is common for entire promptbook ecosystem:
๐ The Book Whitepaper
Promptbook lets you create persistent AI agents that work on real goals for your company. The Agents Server is the heart of the project - a place where your AI agents live, remember context, collaborate in teams, and get things done.
Nowadays, the biggest challenge for most business applications isn't the raw capabilities of AI models. Large language models such as GPT-5.2 and Claude-4.5 are incredibly capable.
The main challenge lies in managing the context, providing rules and knowledge, and narrowing the personality.
In Promptbook, you define your agents using simple Books - a human-readable language that is explicit, easy to understand and write, reliable, and highly portable. You then deploy them to the Agents Server, where they run persistently and work toward their goals.
|
Paul Smith
PERSONA You are a company lawyer.
Your job is to provide legal advice and support to the company and its employees.
GOAL Respond to incoming legal inquiries via email and keep the company website updated with the latest legal policies.
RULE You are knowledgeable, professional, and detail-oriented.
KNOWLEDGE https://company.com/company-policies.pdf
KNOWLEDGE https://company.com/internal-documents/employee-handbook.docx
USE EMAIL
USE BROWSER
TEAM You are part of the legal team of Paul Smith & Associรฉs, you discuss with {Emily White}, the head of the compliance department. {George Brown} is expert in corporate law and {Sophia Black} is expert in labor law.
|
Aspects of great AI agent
We have created a language called Book, which allows you to write AI agents in their native language and create your own AI persona. Book provides a guide to define all the traits and commitments.
You can look at it as "prompting" (or writing a system message), but decorated by commitments.
Commitments are special syntax elements that define contracts between you and the AI agent. They are transformed by Promptbook Engine into low-level parameters like which model to use, its temperature, system message, RAG index, MCP servers, and many other parameters. For some commitments (for example RULE commitment) Promptbook Engine can even create adversary agents and extra checks to enforce the rules.
Persona commitment
Personas define the character of your AI persona, its role, and how it should interact with users. It sets the tone and style of communication.
|
Paul Smith & Associรฉs
PERSONA You are a company lawyer.
Your job is to provide legal advice and support to the company and its employees.
|
Goal commitment
Goals define what the agent should actively work toward. Unlike a chatbot that only responds when asked, an agent with goals takes initiative and works on tasks persistently on the Agents Server.
|
Paul Smith & Associรฉs
PERSONA You are a company lawyer.
Your job is to provide legal advice and support to the company and its employees.
GOAL Respond to incoming legal inquiries via email within 24 hours.
GOAL Keep the company website updated with the latest legal policies and compliance information.
|
Knowledge commitment
Knowledge Commitment allows you to provide specific information, facts, or context that the AI should be aware of when responding.
This can include domain-specific knowledge, company policies, or any other relevant information.
Promptbook Engine will automatically enforce this knowledge during interactions. When the knowledge is short enough, it will be included in the prompt. When it is too long, it will be stored in vector databases and RAG retrieved when needed. But you don't need to care about it.
Rule commitment
Rules will enforce specific behaviors or constraints on the AI's responses. This can include ethical guidelines, communication styles, or any other rules you want the AI to follow.
Depending on rule strictness, Promptbook will either propagate it to the prompt or use other techniques, like adversary agent, to enforce it.
|
Paul Smith & Associรฉs
PERSONA You are a company lawyer.
Your job is to provide legal advice and support to the company and its employees.
GOAL Respond to incoming legal inquiries via email within 24 hours.
GOAL Keep the company website updated with the latest legal policies and compliance information.
RULE Always ensure compliance with local laws and regulations.
RULE Never provide legal advice outside your area of expertise.
RULE Never provide legal advice about criminal law.
KNOWLEDGE https://company.com/company-policies.pdf
KNOWLEDGE https://company.com/internal-documents/employee-handbook.docx
|
Use commitments
Use commitments grant the agent real capabilities - tools it can use to interact with the outside world. USE EMAIL lets the agent send emails, USE BROWSER lets it access and read web content, USE SEARCH ENGINE lets it search the web, and many more.
These are what turn a chatbot into a persistent agent that actually does work.
|
Paul Smith & Associรฉs
PERSONA You are a company lawyer.
Your job is to provide legal advice and support to the company and its employees.
GOAL Respond to incoming legal inquiries via email within 24 hours.
GOAL Keep the company website updated with the latest legal policies and compliance information.
RULE Always ensure compliance with local laws and regulations.
RULE Never provide legal advice outside your area of expertise.
RULE Never provide legal advice about criminal law.
KNOWLEDGE https://company.com/company-policies.pdf
KNOWLEDGE https://company.com/internal-documents/employee-handbook.docx
USE EMAIL
USE BROWSER
USE SEARCH ENGINE
|
Team commitment
Team commitment allows you to define the team structure and advisory fellow members the AI can consult with. This allows the AI to simulate collaboration and consultation with other experts, enhancing the quality of its responses.
|
Paul Smith & Associรฉs
PERSONA You are a company lawyer.
Your job is to provide legal advice and support to the company and its employees.
GOAL Respond to incoming legal inquiries via email within 24 hours.
GOAL Keep the company website updated with the latest legal policies and compliance information.
RULE Always ensure compliance with local laws and regulations.
RULE Never provide legal advice outside your area of expertise.
RULE Never provide legal advice about criminal law.
KNOWLEDGE https://company.com/company-policies.pdf
KNOWLEDGE https://company.com/internal-documents/employee-handbook.docx
USE EMAIL
USE BROWSER
USE SEARCH ENGINE
TEAM You are part of the legal team of Paul Smith & Associรฉs, you discuss with {Emily White}, the head of the compliance department. {George Brown} is expert in corporate law and {Sophia Black} is expert in labor law.
|
Promptbook Ecosystem
Promptbook is an ecosystem of tools centered around the Agents Server - a production-ready platform for running persistent AI agents.
Agents Server
The Agents Server is the primary way to use Promptbook. It is a web application where your AI agents live and work. You can create agents, give them knowledge and rules using the Book language, organize them into teams, and let them work on goals persistently. The Agents Server provides a UI for managing agents, an API for integrating them into your applications, and can be self-hosted via Docker or deployed on Vercel.
Promptbook Engine
The Promptbook Engine is the open-source core that powers everything. It parses the Book language, applies commitments, manages LLM provider integrations, and executes agents. The Agents Server is built on top of the Engine. If you need to embed agent capabilities directly into your own application, you can use the Engine as a standalone TypeScript/JavaScript library via NPM packages.
๐ The Promptbook Project
Promptbook project is an ecosystem centered around the Agents Server - a platform for creating, deploying, and running persistent AI agents. Following is a list of the most important pieces of the project:
| โญ Agents Server |
The primary way to use Promptbook. A production-ready platform where your AI agents live - create, manage, deploy, and interact with persistent agents that work on goals. Available as a hosted service or self-hosted via Docker.
|
| Book language |
Human-friendly, high-level language that abstracts away low-level details of AI. It allows to focus on personality, behavior, knowledge, and rules of AI agents rather than on models, parameters, and prompt engineering.
There is also a plugin for VSCode to support .book file extension
|
| Promptbook Engine |
The open-source core that powers the Agents Server. Can also be used as a standalone TypeScript/JavaScript library to embed agent capabilities into your own applications.
Released as multiple NPM packages.
|
Join our growing community of developers and users:
๐ผ๏ธ Product & Brand Channels
Promptbook.studio
๐ Documentation
See detailed guides and API reference in the docs or online.
๐ Security
For information on reporting security vulnerabilities, see our Security Policy.
๐ฆ Deployment & Packages
The fastest way to get started is with the Agents Server:
- ๐ Docker image - Self-host the Agents Server with full control over your data
- โ๏ธ Hosted Agents Server - Start creating agents immediately, no setup required
NPM Packages (for developers embedding the Engine)
If you want to embed the Promptbook Engine directly into your application, the library is divided into several packages published from a single monorepo.
You can install all of them at once:
npm i ptbk
Or you can install them separately:
โญ Marked packages are worth to try first
๐ค Promptbook Coder
ptbk coder is Promptbook's workflow layer for AI-assisted software changes. Instead of opening one chat and manually copy-pasting tasks, you keep a queue of coding prompts in prompts/*.md, let a coding agent execute the next ready task, and then verify the result before archiving the prompt.
Promptbook Coder is not another standalone coding model. It is an orchestration layer over coding agents such as GitHub Copilot, OpenAI Codex, Claude Code, Opencode, Cline, and Gemini CLI. The difference is that Promptbook Coder adds a repeatable repository workflow on top of them:
- prompt files with explicit statuses like
[ ], [x], and [-]
- automatic selection of the next runnable task, including priority support
- optional shared repo context loaded from a file such as
AGENTS.md
- automatic
git add, commit, and push after each successful prompt
- dedicated coding-agent Git identity and optional GPG signing
- verification and repair flow for work that is done, partial, or broken
- helper commands for generating boilerplates and finding refactor prompts
In short: tools like Claude Code, Codex, or GitHub Copilot are the engines; Promptbook Coder is the workflow that keeps coding work structured, reviewable, and repeatable across many prompts.
How the workflow works
ptbk coder init prepares the project for the coder workflow, seeds project-owned generic templates in prompts/templates/, creates a starter AGENTS.md context file, adds helper npm run coder:* scripts, ensures .gitignore ignores /.tmp, and configures VS Code prompt screenshots in prompts/screenshots/.
ptbk coder generate-boilerplates creates prompt files in prompts/.
- You replace placeholder
@@@ sections with real coding tasks.
ptbk coder run sends the next ready [ ] prompt to the selected coding agent.
- Promptbook Coder marks the prompt as done
[x], records runner metadata, then stages, commits, and pushes the resulting changes.
ptbk coder verify reviews completed prompts, archives finished files to prompts/done/, and appends a repair prompt when more work is needed.
Prompts marked with [-] are not ready yet, prompts containing @@@ are treated as not fully written, and prompts with more ! markers have higher priority.
Features
- Multi-runner execution:
openai-codex, github-copilot, cline, claude-code, opencode, gemini
- Context injection:
--context AGENTS.md or inline extra instructions
- Reasoning control:
--thinking-level low|medium|high|xhigh for supported runners
- Interactive or unattended runs: default wait mode, or
--no-wait for batch execution
- Git safety: clean working tree check by default, optional
--ignore-git-changes
- Opt-in remote pushes: commits stay local unless you explicitly pass
--auto-push
- Prompt triage:
--priority to process only more important tasks first
- Failure logging: failed runs write a neighboring
.error.log
- Line-ending normalization: changed files are normalized back to LF by default
Local usage in this repository
When working on Promptbook itself, the repository usually runs the CLI straight from source:
npx ts-node ./src/cli/test/ptbk.ts coder init
npx ts-node ./src/cli/test/ptbk.ts coder generate-boilerplates --template prompts/templates/common.md
npx ts-node ./src/cli/test/ptbk.ts coder generate-boilerplates --template prompts/templates/agents-server.md
npx ts-node ./src/cli/test/ptbk.ts coder run --agent github-copilot --model gpt-5.4 --thinking-level xhigh --context AGENTS.md
npx ts-node ./src/cli/test/ptbk.ts coder run --agent github-copilot --model gpt-5.4 --thinking-level xhigh --context AGENTS.md --auto-push
npx ts-node ./src/cli/test/ptbk.ts coder run --agent github-copilot --model gpt-5.4 --thinking-level xhigh --context AGENTS.md --ignore-git-changes --no-wait
npx ts-node ./src/cli/test/ptbk.ts coder find-refactor-candidates
npx ts-node ./src/cli/test/ptbk.ts coder find-refactor-candidates --level xhigh
npx ts-node ./src/cli/test/ptbk.ts coder verify
Using ptbk coder in an external project
If you want to use the workflow in another repository, install the package and invoke the ptbk binary. After local installation, npx ptbk ... is the most portable form; plain ptbk ... also works when your environment exposes the local binary on PATH.
npm install ptbk
ptbk coder init
npx ptbk coder generate-boilerplates
npx ptbk coder generate-boilerplates --template prompts/templates/common.md
npx ptbk coder run --agent github-copilot --model gpt-5.4 --thinking-level xhigh --context AGENTS.md --test npm run test
npx ptbk coder run --agent github-copilot --model gpt-5.4 --thinking-level xhigh --context AGENTS.md --auto-push
npx ptbk coder run --agent github-copilot --model gpt-5.4 --thinking-level xhigh --context AGENTS.md --test npm run test --ignore-git-changes --no-wait
npx ptbk coder find-refactor-candidates
npx ptbk coder find-refactor-candidates --level xhigh
npx ptbk coder verify
ptbk coder init also bootstraps a starter AGENTS.md, adds package.json scripts for the four main coder commands, adds the coder temp ignore to .gitignore, and configures .vscode/settings.json so pasted images from prompts/*.md land in prompts/screenshots/.
What each command does
| Command | What it does |
| ------------------------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | --- | ------ | ---- | ----- | ------------------------------------------------------------------------ |
| ptbk coder init | Creates prompts/, prompts/done/, the project-generic template files materialized in prompts/templates/ (currently common.md), and a starter AGENTS.md; ensures .env contains CODING_AGENT_GIT_NAME, CODING_AGENT_GIT_EMAIL, and CODING_AGENT_GIT_SIGNING_KEY; adds helper coder scripts to package.json; ensures .gitignore contains /.tmp; and configures .vscode/settings.json to save pasted prompt images into prompts/screenshots/. |
| ptbk coder generate-boilerplates | Creates new prompt markdown files with fresh emoji tags so you can quickly fill in coding tasks; --template accepts either a built-in alias or a markdown file path relative to the project root. |
| ptbk coder run | Picks the next ready prompt, appends optional context, runs it through the selected coding agent, can optionally verify each attempt with a shell test command and feed failing output back for retries, then marks success or failure, commits the result, and pushes only when --auto-push is enabled. |
| ptbk coder find-refactor-candidates | Scans the repository for oversized or overpacked files and writes prompt files for likely refactors; --level <xlow | low | medium | high | xhigh | extreme> ranges from a very benevolent scan to a very aggressive sweep. |
| ptbk coder verify | Walks through completed prompts, archives truly finished work, and adds follow-up repair prompts for unfinished results. |
Most useful ptbk coder run flags
--agent <name> | Selects the coding backend. |
--model <model> | Chooses the runner model; required for openai-codex and gemini, optional for github-copilot. |
--context <text-or-file> | Appends extra instructions inline or from a file like AGENTS.md. |
--test <command> | Runs a verification command after each prompt attempt and feeds failing output back for retries. |
--thinking-level <level> | Sets reasoning effort for supported runners. |
--no-wait | Skips interactive pauses between prompts for unattended execution. |
--ignore-git-changes | Disables the clean-working-tree guard. |
--priority <n> | Runs only prompts at or above the given priority. |
--dry-run | Prints which prompts are ready instead of executing them. |
--allow-credits | Lets OpenAI Codex spend credits when required. |
--auto-push | Pushes each successful coding-agent commit to the configured remote. |
--auto-migrate | Runs testing-server database migrations after each successful prompt. |
Typical usage pattern
- Initialize once with
ptbk coder init.
- Customize
prompts/templates/*.md if needed, then create or write prompt files in prompts/.
- Customize the starter
AGENTS.md with repository-specific instructions, then pass --context AGENTS.md.
- Run one prompt at a time interactively, or use
--no-wait for unattended batches.
- Finish with
ptbk coder verify so resolved prompts are archived and broken ones get explicit repair follow-ups.
๐ Dictionary
The following glossary is used to clarify certain concepts:
General LLM / AI terms
- Prompt drift is a phenomenon where the AI model starts to generate outputs that are not aligned with the original prompt. This can happen due to the model's training data, the prompt's wording, or the model's architecture.
- Pipeline, workflow scenario or chain is a sequence of tasks that are executed in a specific order. In the context of AI, a pipeline can refer to a sequence of AI models that are used to process data.
- Fine-tuning is a process where a pre-trained AI model is further trained on a specific dataset to improve its performance on a specific task.
- Zero-shot learning is a machine learning paradigm where a model is trained to perform a task without any labeled examples. Instead, the model is provided with a description of the task and is expected to generate the correct output.
- Few-shot learning is a machine learning paradigm where a model is trained to perform a task with only a few labeled examples. This is in contrast to traditional machine learning, where models are trained on large datasets.
- Meta-learning is a machine learning paradigm where a model is trained on a variety of tasks and is able to learn new tasks with minimal additional training. This is achieved by learning a set of meta-parameters that can be quickly adapted to new tasks.
- Retrieval-augmented generation is a machine learning paradigm where a model generates text by retrieving relevant information from a large database of text. This approach combines the benefits of generative models and retrieval models.
- Longtail refers to non-common or rare events, items, or entities that are not well-represented in the training data of machine learning models. Longtail items are often challenging for models to predict accurately.
Note: This section is not a complete dictionary, more list of general AI / LLM terms that has connection with Promptbook
๐ฏ Core concepts
Advanced concepts
| Data & Knowledge Management | Pipeline Control |
|---|
|
|
|
| Language & Output Control | Advanced Generation |
|---|
|
|
|
๐ View more concepts
๏ฟฝ Agents Server
The Agents Server is the primary way to use Promptbook. It is a production-ready platform where you create, deploy, and manage persistent AI agents that work toward goals. Agents remember context across conversations, collaborate in teams, and follow the rules and knowledge you define in the Book language.
- Hosted at gallery.ptbk.io - start creating agents immediately
- Self-hosted via Docker - full control over your data and infrastructure
- API for integrating agents into your own applications
๐ Promptbook Engine
The Engine is the open-source core that powers the Agents Server. If you need to embed agent capabilities directly into your TypeScript/JavaScript application, you can use it as a standalone library.

โโ When to use Promptbook?
โ When to use
- When you want to deploy persistent AI agents that work on goals for your company
- When you need agents with specific personalities, knowledge, and rules tailored to your business
- When you want agents that collaborate in teams and consult each other
- When you need to integrate AI agents into your existing applications via API
- When you want to self-host your AI agents with full control over data and infrastructure
- When you are writing an app that generates complex things via LLM - like websites, articles, presentations, code, stories, songs,...
- When you want to version your agent definitions and test multiple versions
- When you want to log agent execution and backtrace issues
See more
โ When not to use
- When a single simple prompt already works fine for your job
- When OpenAI Assistant (GPTs) is enough for you
- When you need streaming (this may be implemented in the future, see discussion)
- When you need to use something other than JavaScript or TypeScript (other languages are on the way, see the discussion)
- When your main focus is on something other than text - like images, audio, video, spreadsheets (other media types may be added in the future, see discussion)
- When you need to use recursion (see the discussion)
See more
๐ Known issues
๐งผ Intentionally not implemented features
โ FAQ
If you have a question start a discussion, open an issue or write me an email.
๐
Changelog
See CHANGELOG.md
๐ License
This project is licensed under BUSL 1.1.
๐ค Contributing
We welcome contributions! See CONTRIBUTING.md for guidelines.
You can also โญ star the project, follow us on GitHub or various other social networks.We are open to pull requests, feedback, and suggestions.
Need help with Book language? We're here for you!
We welcome contributions and feedback to make Book language better for everyone!