
๐Ÿ™ Promptbook: Invisible AI Agents

Create persistent AI agents that turn your company's scattered knowledge into action - powered by the Agents Server


🚀 Quick deploy

Vercel

⚠ Warning: This is a pre-release version of the library. It is not yet ready for production use. Please use the latest stable release instead.

📦 Package @promptbook/openai

  • Promptbooks are divided into several packages, all published from a single monorepo.
  • This package, @promptbook/openai, is one part of the Promptbook ecosystem.

To install this package, run:

# Install entire promptbook ecosystem
npm i ptbk

# Install just this package to save space
npm install @promptbook/openai

OpenAI integration for Promptbook, providing execution tools for OpenAI GPT models, OpenAI Assistants, and OpenAI-compatible APIs within the Promptbook ecosystem.

🎯 Purpose and Motivation

This package bridges the gap between Promptbook's unified pipeline execution system and OpenAI's powerful language models. It provides a standardized interface for accessing OpenAI's various services while maintaining compatibility with Promptbook's execution framework, enabling seamless integration with different OpenAI offerings.

🔧 High-Level Functionality

The package offers three main integration paths, along with supporting capabilities:

  • Standard OpenAI API: Direct integration with OpenAI's chat completions and embeddings
  • OpenAI Assistants: Integration with OpenAI's Assistant API (GPTs)
  • OpenAI-Compatible APIs: Support for third-party APIs that follow OpenAI's interface
  • Model Management: Automatic model selection and configuration
  • Usage Tracking: Built-in monitoring for tokens and costs

✨ Key Features

  • 🤖 Multiple OpenAI Integrations - Support for standard API, Assistants, and compatible services
  • 🔄 Seamless Provider Switching - Easy integration with other LLM providers
  • 🎯 Model Selection - Access to all available OpenAI models with automatic selection
  • 🔧 Configuration Flexibility - Support for custom endpoints, API keys, and parameters
  • 📊 Usage Tracking - Built-in token usage and cost monitoring
  • 🛡️ Error Handling - Comprehensive error handling and retry logic
  • 🚀 Performance Optimization - Caching and request optimization
  • 🔌 OpenAI-Compatible Server - Use Promptbook books as OpenAI-compatible models

🧡 Usage

import { createPipelineExecutor } from '@promptbook/core';
import {
    createPipelineCollectionFromDirectory,
    $provideExecutablesForNode,
    $provideFilesystemForNode,
    $provideScrapersForNode,
    $provideScriptingForNode,
} from '@promptbook/node';
import { OpenAiExecutionTools } from '@promptbook/openai';

// 🛠 Prepare the tools that will be used to compile and run your books
// Note: Here you can allow or deny some LLM providers, such as not providing DeepSeek for privacy reasons
const fs = $provideFilesystemForNode();
const llm = new OpenAiExecutionTools(
    //            <- TODO: [🧱] Implement in a functional (not new Class) way
    {
        isVerbose: true,
        apiKey: process.env.OPENAI_API_KEY,
    },
);
const executables = await $provideExecutablesForNode();
const tools = {
    llm,
    fs,
    scrapers: await $provideScrapersForNode({ fs, llm, executables }),
    script: await $provideScriptingForNode({}),
};

// ▶ Create whole pipeline collection
const collection = await createPipelineCollectionFromDirectory('./books', tools);

// ▶ Get single Pipeline
const pipeline = await collection.getPipelineByUrl(`https://promptbook.studio/my-collection/write-article.book`);

// ▶ Create executor - the function that will execute the Pipeline
const pipelineExecutor = createPipelineExecutor({ pipeline, tools });

// ▶ Prepare input parameters
const inputParameters = { word: 'cat' };

// 🚀▶ Execute the Pipeline
const result = await pipelineExecutor(inputParameters).asPromise({ isCrashedOnError: true });

// ▶ Handle the result
const { isSuccessful, errors, outputParameters, executionReport } = result;
console.info(outputParameters);

🤺 Usage with OpenAI's Assistants (GPTs)

TODO: Write a guide on how to use OpenAI's Assistants with Promptbook

๐Ÿง™โ€โ™‚๏ธ Wizard

Run books without any settings, boilerplate or struggle in Node.js:

import { wizard } from '@promptbook/wizard';

const {
    outputParameters: { joke },
} = await wizard.execute(`https://github.com/webgptorg/book/blob/main/books/templates/generic.book`, {
    topic: 'Prague',
});

console.info(joke);

๐Ÿง™โ€โ™‚๏ธ Connect to LLM providers automatically

You can use the $provideExecutionToolsForNode function to create all required tools automatically from environment variables such as ANTHROPIC_CLAUDE_API_KEY and OPENAI_API_KEY.

import { createPipelineExecutor } from '@promptbook/core';
import { createPipelineCollectionFromDirectory, $provideExecutionToolsForNode } from '@promptbook/node';

// 🛠 Prepare the tools that will be used to compile and run your books
// Note: Here you can allow or deny some LLM providers, such as not providing DeepSeek for privacy reasons
const tools = await $provideExecutionToolsForNode();

// ▶ Create whole pipeline collection
const collection = await createPipelineCollectionFromDirectory('./books', tools);

// ▶ Get single Pipeline
const pipeline = await collection.getPipelineByUrl(`https://promptbook.studio/my-collection/write-article.book`);

// ▶ Create executor - the function that will execute the Pipeline
const pipelineExecutor = createPipelineExecutor({ pipeline, tools });

// ▶ Prepare input parameters
const inputParameters = { word: 'dog' };

// 🚀▶ Execute the Pipeline
const result = await pipelineExecutor(inputParameters).asPromise({ isCrashedOnError: true });

// ▶ Handle the result
const { isSuccessful, errors, outputParameters, executionReport } = result;
console.info(outputParameters);

💕 Usage of multiple LLM providers

You can use multiple LLM providers in one Promptbook execution. The best model will be chosen automatically according to the prompt and the model's capabilities.

import { createPipelineExecutor } from '@promptbook/core';
import {
    createPipelineCollectionFromDirectory,
    $provideExecutablesForNode,
    $provideFilesystemForNode,
    $provideScrapersForNode,
    $provideScriptingForNode,
} from '@promptbook/node';
import { OpenAiExecutionTools } from '@promptbook/openai';
import { AnthropicClaudeExecutionTools } from '@promptbook/anthropic-claude';
import { AzureOpenAiExecutionTools } from '@promptbook/azure-openai';

// ▶ Prepare multiple tools
const fs = $provideFilesystemForNode();
const llm = [
    // Note: You can use multiple LLM providers in one Promptbook execution.
    //       The best model will be chosen automatically according to the prompt and the model's capabilities.
    new OpenAiExecutionTools(
        //            <- TODO: [🧱] Implement in a functional (not new Class) way
        {
            apiKey: process.env.OPENAI_API_KEY,
        },
    ),
    new AnthropicClaudeExecutionTools(
        //            <- TODO: [🧱] Implement in a functional (not new Class) way
        {
            apiKey: process.env.ANTHROPIC_CLAUDE_API_KEY,
        },
    ),
    new AzureOpenAiExecutionTools(
        //            <- TODO: [🧱] Implement in a functional (not new Class) way
        {
            resourceName: process.env.AZUREOPENAI_RESOURCE_NAME,
            deploymentName: process.env.AZUREOPENAI_DEPLOYMENT_NAME,
            apiKey: process.env.AZUREOPENAI_API_KEY,
        },
    ),
];
const executables = await $provideExecutablesForNode();
const tools = {
    llm,
    fs,
    scrapers: await $provideScrapersForNode({ fs, llm, executables }),
    script: await $provideScriptingForNode({}),
};

// ▶ Create whole pipeline collection
const collection = await createPipelineCollectionFromDirectory('./books', tools);

// ▶ Get single Pipeline
const pipeline = await collection.getPipelineByUrl(`https://promptbook.studio/my-collection/write-article.book`);

// ▶ Create executor - the function that will execute the Pipeline
const pipelineExecutor = createPipelineExecutor({ pipeline, tools });

// ▶ Prepare input parameters
const inputParameters = { word: 'dog' };

// 🚀▶ Execute the Pipeline
const result = await pipelineExecutor(inputParameters).asPromise({ isCrashedOnError: true });

// ▶ Handle the result
const { isSuccessful, errors, outputParameters, executionReport } = result;
console.info(outputParameters);

💙 Integration with other models

See the other model integrations:

🤖 Using Promptbook as an OpenAI-compatible model

You can use Promptbook books as if they were OpenAI models by using the OpenAI-compatible endpoint. This allows you to use the standard OpenAI SDK with Promptbook books.

First, start the Promptbook server:

import { startRemoteServer } from '@promptbook/remote-server';
import { createPipelineCollectionFromDirectory } from '@promptbook/node';

// Start the server
await startRemoteServer({
    port: 3000,
    collection: await createPipelineCollectionFromDirectory('./books'),
    isAnonymousModeAllowed: true,
    isApplicationModeAllowed: true,
});

Then use the standard OpenAI SDK with the server URL:

import OpenAI from 'openai';

// Create OpenAI client pointing to your Promptbook server
const openai = new OpenAI({
    baseURL: 'http://localhost:3000', // Your Promptbook server URL
    apiKey: 'not-needed', // API key is not needed for Promptbook
});

// Use any Promptbook book as a model
const response = await openai.chat.completions.create({
    model: 'https://promptbook.studio/my-collection/write-article.book', // Book URL as model name
    messages: [
        {
            role: 'user',
            content: 'Write a short story about a cat',
        },
    ],
});

console.log(response.choices[0].message.content);

This allows you to:

  • Use Promptbook books with any OpenAI-compatible client
  • Integrate Promptbook into existing OpenAI-based applications
  • Use Promptbook books as models in other AI frameworks

📦 Exported Entities

Version Information

  • BOOK_LANGUAGE_VERSION - Current book language version
  • PROMPTBOOK_ENGINE_VERSION - Current engine version

Execution Tools Creation Functions

  • createOpenAiAssistantExecutionTools - Create OpenAI Assistant execution tools
  • createOpenAiCompatibleExecutionTools - Create OpenAI-compatible execution tools
  • createOpenAiExecutionTools - Create standard OpenAI execution tools

Model Information

  • OPENAI_MODELS - Available OpenAI models configuration

Execution Tools Classes

  • OpenAiAssistantExecutionTools - OpenAI Assistant execution tools class
  • OpenAiCompatibleExecutionTools - OpenAI-compatible execution tools class
  • OpenAiExecutionTools - Standard OpenAI execution tools class

Configuration Types

  • OpenAiAssistantExecutionToolsOptions - Configuration options for OpenAI Assistant tools (type)
  • OpenAiCompatibleExecutionToolsOptions - Configuration options for OpenAI-compatible tools (type)
  • OpenAiCompatibleExecutionToolsNonProxiedOptions - Non-proxied configuration options (type)
  • OpenAiCompatibleExecutionToolsProxiedOptions - Proxied configuration options (type)
  • OpenAiExecutionToolsOptions - Configuration options for standard OpenAI tools (type)

Provider Registrations

  • _OpenAiRegistration - Standard OpenAI provider registration
  • _OpenAiAssistantRegistration - OpenAI Assistant provider registration
  • _OpenAiCompatibleRegistration - OpenAI-compatible provider registration

💡 This package provides OpenAI integration for Promptbook applications. For the core functionality, see @promptbook/core or install all packages with npm i ptbk

The rest of the documentation is common to the entire Promptbook ecosystem:

📖 The Book Whitepaper

Promptbook lets you create persistent AI agents that work on real goals for your company. The Agents Server is the heart of the project - a place where your AI agents live, remember context, collaborate in teams, and get things done.

Nowadays, the biggest challenge for most business applications isn't the raw capabilities of AI models. Large language models such as GPT-5.2 and Claude-4.5 are incredibly capable.

The main challenge lies in managing the context, providing rules and knowledge, and narrowing the personality.

In Promptbook, you define your agents using simple Books - a human-readable language that is explicit, easy to understand and write, reliable, and highly portable. You then deploy them to the Agents Server, where they run persistently and work toward their goals.

Paul Smith

PERSONA You are a company lawyer.
Your job is to provide legal advice and support to the company and its employees.
GOAL Respond to incoming legal inquiries via email and keep the company website updated with the latest legal policies.
RULE You are knowledgeable, professional, and detail-oriented.
KNOWLEDGE https://company.com/company-policies.pdf
KNOWLEDGE https://company.com/internal-documents/employee-handbook.docx
USE EMAIL
USE BROWSER
TEAM You are part of the legal team of Paul Smith & Associés; you discuss with {Emily White}, the head of the compliance department. {George Brown} is an expert in corporate law and {Sophia Black} is an expert in labor law.

Aspects of great AI agent

We have created a language called Book, which allows you to write AI agents in their native language and create your own AI persona. Book provides a guide to define all the traits and commitments.

You can look at it as "prompting" (or writing a system message), but decorated by commitments.

Commitments are special syntax elements that define contracts between you and the AI agent. They are transformed by Promptbook Engine into low-level parameters like which model to use, its temperature, system message, RAG index, MCP servers, and many other parameters. For some commitments (for example RULE commitment) Promptbook Engine can even create adversary agents and extra checks to enforce the rules.
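To make the idea concrete, here is a hypothetical sketch (not the real Promptbook Engine parser) of how commitment lines like those above could be read into structured fields before being turned into low-level parameters:

```typescript
// Hypothetical sketch: read Book-style commitment lines into structured fields.
// This illustrates the idea only - it is NOT the real Promptbook Engine parser
// (for example, it ignores multi-line commitment bodies and all other commitments).
type Commitments = { persona: string[]; goals: string[]; rules: string[]; knowledge: string[] };

function parseCommitments(book: string): Commitments {
    const result: Commitments = { persona: [], goals: [], rules: [], knowledge: [] };
    for (const line of book.split('\n')) {
        if (line.startsWith('PERSONA ')) result.persona.push(line.slice('PERSONA '.length));
        else if (line.startsWith('GOAL ')) result.goals.push(line.slice('GOAL '.length));
        else if (line.startsWith('RULE ')) result.rules.push(line.slice('RULE '.length));
        else if (line.startsWith('KNOWLEDGE ')) result.knowledge.push(line.slice('KNOWLEDGE '.length));
    }
    return result;
}

const parsed = parseCommitments(
    [
        'PERSONA You are a company lawyer.',
        'GOAL Respond to incoming legal inquiries via email.',
        'KNOWLEDGE https://company.com/company-policies.pdf',
    ].join('\n'),
);
```

The real engine then maps such fields onto model choice, temperature, system message, RAG indexes, and the other parameters mentioned above.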

Persona commitment

Personas define the character of your AI persona, its role, and how it should interact with users. It sets the tone and style of communication.

Paul Smith & Associés

PERSONA You are a company lawyer.
Your job is to provide legal advice and support to the company and its employees.

Goal commitment

Goals define what the agent should actively work toward. Unlike a chatbot that only responds when asked, an agent with goals takes initiative and works on tasks persistently on the Agents Server.

Paul Smith & Associés

PERSONA You are a company lawyer.
Your job is to provide legal advice and support to the company and its employees.
GOAL Respond to incoming legal inquiries via email within 24 hours.
GOAL Keep the company website updated with the latest legal policies and compliance information.

Knowledge commitment

Knowledge Commitment allows you to provide specific information, facts, or context that the AI should be aware of when responding.

This can include domain-specific knowledge, company policies, or any other relevant information.

Promptbook Engine will automatically enforce this knowledge during interactions. When the knowledge is short enough, it is included directly in the prompt. When it is too long, it is stored in a vector database and retrieved via RAG when needed. Either way, you don't need to manage this yourself.
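The inline-versus-RAG decision described above can be sketched as a simple size heuristic. The token threshold and the ~4-characters-per-token estimate below are illustrative assumptions, not Promptbook's actual values:

```typescript
// Hypothetical illustration of the inline-vs-RAG decision for KNOWLEDGE.
// The 2000-token threshold and the ~4 chars/token estimate are assumptions,
// not the actual values used by the Promptbook Engine.
function chooseKnowledgeStrategy(
    knowledgeText: string,
    maxInlineTokens = 2000,
): 'inline-in-prompt' | 'vector-database-rag' {
    const estimatedTokens = Math.ceil(knowledgeText.length / 4); // rough heuristic
    return estimatedTokens <= maxInlineTokens ? 'inline-in-prompt' : 'vector-database-rag';
}
```

A short policy snippet would be inlined in the prompt, while a whole employee handbook would be indexed and retrieved on demand.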

Paul Smith & Associés

PERSONA You are a company lawyer.
Your job is to provide legal advice and support to the company and its employees.
GOAL Respond to incoming legal inquiries via email within 24 hours.
GOAL Keep the company website updated with the latest legal policies and compliance information.
KNOWLEDGE https://company.com/company-policies.pdf
KNOWLEDGE https://company.com/internal-documents/employee-handbook.docx

Rule commitment

Rules will enforce specific behaviors or constraints on the AI's responses. This can include ethical guidelines, communication styles, or any other rules you want the AI to follow.

Depending on rule strictness, Promptbook will either propagate it to the prompt or use other techniques, like adversary agent, to enforce it.

Paul Smith & Associés

PERSONA You are a company lawyer.
Your job is to provide legal advice and support to the company and its employees.
GOAL Respond to incoming legal inquiries via email within 24 hours.
GOAL Keep the company website updated with the latest legal policies and compliance information.
RULE Always ensure compliance with local laws and regulations.
RULE Never provide legal advice outside your area of expertise.
RULE Never provide legal advice about criminal law.
KNOWLEDGE https://company.com/company-policies.pdf
KNOWLEDGE https://company.com/internal-documents/employee-handbook.docx

Use commitments

Use commitments grant the agent real capabilities - tools it can use to interact with the outside world. USE EMAIL lets the agent send emails, USE BROWSER lets it access and read web content, USE SEARCH ENGINE lets it search the web, and many more.

These are what turn a chatbot into a persistent agent that actually does work.

Paul Smith & Associés

PERSONA You are a company lawyer.
Your job is to provide legal advice and support to the company and its employees.
GOAL Respond to incoming legal inquiries via email within 24 hours.
GOAL Keep the company website updated with the latest legal policies and compliance information.
RULE Always ensure compliance with local laws and regulations.
RULE Never provide legal advice outside your area of expertise.
RULE Never provide legal advice about criminal law.
KNOWLEDGE https://company.com/company-policies.pdf
KNOWLEDGE https://company.com/internal-documents/employee-handbook.docx
USE EMAIL
USE BROWSER
USE SEARCH ENGINE

Team commitment

The Team commitment allows you to define the team structure and the fellow advisors the AI can consult. This lets the AI simulate collaboration and consultation with other experts, enhancing the quality of its responses.

Paul Smith & Associés

PERSONA You are a company lawyer.
Your job is to provide legal advice and support to the company and its employees.
GOAL Respond to incoming legal inquiries via email within 24 hours.
GOAL Keep the company website updated with the latest legal policies and compliance information.
RULE Always ensure compliance with local laws and regulations.
RULE Never provide legal advice outside your area of expertise.
RULE Never provide legal advice about criminal law.
KNOWLEDGE https://company.com/company-policies.pdf
KNOWLEDGE https://company.com/internal-documents/employee-handbook.docx
USE EMAIL
USE BROWSER
USE SEARCH ENGINE
TEAM You are part of the legal team of Paul Smith & Associés; you discuss with {Emily White}, the head of the compliance department. {George Brown} is an expert in corporate law and {Sophia Black} is an expert in labor law.

Promptbook Ecosystem

Promptbook is an ecosystem of tools centered around the Agents Server - a production-ready platform for running persistent AI agents.

Agents Server

The Agents Server is the primary way to use Promptbook. It is a web application where your AI agents live and work. You can create agents, give them knowledge and rules using the Book language, organize them into teams, and let them work on goals persistently. The Agents Server provides a UI for managing agents, an API for integrating them into your applications, and can be self-hosted via Docker or deployed on Vercel.

Promptbook Engine

The Promptbook Engine is the open-source core that powers everything. It parses the Book language, applies commitments, manages LLM provider integrations, and executes agents. The Agents Server is built on top of the Engine. If you need to embed agent capabilities directly into your own application, you can use the Engine as a standalone TypeScript/JavaScript library via NPM packages.

💜 The Promptbook Project

The Promptbook project is an ecosystem centered around the Agents Server - a platform for creating, deploying, and running persistent AI agents. The following are the most important pieces of the project:

| Project | About |
| --- | --- |
| ⭐ Agents Server | The primary way to use Promptbook. A production-ready platform where your AI agents live - create, manage, deploy, and interact with persistent agents that work on goals. Available as a hosted service or self-hosted via Docker. |
| Book language | Human-friendly, high-level language that abstracts away the low-level details of AI. It lets you focus on the personality, behavior, knowledge, and rules of AI agents rather than on models, parameters, and prompt engineering. There is also a VSCode plugin supporting the .book file extension. |
| Promptbook Engine | The open-source core that powers the Agents Server. Can also be used as a standalone TypeScript/JavaScript library to embed agent capabilities into your own applications. Released as multiple NPM packages. |

๐ŸŒ Community & Social Media

Join our growing community of developers and users:

| Platform | Description |
| --- | --- |
| 💬 Discord | Join our active developer community for discussions and support |
| 🗣️ GitHub Discussions | Technical discussions, feature requests, and community Q&A |
| 👔 LinkedIn | Professional updates and industry insights |
| 📱 Facebook | General announcements and community engagement |
| 🔗 ptbk.io | Official landing page with project information |

๐Ÿ–ผ๏ธ Product & Brand Channels

Promptbook.studio

📸 Instagram @promptbook.studio - Visual updates, UI showcases, and design inspiration

📚 Documentation

See detailed guides and API reference in the docs or online.

🔒 Security

For information on reporting security vulnerabilities, see our Security Policy.

📦 Deployment & Packages

The fastest way to get started is with the Agents Server:

  • ๐Ÿ‹ Docker image - Self-host the Agents Server with full control over your data
  • โ˜๏ธ Hosted Agents Server - Start creating agents immediately, no setup required

NPM Packages (for developers embedding the Engine)

If you want to embed the Promptbook Engine directly into your application, the library is divided into several packages published from a single monorepo. You can install all of them at once:

npm i ptbk

Or you can install them separately:

โญ Marked packages are worth to try first

🤖 Promptbook Coder

ptbk coder is Promptbook's workflow layer for AI-assisted software changes. Instead of opening one chat and manually copy-pasting tasks, you keep a queue of coding prompts in prompts/*.md, let a coding agent execute the next ready task, and then verify the result before archiving the prompt.

Promptbook Coder is not another standalone coding model. It is an orchestration layer over coding agents such as GitHub Copilot, OpenAI Codex, Claude Code, Opencode, Cline, and Gemini CLI. The difference is that Promptbook Coder adds a repeatable repository workflow on top of them:

  • prompt files with explicit statuses like [ ], [x], and [-]
  • automatic selection of the next runnable task, including priority support
  • optional shared repo context loaded from a file such as AGENTS.md
  • automatic git add, commit, and push after each successful prompt
  • dedicated coding-agent Git identity and optional GPG signing
  • verification and repair flow for work that is done, partial, or broken
  • helper commands for generating boilerplates and finding refactor prompts

In short: tools like Claude Code, Codex, or GitHub Copilot are the engines; Promptbook Coder is the workflow that keeps coding work structured, reviewable, and repeatable across many prompts.

How the workflow works

  • ptbk coder init prepares the project for the coder workflow, seeds project-owned generic templates in prompts/templates/, creates a starter AGENTS.md context file, adds helper npm run coder:* scripts, ensures .gitignore ignores /.tmp, and configures VS Code prompt screenshots in prompts/screenshots/.
  • ptbk coder generate-boilerplates creates prompt files in prompts/.
  • You replace placeholder @@@ sections with real coding tasks.
  • ptbk coder run sends the next ready [ ] prompt to the selected coding agent.
  • Promptbook Coder marks the prompt as done [x], records runner metadata, then stages, commits, and pushes the resulting changes.
  • ptbk coder verify reviews completed prompts, archives finished files to prompts/done/, and appends a repair prompt when more work is needed.

Prompts marked with [-] are not ready yet, prompts containing @@@ are treated as not fully written, and prompts with more ! markers have higher priority.
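The marker conventions above can be sketched as a small classifier. This is a hypothetical illustration of the rules as described, not the actual ptbk coder code; it assumes the status marker and any ! priority markers appear in the prompt file content:

```typescript
// Illustrative sketch of the prompt marker rules - NOT the actual ptbk coder implementation.
// [ ] ready, [x] done, [-] not ready, @@@ means not fully written, more ! = higher priority.
type PromptState = { runnable: boolean; done: boolean; priority: number };

function classifyPrompt(content: string): PromptState {
    const done = content.includes('[x]');
    const notReady = content.includes('[-]');
    const incomplete = content.includes('@@@'); // placeholder sections not yet filled in
    const priority = (content.match(/!/g) ?? []).length; // more ! markers = higher priority
    return {
        runnable: content.includes('[ ]') && !done && !notReady && !incomplete,
        done,
        priority,
    };
}

const state = classifyPrompt('[ ] !! Refactor the login form validation');
```

Under these rules, a `[ ]` prompt with `!!` outranks plain `[ ]` prompts, while `[x]`, `[-]`, and `@@@` prompts are skipped.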

Features

  • Multi-runner execution: openai-codex, github-copilot, cline, claude-code, opencode, gemini
  • Context injection: --context AGENTS.md or inline extra instructions
  • Reasoning control: --thinking-level low|medium|high|xhigh for supported runners
  • Interactive or unattended runs: default wait mode, or --no-wait for batch execution
  • Git safety: clean working tree check by default, optional --ignore-git-changes
  • Opt-in remote pushes: commits stay local unless you explicitly pass --auto-push
  • Prompt triage: --priority to process only more important tasks first
  • Failure logging: failed runs write a neighboring .error.log
  • Line-ending normalization: changed files are normalized back to LF by default

Local usage in this repository

When working on Promptbook itself, the repository usually runs the CLI straight from source:

npx ts-node ./src/cli/test/ptbk.ts coder init

npx ts-node ./src/cli/test/ptbk.ts coder generate-boilerplates --template prompts/templates/common.md

npx ts-node ./src/cli/test/ptbk.ts coder generate-boilerplates --template prompts/templates/agents-server.md

npx ts-node ./src/cli/test/ptbk.ts coder run --agent github-copilot --model gpt-5.4 --thinking-level xhigh --context AGENTS.md

npx ts-node ./src/cli/test/ptbk.ts coder run --agent github-copilot --model gpt-5.4 --thinking-level xhigh --context AGENTS.md --auto-push

npx ts-node ./src/cli/test/ptbk.ts coder run --agent github-copilot --model gpt-5.4 --thinking-level xhigh --context AGENTS.md --ignore-git-changes --no-wait

npx ts-node ./src/cli/test/ptbk.ts coder find-refactor-candidates

npx ts-node ./src/cli/test/ptbk.ts coder find-refactor-candidates --level xhigh

npx ts-node ./src/cli/test/ptbk.ts coder verify

Using ptbk coder in an external project

If you want to use the workflow in another repository, install the package and invoke the ptbk binary. After local installation, npx ptbk ... is the most portable form; plain ptbk ... also works when your environment exposes the local binary on PATH.

npm install ptbk

ptbk coder init

npx ptbk coder generate-boilerplates

npx ptbk coder generate-boilerplates --template prompts/templates/common.md

npx ptbk coder run --agent github-copilot --model gpt-5.4 --thinking-level xhigh --context AGENTS.md --test npm run test

npx ptbk coder run --agent github-copilot --model gpt-5.4 --thinking-level xhigh --context AGENTS.md --auto-push

npx ptbk coder run --agent github-copilot --model gpt-5.4 --thinking-level xhigh --context AGENTS.md --test npm run test --ignore-git-changes --no-wait

npx ptbk coder find-refactor-candidates

npx ptbk coder find-refactor-candidates --level xhigh

npx ptbk coder verify

ptbk coder init also bootstraps a starter AGENTS.md, adds package.json scripts for the four main coder commands, adds the coder temp ignore to .gitignore, and configures .vscode/settings.json so pasted images from prompts/*.md land in prompts/screenshots/.

What each command does

| Command | What it does |
| --- | --- |
| ptbk coder init | Creates prompts/, prompts/done/, the project-generic template files materialized in prompts/templates/ (currently common.md), and a starter AGENTS.md; ensures .env contains CODING_AGENT_GIT_NAME, CODING_AGENT_GIT_EMAIL, and CODING_AGENT_GIT_SIGNING_KEY; adds helper coder scripts to package.json; ensures .gitignore contains /.tmp; and configures .vscode/settings.json to save pasted prompt images into prompts/screenshots/. |
| ptbk coder generate-boilerplates | Creates new prompt markdown files with fresh emoji tags so you can quickly fill in coding tasks; --template accepts either a built-in alias or a markdown file path relative to the project root. |
| ptbk coder run | Picks the next ready prompt, appends optional context, runs it through the selected coding agent, can optionally verify each attempt with a shell test command and feed failing output back for retries, then marks success or failure, commits the result, and pushes only when --auto-push is enabled. |
| ptbk coder find-refactor-candidates | Scans the repository for oversized or overpacked files and writes prompt files for likely refactors; --level (xlow / low / medium / high / xhigh / extreme) ranges from a very benevolent scan to a very aggressive sweep. |
| ptbk coder verify | Walks through completed prompts, archives truly finished work, and adds follow-up repair prompts for unfinished results. |

Most useful ptbk coder run flags

| Flag | Purpose |
| --- | --- |
| --agent <name> | Selects the coding backend. |
| --model <model> | Chooses the runner model; required for openai-codex and gemini, optional for github-copilot. |
| --context <text-or-file> | Appends extra instructions inline or from a file like AGENTS.md. |
| --test <command> | Runs a verification command after each prompt attempt and feeds failing output back for retries. |
| --thinking-level <level> | Sets reasoning effort for supported runners. |
| --no-wait | Skips interactive pauses between prompts for unattended execution. |
| --ignore-git-changes | Disables the clean-working-tree guard. |
| --priority <n> | Runs only prompts at or above the given priority. |
| --dry-run | Prints which prompts are ready instead of executing them. |
| --allow-credits | Lets OpenAI Codex spend credits when required. |
| --auto-push | Pushes each successful coding-agent commit to the configured remote. |
| --auto-migrate | Runs testing-server database migrations after each successful prompt. |

Typical usage pattern

  • Initialize once with ptbk coder init.
  • Customize prompts/templates/*.md if needed, then create or write prompt files in prompts/.
  • Customize the starter AGENTS.md with repository-specific instructions, then pass --context AGENTS.md.
  • Run one prompt at a time interactively, or use --no-wait for unattended batches.
  • Finish with ptbk coder verify so resolved prompts are archived and broken ones get explicit repair follow-ups.

📚 Dictionary

The following glossary is used to clarify certain concepts:

General LLM / AI terms

  • Prompt drift is a phenomenon where the AI model starts to generate outputs that are not aligned with the original prompt. This can happen due to the model's training data, the prompt's wording, or the model's architecture.
  • Pipeline, workflow scenario or chain is a sequence of tasks that are executed in a specific order. In the context of AI, a pipeline can refer to a sequence of AI models that are used to process data.
  • Fine-tuning is a process where a pre-trained AI model is further trained on a specific dataset to improve its performance on a specific task.
  • Zero-shot learning is a machine learning paradigm where a model is trained to perform a task without any labeled examples. Instead, the model is provided with a description of the task and is expected to generate the correct output.
  • Few-shot learning is a machine learning paradigm where a model is trained to perform a task with only a few labeled examples. This is in contrast to traditional machine learning, where models are trained on large datasets.
  • Meta-learning is a machine learning paradigm where a model is trained on a variety of tasks and is able to learn new tasks with minimal additional training. This is achieved by learning a set of meta-parameters that can be quickly adapted to new tasks.
  • Retrieval-augmented generation is a machine learning paradigm where a model generates text by retrieving relevant information from a large database of text. This approach combines the benefits of generative models and retrieval models.
  • Longtail refers to non-common or rare events, items, or entities that are not well-represented in the training data of machine learning models. Longtail items are often challenging for models to predict accurately.

Note: This section is not a complete dictionary; it is rather a list of general AI / LLM terms that relate to Promptbook.

๐Ÿ’ฏ Core concepts

Advanced concepts

  • Data & Knowledge Management
  • Pipeline Control
  • Language & Output Control
  • Advanced Generation

๐Ÿ” View more concepts

Agents Server

The Agents Server is the primary way to use Promptbook. It is a production-ready platform where you create, deploy, and manage persistent AI agents that work toward goals. Agents remember context across conversations, collaborate in teams, and follow the rules and knowledge you define in the Book language.

  • Hosted at gallery.ptbk.io - start creating agents immediately
  • Self-hosted via Docker - full control over your data and infrastructure
  • API for integrating agents into your own applications

๐Ÿš‚ Promptbook Engine

The Engine is the open-source core that powers the Agents Server. If you need to embed agent capabilities directly into your TypeScript/JavaScript application, you can use it as a standalone library.

Schema of Promptbook Engine

โž•โž– When to use Promptbook?

โž• When to use

  • When you want to deploy persistent AI agents that work on goals for your company
  • When you need agents with specific personalities, knowledge, and rules tailored to your business
  • When you want agents that collaborate in teams and consult each other
  • When you need to integrate AI agents into your existing applications via API
  • When you want to self-host your AI agents with full control over data and infrastructure
  • When you are writing an app that generates complex things via LLM - like websites, articles, presentations, code, stories, songs, and more
  • When you want to version your agent definitions and test multiple versions
  • When you want to log agent execution and backtrace issues

See more

โž– When not to use

  • When a single simple prompt already works fine for your job
  • When OpenAI Assistant (GPTs) is enough for you
  • When you need streaming (this may be implemented in the future, see discussion)
  • When you need to use something other than JavaScript or TypeScript (other languages are on the way, see the discussion)
  • When your main focus is on something other than text - like images, audio, video, spreadsheets (other media types may be added in the future, see discussion)
  • When you need to use recursion (see the discussion)

See more

๐Ÿœ Known issues

๐Ÿงผ Intentionally not implemented features

โ” FAQ

If you have a question, start a discussion, open an issue, or write me an email.

๐Ÿ“… Changelog

See CHANGELOG.md

๐Ÿ“œ License

This project is licensed under BUSL 1.1.

๐Ÿค Contributing

We welcome contributions! See CONTRIBUTING.md for guidelines.

You can also โญ star the project, follow us on GitHub or various other social networks.We are open to pull requests, feedback, and suggestions.

๐Ÿ†˜ Support & Community

Need help with Book language? We're here for you!

We welcome contributions and feedback to make Book language better for everyone!


Package last updated on 02 May 2026
