@promptbook/openai
Turn your company's scattered knowledge into AI-ready Books
⚠️ Warning: This is a pre-release version of the library. It is not yet ready for production use. Please use the latest stable release instead.
@promptbook/openai is one part of the Promptbook ecosystem. To install this package, run:

```shell
# Install the entire Promptbook ecosystem
npm i ptbk

# Install just this package to save space
npm install @promptbook/openai
```
OpenAI integration for Promptbook, providing execution tools for OpenAI GPT models, OpenAI Assistants, and OpenAI-compatible APIs within the Promptbook ecosystem.
This package bridges the gap between Promptbook's unified pipeline execution system and OpenAI's powerful language models. It provides a standardized interface for accessing OpenAI's various services while maintaining compatibility with Promptbook's execution framework, enabling seamless integration with different OpenAI offerings.
The package offers three main integration paths: standard OpenAI GPT models, OpenAI Assistants, and OpenAI-compatible APIs.
```typescript
import { createPipelineExecutor } from '@promptbook/core';
import {
    createPipelineCollectionFromDirectory,
    $provideExecutablesForNode,
    $provideFilesystemForNode,
    $provideScrapersForNode,
    $provideScriptingForNode,
} from '@promptbook/node';
import { OpenAiExecutionTools } from '@promptbook/openai';

// Prepare the tools that will be used to compile and run your books
// Note: Here you can allow or deny some LLM providers, such as not providing DeepSeek for privacy reasons
const fs = $provideFilesystemForNode();
const llm = new OpenAiExecutionTools(
    // <- TODO: [🧱] Implement in a functional (not new Class) way
    {
        isVerbose: true,
        apiKey: process.env.OPENAI_API_KEY,
    },
);
const executables = await $provideExecutablesForNode();
const tools = {
    llm,
    fs,
    scrapers: await $provideScrapersForNode({ fs, llm, executables }),
    script: await $provideScriptingForNode({}),
};

// ▶ Create whole pipeline collection
const collection = await createPipelineCollectionFromDirectory('./books', tools);

// ▶ Get single Pipeline
const pipeline = await collection.getPipelineByUrl(`https://promptbook.studio/my-collection/write-article.book`);

// ▶ Create executor - the function that will execute the Pipeline
const pipelineExecutor = createPipelineExecutor({ pipeline, tools });

// ▶ Prepare input parameters
const inputParameters = { word: 'cat' };

// ▶ Execute the Pipeline
const result = await pipelineExecutor(inputParameters).asPromise({ isCrashedOnError: true });

// ▶ Handle the result
const { isSuccessful, errors, outputParameters, executionReport } = result;
console.info(outputParameters);
```
TODO: Write a guide on how to use OpenAI's Assistants with Promptbook.
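Until that guide exists, here is a minimal configuration sketch. The class `OpenAiAssistantExecutionTools` and its options type `OpenAiAssistantExecutionToolsOptions` are real exports of this package, but the `assistantId` option name below is an assumption; verify the exact option shape against the exported type:

```typescript
import { OpenAiAssistantExecutionTools } from '@promptbook/openai';

// ⚠ Hypothetical sketch: the option names below are assumptions;
// verify them against the exported OpenAiAssistantExecutionToolsOptions type.
const llm = new OpenAiAssistantExecutionTools({
    apiKey: process.env.OPENAI_API_KEY,
    assistantId: 'asst_xxxxxxxx', // <- hypothetical: id of an Assistant created in the OpenAI dashboard
});
```

Once constructed, the tools object should slot into the `tools.llm` position exactly like `OpenAiExecutionTools` in the example above.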
Run books without any settings, boilerplate or struggle in Node.js:
```typescript
import { wizard } from '@promptbook/wizard';

const {
    outputParameters: { joke },
} = await wizard.execute(`https://github.com/webgptorg/book/blob/main/books/templates/generic.book`, {
    topic: 'Prague',
});

console.info(joke);
```
You can just use the `$provideExecutionToolsForNode` function to create all required tools automatically from environment variables such as `ANTHROPIC_CLAUDE_API_KEY` and `OPENAI_API_KEY`.
```typescript
import { createPipelineExecutor, createPipelineCollectionFromDirectory } from '@promptbook/core';
import { $provideExecutionToolsForNode } from '@promptbook/node';

// Prepare the tools that will be used to compile and run your books
// Note: Here you can allow or deny some LLM providers, such as not providing DeepSeek for privacy reasons
const tools = await $provideExecutionToolsForNode();

// ▶ Create whole pipeline collection
const collection = await createPipelineCollectionFromDirectory('./books', tools);

// ▶ Get single Pipeline
const pipeline = await collection.getPipelineByUrl(`https://promptbook.studio/my-collection/write-article.book`);

// ▶ Create executor - the function that will execute the Pipeline
const pipelineExecutor = createPipelineExecutor({ pipeline, tools });

// ▶ Prepare input parameters
const inputParameters = { word: 'dog' };

// ▶ Execute the Pipeline
const result = await pipelineExecutor(inputParameters).asPromise({ isCrashedOnError: true });

// ▶ Handle the result
const { isSuccessful, errors, outputParameters, executionReport } = result;
console.info(outputParameters);
```
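The environment variables that `$provideExecutionToolsForNode` reads can come from the process environment, or from a `.env` file loaded with a tool such as dotenv. A minimal sketch (the key values are placeholders):

```sh
# .env — provider keys picked up from the environment
OPENAI_API_KEY=sk-...
ANTHROPIC_CLAUDE_API_KEY=sk-ant-...
```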
You can use multiple LLM providers in one Promptbook execution. The best model will be chosen automatically according to the prompt and the model's capabilities.
```typescript
import { createPipelineExecutor } from '@promptbook/core';
import {
    createPipelineCollectionFromDirectory,
    $provideExecutablesForNode,
    $provideFilesystemForNode,
    $provideScrapersForNode,
    $provideScriptingForNode,
} from '@promptbook/node';
import { OpenAiExecutionTools } from '@promptbook/openai';
import { AnthropicClaudeExecutionTools } from '@promptbook/anthropic-claude';
import { AzureOpenAiExecutionTools } from '@promptbook/azure-openai';

// ▶ Prepare multiple tools
const fs = $provideFilesystemForNode();
const llm = [
    // Note: You can use multiple LLM providers in one Promptbook execution.
    // The best model will be chosen automatically according to the prompt and the model's capabilities.
    new OpenAiExecutionTools(
        // <- TODO: [🧱] Implement in a functional (not new Class) way
        {
            apiKey: process.env.OPENAI_API_KEY,
        },
    ),
    new AnthropicClaudeExecutionTools(
        // <- TODO: [🧱] Implement in a functional (not new Class) way
        {
            apiKey: process.env.ANTHROPIC_CLAUDE_API_KEY,
        },
    ),
    new AzureOpenAiExecutionTools(
        // <- TODO: [🧱] Implement in a functional (not new Class) way
        {
            resourceName: process.env.AZUREOPENAI_RESOURCE_NAME,
            deploymentName: process.env.AZUREOPENAI_DEPLOYMENT_NAME,
            apiKey: process.env.AZUREOPENAI_API_KEY,
        },
    ),
];
const executables = await $provideExecutablesForNode();
const tools = {
    llm,
    fs,
    scrapers: await $provideScrapersForNode({ fs, llm, executables }),
    script: await $provideScriptingForNode({}),
};

// ▶ Create whole pipeline collection
const collection = await createPipelineCollectionFromDirectory('./books', tools);

// ▶ Get single Pipeline
const pipeline = await collection.getPipelineByUrl(`https://promptbook.studio/my-collection/write-article.book`);

// ▶ Create executor - the function that will execute the Pipeline
const pipelineExecutor = createPipelineExecutor({ pipeline, tools });

// ▶ Prepare input parameters
const inputParameters = { word: 'dog' };

// ▶ Execute the Pipeline
const result = await pipelineExecutor(inputParameters).asPromise({ isCrashedOnError: true });

// ▶ Handle the result
const { isSuccessful, errors, outputParameters, executionReport } = result;
console.info(outputParameters);
```
See the other model integrations in the package list below.
You can use Promptbook books as if they were OpenAI models by using the OpenAI-compatible endpoint. This allows you to use the standard OpenAI SDK with Promptbook books.
First, start the Promptbook server:
```typescript
import { createPipelineCollectionFromDirectory } from '@promptbook/core';
import { startRemoteServer } from '@promptbook/remote-server';

// Start the server
await startRemoteServer({
    port: 3000,
    collection: await createPipelineCollectionFromDirectory('./books'),
    isAnonymousModeAllowed: true,
    isApplicationModeAllowed: true,
});
```
Then use the standard OpenAI SDK with the server URL:
```typescript
import OpenAI from 'openai';

// Create OpenAI client pointing to your Promptbook server
const openai = new OpenAI({
    baseURL: 'http://localhost:3000', // Your Promptbook server URL
    apiKey: 'not-needed', // API key is not needed for Promptbook
});

// Use any Promptbook book as a model
const response = await openai.chat.completions.create({
    model: 'https://promptbook.studio/my-collection/write-article.book', // Book URL as model name
    messages: [
        {
            role: 'user',
            content: 'Write a short story about a cat',
        },
    ],
});

console.log(response.choices[0].message.content);
```
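Nothing Promptbook-specific travels over the wire here: the SDK posts a standard chat-completions JSON body (to `baseURL` plus `/chat/completions`), with the book URL simply occupying the `model` field. A dependency-free sketch of that body:

```javascript
// Build the same chat-completions request body the SDK example sends.
// The book URL goes where a model name like "gpt-4o" would normally go.
const body = {
    model: 'https://promptbook.studio/my-collection/write-article.book',
    messages: [{ role: 'user', content: 'Write a short story about a cat' }],
};

const payload = JSON.stringify(body);
console.log(JSON.parse(payload).model.endsWith('.book')); // true
```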
This allows you to use any Promptbook book from any tool or framework that already speaks the OpenAI API.
- `BOOK_LANGUAGE_VERSION` - Current book language version
- `PROMPTBOOK_ENGINE_VERSION` - Current engine version
- `createOpenAiAssistantExecutionTools` - Create OpenAI Assistant execution tools
- `createOpenAiCompatibleExecutionTools` - Create OpenAI-compatible execution tools
- `createOpenAiExecutionTools` - Create standard OpenAI execution tools
- `OPENAI_MODELS` - Available OpenAI models configuration
- `OpenAiAssistantExecutionTools` - OpenAI Assistant execution tools class
- `OpenAiCompatibleExecutionTools` - OpenAI-compatible execution tools class
- `OpenAiExecutionTools` - Standard OpenAI execution tools class
- `OpenAiAssistantExecutionToolsOptions` - Configuration options for OpenAI Assistant tools (type)
- `OpenAiCompatibleExecutionToolsOptions` - Configuration options for OpenAI-compatible tools (type)
- `OpenAiCompatibleExecutionToolsNonProxiedOptions` - Non-proxied configuration options (type)
- `OpenAiCompatibleExecutionToolsProxiedOptions` - Proxied configuration options (type)
- `OpenAiExecutionToolsOptions` - Configuration options for standard OpenAI tools (type)
- `_OpenAiRegistration` - Standard OpenAI provider registration
- `_OpenAiAssistantRegistration` - OpenAI Assistant provider registration
- `_OpenAiCompatibleRegistration` - OpenAI-compatible provider registration

💡 This package provides OpenAI integration for Promptbook applications. For the core functionality, see @promptbook/core or install all packages with:

```shell
npm i ptbk
```
The rest of the documentation is common to the entire Promptbook ecosystem:
Nowadays, the biggest challenge for most business applications isn't the raw capabilities of AI models. Large language models such as GPT-5.2 and Claude-4.5 are incredibly capable.
The main challenge lies in managing the context, providing rules and knowledge, and narrowing the personality.
In Promptbook, you can define your context using simple Books that are very explicit, easy to understand and write, reliable, and highly portable.
We have created a language called Book, which allows you to write AI agents in their native language and create your own AI persona. Book provides a guide to define all the traits and commitments.
You can look at it as "prompting" (or writing a system message), decorated with commitments.
Commitments are special syntax elements that define contracts between you and the AI agent. They are transformed by the Promptbook Engine into low-level parameters such as which model to use, its temperature, the system message, the RAG index, MCP servers, and many other settings. For some commitments (for example, the RULE commitment), the Promptbook Engine can even create adversary agents and extra checks to enforce the rules.
**Persona commitment**

Personas define the character of your AI persona, its role, and how it should interact with users. It sets the tone and style of communication.
**Knowledge commitment**

The Knowledge commitment allows you to provide specific information, facts, or context that the AI should be aware of when responding. This can include domain-specific knowledge, company policies, or any other relevant information.
The Promptbook Engine automatically enforces this knowledge during interactions. When the knowledge is short enough, it is included directly in the prompt; when it is too long, it is stored in a vector database and retrieved via RAG when needed. You don't need to manage any of this yourself.
**Rule commitment**

Rules enforce specific behaviors or constraints on the AI's responses. This can include ethical guidelines, communication styles, or any other rules you want the AI to follow.
Depending on rule strictness, Promptbook will either propagate the rule into the prompt or use other techniques, such as an adversary agent, to enforce it.
**Team commitment**

The Team commitment allows you to define the team structure and the fellow advisory members the AI can consult. This lets the AI simulate collaboration and consultation with other experts, enhancing the quality of its responses.
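Taken together, the commitments above can be combined in a single Book. The following is an illustrative sketch only, with approximate, unverified syntax; take the exact Book syntax from the official templates (for example, @promptbook/templates) rather than from this fragment:

```
Paul Smith

PERSONA A senior partner at a law firm, formal but approachable
KNOWLEDGE ./documents/company-policies.pdf
RULE Never give definitive legal advice; always recommend consulting a qualified lawyer
TEAM May consult a tax specialist and an HR advisor
```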
The Promptbook project is an ecosystem of multiple projects and tools. The most important pieces are listed below:
| Project | About |
|---|---|
| Agents Server | The place where your AI agents live. It allows you to create, manage, deploy, and interact with AI agents written in the Book language. |
| Book language | Human-friendly, high-level language that abstracts away the low-level details of AI. It lets you focus on the personality, behavior, knowledge, and rules of AI agents rather than on models, parameters, and prompt engineering. There is also a plugin for VSCode to support the .book file extension. |
| Promptbook Engine | The engine that runs AI agents written in the Book language. It is released as multiple NPM packages and as a Docker image; the Promptbook Agent Server is based on the Promptbook Engine. |
Join our growing community of developers and users:
| Platform | Description |
|---|---|
| Discord | Join our active developer community for discussions and support |
| GitHub Discussions | Technical discussions, feature requests, and community Q&A |
| LinkedIn | Professional updates and industry insights |
| Facebook | General announcements and community engagement |
| ptbk.io | Official landing page with project information |
| Instagram @promptbook.studio | Visual updates, UI showcases, and design inspiration |
See detailed guides and API reference in the docs or online.
For information on reporting security vulnerabilities, see our Security Policy.
This library is divided into several packages, all published from a single monorepo. You can install all of them at once:

```shell
npm i ptbk
```
Or you can install them separately:
⭐ Marked packages are worth trying first

⭐ ptbk - Bundle of all packages, when you want to install everything and don't care about the size
promptbook - Same as ptbk
⭐ @promptbook/wizard - Wizard to just run the books in Node.js without any struggle
@promptbook/core - Core of the library, it contains the main logic for promptbooks
@promptbook/node - Core of the library for Node.js environment
@promptbook/browser - Core of the library for browser environment
⭐ @promptbook/utils - Utility functions used in the library but also useful for individual use in preprocessing and postprocessing LLM inputs and outputs
@promptbook/markdown-utils - Utility functions used for processing markdown
(Not finished) @promptbook/wizard - Wizard for creating+running promptbooks in single line
@promptbook/javascript - Execution tools for javascript inside promptbooks
@promptbook/openai - Execution tools for OpenAI API, wrapper around OpenAI SDK
@promptbook/anthropic-claude - Execution tools for Anthropic Claude API, wrapper around Anthropic Claude SDK
@promptbook/vercel - Adapter for Vercel functionalities
@promptbook/google - Integration with Google's Gemini API
@promptbook/deepseek - Integration with DeepSeek API
@promptbook/ollama - Integration with Ollama API
@promptbook/azure-openai - Execution tools for Azure OpenAI API
@promptbook/fake-llm - Mocked execution tools for testing the library and saving the tokens
@promptbook/remote-client - Remote client for remote execution of promptbooks
@promptbook/remote-server - Remote server for remote execution of promptbooks
@promptbook/pdf - Read knowledge from .pdf documents
@promptbook/documents - Read knowledge from documents like .docx, .odt,… (integration of Markitdown by Microsoft)
@promptbook/legacy-documents - Read knowledge from legacy documents like .doc, .rtf,โฆ
@promptbook/website-crawler - Crawl knowledge from the web
@promptbook/editable - Editable book as native javascript object with imperative object API
@promptbook/templates - Useful templates and examples of books which can be used as a starting point
@promptbook/types - Just typescript types used in the library
@promptbook/color - Color manipulation library
⭐ @promptbook/cli - Command line interface utilities for promptbooks
🐳 Docker image - Promptbook server
The following glossary is used to clarify certain concepts:
Note: This section is not a complete dictionary, but rather a list of general AI / LLM terms that have a connection with Promptbook.
Terms are grouped into four areas: Data & Knowledge Management, Pipeline Control, Language & Output Control, and Advanced Generation.
If you have a question, start a discussion, open an issue, or write me an email.
See CHANGELOG.md
This project is licensed under BUSL 1.1.
We welcome contributions! See CONTRIBUTING.md for guidelines.
You can also ⭐ star the project or follow us on GitHub and various other social networks. We are open to pull requests, feedback, and suggestions.
Need help with Book language? We're here for you!
We welcome contributions and feedback to make Book language better for everyone!