@promptbook/openai

Promptbook: Turn your company's scattered knowledge into AI-ready books

Version: 0.105.0-5 (npm)

✨ Promptbook: AI Agents

Turn your company's scattered knowledge into AI-ready Books


🌟 New Features

  • Gemini 3 Support

⚠ Warning: This is a pre-release version of the library. It is not yet ready for production use; please use the latest stable release instead.

📦 Package @promptbook/openai

To install this package, run:

# Install entire promptbook ecosystem
npm i ptbk

# Install just this package to save space
npm install @promptbook/openai

OpenAI integration for Promptbook, providing execution tools for OpenAI GPT models, OpenAI Assistants, and OpenAI-compatible APIs within the Promptbook ecosystem.

🎯 Purpose and Motivation

This package bridges Promptbook's unified pipeline execution system and OpenAI's language models. It provides a standardized interface to OpenAI's various services while staying compatible with Promptbook's execution framework, enabling seamless integration with the different OpenAI offerings.

🔧 High-Level Functionality

The package offers three main integration paths, plus supporting features:

  • Standard OpenAI API: Direct integration with OpenAI's chat completions and embeddings
  • OpenAI Assistants: Integration with OpenAI's Assistant API (GPTs)
  • OpenAI-Compatible APIs: Support for third-party APIs that follow OpenAI's interface
  • Model Management: Automatic model selection and configuration
  • Usage Tracking: Built-in monitoring for tokens and costs

✨ Key Features

  • 🤖 Multiple OpenAI Integrations - Support for standard API, Assistants, and compatible services
  • 🔄 Seamless Provider Switching - Easy integration with other LLM providers
  • 🎯 Model Selection - Access to all available OpenAI models with automatic selection
  • 🔧 Configuration Flexibility - Support for custom endpoints, API keys, and parameters
  • 📊 Usage Tracking - Built-in token usage and cost monitoring
  • 🛡️ Error Handling - Comprehensive error handling and retry logic
  • 🚀 Performance Optimization - Caching and request optimization
  • 🔌 OpenAI-Compatible Server - Use Promptbook books as OpenAI-compatible models

🧡 Usage

import { createPipelineExecutor } from '@promptbook/core';
import {
    createPipelineCollectionFromDirectory,
    $provideExecutablesForNode,
    $provideFilesystemForNode,
    $provideScrapersForNode,
    $provideScriptingForNode,
} from '@promptbook/node';
import { OpenAiExecutionTools } from '@promptbook/openai';

// 🛠 Prepare the tools that will be used to compile and run your books
// Note: Here you can allow or deny some LLM providers, such as not providing DeepSeek for privacy reasons
const fs = $provideFilesystemForNode();
const llm = new OpenAiExecutionTools(
    //            <- TODO: [🧱] Implement in a functional (not new Class) way
    {
        isVerbose: true,
        apiKey: process.env.OPENAI_API_KEY,
    },
);
const executables = await $provideExecutablesForNode();
const tools = {
    llm,
    fs,
    scrapers: await $provideScrapersForNode({ fs, llm, executables }),
    script: await $provideScriptingForNode({}),
};

// ▶ Create whole pipeline collection
const collection = await createPipelineCollectionFromDirectory('./books', tools);

// ▶ Get single Pipeline
const pipeline = await collection.getPipelineByUrl(`https://promptbook.studio/my-collection/write-article.book`);

// ▶ Create executor - the function that will execute the Pipeline
const pipelineExecutor = createPipelineExecutor({ pipeline, tools });

// ▶ Prepare input parameters
const inputParameters = { word: 'cat' };

// 🚀▶ Execute the Pipeline
const result = await pipelineExecutor(inputParameters).asPromise({ isCrashedOnError: true });

// ▶ Handle the result
const { isSuccessful, errors, outputParameters, executionReport } = result;
console.info(outputParameters);

🤺 Usage with OpenAI's Assistants (GPTs)

TODO: Write a guide on how to use OpenAI's Assistants with Promptbook

🧙‍♂️ Wizard

Run books without any settings, boilerplate, or struggle in Node.js:

import { wizard } from '@promptbook/wizard';

const {
    outputParameters: { joke },
} = await wizard.execute(`https://github.com/webgptorg/book/blob/main/books/templates/generic.book`, {
    topic: 'Prague',
});

console.info(joke);

🧙‍♂️ Connect to LLM providers automatically

You can simply use the $provideExecutionToolsForNode function to create all required tools automatically from environment variables such as ANTHROPIC_CLAUDE_API_KEY and OPENAI_API_KEY.
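Under the hood, this kind of helper just inspects which API keys are present in the environment. A minimal, purely illustrative sketch (not Promptbook's actual implementation; the returned object shape is invented):

```javascript
// Illustrative sketch of how a $provideExecutionToolsForNode-style helper
// can pick LLM providers from environment variables such as OPENAI_API_KEY
// and ANTHROPIC_CLAUDE_API_KEY. (Not the real Promptbook code.)
function detectProvidersFromEnv(env) {
    const providers = [];
    if (env.OPENAI_API_KEY) {
        providers.push({ name: 'openai', apiKey: env.OPENAI_API_KEY });
    }
    if (env.ANTHROPIC_CLAUDE_API_KEY) {
        providers.push({ name: 'anthropic-claude', apiKey: env.ANTHROPIC_CLAUDE_API_KEY });
    }
    return providers;
}
```

In the real helper you would pass `process.env`; providers whose keys are missing are simply skipped.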

import { createPipelineExecutor } from '@promptbook/core';
import { createPipelineCollectionFromDirectory, $provideExecutionToolsForNode } from '@promptbook/node';

// 🛠 Prepare the tools that will be used to compile and run your books
// Note: Here you can allow or deny some LLM providers, such as not providing DeepSeek for privacy reasons
const tools = await $provideExecutionToolsForNode();

// ▶ Create whole pipeline collection
const collection = await createPipelineCollectionFromDirectory('./books', tools);

// ▶ Get single Pipeline
const pipeline = await collection.getPipelineByUrl(`https://promptbook.studio/my-collection/write-article.book`);

// ▶ Create executor - the function that will execute the Pipeline
const pipelineExecutor = createPipelineExecutor({ pipeline, tools });

// ▶ Prepare input parameters
const inputParameters = { word: 'dog' };

// 🚀▶ Execute the Pipeline
const result = await pipelineExecutor(inputParameters).asPromise({ isCrashedOnError: true });

// ▶ Handle the result
const { isSuccessful, errors, outputParameters, executionReport } = result;
console.info(outputParameters);

💕 Usage of multiple LLM providers

You can use multiple LLM providers in one Promptbook execution. The best model will be chosen automatically according to the prompt and the model's capabilities.
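To illustrate the idea of capability-based selection, here is a toy sketch (not Promptbook's actual algorithm; the model names and capability metadata are invented):

```javascript
// Illustrative sketch of capability-based model selection: each model
// advertises capabilities, the prompt declares requirements, and the
// first model that satisfies every requirement is chosen.
function pickModel(models, requirements) {
    return (
        models.find((model) =>
            requirements.every((requirement) => model.capabilities.includes(requirement)),
        ) || null
    );
}

// Invented example metadata:
const models = [
    { name: 'small-chat-model', capabilities: ['chat'] },
    { name: 'large-vision-model', capabilities: ['chat', 'vision'] },
];
```

A chat-only prompt would land on the cheaper model, while a prompt that also needs vision falls through to the more capable one.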

import { createPipelineExecutor } from '@promptbook/core';
import {
    createPipelineCollectionFromDirectory,
    $provideExecutablesForNode,
    $provideFilesystemForNode,
    $provideScrapersForNode,
    $provideScriptingForNode,
} from '@promptbook/node';
import { OpenAiExecutionTools } from '@promptbook/openai';
import { AnthropicClaudeExecutionTools } from '@promptbook/anthropic-claude';
import { AzureOpenAiExecutionTools } from '@promptbook/azure-openai';

// ▶ Prepare multiple tools
const fs = $provideFilesystemForNode();
const llm = [
    // Note: You can use multiple LLM providers in one Promptbook execution.
    //       The best model will be chosen automatically according to the prompt and the model's capabilities.
    new OpenAiExecutionTools(
        //            <- TODO: [🧱] Implement in a functional (not new Class) way
        {
            apiKey: process.env.OPENAI_API_KEY,
        },
    ),
    new AnthropicClaudeExecutionTools(
        //            <- TODO: [🧱] Implement in a functional (not new Class) way
        {
            apiKey: process.env.ANTHROPIC_CLAUDE_API_KEY,
        },
    ),
    new AzureOpenAiExecutionTools(
        //            <- TODO: [🧱] Implement in a functional (not new Class) way
        {
            resourceName: process.env.AZUREOPENAI_RESOURCE_NAME,
            deploymentName: process.env.AZUREOPENAI_DEPLOYMENT_NAME,
            apiKey: process.env.AZUREOPENAI_API_KEY,
        },
    ),
];
const executables = await $provideExecutablesForNode();
const tools = {
    llm,
    fs,
    scrapers: await $provideScrapersForNode({ fs, llm, executables }),
    script: await $provideScriptingForNode({}),
};

// ▶ Create whole pipeline collection
const collection = await createPipelineCollectionFromDirectory('./books', tools);

// ▶ Get single Pipeline
const pipeline = await collection.getPipelineByUrl(`https://promptbook.studio/my-collection/write-article.book`);

// ▶ Create executor - the function that will execute the Pipeline
const pipelineExecutor = createPipelineExecutor({ pipeline, tools });

// ▶ Prepare input parameters
const inputParameters = { word: 'dog' };

// 🚀▶ Execute the Pipeline
const result = await pipelineExecutor(inputParameters).asPromise({ isCrashedOnError: true });

// ▶ Handle the result
const { isSuccessful, errors, outputParameters, executionReport } = result;
console.info(outputParameters);

💙 Integration with other models

See the other model integrations:

🤖 Using Promptbook as an OpenAI-compatible model

You can use Promptbook books as if they were OpenAI models by using the OpenAI-compatible endpoint. This allows you to use the standard OpenAI SDK with Promptbook books.

First, start the Promptbook server:

import { createPipelineCollectionFromDirectory } from '@promptbook/node';
import { startRemoteServer } from '@promptbook/remote-server';

// Start the server
await startRemoteServer({
    port: 3000,
    collection: await createPipelineCollectionFromDirectory('./books'),
    isAnonymousModeAllowed: true,
    isApplicationModeAllowed: true,
});

Then use the standard OpenAI SDK with the server URL:

import OpenAI from 'openai';

// Create OpenAI client pointing to your Promptbook server
const openai = new OpenAI({
    baseURL: 'http://localhost:3000', // Your Promptbook server URL
    apiKey: 'not-needed', // API key is not needed for Promptbook
});

// Use any Promptbook book as a model
const response = await openai.chat.completions.create({
    model: 'https://promptbook.studio/my-collection/write-article.book', // Book URL as model name
    messages: [
        {
            role: 'user',
            content: 'Write a short story about a cat',
        },
    ],
});

console.log(response.choices[0].message.content);

This allows you to:

  • Use Promptbook books with any OpenAI-compatible client
  • Integrate Promptbook into existing OpenAI-based applications
  • Use Promptbook books as models in other AI frameworks

📦 Exported Entities

Version Information

  • BOOK_LANGUAGE_VERSION - Current book language version
  • PROMPTBOOK_ENGINE_VERSION - Current engine version

Execution Tools Creation Functions

  • createOpenAiAssistantExecutionTools - Create OpenAI Assistant execution tools
  • createOpenAiCompatibleExecutionTools - Create OpenAI-compatible execution tools
  • createOpenAiExecutionTools - Create standard OpenAI execution tools

Model Information

  • OPENAI_MODELS - Available OpenAI models configuration

Execution Tools Classes

  • OpenAiAssistantExecutionTools - OpenAI Assistant execution tools class
  • OpenAiCompatibleExecutionTools - OpenAI-compatible execution tools class
  • OpenAiExecutionTools - Standard OpenAI execution tools class

Configuration Types

  • OpenAiAssistantExecutionToolsOptions - Configuration options for OpenAI Assistant tools (type)
  • OpenAiCompatibleExecutionToolsOptions - Configuration options for OpenAI-compatible tools (type)
  • OpenAiCompatibleExecutionToolsNonProxiedOptions - Non-proxied configuration options (type)
  • OpenAiCompatibleExecutionToolsProxiedOptions - Proxied configuration options (type)
  • OpenAiExecutionToolsOptions - Configuration options for standard OpenAI tools (type)

Provider Registrations

  • _OpenAiRegistration - Standard OpenAI provider registration
  • _OpenAiAssistantRegistration - OpenAI Assistant provider registration
  • _OpenAiCompatibleRegistration - OpenAI-compatible provider registration

💡 This package provides OpenAI integration for Promptbook applications. For the core functionality, see @promptbook/core, or install all packages at once with npm i ptbk.

The rest of the documentation is common to the entire Promptbook ecosystem:

📖 The Book Whitepaper

Nowadays, the biggest challenge for most business applications isn't the raw capabilities of AI models. Large language models such as GPT-5.2 and Claude-4.5 are incredibly capable.

The main challenge lies in managing the context, providing rules and knowledge, and narrowing the personality.

In Promptbook, you can define your context using simple Books that are very explicit, easy to understand and write, reliable, and highly portable.

Paul Smith

PERSONA You are a company lawyer.
Your job is to provide legal advice and support to the company and its employees.
RULE You are knowledgeable, professional, and detail-oriented.
TEAM You are part of the legal team of Paul Smith & Associés; you discuss with {Emily White}, the head of the compliance department. {George Brown} is an expert in corporate law and {Sophia Black} is an expert in labor law.

Aspects of a great AI agent

We have created a language called Book, which allows you to write AI agents in their native language and create your own AI persona. Book provides a guide to define all the traits and commitments.

You can look at it as "prompting" (or writing a system message), but decorated with commitments.

Commitments are special syntax elements that define contracts between you and the AI agent. The Promptbook Engine transforms them into low-level parameters such as which model to use, its temperature, the system message, the RAG index, MCP servers, and many others. For some commitments (for example, the RULE commitment), the Promptbook Engine can even create adversary agents and extra checks to enforce the rules.
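As a toy illustration of this lowering step (not the real Promptbook Engine, which is far more sophisticated), here is a sketch that parses commitment lines out of a book and groups them into engine parameters:

```javascript
// Illustrative sketch: lower Book commitments into engine parameters.
// Lines starting with a commitment keyword open a new commitment;
// other non-empty lines continue the previous commitment's text.
function compileBook(book) {
    const KEYWORDS = ['PERSONA', 'RULE', 'KNOWLEDGE', 'TEAM'];
    const commitments = [];
    for (const line of book.split('\n')) {
        const keyword = KEYWORDS.find((k) => line.startsWith(k + ' '));
        if (keyword) {
            commitments.push({ type: keyword, content: line.slice(keyword.length + 1) });
        } else if (commitments.length > 0 && line.trim() !== '') {
            commitments[commitments.length - 1].content += '\n' + line;
        }
    }
    return {
        systemMessage: commitments
            .filter((c) => c.type === 'PERSONA')
            .map((c) => c.content)
            .join('\n'),
        rules: commitments.filter((c) => c.type === 'RULE').map((c) => c.content),
        knowledgeSources: commitments.filter((c) => c.type === 'KNOWLEDGE').map((c) => c.content),
    };
}
```

The output field names (systemMessage, rules, knowledgeSources) are invented here purely to show the shape of the transformation.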

Persona commitment

A persona defines the character of your AI agent, its role, and how it should interact with users. It sets the tone and style of communication.

Paul Smith & Associés

PERSONA You are a company lawyer.

Knowledge commitment

Knowledge Commitment allows you to provide specific information, facts, or context that the AI should be aware of when responding.

This can include domain-specific knowledge, company policies, or any other relevant information.

Promptbook Engine enforces this knowledge automatically during interactions. When the knowledge is short enough, it is included directly in the prompt; when it is too long, it is stored in a vector database and retrieved via RAG when needed. You don't have to manage any of this yourself.
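The inline-vs-RAG decision can be sketched as a simple threshold rule (the token estimate and the threshold below are invented for illustration):

```javascript
// Illustrative sketch of routing knowledge: short knowledge goes straight
// into the prompt, long knowledge is sent to a vector store for RAG.
// The 4-chars-per-token estimate and 500-token threshold are made up.
function routeKnowledge(knowledgeText, maxInlineTokens = 500) {
    const estimatedTokens = Math.ceil(knowledgeText.length / 4); // rough heuristic
    return estimatedTokens <= maxInlineTokens
        ? { strategy: 'inline-in-prompt', estimatedTokens }
        : { strategy: 'vector-store-rag', estimatedTokens };
}
```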

Paul Smith & Associés

PERSONA You are a company lawyer.
Your job is to provide legal advice and support to the company and its employees.
You are knowledgeable, professional, and detail-oriented.

KNOWLEDGE https://company.com/company-policies.pdf
KNOWLEDGE https://company.com/internal-documents/employee-handbook.docx

Rule commitment

Rules enforce specific behaviors or constraints on the AI's responses. This can include ethical guidelines, communication styles, or any other rules you want the AI to follow.

Depending on a rule's strictness, Promptbook will either propagate it to the prompt or use other techniques, such as an adversary agent, to enforce it.
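A toy sketch of this routing idea (the strictness levels and field names are invented for illustration):

```javascript
// Illustrative sketch: every rule is stated in the system prompt, and
// strict rules additionally get an adversary-agent check over the
// model's draft answer. (Invented names, not the real engine.)
function planRuleEnforcement(rules) {
    return rules.map((rule) => ({
        rule: rule.text,
        inSystemPrompt: true, // every rule is at least stated in the prompt
        adversaryCheck: rule.strictness === 'strict',
    }));
}
```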

Paul Smith & Associés

PERSONA You are a company lawyer.
Your job is to provide legal advice and support to the company and its employees.
You are knowledgeable, professional, and detail-oriented.

RULE Always ensure compliance with laws and regulations.
RULE Never provide legal advice outside your area of expertise.
RULE Never provide legal advice about criminal law.
KNOWLEDGE https://company.com/company-policies.pdf
KNOWLEDGE https://company.com/internal-documents/employee-handbook.docx

Team commitment

The Team commitment lets you define the team structure and the fellow advisory members the AI can consult. This allows the AI to simulate collaboration and consultation with other experts, enhancing the quality of its responses.

Paul Smith & Associés

PERSONA You are a company lawyer.
Your job is to provide legal advice and support to the company and its employees.
You are knowledgeable, professional, and detail-oriented.

RULE Always ensure compliance with laws and regulations.
RULE Never provide legal advice outside your area of expertise.
RULE Never provide legal advice about criminal law.
KNOWLEDGE https://company.com/company-policies.pdf
KNOWLEDGE https://company.com/internal-documents/employee-handbook.docx
TEAM You are part of the legal team of Paul Smith & Associés; you discuss with {Emily White}, the head of the compliance department. {George Brown} is an expert in corporate law and {Sophia Black} is an expert in labor law.


💜 The Promptbook Project

The Promptbook project is an ecosystem of multiple projects and tools. The most important pieces are:

  • Agents Server - The place where your AI agents "live". It allows you to create, manage, deploy, and interact with AI agents written in the Book language.
  • Book language - A human-friendly, high-level language that abstracts away the low-level details of AI. It lets you focus on the personality, behavior, knowledge, and rules of AI agents rather than on models, parameters, and prompt engineering. There is also a VSCode plugin supporting the .book file extension.
  • Promptbook Engine - Runs AI agents written in the Book language. It is released as multiple NPM packages and as the Promptbook Agent Server Docker package; the Agent Server is based on the Promptbook Engine.

๐ŸŒ Community & Social Media

Join our growing community of developers and users:

PlatformDescription
๐Ÿ’ฌ DiscordJoin our active developer community for discussions and support
๐Ÿ—ฃ๏ธ GitHub DiscussionsTechnical discussions, feature requests, and community Q&A
๐Ÿ‘” LinkedInProfessional updates and industry insights
๐Ÿ“ฑ FacebookGeneral announcements and community engagement
๐Ÿ”— ptbk.ioOfficial landing page with project information

๐Ÿ–ผ๏ธ Product & Brand Channels

Promptbook.studio

๐Ÿ“ธ Instagram @promptbook.studioVisual updates, UI showcases, and design inspiration

📚 Documentation

See detailed guides and API reference in the docs or online.

🔒 Security

For information on reporting security vulnerabilities, see our Security Policy.

📦 Packages (for developers)

This library is divided into several packages, all published from a single monorepo. You can install all of them at once:

npm i ptbk

Or you can install them separately:

⭐ Marked packages are worth trying first

📚 Dictionary

The following glossary is used to clarify certain concepts:

General LLM / AI terms

  • Prompt drift is a phenomenon where the AI model starts to generate outputs that are not aligned with the original prompt. This can happen due to the model's training data, the prompt's wording, or the model's architecture.
  • Pipeline, workflow, scenario, or chain is a sequence of tasks executed in a specific order. In the context of AI, a pipeline can refer to a sequence of AI models used to process data.
  • Fine-tuning is a process where a pre-trained AI model is further trained on a specific dataset to improve its performance on a specific task.
  • Zero-shot learning is a machine learning paradigm where a model is trained to perform a task without any labeled examples. Instead, the model is provided with a description of the task and is expected to generate the correct output.
  • Few-shot learning is a machine learning paradigm where a model is trained to perform a task with only a few labeled examples. This is in contrast to traditional machine learning, where models are trained on large datasets.
  • Meta-learning is a machine learning paradigm where a model is trained on a variety of tasks and is able to learn new tasks with minimal additional training. This is achieved by learning a set of meta-parameters that can be quickly adapted to new tasks.
  • Retrieval-augmented generation is a machine learning paradigm where a model generates text by retrieving relevant information from a large database of text. This approach combines the benefits of generative models and retrieval models.
  • Longtail refers to non-common or rare events, items, or entities that are not well-represented in the training data of machine learning models. Longtail items are often challenging for models to predict accurately.

Note: This section is not a complete dictionary; rather, it is a list of general AI / LLM terms that have a connection with Promptbook.
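To make the zero-shot vs. few-shot distinction above concrete, here is a toy prompt builder (purely illustrative; not part of Promptbook):

```javascript
// Toy illustration of zero-shot vs few-shot prompting: a few-shot prompt
// simply prepends labeled examples before the actual query, while a
// zero-shot prompt contains only the task description and the query.
function buildPrompt(task, query, examples = []) {
    const shots = examples.map((e) => `Input: ${e.input}\nOutput: ${e.output}`).join('\n\n');
    return [task, shots, `Input: ${query}\nOutput:`].filter(Boolean).join('\n\n');
}
```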

💯 Core concepts

Advanced concepts

  • Data & Knowledge Management
  • Pipeline Control
  • Language & Output Control
  • Advanced Generation

🔍 View more concepts

🚂 Promptbook Engine

Schema of Promptbook Engine

➕➖ When to use Promptbook?

➕ When to use

  • When you are writing an app that generates complex things via LLM - like websites, articles, presentations, code, stories, songs,...
  • When you want to separate code from text prompts
  • When you want to describe complex prompt pipelines and don't want to do it in the code
  • When you want to orchestrate multiple prompts together
  • When you want to reuse parts of prompts in multiple places
  • When you want to version your prompts and test multiple versions
  • When you want to log the execution of prompts and backtrace issues

See more

➖ When not to use

  • When you have already implemented a single simple prompt and it works fine for your job
  • When an OpenAI Assistant (GPT) is enough for you
  • When you need streaming (this may be implemented in the future, see discussion)
  • When you need to use something other than JavaScript or TypeScript (other languages are on the way, see the discussion)
  • When your main focus is on something other than text - like images, audio, video, or spreadsheets (other media types may be added in the future, see discussion)
  • When you need to use recursion (see the discussion)

See more

๐Ÿœ Known issues

๐Ÿงผ Intentionally not implemented features

โ” FAQ

If you have a question start a discussion, open an issue or write me an email.

📅 Changelog

See CHANGELOG.md

📜 License

This project is licensed under BUSL 1.1.

🤝 Contributing

We welcome contributions! See CONTRIBUTING.md for guidelines.

You can also ⭐ star the project, or follow us on GitHub and various other social networks. We are open to pull requests, feedback, and suggestions.

🆘 Support & Community

Need help with the Book language? We're here for you!

We welcome contributions and feedback to make the Book language better for everyone!

Keywords

ai

Package last updated on 06 Jan 2026