
✨ Promptbook: AI Agents

Turn your company's scattered knowledge into AI-ready Books


🌟 New Features

  • 🚀 GPT-5 Support - Now includes OpenAI's most advanced language model with unprecedented reasoning capabilities and a 200K context window
  • 💡 VS Code support for .book files with syntax highlighting and IntelliSense
  • 🐳 Official Docker image (hejny/promptbook) for seamless containerized usage
  • 🔥 Native support for OpenAI o3-mini, GPT-4 and other leading LLMs
  • 🔍 DeepSeek integration for advanced knowledge search

⚠ Warning: This is a pre-release version of the library. It is not yet ready for production use. Please look at the latest stable release.

📦 Package @promptbook/core

To install this package, run:

# Install entire promptbook ecosystem
npm i ptbk

# Install just this package to save space
npm install @promptbook/core

The core package contains the fundamental logic and infrastructure for Promptbook. It provides the essential building blocks for creating, parsing, validating, and executing promptbooks, along with comprehensive error handling, LLM provider integrations, and execution utilities.

🎯 Purpose and Motivation

The core package serves as the foundation of the Promptbook ecosystem. It abstracts away the complexity of working with different LLM providers, provides a unified interface for prompt execution, and handles all the intricate details of pipeline management, parameter validation, and result processing.

🔧 High-Level Functionality

This package orchestrates the entire promptbook execution lifecycle:

  • Pipeline Management: Parse, validate, and compile promptbook definitions
  • Execution Engine: Create and manage pipeline executors with comprehensive error handling
  • LLM Integration: Unified interface for multiple LLM providers (OpenAI, Anthropic, Google, etc.)
  • Parameter Processing: Template parameter substitution and validation
  • Knowledge Management: Handle knowledge sources and scraping
  • Storage Abstraction: Flexible storage backends for caching and persistence
  • Format Support: Parse and validate various data formats (JSON, CSV, XML)
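
As a hedged sketch of that lifecycle, the snippet below compiles a book, wires up LLM tools, and runs the resulting pipeline. The exported names (compilePipeline, createLlmToolsFromConfiguration, createPipelineExecutor) are listed in this package; the book source, the configuration shape, and the result handling are illustrative assumptions, not the definitive API.

import {
    compilePipeline,
    createLlmToolsFromConfiguration,
    createPipelineExecutor,
} from '@promptbook/core';

async function main() {
    // 1) Compile a promptbook definition into an executable pipeline
    //    (the book source below is a hypothetical placeholder)
    const pipeline = await compilePipeline(`...your book source...`);

    // 2) Create LLM tools from configuration
    //    (the configuration entry shown is an assumption; real options depend on the provider package)
    const llm = createLlmToolsFromConfiguration([
        { title: 'OpenAI', options: { apiKey: process.env.OPENAI_API_KEY } },
    ]);

    // 3) Create an executor and run the pipeline with input parameters
    const execute = createPipelineExecutor({ pipeline, tools: { llm } });
    const result = await execute({ topic: 'AI agents' }); // parameter names depend on your book

    console.log(result); // the shape of the result object is an assumption
}

main();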

✨ Key Features

  • 🚀 Universal Pipeline Executor - Execute promptbooks with any supported LLM provider
  • 🔄 Multi-Provider Support - Seamlessly switch between OpenAI, Anthropic, Google, and other providers
  • 📊 Comprehensive Validation - Validate promptbooks, parameters, and execution results
  • 🎯 Expectation Checking - Built-in validation for output format, length, and content expectations
  • 🧠 Knowledge Integration - Scrape and process knowledge from various sources
  • 💾 Flexible Storage - Memory, filesystem, and custom storage backends
  • 🔧 Error Handling - Detailed error types for debugging and monitoring
  • 📈 Usage Tracking - Monitor token usage, costs, and performance metrics
  • 🎨 Format Parsers - Support for JSON, CSV, XML, and text formats
  • 🔀 Pipeline Migration - Upgrade and migrate pipeline definitions

📦 Exported Entities

Version Information

  • BOOK_LANGUAGE_VERSION - Current book language version
  • PROMPTBOOK_ENGINE_VERSION - Current engine version
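
A trivial sketch: both constants can be logged at startup to record which Book language and engine versions the application was built against (only the constant names are taken from the list above).

import { BOOK_LANGUAGE_VERSION, PROMPTBOOK_ENGINE_VERSION } from '@promptbook/core';

console.log(`Book language version: ${BOOK_LANGUAGE_VERSION}`);
console.log(`Promptbook engine version: ${PROMPTBOOK_ENGINE_VERSION}`);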

Agent and Book Management

  • createAgentModelRequirements - Create model requirements for agents
  • parseAgentSource - Parse agent source code
  • isValidBook - Validate book format
  • validateBook - Comprehensive book validation
  • DEFAULT_BOOK - Default book template
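
A minimal sketch of validating a book before using it, starting from the default template. The names come from the export list above; the exact behavior (isValidBook as a boolean check, validateBook throwing on deeper problems) is an assumption.

import { DEFAULT_BOOK, isValidBook, validateBook } from '@promptbook/core';

const source = DEFAULT_BOOK; // start from the default book template, or load your own source here

if (isValidBook(source)) {
    validateBook(source); // assumed to throw a descriptive error when deeper validation fails
    console.log('Book is valid');
} else {
    console.warn('Not a valid book');
}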

Commitment System

  • createEmptyAgentModelRequirements - Create empty model requirements
  • createBasicAgentModelRequirements - Create basic model requirements
  • NotYetImplementedCommitmentDefinition - Placeholder for future commitments
  • getCommitmentDefinition - Get specific commitment definition
  • getAllCommitmentDefinitions - Get all available commitment definitions
  • getAllCommitmentTypes - Get all commitment types
  • isCommitmentSupported - Check if commitment is supported
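
A hedged sketch of querying the commitment registry. Only the function names come from the export list; the concrete commitment types and the shape of the returned definitions are assumptions.

import {
    getAllCommitmentTypes,
    getCommitmentDefinition,
    isCommitmentSupported,
} from '@promptbook/core';

// Enumerate every commitment type the engine knows about and inspect the supported ones
for (const type of getAllCommitmentTypes()) {
    if (isCommitmentSupported(type)) {
        const definition = getCommitmentDefinition(type);
        console.log(type, definition); // the definition's shape is an assumption
    }
}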

Collection Management

  • collectionToJson - Convert collection to JSON
  • createCollectionFromJson - Create collection from JSON data
  • createCollectionFromPromise - Create collection from async source
  • createCollectionFromUrl - Create collection from URL
  • createSubcollection - Create filtered subcollection
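
A sketch of building and filtering a collection. The function names come from the export list, while the URL, the filter signature, and the property used in the predicate are hypothetical.

import {
    collectionToJson,
    createCollectionFromUrl,
    createSubcollection,
} from '@promptbook/core';

async function main() {
    // Hypothetical URL - point this at your own published collection
    const collection = await createCollectionFromUrl('https://example.com/books/index.book.json');

    // Assumption: createSubcollection takes a predicate over the pipelines in the collection
    const chatbots = createSubcollection(collection, (pipeline: any) => pipeline.formfactorName === 'CHATBOT');

    // Serialize the filtered collection, e.g. for caching or shipping to a browser
    console.log(await collectionToJson(chatbots));
}

main();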

Configuration Constants

  • NAME - Project name
  • ADMIN_EMAIL - Administrator email
  • ADMIN_GITHUB_NAME - GitHub username
  • CLAIM - Project claim/tagline
  • DEFAULT_BOOK_TITLE - Default book title
  • DEFAULT_TASK_TITLE - Default task title
  • DEFAULT_PROMPT_TASK_TITLE - Default prompt task title
  • DEFAULT_BOOK_OUTPUT_PARAMETER_NAME - Default output parameter name
  • DEFAULT_MAX_FILE_SIZE - Maximum file size limit
  • BIG_DATASET_TRESHOLD - Threshold for large datasets
  • FAILED_VALUE_PLACEHOLDER - Placeholder for failed values
  • PENDING_VALUE_PLACEHOLDER - Placeholder for pending values
  • MAX_FILENAME_LENGTH - Maximum filename length
  • DEFAULT_INTERMEDIATE_FILES_STRATEGY - Strategy for intermediate files
  • DEFAULT_MAX_PARALLEL_COUNT - Maximum parallel executions
  • DEFAULT_MAX_EXECUTION_ATTEMPTS - Maximum execution attempts
  • DEFAULT_MAX_KNOWLEDGE_SOURCES_SCRAPING_DEPTH - Knowledge scraping depth limit
  • DEFAULT_MAX_KNOWLEDGE_SOURCES_SCRAPING_TOTAL - Knowledge scraping total limit
  • DEFAULT_BOOKS_DIRNAME - Default books directory name
  • DEFAULT_DOWNLOAD_CACHE_DIRNAME - Default download cache directory
  • DEFAULT_EXECUTION_CACHE_DIRNAME - Default execution cache directory
  • DEFAULT_SCRAPE_CACHE_DIRNAME - Default scrape cache directory
  • CLI_APP_ID - CLI application identifier
  • PLAYGROUND_APP_ID - Playground application identifier
  • DEFAULT_PIPELINE_COLLECTION_BASE_FILENAME - Default collection filename
  • DEFAULT_REMOTE_SERVER_URL - Default remote server URL
  • DEFAULT_CSV_SETTINGS - Default CSV parsing settings
  • DEFAULT_IS_VERBOSE - Default verbosity setting
  • SET_IS_VERBOSE - Verbosity setter
  • DEFAULT_IS_AUTO_INSTALLED - Default auto-install setting
  • DEFAULT_TASK_SIMULATED_DURATION_MS - Default task simulation duration
  • DEFAULT_GET_PIPELINE_COLLECTION_FUNCTION_NAME - Default collection function name
  • DEFAULT_MAX_REQUESTS_PER_MINUTE - Rate limiting configuration
  • API_REQUEST_TIMEOUT - API request timeout
  • PROMPTBOOK_LOGO_URL - Official logo URL

Model and Provider Constants

  • MODEL_TRUST_LEVELS - Trust levels for different models
  • MODEL_ORDERS - Ordering preferences for models
  • ORDER_OF_PIPELINE_JSON - JSON property ordering
  • RESERVED_PARAMETER_NAMES - Reserved parameter names

Pipeline Processing

  • compilePipeline - Compile pipeline from source
  • parsePipeline - Parse pipeline definition
  • pipelineJsonToString - Convert pipeline JSON to string
  • prettifyPipelineString - Format pipeline string
  • extractParameterNamesFromTask - Extract parameter names
  • validatePipeline - Validate pipeline structure
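
A sketch of a round trip through these helpers: parse a pipeline, validate it, and pretty-print it back to a string. The book source is a placeholder, and the exact signatures (what is returned, what throws) are assumptions.

import {
    parsePipeline,
    pipelineJsonToString,
    prettifyPipelineString,
    validatePipeline,
} from '@promptbook/core';

const source = `...your book source...`; // hypothetical placeholder

const pipelineJson = parsePipeline(source); // assumed to return the pipeline as JSON
validatePipeline(pipelineJson);             // assumed to throw (e.g. PipelineLogicError) on problems

const roundTripped = pipelineJsonToString(pipelineJson);
console.log(prettifyPipelineString(roundTripped)); // formatted pipeline string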

Dialog and Interface Tools

  • CallbackInterfaceTools - Callback-based interface tools
  • CallbackInterfaceToolsOptions - Options for callback tools (type)

Error Handling

  • BoilerplateError - Base error class
  • PROMPTBOOK_ERRORS - All error types registry
  • AbstractFormatError - Abstract format validation error
  • AuthenticationError - Authentication failure error
  • CollectionError - Collection-related error
  • EnvironmentMismatchError - Environment compatibility error
  • ExpectError - Expectation validation error
  • KnowledgeScrapeError - Knowledge scraping error
  • LimitReachedError - Resource limit error
  • MissingToolsError - Missing tools error
  • NotFoundError - Resource not found error
  • NotYetImplementedError - Feature not implemented error
  • ParseError - Parsing error
  • PipelineExecutionError - Pipeline execution error
  • PipelineLogicError - Pipeline logic error
  • PipelineUrlError - Pipeline URL error
  • PromptbookFetchError - Fetch operation error
  • UnexpectedError - Unexpected error
  • WrappedError - Wrapped error container
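
These classes make it possible to handle failures selectively. A hedged sketch (the class names come from the list above; which errors a given call actually throws is an assumption):

import { ParseError, PipelineExecutionError, UnexpectedError } from '@promptbook/core';

try {
    // ...compile and execute a pipeline here...
} catch (error) {
    if (error instanceof ParseError) {
        console.error('The book source could not be parsed:', error.message);
    } else if (error instanceof PipelineExecutionError) {
        console.error('Execution failed:', error.message);
    } else if (error instanceof UnexpectedError) {
        throw error; // probably a bug worth reporting upstream
    } else {
        throw error;
    }
}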

Execution Engine

  • createPipelineExecutor - Create pipeline executor
  • computeCosineSimilarity - Compute cosine similarity for embeddings
  • embeddingVectorToString - Convert embedding vector to string
  • executionReportJsonToString - Convert execution report to string
  • ExecutionReportStringOptions - Report formatting options (type)
  • ExecutionReportStringOptionsDefaults - Default report options
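
For example, computeCosineSimilarity and embeddingVectorToString can be used when comparing embeddings to rank knowledge pieces. Only the function names come from the export list; treating embeddings as plain number arrays is an assumption.

import { computeCosineSimilarity, embeddingVectorToString } from '@promptbook/core';

// Assumption: embedding vectors are plain arrays of numbers
const queryEmbedding = [0.12, -0.03, 0.87];
const pieceEmbedding = [0.1, 0.0, 0.9];

const similarity = computeCosineSimilarity(queryEmbedding, pieceEmbedding);
console.log(`similarity = ${similarity.toFixed(3)}`);
console.log(`vector = ${embeddingVectorToString(pieceEmbedding)}`);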

Usage and Metrics

  • addUsage - Add usage metrics
  • isPassingExpectations - Check if expectations are met
  • ZERO_VALUE - Zero usage value constant
  • UNCERTAIN_ZERO_VALUE - Uncertain zero value constant
  • ZERO_USAGE - Zero usage object
  • UNCERTAIN_USAGE - Uncertain usage object
  • usageToHuman - Convert usage to human-readable format
  • usageToWorktime - Convert usage to work time estimate
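
A sketch of aggregating usage across several execution results and printing it for a human. The names come from the export list; the shape of a usage object and the exact addUsage signature are assumptions.

import { ZERO_USAGE, addUsage, usageToHuman, usageToWorktime } from '@promptbook/core';

// Hypothetical: collect usage objects from your executor's results here
const usages: Array<typeof ZERO_USAGE> = [];

let total = ZERO_USAGE;
for (const usage of usages) {
    total = addUsage(total, usage); // assumption: addUsage merges two usage objects
}

console.log(usageToHuman(total));    // human-readable summary (tokens, price) - assumed output
console.log(usageToWorktime(total)); // rough estimate of equivalent human work time - assumed output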

Format Parsers

  • CsvFormatError - CSV format error
  • CsvFormatParser - CSV format parser
  • MANDATORY_CSV_SETTINGS - Required CSV settings
  • TextFormatParser - Text format parser

Form Factor Definitions

  • BoilerplateFormfactorDefinition - Boilerplate form factor
  • ChatbotFormfactorDefinition - Chatbot form factor
  • CompletionFormfactorDefinition - Completion form factor
  • GeneratorFormfactorDefinition - Generator form factor
  • GenericFormfactorDefinition - Generic form factor
  • ImageGeneratorFormfactorDefinition - Image generator form factor
  • FORMFACTOR_DEFINITIONS - All form factor definitions
  • MatcherFormfactorDefinition - Matcher form factor
  • SheetsFormfactorDefinition - Sheets form factor
  • TranslatorFormfactorDefinition - Translator form factor

LLM Provider Integration

  • filterModels - Filter available models
  • $llmToolsMetadataRegister - LLM tools metadata registry
  • $llmToolsRegister - LLM tools registry
  • createLlmToolsFromConfiguration - Create tools from config
  • cacheLlmTools - Cache LLM tools
  • countUsage - Count total usage
  • limitTotalUsage - Limit total usage
  • joinLlmExecutionTools - Join multiple LLM tools
  • MultipleLlmExecutionTools - Multiple LLM tools container
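
A sketch of composing these helpers: create tools from configuration, join multiple providers, add an in-memory cache, and track usage. The function names come from the export list; the configuration entries and option shapes are assumptions (real options depend on the provider packages you install).

import {
    cacheLlmTools,
    countUsage,
    createLlmToolsFromConfiguration,
    joinLlmExecutionTools,
    MemoryStorage,
} from '@promptbook/core';

// Assumption: each configuration entry selects a provider package installed separately
const openai = createLlmToolsFromConfiguration([{ title: 'OpenAI' /* provider-specific options */ }]);
const anthropic = createLlmToolsFromConfiguration([{ title: 'Claude' /* provider-specific options */ }]);

// Join providers so the executor can pick whichever supports the requested model
const joined = joinLlmExecutionTools(openai, anthropic);

// Wrap with caching (the storage option is an assumption) and usage counting
const llm = countUsage(cacheLlmTools(joined, { storage: new MemoryStorage() }));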

Provider Registrations

  • _AnthropicClaudeMetadataRegistration - Anthropic Claude registration
  • _AzureOpenAiMetadataRegistration - Azure OpenAI registration
  • _DeepseekMetadataRegistration - Deepseek registration
  • _GoogleMetadataRegistration - Google registration
  • _OllamaMetadataRegistration - Ollama registration
  • _OpenAiMetadataRegistration - OpenAI registration
  • _OpenAiAssistantMetadataRegistration - OpenAI Assistant registration
  • _OpenAiCompatibleMetadataRegistration - OpenAI Compatible registration

Pipeline Management

  • migratePipeline - Migrate pipeline to newer version
  • preparePersona - Prepare persona for execution
  • book - Book notation utilities
  • isValidPipelineString - Validate pipeline string
  • GENERIC_PIPELINE_INTERFACE - Generic pipeline interface
  • getPipelineInterface - Get pipeline interface
  • isPipelineImplementingInterface - Check interface implementation
  • isPipelineInterfacesEqual - Compare pipeline interfaces
  • EXPECTATION_UNITS - Units for expectations
  • validatePipelineString - Validate pipeline string format

Pipeline Preparation

  • isPipelinePrepared - Check if pipeline is prepared
  • preparePipeline - Prepare pipeline for execution
  • unpreparePipeline - Unprepare pipeline
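
A sketch of preparing a pipeline ahead of time (for example in a build step) so that execution does not have to do it lazily. The function names are from the list above; the second argument to preparePipeline and the idea that preparation needs LLM tools are assumptions.

import { isPipelinePrepared, preparePipeline, unpreparePipeline } from '@promptbook/core';

async function ensurePrepared(pipeline: any, tools: any) {
    // Assumption: preparing resolves personas, knowledge, and model requirements using the given tools
    return isPipelinePrepared(pipeline) ? pipeline : preparePipeline(pipeline, tools);
}

// unpreparePipeline(pipeline) strips the prepared artifacts again,
// e.g. before editing the source (assumed behavior)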

Remote Server Integration

  • identificationToPromptbookToken - Convert ID to token
  • promptbookTokenToIdentification - Convert token to ID

Knowledge Scraping

  • _BoilerplateScraperMetadataRegistration - Boilerplate scraper registration
  • prepareKnowledgePieces - Prepare knowledge pieces
  • $scrapersMetadataRegister - Scrapers metadata registry
  • $scrapersRegister - Scrapers registry
  • makeKnowledgeSourceHandler - Create knowledge source handler
  • promptbookFetch - Fetch with promptbook context
  • _LegacyDocumentScraperMetadataRegistration - Legacy document scraper
  • _DocumentScraperMetadataRegistration - Document scraper registration
  • _MarkdownScraperMetadataRegistration - Markdown scraper registration
  • _MarkitdownScraperMetadataRegistration - Markitdown scraper registration
  • _PdfScraperMetadataRegistration - PDF scraper registration
  • _WebsiteScraperMetadataRegistration - Website scraper registration

Storage Backends

  • BlackholeStorage - Blackhole storage (discards data)
  • MemoryStorage - In-memory storage
  • PrefixStorage - Prefixed storage wrapper
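
A sketch of combining the backends: namespace an in-memory cache with a key prefix, or discard writes entirely during tests. The class names are from the list above; the constructor arguments are assumptions.

import { BlackholeStorage, MemoryStorage, PrefixStorage } from '@promptbook/core';

// Assumption: PrefixStorage wraps another storage and namespaces its keys
const executionCache = new PrefixStorage(new MemoryStorage(), 'execution-cache');

// BlackholeStorage accepts writes and discards them - useful when you want no caching at all
const noCache = new BlackholeStorage();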

Type Definitions

  • MODEL_VARIANTS - Available model variants
  • NonTaskSectionTypes - Non-task section types
  • SectionTypes - All section types
  • TaskTypes - Task types

Server Configuration

  • REMOTE_SERVER_URLS - Remote server URLs

💡 This package is not meant to be used on its own; see all the Promptbook packages, or just install everything at once with npm i ptbk

The rest of the documentation is shared across the entire Promptbook ecosystem:

📖 The Book Whitepaper

For most business applications nowadays, the biggest challenge isn't about the raw capabilities of AI models. Large language models like GPT-5 or Claude-4.1 are extremely capable.

The main challenge is to narrow the model down: constrain it and set the proper context, rules, knowledge, and personality. There are plenty of tools that can do exactly this. On one side, there are no-code platforms that can launch your agent in seconds. On the other side, there are heavy frameworks like Langchain or Semantic Kernel, which give you deep control.

Promptbook takes the best from both worlds. You define your AI's behavior in simple books, which are very explicit. They are automatically enforced, yet easy to understand, easy to write, reliable, and portable.

Paul Smith & Associés Book

Aspects of a great AI agent

We have created a language called Book, which allows you to write AI agents in their native language and create your own AI persona. Book provides a guide to define all the traits and commitments.

You can look at it as prompting (or writing a system message), decorated with commitments.

Persona commitment

The Persona commitment defines the character of your AI persona, its role, and how it should interact with users. It sets the tone and style of communication.

Paul Smith & Associés Book

Knowledge commitment

Knowledge Commitment allows you to provide specific information, facts, or context that the AI should be aware of when responding.

This can include domain-specific knowledge, company policies, or any other relevant information.

Promptbook Engine will automatically enforce this knowledge during interactions. When the knowledge is short enough, it is included directly in the prompt. When it is too long, it is stored in a vector database and retrieved via RAG when needed. You don't need to worry about any of this.

Paul Smith & Associés Book

Rule commitment

Rules will enforce specific behaviors or constraints on the AI's responses. This can include ethical guidelines, communication styles, or any other rules you want the AI to follow.

Depending on the rule's strictness, Promptbook will either propagate it into the prompt or use other techniques, such as an adversarial agent, to enforce it.

Paul Smith & Associés Book

Action commitment

Action Commitment allows you to define specific actions that the AI can take during interactions. This can include things like posting on a social media platform, sending emails, creating calendar events, or interacting with your internal systems.

Paul Smith & Associés Book

Read more about the language

Where to use your AI agent written in Book

Books can be useful in various applications and scenarios. Here are some examples:

Chat apps:

Create your own chat shopping assistant and place it in your eShop. You will be able to answer customer questions, help them find products, and provide personalized recommendations. Everything is tightly controlled by the book you have written.

Reply Agent:

Create your own AI agent, which will look at your emails and reply to them. It can even create drafts for you to review before sending.

Coding Agent:

Do you love vibe coding, but the AI-generated code is not always aligned with your coding style, architecture, rules, security, etc.? Create your own coding agent to help enforce your specific coding standards and practices.

This can be integrated into almost any vibe-coding platform, like GitHub Copilot, Amazon CodeWhisperer, Cursor, Cline, Kilocode, Roocode,...

They will work the same way you are used to, but with your specific rules written in Book.

Internal Expertise

Do you have an app written in TypeScript, Python, C#, Java, or any other language, and you are integrating AI into it?

You can avoid the struggle of choosing the best model and its settings (temperature, max tokens, etc.) by writing a book agent and using it as your AI expert.

It doesn't matter whether you do automations, data analysis, customer support, sentiment analysis, classification, or any other task. Your AI agent will be tailored to your specific needs and requirements.

It even works in no-code platforms!

How to create your AI agent in Book

Now you want to use it. There are several ways to write your first book:

From scratch with help from Paul

We have written an AI assistant in Book who can help you with writing your first book.

Your AI twin

Copy your own behavior, personality, and knowledge into a book and create your AI twin. It can help you with your work, personal life, or any other task.

AI persona workpool

Or you can pick from our library of pre-written books for various roles and tasks. You can find books for customer support, coding, marketing, sales, HR, legal, and many other roles.

🚀 Get started

Take a look at the simple starter kit with books integrated into the Hello World sample applications:

💜 The Promptbook Project

The Promptbook project is an ecosystem of multiple projects and tools. The following is a list of its most important pieces:

Project - About
Book language - Book is a human-understandable markup language for writing AI applications such as chatbots, knowledge bases, agents, avatars, translators, automations, and more. There is also a plugin for VSCode to support the .book file extension.
Promptbook Engine - The Promptbook engine runs applications written in the Book language. It is released as multiple NPM packages and on Docker Hub.
Promptbook Studio - Promptbook.studio is a web-based editor and runner for book applications. It is still in the experimental MVP stage.

Hello world examples:

๐ŸŒ Community & Social Media

Join our growing community of developers and users:

Platform - Description
💬 Discord - Join our active developer community for discussions and support
🗣️ GitHub Discussions - Technical discussions, feature requests, and community Q&A
👔 LinkedIn - Professional updates and industry insights
📱 Facebook - General announcements and community engagement
🔗 ptbk.io - Official landing page with project information

๐Ÿ–ผ๏ธ Product & Brand Channels

Promptbook.studio

๐Ÿ“ธ Instagram @promptbook.studioVisual updates, UI showcases, and design inspiration

📚 Documentation

See detailed guides and API reference in the docs or online.

🔒 Security

For information on reporting security vulnerabilities, see our Security Policy.

📦 Packages (for developers)

This library is divided into several packages, all published from a single monorepo. You can install all of them at once:

npm i ptbk

Or you can install them separately:

โญ Marked packages are worth to try first

📚 Dictionary

The following glossary is used to clarify certain concepts:

General LLM / AI terms

  • Prompt drift is a phenomenon where the AI model starts to generate outputs that are not aligned with the original prompt. This can happen due to the model's training data, the prompt's wording, or the model's architecture.
  • Pipeline, workflow scenario or chain is a sequence of tasks that are executed in a specific order. In the context of AI, a pipeline can refer to a sequence of AI models that are used to process data.
  • Fine-tuning is a process where a pre-trained AI model is further trained on a specific dataset to improve its performance on a specific task.
  • Zero-shot learning is a machine learning paradigm where a model is trained to perform a task without any labeled examples. Instead, the model is provided with a description of the task and is expected to generate the correct output.
  • Few-shot learning is a machine learning paradigm where a model is trained to perform a task with only a few labeled examples. This is in contrast to traditional machine learning, where models are trained on large datasets.
  • Meta-learning is a machine learning paradigm where a model is trained on a variety of tasks and is able to learn new tasks with minimal additional training. This is achieved by learning a set of meta-parameters that can be quickly adapted to new tasks.
  • Retrieval-augmented generation is a machine learning paradigm where a model generates text by retrieving relevant information from a large database of text. This approach combines the benefits of generative models and retrieval models.
  • Longtail refers to non-common or rare events, items, or entities that are not well-represented in the training data of machine learning models. Longtail items are often challenging for models to predict accurately.

Note: This section is not a complete dictionary; it is more a list of general AI/LLM terms that have a connection to Promptbook.

💯 Core concepts

Advanced concepts

Data & Knowledge Management
Pipeline Control
Language & Output Control
Advanced Generation

🔍 View more concepts

🚂 Promptbook Engine

Schema of Promptbook Engine

➕➖ When to use Promptbook?

➕ When to use

  • When you are writing an app that generates complex things via an LLM - like websites, articles, presentations, code, stories, songs,...
  • When you want to separate code from text prompts
  • When you want to describe complex prompt pipelines and don't want to do it in the code
  • When you want to orchestrate multiple prompts together
  • When you want to reuse parts of prompts in multiple places
  • When you want to version your prompts and test multiple versions
  • When you want to log the execution of prompts and backtrace the issues

See more

➖ When not to use

  • When you have already implemented a single simple prompt and it works fine for your job
  • When OpenAI Assistant (GPTs) is enough for you
  • When you need streaming (this may be implemented in the future, see discussion).
  • When you need to use something other than JavaScript or TypeScript (other languages are on the way, see the discussion)
  • When your main focus is on something other than text - like images, audio, video, spreadsheets (other media types may be added in the future, see discussion)
  • When you need to use recursion (see the discussion)

See more

๐Ÿœ Known issues

๐Ÿงผ Intentionally not implemented features

โ” FAQ

If you have a question, start a discussion, open an issue, or write me an email.

📅 Changelog

See CHANGELOG.md

📜 License

This project is licensed under BUSL 1.1.

๐Ÿค Contributing

We welcome contributions! See CONTRIBUTING.md for guidelines.

You can also ⭐ star the project, follow us on GitHub or various other social networks. We are open to pull requests, feedback, and suggestions.

🆘 Support & Community

Need help with Book language? We're here for you!

We welcome contributions and feedback to make Book language better for everyone!
