# Promptbook

Supercharge your use of large language models

## 📦 Package `@promptbook/azure-openai`
To install this package, run:

```bash
# Install the whole Promptbook ecosystem
npm i ptbk

# Or install only this package
npm i @promptbook/azure-openai
```
`@promptbook/azure-openai` integrates the Azure OpenAI API with Promptbook. It allows you to execute promptbooks with Azure OpenAI GPT models.

> Note: This package is similar to `@promptbook/openai`, but it is more useful for enterprise customers who use Azure OpenAI to ensure strict data privacy and compliance.
## 🧡 Usage
```typescript
import { createPipelineExecutor, assertsExecutionSuccessful } from '@promptbook/core';
import { createCollectionFromDirectory } from '@promptbook/node';
import { JavascriptExecutionTools } from '@promptbook/execute-javascript';
import { AzureOpenAiExecutionTools } from '@promptbook/azure-openai';

// Load the pipeline collection and pick one pipeline from it
const collection = await createCollectionFromDirectory('./promptbook-collection');
const pipeline = await collection.getPipelineByUrl(
    `https://promptbook.studio/my-collection/write-article.ptbk.md`,
);

// Prepare the execution tools - Azure OpenAI for LLM calls, JavaScript for script blocks
const tools = {
    llm: new AzureOpenAiExecutionTools({
        isVerbose: true,
        resourceName: process.env.AZUREOPENAI_RESOURCE_NAME,
        deploymentName: process.env.AZUREOPENAI_DEPLOYMENT_NAME,
        apiKey: process.env.AZUREOPENAI_API_KEY,
    }),
    script: [new JavascriptExecutionTools()],
};

// Create the executor and run the pipeline
const pipelineExecutor = createPipelineExecutor({ pipeline, tools });
const inputParameters = { word: 'crocodile' };
const result = await pipelineExecutor(inputParameters);

// Fail loudly if the execution was not successful
assertsExecutionSuccessful(result);

// Handle the result
const { isSuccessful, errors, outputParameters, executionReport } = result;
console.info(outputParameters);
```
## Usage of multiple LLM providers
You can use multiple LLM providers in one Promptbook execution. The best model will be chosen automatically according to the prompt and the model's capabilities.
```typescript
import { createPipelineExecutor, assertsExecutionSuccessful } from '@promptbook/core';
import { createCollectionFromDirectory } from '@promptbook/node';
import { JavascriptExecutionTools } from '@promptbook/execute-javascript';
import { AzureOpenAiExecutionTools } from '@promptbook/azure-openai';
import { OpenAiExecutionTools } from '@promptbook/openai';
import { AnthropicClaudeExecutionTools } from '@promptbook/anthropic-claude';

// Load the pipeline collection and pick one pipeline from it
const collection = await createCollectionFromDirectory('./promptbook-collection');
const pipeline = await collection.getPipelineByUrl(
    `https://promptbook.studio/my-collection/write-article.ptbk.md`,
);

// Register multiple LLM providers - the most suitable one is chosen per prompt
const tools = {
    llm: [
        new AzureOpenAiExecutionTools({
            resourceName: process.env.AZUREOPENAI_RESOURCE_NAME,
            deploymentName: process.env.AZUREOPENAI_DEPLOYMENT_NAME,
            apiKey: process.env.AZUREOPENAI_API_KEY,
        }),
        new OpenAiExecutionTools({
            apiKey: process.env.OPENAI_API_KEY,
        }),
        new AnthropicClaudeExecutionTools({
            apiKey: process.env.ANTHROPIC_CLAUDE_API_KEY,
        }),
    ],
    script: [new JavascriptExecutionTools()],
};

// Create the executor and run the pipeline
const pipelineExecutor = createPipelineExecutor({ pipeline, tools });
const inputParameters = { word: 'snake' };
const result = await pipelineExecutor(inputParameters);

// Fail loudly if the execution was not successful
assertsExecutionSuccessful(result);

// Handle the result
const { isSuccessful, errors, outputParameters, executionReport } = result;
console.info(outputParameters);
```
## Integration with other models
See the other models available in the Promptbook package:
The rest of the documentation is common to the entire Promptbook ecosystem:
## The Promptbook Whitepaper
When you have a simple, single prompt for ChatGPT, GPT-4, Anthropic Claude, Google Gemini, Llama 2, or whatever, it doesn't matter how it is integrated. Whether it's the direct calling of a REST API, using the SDK, hardcoding the prompt in the source code, or importing a text file, the process remains the same.
If you need something more advanced or want to extend the capabilities of LLMs, you generally have three ways to proceed:
1. Fine-tune the model to your specifications or even train your own.
2. Prompt-engineer the prompt to the best shape you can achieve.
3. Use multiple prompts in a pipeline to get the best result.
In any of these situations, but especially in (3), the Promptbook library can make your life easier and act as an orchestrator for your prompts:
- Separation of concerns between prompt engineer and programmer; between code files and prompt files; and between prompts and their execution logic.
- Set up a common format for prompts that is interchangeable between projects and language/technology stacks.
- Preprocessing and cleaning the input data from the user.
- Use default values - Jokers to bypass some parts of the pipeline.
- Expect some specific output from the model.
- Retry mismatched outputs.
- Combine multiple models together.
- Interactive exchange between the model and the user during execution.
- Leverage external sources (like ChatGPT plugins or OpenAI's GPTs).
- Simplify your code to be DRY and not repeat all the boilerplate code for each prompt.
- Versioning of promptbooks
- Reuse parts of promptbooks in/between projects.
- Run the LLM optimally in parallel, with the best cost/quality ratio or speed/quality ratio.
- Execution report to see what happened during the execution (a small sketch of saving one follows this list).
- Logging the results of the promptbooks.
- (Not ready yet) Caching calls to LLMs to save money and time.
- (Not ready yet) Extend one prompt book from another one.
- (Not ready yet) Leverage the streaming to make super cool UI/UX.
- (Not ready yet) A/B testing to determine which prompt works best for the job.
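For example, the execution report is returned together with the output parameters (as shown in the usage examples above). A minimal sketch of persisting it for later inspection, assuming only Node's standard library, might look like this:

```typescript
import { writeFile } from 'node:fs/promises';

// `result` is the object returned by `await pipelineExecutor(inputParameters)`
// (see the Usage section above); its type is kept loose here on purpose.
async function saveExecutionReport(result: { executionReport: unknown }): Promise<void> {
    // Persist the report so you can inspect later what happened during the execution
    await writeFile(
        `./execution-report-${Date.now()}.json`,
        JSON.stringify(result.executionReport, null, 4),
        'utf-8',
    );
}
```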
## Promptbook (for prompt-engineers)
A promptbook markdown file (or `.ptbk.md` file) is a document that describes a pipeline: a series of prompts that are chained together to form a recipe for transforming natural language input.

- Multiple pipelines form a collection, which holds the core know-how of your LLM application.
- These pipelines are designed so that they can be written by non-programmers.
Sample:

File `write-website-content.ptbk.md`:
````markdown
# Create website content

Instructions for creating web page content.

- PIPELINE URL https://promptbook.studio/webgpt/write-website-content.ptbk.md
- PROMPTBOOK VERSION 0.0.1
- INPUT  PARAM `{rawTitle}` Automatically suggested a site name or empty text
- INPUT  PARAM `{rawAssigment}` Automatically generated site entry from image recognition
- OUTPUT PARAM `{websiteContent}` Web content
- OUTPUT PARAM `{keywords}` Keywords

## Specifying the assigment

What is your web about?

```
{rawAssigment}
```

`-> {assigment}` Website assignment and specification

## ✨ Improving the title

- MODEL VARIANT Chat
- MODEL NAME `gpt-4`
- POSTPROCESSING `unwrapResult`

```
As an experienced marketing specialist, you have been entrusted with improving the name of your client's business.

A suggested name from a client:
"{rawTitle}"

Assignment from customer:

> {assigment}

## Instructions:

- Write only one name suggestion
- The name will be used on the website, business cards, visuals, etc.
```

`-> {enhancedTitle}` Enhanced title

## Website title approval

Is the title for your website okay?

```
{enhancedTitle}
```

`-> {title}` Title for the website

## Cunning subtitle

- MODEL VARIANT Chat
- MODEL NAME `gpt-4`
- POSTPROCESSING `unwrapResult`

```
As an experienced copywriter, you have been entrusted with creating a claim for the "{title}" web page.

A website assignment from a customer:

> {assigment}

## Instructions:

- Write only one name suggestion
- Claim will be used on website, business cards, visuals, etc.
- Claim should be punchy, funny, original
```

`-> {claim}` Claim for the web

## Keyword analysis

- MODEL VARIANT Chat
- MODEL NAME `gpt-4`

```
As an experienced SEO specialist, you have been entrusted with creating keywords for the website "{title}".

Website assignment from the customer:

> {assigment}

## Instructions:

- Write a list of keywords
- Keywords are in basic form

## Example:

- Ice cream
- Olomouc
- Quality
- Family
- Tradition
- Italy
- Craft
```

`-> {keywords}` Keywords

## Combine the beginning

```
# {title}

> {claim}
```

`-> {contentBeginning}` Beginning of web content

## Write the content

- MODEL VARIANT Completion
- MODEL NAME `gpt-3.5-turbo-instruct`

```
As an experienced copywriter and web designer, you have been entrusted with creating text for a new website {title}.

A website assignment from a customer:

> {assigment}

## Instructions:

- Text formatting is in Markdown
- Be concise and to the point
- Use keywords, but they should be naturally in the text
- This is the complete content of the page, so don't forget all the important information and elements the page should contain
- Use headings, bullets, text formatting

## Keywords:

{keywords}

## Web Content:

{contentBeginning}
```

`-> {contentBody}` Middle of the web content

## Combine the content

```
{contentBeginning}

{contentBody}
```

`-> {websiteContent}`
````
The following scheme shows how the promptbook above is executed:
```mermaid
%% Tip: Open this on GitHub or in the VSCode website to see the Mermaid graph visually

flowchart LR
  subgraph "Create website content"

      direction TB

      input((Input)):::input
      templateSpecifyingTheAssigment(Specifying the assigment)
      input--"{rawAssigment}"-->templateSpecifyingTheAssigment
      templateImprovingTheTitle(✨ Improving the title)
      input--"{rawTitle}"-->templateImprovingTheTitle
      templateSpecifyingTheAssigment--"{assigment}"-->templateImprovingTheTitle
      templateWebsiteTitleApproval(Website title approval)
      templateImprovingTheTitle--"{enhancedTitle}"-->templateWebsiteTitleApproval
      templateCunningSubtitle(Cunning subtitle)
      templateWebsiteTitleApproval--"{title}"-->templateCunningSubtitle
      templateSpecifyingTheAssigment--"{assigment}"-->templateCunningSubtitle
      templateKeywordAnalysis(Keyword analysis)
      templateWebsiteTitleApproval--"{title}"-->templateKeywordAnalysis
      templateSpecifyingTheAssigment--"{assigment}"-->templateKeywordAnalysis
      templateCombineTheBeginning(Combine the beginning)
      templateWebsiteTitleApproval--"{title}"-->templateCombineTheBeginning
      templateCunningSubtitle--"{claim}"-->templateCombineTheBeginning
      templateWriteTheContent(Write the content)
      templateWebsiteTitleApproval--"{title}"-->templateWriteTheContent
      templateSpecifyingTheAssigment--"{assigment}"-->templateWriteTheContent
      templateKeywordAnalysis--"{keywords}"-->templateWriteTheContent
      templateCombineTheBeginning--"{contentBeginning}"-->templateWriteTheContent
      templateCombineTheContent(Combine the content)
      templateCombineTheBeginning--"{contentBeginning}"-->templateCombineTheContent
      templateWriteTheContent--"{contentBody}"-->templateCombineTheContent

      templateCombineTheContent--"{websiteContent}"-->output
      output((Output)):::output

      classDef input color: grey;
      classDef output color: grey;

  end;
```
Note: We are using postprocessing functions such as `unwrapResult`, which can be used to clean up the raw model output before it is passed to the next step.
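To give an intuition, here is a minimal sketch of what such a postprocessing function might do. It is illustrative only and is not the library's actual implementation of `unwrapResult`:

```typescript
// Illustrative only - NOT the library's actual unwrapResult implementation.
// A postprocessing function takes the raw model output and returns a cleaned string.
function unwrapResultSketch(rawOutput: string): string {
    let result = rawOutput.trim();

    // Drop a leading label such as `Name:` or `Suggested title:` if the model added one
    result = result.replace(/^[A-Za-z ]{1,40}:\s*/, '');

    // Strip one pair of wrapping quotes, if present
    if (
        (result.startsWith('"') && result.endsWith('"')) ||
        (result.startsWith("'") && result.endsWith("'"))
    ) {
        result = result.slice(1, -1);
    }

    return result.trim();
}

console.info(unwrapResultSketch('Suggested title: "Ice Cream of Olomouc"'));
// -> Ice Cream of Olomouc
```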
## 📦 Packages

This library is divided into several packages, all published from a single monorepo.

You can install all of them at once:

```bash
npm i ptbk
```

Or you can install them separately:

Packages marked with ⭐ are worth trying first.
## Dictionary
The following glossary is used to clarify certain concepts:
### Core concepts

### Advanced concepts
## Usage in TypeScript / JavaScript
## When to use Promptbook?

### ✅ When to use
- When you are writing an app that generates complex things via LLMs, like websites, articles, presentations, code, stories, songs, ...
- When you want to separate code from text prompts
- When you want to describe complex prompt pipelines and don't want to do it in the code
- When you want to orchestrate multiple prompts together
- When you want to reuse parts of prompts in multiple places
- When you want to version your prompts and test multiple versions
- When you want to log the execution of prompts and backtrace the issues
### ❌ When not to use
- When you are writing just a simple chatbot without any extra logic beyond system messages
## Known issues

## Intentionally not implemented features

## ❓ FAQ
If you have a question, start a discussion, open an issue, or write me an email.
### Why not just use the OpenAI SDK / Anthropic Claude SDK / ...?
Different levels of abstraction. The OpenAI SDK is for direct use of the OpenAI API. This library sits at a higher level of abstraction: it is for creating prompt templates and promptbooks that are independent of the underlying SDK, LLM model, or even LLM provider.
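To illustrate the difference, here is a rough sketch (the OpenAI call is plain SDK usage; the prompt text and model name are arbitrary examples):

```typescript
import OpenAI from 'openai';

// Raw SDK usage: you talk to one concrete model of one concrete provider,
// and the prompt lives directly in your application code.
const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });
const completion = await openai.chat.completions.create({
    model: 'gpt-4',
    messages: [{ role: 'user', content: 'Write an article about the word "crocodile"' }],
});
console.info(completion.choices[0].message.content);

// Promptbook usage (see the Usage section above): the prompts live in a *.ptbk.md pipeline,
// and the same pipeline can be executed with Azure OpenAI, OpenAI, Anthropic Claude, ...
```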
### How is it different from the Langchain library?
Langchain is primarily aimed at ML developers working in Python. This library is for developers working in JavaScript/TypeScript and creating applications for end users.
We are considering creating a bridge/converter between these two libraries.
### Promptbooks vs. OpenAI's GPTs
GPTs are chat assistants that can be assigned to specific tasks and materials. But they are still chat assistants. Promptbooks are a way to orchestrate many more predefined tasks and to keep much tighter control over the process. Promptbooks are not a good technology for creating human-like chatbots; GPTs are not a good technology for creating outputs with specific requirements.
### Where should I store my promptbooks?
If you use raw SDKs, you just put prompts in the source code, mixed in with the TypeScript, JavaScript, Python, or whatever programming language you use.
If you use promptbooks, you can store them in several places, each with its own advantages and disadvantages:
- As source code, typically git-committed. In this case, you can use the versioning system, and the promptbooks will be tightly coupled with the version of the application. You still get the power of promptbooks, as you separate the concerns of the prompt engineer and the programmer.
- As data in a database. In this case, promptbooks are like posts / articles on a blog. They can be modified independently of the application, so you don't need to redeploy the application to change them. You can have multiple versions of promptbooks for each user, and you can have a web interface for non-programmers to create and modify promptbooks. But you lose the versioning system, and you still have to consider the interface between the promptbooks and the application (= input and output parameters).
- In configuration, such as environment variables. This is a good way to store promptbooks if you have an application with multiple deployments, you want different but simple promptbooks for each deployment, and you don't need to change them often (a sketch of this approach follows this list).
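For the configuration option, one simple approach is to write the promptbook source from an environment variable into a temporary directory and load it with `createCollectionFromDirectory`, exactly as in the examples above. This is a sketch, not an official recipe, and the `WRITE_ARTICLE_PTBK` variable name is just an example:

```typescript
import { writeFile, mkdtemp } from 'node:fs/promises';
import { tmpdir } from 'node:os';
import { join } from 'node:path';
import { createCollectionFromDirectory } from '@promptbook/node';

// WRITE_ARTICLE_PTBK is assumed to hold the full text of a *.ptbk.md file
const promptbookSource = process.env.WRITE_ARTICLE_PTBK;
if (!promptbookSource) {
    throw new Error('Missing WRITE_ARTICLE_PTBK environment variable');
}

// Write the source into a temporary directory and load it as a collection
const directory = await mkdtemp(join(tmpdir(), 'promptbook-collection-'));
await writeFile(join(directory, 'write-article.ptbk.md'), promptbookSource, 'utf-8');

const collection = await createCollectionFromDirectory(directory);
```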
### What should I do when I need the same promptbook in multiple human languages?
A single promptbook can be written for several (human) languages at once. However, we recommend that you have separate promptbooks for each language.
In large language models, you will get better results if you have prompts in the same language as the user input.
The best way to manage this is to have suffixed promptbooks like `write-website-content.en.ptbk.md` and `write-website-content.cs.ptbk.md` for each supported language.
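A hedged sketch of how an application might pick the right language variant at runtime, reusing `createCollectionFromDirectory` and `getPipelineByUrl` from the examples above (the URL pattern is illustrative; use whatever PIPELINE URLs your promptbooks actually declare):

```typescript
import { createCollectionFromDirectory } from '@promptbook/node';

// Hypothetical helper: pick the language-specific variant of a pipeline by its suffix.
async function getLocalizedPipeline(language: 'en' | 'cs') {
    const collection = await createCollectionFromDirectory('./promptbook-collection');
    return collection.getPipelineByUrl(
        `https://promptbook.studio/webgpt/write-website-content.${language}.ptbk.md`,
    );
}

const pipeline = await getLocalizedPipeline('en');
```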
## Changelog
See CHANGELOG.md
## License

Promptbook by Pavol Hejný is licensed under CC BY 4.0
## Todos
See TODO.md
## Contributing
I am open to pull requests, feedback, and suggestions. If you like this utility, you can ☕ buy me a coffee or donate via cryptocurrencies.

You can also ⭐ star the promptbook package, follow me on GitHub, or find me on various other social networks.