Promptbook

Build responsible, controlled and transparent applications on top of LLM models!


✨ New Features

📦 Package @promptbook/anthropic-claude

To install this package, run:

# Install entire promptbook ecosystem
npm i ptbk

# Install just this package to save space
npm install @promptbook/anthropic-claude

@promptbook/anthropic-claude integrates Anthropic's Claude API with Promptbook. It lets you execute Promptbooks with Anthropic Claude 2 and 3 models.

🧡 Usage

import { createPipelineExecutor, assertsExecutionSuccessful } from '@promptbook/core';
import { createCollectionFromDirectory } from '@promptbook/node';
import { JavascriptExecutionTools } from '@promptbook/execute-javascript';
import { AnthropicClaudeExecutionTools } from '@promptbook/anthropic-claude';

// ▶ Create whole pipeline collection
const collection = await createCollectionFromDirectory('./promptbook-collection');

// ▶ Get single Pipeline
const pipeline = await collection.getPipelineByUrl(`https://promptbook.studio/my-collection/write-article.ptbk.md`);

// ▶ Prepare tools
const tools = {
    llm: new AnthropicClaudeExecutionTools(
        //            <- TODO: [🧱] Implement in a functional (not new Class) way
        {
            isVerbose: true,
            apiKey: process.env.ANTHROPIC_CLAUDE_API_KEY,
        },
    ),
    script: [
        new JavascriptExecutionTools(),
        //            <- TODO: [🧱] Implement in a functional (not new Class) way
    ],
};

// ▶ Create executor - the function that will execute the Pipeline
const pipelineExecutor = createPipelineExecutor({ pipeline, tools });

// ▶ Prepare input parameters
const inputParameters = { word: 'rabbit' };

// 🚀▶ Execute the Pipeline
const result = await pipelineExecutor(inputParameters);

// ▶ Fail if the execution was not successful
assertsExecutionSuccessful(result);

// ▶ Handle the result
const { isSuccessful, errors, outputParameters, executionReport } = result;
console.info(outputParameters);
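
If you prefer not to throw on failure, you can skip assertsExecutionSuccessful and branch on the result fields destructured above instead (a minimal sketch using only values already shown in this example):

// ▶ Alternative to assertsExecutionSuccessful: inspect the result flags yourself
if (!isSuccessful) {
    // Note: `errors` holds the errors collected during the execution (if any)
    console.error(errors);
} else {
    // Note: `executionReport` can be logged or stored to backtrace issues later
    console.info(outputParameters);
}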

🧙‍♂️ Connect to LLM providers automatically

You can use the createLlmToolsFromEnv function to create LLM tools automatically from environment variables such as ANTHROPIC_CLAUDE_API_KEY and OPENAI_API_KEY.

import { createPipelineExecutor, assertsExecutionSuccessful } from '@promptbook/core';
import { JavascriptExecutionTools } from '@promptbook/execute-javascript';
import { createCollectionFromDirectory, createLlmToolsFromEnv } from '@promptbook/node';

// ▶ Create whole pipeline collection
const collection = await createCollectionFromDirectory('./promptbook-collection');

// ▶ Get single Pipeline
const pipeline = await collection.getPipelineByUrl(`https://promptbook.studio/my-collection/write-article.ptbk.md`);

// ▶ Prepare multiple tools
const tools = {
    // Note: 🧙‍♂️ Just call `createLlmToolsFromEnv` to automatically connect to all configured providers
    llm: createLlmToolsFromEnv(),
    script: [
        new JavascriptExecutionTools(),
        //            <- TODO: [🧱] Implement in a functional (not new Class) way
    ],
};

// ▶ Create executor - the function that will execute the Pipeline
const pipelineExecutor = createPipelineExecutor({ pipeline, tools });

// ▶ Prepare input parameters
const inputParameters = { word: 'dog' };

// 🚀▶ Execute the Pipeline
const result = await pipelineExecutor(inputParameters);

// ▶ Fail if the execution was not successful
assertsExecutionSuccessful(result);

// ▶ Handle the result
const { isSuccessful, errors, outputParameters, executionReport } = result;
console.info(outputParameters);

💕 Usage of multiple LLM providers

You can use multiple LLM providers in one Promptbook execution. The best model will be chosen automatically according to the prompt and the model's capabilities.

import { createPipelineExecutor, assertsExecutionSuccessful } from '@promptbook/core';
import { createCollectionFromDirectory } from '@promptbook/node';
import { JavascriptExecutionTools } from '@promptbook/execute-javascript';
import { AnthropicClaudeExecutionTools } from '@promptbook/anthropic-claude';
import { OpenAiExecutionTools } from '@promptbook/openai';
import { AzureOpenAiExecutionTools } from '@promptbook/azure-openai';

// ▶ Create whole pipeline collection
const collection = await createCollectionFromDirectory('./promptbook-collection');

// ▶ Get single Pipeline
const pipeline = await collection.getPipelineByUrl(`https://promptbook.studio/my-collection/write-article.ptbk.md`);

// ▶ Prepare multiple tools
const tools = {
    llm: [
        // Note: 💕 You can use multiple LLM providers in one Promptbook execution.
        //       The best model will be chosen automatically according to the prompt and the model's capabilities.
        new AnthropicClaudeExecutionTools(
            //            <- TODO: [🧱] Implement in a functional (not new Class) way
            {
                apiKey: process.env.ANTHROPIC_CLAUDE_API_KEY,
            },
        ),
        new OpenAiExecutionTools(
            //            <- TODO: [🧱] Implement in a functional (not new Class) way
            {
                apiKey: process.env.OPENAI_API_KEY,
            },
        ),
        new AzureOpenAiExecutionTools(
            //            <- TODO: [🧱] Implement in a functional (not new Class) way
            {
                resourceName: process.env.AZUREOPENAI_RESOURCE_NAME,
                deploymentName: process.env.AZUREOPENAI_DEPLOYMENT_NAME,
                apiKey: process.env.AZUREOPENAI_API_KEY,
            },
        ),
    ],
    script: [
        new JavascriptExecutionTools(),
        //            <- TODO: [🧱] Implement in a functional (not new Class) way
    ],
};

// ▶ Create executor - the function that will execute the Pipeline
const pipelineExecutor = createPipelineExecutor({ pipeline, tools });

// ▶ Prepare input parameters
const inputParameters = { word: 'bunny' };

// 🚀▶ Execute the Pipeline
const result = await pipelineExecutor(inputParameters);

// ▶ Fail if the execution was not successful
assertsExecutionSuccessful(result);

// ▶ Handle the result
const { isSuccessful, errors, outputParameters, executionReport } = result;
console.info(outputParameters);

💙 Integration with other models

See the other models available in the Promptbook package:


The rest of the documentation is common to the entire promptbook ecosystem:

🤍 The Promptbook Whitepaper

If you have a simple, single prompt for ChatGPT, GPT-4, Anthropic Claude, Google Gemini, Llama 2, or whatever, it doesn't matter how you integrate it. Whether it's calling a REST API directly, using the SDK, hardcoding the prompt into the source code, or importing a text file, the process remains the same.

But often you will struggle with the limitations of LLMs, such as hallucinations, off-topic responses, poor quality output, language drift, word repetition repetition repetition repetition or misuse, lack of context, or just plain w𝒆𝐢rd responses. When this happens, you generally have three options:

  1. Fine-tune the model to your specifications or even train your own.
  2. Prompt-engineer the prompt to the best shape you can achieve.
  3. Orchestrate multiple prompts in a pipeline to get the best result.

In all of these situations, but especially in the third, the Promptbook library can make your life easier.

  • Separates concerns between the prompt engineer and the programmer, between code files and prompt files, and between prompts and their execution logic.
  • Establishes a common format, .ptbk.md, that can be used to describe your prompt business logic without having to write code or deal with the technicalities of LLMs.
  • Lets you forget about low-level details like choosing the right model, tokens, context size, temperature, top-k, or top-p (nucleus) sampling. Just write your intent and the persona who should be responsible for the task, and let the library do the rest.
  • Has built-in orchestration of pipeline execution and many tools to make the process easier, more reliable, and more efficient, such as caching, compilation and preparation, just-in-time fine-tuning, expectation-aware generation, agent adversary expectations, and more.
  • Sometimes even the best prompts with the best framework like Promptbook :) can't avoid problems. For these cases, the library has built-in anomaly detection and logging to help you find and fix them.
  • Promptbook has built-in versioning. You can test multiple A/B versions of pipelines and see which one works best (see the sketch after this list).
  • Promptbook is designed to do RAG (Retrieval-Augmented Generation) and other advanced techniques. You can use knowledge to improve the quality of the output.
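
For example, A/B testing two versions of a pipeline can be as simple as choosing between two pipeline URLs from the same collection (a minimal sketch; the second URL and the 50/50 split are hypothetical and purely for illustration):

// ▶ Hypothetical A/B test between two versions of a pipeline in the collection
const pipeline = await collection.getPipelineByUrl(
    Math.random() < 0.5
        ? `https://promptbook.studio/my-collection/write-article.ptbk.md`
        : `https://promptbook.studio/my-collection/write-article-v2.ptbk.md`, // <- Note: Hypothetical second version
);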

🧔 Pipeline (for prompt engineers)

A promptbook markdown file (a .ptbk.md file) is a document that describes a pipeline - a series of prompts chained together to form a recipe for transforming natural language input.

  • Multiple pipelines form a collection, which holds the core know-how of your LLM application.
  • These pipelines are designed so that they can be written by non-programmers.

Sample:

File write-website-content.ptbk.md:

🌍 Create website content

Instructions for creating web page content.

  • PIPELINE URL https://promptbook.studio/webgpt/write-website-content.ptbk.md
  • INPUT  PARAM {rawTitle} Automatically suggested site name, or empty text
  • INPUT  PARAM {rawAssigment} Automatically generated site entry from image recognition
  • OUTPUT PARAM {websiteContent} Web content
  • OUTPUT PARAM {keywords} Keywords

👤 Specifying the assigment

What is your web about?

  • DIALOG TEMPLATE
{rawAssigment}

-> {assigment} Website assignment and specification

✨ Improving the title

  • PERSONA Jane, Copywriter and Marketing Specialist.
As an experienced marketing specialist, you have been entrusted with improving the name of your client's business.

A suggested name from a client:
"{rawTitle}"

Assignment from customer:

> {assigment}

## Instructions:

-   Write only one name suggestion
-   The name will be used on the website, business cards, visuals, etc.

-> {enhancedTitle} Enhanced title

👤 Website title approval

Is the title for your website okay?

  • DIALOG TEMPLATE
{enhancedTitle}

-> {title} Title for the website

🐰 Cunning subtitle

  • PERSONA Josh, a copywriter, tasked with creating a claim for the website.
As an experienced copywriter, you have been entrusted with creating a claim for the "{title}" web page.

A website assignment from a customer:

> {assigment}

## Instructions:

-   Write only one name suggestion
-   Claim will be used on website, business cards, visuals, etc.
-   Claim should be punchy, funny, original

-> {claim} Claim for the web

🚦 Keyword analysis

  • PERSONA Paul, extremely creative SEO specialist.
As an experienced SEO specialist, you have been entrusted with creating keywords for the website "{title}".

Website assignment from the customer:

> {assigment}

## Instructions:

-   Write a list of keywords
-   Keywords are in basic form

## Example:

-   Ice cream
-   Olomouc
-   Quality
-   Family
-   Tradition
-   Italy
-   Craft

-> {keywords} Keywords

🔗 Combine the beginning

  • SIMPLE TEMPLATE

# {title}

> {claim}

-> {contentBeginning} Beginning of web content

🖋 Write the content

  • PERSONA Jane
As an experienced copywriter and web designer, you have been entrusted with creating text for a new website {title}.

A website assignment from a customer:

> {assigment}

## Instructions:

-   Text formatting is in Markdown
-   Be concise and to the point
-   Use keywords, but they should be naturally in the text
-   This is the complete content of the page, so don't forget all the important information and elements the page should contain
-   Use headings, bullets, text formatting

## Keywords:

{keywords}

## Web Content:

{contentBeginning}

-> {contentBody} Middle of the web content

🔗 Combine the content

  • SIMPLE TEMPLATE
{contentBeginning}

{contentBody}

-> {websiteContent}

The following scheme shows how the promptbook above is executed:

%% 🔮 Tip: Open this on GitHub or in the VSCode website to see the Mermaid graph visually

flowchart LR
  subgraph "🌍 Create website content"

      direction TB

      input((Input)):::input
      templateSpecifyingTheAssigment(👤 Specifying the assigment)
      input--"{rawAssigment}"-->templateSpecifyingTheAssigment
      templateImprovingTheTitle(✨ Improving the title)
      input--"{rawTitle}"-->templateImprovingTheTitle
      templateSpecifyingTheAssigment--"{assigment}"-->templateImprovingTheTitle
      templateWebsiteTitleApproval(👤 Website title approval)
      templateImprovingTheTitle--"{enhancedTitle}"-->templateWebsiteTitleApproval
      templateCunningSubtitle(🐰 Cunning subtitle)
      templateWebsiteTitleApproval--"{title}"-->templateCunningSubtitle
      templateSpecifyingTheAssigment--"{assigment}"-->templateCunningSubtitle
      templateKeywordAnalysis(🚦 Keyword analysis)
      templateWebsiteTitleApproval--"{title}"-->templateKeywordAnalysis
      templateSpecifyingTheAssigment--"{assigment}"-->templateKeywordAnalysis
      templateCombineTheBeginning(🔗 Combine the beginning)
      templateWebsiteTitleApproval--"{title}"-->templateCombineTheBeginning
      templateCunningSubtitle--"{claim}"-->templateCombineTheBeginning
      templateWriteTheContent(🖋 Write the content)
      templateWebsiteTitleApproval--"{title}"-->templateWriteTheContent
      templateSpecifyingTheAssigment--"{assigment}"-->templateWriteTheContent
      templateKeywordAnalysis--"{keywords}"-->templateWriteTheContent
      templateCombineTheBeginning--"{contentBeginning}"-->templateWriteTheContent
      templateCombineTheContent(🔗 Combine the content)
      templateCombineTheBeginning--"{contentBeginning}"-->templateCombineTheContent
      templateWriteTheContent--"{contentBody}"-->templateCombineTheContent

      templateCombineTheContent--"{websiteContent}"-->output
      output((Output)):::output

      classDef input color: grey;
      classDef output color: grey;

  end;

Note: Postprocessing functions like unwrapResult can be used to postprocess the raw result of a template.
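
For illustration, such a postprocessing step typically strips the framing an LLM adds around the actual value (a minimal sketch with a hypothetical helper written in the spirit of unwrapResult, not its actual implementation):

// ▶ Hypothetical postprocessing helper: unwrap a quoted answer like `The title is "Foo"` to `Foo`
function unwrapQuotedResult(rawResult) {
    const match = rawResult.match(/"([^"]+)"/);
    return match ? match[1] : rawResult.trim();
}

console.info(unwrapQuotedResult('Sure! The enhanced title is "Coolest Rabbit Farm"'));
// -> 'Coolest Rabbit Farm'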

📦 Packages

This library is divided into several packages, all published from a single monorepo. You can install all of them at once:

npm i ptbk

Or you can install them separately:

Packages marked with ⭐ are worth trying first.

📚 Dictionary

The following glossary is used to clarify certain concepts:

Core concepts

Advanced concepts

🔌 Usage in TypeScript / JavaScript

➕➖ When to use Promptbook?

➕ When to use

  • When you are writing an app that generates complex things via an LLM - like websites, articles, presentations, code, stories, songs, ...
  • When you want to separate code from text prompts
  • When you want to describe complex prompt pipelines and don't want to do it in the code
  • When you want to orchestrate multiple prompts together
  • When you want to reuse parts of prompts in multiple places
  • When you want to version your prompts and test multiple versions
  • When you want to log the execution of prompts and backtrace the issues

See more

➖ When not to use

  • When you have already implemented a single simple prompt and it works fine for your job
  • When OpenAI Assistant (GPTs) is enough for you
  • When you need streaming (this may be implemented in the future, see discussion).
  • When you need to use something other than JavaScript or TypeScript (other languages are on the way, see the discussion)
  • When your main focus is on something other than text - like images, audio, video, spreadsheets (other media types may be added in the future, see discussion)
  • When you need to use recursion (see the discussion)

See more

🐜 Known issues

🧼 Intentionally not implemented features

❔ FAQ

If you have a question, start a discussion, open an issue, or write me an email.

⌚ Changelog

See CHANGELOG.md

📜 License

Promptbook by Pavol Hejný is licensed under CC BY 4.0

🎯 Todos

See TODO.md

🖋️ Contributing

I am open to pull requests, feedback, and suggestions. Or if you like this utility, you can ☕ buy me a coffee or donate via cryptocurrencies.

You can also ⭐ star the promptbook package, follow me on GitHub or various other social networks.
