@promptbook/anthropic-claude
Build responsible, controlled and transparent applications on top of LLM models!
Read knowledge from `.docx`, `.doc`, and `.pdf` documents.

Warning: This is a pre-release version of the library. It is not yet ready for production use. Please look at the latest stable release.
`@promptbook/anthropic-claude` is one part of the Promptbook ecosystem.

To install this package, run:
```bash
# Install the entire promptbook ecosystem
npm i ptbk

# Install just this package to save space
npm install @promptbook/anthropic-claude
```
`@promptbook/anthropic-claude` integrates Anthropic's Claude API with Promptbook. It allows you to execute Promptbooks with Anthropic's Claude 2 and Claude 3 models.
```typescript
import { createPipelineExecutor, assertsExecutionSuccessful } from '@promptbook/core';
import {
    createCollectionFromDirectory,
    $provideExecutablesForNode,
    $provideFilesystemForNode,
    $provideScrapersForNode,
} from '@promptbook/node';
import { JavascriptExecutionTools } from '@promptbook/execute-javascript';
import { AnthropicClaudeExecutionTools } from '@promptbook/anthropic-claude';

// ▶ Prepare tools
const fs = $provideFilesystemForNode();
const llm = new AnthropicClaudeExecutionTools(
    //            <- TODO: [🧱] Implement in a functional (not new Class) way
    {
        isVerbose: true,
        apiKey: process.env.ANTHROPIC_CLAUDE_API_KEY,
    },
);
const executables = await $provideExecutablesForNode();
const tools = {
    llm,
    fs,
    scrapers: await $provideScrapersForNode({ fs, llm, executables }),
    script: [new JavascriptExecutionTools()],
};

// ▶ Create whole pipeline collection
const collection = await createCollectionFromDirectory('./promptbook-collection', tools);

// ▶ Get single Pipeline
const pipeline = await collection.getPipelineByUrl(`https://promptbook.studio/my-collection/write-article.ptbk.md`);

// ▶ Create executor - the function that will execute the Pipeline
const pipelineExecutor = createPipelineExecutor({ pipeline, tools });

// ▶ Prepare input parameters
const inputParameters = { word: 'rabbit' };

// 🚀▶ Execute the Pipeline
const result = await pipelineExecutor(inputParameters);

// ▶ Fail if the execution was not successful
assertsExecutionSuccessful(result);

// ▶ Handle the result
const { isSuccessful, errors, outputParameters, executionReport } = result;
console.info(outputParameters);
```
You can just use the `$provideExecutionToolsForNode` function to create all required tools automatically from environment variables like `ANTHROPIC_CLAUDE_API_KEY` and `OPENAI_API_KEY`.
```typescript
import { createPipelineExecutor, assertsExecutionSuccessful } from '@promptbook/core';
import { createCollectionFromDirectory, $provideExecutionToolsForNode } from '@promptbook/node';

// ▶ Prepare tools
const tools = await $provideExecutionToolsForNode();

// ▶ Create whole pipeline collection
const collection = await createCollectionFromDirectory('./promptbook-collection', tools);

// ▶ Get single Pipeline
const pipeline = await collection.getPipelineByUrl(`https://promptbook.studio/my-collection/write-article.ptbk.md`);

// ▶ Create executor - the function that will execute the Pipeline
const pipelineExecutor = createPipelineExecutor({ pipeline, tools });

// ▶ Prepare input parameters
const inputParameters = { word: 'dog' };

// 🚀▶ Execute the Pipeline
const result = await pipelineExecutor(inputParameters);

// ▶ Fail if the execution was not successful
assertsExecutionSuccessful(result);

// ▶ Handle the result
const { isSuccessful, errors, outputParameters, executionReport } = result;
console.info(outputParameters);
```
You can use multiple LLM providers in one Promptbook execution. The best model will be chosen automatically according to the prompt and the model's capabilities.
```typescript
import { createPipelineExecutor, assertsExecutionSuccessful } from '@promptbook/core';
import {
    createCollectionFromDirectory,
    $provideExecutablesForNode,
    $provideFilesystemForNode,
    $provideScrapersForNode,
} from '@promptbook/node';
import { JavascriptExecutionTools } from '@promptbook/execute-javascript';
import { AnthropicClaudeExecutionTools } from '@promptbook/anthropic-claude';
import { OpenAiExecutionTools } from '@promptbook/openai';
import { AzureOpenAiExecutionTools } from '@promptbook/azure-openai';

// ▶ Prepare multiple tools
const fs = $provideFilesystemForNode();
const llm = [
    // Note: You can use multiple LLM providers in one Promptbook execution.
    //       The best model will be chosen automatically according to the prompt and the model's capabilities.
    new AnthropicClaudeExecutionTools(
        //            <- TODO: [🧱] Implement in a functional (not new Class) way
        {
            apiKey: process.env.ANTHROPIC_CLAUDE_API_KEY,
        },
    ),
    new OpenAiExecutionTools(
        {
            apiKey: process.env.OPENAI_API_KEY,
        },
    ),
    new AzureOpenAiExecutionTools(
        {
            resourceName: process.env.AZUREOPENAI_RESOURCE_NAME,
            deploymentName: process.env.AZUREOPENAI_DEPLOYMENT_NAME,
            apiKey: process.env.AZUREOPENAI_API_KEY,
        },
    ),
];
const executables = await $provideExecutablesForNode();
const tools = {
    llm,
    fs,
    scrapers: await $provideScrapersForNode({ fs, llm, executables }),
    script: [new JavascriptExecutionTools()],
};

// ▶ Create whole pipeline collection
const collection = await createCollectionFromDirectory('./promptbook-collection', tools);

// ▶ Get single Pipeline
const pipeline = await collection.getPipelineByUrl(`https://promptbook.studio/my-collection/write-article.ptbk.md`);

// ▶ Create executor - the function that will execute the Pipeline
const pipelineExecutor = createPipelineExecutor({ pipeline, tools });

// ▶ Prepare input parameters
const inputParameters = { word: 'bunny' };

// 🚀▶ Execute the Pipeline
const result = await pipelineExecutor(inputParameters);

// ▶ Fail if the execution was not successful
assertsExecutionSuccessful(result);

// ▶ Handle the result
const { isSuccessful, errors, outputParameters, executionReport } = result;
console.info(outputParameters);
```
See the other models available in the Promptbook package.

The rest of the documentation is common to the entire Promptbook ecosystem:
If you have a simple, single prompt for ChatGPT, GPT-4, Anthropic Claude, Google Gemini, Llama 3, or whatever, it doesn't matter how you integrate it. Whether it's calling a REST API directly, using the SDK, hardcoding the prompt into the source code, or importing a text file, the process remains the same.
But often you will struggle with the limitations of LLMs, such as hallucinations, off-topic responses, poor quality output, language and prompt drift, word repetition repetition repetition repetition or misuse, lack of context, or just plain weird resp0nses. When this happens, you generally have three options:
In all of these situations, but especially in 3., the ✨ Promptbook can make your life waaaaaaaaaay easier.
`temperature`, `top-k`, `top-p`, or kernel sampling. Just write your intent and the persona who should be responsible for the task, and let the library do the rest. :)
can't avoid the problems. In this case, the library has built-in anomaly detection and logging to help you find and fix them.

The Promptbook project is an ecosystem of multiple projects and tools. The following is a list of the most important pieces of the project:
| Project | Description | Link |
|---|---|---|
| Core | Promptbook core is a description and documentation of the basic inner workings of how Promptbook should be implemented, and defines which features must be describable by the book language | https://ptbk.io, https://github.com/webgptorg/book |
| Book language | Book is a markdown-like language for defining core entities like projects, pipelines, and knowledge. It is designed to be understandable by non-programmers and non-technical people | |
| Promptbook TypeScript project | Implementation of Promptbook in TypeScript, published as multiple packages on NPM | https://github.com/webgptorg/promptbook + multiple packages on NPM |
| Promptbook Studio | No-code studio to write books without needing to write even the markdown | https://promptbook.studio, https://github.com/hejny/promptbook-studio |
| Promptbook miniapps | Builder of LLM miniapps from book notation | |
The following is the documentation and blueprint of the Book language.
```markdown
# My first Book

-   PERSONA Jane, marketing specialist with prior experience in writing articles about technology and artificial intelligence
-   KNOWLEDGE https://ptbk.io
-   KNOWLEDGE ./promptbook.pdf
-   EXPECT MIN 1 Sentence
-   EXPECT MAX 1 Paragraph

> Write an article about the future of artificial intelligence in the next 10 years and how metalanguages will change the way AI is used in the world.
> Look specifically at the impact of Promptbook on the AI industry.

-> {article}
```
The file format is designed to be easy to read and write. It is a strict subset of Markdown, designed to be understandable by both humans and machines, even without specific knowledge of the language.
Book files use the `.ptbk.md` or `.book` extension with `UTF-8` encoding without BOM.
As it is source code, it can leverage all the features of version control systems like git and does not suffer from the problems of binary formats, proprietary formats, or no-code solutions.
But unlike programming languages, it is designed to be understandable by non-programmers and non-technical people.
A book is divided into sections, and each section starts with a heading. The language itself is not sensitive to the heading level (`h1`, `h2`, `h3`, ...), but it is recommended to use `h1` for the header section and `h2` for other sections.
The header is the first section of the book. It contains metadata about the pipeline. It is recommended to use an `h1` heading for the header section, but this is not required.
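For illustration, a minimal header section might look like the following sketch. The commands used (`PERSONA`, `KNOWLEDGE`) are taken from the example book earlier in this document; the pipeline title and the persona are hypothetical:

```markdown
<!-- Hypothetical example of a header section -->
# My translation pipeline

-   PERSONA Joe, a professional translator
-   KNOWLEDGE ./glossary.pdf
```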
Reserved words: `PERSONA`, `EXPECT`, `KNOWLEDGE`, etc.

Reserved parameter names: `content`, `context`, `knowledge`, `examples`, `modelName`, `currentDate`
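These parameter names can be referenced in prompt templates using the same `{parameter}` curly-brace notation used for `{article}` in the example book above. A hypothetical sketch (the template text and the `{summary}` output name are illustrative, not taken from the official documentation):

```markdown
<!-- Hypothetical example of referencing reserved parameters -->
> Using the provided {knowledge} and {context}, summarize the following:
> {content}

-> {summary}
```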
This library is divided into several packages, all published from a single monorepo. You can install all of them at once:

```bash
npm i ptbk
```

Or you can install them separately:

⭐ Marked packages are worth trying first
Packages include the `ptbk` meta-package and scrapers for `.pdf` documents, `.docx`, `.odt`, … documents, and `.doc`, `.rtf`, … documents.

The following glossary is used to clarify certain concepts:
Note: This section is not a complete dictionary, but rather a list of general AI/LLM terms that relate to Promptbook.
If you have a question, start a discussion, open an issue, or write me an email.
See CHANGELOG.md
Promptbook by Pavol Hejný is licensed under CC BY 4.0
See TODO.md
I am open to pull requests, feedback, and suggestions. Or if you like this utility, you can buy me a coffee or donate via cryptocurrencies.
You can also ⭐ star the promptbook package, or follow me on GitHub and various other social networks.