@promptbook/anthropic-claude

Build responsible, controlled and transparent applications on top of LLM models!

Read knowledge from .docx, .doc and .pdf documents.

⚠ Warning: This is a pre-release version of the library. It is not yet ready for production use. Please look at the latest stable release.

@promptbook/anthropic-claude is one part of the promptbook ecosystem. To install this package, run:
# Install entire promptbook ecosystem
npm i ptbk
# Install just this package to save space
npm install @promptbook/anthropic-claude
@promptbook/anthropic-claude integrates Anthropic's Claude API with Promptbook. It allows you to execute Promptbooks with Anthropic Claude 2 and 3 models.
import { createPipelineExecutor, assertsExecutionSuccessful } from '@promptbook/core';
import {
    createCollectionFromDirectory,
    $provideExecutablesForNode,
    $provideFilesystemForNode,
    $provideScrapersForNode,
} from '@promptbook/node';
import { JavascriptExecutionTools } from '@promptbook/execute-javascript';
import { AnthropicClaudeExecutionTools } from '@promptbook/anthropic-claude';
// ▶ Prepare tools
const fs = $provideFilesystemForNode();
const llm = new AnthropicClaudeExecutionTools(
// <- TODO: [🧱] Implement in a functional (not new Class) way
{
isVerbose: true,
apiKey: process.env.ANTHROPIC_CLAUDE_API_KEY,
},
);
const executables = await $provideExecutablesForNode();
const tools = {
llm,
fs,
scrapers: await $provideScrapersForNode({ fs, llm, executables }),
script: [new JavascriptExecutionTools()],
};
// ▶ Create whole pipeline collection
const collection = await createCollectionFromDirectory('./promptbook-collection', tools);
// ▶ Get single Pipeline
const pipeline = await collection.getPipelineByUrl(`https://promptbook.studio/my-collection/write-article.ptbk.md`);
// ▶ Create executor - the function that will execute the Pipeline
const pipelineExecutor = createPipelineExecutor({ pipeline, tools });
// ▶ Prepare input parameters
const inputParameters = { word: 'rabbit' };
// 🚀▶ Execute the Pipeline
const result = await pipelineExecutor(inputParameters);
// ▶ Fail if the execution was not successful
assertsExecutionSuccessful(result);
// ▶ Handle the result
const { isSuccessful, errors, outputParameters, executionReport } = result;
console.info(outputParameters);
You can just use the $provideExecutionToolsForNode function to create all required tools automatically from environment variables like ANTHROPIC_CLAUDE_API_KEY and OPENAI_API_KEY.
import { createPipelineExecutor, assertsExecutionSuccessful } from '@promptbook/core';
import { createCollectionFromDirectory, $provideExecutionToolsForNode } from '@promptbook/node';
// ▶ Prepare tools
const tools = await $provideExecutionToolsForNode();
// ▶ Create whole pipeline collection
const collection = await createCollectionFromDirectory('./promptbook-collection', tools);
// ▶ Get single Pipeline
const pipeline = await collection.getPipelineByUrl(`https://promptbook.studio/my-collection/write-article.ptbk.md`);
// ▶ Create executor - the function that will execute the Pipeline
const pipelineExecutor = createPipelineExecutor({ pipeline, tools });
// ▶ Prepare input parameters
const inputParameters = { word: 'dog' };
// 🚀▶ Execute the Pipeline
const result = await pipelineExecutor(inputParameters);
// ▶ Fail if the execution was not successful
assertsExecutionSuccessful(result);
// ▶ Handle the result
const { isSuccessful, errors, outputParameters, executionReport } = result;
console.info(outputParameters);
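The example above assumes that ANTHROPIC_CLAUDE_API_KEY, OPENAI_API_KEY, etc. are already present in process.env. A minimal sketch of one way to get them there, using the external dotenv package (dotenv is an assumption here, not part of the promptbook ecosystem):

// Hypothetical setup: load a local .env file before asking Promptbook for tools.
// `dotenv` is a separate npm package (`npm install dotenv`), not part of promptbook.
import 'dotenv/config'; // expects ANTHROPIC_CLAUDE_API_KEY, OPENAI_API_KEY, ... in .env
import { $provideExecutionToolsForNode } from '@promptbook/node';

// The tools are created from the environment variables loaded above.
const tools = await $provideExecutionToolsForNode();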
You can use multiple LLM providers in one Promptbook execution. The best model will be chosen automatically according to the prompt and the model's capabilities.
import { createPipelineExecutor, assertsExecutionSuccessful } from '@promptbook/core';
import {
    createCollectionFromDirectory,
    $provideExecutablesForNode,
    $provideFilesystemForNode,
    $provideScrapersForNode,
} from '@promptbook/node';
import { JavascriptExecutionTools } from '@promptbook/execute-javascript';
import { AnthropicClaudeExecutionTools } from '@promptbook/anthropic-claude';
import { AzureOpenAiExecutionTools } from '@promptbook/azure-openai';
import { OpenAiExecutionTools } from '@promptbook/openai';
// ▶ Prepare multiple tools
const fs = $provideFilesystemForNode();
const llm = [
// Note: 💡 You can use multiple LLM providers in one Promptbook execution.
// The best model will be chosen automatically according to the prompt and the model's capabilities.
new AnthropicClaudeExecutionTools(
// <- TODO: [🧱] Implement in a functional (not new Class) way
{
apiKey: process.env.ANTHROPIC_CLAUDE_API_KEY,
},
),
new OpenAiExecutionTools(
// <- TODO: [🧱] Implement in a functional (not new Class) way
{
apiKey: process.env.OPENAI_API_KEY,
},
),
new AzureOpenAiExecutionTools(
// <- TODO: [🧱] Implement in a functional (not new Class) way
{
resourceName: process.env.AZUREOPENAI_RESOURCE_NAME,
deploymentName: process.env.AZUREOPENAI_DEPLOYMENT_NAME,
apiKey: process.env.AZUREOPENAI_API_KEY,
},
),
];
const executables = await $provideExecutablesForNode();
const tools = {
llm,
fs,
scrapers: await $provideScrapersForNode({ fs, llm, executables }),
script: [new JavascriptExecutionTools()],
};
// ▶ Create whole pipeline collection
const collection = await createCollectionFromDirectory('./promptbook-collection', tools);
// ▶ Get single Pipeline
const pipeline = await collection.getPipelineByUrl(`https://promptbook.studio/my-collection/write-article.ptbk.md`);
// ▶ Create executor - the function that will execute the Pipeline
const pipelineExecutor = createPipelineExecutor({ pipeline, tools });
// ▶ Prepare input parameters
const inputParameters = { word: 'bunny' };
// 🚀▶ Execute the Pipeline
const result = await pipelineExecutor(inputParameters);
// ▶ Fail if the execution was not successful
assertsExecutionSuccessful(result);
// ▶ Handle the result
const { isSuccessful, errors, outputParameters, executionReport } = result;
console.info(outputParameters);
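Besides throwing via assertsExecutionSuccessful, you can also branch on the result yourself. A minimal sketch using the fields destructured above, assuming they behave as their names suggest:

// Sketch: use the destructured result fields instead of (or in addition to) asserting.
if (!isSuccessful) {
    // `errors` collects what went wrong during the pipeline run
    for (const error of errors) {
        console.error(error);
    }
} else {
    console.info(outputParameters);
}
// `executionReport` can be persisted for later inspection and logging.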
See the other models available in the Promptbook package:
The rest of the documentation is common for the entire promptbook ecosystem:
If you have a simple, single prompt for ChatGPT, GPT-4, Anthropic Claude, Google Gemini, Llama 3, or whatever, it doesn't matter how you integrate it. Whether it's calling a REST API directly, using the SDK, hardcoding the prompt into the source code, or importing a text file, the process remains the same.
But often you will struggle with the limitations of LLMs, such as hallucinations, off-topic responses, poor quality output, language and prompt drift, word repetition repetition repetition repetition or misuse, lack of context, or just plain weird resp0nses. When this happens, you generally have three options:
In all of these situations, but especially in 3., the ✨ Promptbook can make your life waaaaaaaaaay easier.
temperature, top-k, top-p, or kernel sampling. Just write your intent and the persona who should be responsible for the task, and let the library do the rest. :)
can't avoid the problems. In this case, the library has built-in anomaly detection and logging to help you find and fix the problems.

Promptbook whitepaper | Basic motivations and problems which we are trying to solve | https://github.com/webgptorg/book |
Promptbook (system) | Promptbook ... | |
Book language | Book is a markdown-like language to define projects, pipelines, knowledge, ... in the Promptbook system. It is designed to be understandable by non-programmers and non-technical people | |
Promptbook typescript project | Implementation of Promptbook in TypeScript published into multiple packages to NPM | https://github.com/webgptorg/promptbook |
Promptbook studio | Promptbook studio | https://github.com/hejny/promptbook-studio |
Promptbook miniapps | Promptbook miniapps | |
Promptbook pipelines are written in a markdown-like language called Book. It is designed to be understandable by non-programmers and non-technical people.
# 🌟 My first Book
- INPUT PARAMETER {subject}
- OUTPUT PARAMETER {article}
## Sample subject
> Promptbook
-> {subject}
## Write an article
- PERSONA Jane, marketing specialist with prior experience in writing articles about technology and artificial intelligence
- KNOWLEDGE https://ptbk.io
- KNOWLEDGE ./promptbook.pdf
- EXPECT MIN 1 Sentence
- EXPECT MAX 1 Paragraph
> Write an article about the future of artificial intelligence in the next 10 years and how metalanguages will change the way AI is used in the world.
> Look specifically at the impact of {subject} on the AI industry.
-> {article}
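To connect this book to the TypeScript examples above, a sketch like the following could run it. The file name and pipeline URL below are placeholders, and this assumes the book is saved in ./promptbook-collection and declares a pipeline URL of its own:

// Assumption: the book above is stored in ./promptbook-collection/ and is reachable
// under a pipeline URL of your own; both values below are placeholders.
import { createPipelineExecutor, assertsExecutionSuccessful } from '@promptbook/core';
import { createCollectionFromDirectory, $provideExecutionToolsForNode } from '@promptbook/node';

const tools = await $provideExecutionToolsForNode();
const collection = await createCollectionFromDirectory('./promptbook-collection', tools);
const pipeline = await collection.getPipelineByUrl('https://promptbook.studio/my-collection/my-first-book.ptbk.md');
const pipelineExecutor = createPipelineExecutor({ pipeline, tools });

// {subject} is the book's INPUT PARAMETER; {article} is its OUTPUT PARAMETER.
const result = await pipelineExecutor({ subject: 'Promptbook' });
assertsExecutionSuccessful(result);
console.info(result.outputParameters.article); // assumption: output parameters are keyed by name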
This library is divided into several packages, all published from a single monorepo. You can install all of them at once:
npm i ptbk
Or you can install them separately:
⭐ Marked packages are worth trying first
ptbk

[Package list: document scrapers for .pdf documents; .docx, .odt, …; .doc, .rtf, … formats]

The following glossary is used to clarify certain concepts:
If you have a question, start a discussion, open an issue, or write me an email.
See CHANGELOG.md
Promptbook by Pavol Hejný is licensed under CC BY 4.0
See TODO.md
I am open to pull requests, feedback, and suggestions. Or if you like this utility, you can ☕ buy me a coffee or donate via cryptocurrencies.
You can also โญ star the promptbook package, follow me on GitHub or various other social networks.