
@promptbook/fake-llm


@promptbook/fake-llm - npm Package Compare versions

Comparing version 0.72.0 to 0.73.0

esm/typings/src/execution/createPipelineExecutor/getExamplesForTemplate.d.ts


esm/typings/src/commands/TEMPLATE/TemplateTypes.d.ts

@@ -15,2 +15,2 @@ import type { TupleToUnion } from 'type-fest';

*/
export declare const TemplateTypes: readonly ["PROMPT_TEMPLATE", "SIMPLE_TEMPLATE", "SCRIPT_TEMPLATE", "DIALOG_TEMPLATE", "SAMPLE", "KNOWLEDGE", "INSTRUMENT", "ACTION"];
export declare const TemplateTypes: readonly ["PROMPT_TEMPLATE", "SIMPLE_TEMPLATE", "SCRIPT_TEMPLATE", "DIALOG_TEMPLATE", "EXAMPLE", "KNOWLEDGE", "INSTRUMENT", "ACTION"];
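Since the hunk's context imports `TupleToUnion` from `type-fest`, the tuple above is presumably narrowed into a union type; a minimal sketch of that pattern (the `TemplateType` alias name is an assumption, not taken from the package):

```typescript
// Sketch only: deriving a union type from a readonly tuple with type-fest's TupleToUnion.
// The alias name `TemplateType` is assumed; the tuple values mirror the new declaration above.
import type { TupleToUnion } from 'type-fest';

export const TemplateTypes = [
    'PROMPT_TEMPLATE',
    'SIMPLE_TEMPLATE',
    'SCRIPT_TEMPLATE',
    'DIALOG_TEMPLATE',
    'EXAMPLE',
    'KNOWLEDGE',
    'INSTRUMENT',
    'ACTION',
] as const;

export type TemplateType = TupleToUnion<typeof TemplateTypes>;
// => 'PROMPT_TEMPLATE' | 'SIMPLE_TEMPLATE' | ... | 'ACTION'
```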

@@ -124,3 +124,3 @@ import type { CsvSettings } from './formats/csv/CsvSettings';

*/
export declare const RESERVED_PARAMETER_NAMES: readonly ["content", "context", "knowledge", "samples", "modelName", "currentDate"];
export declare const RESERVED_PARAMETER_NAMES: readonly ["content", "context", "knowledge", "examples", "modelName", "currentDate"];
/**

@@ -127,0 +127,0 @@ * @@@
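The rename of the reserved `samples` parameter to `examples` above matters because these names are presumably reserved so that user-defined pipeline parameters cannot shadow them; a minimal sketch of such a guard (the helper name is hypothetical, not the package's API):

```typescript
// Hypothetical guard, not part of @promptbook/core: reject user-defined parameter
// names that collide with the reserved ones declared above.
const RESERVED_PARAMETER_NAMES = ['content', 'context', 'knowledge', 'examples', 'modelName', 'currentDate'] as const;

function assertParameterNameIsNotReserved(parameterName: string): void {
    if ((RESERVED_PARAMETER_NAMES as readonly string[]).includes(parameterName)) {
        throw new Error(`Parameter {${parameterName}} is reserved and cannot be user-defined`);
    }
}

assertParameterNameIsNotReserved('rawTitle'); // passes
// assertParameterNameIsNotReserved('examples'); // would throw after this release
```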

@@ -25,7 +25,7 @@ import type { PipelineJson } from '../../types/PipelineJson/PipelineJson';

/**
* TODO: !!!!! FOREACH in mermaid graph
* TODO: !!!!! Knowledge in mermaid graph
* TODO: !!!!! Personas in mermaid graph
* TODO: [🧠] !! FOREACH in mermaid graph
* TODO: [🧠] !! Knowledge in mermaid graph
* TODO: [🧠] !! Personas in mermaid graph
* TODO: Maybe use some Mermaid package instead of string templating
* TODO: [🕌] When more than 2 functionalities, split into separate functions
*/

@@ -12,3 +12,3 @@ import type { string_json } from '../../types/typeAliases';

/**
* TODO: [๐Ÿ] Not Working propperly @see https://promptbook.studio/samples/mixed-knowledge.ptbk.md
* TODO: [๐Ÿ] Not Working propperly @see https://promptbook.studio/examples/mixed-knowledge.ptbk.md
* TODO: [๐Ÿง ][0] Maybe rename to `stringifyPipelineJson`, `stringifyIndexedJson`,...

@@ -15,0 +15,0 @@ * TODO: [🧠] Maybe more elegant solution than replacing via regex

@@ -10,3 +10,3 @@ import type { PipelineJson } from '../../types/PipelineJson/PipelineJson';

*
* @param path - The path to the file relative to samples/pipelines directory
* @param path - The path to the file relative to examples/pipelines directory
* @private internal function of tests

@@ -13,0 +13,0 @@ */

@@ -23,3 +23,3 @@ import type { PipelineJson } from '../../types/PipelineJson/PipelineJson';

/**
* TODO: !!!!! [🧞‍♀️] Do not allow joker + foreach
* TODO: !! [🧞‍♀️] Do not allow joker + foreach
* TODO: [🧠] Work with promptbookVersion

@@ -36,3 +36,3 @@ * TODO: Use here some json-schema, Zod or something similar and change it to:

/**
* TODO: [🧳][main] !!!! Validate that all samples match expectations
* TODO: [🧳][main] !!!! Validate that all examples match expectations
* TODO: [🧳][๐Ÿ][main] !!!! Validate that knowledge is valid (non-void)

@@ -39,0 +39,0 @@ * TODO: [🧳][main] !!!! Validate that persona can be used only with CHAT variant

@@ -18,3 +18,3 @@ import type { string_mime_type } from '../../types/typeAliases';

*
* @sample "JSON"
* @example "JSON"
*/

@@ -29,3 +29,3 @@ readonly formatName: string_name & string_SCREAMING_CASE;

*
* @sample "application/json"
* @example "application/json"
*/

@@ -32,0 +32,0 @@ readonly mimeType?: string_mime_type;

@@ -14,3 +14,3 @@ import type { Promisable } from 'type-fest';

*
* @sample "CELL"
* @example "CELL"
*/

@@ -17,0 +17,0 @@ readonly subvalueName: string_name & string_SCREAMING_CASE;
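Taken together, the `@sample` → `@example` renames above document a format definition with a SCREAMING_CASE `formatName`, an optional `mimeType`, and a `subvalueName` for its nested values; a rough illustration shaped after those tags (plain objects, not the package's actual interfaces):

```typescript
// Illustration only: the field names and example values come from the @example tags above;
// the real interfaces in the package may split or type these fields differently.
const jsonFormatSketch = {
    formatName: 'JSON',           // @example "JSON"
    mimeType: 'application/json', // @example "application/json"
};

const csvSubvalueSketch = {
    subvalueName: 'CELL',         // @example "CELL"
};
```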

@@ -13,3 +13,3 @@ import type { ExecutionTools } from '../execution/ExecutionTools';

/**
* TODO: [🔃][main] !!!!! If the persona was prepared with different version or different set of models, prepare it once again
* TODO: [🔃][main] !! If the persona was prepared with different version or different set of models, prepare it once again
* TODO: [๐Ÿข] !! Check validity of `modelName` in pipeline

@@ -16,0 +16,0 @@ * TODO: [๐Ÿข] !! Check validity of `systemMessage` in pipeline

@@ -9,3 +9,3 @@ import type { PipelineJson } from '../types/PipelineJson/PipelineJson';

/**
* TODO: [🔃][main] !!!!! If the pipeline was prepared with different version or different set of models, prepare it once again
* TODO: [🔃][main] !! If the pipeline was prepared with different version or different set of models, prepare it once again
* TODO: [๐Ÿ ] Maybe base this on `makeValidator`

@@ -15,4 +15,4 @@ * TODO: [🧊] Pipeline can be partially prepared, this should return true ONLY if fully prepared

* - [๐Ÿ] ? Is context in each template
* - [♨] Are samples prepared
* - [♨] Are examples prepared
* - [♨] Are templates prepared
*/

@@ -27,3 +27,3 @@ import type { ExecutionTools } from '../execution/ExecutionTools';

* TODO: [🧠] What is better name `prepareTemplate` or `prepareTemplateAndParameters`
* TODO: [โ™จ][main] !!! Prepare index the samples and maybe templates
* TODO: [โ™จ][main] !!! Prepare index the examples and maybe templates
* TODO: Write tests for `preparePipeline`

@@ -30,0 +30,0 @@ * TODO: [๐Ÿ] Leverage the batch API and build queues @see https://platform.openai.com/docs/guides/batch

export {};
/**
* TODO: [📓] Maybe test all files in samples (not just 10-simple.doc)
* TODO: [📓] Maybe test all files in examples (not just 10-simple.doc)
*/
export {};
/**
* TODO: [📓] Maybe test all files in samples (not just 10-simple.docx)
* TODO: [📓] Maybe test all files in examples (not just 10-simple.docx)
*/
export {};
/**
* TODO: [📓] Maybe test all files in samples (not just 10-simple.md)
* TODO: [📓] Maybe test all files in examples (not just 10-simple.md)
*/

@@ -30,6 +30,6 @@ import type { string_markdown_text } from '../typeAliases';

/**
* Sample values of the parameter
* Example values of the parameter
* Note: These values won't actually be used as default values; they are just there for better understanding of the parameter
*/
readonly sampleValues?: Array<string_parameter_value>;
readonly exampleValues?: Array<string_parameter_value>;
};
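As a rough sketch of the renamed field, a parameter definition carrying example values might look like the object below (the `name` and `description` fields are assumptions for illustration, not the package's exact schema):

```typescript
// Hypothetical parameter entry illustrating `exampleValues` (renamed from `sampleValues`);
// only `exampleValues` is taken from the diff above, the other fields are assumed.
const rawTitleParameterSketch = {
    name: 'rawTitle',
    description: 'Automatically suggested a site name or empty text',
    exampleValues: ['My Local Bakery', ''], // not defaults, just to aid understanding of the parameter
};
```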

@@ -36,0 +36,0 @@ /**

@@ -21,3 +21,3 @@ import type { PromptResultUsage } from '../../execution/PromptResultUsage';

* TODO: Maybe put here used `modelName`
* TODO: [๐Ÿฅ] When using `date` it changes all samples .ptbk.json files each time so until some more elegant solution omit the time from prepared pipeline
* TODO: [๐Ÿฅ] When using `date` it changes all examples .ptbk.json files each time so until some more elegant solution omit the time from prepared pipeline
*/
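To illustrate the churn the TODO above describes: if preparation embedded the current time, every run would rewrite the committed `.ptbk.json` example files, so the timestamp is simply left out for now (field names in this sketch are assumptions):

```typescript
// Sketch of the problem described in the TODO above; field names are assumptions.
const preparedPipelineSketch = {
    promptbookVersion: '0.73.0',
    // preparedAt: new Date().toISOString(), // would change on every run and create noisy
    //                                       // diffs in the committed .ptbk.json examples,
    //                                       // so the time is omitted for now
};
```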
{
"name": "@promptbook/fake-llm",
"version": "0.72.0",
"version": "0.73.0",
"description": "Supercharge your use of large language models",

@@ -34,3 +34,4 @@ "private": false,

"o1-preview",
"anthropic"
"anthropic",
"LLMOps"
],

@@ -56,3 +57,3 @@ "license": "CC-BY-4.0",

"peerDependencies": {
"@promptbook/core": "0.72.0"
"@promptbook/core": "0.73.0"
},

@@ -59,0 +60,0 @@ "dependencies": {

@@ -21,2 +21,4 @@ <!-- ⚠️ WARNING: This code has been generated so that any manual changes will be overwritten -->

- 💙 Working on [the **Book** language v1](https://github.com/webgptorg/book)
- 📚 Support of `.docx`, `.doc` and `.pdf` documents
- ✨ **Support of [OpenAI o1 model](https://openai.com/o1/)**

@@ -53,7 +55,5 @@

If you have a simple, single prompt for ChatGPT, GPT-4, Anthropic Claude, Google Gemini, Llama 3, or whatever, it doesn't matter how you integrate it. Whether it's calling a REST API directly, using the SDK, hardcoding the prompt into the source code, or importing a text file, the process remains the same.
But often you will struggle with the **limitations of LLMs**, such as **hallucinations, off-topic responses, poor quality output, language and prompt drift, word repetition repetition repetition repetition or misuse, lack of context, or just plain w๐’†๐ขrd responses**. When this happens, you generally have three options:
But often you will struggle with the **limitations of LLMs**, such as **hallucinations, off-topic responses, poor quality output, language and prompt drift, word repetition repetition repetition repetition or misuse, lack of context, or just plain w๐’†๐ขrd resp0nses**. When this happens, you generally have three options:
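For contrast, the "simple, single prompt" case the paragraph starts from is typically a single SDK call; a minimal sketch using the OpenAI Node SDK, shown only as the baseline Promptbook argues beyond (not part of this package):

```typescript
// Baseline single-prompt integration, sketched with the OpenAI Node SDK for illustration.
import OpenAI from 'openai';

const openai = new OpenAI(); // reads OPENAI_API_KEY from the environment

const completion = await openai.chat.completions.create({
    model: 'gpt-4o',
    messages: [{ role: 'user', content: 'Write a one-paragraph claim for an ice cream shop.' }],
});

console.log(completion.choices[0].message.content);
```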

@@ -66,245 +66,35 @@ 1. **Fine-tune** the model to your specifications or even train your own.

- [**Separates concerns**](https://github.com/webgptorg/promptbook/discussions/32) between prompt-engineer and programmer, between code files and prompt files, and between prompts and their execution logic.
- Establishes a [**common format `.ptbk.md`**](https://github.com/webgptorg/promptbook/discussions/85) that can be used to describe your prompt business logic without having to write code or deal with the technicalities of LLMs.
- **Forget** about **low-level details** like choosing the right model, tokens, context size, temperature, top-k, top-p, or kernel sampling. **Just write your intent** and [**persona**](https://github.com/webgptorg/promptbook/discussions/22) who should be responsible for the task and let the library do the rest.
- Has built-in **orchestration** of [pipeline](https://github.com/webgptorg/promptbook/discussions/64) execution and many tools to make the process easier, more reliable, and more efficient, such as caching, [compilation+preparation](https://github.com/webgptorg/promptbook/discussions/78), [just-in-time fine-tuning](https://github.com/webgptorg/promptbook/discussions/33), [expectation-aware generation](https://github.com/webgptorg/promptbook/discussions/37), [agent adversary expectations](https://github.com/webgptorg/promptbook/discussions/39), and more.
- [**Separates concerns**](https://github.com/webgptorg/promptbook/discussions/32) between prompt-engineer and programmer, between code files and prompt files, and between prompts and their execution logic. For this purpose, it introduces a new language called [the **💙 Book**](https://github.com/webgptorg/book).
- Book allows you to **focus on the business** logic without having to write code or deal with the technicalities of LLMs.
- **Forget** about **low-level details** like choosing the right model, tokens, context size, `temperature`, `top-k`, `top-p`, or kernel sampling. **Just write your intent** and [**persona**](https://github.com/webgptorg/promptbook/discussions/22) who should be responsible for the task and let the library do the rest.
- We have built-in **orchestration** of [pipeline](https://github.com/webgptorg/promptbook/discussions/64) execution and many tools to make the process easier, more reliable, and more efficient, such as caching, [compilation+preparation](https://github.com/webgptorg/promptbook/discussions/78), [just-in-time fine-tuning](https://github.com/webgptorg/promptbook/discussions/33), [expectation-aware generation](https://github.com/webgptorg/promptbook/discussions/37), [agent adversary expectations](https://github.com/webgptorg/promptbook/discussions/39), and more.
- Sometimes even the best prompts with the best framework like Promptbook `:)` can't avoid the problems. In this case, the library has built-in **[anomaly detection](https://github.com/webgptorg/promptbook/discussions/40) and logging** to help you find and fix the problems.
- Promptbook has built-in versioning. You can test multiple **A/B versions** of pipelines and see which one works best.
- Promptbook is designed to do [**RAG** (Retrieval-Augmented Generation)](https://github.com/webgptorg/promptbook/discussions/41) and other advanced techniques. You can use **knowledge** to improve the quality of the output.
- Versioning is built in. You can test multiple **A/B versions** of pipelines and see which one works best.
- Promptbook is designed to use [**RAG** (Retrieval-Augmented Generation)](https://github.com/webgptorg/promptbook/discussions/41) and other advanced techniques to bring the context of your business to generic LLM. You can use **knowledge** to improve the quality of the output.
## 🧔 Pipeline _(for prompt-engineers)_
## 💙 Book language _(for prompt-engineers)_
A **P**romp**t** **b**oo**k** markdown file (or `.ptbk.md` file) is a document that describes a **pipeline**: a series of prompts chained together to form a recipe for transforming natural language input.
Promptbook [pipelines](https://github.com/webgptorg/promptbook/discussions/64) are written in markdown-like language called [Book](https://github.com/webgptorg/book). It is designed to be understandable by non-programmers and non-technical people.
- Multiple pipelines form a **collection** which will handle the core **know-how of your LLM application**.
- These pipelines are designed so that they **can be written by non-programmers**.
```markdown
# 🌟 My first Book
### Sample:
- PERSONA Jane, marketing specialist with prior experience in writing articles about technology and artificial intelligence
- KNOWLEDGE https://ptbk.io
- KNOWLEDGE ./promptbook.pdf
- EXPECT MIN 1 Sentence
- EXPECT MAX 1 Paragraph
File `write-website-content.ptbk.md`:
> Write an article about the future of artificial intelligence in the next 10 years and how metalanguages will change the way AI is used in the world.
> Look specifically at the impact of Promptbook on the AI industry.
> # ๐ŸŒ Create website content
>
> Instructions for creating web page content.
>
> - PIPELINE URL https://promptbook.studio/webgpt/write-website-content.ptbk.md
> - INPUT  PARAM `{rawTitle}` Automatically suggested a site name or empty text
> - INPUT  PARAM `{rawAssigment}` Automatically generated site entry from image recognition
> - OUTPUT PARAM `{websiteContent}` Web content
> - OUTPUT PARAM `{keywords}` Keywords
>
> ## 👤 Specifying the assigment
>
> What is your web about?
>
> - DIALOG TEMPLATE
>
> ```
> {rawAssigment}
> ```
>
> `-> {assigment}` Website assignment and specification
>
> ## ✨ Improving the title
>
> - PERSONA Jane, Copywriter and Marketing Specialist.
>
> ```
> As an experienced marketing specialist, you have been entrusted with improving the name of your client's business.
>
> A suggested name from a client:
> "{rawTitle}"
>
> Assignment from customer:
>
> > {assigment}
>
> ## Instructions:
>
> - Write only one name suggestion
> - The name will be used on the website, business cards, visuals, etc.
> ```
>
> `-> {enhancedTitle}` Enhanced title
>
> ## 👤 Website title approval
>
> Is the title for your website okay?
>
> - DIALOG TEMPLATE
>
> ```
> {enhancedTitle}
> ```
>
> `-> {title}` Title for the website
>
> ## ๐Ÿฐ Cunning subtitle
>
> - PERSONA Josh, a copywriter, tasked with creating a claim for the website.
>
> ```
> As an experienced copywriter, you have been entrusted with creating a claim for the "{title}" web page.
>
> A website assignment from a customer:
>
> > {assigment}
>
> ## Instructions:
>
> - Write only one name suggestion
> - Claim will be used on website, business cards, visuals, etc.
> - Claim should be punchy, funny, original
> ```
>
> `-> {claim}` Claim for the web
>
> ## 🚦 Keyword analysis
>
> - PERSONA Paul, extremely creative SEO specialist.
>
> ```
> As an experienced SEO specialist, you have been entrusted with creating keywords for the website "{title}".
>
> Website assignment from the customer:
>
> > {assigment}
>
> ## Instructions:
>
> - Write a list of keywords
> - Keywords are in basic form
>
> ## Example:
>
> - Ice cream
> - Olomouc
> - Quality
> - Family
> - Tradition
> - Italy
> - Craft
>
> ```
>
> `-> {keywords}` Keywords
>
> ## 🔗 Combine the beginning
>
> - SIMPLE TEMPLATE
>
> ```
>
> # {title}
>
> > {claim}
>
> ```
>
> `-> {contentBeginning}` Beginning of web content
>
> ## 🖋 Write the content
>
> - PERSONA Jane
>
> ```
> As an experienced copywriter and web designer, you have been entrusted with creating text for a new website {title}.
>
> A website assignment from a customer:
>
> > {assigment}
>
> ## Instructions:
>
> - Text formatting is in Markdown
> - Be concise and to the point
> - Use keywords, but they should be naturally in the text
> - This is the complete content of the page, so don't forget all the important information and elements the page should contain
> - Use headings, bullets, text formatting
>
> ## Keywords:
>
> {keywords}
>
> ## Web Content:
>
> {contentBeginning}
> ```
>
> `-> {contentBody}` Middle of the web content
>
> ## 🔗 Combine the content
>
> - SIMPLE TEMPLATE
>
> ```markdown
> {contentBeginning}
>
> {contentBody}
> ```
>
> `-> {websiteContent}`
The following scheme shows how the promptbook above is executed:
```mermaid
%% 🔮 Tip: Open this on GitHub or in the VSCode website to see the Mermaid graph visually
flowchart LR
subgraph "๐ŸŒ Create website content"
direction TB
input((Input)):::input
templateSpecifyingTheAssigment(👤 Specifying the assigment)
input--"{rawAssigment}"-->templateSpecifyingTheAssigment
templateImprovingTheTitle(✨ Improving the title)
input--"{rawTitle}"-->templateImprovingTheTitle
templateSpecifyingTheAssigment--"{assigment}"-->templateImprovingTheTitle
templateWebsiteTitleApproval(👤 Website title approval)
templateImprovingTheTitle--"{enhancedTitle}"-->templateWebsiteTitleApproval
templateCunningSubtitle(๐Ÿฐ Cunning subtitle)
templateWebsiteTitleApproval--"{title}"-->templateCunningSubtitle
templateSpecifyingTheAssigment--"{assigment}"-->templateCunningSubtitle
templateKeywordAnalysis(🚦 Keyword analysis)
templateWebsiteTitleApproval--"{title}"-->templateKeywordAnalysis
templateSpecifyingTheAssigment--"{assigment}"-->templateKeywordAnalysis
templateCombineTheBeginning(🔗 Combine the beginning)
templateWebsiteTitleApproval--"{title}"-->templateCombineTheBeginning
templateCunningSubtitle--"{claim}"-->templateCombineTheBeginning
templateWriteTheContent(🖋 Write the content)
templateWebsiteTitleApproval--"{title}"-->templateWriteTheContent
templateSpecifyingTheAssigment--"{assigment}"-->templateWriteTheContent
templateKeywordAnalysis--"{keywords}"-->templateWriteTheContent
templateCombineTheBeginning--"{contentBeginning}"-->templateWriteTheContent
templateCombineTheContent(🔗 Combine the content)
templateCombineTheBeginning--"{contentBeginning}"-->templateCombineTheContent
templateWriteTheContent--"{contentBody}"-->templateCombineTheContent
templateCombineTheContent--"{websiteContent}"-->output
output((Output)):::output
classDef input color: grey;
classDef output color: grey;
end;
-> {article}
```
- [More template samples](./samples/pipelines/)
- [Read more about `.ptbk.md` file format here](https://github.com/webgptorg/promptbook/discussions/categories/concepts?discussions_q=is%3Aopen+label%3A.ptbk.md+category%3AConcepts)
## 📦 Packages _(for developers)_
_Note: We are using [postprocessing functions](#postprocessing-functions) like `unwrapResult` to postprocess the result._
## 📦 Packages
This library is divided into several packages, all published from a [single monorepo](https://github.com/webgptorg/promptbook).

@@ -350,4 +140,2 @@ You can install all of them at once:

### Core concepts

@@ -383,4 +171,4 @@

- [Simple usage](./samples/usage/simple-script)
- [Usage with client and remote server](./samples/usage/remote)
- [Simple usage](./examples/usage/simple-script)
- [Usage with client and remote server](./examples/usage/remote)

@@ -387,0 +175,0 @@ ## ➕➖ When to use Promptbook?

