@promptbook/remote-client - npm Package Compare versions

Comparing version 0.63.0-10 to 0.63.0


esm/index.es.js

@@ -7,3 +7,3 @@ import { io } from 'socket.io-client';

*/
var PROMPTBOOK_VERSION = '0.63.0-9';
var PROMPTBOOK_VERSION = '0.63.0-10';
// TODO: !!!! List here all the versions and annotate + put into script

@@ -10,0 +10,0 @@

@@ -1,2 +0,1 @@

import { PROMPTBOOK_VERSION } from '../version';
import type { PipelineCollection } from '../collection/PipelineCollection';

@@ -208,3 +207,2 @@ import type { Command } from '../commands/_common/types/Command';

import type { string_promptbook_version } from '../version';
export { PROMPTBOOK_VERSION };
export type { PipelineCollection };

@@ -211,0 +209,0 @@ export type { Command };

@@ -9,2 +9,3 @@ import type { Command as Program } from 'commander';

/**
* TODO: [🥃] !!! Allow `ptbk make` without llm tools
* TODO: Maybe remove this command - "about" command should be enough?

@@ -11,0 +12,0 @@ * TODO: [0] DRY Javascript and typescript - Maybe make ONLY typescript and for javascript just remove types

@@ -39,2 +39,3 @@ import type { LlmExecutionTools } from '../../execution/LlmExecutionTools';

* TODO: [๐Ÿ‘ทโ€โ™‚๏ธ] @@@ Manual about construction of llmTools
* TODO: [๐Ÿฅƒ] Allow `ptbk make` without llm tools
*/

@@ -20,2 +20,3 @@ import type { LlmExecutionToolsWithTotalUsage } from './utils/count-total-usage/LlmExecutionToolsWithTotalUsage';

* TODO: [๐Ÿ‘ทโ€โ™‚๏ธ] @@@ Manual about construction of llmTools
* TODO: [๐Ÿฅƒ] Allow `ptbk make` without llm tools
*/
{
"name": "@promptbook/remote-client",
"version": "0.63.0-10",
"version": "0.63.0",
"description": "Supercharge your use of large language models",

@@ -50,3 +50,3 @@ "private": false,

"peerDependencies": {
"@promptbook/core": "0.63.0-10"
"@promptbook/core": "0.63.0"
},

@@ -53,0 +53,0 @@ "dependencies": {

@@ -74,2 +74,11 @@ <!-- โš ๏ธ WARNING: This code has been generated so that any manual changes will be overwritten -->

## 🧔 Promptbook _(for prompt-engineers)_
**P**romp**t** **b**oo**k** markdown file (or `.ptbk.md` file) is a document that describes a **pipeline** - a series of prompts that are chained together to form something of a recipe for transforming natural language input.
- Multiple pipelines form a **collection**, which handles the core **know-how of your LLM application**.
- These pipelines are designed so that they **can be written by non-programmers**.
### Sample:

@@ -81,2 +90,4 @@

> # ๐ŸŒ Create website content

@@ -298,3 +309,4 @@ >

[More template samples](./samples/templates/)
- [More template samples](./samples/templates/)
- [Read more about `.ptbk.md` file format here](https://github.com/webgptorg/promptbook/discussions/categories/concepts?discussions_q=is%3Aopen+label%3A.ptbk.md+category%3AConcepts)

@@ -316,3 +328,2 @@ _Note: We are using [postprocessing functions](#postprocessing-functions) like `unwrapResult` that can be used to postprocess the result._

- โญ **[ptbk](https://www.npmjs.com/package/ptbk)** - Bundle of all packages, when you want to install everything and you don't care about the size

@@ -341,260 +352,36 @@ - **[promptbook](https://www.npmjs.com/package/promptbook)** - Same as `ptbk`

The following glossary is used to clarify certain basic concepts:
The following glossary is used to clarify certain concepts:
### Prompt
Prompt is a text along with model requirements, but without any execution or templating logic.
For example:
### Core concepts
```json
{
"request": "Which sound does a cat make?",
"modelRequirements": {
"variant": "CHAT"
}
}
```
- [📚 Collection of pipelines](https://github.com/webgptorg/promptbook/discussions/65)
- [📯 Pipeline](https://github.com/webgptorg/promptbook/discussions/64)
- [🎺 Pipeline templates](https://github.com/webgptorg/promptbook/discussions/88)
- [🤼 Personas](https://github.com/webgptorg/promptbook/discussions/22)
- [⭕ Parameters](https://github.com/webgptorg/promptbook/discussions/83)
- [🚀 Pipeline execution](https://github.com/webgptorg/promptbook/discussions/84)
- [🧪 Expectations](https://github.com/webgptorg/promptbook/discussions/30)
- [✂️ Postprocessing](https://github.com/webgptorg/promptbook/discussions/31)
- [🔣 Words not tokens](https://github.com/webgptorg/promptbook/discussions/29)
- [☯ Separation of concerns](https://github.com/webgptorg/promptbook/discussions/32)
```json
{
"request": "I am a cat.\nI like to eat fish.\nI like to sleep.\nI like to play with a ball.\nI l",
"modelRequirements": {
"variant": "COMPLETION"
}
}
```
### Advanced concepts
### Prompt Template
- [📚 Knowledge (Retrieval-augmented generation)](https://github.com/webgptorg/promptbook/discussions/41)
- [🌐 Remote server](https://github.com/webgptorg/promptbook/discussions/89)
- [🃏 Jokers (conditions)](https://github.com/webgptorg/promptbook/discussions/66)
- [🔳 Metaprompting](https://github.com/webgptorg/promptbook/discussions/35)
- [🌐 Linguistically typed languages](https://github.com/webgptorg/promptbook/discussions/53)
- [🌐 Auto-Translations](https://github.com/webgptorg/promptbook/discussions/42)
- [📽 Images, audio, video, spreadsheets](https://github.com/webgptorg/promptbook/discussions/54)
- [🔙 Expectation-aware generation](https://github.com/webgptorg/promptbook/discussions/37)
- [⏳ Just-in-time fine-tuning](https://github.com/webgptorg/promptbook/discussions/33)
- [🔴 Anomaly detection](https://github.com/webgptorg/promptbook/discussions/40)
- [👮 Agent adversary expectations](https://github.com/webgptorg/promptbook/discussions/39)
- [view more](https://github.com/webgptorg/promptbook/discussions/categories/concepts)
A similar concept to Prompt, but with templating logic.
## 🔌 Usage in Typescript / Javascript
For example:
```json
{
"request": "Which sound does a {animalName} make?",
"modelRequirements": {
"variant": "CHAT"
}
}
```
### Model Requirements
Abstract way to specify the LLM.
It does not specify the LLM with concrete version itself, only the requirements for the LLM.
_NOT chatgpt-3.5-turbo BUT CHAT variant of GPT-3.5._
For example:
```json
{
"variant": "CHAT",
"version": "GPT-3.5",
"temperature": 0.7
}
```
### Block type
Each block of a promptbook can have a different execution type.
It is specified in the list of requirements for the block.
By default, it is `Prompt template`.
- _(default)_ `Prompt template` The block is a prompt template and is executed by an LLM (OpenAI, Azure,...)
- `SIMPLE TEMPLATE` The block is a simple text template which is just filled with parameters
- `Script` The block is a script that is executed by some script runtime. The runtime is determined by the block type; currently only `javascript` is supported, but we plan to add `python` and `typescript` in the future.
- `PROMPT DIALOG` Ask the user for input
### Parameters
Parameters that are placed in the prompt template and replaced to create the prompt.
It is a simple key-value object.
```json
{
"animalName": "cat",
"animalSound": "Meow!"
}
```
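The replacement step can be sketched in plain TypeScript. This is a minimal illustration of the idea, not the library's actual implementation, and it assumes the `{name}` placeholder syntax shown above:

```typescript
// Minimal sketch of parameter substitution - illustrative only,
// not the actual Promptbook implementation.
type Parameters = Record<string, string>;

function replaceParameters(template: string, parameters: Parameters): string {
    // Replace every {name} placeholder with its value from the parameters object
    return template.replace(/\{(\w+)\}/g, (_match, name: string) => {
        if (!(name in parameters)) {
            throw new Error(`Parameter {${name}} is not defined`);
        }
        return parameters[name];
    });
}

const prompt = replaceParameters('Which sound does a {animalName} make?', {
    animalName: 'cat',
});
// prompt === 'Which sound does a cat make?'
```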
There are three types of template parameters, depending on how they are used in the promptbook:
- **INPUT PARAMETER**s are required to execute the promptbook.
- **Intermediate parameters** are used internally in the promptbook.
- **OUTPUT PARAMETER**s are explicitly marked and are returned as the result of the promptbook execution.
_Note: A parameter can be both intermediate and output at the same time._
### Promptbook
Promptbook is the **core concept of this library**.
It represents a series of prompt templates chained together to form a **pipeline** / one big prompt template with input and result parameters.
Internally it can have multiple formats:
- **.ptbk.md file** in custom markdown format described above
- _(concept)_ **.ptbk** format, a custom file extension based on markdown
- _(internal)_ **JSON** format, parsed from the .ptbk.md file
### Promptbook **Library**
Library of all promptbooks used in your application.
Each promptbook is a separate `.ptbk.md` file with a unique `PIPELINE URL`. These URLs are used to reference promptbooks in other promptbooks or in the application code.
### Prompt Result
Prompt result is the simplest concept of execution.
It is the result of executing one prompt _(NOT a template)_.
For example:
```json
{
"response": "Meow!",
"model": "chatgpt-3.5-turbo"
}
```
### Execution Tools
`ExecutionTools` is an interface which contains all the tools needed to execute prompts.
It contains 3 subtools:
- `LlmExecutionTools`
- `ScriptExecutionTools`
- `UserInterfaceTools`
Which are described below:
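As a rough TypeScript sketch, the container can be pictured like this (a simplified sketch; the real Promptbook interfaces expose more methods and options than shown here):

```typescript
// Simplified sketch of the ExecutionTools container - the real
// Promptbook interfaces carry more methods and metadata.
interface LlmExecutionTools {
    callChatModel(prompt: string): Promise<string>;
}

interface ScriptExecutionTools {
    execute(script: string, parameters: Record<string, string>): Promise<string>;
}

interface UserInterfaceTools {
    promptDialog(question: string): Promise<string>;
}

interface ExecutionTools {
    llm: LlmExecutionTools;
    script: ScriptExecutionTools;
    userInterface: UserInterfaceTools;
}

// A mocked implementation in the spirit of MockedEchoLlmExecutionTools:
const mockedTools: ExecutionTools = {
    llm: { callChatModel: async (prompt) => `Echo: ${prompt}` },
    script: { execute: async (script) => `Ran: ${script}` },
    userInterface: { promptDialog: async (question) => `Answer to: ${question}` },
};
```

Mocked implementations like this are what make the pipeline testable without calling a real model.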
#### LLM Execution Tools
`LlmExecutionTools` is a container for all the tools needed to execute prompts to large language models like GPT-4.
On its interface it exposes common methods for prompt execution.
Internally it calls OpenAI, Azure, GPU, proxy, cache, logging,...
`LlmExecutionTools` is an abstract interface that is implemented by concrete execution tools:
- `OpenAiExecutionTools`
- `AnthropicClaudeExecutionTools`
- `AzureOpenAiExecutionTools`
- `LangtailExecutionTools`
- _(Not implemented yet)_ `BardExecutionTools`
- _(Not implemented yet)_ `LamaExecutionTools`
- _(Not implemented yet)_ `GpuExecutionTools`
- Special case are `RemoteLlmExecutionTools` that connect to a remote server and run one of the above execution tools on that server.
- Another special case is `MockedEchoLlmExecutionTools` that is used for testing and mocking.
- Another special case is `LogLlmExecutionToolsWrapper`, which is technically also an execution tool but is more of a proxy wrapper around other execution tools that logs all calls to them.
#### Script Execution Tools
`ScriptExecutionTools` is an abstract container that represents all the tools needed to EXECUTE SCRIPTs. It is implemented by concrete execution tools:
- `JavascriptExecutionTools` is a wrapper around `vm2` module that executes javascript code in a sandbox.
- `JavascriptEvalExecutionTools` is a wrapper around the `eval` function that executes javascript. It is used for testing and mocking and is **NOT intended for production use** due to its unsafe nature; use `JavascriptExecutionTools` instead.
- _(Not implemented yet)_ `TypescriptExecutionTools` executes typescript code in a sandbox.
- _(Not implemented yet)_ `PythonExecutionTools` executes python code in a sandbox.
There are [postprocessing functions](#postprocessing-functions) that can be used to postprocess the result.
#### User Interface Tools
`UserInterfaceTools` is an abstract container that represents all the tools needed to interact with the user. It is implemented by concrete execution tools:
- _(Not implemented yet)_ `ConsoleInterfaceTools` is a wrapper around `readline` module that interacts with the user via console.
- `SimplePromptInterfaceTools` is a wrapper around the `window.prompt` synchronous function that interacts with the user via a browser prompt. It is used for testing and mocking and is **NOT intended for production use** due to its synchronous nature.
- `CallbackInterfaceTools` delegates the user interaction to an async callback function. You need to provide your own implementation of this callback function and bind it to your UI.
### Executor
Executor is a simple async function that takes **input parameters** and returns **output parameters**.
It is constructed by combining execution tools and promptbook to execute together.
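In TypeScript terms, the executor shape can be sketched as follows (the names here are illustrative, not the library's API):

```typescript
// Illustrative executor shape - an async function from input
// parameters to output parameters. Names here are hypothetical.
type InputParameters = Record<string, string>;
type OutputParameters = Record<string, string>;
type Executor = (input: InputParameters) => Promise<OutputParameters>;

// A trivial hand-written executor standing in for one constructed
// from a promptbook plus execution tools:
const animalSoundExecutor: Executor = async (input) => {
    const sounds: Record<string, string> = { cat: 'Meow!', dog: 'Woof!' };
    return { animalSound: sounds[input.animalName] ?? '(unknown)' };
};
```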
### ๐Ÿƒ Jokers (conditions)
Joker is a previously defined parameter that is used to bypass some parts of the pipeline.
If the joker is present in the template, it is checked to see if it meets the requirements (without postprocessing), and if so, it is used instead of executing that prompt template. There can be multiple wildcards in a prompt template, if so they are checked in order and the first one that meets the requirements is used.
If none of the jokers meet the requirements, the prompt template is executed as usual.
This can be useful, for example, if you want to use some predefined data, or if you want to use some data from the user, but you are not sure if it is suitable form.
When using wildcards, you must have at least one minimum expectation. If you do not have a minimum expectation, the joker will always fulfil the expectation because it has none, so it makes no logical sense.
Look at [jokers.ptbk.md](samples/templates/41-jokers.ptbk.md) sample.
### Postprocessing functions
You can define postprocessing functions when creating `JavascriptEvalExecutionTools`:
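The original example was not captured in this diff. As a rough sketch of the idea only (the names and shapes below are hypothetical, not the confirmed `JavascriptEvalExecutionTools` options):

```typescript
// Hypothetical sketch: postprocessing functions are named
// string -> string transforms applied to a raw LLM result.
type PostprocessingFunction = (value: string) => string;

const postprocessingFunctions: Record<string, PostprocessingFunction> = {
    trim: (value) => value.trim(),
    removeQuotes: (value) => value.replace(/^["']|["']$/g, ''),
};

// Apply the named functions to the raw result, left to right
function postprocess(raw: string, functionNames: string[]): string {
    return functionNames.reduce(
        (value, name) => postprocessingFunctions[name](value),
        raw,
    );
}

const cleaned = postprocess('  "Meow!"  ', ['trim', 'removeQuotes']);
// cleaned === 'Meow!'
```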
Additionally, there are some useful built-in string-manipulation functions, which are [listed here](src/scripting/javascript/JavascriptEvalExecutionTools.ts).
### Expectations
The `Expect` command describes the desired output of the prompt template (after post-processing).
It can set limits for the maximum/minimum length of the output, measured in characters, words, sentences, paragraphs,...
_Note: LLMs work with tokens, not characters, but in Promptbooks we want to use some human-recognisable and cross-model interoperable units._
```markdown
# ✨ Sample: Expectations
- INPUT PARAMETER {yourName} Name of the hero
## 💬 Question
- EXPECT MAX 30 CHARACTERS
- EXPECT MIN 2 CHARACTERS
- EXPECT MAX 3 WORDS
- EXPECT EXACTLY 1 SENTENCE
- EXPECT EXACTLY 1 LINE
...
```
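A count-based expectation check like the ones above can be sketched in TypeScript (illustrative only; the real checker supports more units such as sentences, paragraphs, and pages):

```typescript
// Illustrative sketch of count-based expectation checking.
interface Expectation {
    unit: 'CHARACTERS' | 'WORDS' | 'LINES';
    min?: number;
    max?: number;
}

function countUnits(text: string, unit: Expectation['unit']): number {
    switch (unit) {
        case 'CHARACTERS':
            return text.length;
        case 'WORDS':
            return text.split(/\s+/).filter((word) => word !== '').length;
        case 'LINES':
            return text.split('\n').length;
    }
}

function meetsExpectation(text: string, expectation: Expectation): boolean {
    const count = countUnits(text, expectation.unit);
    if (expectation.min !== undefined && count < expectation.min) return false;
    if (expectation.max !== undefined && count > expectation.max) return false;
    return true;
}

meetsExpectation('What is your name?', { unit: 'WORDS', max: 3 }); // false (4 words)
```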
There are two types of expectations which are not strictly symmetrical:
#### Minimal expectations
- `EXPECT MIN 0 ...` is not a valid minimal expectation; it makes no sense.
- `EXPECT JSON` is both a minimal and a maximal expectation
- When you are using a `JOKER` in the same prompt template, you need to have at least one minimal expectation
#### Maximal expectations
- `EXPECT MAX 0 ...` is a valid maximal expectation. For example, you can expect 0 pages and 2 sentences.
- `EXPECT JSON` is both a minimal and a maximal expectation
Look at [expectations.ptbk.md](samples/templates/45-expectations.ptbk.md) and [expect-json.ptbk.md](samples/templates/45-expect-json.ptbk.md) samples for more.
### Execution report
Execution report is a simple object or markdown that contains information about the execution of the pipeline.
[See the example of such a report](/samples/templates/50-advanced.report.md)
### Remote server
Remote server is a proxy server that uses its execution tools internally and exposes the executor interface externally.
You can simply use `RemoteExecutionTools` on client-side javascript and connect to your remote server.
This is useful when you want to keep all the logic on the browser side without exposing your API keys, or when you don't want to rely on the customer's GPU.
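The proxy idea can be sketched abstractly. In the real library the client-side tools talk to the server over a socket.io connection; here the transport is a pluggable function so the sketch stays self-contained, and all names are illustrative rather than the actual API:

```typescript
// Conceptual sketch: the client exposes the same interface as local
// LLM tools but forwards each call through a transport. Names are
// hypothetical, not the RemoteLlmExecutionTools API.
type Transport = (request: { prompt: string }) => Promise<{ response: string }>;

function createRemoteLlmTools(transport: Transport) {
    return {
        // Forward the prompt to the server and unwrap the response
        callChatModel: async (prompt: string) =>
            (await transport({ prompt })).response,
    };
}

// For illustration, an in-process echo transport instead of a real server:
const echoTransport: Transport = async ({ prompt }) => ({
    response: `Echo: ${prompt}`,
});
const remoteTools = createRemoteLlmTools(echoTransport);
```

Because the client and server share one interface, application code does not need to know whether a model call runs locally or on the remote server.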
## ๐Ÿ‘จโ€๐Ÿ’ป Usage and integration _(for developers)_
### ๐Ÿ”Œ Usage in Typescript / Javascript
- [Simple usage](./samples/usage/simple-script)

@@ -621,10 +408,15 @@ - [Usage with client and remote server](./samples/usage/remote)

- [๐Ÿคธโ€โ™‚๏ธ Iterations not working yet](https://github.com/webgptorg/promptbook/discussions/55)
- [โคต๏ธ Imports not working yet](https://github.com/webgptorg/promptbook/discussions/34)
## ๐Ÿงผ Intentionally not implemented features
- [โžฟ No recursion](https://github.com/webgptorg/promptbook/discussions/38)
- [๐Ÿณ There are no types, just strings](https://github.com/webgptorg/promptbook/discussions/52)
## โ” FAQ
If you have a question [start a discussion](https://github.com/webgptorg/promptbook/discussions/), [open an issue](https://github.com/webgptorg/promptbook/issues) or [write me an email](https://www.pavolhejny.com/contact).

@@ -631,0 +423,0 @@

@@ -11,3 +11,3 @@ (function (global, factory) {

*/
var PROMPTBOOK_VERSION = '0.63.0-9';
var PROMPTBOOK_VERSION = '0.63.0-10';
// TODO: !!!! List here all the versions and annotate + put into script

@@ -14,0 +14,0 @@

Sorry, the diff of this file is not supported yet

Sorry, the diff of this file is not supported yet
