
@promptbook/anthropic-claude - npm Package Compare versions

Comparing version 0.72.0-33 to 0.72.0-34

esm/typings/src/executables/apps/locateLibreoffice.d.ts

@@ -7,3 +7,3 @@ import type { string_executable_path } from '../../types/typeAliases';
 */
-export declare function locateLibreoffice(): Promise<string_executable_path>;
+export declare function locateLibreoffice(): Promise<string_executable_path | null>;
 /**
@@ -10,0 +10,0 @@ * TODO: [🧠][♿] Maybe export through `@promptbook/node` OR `@promptbook/legacy-documents`
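
The substantive change in this release is that each `locate*` helper now resolves to `null` when the executable cannot be found, instead of the return type promising a path unconditionally. Below is a minimal caller sketch of the new contract; the import path is hypothetical, since the TODO above indicates the function is not yet exported through `@promptbook/node`:

```typescript
// Hypothetical import - per the TODO above, locateLibreoffice is not yet
// exported through a public entry point; adjust the path to wherever it lives.
import { locateLibreoffice } from '@promptbook/node';

async function printLibreofficeLocation(): Promise<void> {
    const libreofficePath = await locateLibreoffice();

    // As of 0.72.0-34 the promise can resolve to `null`, so callers must
    // handle the not-found case explicitly instead of assuming a path.
    if (libreofficePath === null) {
        console.error('LibreOffice could not be located on this machine.');
        return;
    }

    console.info(`Found LibreOffice at ${libreofficePath}`);
}
```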

@@ -7,3 +7,3 @@ import type { string_executable_path } from '../../types/typeAliases';
 */
-export declare function locatePandoc(): Promise<string_executable_path>;
+export declare function locatePandoc(): Promise<string_executable_path | null>;
 /**
@@ -10,0 +10,0 @@ * TODO: [🧠][♿] Maybe export through `@promptbook/node` OR `@promptbook/documents`

@@ -29,3 +29,3 @@ import type { RequireAtLeastOne } from 'type-fest';
 */
-export declare function locateApp(options: RequireAtLeastOne<LocateAppOptions, 'linuxWhich' | 'windowsSuffix' | 'macOsName'>): Promise<string_executable_path>;
+export declare function locateApp(options: RequireAtLeastOne<LocateAppOptions, 'linuxWhich' | 'windowsSuffix' | 'macOsName'>): Promise<string_executable_path | null>;
 /**
@@ -32,0 +32,0 @@ * TODO: [🧠][♿] Maybe export through `@promptbook/node`
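
The `RequireAtLeastOne` utility from `type-fest` is doing real work in this signature: the compiler rejects a call that supplies `appName` but none of the three platform hints. A sketch of how that constraint behaves; the option shape beyond the fields visible in this diff is an assumption:

```typescript
import type { RequireAtLeastOne } from 'type-fest';

// Assumed shape - only these field names appear in the diff; the real
// LocateAppOptions interface may have more members.
type LocateAppOptions = {
    appName: string;
    linuxWhich?: string; // binary name resolved via `which` on Linux (assumption)
    windowsSuffix?: string; // install-path suffix on Windows (assumption)
    macOsName?: string; // application bundle name on macOS (assumption)
};

type LocateAppArg = RequireAtLeastOne<LocateAppOptions, 'linuxWhich' | 'windowsSuffix' | 'macOsName'>;

// OK: at least one platform hint is present.
const ok: LocateAppArg = { appName: 'LibreOffice', linuxWhich: 'libreoffice' };

// Compile-time error: `appName` alone satisfies none of the required keys.
// const bad: LocateAppArg = { appName: 'LibreOffice' };
```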

@@ -1,3 +1,3 @@
-import type { string_executable_path } from '../../types/typeAliases';
 import type { LocateAppOptions } from '../locateApp';
+import type { string_executable_path } from '../../types/typeAliases';
 /**
@@ -8,3 +8,3 @@ * @@@
 */
-export declare function locateAppOnLinux({ appName, linuxWhich, }: Pick<Required<LocateAppOptions>, 'appName' | 'linuxWhich'>): Promise<string_executable_path>;
+export declare function locateAppOnLinux({ appName, linuxWhich, }: Pick<Required<LocateAppOptions>, 'appName' | 'linuxWhich'>): Promise<string_executable_path | null>;
 /**
@@ -11,0 +11,0 @@ * TODO: [🧠][♿] Maybe export through `@promptbook/node`

@@ -8,3 +8,3 @@ import type { string_executable_path } from '../../types/typeAliases';
 */
-export declare function locateAppOnMacOs({ appName, macOsName, }: Pick<Required<LocateAppOptions>, 'appName' | 'macOsName'>): Promise<string_executable_path>;
+export declare function locateAppOnMacOs({ appName, macOsName, }: Pick<Required<LocateAppOptions>, 'appName' | 'macOsName'>): Promise<string_executable_path | null>;
 /**
@@ -11,0 +11,0 @@ * TODO: [🧠][♿] Maybe export through `@promptbook/node`

@@ -8,3 +8,3 @@ import type { string_executable_path } from '../../types/typeAliases';
 */
-export declare function locateAppOnWindows({ appName, windowsSuffix, }: Pick<Required<LocateAppOptions>, 'appName' | 'windowsSuffix'>): Promise<string_executable_path>;
+export declare function locateAppOnWindows({ appName, windowsSuffix, }: Pick<Required<LocateAppOptions>, 'appName' | 'windowsSuffix'>): Promise<string_executable_path | null>;
 /**
@@ -11,0 +11,0 @@ * TODO: [🧠][♿] Maybe export through `@promptbook/node`
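
Reading the three platform-specific declarations together, `locateApp` presumably dispatches on `process.platform`. A speculative sketch of that dispatch under the new nullable contract; the actual implementation is not part of this diff, and the helper signatures below merely mirror the declarations above:

```typescript
// The three helpers whose return types changed in this release (now `| null`).
declare function locateAppOnLinux(options: { appName: string; linuxWhich: string }): Promise<string | null>;
declare function locateAppOnMacOs(options: { appName: string; macOsName: string }): Promise<string | null>;
declare function locateAppOnWindows(options: { appName: string; windowsSuffix: string }): Promise<string | null>;

// Speculative sketch - not the library's actual locateApp body.
async function locateAppSketch(options: {
    appName: string;
    linuxWhich?: string;
    windowsSuffix?: string;
    macOsName?: string;
}): Promise<string | null> {
    const { appName, linuxWhich, windowsSuffix, macOsName } = options;

    switch (process.platform) {
        case 'linux':
            return linuxWhich ? locateAppOnLinux({ appName, linuxWhich }) : null;
        case 'darwin':
            return macOsName ? locateAppOnMacOs({ appName, macOsName }) : null;
        case 'win32':
            return windowsSuffix ? locateAppOnWindows({ appName, windowsSuffix }) : null;
        default:
            // Consistent with the new contract: resolve to null rather than throw.
            return null;
    }
}
```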

package.json

 {
   "name": "@promptbook/anthropic-claude",
-  "version": "0.72.0-33",
+  "version": "0.72.0-34",
   "description": "Supercharge your use of large language models",

@@ -55,3 +55,3 @@ "private": false,

   "peerDependencies": {
-    "@promptbook/core": "0.72.0-33"
+    "@promptbook/core": "0.72.0-34"
   },

@@ -58,0 +58,0 @@ "dependencies": {

@@ -220,5 +220,5 @@ <!-- ⚠️ WARNING: This code has been generated so that any manual changes will be overwritten -->

-If you have a simple, single prompt for ChatGPT, GPT-4, Anthropic Claude, Google Gemini, Llama 2, or whatever, it doesn't matter how you integrate it. Whether it's calling a REST API directly, using the SDK, hardcoding the prompt into the source code, or importing a text file, the process remains the same.
+If you have a simple, single prompt for ChatGPT, GPT-4, Anthropic Claude, Google Gemini, Llama 3, or whatever, it doesn't matter how you integrate it. Whether it's calling a REST API directly, using the SDK, hardcoding the prompt into the source code, or importing a text file, the process remains the same.
-But often you will struggle with the limitations of LLMs, such as hallucinations, off-topic responses, poor quality output, language drift, word repetition repetition repetition repetition or misuse, lack of context, or just plain w𝒆𝐢rd responses. When this happens, you generally have three options:
+But often you will struggle with the **limitations of LLMs**, such as **hallucinations, off-topic responses, poor quality output, language and prompt drift, word repetition repetition repetition repetition or misuse, lack of context, or just plain w𝒆𝐢rd responses**. When this happens, you generally have three options:

@@ -229,3 +229,3 @@ 1. **Fine-tune** the model to your specifications or even train your own.

-In all of these situations, but especially in 3., the Promptbook library can make your life easier.
+In all of these situations, but especially in 3., the **✨ Promptbook can make your life waaaaaaaaaay easier**.

@@ -232,0 +232,0 @@ - [**Separates concerns**](https://github.com/webgptorg/promptbook/discussions/32) between prompt-engineer and programmer, between code files and prompt files, and between prompts and their execution logic.

Sorry, the diff of this file is too big to display

Sorry, the diff of this file is too big to display
