# llm-api

npm package · 1 maintainer · 40 versions

## Comparing version 1.4.1 to 1.5.0

### dist/playground.js (+2, -0)

```diff
@@ -50,2 +50,4 @@ "use strict";
 console.info('Response 0: ', res0);
 const res01 = await res0?.respond('Hello 2');
+console.info('Response 0.1: ', res01);
+const resEm = await client?.textCompletion('✨');
@@ -52,0 +54,0 @@ console.info('Response em: ', resEm);
```

(+1, -1)

```diff
@@ -118,3 +118,3 @@ "use strict";
 respond: (message, opt) => this.chatCompletion([
-    ...messages,
+    ...initialMessages,
 receivedMessage,
@@ -121,0 +121,0 @@ typeof message === 'string'
```
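The `respond` helper in the hunk above chains a follow-up message onto the conversation built so far: the initial messages, the assistant reply just received, and the new user message (which may be a plain string or a full message object). A minimal standalone sketch of that pattern — the type and helper name are our assumptions for illustration, not the package's actual internals:

```typescript
// Hypothetical sketch of the respond() pattern shown in the diff:
// rebuild the message list from the initial messages, the assistant's
// reply, and the new user message (string or full message object).
type ChatMessage = { role: 'system' | 'user' | 'assistant'; content: string };

function buildRespondMessages(
  initialMessages: ChatMessage[],
  receivedMessage: ChatMessage,
  message: string | ChatMessage,
): ChatMessage[] {
  return [
    ...initialMessages,
    receivedMessage,
    typeof message === 'string' ? { role: 'user', content: message } : message,
  ];
}
```

The spread of `initialMessages` (rather than the full `messages` array) is the one-line change in this file.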

```diff
@@ -26,3 +26,3 @@ "use strict";
 const finalRequestOptions = (0, lodash_1.defaults)(requestOptions, RequestDefaults);
-const messages = (0, lodash_1.compact)([
+const messagesWithSystem = (0, lodash_1.compact)([
 finalRequestOptions.systemMessage
@@ -37,2 +37,5 @@ ? {
 ...initialMessages,
 ]);
+const messages = (0, lodash_1.compact)([
+    ...messagesWithSystem,
+    finalRequestOptions.responsePrefix
@@ -90,3 +93,5 @@ ? {
 const content = finalRequestOptions.responsePrefix
-    ? finalRequestOptions.responsePrefix + completion
+    ? completion.startsWith(finalRequestOptions.responsePrefix)
+        ? completion
+        : finalRequestOptions.responsePrefix + completion
     : completion;
@@ -104,3 +109,3 @@ if (!content) {
 respond: (message, opt) => this.chatCompletion([
-    ...messages,
+    ...messagesWithSystem,
 receivedMessage,
@@ -107,0 +112,0 @@ typeof message === 'string'
```
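The `responsePrefix` hunk above guards against double-prefixing: in 1.4.1 the prefix was always prepended to the completion, while 1.5.0 first checks whether the completion already starts with it. A standalone sketch of that logic, assuming nothing beyond the two strings involved (the function name is ours, not the package's):

```typescript
// Sketch of the 1.5.0 responsePrefix logic: prepend the prefix only
// when the model's completion does not already begin with it.
function applyResponsePrefix(completion: string, responsePrefix?: string): string {
  if (!responsePrefix) return completion;
  return completion.startsWith(responsePrefix)
    ? completion
    : responsePrefix + completion;
}
```

With this change, a model that echoes the requested prefix in its own output no longer produces a doubled prefix in `content`.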

### package.json

```diff
 {
   "name": "llm-api",
   "description": "Fully typed chat APIs for OpenAI and Azure's chat models - with token checking and retries",
-  "version": "1.4.1",
+  "version": "1.5.0",
   "packageManager": "yarn@3.4.1",
@@ -6,0 +6,0 @@ "main": "dist/src/index.js",
```

### README.md

````diff
@@ -11,2 +11,3 @@ # ✨ LLM API
 - [Anthropic](#-anthropic)
+- [Groq](#-groq)
 - [Amazon Bedrock](#-amazon-bedrock)
@@ -195,3 +196,3 @@ - [Debugging](#-debugging)
-Anthropic's models have the unique advantage of a large 100k context window and extremely fast performance. If no explicit model is specified, `llm-api` will default to the `claude-instant-1` model.
+Anthropic's models have the unique advantage of a large 100k context window and extremely fast performance. If no explicit model is specified, `llm-api` will default to the Claude Sonnet model.
@@ -202,2 +203,10 @@ ```typescript
+## 🔶 Groq
+
+Groq is a new LLM inference provider that provides the fastest inference speed on the market. They currently support Meta's Llama 2 and Mistral's Mixtral models.
+
+```typescript
+const groq = new GroqChatApi(params: GroqConfig, config: ModelConfig);
+```
+
 ## ❖ Amazon Bedrock
@@ -204,0 +213,0 @@
````

Sorry, the diffs of the two remaining files are not supported yet.