
node-llama-cpp

Run AI models locally on your machine

Pre-built bindings are provided with a fallback to building from source with cmake


✨ New! Try the beta of version 3.0.0 ✨ (includes function calling, automatic chat wrapper detection, embedding support, and more)
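
For example, the new embedding support can be used roughly like this. This is a minimal sketch assuming the createEmbeddingContext() and getEmbeddingFor() methods from the version 3 documentation; the model path is a placeholder, so verify the exact API against the docs for the version you install.

import {getLlama} from "node-llama-cpp";

const llama = await getLlama();
const model = await llama.loadModel({modelPath: "path/to/model.gguf"});

// Create a dedicated context for computing embeddings
const embeddingContext = await model.createEmbeddingContext();

// Compute an embedding vector for a piece of text
const embedding = await embeddingContext.getEmbeddingFor("Hello world");
console.log(embedding.vector);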

Features

  • Run a text generation model locally on your machine
  • Metal, CUDA and Vulkan support
  • Pre-built binaries are provided, with a fallback to building from source without node-gyp or Python
  • Chat with a model using a chat wrapper
  • Use the CLI to chat with a model without writing any code (see the command sketch after this list)
  • Up-to-date with the latest version of llama.cpp. Download and compile the latest release with a single CLI command.
  • Force a model to generate output in a parseable format, like JSON, or even force it to follow a specific JSON schema (see the sketch after this list)
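
A minimal CLI sketch: the chat command lets you talk to a model straight from the terminal. The --model flag and model path here are assumptions for illustration; run npx node-llama-cpp --help to confirm the exact subcommands and flags for your version.

npx node-llama-cpp chat --model ./models/dolphin-2.1-mistral-7b.Q4_K_M.gguf

And a sketch of forcing the model to follow a JSON schema, assuming the createGrammarForJsonSchema() method and the grammar option of session.prompt() from the version 3 documentation; the schema itself is a hypothetical example:

import {getLlama, LlamaChatSession} from "node-llama-cpp";

const llama = await getLlama();
const model = await llama.loadModel({modelPath: "path/to/model.gguf"});
const context = await model.createContext();
const session = new LlamaChatSession({
    contextSequence: context.getSequence()
});

// Constrain generation to this JSON schema (hypothetical schema for illustration)
const grammar = await llama.createGrammarForJsonSchema({
    type: "object",
    properties: {
        positiveWordsInUserMessage: {
            type: "array",
            items: {type: "string"}
        },
        userMessagePositivityScoreFromOneToTen: {
            type: "number"
        }
    }
});

const res = await session.prompt("It's great!", {grammar});

// parse() converts the raw response text into an object matching the schema
console.log(grammar.parse(res));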

Documentation

Installation

npm install --save node-llama-cpp

This package comes with pre-built binaries for macOS, Linux and Windows.

If binaries are not available for your platform, it'll fall back to downloading the latest release of llama.cpp and building it from source with cmake. To disable this behavior, set the environment variable NODE_LLAMA_CPP_SKIP_DOWNLOAD to true.
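
For example, to opt out of the source-build fallback at install time (a sketch for a POSIX shell; on other shells, set the variable with that shell's own syntax):

NODE_LLAMA_CPP_SKIP_DOWNLOAD=true npm install --save node-llama-cpp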

Usage

import {fileURLToPath} from "url";
import path from "path";
import {getLlama, LlamaChatSession} from "node-llama-cpp";

const __dirname = path.dirname(fileURLToPath(import.meta.url));

// Load the bindings (uses a pre-built binary when available)
const llama = await getLlama();

// Load a GGUF model from disk
const model = await llama.loadModel({
    modelPath: path.join(__dirname, "models", "dolphin-2.1-mistral-7b.Q4_K_M.gguf")
});

// Create a context and start a chat session on one of its sequences
const context = await model.createContext();
const session = new LlamaChatSession({
    contextSequence: context.getSequence()
});

const q1 = "Hi there, how are you?";
console.log("User: " + q1);

const a1 = await session.prompt(q1);
console.log("AI: " + a1);

// The session keeps the chat history, so follow-up prompts have context
const q2 = "Summarize what you said";
console.log("User: " + q2);

const a2 = await session.prompt(q2);
console.log("AI: " + a2);

For more examples, see the getting started guide.

Contributing

To contribute to node-llama-cpp read the contribution guide.

Acknowledgements

  • llama.cpp: ggerganov/llama.cpp
Star please

If you like this repo, star it ✨
