The Ollama JavaScript library provides the easiest way to integrate your JavaScript project with Ollama.
npm i ollama
import ollama from 'ollama'
const response = await ollama.chat({
model: 'llama3.1',
messages: [{ role: 'user', content: 'Why is the sky blue?' }],
})
console.log(response.message.content)
To use the library without Node.js, import the browser module.
import ollama from 'ollama/browser'
Response streaming can be enabled by setting stream: true, which changes the function call to return an AsyncGenerator where each part is an object in the stream.
import ollama from 'ollama'
const message = { role: 'user', content: 'Why is the sky blue?' }
const response = await ollama.chat({ model: 'llama3.1', messages: [message], stream: true })
for await (const part of response) {
process.stdout.write(part.message.content)
}
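Since each streamed part carries only an incremental message.content, the full reply is the concatenation of the parts. A minimal helper for draining any such AsyncGenerator into one string; collectStream is our own name, not part of the library:

```javascript
// Drain a streamed chat response into a single string. Works on any
// async iterable whose parts look like { message: { content } }, as the
// parts yielded by ollama.chat({ ..., stream: true }) do.
async function collectStream(stream) {
  let text = ''
  for await (const part of stream) {
    text += part.message.content
  }
  return text
}
```

With a live server this would be used as `const full = await collectStream(await ollama.chat({ model: 'llama3.1', messages, stream: true }))`.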
A new model can be created from a Modelfile:
import ollama from 'ollama'
const modelfile = `
FROM llama3.1
SYSTEM "You are mario from super mario bros."
`
await ollama.create({ model: 'example', modelfile: modelfile })
The Ollama JavaScript library's API is designed around the Ollama REST API.
ollama.chat(request)
- request <Object>: The request object containing chat parameters.
  - model <string>: The name of the model to use for the chat.
  - messages <Message[]>: Array of message objects representing the chat history.
    - role <string>: The role of the message sender ('user', 'system', or 'assistant').
    - content <string>: The content of the message.
    - images <Uint8Array[] | string[]>: (Optional) Images to be included in the message, either as Uint8Array or base64 encoded strings.
  - format <string>: (Optional) Set the expected format of the response (json).
  - stream <boolean>: (Optional) When true an AsyncGenerator is returned.
  - keep_alive <string | number>: (Optional) How long to keep the model loaded.
  - tools <Tool[]>: (Optional) A list of tool calls the model may make.
  - options <Options>: (Optional) Options to configure the runtime.
- Returns: <ChatResponse>
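The images field accepts either raw Uint8Array data or base64 encoded strings. A minimal sketch of attaching an image to a chat message, assuming Node.js (Buffer for base64 encoding) and a multimodal model; the model name and bytes are illustrative only:

```javascript
// Images can be passed as raw bytes or as base64 strings. Encoding
// a Uint8Array to base64 with Node's Buffer:
const imageBytes = new Uint8Array([0x89, 0x50, 0x4e, 0x47]) // e.g. the PNG magic bytes
const imageB64 = Buffer.from(imageBytes).toString('base64')

const request = {
  model: 'llava', // a multimodal model is assumed here
  messages: [
    { role: 'user', content: 'What is in this picture?', images: [imageB64] },
  ],
}
// With a live server: const response = await ollama.chat(request)
```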
ollama.generate(request)
- request <Object>: The request object containing generate parameters.
  - model <string>: The name of the model to use for the generation.
  - prompt <string>: The prompt to send to the model.
  - suffix <string>: (Optional) Suffix is the text that comes after the inserted text.
  - system <string>: (Optional) Override the model system prompt.
  - template <string>: (Optional) Override the model template.
  - raw <boolean>: (Optional) Bypass the prompt template and pass the prompt directly to the model.
  - images <Uint8Array[] | string[]>: (Optional) Images to be included, either as Uint8Array or base64 encoded strings.
  - format <string>: (Optional) Set the expected format of the response (json).
  - stream <boolean>: (Optional) When true an AsyncGenerator is returned.
  - keep_alive <string | number>: (Optional) How long to keep the model loaded.
  - options <Options>: (Optional) Options to configure the runtime.
- Returns: <GenerateResponse>
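The prompt/suffix pair enables fill-in-the-middle style completion: the model generates the text that goes between them. A sketch of such a request, under the assumption of a model with infill support; the model name and code snippet are illustrative, not prescribed by the library:

```javascript
// Ask the model to fill in the body of a function: the generated text
// goes between `prompt` and `suffix`.
const request = {
  model: 'codellama:7b-code', // assumes an infill-capable model is available
  prompt: 'def add(a, b):\n    ',
  suffix: '\n\nprint(add(2, 3))',
  stream: false,
}
// With a live server:
// const response = await ollama.generate(request)
// response.response would hold the generated middle section
```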
ollama.pull(request)
- request <Object>: The request object containing pull parameters.
  - model <string>: The name of the model to pull.
  - insecure <boolean>: (Optional) Pull from servers whose identity cannot be verified.
  - stream <boolean>: (Optional) When true an AsyncGenerator is returned.
- Returns: <ProgressResponse>
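When a pull is streamed, each progress part reports a status string plus completed/total byte counts for the layer being downloaded. A small rendering helper as a sketch; formatProgress is our own name, not part of the library, and status-only parts without counts are guarded against:

```javascript
// Render one streamed progress part as a human-readable line.
// Parts carry { status, completed, total }; completed/total are absent
// on status-only updates, so guard before dividing.
function formatProgress(part) {
  if (part.total) {
    const pct = Math.floor((part.completed / part.total) * 100)
    return `${part.status} ${pct}%`
  }
  return part.status
}

// With a live server:
// for await (const part of await ollama.pull({ model: 'llama3.1', stream: true })) {
//   process.stdout.write('\r' + formatProgress(part))
// }
```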
ollama.push(request)
- request <Object>: The request object containing push parameters.
  - model <string>: The name of the model to push.
  - insecure <boolean>: (Optional) Push to servers whose identity cannot be verified.
  - stream <boolean>: (Optional) When true an AsyncGenerator is returned.
- Returns: <ProgressResponse>
ollama.create(request)
- request <Object>: The request object containing create parameters.
  - model <string>: The name of the model to create.
  - path <string>: (Optional) The path to the Modelfile of the model to create.
  - modelfile <string>: (Optional) The content of the Modelfile to create.
  - stream <boolean>: (Optional) When true an AsyncGenerator is returned.
- Returns: <ProgressResponse>
ollama.delete(request)
- request <Object>: The request object containing delete parameters.
  - model <string>: The name of the model to delete.
- Returns: <StatusResponse>
ollama.copy(request)
- request <Object>: The request object containing copy parameters.
  - source <string>: The name of the model to copy from.
  - destination <string>: The name of the model to copy to.
- Returns: <StatusResponse>
ollama.list()
- Returns: <ListResponse>
ollama.show(request)
- request <Object>: The request object containing show parameters.
  - model <string>: The name of the model to show.
  - system <string>: (Optional) Override the model system prompt returned.
  - template <string>: (Optional) Override the model template returned.
  - options <Options>: (Optional) Options to configure the runtime.
- Returns: <ShowResponse>
ollama.embed(request)
- request <Object>: The request object containing embedding parameters.
  - model <string>: The name of the model used to generate the embeddings.
  - input <string> | <string[]>: The input used to generate the embeddings.
  - truncate <boolean>: (Optional) Truncate the input to fit the maximum context length supported by the model.
  - keep_alive <string | number>: (Optional) How long to keep the model loaded.
  - options <Options>: (Optional) Options to configure the runtime.
- Returns: <EmbedResponse>
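The vectors returned by embed can be compared with cosine similarity, e.g. for semantic search. A sketch under the assumption that response.embeddings holds one vector per input; cosineSimilarity is our own helper, and the embed call is shown commented because it needs a running server:

```javascript
// Cosine similarity between two equal-length embedding vectors.
function cosineSimilarity(a, b) {
  let dot = 0
  let normA = 0
  let normB = 0
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i]
    normA += a[i] * a[i]
    normB += b[i] * b[i]
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB))
}

// With a live server (model name is illustrative):
// const response = await ollama.embed({
//   model: 'all-minilm',
//   input: ['the sky is blue', 'the grass is green'],
// })
// const score = cosineSimilarity(response.embeddings[0], response.embeddings[1])
```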
ollama.ps()
- Returns: <ListResponse>
A custom client can be created with the following fields:
- host <string>: (Optional) The Ollama host address. Default: "http://127.0.0.1:11434".
- fetch <Object>: (Optional) The fetch library used to make requests to the Ollama host.
import { Ollama } from 'ollama'
const ollama = new Ollama({ host: 'http://127.0.0.1:11434' })
const response = await ollama.chat({
model: 'llama3.1',
messages: [{ role: 'user', content: 'Why is the sky blue?' }],
})
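The fetch field makes it possible to intercept every request the client sends, for example to add authentication headers when the Ollama host sits behind a proxy. A sketch: withHeaders is our own wrapper, and the proxy URL and token are placeholders, not part of the library.

```javascript
// Wrap a fetch implementation so every request carries extra headers.
function withHeaders(fetchImpl, extraHeaders) {
  return (url, init = {}) =>
    fetchImpl(url, {
      ...init,
      headers: { ...(init.headers || {}), ...extraHeaders },
    })
}

// Usage with a custom client (URL and token are illustrative):
// const ollama = new Ollama({
//   host: 'https://ollama.example.com',
//   fetch: withHeaders(fetch, { Authorization: 'Bearer <token>' }),
// })
```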
To build the project files, run:
npm run build
FAQs
Ollama JavaScript library
The npm package ollama receives a total of 73,831 weekly downloads. As such, ollama popularity was classified as popular.
We found that ollama demonstrated a healthy version release cadence and project activity because the last version was released less than a year ago. It has 4 open source maintainers collaborating on the project.