Token.js

Integrate 200+ LLMs with one TypeScript SDK using OpenAI's format. Free and open source. No proxy server required.

Features

  • Use OpenAI's format to call 200+ LLMs from 10+ providers.
  • Supports tools, JSON outputs, image inputs, streaming, and more.
  • Runs completely on the client side. No proxy server needed.
  • Free and open source under MIT.

Supported Providers

  • AI21
  • Anthropic
  • AWS Bedrock
  • Cohere
  • Gemini
  • Groq
  • Mistral
  • OpenAI
  • Perplexity
  • OpenRouter
  • Any other model provider with an OpenAI compatible API

Documentation

Setup

Installation

npm install token.js

Usage

Import the Token.js client and call the create function with a prompt in OpenAI's format. Specify the model and LLM provider using their respective fields.

OPENAI_API_KEY=<openai api key>

import { TokenJS } from 'token.js'

// Create the Token.js client
const tokenjs = new TokenJS()

async function main() {
  // Create a model response
  const completion = await tokenjs.chat.completions.create({
    // Specify the provider and model
    provider: 'openai',
    model: 'gpt-4o',
    // Define your message
    messages: [
      {
        role: 'user',
        content: 'Hello!',
      },
    ],
  })
  console.log(completion.choices[0])
}
main()
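
Because messages use OpenAI's format, image inputs work the same way as with the OpenAI SDK for providers and models that support them (see the feature compatibility table below). A minimal sketch, assuming a vision-capable model and a hypothetical image URL:

import { TokenJS } from 'token.js'

const tokenjs = new TokenJS()

async function main() {
  const completion = await tokenjs.chat.completions.create({
    provider: 'openai',
    model: 'gpt-4o',
    messages: [
      {
        role: 'user',
        // OpenAI-style content parts: text plus an image URL
        content: [
          { type: 'text', text: 'What is in this image?' },
          { type: 'image_url', image_url: { url: 'https://example.com/photo.jpg' } },
        ],
      },
    ],
  })
  console.log(completion.choices[0].message.content)
}
main()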

Access Credentials

We recommend using environment variables to configure the credentials for each LLM provider.

# OpenAI
OPENAI_API_KEY=
# AI21
AI21_API_KEY=
# Anthropic
ANTHROPIC_API_KEY=
# Cohere
COHERE_API_KEY=
# Gemini
GEMINI_API_KEY=
# Groq
GROQ_API_KEY=
# Mistral
MISTRAL_API_KEY=
# Perplexity
PERPLEXITY_API_KEY=
# OpenRouter
OPENROUTER_API_KEY=
# AWS Bedrock
AWS_REGION_NAME=
AWS_ACCESS_KEY_ID=
AWS_SECRET_ACCESS_KEY=
# OpenAI Compatible
OPENAI_COMPATIBLE_API_KEY=
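
For local development, a common pattern is to keep these keys in a .env file and load them before constructing the client. A minimal sketch using the dotenv package (any other way of setting environment variables works just as well):

import 'dotenv/config' // Load variables from a local .env file into process.env
import { TokenJS } from 'token.js'

// The client reads the provider credentials from the environment
const tokenjs = new TokenJS()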

Streaming

Token.js supports streaming responses for all providers that offer it.

import { TokenJS } from 'token.js'

const tokenjs = new TokenJS()

async function main() {
  const result = await tokenjs.chat.completions.create({
    stream: true,
    provider: 'openai',
    model: 'gpt-4o',
    messages: [
      {
        role: 'user',
        content: `Tell me about yourself.`,
      },
    ],
  })

  for await (const part of result) {
    process.stdout.write(part.choices[0]?.delta?.content || '')
  }
}
main()

Function Calling

Token.js supports the function calling tool for all providers and models that offer it.

import { TokenJS, ChatCompletionTool } from 'token.js'

const tokenjs = new TokenJS()

async function main() {
  const tools: ChatCompletionTool[] = [
    {
      type: 'function',
      function: {
        name: 'get_current_weather',
        description: 'Get the current weather in a given location',
        parameters: {
          type: 'object',
          properties: {
            location: {
              type: 'string',
              description: 'The city and state, e.g. San Francisco, CA',
            },
          },
          required: ['location'],
        },
      },
    },
  ]

  const result = await tokenjs.chat.completions.create({
    provider: 'gemini',
    model: 'gemini-1.5-pro',
    messages: [
      {
        role: 'user',
        content: `What's the weather like in San Francisco?`,
      },
    ],
    tools,
    tool_choice: 'auto',
  })

  console.log(result.choices[0].message.tool_calls)
}
main()
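
The example above stops at inspecting the tool calls. In OpenAI's format, which Token.js follows, the next step is to run the tool yourself and send its output back in a message with role 'tool'. A sketch of that round trip, continuing inside main() from the example above (the weather lookup is a placeholder for your own implementation):

  const message = result.choices[0].message

  if (message.tool_calls) {
    const toolCall = message.tool_calls[0]
    const args = JSON.parse(toolCall.function.arguments)

    // Run your own implementation of the tool
    const weather = `Sunny and 18°C in ${args.location}` // placeholder result

    const followUp = await tokenjs.chat.completions.create({
      provider: 'gemini',
      model: 'gemini-1.5-pro',
      messages: [
        { role: 'user', content: `What's the weather like in San Francisco?` },
        message, // the assistant message containing the tool call
        { role: 'tool', tool_call_id: toolCall.id, content: weather },
      ],
      tools,
    })

    console.log(followUp.choices[0].message.content)
  }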

Extending Model Support

Token.js allows you to extend the predefined model list using the extendModelList method. Here are some example scenarios where this is useful:

  • Adding AWS Bedrock models with regional prefixes like us.anthropic.claude-3-sonnet
  • Supporting new model versions before they're added to the predefined list
  • Using custom model deployments with unique names
  • Adding experimental or beta models during testing

import { TokenJS } from 'token.js'

// Example in two steps: adding an AWS Bedrock Claude model with a region prefix
const tokenjs = new TokenJS()

// Step 1: Register the new model name
tokenjs.extendModelList(
  'bedrock',
  'us.anthropic.claude-3-5-sonnet-20241022-v2:0',
  'anthropic.claude-3-sonnet-20240229-v1:0' // Copy feature support from an existing model
)

// Step 2: Use the extended model in a chat completion
const result = await tokenjs.chat.completions.create({
  stream: true,
  provider: 'bedrock',
  model: 'us.anthropic.claude-3-5-sonnet-20241022-v2:0' as any, // Type cast required for extended models
  messages: [
    {
      role: 'user',
      content: 'Tell me about yourself.',
    },
  ],
})

Note: When using extended models, a type cast (as any) is required because the extended model name is not part of the predefined model types.

The featureSupport parameter can be either:

  • A string matching an existing model name from the same provider to copy its feature support
  • An object specifying which features the model supports (see the sketch after this list):

| Feature | Type | Description |
| --- | --- | --- |
| streaming | boolean | Whether the model supports streaming responses |
| json | boolean | Whether the model supports JSON mode |
| toolCalls | boolean | Whether the model supports function calling |
| images | boolean | Whether the model supports image inputs |
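
The object form lets you declare feature support explicitly instead of copying it from an existing model. A minimal sketch (the model name here is hypothetical):

import { TokenJS } from 'token.js'

const tokenjs = new TokenJS()

// Declare feature support directly; keys follow the table above
tokenjs.extendModelList('bedrock', 'my-custom-model', {
  streaming: true,
  json: true,
  toolCalls: false,
  images: false,
})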

Feature Compatibility

This table provides an overview of the features that Token.js supports from each LLM provider.

| Provider | Chat Completion | Streaming | Function Calling Tool | JSON Output | Image Input |
| --- | --- | --- | --- | --- | --- |
| OpenAI | ✅ | ✅ | ✅ | ✅ | ✅ |
| Anthropic | ✅ | ✅ | ✅ | ✅ | ✅ |
| Bedrock | ✅ | ✅ | ✅ | ✅ | ✅ |
| Mistral | ✅ | ✅ | ✅ | ✅ | ➖ |
| Cohere | ✅ | ✅ | ✅ | ➖ | ➖ |
| AI21 | ✅ | ✅ | ➖ | ➖ | ➖ |
| Gemini | ✅ | ✅ | ✅ | ✅ | ✅ |
| Groq | ✅ | ✅ | ➖ | ✅ | ➖ |
| Perplexity | ✅ | ✅ | ➖ | ➖ | ➖ |
| OpenRouter | ✅ | ✅ | ✅ | ✅ | ✅ |
| OpenAI Compatible | ✅ | ✅ | ✅ | ✅ | ✅ |

Legend

| Symbol | Description |
| --- | --- |
| ✅ | Supported by Token.js |
| ➖ | Not supported by the LLM provider, so Token.js cannot support it |

Note: Certain LLMs, particularly older or weaker models, do not support some features in this table. For details about these restrictions, see our LLM provider documentation.
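
Since requests use OpenAI's format, JSON output (for providers that support it, per the table above) is requested with response_format, just as with the OpenAI SDK. A minimal sketch; explicitly asking the model for JSON in the prompt is still recommended:

import { TokenJS } from 'token.js'

const tokenjs = new TokenJS()

async function main() {
  const completion = await tokenjs.chat.completions.create({
    provider: 'openai',
    model: 'gpt-4o',
    // OpenAI-style JSON mode
    response_format: { type: 'json_object' },
    messages: [
      {
        role: 'user',
        content: 'Return a JSON object with a "colors" key listing three primary colors.',
      },
    ],
  })
  console.log(completion.choices[0].message.content)
}
main()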

Contributing

See our Contributing guide to learn how to contribute to Token.js.

Issues

Please open an issue to let us know if there's any way we can improve Token.js!

License

Token.js is free and open source software licensed under MIT.
