gpt-tokens

Calculate the token consumption and cost of OpenAI GPT messages.

  • Latest version: 1.3.12
  • Weekly downloads: 5.3K (down 6.5%)
  • Maintainers: 1

gpt-tokens

A TypeScript library for calculating GPT token usage and price.

Install

# npm or yarn

npm install gpt-tokens
yarn add gpt-tokens

Support

Basic Models

  • gpt-3.5-turbo
  • gpt-3.5-turbo-16k
  • gpt-4
  • gpt-4-32k
  • gpt-4-turbo-preview
  • gpt-3.5-turbo-0301
  • gpt-3.5-turbo-0613
  • gpt-3.5-turbo-1106
  • gpt-3.5-turbo-0125
  • gpt-3.5-turbo-16k-0613
  • gpt-4-0314
  • gpt-4-0613
  • gpt-4-32k-0314
  • gpt-4-32k-0613
  • gpt-4-1106-preview
  • gpt-4-0125-preview
  • gpt-4-turbo-2024-04-09
  • gpt-4-turbo
  • gpt-4o
  • gpt-4o-2024-05-13
  • gpt-4o-2024-08-06
  • gpt-4o-mini
  • gpt-4o-mini-2024-07-18
  • o1-preview
  • o1-preview-2024-09-12
  • o1-mini
  • o1-mini-2024-09-12
  • chatgpt-4o-latest

Fine Tune Models

  • ft:gpt-3.5-turbo:xxx

Others

  • Fine tune training (Not rigorously tested)
  • Function calling (Not rigorously tested)

Usage

Basic chat messages

import { GPTTokens } from 'gpt-tokens'

const usageInfo = new GPTTokens({
    model   : 'gpt-3.5-turbo-1106',
    messages: [
        { role: 'system', content: 'You are a helpful, pattern-following assistant that translates corporate jargon into plain English.' },
        { role: 'user',   content: 'This late pivot means we don\'t have time to boil the ocean for the client deliverable.' },
    ]
})

console.info('Used tokens: ', usageInfo.usedTokens)
console.info('Used USD: ',    usageInfo.usedUSD)

Fine tune training

import fs from 'fs'
import { GPTTokens } from 'gpt-tokens'

// filepath points to a JSONL training file (one JSON object per line)
const usageInfo = new GPTTokens({
    model   : 'gpt-3.5-turbo-1106',
    training: {
        data  : fs
                .readFileSync(filepath, 'utf-8')
                .split('\n')
                .filter(Boolean)
                .map(row => JSON.parse(row)),
        epochs: 7,
    },
})

console.info('Used tokens: ', usageInfo.usedTokens)
console.info('Used USD: ',    usageInfo.usedUSD)
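
Each element of training.data is one parsed line of the JSONL file. OpenAI's chat fine-tuning format expects every line to be a JSON object with a messages array; a minimal example line (contents are illustrative):

{"messages": [{"role": "system", "content": "You are a helpful assistant."}, {"role": "user", "content": "Hello"}, {"role": "assistant", "content": "Hi! How can I help you today?"}]}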

Fine tune chat messages

import { GPTTokens } from 'gpt-tokens'

const usageInfo = new GPTTokens({
    fineTuneModel: 'ft:gpt-3.5-turbo-1106:opensftp::8IWeqPit',
    messages     : [
        { role: 'system', content: 'You are a helpful assistant.' },
    ],
})

console.info('Used tokens: ', usageInfo.usedTokens)
console.info('Used USD: ',    usageInfo.usedUSD)

Function calling

import { GPTTokens } from 'gpt-tokens'

const usageInfo = new GPTTokens({
    model   : 'gpt-3.5-turbo-1106',
    messages: [
        { role: 'user', content: 'What\'s the weather like in San Francisco and Paris?' },
    ],
    tools   : [
        {
            type    : 'function',
            function: {
                name       : 'get_current_weather',
                description: 'Get the current weather in a given location',
                parameters : {
                    type      : 'object',
                    properties: {
                        location: {
                            type       : 'string',
                            description: 'The city and state, e.g. San Francisco, CA',
                        },
                        unit    : {
                            type: 'string',
                            enum: ['celsius', 'fahrenheit'],
                        },
                    },
                    required  : ['location'],
                },
            },
        },
    ]
})

console.info('Used tokens: ', usageInfo.usedTokens)
console.info('Used USD: ',    usageInfo.usedUSD)

Calculation method

Basic chat messages

Token calculation rules for prompt and completion:

If the role of the last message is not assistant, the entire messages array is treated as the prompt, and all of its content counts toward the prompt tokens.

If the role of the last message is assistant, that last message is treated as the completion returned by OpenAI, and only its content counts toward the completion tokens.
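
A minimal sketch of the two cases, using the same API as the examples above (message contents are placeholders):

import { GPTTokens } from 'gpt-tokens'

// Last message is from the user: the whole conversation counts as prompt tokens
const promptOnly = new GPTTokens({
    model   : 'gpt-3.5-turbo-1106',
    messages: [
        { role: 'system', content: 'You are a helpful assistant.' },
        { role: 'user',   content: 'Hello!' },
    ],
})

// Last message is from the assistant: it is counted as the completion
const withCompletion = new GPTTokens({
    model   : 'gpt-3.5-turbo-1106',
    messages: [
        { role: 'system',    content: 'You are a helpful assistant.' },
        { role: 'user',      content: 'Hello!' },
        { role: 'assistant', content: 'Hi! How can I help you today?' },
    ],
})

console.info('Prompt only: ',     promptOnly.usedTokens, promptOnly.usedUSD)
console.info('With completion: ', withCompletion.usedTokens, withCompletion.usedUSD)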

The calculation above is verified against the token-counting example in the openai-cookbook.

[Screenshot: token counts matching the openai-cookbook example]

Function calling

Thanks to hmarr for the write-up on counting tokens for function calling:

https://hmarr.com/blog/counting-openai-tokens/

Test in your project

node test.js yourOpenAIAPIKey
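
To check the estimate against a live response yourself, here is a hedged sketch using the official openai npm client (the model name and prompt are placeholders, and the bundled test.js may do this differently):

import OpenAI from 'openai'
import { GPTTokens } from 'gpt-tokens'

const client   = new OpenAI({ apiKey: process.env.OPENAI_API_KEY })
const messages = [{ role: 'user', content: 'Hello!' }]

// Ask the API for a completion and read the token usage it reports
const response = await client.chat.completions.create({
    model: 'gpt-3.5-turbo-1106',
    messages,
})

// Local estimate for the full conversation (prompt + returned completion)
const estimate = new GPTTokens({
    model   : 'gpt-3.5-turbo-1106',
    messages: [
        ...messages,
        { role: 'assistant', content: response.choices[0].message.content ?? '' },
    ],
})

console.info('Estimated tokens:', estimate.usedTokens)
console.info('Reported tokens :', response.usage?.total_tokens)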

Package last updated on 11 Oct 2024
