llama-tokenizer-js
The first JavaScript tokenizer for LLaMA that works client-side in the browser (and also in Node).
The intended use case is calculating token counts accurately on the client side.
There are two ways to import the tokenizer:
Option 1: Install as an npm package and import as ES6 module
npm install llama-tokenizer-js
import llamaTokenizer from 'llama-tokenizer-js'
console.log(llamaTokenizer.encode("Hello world!").length)
Option 2: Load as an ES6 module with <script> tags in your HTML
<script type="module" src="https://belladoreai.github.io/llama-tokenizer-js/llama-tokenizer.js"></script>
Once you have the module imported, you can encode or decode with it. Training is not supported.
When used in the browser, llama-tokenizer-js pollutes the global namespace with llamaTokenizer.
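For example, once the script has loaded, you can call the global directly without an import statement (a minimal sketch; the token count follows from the documented encoding below):
// The <script type="module"> tag above exposes llamaTokenizer on the
// global namespace, so browser code can call it directly.
console.log(llamaTokenizer.encode("Hello world!").length) // 4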
Encode:
llamaTokenizer.encode("Hello world!")
> [1, 15043, 3186, 29991]
Decode:
llamaTokenizer.decode([1, 15043, 3186, 29991])
> 'Hello world!'
Note that a special "beginning of sentence" token and a preceding space are added by default when encoding (and correspondingly expected when decoding). These affect the token count. There may be some use cases where you don't want to add them, and you can pass additional boolean parameters for those cases. For example, if you want to decode an individual token:
llamaTokenizer.decode([3186], false, false)
> 'Hello'
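Correspondingly, if you want a token count that excludes the beginning-of-sentence token and the preceding space, you can pass the booleans to encode as well (a sketch assuming encode accepts them in the same order as decode):
// Assumption: the two booleans disable the BOS token and the preceding
// space, mirroring the decode call above.
const ids = llamaTokenizer.encode("Hello world!", false, false)
console.log(ids.length) // token count without the BOS token and preceding space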
You can run tests with:
llamaTokenizer.runTests()
The test suite is small, but it covers different edge cases very well.
Note that tests can be run both in the browser and in Node (this is necessary because some parts of the code work differently in the two environments).
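For example, in Node you can run the suite from a short script (a minimal sketch; the same call works in a browser console):
// Run the bundled test suite in Node.
import llamaTokenizer from 'llama-tokenizer-js'
llamaTokenizer.runTests()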
As mentioned, llama-tokenizer-js is the first JavaScript tokenizer for LLaMA that works client-side in the browser.
The tokenizer used by LLaMA is a SentencePiece Byte-Pair Encoding tokenizer.
Note that this is a tokenizer for LLaMA models, and it's different from the tokenizers used by OpenAI models. If you need a tokenizer for OpenAI models, I recommend gpt-tokenizer.
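To give a rough idea of what Byte-Pair Encoding does, here is a simplified sketch of a greedy BPE merge loop (illustrative only, not the library's actual implementation; vocab and mergePriority are hypothetical inputs):
// vocab: Map from token string to id.
// mergePriority: Map from "left right" pair to merge priority (lower merges first).
function bpeEncode(text, vocab, mergePriority) {
  // SentencePiece represents spaces with the ▁ character
  let tokens = Array.from(text.replaceAll(" ", "▁"))
  while (true) {
    // Find the adjacent pair with the best (lowest) merge priority
    let best = -1
    let bestPriority = Infinity
    for (let i = 0; i < tokens.length - 1; i++) {
      const p = mergePriority.get(tokens[i] + " " + tokens[i + 1])
      if (p !== undefined && p < bestPriority) {
        best = i
        bestPriority = p
      }
    }
    if (best === -1) break // no applicable merges left
    // Replace the winning pair with its concatenation
    tokens.splice(best, 2, tokens[best] + tokens[best + 1])
  }
  return tokens.map((t) => vocab.get(t))
}
The actual LLaMA tokenizer additionally handles details this sketch omits, such as byte-level fallback for characters missing from the vocabulary.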
What is this tokenizer compatible with? All LLaMA models which have been trained on top of the checkpoints (model weights) leaked by Facebook in early 2023.
Incompatible LLaMA models are those which have been trained from scratch, not on top of the checkpoints leaked by Facebook. For example, OpenLLaMA models are incompatible.
When you see a new LLaMA model released, this tokenizer is most likely compatible with it without any modifications. If you are unsure, try it and see if the token ids are the same (compared to running the model with, for example, oobabooga webui). You can find great test input/output samples by searching for runTests inside llama-tokenizer.js.
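For example, a quick check might compare the ids produced by llama-tokenizer-js against reference ids obtained from the model itself (a sketch; the reference array below is the documented encoding of "Hello world!"):
import llamaTokenizer from 'llama-tokenizer-js'

// Reference ids produced by the model's own tokenizer (e.g. captured from
// oobabooga webui); these particular values come from the example above.
const reference = [1, 15043, 3186, 29991]
const actual = llamaTokenizer.encode("Hello world!")

const matches = actual.length === reference.length &&
  actual.every((id, i) => id === reference[i])
console.log(matches ? "token ids match" : "token ids differ (tokenizer may be incompatible)")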
If you want to modify this library to support a new LLaMA tokenizer (new as in trained from scratch, not using the same tokenizer as most LLaMA models do), you should be able to do so by swapping the vocabulary and merge data (the two long variables near the end of the llama-tokenizer.js file). Below is Python code that you can use for this.
import base64
import json
import struct

# Load the tokenizer.json file that was distributed with the LLaMA model
with open(r"tokenizer.json", 'r', encoding='utf-8') as f:
    d = json.load(f)

# Extract the vocabulary as a list of token strings
vocab = []
for token in d['model']['vocab']:
    vocab.append(token)

# Transform the vocabulary into a UTF-8 string delimited by line breaks,
# base64 encode it, and save it to a file
with open('vocab_base64.txt', 'wb') as f:
    f.write(base64.b64encode('\n'.join(vocab).encode("utf-8")))

# Extract the merge data as a list of strings, where location in the list indicates the priority of the merge.
# Example: one merge might be "gr a" (indicating that "gr" and "a" merge into "gra")
merges = []
for merge in d['model']['merges']:
    merges.append(merge)

# Create a helper map where keys are token strings and values are their positions in the vocab.
# Note that positions in the vocabulary do not have any special meaning in the tokenizer;
# we are merely using them to aid with compressing the data.
vocab_map = {}
for i, v in enumerate(vocab):
    vocab_map[v] = i

# Each merge can be represented with 2 integers, e.g. "merge the 5th and the 11th token in vocab".
# Since the vocabulary has fewer than 2^16 entries, each integer can be represented with 16 bits (2 bytes).
# We are going to compress the merge data into a binary format, where
# the first 4 bytes define the first merge, the next 4 bytes define the second merge, and so on.
integers = []
for merge in merges:
    f, t = merge.split(" ")
    integers.append(vocab_map[f])
    integers.append(vocab_map[t])

# Pack the integers into bytes using the 'H' format (2 bytes per unsigned integer)
byte_array = struct.pack(f'{len(integers)}H', *integers)

# Save the byte array as a base64 encoded file
with open('merges_binary.bin', 'wb') as file:
    file.write(base64.b64encode(byte_array))
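As a quick sanity check on the generated files, you can decode them back in Node and inspect the first merges (a hedged sketch: it assumes the file names from the script above, and that struct.pack wrote the integers little-endian, which is the native byte order on most platforms):
import { readFileSync } from 'node:fs'

// Decode the base64 vocabulary back into an array of token strings
const vocab = Buffer.from(readFileSync('vocab_base64.txt', 'utf-8'), 'base64')
  .toString('utf-8')
  .split('\n')

// Decode the merge data: each merge is two 16-bit integers indexing into vocab
const mergeBytes = Buffer.from(readFileSync('merges_binary.bin', 'utf-8'), 'base64')
const merges = []
for (let i = 0; i + 3 < mergeBytes.length; i += 4) {
  // Assumes little-endian 16-bit integers, matching native struct.pack order
  const left = mergeBytes.readUInt16LE(i)
  const right = mergeBytes.readUInt16LE(i + 2)
  merges.push(vocab[left] + ' ' + vocab[right])
}

console.log(`decoded ${vocab.length} tokens and ${merges.length} merges`)
console.log('highest-priority merge:', merges[0])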
You are free to use llama-tokenizer-js for basically whatever you want (MIT license).
You are not required to give anything in exchange, but I kindly ask that you give back by linking to https://belladore.ai/tools in an appropriate place in your website. For example, you might link with the text "Using llama-tokenizer-js by belladore.ai" or something similar.