
The chardet npm package is a character encoding detector library, which allows you to determine the encoding of a given piece of text or a file. It is based on the character detection component of the ICU (International Components for Unicode) project and can be useful when dealing with text data that does not have encoding information.
Detecting encoding of a text buffer
This code reads a file and uses chardet to detect the encoding of its content. The 'detect' function takes a buffer and returns the name of the encoding it believes the text is in.
const chardet = require('chardet');
const fs = require('fs');
fs.readFile('/path/to/file', (err, data) => {
  if (err) throw err;
  const encoding = chardet.detect(data);
  console.log(encoding);
});
Detecting encoding with confidence
This code creates a buffer from a string and uses chardet's 'analyse' function to get an array of possible encodings along with their confidence scores.
const chardet = require('chardet');
const buffer = Buffer.from('Some text with unknown encoding');
const result = chardet.analyse(buffer);
console.log(result);
Detecting encoding of a file stream
This code reads a file as a stream, collects the chunks into a single buffer, and then uses chardet's 'detect' function on the combined data once the stream ends.
const chardet = require('chardet');
const fs = require('fs');
const chunks = [];
const stream = fs.createReadStream('/path/to/file');
stream.on('data', (chunk) => chunks.push(chunk));
stream.on('end', () => {
  const encoding = chardet.detect(Buffer.concat(chunks));
  console.log(encoding);
});
iconv-lite is a character encoding conversion library. Unlike chardet, which detects the encoding, iconv-lite is used to convert from one encoding to another. It supports many encodings and is often used in conjunction with chardet to first detect the encoding and then convert the text.
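For example, a minimal sketch of that pattern (assuming both chardet and iconv-lite are installed) detects the encoding of a file's raw bytes and then decodes them into a string:
const chardet = require('chardet');
const iconv = require('iconv-lite');
const fs = require('fs');
// Read the raw bytes, detect their encoding, then decode them into a JavaScript string.
const buffer = fs.readFileSync('/path/to/file');
const detected = chardet.detect(buffer) || 'UTF-8'; // fall back to UTF-8 if detection fails
const text = iconv.decode(buffer, detected);
console.log(text);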
jschardet is a port of the Python library chardet. It serves the same purpose as the chardet npm package: detecting the character encoding of text. The main differences lie in the implementation details and the specific encodings each library supports.
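As a quick illustration (a minimal sketch), jschardet reports an object with both the detected encoding and a confidence score:
const jschardet = require('jschardet');
// jschardet returns the encoding name together with a confidence value between 0 and 1.
const result = jschardet.detect(Buffer.from('Some text with unknown encoding'));
console.log(result); // e.g. { encoding: 'ascii', confidence: 1.0 }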
The encoding npm package is another library for encoding and decoding text. It provides a simpler API for converting between encodings but does not have the detection capabilities of chardet. It's often used when the encoding is already known.
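For instance, a minimal sketch using the encoding package's convert helper (which wraps iconv-lite) to convert a buffer whose encoding is already known:
const encoding = require('encoding');
// Convert a Latin-1 (ISO-8859-1) buffer to UTF-8; the source encoding must already be known.
const latin1 = Buffer.from([0x63, 0x61, 0x66, 0xe9]); // "café" in ISO-8859-1
const utf8 = encoding.convert(latin1, 'UTF-8', 'ISO-8859-1');
console.log(utf8.toString()); // "café"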
Chardet is a character detection module written in pure JavaScript (TypeScript). The module uses occurrence analysis to determine the most probable encoding.
npm i chardet
To return the encoding with the highest confidence:
import chardet from 'chardet';
const encoding = chardet.detect(Buffer.from('hello there!'));
// or
const encoding = await chardet.detectFile('/path/to/file');
// or
const encoding = chardet.detectFileSync('/path/to/file');
To return the full list of possible encodings, use the analyse method.
import chardet from 'chardet';
chardet.analyse(Buffer.from('hello there!'));
The returned value is an array of objects sorted by confidence value in descending order:
[
  { confidence: 90, name: 'UTF-8' },
  { confidence: 20, name: 'windows-1252', lang: 'fr' },
];
In the browser, you can use a Uint8Array instead of a Buffer:
import chardet from 'chardet';
chardet.analyse(new Uint8Array([0x68, 0x65, 0x6c, 0x6c, 0x6f]));
Sometimes, when the data set is huge and you want to optimize performance (with a trade-off of less accuracy), you can sample only the first N bytes of the buffer:
const encoding = await chardet.detectFile('/path/to/file', { sampleSize: 32 });
You can also specify where to begin reading from in the buffer:
const encoding = await chardet.detectFile('/path/to/file', {
  sampleSize: 32,
  offset: 128,
});
In both Node.js and browsers, all strings in memory are represented in UTF-16 encoding. This is a fundamental aspect of the JavaScript language specification. Therefore, you cannot use plain strings directly as input for chardet.analyse() or chardet.detect(). Instead, you need the original string data in the form of a Buffer or Uint8Array.
In other words, if you receive a piece of data over the network and want to detect its encoding, use the original data payload, not its string representation. By the time you convert data to a string, it will be in UTF-16 encoding.
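For example (a minimal sketch; the URL is just a placeholder), run detection on the raw response bytes rather than on the decoded text:
import chardet from 'chardet';
// Hypothetical endpoint; the point is to inspect the raw payload, not response.text().
const response = await fetch('https://example.com/legacy-text');
const bytes = new Uint8Array(await response.arrayBuffer());
console.log(chardet.detect(bytes));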
Note on TextEncoder: By default, it returns a UTF-8 encoded buffer, which means the buffer will not be in the original encoding of the string.
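A minimal sketch of what that means in practice: running detection on TextEncoder output describes the freshly produced UTF-8 bytes, not the encoding the data originally arrived in.
import chardet from 'chardet';
// TextEncoder always emits UTF-8 bytes, regardless of where the string came from.
const utf8Bytes = new TextEncoder().encode('héllo there!');
console.log(chardet.detect(utf8Bytes)); // typically reports UTF-8, since that is what the bytes now are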
Currently only these encodings are supported.
TypeScript? Yes. Type definitions are included.
FAQs
What is chardet? Character encoding detector.
The npm package chardet receives a total of 29,567,144 weekly downloads; as such, its popularity is classified as popular. We found that chardet demonstrates a healthy version release cadence and project activity because the last version was released less than a year ago. It has 0 open source maintainers collaborating on the project.