Dependency free since 2023!
Hyparquet is a lightweight, pure JavaScript library for parsing Apache Parquet files. Apache Parquet is a popular columnar storage format that is widely used in data engineering, data science, and machine learning applications for efficiently storing and processing large datasets.
Hyparquet allows you to read and extract data from Parquet files directly in JavaScript environments, both in Node.js and in the browser. It is designed to be fast, memory-efficient, and easy to use.
Online parquet file reader demo available at:
https://hyparam.github.io/hyparquet/
Why make a new parquet parser? First, existing libraries like parquetjs are officially "inactive". Second, they do not support the kind of stream processing needed to make a really performant parser in the browser. Finally, having no dependencies keeps hyparquet lean and easy to package and deploy.
Install the hyparquet package from npm:
npm install hyparquet
To read the entire contents of a parquet file in a Node.js environment:
const { asyncBufferFromFile, parquetRead } = await import('hyparquet')
await parquetRead({
file: await asyncBufferFromFile(filename),
onComplete: data => console.log(data)
})
Hyparquet supports asynchronous fetching of parquet files over a network.
const { asyncBufferFromUrl, parquetRead } = await import('https://cdn.jsdelivr.net/npm/hyparquet/src/hyparquet.min.js')
const url = 'https://hyperparam-public.s3.amazonaws.com/bunnies.parquet'
await parquetRead({
file: await asyncBufferFromUrl(url),
onComplete: data => console.log(data)
})
You can read just the metadata, including schema and data statistics, using the parquetMetadata function:
const { parquetMetadata } = await import('hyparquet')
const fs = await import('fs')
const buffer = fs.readFileSync('example.parquet')
const arrayBuffer = new Uint8Array(buffer).buffer
const metadata = parquetMetadata(arrayBuffer)
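The returned metadata object mirrors the parquet FileMetaData struct. As a sketch, a small hypothetical helper (not part of hyparquet) that pulls out a few common fields, assuming the object exposes num_rows, row_groups, and schema properties:

```javascript
// Hypothetical helper: summarize a parquet metadata object, assuming it
// mirrors the thrift FileMetaData struct.
function summarizeMetadata(metadata) {
  return {
    rows: Number(metadata.num_rows), // row count (may arrive as a BigInt)
    rowGroups: metadata.row_groups.length, // number of row groups
    columns: metadata.schema.length - 1, // schema elements minus the root
  }
}
```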
If you're in a browser environment, you'll probably get parquet file data from either a drag-and-dropped file from the user, or downloaded from the web.
To load parquet data in the browser from a remote server using fetch:
import { parquetMetadata } from 'hyparquet'
const res = await fetch(url)
const arrayBuffer = await res.arrayBuffer()
const metadata = parquetMetadata(arrayBuffer)
To parse parquet files from a user drag-and-drop action, see example in index.html.
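A drop handler might be sketched as follows. This is an illustrative sketch, not the code from index.html: it assumes an element wired up with addEventListener('drop', onDrop) and a dragover listener that calls preventDefault(), and it assumes parquetRead accepts a plain ArrayBuffer in place of an AsyncBuffer (its synchronous slice can be awaited):

```javascript
// Hypothetical drop handler sketch: read each dropped File as an
// ArrayBuffer and hand it to parquetRead.
async function onDrop(event) {
  event.preventDefault()
  const { parquetRead } = await import('hyparquet')
  for (const file of event.dataTransfer.files) {
    // File extends Blob, so arrayBuffer() yields the raw bytes
    const arrayBuffer = await file.arrayBuffer()
    await parquetRead({
      file: arrayBuffer,
      onComplete: data => console.log(file.name, data),
    })
  }
}
```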
To read large parquet files, it is recommended that you filter by row and column. Hyparquet is designed to load only the minimal amount of data needed to fulfill a query. You can filter rows by number, or columns by name:
import { parquetRead } from 'hyparquet'
await parquetRead({
file,
columns: ['colA', 'colB'], // include columns colA and colB
rowStart: 100,
rowEnd: 200,
onComplete: data => console.log(data),
})
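Assuming onComplete receives rows as positional arrays of values in column order (the default output shape), a small hypothetical helper (not part of hyparquet) can zip them with the requested column names:

```javascript
// Hypothetical helper: turn rows returned as positional arrays into
// objects keyed by the requested column names.
function rowsToObjects(columns, rows) {
  return rows.map(row =>
    Object.fromEntries(columns.map((name, i) => [name, row[i]])))
}
```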
You can provide an AsyncBuffer, which is like a JavaScript ArrayBuffer except that its slice method returns Promise<ArrayBuffer>.
interface AsyncBuffer {
byteLength: number
slice(start: number, end?: number): Promise<ArrayBuffer>
}
You can read parquet files asynchronously using HTTP Range requests, so that only the necessary byte ranges from a url will be fetched:
import { parquetRead } from 'hyparquet'
const url = 'https://hyperparam-public.s3.amazonaws.com/wiki-en-00000-of-00041.parquet'
const byteLength = 420296449
await parquetRead({
file: { // AsyncBuffer
byteLength,
async slice(start, end) {
const headers = new Headers()
headers.set('Range', `bytes=${start}-${end - 1}`)
const res = await fetch(url, { headers })
return res.arrayBuffer()
},
},
onComplete: data => console.log(data),
})
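The byteLength above is hardcoded. Hyparquet's asyncBufferFromUrl handles this discovery for you, but a manual sketch might fetch the size from the Content-Length header with a HEAD request (this assumes the server sends Content-Length, which is not guaranteed):

```javascript
// Hypothetical helper: discover a remote file's size via a HEAD request,
// so byteLength need not be hardcoded.
async function byteLengthFromUrl(url) {
  const res = await fetch(url, { method: 'HEAD' })
  if (!res.ok) throw new Error(`HEAD ${url} failed: ${res.status}`)
  const length = res.headers.get('Content-Length')
  if (!length) throw new Error('server did not send Content-Length')
  return Number(length)
}
```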
The parquet format is sprawling: it includes options for a wide array of compression schemes, encoding types, and data structures.
Supported parquet encodings:
Supporting every possible compression codec available in parquet would blow up the size of the hyparquet library. In practice, most parquet files use snappy compression.
Parquet compression types supported by default: uncompressed and snappy.
You can provide custom compression codecs using the compressors option.
The most common compression codec used in parquet is snappy compression. Hyparquet includes a built-in snappy decompressor written in javascript.
We developed hysnappy to make parquet parsing even faster. Hysnappy is a snappy decompression codec written in C, compiled to WASM.
To use hysnappy for faster parsing of large parquet files, override the SNAPPY compressor for hyparquet:
import { parquetRead } from 'hyparquet'
import { snappyUncompressor } from 'hysnappy'
await parquetRead({
file,
compressors: {
SNAPPY: snappyUncompressor(),
},
onComplete: console.log,
})
Parsing a 420MB wikipedia parquet file using hysnappy reduces parsing time by 40% (from 4.1s to 2.3s).
You can include support for ALL parquet compression codecs using the hyparquet-compressors library.
import { parquetRead } from 'hyparquet'
import { compressors } from 'hyparquet-compressors'
await parquetRead({ file, compressors, onComplete: console.log })
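If you only need one extra codec, you can also sketch your own compressors entry. This assumes each entry maps a parquet codec name to a function taking the compressed bytes and the expected uncompressed length and returning a Uint8Array; the example below uses Node's built-in zlib for GZIP:

```javascript
import { gunzipSync } from 'node:zlib'

// Sketch of a custom codec map (assumed signature: each entry takes
// (compressedBytes: Uint8Array, outputLength: number) => Uint8Array).
const compressors = {
  GZIP: (input, outputLength) => {
    const output = new Uint8Array(gunzipSync(input))
    if (output.length !== outputLength) throw new Error('unexpected output length')
    return output
  },
}
```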
Contributions are welcome!
Hyparquet development is supported by an open-source grant from Hugging Face :hugs: