Introduction
Data parsing using ES6 Async Iterators
Online documentation
What problem does this package solve?
Processing huge files in Node.js can be hard, especially when you need to send data to or retrieve data from external sources.
This package solves two problems:
- Parse big CSV | XML | JSON files in a memory-efficient way.
- Write data to CSV | JSON | XML files in a memory-efficient way.
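Every reader returns an async iterable, so a big file can be consumed one record at a time with for await...of. A minimal sketch (the file name ./products.csv and the record shape are assumptions for illustration):

import { csvRead } from "iterparse";

async function main() {
    // Records are yielded one by one, so the whole file is never held in memory.
    for await (const product of csvRead<{ id: string, price: number }>({ filePath: "./products.csv" })) {
        console.log(product.id, product.price);
    }
}

main();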
Installation
Async iterators are natively supported in Node.js
10.x and later. If you're using Node.js
8.x or 9.x, you need to run Node.js with the
--harmony_async_iteration
flag.
Async iterators are not supported in Node.js
6.x or 7.x, so if you're on one of those versions you need to upgrade Node.js
to use async iterators.
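As a quick sanity check that your runtime supports async iterators, you can run a tiny snippet like this (purely illustrative, unrelated to iterparse itself):

// Prints 0, 1 and 2 when async iterators are supported.
async function* numbers() {
    yield 0;
    yield 1;
    yield 2;
}

(async () => {
    for await (const n of numbers()) {
        console.log(n);
    }
})();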
$ npm install iterparse
Or using yarn
$ yarn add iterparse
Benchmarks
Run all benchmarks
git clone https://github.com/digimuza/iterparse.git &&
cd ./iterparse/benchmarks &&
yarn &&
yarn run
All benchmarks were executed on an AMD Ryzen 2600X
processor.
Benchmark source code is available here
CSV Parsing
Parsing 1 million records of randomly generated data.
Data was generated using this script
csv-parser - 2.8 s
iterparse - 3.4 s
fast-csv - 8.3 s
XML
Parsing 1 million records of randomly generated data.
Data was generated using this script
JSON
Parsing 1 million records of randomly generated data.
Data was generated using this script
Documentation
General usage
For processing iterators, I recommend using the IxJS library
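For example, the async iterable returned by a reader can be chained through IxJS operators before being written back out. A rough sketch, assuming the reader result exposes the IxJS filter operator as an instance method (the file names and record shape are made up for illustration):

import { csvRead, jsonWrite } from "iterparse";

// Keep only in-stock products and write them to a JSON file,
// one record at a time.
csvRead<{ id: string, price: number, qty: number }>({ filePath: "./feed.csv" })
    .filter((item) => item.qty > 0)
    .pipe(jsonWrite({ filePath: "./in_stock.json" }))
    .count();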
Real world examples
Usage in e-commerce
Big e-shops can have feeds with 100k or more products. Loading all of this data at once is really impractical.
const productCount = 100000; // products in the feed
const productSizeInKb = 20; // average size of a single product record
const totalMemoryConsumption = productCount * productSizeInKb * 1024; // ≈ 2 GB in bytes
Based on this calculation, we would use about 2 GB of memory just to load the data, and once we start working with the data the memory footprint will grow 6 to 10 times.
We could use Node.js streams to solve this problem, but working with streams is mind-bending and hard, especially when you need to manipulate data in a meaningful way and send it to an external destination: an API, a machine learning pipeline, a database, etc.
Some examples of what we can do with iterparse:
import { xmlRead, jsonWrite } from 'iterparse'

interface Video {
    id: string,
    url: string,
    description: string
}

async function getListOfYouTubeVideos(url: string): Promise<Video[]> {
    // ...fetch the page and extract its videos...
    return [/* ... */]
}

xmlRead<Video>({ filePath: "./big_product_feed.xml", pattern: 'product' })
    .map(async ({ url }) => {
        return getListOfYouTubeVideos(url)
    })
    .pipe(jsonWrite({ filePath: "./small_feed_with_videos.json" }))
    .count()
Keep in mind this is a trivial example, but it illustrates how to process huge amounts of data.
A simple CSV to JSON converter:
import { csvRead, jsonWrite } from "iterparse";

csvRead({ filePath: "./big_csv_file.csv" })
    .pipe(jsonWrite({ filePath: "big_json_file.json" }))
    .count();
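Converting in the other direction should look much the same. The sketch below assumes iterparse exposes jsonRead and csvWrite counterparts that mirror the csvRead/jsonWrite API used above:

import { jsonRead, csvWrite } from "iterparse";

// jsonRead / csvWrite names are assumed to follow the same pattern as csvRead / jsonWrite.
jsonRead({ filePath: "./big_json_file.json" })
    .pipe(csvWrite({ filePath: "big_csv_file.csv" }))
    .count();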
Data aggregation
import { csvRead } from "iterparse";

csvRead<{ id: string, price: number, qty: number, margin: number }>({ filePath: "./sales.csv" })
    .reduce((acc, item) => acc + ((item.qty * item.price) * item.margin), 0)
    .then((profit) => {
        console.log(`Yearly profit ${profit}$`)
    });
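The same pattern extends to grouped aggregations. A rough sketch that accumulates profit per product id into a Map, reusing the ./sales.csv columns from the example above:

import { csvRead } from "iterparse";

csvRead<{ id: string, price: number, qty: number, margin: number }>({ filePath: "./sales.csv" })
    .reduce((acc, item) => {
        // Accumulate profit per product id instead of a single total.
        const profit = item.qty * item.price * item.margin;
        acc.set(item.id, (acc.get(item.id) ?? 0) + profit);
        return acc;
    }, new Map<string, number>())
    .then((profitByProduct) => {
        for (const [id, profit] of profitByProduct) {
            console.log(`${id}: ${profit}$`);
        }
    });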
Data extraction from a paginated API:
import fetch from 'node-fetch'
import { jsonWrite } from 'iterparse'
async function* extractBreweries() {
    let page = 0
    while (true) {
        const url = `https://api.openbrewerydb.org/breweries?page=${page}`
        console.log(`Extracting: "${url}"`)
        const response = await fetch(url)
        if (!response.ok) {
            throw new Error(`Failed to get ${url}`)
        }
        const body = await response.json()
        if (Array.isArray(body) && body.length !== 0) {
            for (const item of body) {
                yield item
            }
            page++
            continue
        }
        // Stop once the API returns an empty page
        return
    }
}
jsonWrite(extractBreweries(), { filePath: 'breweries.json' }).count()