node-csv-streamer

Stream large datasets efficiently and export to CSV — framework-agnostic and NestJS-friendly.

🚀 Features

  • ✅ Stream data in chunks (pagination-friendly)
  • 🧠 Memory-efficient (uses Node.js streams)
  • 🔄 Supports sync or async data transformation
  • 🧩 Flexible field mapping (supports concatenation like firstName-lastName)
  • 🛠️ Works seamlessly with NestJS, Express, or pure Node.js

Install

npm install node-csv-streamer

🧠 Concept

node-csv-streamer is designed for large dataset exports where loading everything into memory isn’t feasible.

Instead of fetching all data at once, it streams your data in batches using a fetchFn(skip, limit, query) function.

This approach ensures low memory usage and smooth CSV generation — ideal for analytics dashboards, data exports, or reporting systems.

🧩 Fetch Function Pattern

Your data-fetching method must follow this pattern:

async function fetchFn(skip: number, limit: number, query: any): Promise<any[]> {
  // Fetch data from your source
  return dataSource.find(query).skip(skip).limit(limit); // just an example: a MongoDB/Mongoose-style query
}

  • skip → starting point for batch retrieval
  • limit → batch size (default is 1000)
  • query → optional filtering parameters

Each batch is streamed and written directly to the CSV file.
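
As a concrete illustration of this contract, here is a self-contained fetchFn sketch backed by an in-memory array; the record shape and the fetchUsersFromMemory name are invented for the example, and in practice skip and limit would map onto your database's pagination.

// Self-contained fetchFn sketch backed by an in-memory array.
// The record shape is invented for this example.
const records = [
  { email: "ada@example.com", firstName: "Ada", lastName: "Lovelace" },
  { email: "alan@example.com", firstName: "Alan", lastName: "Turing" },
];

async function fetchUsersFromMemory(skip: number, limit: number, query: any): Promise<any[]> {
  // slice() past the end returns [], which ends the streaming loop
  // (fetchFn is called repeatedly until no more records are returned).
  return records.slice(skip, skip + limit);
}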

Usage

When using node-csv-streamer, the assumption is you are working with a large data set or you want a memory efficient way to get and send data as csv to a writable stream.

To achieve this, your data fetching method must be setup to get data in batches. You can set the batch use the existing 1000.

Your data fetching method should be define in this pattern: fetchFn(skip,limit,query)

The skip is currently used to define how where to start from in fetching data, the limit is the batchSize(1000) to retrieve at each call to your source data location and query is the criteria for selecting the data.

This fetching pattern is used over using a method to gets all the data you need into memory.

Example: Express Controller

import { NodeCsvStream } from "node-csv-streamer";
import express from "express";

const app = express();

app.get("/download-csv", async (req, res) => {
  // Set response headers for CSV download
  res.setHeader("Content-Type", "text/csv");
  res.setHeader(
    "Content-Disposition",
    `attachment; filename="example-csv-file.csv"`
  );

  // Define mapping between CSV headers and your data source fields
  // You can combine multiple fields using a hyphen ("-")
  const csvHeaderMapping = {
    Email: "email",
    Name: "firstName-lastName",
    "Phone Number": "phoneNumber",
  };

  // Stream CSV data directly to the HTTP response
  await NodeCsvStream.download(
    res,
    csvHeaderMapping,
    aggregateEmployeeRecords, // your fetchFn(skip, limit, query); use this.aggregateEmployeeRecords.bind(this) inside a class
    {}, // optional query parameters
    undefined, // optional formatting function
    100 // batch size
  );
});
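
The package also advertises NestJS support. Under that assumption, an equivalent NestJS controller might look like the sketch below; ReportsController and fetchEmployees are hypothetical names, and the NodeCsvStream.download call simply mirrors the Express example above.

import { Controller, Get, Res } from "@nestjs/common";
import { Response } from "express";
import { NodeCsvStream } from "node-csv-streamer";
import { fetchEmployees } from "./employees.service"; // hypothetical fetchFn(skip, limit, query)

@Controller("reports")
export class ReportsController {
  @Get("employees.csv")
  async downloadEmployees(@Res() res: Response) {
    // Same headers as the Express example: tell the client a CSV file is coming
    res.setHeader("Content-Type", "text/csv");
    res.setHeader("Content-Disposition", `attachment; filename="employees.csv"`);

    // Stream batches straight to the HTTP response
    await NodeCsvStream.download(
      res,
      { Email: "email", Name: "firstName-lastName" },
      fetchEmployees,
      {}, // optional query parameters
      undefined, // optional formatting function
      1000 // batch size
    );
  }
}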

⚙️ API Reference

NodeCsvStream.download(res, fileMapping, fetchFn, query?, docsFormattingFn?, batchSize?)

| Parameter | Type | Description |
| --- | --- | --- |
| res | Writable | Writable stream (e.g., Express or NestJS Response object). Typically the HTTP response where the CSV will be streamed directly. |
| fileMapping | Record<string, string> | Defines the mapping between CSV column headers and data source keys. Supports concatenation with a hyphen (e.g., "firstName-lastName"). |
| fetchFn | (skip: number, limit: number, query: any) => Promise<any[]> | Function responsible for fetching data in batches. It is called repeatedly until no more records are returned. |
| query (optional) | any | Query object passed to fetchFn for data filtering or scoping. |
| docsFormattingFn (optional) | (doc: any, mapping: Record<string, string>) => any | Optional transformation function to modify each record before converting to CSV. |
| batchSize (optional) | number | Number of records to fetch per batch. Defaults to 1000. |
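
Based on the signature above, a docsFormattingFn might look like the following sketch. This is inferred from the type alone: the createdAt field is an invented example, and the assumption is that the returned value is the record that gets written.

// Hypothetical formatting function, inferred from the signature
// (doc: any, mapping: Record<string, string>) => any.
// createdAt is an invented field; the returned object is assumed
// to be the record that gets converted to a CSV row.
const docsFormattingFn = (doc: any, mapping: Record<string, string>) => ({
  ...doc,
  createdAt: new Date(doc.createdAt).toISOString(),
});

Pass it as the fifth argument in place of undefined in the examples above.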

🧩 Example Usage

await NodeCsvStream.download(
  res,
  {
    Email: "email",
    Name: "firstName-lastName",
    "Phone Number": "phoneNumber",
  },
  aggregateEmployeeRecords, // async function (skip, limit, query)
  { active: true }, // optional query
  undefined, // optional transform function
  500 // optional batch size
);
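
Since the API reference types res as a generic Writable, the same call should also work outside HTTP, for example streaming straight to a file. This is a sketch under that assumption; fetchUsers is a hypothetical fetchFn(skip, limit, query).

import { createWriteStream } from "node:fs";
import { NodeCsvStream } from "node-csv-streamer";
import { fetchUsers } from "./users"; // hypothetical fetchFn(skip, limit, query)

// Stream batches into a local file instead of an HTTP response.
const out = createWriteStream("users.csv");

await NodeCsvStream.download(
  out,
  { Email: "email", Name: "firstName-lastName" },
  fetchUsers,
  {}, // query
  undefined, // docsFormattingFn
  1000 // batch size
);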

🪶 License

MIT © 2025 — Maintained by Adeleke Bright
