csv - npm Package Compare versions

Comparing version 0.2.6 to 0.2.7

lib/index.js

doc/columns.md

@@ -28,3 +28,3 @@ ---

When working with fields, the `transform` method and the `data`
events receive their `data` parameter as an object instead of an
events receive their `row` parameter as an object instead of an
array where the keys are the field names.

@@ -43,5 +43,5 @@

})
.transform(function(data){
data.name = data.firstname + ' ' + data.lastname
return data;
.transform(function(row){
row.name = row.firstname + ' ' + row.lastname
return row;
});
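
A minimal self-contained version of the snippet above, as a sketch: it assumes a hypothetical `columns.in` file whose first line is a header such as `firstname,lastname`, and uses only the `from.path`, `columns`, `transform` and `to.stream` calls shown elsewhere in this diff.

```javascript
var csv = require('csv');

csv()
  .from.path(__dirname + '/columns.in', {
    columns: true                       // first line becomes the object keys
  })
  .to.stream(process.stdout, {
    columns: ['firstname', 'name'],     // only emit these fields
    end: false                          // keep stdout open
  })
  .transform(function(row){
    // `row` is an object because `columns: true` was set
    row.name = row.firstname + ' ' + row.lastname;
    return row;
  });
```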

@@ -48,0 +48,0 @@

@@ -5,3 +5,3 @@ ---

title: "Reading data from a source"
date: 2012-11-05T20:57:04.723Z
date: 2013-01-05T06:10:44.660Z
comments: false

@@ -8,0 +8,0 @@ sharing: false

@@ -5,3 +5,3 @@ ---

title: "Node CSV"
date: 2012-11-16T16:02:50.333Z
date: 2013-01-05T06:10:44.661Z
comments: false

@@ -16,3 +16,3 @@ sharing: false

This project provides CSV parsing and has been tested and used
on a large input file (over 2Gb).
on large input files.

@@ -61,3 +61,3 @@ * Follow the Node.js streaming API

2. Direct output to a file path
3. Transform the data (optional)
3. Transform each row (optional)
4. Listen to events (optional)

@@ -72,8 +72,8 @@

.to.path(__dirname+'/sample.out')
.transform( function(data){
data.unshift(data.pop());
return data;
.transform( function(row){
row.unshift(row.pop());
return row;
})
.on('record', function(data,index){
console.log('#'+index+' '+JSON.stringify(data));
.on('record', function(row,index){
console.log('#'+index+' '+JSON.stringify(row));
})
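
For reference, a complete version of the fragment above, combining the four steps listed in this hunk; a sketch assuming a hypothetical `sample.in` file next to the script.

```javascript
var csv = require('csv');

csv()
  .from.path(__dirname + '/sample.in')     // 1. read from a source
  .to.path(__dirname + '/sample.out')      // 2. direct output to a file path
  .transform(function(row){                // 3. transform each row (optional)
    row.unshift(row.pop());                //    move the last field to the front
    return row;
  })
  .on('record', function(row, index){      // 4. listen to events (optional)
    console.log('#' + index + ' ' + JSON.stringify(row));
  })
  .on('end', function(count){
    console.log('Number of lines: ' + count);
  })
  .on('error', function(error){
    console.log(error.message);
  });
```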

@@ -148,3 +148,3 @@ .on('end', function(count){

be called if transform return `null` since the record is skipped.
The callback provides two arguments. `data` is the CSV line being processed (an array or an object)
The callback provides two arguments. `row` is the CSV line being processed (an array or an object)
and `index` is the index number of the line starting at zero
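
A sketch of the skipping behaviour mentioned above: a transform that returns `null` drops the record, so the `record` handler never sees it (the inline input here is only an illustration).

```javascript
var csv = require('csv');

csv()
  .from('"1","a"\n"2","b"\n"3","c"')
  .to(console.log)
  .transform(function(row, index){
    // Returning null skips the record entirely: no `record` event,
    // nothing written to the destination.
    return index === 1 ? null : row;
  })
  .on('record', function(row, index){
    console.log('kept #' + index + ': ' + JSON.stringify(row));
  });
```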

@@ -151,0 +151,0 @@ * *data*

@@ -5,3 +5,3 @@ ---

title: "Parsing"
date: 2012-10-09T16:24:28.047Z
date: 2013-01-05T06:10:44.660Z
comments: false

@@ -8,0 +8,0 @@ sharing: false

@@ -5,3 +5,3 @@ ---

title: "Stringifier"
date: 2012-11-19T14:18:05.878Z
date: 2013-01-05T06:10:44.661Z
comments: false

@@ -8,0 +8,0 @@ sharing: false

@@ -5,3 +5,3 @@ ---

title: "Writing data to a destination"
date: 2012-11-16T16:44:20.771Z
date: 2013-01-27T18:19:43.154Z
comments: false

@@ -78,2 +78,3 @@ sharing: false

* `end` Prevent calling `end` on the destination, so that destination is no longer writable.
* `eof` Add a linebreak on the last line; defaults to false, expects a character, or uses '\n' if the value is set to "true"
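
A sketch of the new `eof` option introduced in this hunk, assuming a hypothetical `sample.out` destination; when set to `true` a trailing '\n' should be appended after the last record.

```javascript
var csv = require('csv');

csv()
  .from('"1","2","3"\n"a","b","c"')
  .to.path(__dirname + '/sample.out', {
    eof: true   // append a line break after the last record ('\n' when set to true)
  });
```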

@@ -94,4 +95,4 @@ The end options is similar to passing `{end: false}` option in <a name="stream.pipe"></a>

csv()
.from( '"1","2","3","4"\n"a","b","c","d"' )
.to( function(data, count){} )
.from( '"1","2","3"\n"a","b","c"' )
.to.string( function(data, count){} )

@@ -101,3 +102,3 @@ ```

Callback is called with 2 arguments:
* data Stringify CSV string
* data Entire CSV as a string
* count Number of stringified records
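
A filled-in version of the `to.string` call above, as a sketch; the callback receives exactly the two arguments described here.

```javascript
var csv = require('csv');

csv()
  .from('"1","2","3"\n"a","b","c"')
  .to.string(function(data, count){
    // `data` is the whole output CSV as one string, `count` the number of records
    console.log('csv: ' + JSON.stringify(data));
    console.log('records: ' + count);
  });
```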

@@ -123,1 +124,20 @@

<a name="to.array"></a>
`to.array(callback, [options])`
--------------------------
Provide the output records to a callback as an array.
```javascript
csv()
.from( '"1","2","3"\n"a","b","c"' )
.to.array( function(data, count){} )
```
Callback is called with 2 arguments:
* data Entire CSV as an array of records
* count Number of stringified records
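
And the `to.array` counterpart, again a sketch with the callback filled in.

```javascript
var csv = require('csv');

csv()
  .from('"1","2","3"\n"a","b","c"')
  .to.array(function(data, count){
    // `data` is an array of records, e.g. [['1','2','3'], ['a','b','c']]
    console.log(data.length === count); // true
  });
```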

@@ -5,3 +5,3 @@ ---

title: "Transforming data"
date: 2012-10-18T13:11:44.066Z
date: 2013-01-05T06:10:44.660Z
comments: false

@@ -15,3 +15,3 @@ sharing: false

Transformation may occur synchronously or asynchronously dependending
Transformations may occur synchronously or asynchronously depending
on the provided transform callback and its declared arguments length.

@@ -21,3 +21,3 @@

* *data*
* *row*
CSV record

@@ -29,8 +29,8 @@ * *index*

Unless you specify the `columns` read option, `data` are provided
as arrays, otherwise they are objects with keys matching columns
names.
Unless you specify the `columns` read option, the `row` argument will be
provided as an array, otherwise it will be provided as an object with keys
matching columns names.
In synchronous mode, the contract is quite simple, you receive an array
of fields for each record and return the transformed record.
In synchronous mode, the contract is quite simple, you will receive an array
of fields for each record and the transformed array should be returned.

@@ -59,4 +59,4 @@ In asynchronous mode, it is your responsibility to call the callback

.to(console.log)
.transform(function(data, index){
return data.reverse()
.transform(function(row, index){
return row.reverse()
});

@@ -75,5 +75,5 @@ // Executing `node samples/transform.js`, print:

.to(console.log)
.transform(function(data, index, callback){
.transform(function(row, index, callback){
process.nextTick(function(){
callback(null, data.reverse());
callback(null, row.reverse());
});

@@ -93,4 +93,4 @@ });

.to(console.log)
.transform(function(data, index){
return (index>0 ? ',' : '') + data[0] + ":" + data[2] + ' ' + data[1];
.transform(function(row, index){
return (index>0 ? ',' : '') + row[0] + ":" + row[2] + ' ' + row[1];
});

@@ -97,0 +97,0 @@ // Executing `node samples/transform.js`, print:

csv = require('./lib/csv');
csv = require('./lib');
csv.generator = require('./lib/generator');
module.exports = csv;

@@ -1,3 +0,3 @@

// Generated by CoffeeScript 1.3.3
var convert_anchor, convert_code, date, docs, each, fs, getindent, glob, mecano, unindent;
// Generated by CoffeeScript 1.4.0
var convert_anchor, convert_code, date, docs, each, fs, getindent, mecano, unindent;

@@ -10,4 +10,2 @@ fs = require('fs');

glob = require('glob');
date = function() {

@@ -76,3 +74,3 @@ var d;

docs = ['csv', 'from', 'to', 'transformer', 'parser', 'stringifier'];
docs = ['index', 'from', 'to', 'transformer', 'parser', 'stringifier'];

@@ -82,3 +80,3 @@ each(docs).parallel(true).on('item', function(file, next) {

source = "" + __dirname + "/" + file + ".coffee";
destination = "" + __dirname + "/../doc/" + (file === 'csv' ? 'index' : file) + ".md";
destination = "" + __dirname + "/../doc/" + file + ".md";
return fs.readFile(source, 'ascii', function(err, text) {

@@ -118,16 +116,14 @@ var content, match, re, re_title, title;

}
return glob("" + __dirname + "/../doc/*.md", function(err, docs) {
return each(docs).on('item', function(file, next) {
return mecano.copy({
source: file,
destination: destination,
force: true
}, next);
}).on('both', function(err) {
if (err) {
return console.error(err);
}
return console.log("Documentation published: " + destination);
});
return each().files("" + __dirname + "/../doc/*.md").on('item', function(file, next) {
return mecano.copy({
source: file,
destination: destination,
force: true
}, next);
}).on('both', function(err) {
if (err) {
return console.error(err);
}
return console.log("Documentation published: " + destination);
});
});

@@ -1,2 +0,2 @@

// Generated by CoffeeScript 1.3.3
// Generated by CoffeeScript 1.4.0
var Stream, fs, path, utils, _ref;

@@ -106,11 +106,12 @@

* `delimiter` Set the field delimiter, one character only, defaults to comma.
* `quote` Set the field delimiter, one character only, defaults to double quotes.
* `escape` Set the field delimiter, one character only, defaults to double quotes.
* `columns` List of fields or true if autodiscovered in the first CSV line, default to null. Impact the `transform` argument and the `data` event by providing an object instead of an array, order matters, see the transform and the columns sections for more details.
* `flags` Used to read a file stream, default to the r charactere.
* `encoding` Encoding of the read stream, defaults to 'utf8', applied when a readable stream is created.
* `trim` If true, ignore whitespace immediately around the delimiter, defaults to false.
* `ltrim` If true, ignore whitespace immediately following the delimiter (i.e. left-trim all fields), defaults to false.
* `rtrim` If true, ignore whitespace immediately preceding the delimiter (i.e. right-trim all fields), defaults to false.
* `delimiter` Set the field delimiter, one character only, defaults to comma.
* `rowDelimiter` String used to delimit record rows or a special value; special values are 'auto', 'unix', 'mac', 'windows', 'unicode'; defaults to 'auto' (discovered in source or 'unix' if no source is specified).
* `quote` Optional character surrounding a field, one character only, defaults to double quotes.
* `escape` Set the escape character, one character only, defaults to double quotes.
* `columns` List of fields or true if autodiscovered in the first CSV line, default to null. Impact the `transform` argument and the `data` event by providing an object instead of an array, order matters, see the transform and the columns sections for more details.
* `flags` Used to read a file stream, default to the r charactere.
* `encoding` Encoding of the read stream, defaults to 'utf8', applied when a readable stream is created.
* `trim` If true, ignore whitespace immediately around the delimiter, defaults to false.
* `ltrim` If true, ignore whitespace immediately following the delimiter (i.e. left-trim all fields), defaults to false.
* `rtrim` If true, ignore whitespace immediately preceding the delimiter (i.e. right-trim all fields), defaults to false.
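
A sketch exercising a few of these read options together; the semicolon delimiter and the hypothetical `data.csv` file are arbitrary illustrations, not defaults.

```javascript
var csv = require('csv');

// Hypothetical semicolon-delimited input file `data.csv`.
csv()
  .from.path(__dirname + '/data.csv', {
    delimiter: ';',        // field separator (default is ',')
    quote: '"',            // optional character surrounding a field
    escape: '"',           // escape character inside quoted fields
    rowDelimiter: 'auto',  // detect the row delimiter from the source
    trim: true,            // drop whitespace around the delimiter
    columns: null          // keep rows as arrays (true would use the first line as keys)
  })
  .to(console.log);
```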

@@ -117,0 +118,0 @@ Additionnaly, in case you are working with stream, you can pass all

@@ -1,2 +0,2 @@

// Generated by CoffeeScript 1.3.3
// Generated by CoffeeScript 1.4.0
var Generator, Stream, util;

@@ -3,0 +3,0 @@

@@ -1,2 +0,2 @@

// Generated by CoffeeScript 1.3.3
// Generated by CoffeeScript 1.4.0
/*

@@ -3,0 +3,0 @@ Input and output options

@@ -1,2 +0,2 @@

// Generated by CoffeeScript 1.3.3
// Generated by CoffeeScript 1.4.0
var EventEmitter, Parser;

@@ -26,5 +26,9 @@

this.state = csv.state;
this.quoted = false;
this.commented = false;
this.lines = 0;
this.buf = '';
this.quoting = false;
this.field = '';
this.lastC = '';
this.nextChar = null;
this.line = [];
return this;

@@ -37,3 +41,3 @@ };

`parse(chars)`
`write(chars)`
--------------

@@ -47,7 +51,8 @@

Parser.prototype.parse = function(chars) {
var c, csv, escapeIsQuote, i, isEscape, isQuote, isReallyEscaped, l, nextChar;
Parser.prototype.write = function(chars, end) {
var areNextCharsRowDelimiters, char, csv, delimLength, escapeIsQuote, i, isEscape, isQuote, isReallyEscaped, l, _results;
csv = this.csv;
chars = '' + chars;
chars = this.buf + chars;
l = chars.length;
delimLength = this.options.rowDelimiter ? this.options.rowDelimiter.length : 0;
i = 0;

@@ -58,104 +63,97 @@ if (this.lines === 0 && csv.options.from.encoding === 'utf8' && 0xFEFF === chars.charCodeAt(0)) {

while (i < l) {
c = chars.charAt(i);
switch (c) {
case this.options.escape:
case this.options.quote:
if (this.commented) {
break;
if ((i + delimLength >= l && chars.substr(i, this.options.rowDelimiter.length) !== this.options.rowDelimiter) && !end) {
break;
}
char = this.nextChar ? this.nextChar : chars.charAt(i);
this.nextChar = chars.charAt(i + 1);
if (!(this.options.rowDelimiter != null) && (this.nextChar === '\n' || this.nextChar === '\r')) {
this.options.rowDelimiter = this.nextChar;
if (this.nextChar === '\r' && chars.charAt(i + 2) === '\n') {
this.options.rowDelimiter += '\n';
}
delimLength = this.options.rowDelimiter.length;
}
if (char === this.options.escape || char === this.options.quote) {
isReallyEscaped = false;
if (char === this.options.escape) {
escapeIsQuote = this.options.escape === this.options.quote;
isEscape = this.nextChar === this.options.escape;
isQuote = this.nextChar === this.options.quote;
if (!(escapeIsQuote && !this.field && !this.quoting) && (isEscape || isQuote)) {
i++;
isReallyEscaped = true;
char = this.nextChar;
this.nextChar = chars.charAt(i + 1);
this.field += char;
}
isReallyEscaped = false;
if (c === this.options.escape) {
nextChar = chars.charAt(i + 1);
escapeIsQuote = this.options.escape === this.options.quote;
isEscape = nextChar === this.options.escape;
isQuote = nextChar === this.options.quote;
if (!(escapeIsQuote && !this.state.field && !this.quoted) && (isEscape || isQuote)) {
i++;
isReallyEscaped = true;
c = chars.charAt(i);
this.state.field += c;
}
if (!isReallyEscaped && char === this.options.quote) {
if (this.quoting) {
areNextCharsRowDelimiters = this.options.rowDelimiter && chars.substr(i + 1, this.options.rowDelimiter.length) === this.options.rowDelimiter;
if (this.nextChar && !areNextCharsRowDelimiters && this.nextChar !== this.options.delimiter) {
return this.error(new Error("Invalid closing quote at line " + (this.lines + 1) + "; found " + (JSON.stringify(this.nextChar)) + " instead of delimiter " + (JSON.stringify(this.options.delimiter))));
}
}
if (!isReallyEscaped && c === this.options.quote) {
if (this.state.field && !this.quoted) {
this.state.field += c;
break;
}
if (this.quoted) {
nextChar = chars.charAt(i + 1);
if (nextChar && nextChar !== '\r' && nextChar !== '\n' && nextChar !== this.options.delimiter) {
return this.error(new Error("Invalid closing quote at line " + (this.lines + 1) + "; found " + (JSON.stringify(nextChar)) + " instead of delimiter " + (JSON.stringify(this.options.delimiter))));
}
this.quoted = false;
} else if (this.state.field === '') {
this.quoted = true;
}
}
break;
case this.options.delimiter:
if (this.commented) {
break;
}
if (this.quoted) {
this.state.field += c;
this.quoting = false;
} else if (this.field) {
this.field += char;
} else {
if (this.options.trim || this.options.rtrim) {
this.state.field = this.state.field.trimRight();
}
this.state.line.push(this.state.field);
this.state.field = '';
this.quoting = true;
}
break;
case '\n':
case '\r':
if (this.quoted) {
this.state.field += c;
break;
}
if (!this.options.quoted && this.state.lastC === '\r') {
break;
}
this.lines++;
if (csv.options.to.lineBreaks === null) {
csv.options.to.lineBreaks = c + (c === '\r' && chars.charAt(i + 1) === '\n' ? '\n' : '');
}
if (this.options.trim || this.options.rtrim) {
this.state.field = this.state.field.trimRight();
}
this.state.line.push(this.state.field);
this.state.field = '';
this.emit('row', this.state.line);
this.state.line = [];
break;
case ' ':
case '\t':
if (this.quoted || (!this.options.trim && !this.options.ltrim) || this.state.field) {
this.state.field += c;
break;
}
break;
default:
if (this.commented) {
break;
}
this.state.field += c;
}
} else if (this.quoting) {
this.field += char;
} else if (char === this.options.delimiter) {
if (this.options.trim || this.options.rtrim) {
this.field = this.field.trimRight();
}
this.line.push(this.field);
this.field = '';
} else if (this.options.rowDelimiter && chars.substr(i, this.options.rowDelimiter.length) === this.options.rowDelimiter) {
this.lines++;
if (this.options.trim || this.options.rtrim) {
this.field = this.field.trimRight();
}
this.line.push(this.field);
this.field = '';
this.emit('row', this.line);
this.line = [];
this.lastC = char;
i += this.options.rowDelimiter.length;
this.nextChar = chars.charAt(i);
continue;
} else if (char === ' ' || char === '\t') {
if (!this.options.trim && !this.options.ltrim) {
this.field += char;
}
} else {
this.field += char;
}
this.state.lastC = c;
this.lastC = char;
i++;
}
this.buf = '';
_results = [];
while (i < l) {
this.nextChar = chars.charAt(i);
this.nextChar = null;
this.buf += chars.charAt(i);
_results.push(i++);
}
return _results;
};
Parser.prototype.end = function() {
if (this.quoted) {
this.write('', true);
if (this.quoting) {
return this.error(new Error("Quoted field not terminated at line " + (this.lines + 1)));
}
if (this.state.field || this.state.lastC === this.options.delimiter || this.state.lastC === this.options.quote) {
if (this.field || this.lastC === this.options.delimiter || this.lastC === this.options.quote) {
if (this.options.trim || this.options.rtrim) {
this.state.field = this.state.field.trimRight();
this.field = this.field.trimRight();
}
this.state.line.push(this.state.field);
this.state.field = '';
this.line.push(this.field);
this.field = '';
}
if (this.state.line.length > 0) {
this.emit('row', this.state.line);
if (this.line.length > 0) {
this.emit('row', this.line);
}

@@ -162,0 +160,0 @@ return this.emit('end', null);
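
From the user's side, the new buffering in `write` means row delimiters, including "\r\n", are detected even when a chunk ends in the middle of one. A sketch, using only calls shown elsewhere in this diff:

```javascript
var csv = require('csv');

// Windows-style line endings are picked up by the 'auto' row delimiter
// detection, even if the source were streamed in chunks splitting "\r\n".
csv()
  .from('"1","2","3"\r\n"a","b","c"')
  .to.array(function(rows, count){
    console.log(count);        // 2
    console.log(rows[1][0]);   // 'a'
  });
```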

@@ -1,2 +0,2 @@

// Generated by CoffeeScript 1.3.3
// Generated by CoffeeScript 1.4.0

@@ -6,5 +6,3 @@ module.exports = function() {

count: 0,
field: '',
line: [],
lastC: '',
countWriten: 0,

@@ -11,0 +9,0 @@ transforming: 0

@@ -1,2 +0,2 @@

// Generated by CoffeeScript 1.3.3
// Generated by CoffeeScript 1.4.0
/*

@@ -39,2 +39,5 @@

}
if (typeof line === 'number') {
line = "" + line;
}
this.csv.emit('data', line);

@@ -86,3 +89,3 @@ if (!preserve) {

if (Array.isArray(line)) {
newLine = this.csv.state.countWriten ? this.csv.options.to.lineBreaks || "\n" : '';
newLine = this.csv.state.countWriten ? this.csv.options.to.rowDelimiter || "\n" : '';
for (i = _j = 0, _ref1 = line.length; 0 <= _ref1 ? _j < _ref1 : _j > _ref1; i = 0 <= _ref1 ? ++_j : --_j) {

@@ -89,0 +92,0 @@ field = line[i];

@@ -1,2 +0,2 @@

// Generated by CoffeeScript 1.3.3
// Generated by CoffeeScript 1.4.0
var Stream, fs, utils;

@@ -109,2 +109,24 @@

utils.merge(csv.options.to, options);
if (csv.options.to.lineBreaks) {
console.log('To options linebreaks is replaced by rowDelimiter');
if (!csv.options.to.rowDelimiter) {
csv.options.to.rowDelimiter = csv.options.to.lineBreaks;
}
}
switch (csv.options.to.rowDelimiter) {
case 'auto':
csv.options.to.rowDelimiter = null;
break;
case 'unix':
csv.options.to.rowDelimiter = "\n";
break;
case 'mac':
csv.options.to.rowDelimiter = "\r";
break;
case 'windows':
csv.options.to.rowDelimiter = "\r\n";
break;
case 'unicode':
csv.options.to.rowDelimiter = "\u2028";
}
return csv;
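
A sketch of the renamed write-side option: the old `lineBreaks` name now logs a notice and is mapped onto `rowDelimiter`, and 'windows' below is one of the named values handled by the switch above.

```javascript
var csv = require('csv');

csv()
  .from('"1","2","3"\n"a","b","c"')
  .to.path(__dirname + '/sample.out', {
    rowDelimiter: 'windows'   // emit "\r\n" between records; 'unix', 'mac', 'unicode' and 'auto' are also accepted
  });
```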

@@ -127,3 +149,3 @@ } else {

Callback is called with 2 arguments:
* data Stringify CSV string
* data Entire CSV as a string
* count Number of stringified records

@@ -159,18 +181,2 @@ */

this.options(options);
switch (csv.options.to.lineBreaks) {
case 'auto':
csv.options.to.lineBreaks = null;
break;
case 'unix':
csv.options.to.lineBreaks = "\n";
break;
case 'mac':
csv.options.to.lineBreaks = "\r";
break;
case 'windows':
csv.options.to.lineBreaks = "\r\n";
break;
case 'unicode':
csv.options.to.lineBreaks = "\u2028";
}
csv.pipe(stream, csv.options.to);

@@ -217,3 +223,3 @@ stream.on('error', function(e) {

Callback is called with 2 arguments:
* data Stringify CSV string
* data Entire CSV as an array of records
* count Number of stringified records

@@ -220,0 +226,0 @@ */

@@ -1,2 +0,2 @@

// Generated by CoffeeScript 1.3.3
// Generated by CoffeeScript 1.4.0
var Transformer, stream;

@@ -10,3 +10,3 @@

Transformation may occur synchronously or asynchronously dependending
Transformations may occur synchronously or asynchronously depending
on the provided transform callback and its declared arguments length.

@@ -16,3 +16,3 @@

* *data*
* *row*
CSV record

@@ -24,8 +24,8 @@ * *index*

Unless you specify the `columns` read option, `data` are provided
as arrays, otherwise they are objects with keys matching columns
names.
Unless you specify the `columns` read option, the `row` argument will be
provided as an array, otherwise it will be provided as an object with keys
matching columns names.
In synchronous mode, the contract is quite simple, you receive an array
of fields for each record and return the transformed record.
In synchronous mode, the contract is quite simple, you will receive an array
of fields for each record and the transformed array should be returned.

@@ -52,4 +52,4 @@ In asynchronous mode, it is your responsibility to call the callback

.to(console.log)
.transform(function(data, index){
return data.reverse()
.transform(function(row, index){
return row.reverse()
});

@@ -64,5 +64,5 @@ // Executing `node samples/transform.js`, print:

.to(console.log)
.transform(function(data, index, callback){
.transform(function(row, index, callback){
process.nextTick(function(){
callback(null, data.reverse());
callback(null, row.reverse());
});

@@ -78,4 +78,4 @@ });

.to(console.log)
.transform(function(data, index){
return (index>0 ? ',' : '') + data[0] + ":" + data[2] + ' ' + data[1];
.transform(function(row, index){
return (index>0 ? ',' : '') + row[0] + ":" + row[2] + ' ' + row[1];
});

@@ -96,7 +96,6 @@ // Executing `node samples/transform.js`, print:

`transformer(csv).headers()`
`headers()`
----------------------------
Call a callback to transform a line. Called from the `parse` function on each
line. It is responsible for transforming the data and finally calling `write`.
Print headers.
*/

@@ -124,11 +123,11 @@

`transformer(csv).transform(line)`
`write(line)`
----------------------------------
Call a callback to transform a line. Called from the `parse` function on each
line. It is responsible for transforming the data and finally calling `write`.
Call a callback to transform a line. Called for each line after being parsed.
It is responsible for transforming the data and finally calling `write`.
*/
Transformer.prototype.transform = function(line) {
Transformer.prototype.write = function(line) {
var column, columns, csv, done, finish, i, lineAsObject, self, sync, _i, _j, _len, _len1;

@@ -213,3 +212,3 @@ self = this;

/* no doc
`transformer(csv).end()`
`end()`
------------------------

@@ -216,0 +215,0 @@

@@ -1,2 +0,2 @@

// Generated by CoffeeScript 1.3.3
// Generated by CoffeeScript 1.4.0

@@ -3,0 +3,0 @@ module.exports = {

{
"name": "csv",
"version": "0.2.6",
"version": "0.2.7",
"description": "CSV parser with simple api, full of options and tested against large datasets.",
"homepage": "http://www.adaltas.com/projects/node-csv/",
"bugs": "https://github.com/wdavidw/node-csv-parser/issues",
"author": "David Worms <david@adaltas.com>",

@@ -40,4 +41,3 @@ "contributors": [

"each": "latest",
"mecano": "latest",
"glob": "latest"
"mecano": "latest"
},

@@ -44,0 +44,0 @@ "dependencies": {},

@@ -13,18 +13,15 @@ [![Build Status](https://secure.travis-ci.org/wdavidw/node-csv-parser.png)](http://travis-ci.org/wdavidw/node-csv-parser)

This project provides CSV parsing and has been tested and used
on large input files. It provides every option you could expect from an
advanced CSV parser and stringifier.
[Documentation for the CSV parser is available here](http://www.adaltas.com/projects/node-csv/).
Important
---------
Usage
-----
This readme cover the current version 0.2.x of the node
csv parser.
Installation command is `npm install csv`.
The documentation for the current version 0.1.0 is
available [here](https://github.com/wdavidw/node-csv-parser/tree/v0.1).
### Quick example
Install command is `npm install csv`.
Quick example
-------------
```javascript

@@ -41,4 +38,3 @@ // node samples/string.js

Advanced example
----------------
### Advanced example

@@ -51,8 +47,8 @@ ```javascript

.to.path(__dirname+'/sample.out')
.transform( function(data){
data.unshift(data.pop());
return data;
.transform( function(row){
row.unshift(row.pop());
return row;
})
.on('record', function(data,index){
console.log('#'+index+' '+JSON.stringify(data));
.on('record', function(row,index){
console.log('#'+index+' '+JSON.stringify(row));
})

@@ -74,4 +70,8 @@ .on('end', function(count){

This readme covers the current version 0.2.x of the node
csv parser. The documentation for the previous version 0.1.0 is
available [here](https://github.com/wdavidw/node-csv-parser/tree/v0.1).
The functions 'from*' and 'to*' are now rewritten as 'from.*' and 'to.*'. The 'data'
event is now the 'record' event. The 'data' now recieved a stringified version of
event is now the 'record' event. The 'data' now receives a stringified version of
the 'record' event.
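
A sketch of the renames described here, showing 0.1.x-style calls next to their 0.2.x equivalents; the 0.1.x lines and the file names are illustrative only, not taken from this diff.

```javascript
var csv = require('csv');

// 0.1.x style (old API, shown for comparison only):
//   csv().fromPath('in.csv').toPath('out.csv').on('data', function(data, index){ /* ... */ });

// 0.2.x style, as used throughout this release:
csv()
  .from.path(__dirname + '/in.csv')
  .to.path(__dirname + '/out.csv')
  .on('record', function(row, index){
    // 'record' replaces the old 'data' event for parsed rows;
    // 'data' now emits the stringified output.
    console.log('#' + index + ' ' + JSON.stringify(row));
  });
```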

@@ -116,2 +116,3 @@

* Douglas Christopher Wilson: <https://github.com/dougwilson>
* Chris Khoo: <https://github.com/khoomeister>

@@ -118,0 +119,0 @@ Related projects

@@ -6,11 +6,11 @@

.from.path(__dirname+'/columns.in', {
columns: true
columns: true
})
.to.stream(process.stdout, {
columns: ['id', 'name'],
end: false
columns: ['id', 'name'],
end: false
})
.transform(function(data){
data.name = data.firstname + ' ' + data.lastname
return data;
data.name = data.firstname + ' ' + data.lastname
return data;
});

@@ -17,0 +17,0 @@

@@ -15,19 +15,19 @@ //

csv()
parser = csv()
.from.stream(process.stdin)
.to.stream(process.stdout, {end: false})
.transform(function(data){
if (header) {
this.write(header);
} else {
header=data;
return null;
}
return data;
if (header) {
parser.write(header);
} else {
header=data;
return null;
}
return data;
})
.on('end',function(error){
process.stdout.write("\n");
process.stdout.write("\n");
})
.on('error',function(error){
console.log(error.message);
console.log(error.message);
});

@@ -34,0 +34,0 @@

@@ -6,10 +6,10 @@

.from.path(__dirname+'/columns.in',{
columns: true
columns: true
})
.to.stream(process.stdout, {
newColumns: true
newColumns: true
})
.transform(function(data){
data.name = data.firstname + ' ' + data.lastname
return data;
data.name = data.firstname + ' ' + data.lastname
return data;
})

@@ -16,0 +16,0 @@ .on('end', function(){

@@ -10,13 +10,13 @@

.transform(function(data){
data.unshift(data.pop());
return data;
data.unshift(data.pop());
return data;
})
.on('record',function(record, index){
console.log('#'+index+' '+JSON.stringify(record));
console.log('#'+index+' '+JSON.stringify(record));
})
.on('close',function(count){
console.log('Number of lines: '+count);
console.log('Number of lines: '+count);
})
.on('error',function(error){
console.log(error.message);
console.log(error.message);
});

@@ -23,0 +23,0 @@

@@ -8,13 +8,13 @@

.transform(function(data){
data.unshift(data.pop());
return data;
data.unshift(data.pop());
return data;
})
.on('record', function(data, index){
console.log('#'+index+' '+JSON.stringify(data));
console.log('#'+index+' '+JSON.stringify(data));
})
.on('end', function(count){
console.log('Number of lines: '+count);
console.log('Number of lines: '+count);
})
.on('error', function(error){
console.log(error.message);
console.log(error.message);
});

@@ -21,0 +21,0 @@

@@ -8,3 +8,3 @@

.transform(function(data,index){
return (index>0 ? ',' : '') + data[0] + ":" + data[2] + ' ' + data[1];
return (index>0 ? ',' : '') + data[0] + ":" + data[2] + ' ' + data[1];
});

@@ -11,0 +11,0 @@

Sorry, the diff of this file is not supported yet
