Comparing version 0.2.0 to 0.2.1
@@ -5,3 +5,3 @@ --- | ||
title: "Reading data from a source" | ||
date: 2012-10-02T15:35:24.009Z | ||
date: 2012-10-09T16:24:28.046Z | ||
comments: false | ||
@@ -11,11 +11,13 @@ sharing: false | ||
navigation: csv | ||
github: https://github.com/wdavidw/node-csv | ||
github: https://github.com/wdavidw/node-csv-parser | ||
--- | ||
The `csv().from` property provide functions to read to a csv input | ||
like a string, a file, a buffer or a readable stream. You may call | ||
the `from` function or one of its sub function. For example, here is | ||
to identical way to read from a file: | ||
The `csv().from` property provides functions to read from an external | ||
source and write to a CSV instance. The source may be a string, a file, | ||
a buffer or a readable stream. | ||
You may call the `from` function or one of its sub-functions. For example, | ||
here are two identical ways to read from a file: | ||
```javascript | ||
@@ -28,11 +30,14 @@ | ||
<a name="from"></a>`from(mixed)` | ||
<a name="from"></a> | ||
`from(mixed)` | ||
------------- | ||
Read from any sort of source. A convenient function to discover the source to parse. If it is a string, then if check | ||
if it match an existing file path and read the file content, otherwise, it | ||
treat the string as csv data. If it is an instance of stream, it consider the | ||
Read from any sort of source. It is a convenience function that | ||
discovers the nature of the data source to parse. | ||
If it is a string, it checks whether the string matches an existing file path and reads the file content; | ||
otherwise, it treats the string as CSV data. If it is an instance of Stream, it considers the | ||
object to be an input stream. If it is an array, each element should correspond to a record. | ||
Here's some examples on how to use this function | ||
Here are some examples of how to use this function: | ||
@@ -42,3 +47,3 @@ ```javascript | ||
csv() | ||
.from('"1","2","3","4","5"') | ||
.from('"1","2","3","4"\n"a","b","c","d"') | ||
.on('end', function(){ console.log('done') }) | ||
@@ -61,3 +66,4 @@ | ||
<a name="from.options"></a>`from.options([options])` | ||
<a name="from.options"></a> | ||
`from.options([options])` | ||
------------------------- | ||
@@ -79,12 +85,14 @@ | ||
<a name="from.array"></a>`from.array(data, [options])` | ||
<a name="from.array"></a> | ||
`from.array(data, [options])` | ||
------------------------------ | ||
Read from an array. Take an array as first argument and optionally | ||
some options as a second argument. Each element of the array | ||
represents a csv record. Those elements may be a string, a buffer, an | ||
some options as a second argument. Each element of the array | ||
represents a csv record. Those elements may be a string, a buffer, an | ||
array or an object. | ||
<a name="from.string"></a>`from.string(data, [options])` | ||
<a name="from.string"></a> | ||
`from.string(data, [options])` | ||
------------------------------- | ||
@@ -97,4 +105,13 @@ | ||
```javascript | ||
<a name="from.path"></a>`from.path(path, [options])` | ||
csv() | ||
.from( '"1","2","3","4"\n"a","b","c","d"' ) | ||
.to( function(data){} ) | ||
``` | ||
<a name="from.path"></a> | ||
`from.path(path, [options])` | ||
---------------------------- | ||
@@ -106,3 +123,4 @@ | ||
<a name="from.stream"></a>`from.stream(stream, [options])` | ||
<a name="from.stream"></a> | ||
`from.stream(stream, [options])` | ||
-------------------------------- | ||
@@ -109,0 +127,0 @@ |
128
doc/index.md
@@ -5,3 +5,3 @@ --- | ||
title: "Node CSV" | ||
date: 2012-10-02T15:35:24.008Z | ||
date: 2012-10-09T16:24:28.045Z | ||
comments: false | ||
@@ -11,3 +11,3 @@ sharing: false | ||
navigation: csv | ||
github: https://github.com/wdavidw/node-csv | ||
github: https://github.com/wdavidw/node-csv-parser | ||
--- | ||
@@ -22,6 +22,7 @@ | ||
* Support delimiters, quotes and escape characters | ||
* Line breaks discovery: line breaks in source are detected and reported to destination | ||
* Line breaks discovery: detected in source and reported to destination | ||
* Data transformation | ||
* Support for large datasets | ||
* Complete test coverage as sample and inspiration | ||
* no external dependencies | ||
@@ -35,3 +36,19 @@ Important, this documentation cover the current version of the node | ||
The following example illustrate 4 usages of the library: | ||
```javascript | ||
// node samples/string.js | ||
var csv = require('csv'); | ||
csv() | ||
.from( '"1","2","3","4"\n"a","b","c","d"' ) | ||
.to( console.log ) | ||
// Output: | ||
// 1,2,3,4 | ||
// a,b,c,d | ||
``` | ||
Advanced example | ||
---------------- | ||
The following example illustrates 4 usages of the library: | ||
1. Plug a readable stream by defining a file path | ||
@@ -46,3 +63,2 @@ 2. Direct output to a file path | ||
var csv = require('csv'); | ||
csv() | ||
@@ -64,4 +80,3 @@ .from.stream(fs.createReadStream(__dirname+'/sample.in') | ||
}); | ||
// Print sth like: | ||
// Output: | ||
// #0 ["2000-01-01","20322051544","1979.0","8.8017226E7","ABC","45"] | ||
@@ -76,4 +91,8 @@ // #1 ["2050-11-27","28392898392","1974.0","8.8392926E7","DEF","23"] | ||
The module follow a Stream architecture | ||
The module follows a Stream architecture. At its core, the parser and | ||
the stringifier utilities provide a [Stream Writer][writable_stream] | ||
and a [Stream Reader][readable_stream] implementation available in the CSV API. | ||
```javascript | ||
|-----------| |---------|---------| |---------| | ||
@@ -89,6 +108,14 @@ | | | | | | | | ||
in = fs.createReadStream('./in') | ||
out = fs.createWriteStream('./out') | ||
in.pipe(csv()).pipe(out) | ||
``` | ||
Here's a quick example: | ||
```javascript | ||
in = fs.createReadStream('./in') | ||
out = fs.createWriteStream('./out') | ||
in.pipe(csv()).pipe(out) | ||
``` | ||
Installing | ||
@@ -110,41 +137,22 @@ ---------- | ||
By extending the Node `EventEmitter` class, the library provides | ||
a few useful events: | ||
The library extends the Node [EventEmitter][event] class and emits all | ||
the events of the Writable and Readable [Stream API][stream]. Additionally, the useful "record" event | ||
is emitted. | ||
* *record* | ||
* *record* | ||
Emitted by the stringifier when a new row is parsed and transformed. The data is | ||
the value returned by the user `transform` callback if any. Note however that the event won't | ||
be called if transform return `null` since the record is skipped. | ||
The callback provides two arguments. `data` is the CSV line being processed (an array or an object) | ||
and `index` is the index number of the line starting at zero | ||
* *data* | ||
Emitted by the stringifier on each line once the data has been transformed and stringified. | ||
* *drain* | ||
* *end* | ||
Emitted when the CSV content has been parsed. | ||
* *close* | ||
Emitted when the underlying resource has been closed. For example, when writting to a file with `csv().to.path()`, the event will be called once the writing process is complete and the file closed. | ||
* *error* | ||
Thrown whenever an error occured. | ||
Emitted by the stringifier when a new row is parsed and transformed. The data is | ||
the value returned by the user `transform` callback, if any. Note however that the event won't | ||
be called if transform returns `null` since the record is skipped. | ||
The callback provides two arguments: `data` is the CSV line being processed (an array or an object) | ||
and `index` is the index number of the line, starting at zero. | ||
* *data* | ||
Emitted by the stringifier on each line once the data has been transformed and stringified. | ||
* *drain* | ||
* *end* | ||
Emitted when the CSV content has been parsed. | ||
* *close* | ||
Emitted when the underlying resource has been closed. For example, when writing to a file with `csv().to.path()`, the event will be called once the writing process is complete and the file closed. | ||
* *error* | ||
Emitted whenever an error occurs. | ||
Columns | ||
@@ -189,3 +197,4 @@ ------- | ||
<a name="pause"></a>`pause()` | ||
<a name="pause"></a> | ||
`pause()` | ||
--------- | ||
@@ -197,10 +206,12 @@ | ||
<a name="resume"></a>`resume()` | ||
<a name="resume"></a> | ||
`resume()` | ||
---------- | ||
Implementation of the Readable Stream API, resuming the incoming 'data' | ||
events after a pause() | ||
events after a pause(). | ||
<a name="write"></a>`write(data, [preserve])` | ||
<a name="write"></a> | ||
`write(data, [preserve])` | ||
------------------------- | ||
@@ -216,3 +227,4 @@ | ||
<a name="end"></a>`end()` | ||
<a name="end"></a> | ||
`end()` | ||
------- | ||
@@ -225,3 +237,4 @@ | ||
<a name="transform"></a>`transform(callback)` | ||
<a name="transform"></a> | ||
`transform(callback)` | ||
--------------------- | ||
@@ -234,3 +247,4 @@ | ||
<a name="error"></a>`error(error)` | ||
<a name="error"></a> | ||
`error(error)` | ||
-------------- | ||
@@ -241,1 +255,5 @@ | ||
[event]: http://nodejs.org/api/events.html | ||
[stream]: http://nodejs.org/api/stream.html | ||
[writable_stream]: http://nodejs.org/api/stream.html#stream_writable_stream | ||
[readable_stream]: http://nodejs.org/api/stream.html#stream_readable_stream |
@@ -5,3 +5,3 @@ --- | ||
title: "Parsing" | ||
date: 2012-10-02T15:35:24.009Z | ||
date: 2012-10-09T16:24:28.047Z | ||
comments: false | ||
@@ -11,18 +11,17 @@ sharing: false | ||
navigation: csv | ||
github: https://github.com/wdavidw/node-csv | ||
github: https://github.com/wdavidw/node-csv-parser | ||
--- | ||
The library extend the EventEmitter and emit the following events: | ||
The library extends the [EventEmitter][event] class and emits the following events: | ||
* *row* | ||
* *row* | ||
Emitted by the parser on each line with the line content as an array of fields. | ||
* *end* | ||
Emitted when no more data will be parsed. | ||
* *error* | ||
Emitted when an error occured. | ||
Emitted by the parser on each line with the line content as an array of fields. | ||
* *end* | ||
Emitted when no more data will be parsed. | ||
* *error* | ||
Emitted when an error occurs. | ||
<a name="parse"></a>`parse(chars)` | ||
<a name="parse"></a> | ||
`parse(chars)` | ||
-------------- | ||
@@ -34,1 +33,2 @@ | ||
[event]: http://nodejs.org/api/events.html |
@@ -5,3 +5,3 @@ --- | ||
title: "Stringifier" | ||
date: 2012-10-02T15:35:24.009Z | ||
date: 2012-10-09T16:24:28.047Z | ||
comments: false | ||
@@ -11,3 +11,3 @@ sharing: false | ||
navigation: csv | ||
github: https://github.com/wdavidw/node-csv | ||
github: https://github.com/wdavidw/node-csv-parser | ||
--- | ||
@@ -14,0 +14,0 @@ |
@@ -5,3 +5,3 @@ --- | ||
title: "Writing data to a destination" | ||
date: 2012-10-02T15:35:24.009Z | ||
date: 2012-10-09T16:24:28.046Z | ||
comments: false | ||
@@ -11,11 +11,13 @@ sharing: false | ||
navigation: csv | ||
github: https://github.com/wdavidw/node-csv | ||
github: https://github.com/wdavidw/node-csv-parser | ||
--- | ||
The `csv().to` property provide convenient functions to write | ||
to a csv output like a stream or a file. You may call | ||
the `to` function or one of its sub function. For example, here is | ||
to identical way to read to a file: | ||
The `csv().to` property provides functions to read from a CSV instance and | ||
to write to an external destination. The destination may be a stream, a file | ||
or a callback. | ||
You may call the `to` function or one of its sub-functions. For example, | ||
here are two identical ways to write to a file: | ||
```javascript | ||
@@ -28,12 +30,15 @@ | ||
<a name="to"></a>`to(mixed)` | ||
<a name="to"></a> | ||
`to(mixed)` | ||
----------- | ||
Write from any sort of destination. A convenient function to discover the | ||
destination. If is an function, then the csv will be provided as the first | ||
argument of the callback. If it is a string, then it is expected to be a | ||
Write to any sort of destination. It is a convenience function that | ||
discovers the nature of the destination where the CSV data is to be written. | ||
If it is a function, the CSV content will be provided as the first argument | ||
of the callback. If it is a string, it is expected to be a | ||
file path. If it is an instance of Stream, it considers the object to be an | ||
output stream. | ||
Here's some examples on how to use this function | ||
Here are some examples of how to use this function: | ||
@@ -57,3 +62,4 @@ ```javascript | ||
<a name="to.options"></a>`to.options([options])` | ||
<a name="to.options"></a> | ||
`to.options([options])` | ||
----------------------- | ||
@@ -72,7 +78,10 @@ | ||
* `flags` Defaults to 'w', 'w' to create or overwrite an file, 'a' to append to a file. Applied when using the `toPath` method. | ||
* `newColumns` If the `columns` option is not specified (which means columns will be taken from the reader options, will automatically append new columns if they are added during <a name="transform"></a>`transform()`. | ||
* `end` Prevent calling `end` on the destination, so that destination is no longer writable, similar to passing `{end: false}` option in <a name="stream.pipe"></a>`stream.pipe()`. | ||
* `newColumns` If the `columns` option is not specified (which means columns will be taken from the reader options), new columns will automatically be appended if they are added during <a name="transform"></a> | ||
`transform()`. | ||
* `end` Prevent calling `end` on the destination, so that the destination remains writable, similar to passing the `{end: false}` option to <a name="stream.pipe"></a> | ||
`stream.pipe()`. | ||
<a name="to.string"></a>`to.string(callback, [options])` | ||
<a name="to.string"></a> | ||
`to.string(callback, [options])` | ||
------------------------------ | ||
@@ -85,6 +94,4 @@ | ||
csv() | ||
.from(input) | ||
.to(function(ouput){ | ||
console.log(ouput); | ||
}, options) | ||
.from( '"1","2","3","4"\n"a","b","c","d"' ) | ||
.to( function(data){} ) | ||
``` | ||
@@ -94,3 +101,4 @@ | ||
<a name="to.stream"></a>`to.stream(stream, [options])` | ||
<a name="to.stream"></a> | ||
`to.stream(stream, [options])` | ||
------------------------------ | ||
@@ -102,3 +110,4 @@ | ||
<a name="to.path"></a>`to.path(path, [options])` | ||
<a name="to.path"></a> | ||
`to.path(path, [options])` | ||
-------------------------- | ||
@@ -105,0 +114,0 @@ |
@@ -5,3 +5,3 @@ --- | ||
title: "Transforming data" | ||
date: 2012-10-02T15:35:24.009Z | ||
date: 2012-10-18T13:11:44.066Z | ||
comments: false | ||
@@ -11,11 +11,18 @@ sharing: false | ||
navigation: csv | ||
github: https://github.com/wdavidw/node-csv | ||
github: https://github.com/wdavidw/node-csv-parser | ||
--- | ||
The contract is quite simple, you receive an array of fields for | ||
each record and return the transformed record. The return value | ||
may be an array, an associative array, a string or null. If null, | ||
the record will simply be skipped. | ||
Transformation may occur synchronously or asynchronously depending | ||
on the provided transform callback and the number of arguments it declares. | ||
The callback is called for each record and its arguments are: | ||
* *data* | ||
CSV record | ||
* *index* | ||
Incremented counter | ||
* *callback* | ||
Callback function to be called in asynchronous mode | ||
Unless you specify the `columns` read option, `data` are provided | ||
@@ -25,4 +32,13 @@ as arrays, otherwise they are objects with keys matching columns | ||
When the returned value is an array, the fields are merged in | ||
order. When the returned value is an object, it will search for | ||
In synchronous mode, the contract is quite simple: you receive an array | ||
of fields for each record and return the transformed record. | ||
In asynchronous mode, it is your responsibility to call the callback | ||
provided as the third argument. It must be called with two arguments: | ||
the first is an error, if any; the second is the transformed record. | ||
Transformed records may be an array, an associative array, a | ||
string or null. If null, the record will simply be skipped. When the | ||
returned value is an array, the fields are merged in order. | ||
When the returned value is an object, it will search for | ||
the `columns` property in the write or in the read options and | ||
@@ -35,23 +51,47 @@ smartly order the values. If no `columns` options are found, | ||
Example of transform returning a string | ||
Transform callback running synchronously: | ||
```javascript | ||
// node samples/transform.js | ||
var csv = require('csv'); | ||
csv() | ||
.from.path(__dirname+'/transform.in') | ||
.to.stream(process.stdout) | ||
.from('82,Preisner,Zbigniew\n94,Gainsbourg,Serge') | ||
.to(console.log) | ||
.transform(function(data, index){ | ||
return (index>0 ? ',' : '') + data[0] + ":" + data[2] + ' ' + data[1]; | ||
return data.reverse() | ||
}); | ||
// Output: | ||
// Zbigniew,Preisner,82\nSerge,Gainsbourg,94 | ||
// Print sth like: | ||
``` | ||
Transform callback running asynchronously: | ||
```javascript | ||
csv() | ||
.from('82,Preisner,Zbigniew\n94,Gainsbourg,Serge') | ||
.to(console.log) | ||
.transform(function(data, index, callback){ | ||
process.nextTick(function(){ | ||
callback(null, data.reverse()); | ||
}); | ||
}); | ||
// Output: | ||
// Zbigniew,Preisner,82\nSerge,Gainsbourg,94 | ||
``` | ||
Transform callback returning a string: | ||
```javascript | ||
csv() | ||
.from('82,Preisner,Zbigniew\n94,Gainsbourg,Serge') | ||
.to(console.log) | ||
.transform(function(data, index){ | ||
return (index>0 ? ',' : '') + data[0] + ":" + data[2] + ' ' + data[1]; | ||
}); | ||
// Output: | ||
// 82:Zbigniew Preisner,94:Serge Gainsbourg | ||
``` | ||
<a name="transform"></a>`transform(line)` | ||
----------------- | ||
Call a callback to transform a line. Used by the `parse` function on each | ||
line. It is responsible for transforming the data and finally calling `write`. | ||
157
lib/csv.js
@@ -13,6 +13,7 @@ // Generated by CoffeeScript 1.3.3 | ||
* Support delimiters, quotes and escape characters | ||
* Line breaks discovery: line breaks in source are detected and reported to destination | ||
* Line breaks discovery: detected in source and reported to destination | ||
* Data transformation | ||
* Support for large datasets | ||
* Complete test coverage as sample and inspiration | ||
* no external dependencies | ||
@@ -26,3 +27,15 @@ Important, this documentation cover the current version of the node | ||
The following example illustrate 4 usages of the library: | ||
// node samples/string.js | ||
var csv = require('csv'); | ||
csv() | ||
.from( '"1","2","3","4"\n"a","b","c","d"' ) | ||
.to( console.log ) | ||
// Output: | ||
// 1,2,3,4 | ||
// a,b,c,d | ||
Advanced example | ||
---------------- | ||
The following example illustrates 4 usages of the library: | ||
1. Plug a readable stream by defining a file path | ||
@@ -35,3 +48,2 @@ 2. Direct output to a file path | ||
var csv = require('csv'); | ||
csv() | ||
@@ -53,4 +65,3 @@ .from.stream(fs.createReadStream(__dirname+'/sample.in') | ||
}); | ||
// Print sth like: | ||
// Output: | ||
// #0 ["2000-01-01","20322051544","1979.0","8.8017226E7","ABC","45"] | ||
@@ -63,18 +74,22 @@ // #1 ["2050-11-27","28392898392","1974.0","8.8392926E7","DEF","23"] | ||
The module follow a Stream architecture | ||
The module follows a Stream architecture. At its core, the parser and | ||
the stringifier utilities provide a [Stream Writer][writable_stream] | ||
and a [Stream Reader][readable_stream] implementation available in the CSV API. | ||
|-----------| |---------|---------| |---------| | ||
| | | | | | | | ||
| | | CSV | | | | ||
| | | | | | | | ||
| Stream | | Writer | Reader | | Stream | | ||
| Reader |.pipe(| API | API |).pipe(| Writer |) | ||
| | | | | | | | ||
| | | | | | | | ||
|-----------| |---------|---------| |---------| | ||
|-----------| |---------|---------| |---------| | ||
| | | | | | | | ||
| | | CSV | | | | ||
| | | | | | | | ||
| Stream | | Writer | Reader | | Stream | | ||
| Reader |.pipe(| API | API |).pipe(| Writer |) | ||
| | | | | | | | ||
| | | | | | | | ||
|-----------| |---------|---------| |---------| | ||
in = fs.createReadStream('./in') | ||
out = fs.createWriteStream('./out') | ||
in.pipe(csv()).pipe(out) | ||
Here's a quick example: | ||
in = fs.createReadStream('./in') | ||
out = fs.createWriteStream('./out') | ||
in.pipe(csv()).pipe(out) | ||
Installing | ||
@@ -96,20 +111,21 @@ ---------- | ||
By extending the Node `EventEmitter` class, the library provides | ||
a few useful events: | ||
The library extends the Node [EventEmitter][event] class and emits all | ||
the events of the Writable and Readable [Stream API][stream]. Additionally, the useful "record" event | ||
is emitted. | ||
* *record* | ||
Emitted by the stringifier when a new row is parsed and transformed. The data is | ||
the value returned by the user `transform` callback if any. Note however that the event won't | ||
be called if transform return `null` since the record is skipped. | ||
The callback provides two arguments. `data` is the CSV line being processed (an array or an object) | ||
and `index` is the index number of the line starting at zero | ||
* *data* | ||
Emitted by the stringifier on each line once the data has been transformed and stringified. | ||
* *drain* | ||
* *end* | ||
Emitted when the CSV content has been parsed. | ||
* *close* | ||
Emitted when the underlying resource has been closed. For example, when writting to a file with `csv().to.path()`, the event will be called once the writing process is complete and the file closed. | ||
* *error* | ||
Thrown whenever an error occured. | ||
* *record* | ||
Emitted by the stringifier when a new row is parsed and transformed. The data is | ||
the value returned by the user `transform` callback, if any. Note however that the event won't | ||
be called if transform returns `null` since the record is skipped. | ||
The callback provides two arguments: `data` is the CSV line being processed (an array or an object) | ||
and `index` is the index number of the line, starting at zero. | ||
* *data* | ||
Emitted by the stringifier on each line once the data has been transformed and stringified. | ||
* *drain* | ||
* *end* | ||
Emitted when the CSV content has been parsed. | ||
* *close* | ||
Emitted when the underlying resource has been closed. For example, when writing to a file with `csv().to.path()`, the event will be called once the writing process is complete and the file closed. | ||
* *error* | ||
Emitted whenever an error occurs. | ||
@@ -119,6 +135,11 @@ Columns | ||
Columns names may be provided or discovered in the first line with | ||
the read options `columns`. If defined as an array, the order must | ||
match the one of the input source. If set to `true`, the fields are | ||
expected to be present in the first line of the input source. | ||
Columns are defined in `csv.options.from` and `csv.options.to`. | ||
Column names may be provided or discovered in the first line with the | ||
read option `columns`. Most users will define columns as an | ||
array of property names. If defined as an array, the order must match | ||
the one of the input source. If set to `true`, the fields are | ||
expected to be present in the first line of the input source. For greater | ||
flexibility, in parallel with the `csv.options.to.header` option, | ||
it is possible to define the "columns" option as an object where keys | ||
are the property names and values are the display names. | ||
@@ -151,2 +172,29 @@ You can define a different order and even different columns in the | ||
// 94,Serge Gainsbourg | ||
Columns as true: | ||
var data = 'field1,field2\nval1,val2'; | ||
csv() | ||
.from(data, {columns: true}) | ||
.to(function(data){ | ||
data.should.eql('val1,val2'); | ||
}); | ||
Columns as an array: | ||
var data = 'field1,field2,field3\nval1,val2,val3'; | ||
csv() | ||
.from(data, {columns: true}) | ||
.to(function(data){ | ||
data.should.eql('val1,val3'); | ||
}, {columns: ['field1', 'field3']}); | ||
Columns as an object with header option: | ||
var data = 'field1,field2,field3\nval1,val2,val3'; | ||
csv() | ||
.from(data, {columns: true}) | ||
.to(function(data){ | ||
data.should.eql('column1,column3\nval1,val3'); | ||
}, {columns: {field1: 'column1', field3: 'column3'}, header: true}); | ||
*/ | ||
@@ -173,2 +221,3 @@ | ||
CSV = function() { | ||
this.paused = false; | ||
this.readable = true; | ||
@@ -185,4 +234,3 @@ this.writable = true; | ||
this.parser.on('end', (function() { | ||
this.emit('end', this.state.count); | ||
return this.readable = false; | ||
return this.transformer.end(); | ||
}).bind(this)); | ||
@@ -194,2 +242,5 @@ this.parser.on('error', (function(e) { | ||
this.transformer = transformer(this); | ||
this.transformer.on('end', (function() { | ||
return this.emit('end', this.state.count); | ||
}).bind(this)); | ||
return this; | ||
@@ -220,3 +271,3 @@ }; | ||
Implementation of the Readable Stream API, resuming the incoming 'data' | ||
events after a pause() | ||
events after a pause(). | ||
*/ | ||
@@ -245,2 +296,3 @@ | ||
CSV.prototype.write = function(data, preserve) { | ||
var csv; | ||
if (!this.writable) { | ||
@@ -252,11 +304,10 @@ return false; | ||
} else if (Array.isArray(data) && !this.state.transforming) { | ||
csv = this; | ||
this.transformer.transform(data); | ||
} else { | ||
if (this.state.count === 0 && this.options.to.header === true) { | ||
this.stringifier.write(this.options.to.columns || this.options.from.columns); | ||
if (preserve || this.state.transforming) { | ||
this.stringifier.write(data, preserve); | ||
} else { | ||
this.transformer.transform(data); | ||
} | ||
this.stringifier.write(data, preserve); | ||
if (!this.state.transforming && !preserve) { | ||
this.state.count++; | ||
} | ||
} | ||
@@ -281,2 +332,4 @@ return !this.paused; | ||
} | ||
this.readable = false; | ||
this.writable = false; | ||
return this.parser.end(); | ||
@@ -324,1 +377,9 @@ }; | ||
}; | ||
/* | ||
[event]: http://nodejs.org/api/events.html | ||
[stream]: http://nodejs.org/api/stream.html | ||
[writable_stream]: http://nodejs.org/api/stream.html#stream_writable_stream | ||
[readable_stream]: http://nodejs.org/api/stream.html#stream_readable_stream | ||
*/ | ||
@@ -60,3 +60,3 @@ // Generated by CoffeeScript 1.3.3 | ||
return text.replace(re_anchor, function(str, code) { | ||
return "<a name=\"" + code + "\"></a>`" + code + "("; | ||
return "<a name=\"" + code + "\"></a>\n`" + code + "("; | ||
}); | ||
@@ -87,6 +87,6 @@ }; | ||
} | ||
re = /###\n([\s\S]*?)\n( *)###/g; | ||
re = /###(.*)\n([\s\S]*?)\n( *)###/g; | ||
re_title = /([\s\S]+)\n={2}=+([\s\S]*)/g; | ||
match = re.exec(text); | ||
match = re_title.exec(match[1]); | ||
match = re_title.exec(match[2]); | ||
title = match[1].trim(); | ||
@@ -96,6 +96,9 @@ content = match[2]; | ||
content = convert_code(content); | ||
docs = "---\nlanguage: en\nlayout: page\ntitle: \"" + title + "\"\ndate: " + (date()) + "\ncomments: false\nsharing: false\nfooter: false\nnavigation: csv\ngithub: https://github.com/wdavidw/node-csv\n---\n" + content; | ||
docs = "---\nlanguage: en\nlayout: page\ntitle: \"" + title + "\"\ndate: " + (date()) + "\ncomments: false\nsharing: false\nfooter: false\nnavigation: csv\ngithub: https://github.com/wdavidw/node-csv-parser\n---\n" + content; | ||
while (match = re.exec(text)) { | ||
match[1] = unindent(match[1]); | ||
docs += convert_code(convert_anchor(match[1])); | ||
if (match[1]) { | ||
continue; | ||
} | ||
match[2] = unindent(match[2]); | ||
docs += convert_code(convert_anchor(match[2])); | ||
docs += '\n'; | ||
@@ -102,0 +105,0 @@ } |
// Generated by CoffeeScript 1.3.3 | ||
var Stream, fs, utils; | ||
var Stream, fs, path, utils, _ref; | ||
fs = require('fs'); | ||
path = require('path'); | ||
if ((_ref = fs.exists) == null) { | ||
fs.exists = path.exists; | ||
} | ||
utils = require('./utils'); | ||
@@ -15,7 +21,9 @@ | ||
The `csv().from` property provide functions to read to a csv input | ||
like a string, a file, a buffer or a readable stream. You may call | ||
the `from` function or one of its sub function. For example, here is | ||
to identical way to read from a file: | ||
The `csv().from` property provides functions to read from an external | ||
source and write to a CSV instance. The source may be a string, a file, | ||
a buffer or a readable stream. | ||
You may call the `from` function or one of its sub-functions. For example, | ||
here are two identical ways to read from a file: | ||
csv.from('/tmp/data.csv').on('data', console.log); | ||
@@ -32,11 +40,13 @@ csv.from.path('/tmp/data.csv').on('data', console.log); | ||
Read from any sort of source. A convenient function to discover the source to parse. If it is a string, then if check | ||
if it match an existing file path and read the file content, otherwise, it | ||
treat the string as csv data. If it is an instance of stream, it consider the | ||
Read from any sort of source. It is a convenience function that | ||
discovers the nature of the data source to parse. | ||
If it is a string, it checks whether the string matches an existing file path and reads the file content; | ||
otherwise, it treats the string as CSV data. If it is an instance of Stream, it considers the | ||
object to be an input stream. If it is an array, each element should correspond to a record. | ||
Here's some examples on how to use this function | ||
Here are some examples of how to use this function: | ||
csv() | ||
.from('"1","2","3","4","5"') | ||
.from('"1","2","3","4"\n"a","b","c","d"') | ||
.on('end', function(){ console.log('done') }) | ||
@@ -58,3 +68,3 @@ | ||
var from; | ||
from = function(mixed) { | ||
from = function(mixed, options) { | ||
var error; | ||
@@ -66,5 +76,5 @@ error = false; | ||
if (exists) { | ||
return from.path(mixed); | ||
return from.path(mixed, options); | ||
} else { | ||
return from.string(mixed); | ||
return from.string(mixed, options); | ||
} | ||
@@ -75,6 +85,6 @@ }); | ||
if (Array.isArray(mixed)) { | ||
from.array(mixed); | ||
from.array(mixed, options); | ||
} else { | ||
if (mixed instanceof Stream) { | ||
from.stream(mixed); | ||
from.stream(mixed, options); | ||
} else { | ||
@@ -126,4 +136,4 @@ error = true; | ||
Read from an array. Take an array as first argument and optionally | ||
some options as a second argument. Each element of the array | ||
represents a csv record. Those elements may be a string, a buffer, an | ||
some options as a second argument. Each element of the array | ||
represents a csv record. Those elements may be a string, a buffer, an | ||
array or an object. | ||
@@ -135,5 +145,6 @@ */ | ||
process.nextTick(function() { | ||
var i, _i, _ref; | ||
for (i = _i = 0, _ref = data.length; 0 <= _ref ? _i < _ref : _i > _ref; i = 0 <= _ref ? ++_i : --_i) { | ||
csv.write(data[i]); | ||
var record, _i, _len; | ||
for (_i = 0, _len = data.length; _i < _len; _i++) { | ||
record = data[_i]; | ||
csv.write(record); | ||
} | ||
@@ -153,2 +164,6 @@ return csv.end(); | ||
CSV is large. | ||
csv() | ||
.from( '"1","2","3","4"\n"a","b","c","d"' ) | ||
.to( function(data){} ) | ||
*/ | ||
@@ -190,5 +205,15 @@ | ||
from.stream = function(stream, options) { | ||
var first; | ||
this.options(options); | ||
first = true; | ||
stream.on('data', function(data) { | ||
return csv.write(data.toString()); | ||
var string, strip; | ||
if (csv.writable) { | ||
strip = first && typeof data === 'string' && stream.encoding === 'utf8' && 0xFEFF === data.charCodeAt(0); | ||
string = strip ? data.substring(1) : data.toString(); | ||
if (false === csv.write(string)) { | ||
stream.pause(); | ||
} | ||
} | ||
return first = false; | ||
}); | ||
@@ -201,2 +226,7 @@ stream.on('error', function(e) { | ||
}); | ||
csv.on('drain', function() { | ||
if (stream.readable) { | ||
return stream.resume(); | ||
} | ||
}); | ||
csv.readStream = stream; | ||
@@ -203,0 +233,0 @@ return csv; |
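Two behaviors are added to `from.stream` in this hunk: a UTF-8 byte order mark is stripped from the first string chunk, and backpressure is honored by pausing the source stream when `csv.write` returns `false` and resuming it on the CSV instance's 'drain' event. The BOM logic can be isolated as a small helper (a sketch, not part of the library's API):

```javascript
// Sketch of the BOM handling added to `from.stream`: on the first chunk of
// a UTF-8 string stream, a leading U+FEFF byte order mark is dropped before
// the data is handed to the parser.
function stripBom(chunk, isFirst, encoding) {
  var strip = isFirst && typeof chunk === 'string' &&
              encoding === 'utf8' && chunk.charCodeAt(0) === 0xFEFF;
  return strip ? chunk.substring(1) : chunk.toString();
}

console.log(stripBom('\uFEFFid,name', true, 'utf8')); // BOM removed
console.log(stripBom('id,name', true, 'utf8'));       // unchanged
```

Only the very first chunk is inspected, which is why the hunk tracks the `first` flag across 'data' events.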
@@ -13,8 +13,10 @@ // Generated by CoffeeScript 1.3.3 | ||
The library extend the EventEmitter and emit the following events: | ||
The library extends the [EventEmitter][event] and emits the following events:
* *row* | ||
Emitted by the parser on each line with the line content as an array of fields. | ||
* *end* | ||
* *error* | ||
* *row* | ||
Emitted by the parser on each line with the line content as an array of fields. | ||
* *end* | ||
Emitted when no more data will be parsed. | ||
* *error* | ||
Emitted when an error occurred.
*/ | ||
@@ -24,3 +26,2 @@ | ||
Parser = function(csv) { | ||
this.writable = true; | ||
this.csv = csv; | ||
@@ -31,2 +32,3 @@ this.options = csv.options.from; | ||
this.commented = false; | ||
this.lines = 0; | ||
return this; | ||
@@ -50,5 +52,2 @@ }; | ||
var c, csv, escapeIsQuote, i, isEscape, isQuote, isReallyEscaped, l, nextChar; | ||
if (!this.writable) { | ||
return this.error(new Error('Parser is not writable')); | ||
} | ||
csv = this.csv; | ||
@@ -87,3 +86,3 @@ chars = '' + chars; | ||
if (nextChar && nextChar !== '\r' && nextChar !== '\n' && nextChar !== this.options.delimiter) { | ||
return this.error(new Error('Invalid closing quote; found ' + JSON.stringify(nextChar) + ' instead of delimiter ' + JSON.stringify(this.options.delimiter))); | ||
return this.error(new Error("Invalid closing quote at line " + (this.lines + 1) + "; found " + (JSON.stringify(nextChar)) + " instead of delimiter " + (JSON.stringify(this.options.delimiter)))); | ||
} | ||
@@ -119,2 +118,3 @@ this.quoted = false; | ||
} | ||
this.lines++; | ||
if (csv.options.to.lineBreaks === null) { | ||
@@ -151,3 +151,3 @@ csv.options.to.lineBreaks = c + (c === '\r' && chars.charAt(i + 1) === '\n' ? '\n' : ''); | ||
if (this.quoted) { | ||
return this.error(new Error('Quoted field not terminated')); | ||
return this.error(new Error("Quoted field not terminated at line " + (this.lines + 1))); | ||
} | ||
@@ -168,3 +168,2 @@ if (this.state.field || this.state.lastC === this.options.delimiter || this.state.lastC === this.options.quote) { | ||
Parser.prototype.error = function(e) { | ||
this.writable = false; | ||
return this.emit('error', e); | ||
@@ -178,1 +177,6 @@ }; | ||
module.exports.Parser = Parser; | ||
/* | ||
[event]: http://nodejs.org/api/events.html | ||
*/ | ||
@@ -10,4 +10,4 @@ // Generated by CoffeeScript 1.3.3 | ||
countWriten: 0, | ||
transforming: false | ||
transforming: 0 | ||
}; | ||
}; |
@@ -29,3 +29,3 @@ // Generated by CoffeeScript 1.3.3 | ||
try { | ||
this.csv.emit('record', line, this.csv.state.count); | ||
this.csv.emit('record', line, this.csv.state.count - 1); | ||
} catch (e) { | ||
@@ -46,2 +46,5 @@ return this.csv.error(e); | ||
columns = this.csv.options.to.columns || this.csv.options.from.columns; | ||
if (typeof columns === 'object' && columns !== null && !Array.isArray(columns)) { | ||
columns = Object.keys(columns); | ||
} | ||
delimiter = this.csv.options.to.delimiter || this.csv.options.from.delimiter; | ||
@@ -65,4 +68,4 @@ quote = this.csv.options.to.quote || this.csv.options.from.quote; | ||
_line = null; | ||
} else if (this.csv.options.to.columns) { | ||
line.splice(this.csv.options.to.columns.length); | ||
} else if (columns) { | ||
line.splice(columns.length); | ||
} | ||
@@ -69,0 +72,0 @@ if (Array.isArray(line)) { |
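The hunk above lets the `columns` option be given as a plain object, in which case only its keys are used as column names. A sketch of that normalization (the helper name is hypothetical):

```javascript
// Hypothetical helper mirroring the normalization added above: when the
// `columns` option is a plain object, only its keys are kept as column names;
// arrays and other values pass through untouched.
function normalizeColumns(columns) {
  if (typeof columns === 'object' && columns !== null && !Array.isArray(columns)) {
    return Object.keys(columns);
  }
  return columns;
}

console.log(normalizeColumns({ id: 'Identifier', name: 'Full name' })); // ['id', 'name']
console.log(normalizeColumns(['id', 'name']));                          // ['id', 'name']
```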
@@ -15,7 +15,9 @@ // Generated by CoffeeScript 1.3.3 | ||
The `csv().to` property provide convenient functions to write | ||
to a csv output like a stream or a file. You may call | ||
the `to` function or one of its sub function. For example, here is | ||
to identical way to read to a file: | ||
The `csv().to` property provides functions to read from a CSV instance and | ||
to write to an external destination. The destination may be a stream, a file | ||
or a callback. | ||
You may call the `to` function or one of its sub functions. For example,
here are two identical ways to write to a file:
csv.from(data).to('/tmp/data.csv'); | ||
@@ -32,9 +34,11 @@ csv.from(data).to.path('/tmp/data.csv'); | ||
Write from any sort of destination. A convenient function to discover the | ||
destination. If is an function, then the csv will be provided as the first | ||
argument of the callback. If it is a string, then it is expected to be a | ||
Write to any sort of destination. It should be considered a convenient function
which will discover the nature of the destination where the CSV data is to be written.
If it is a function, then the CSV output will be provided as the first argument
of the callback. If it is a string, then it is expected to be a
file path. If it is an instance of a stream, it considers the object to be an
output stream.
Here's some examples on how to use this function | ||
Here are some examples of how to use this function:
@@ -55,3 +59,3 @@ csv() | ||
var to; | ||
to = function(mixed) { | ||
to = function(mixed, options) { | ||
var error; | ||
@@ -61,7 +65,7 @@ error = false; | ||
case 'string': | ||
to.path(mixed); | ||
to.path(mixed, options); | ||
break; | ||
case 'object': | ||
if (mixed instanceof Stream) { | ||
to.stream(mixed); | ||
to.stream(mixed, options); | ||
} else { | ||
@@ -72,3 +76,3 @@ error = true; | ||
case 'function': | ||
to.string(mixed); | ||
to.string(mixed, options); | ||
break; | ||
@@ -119,6 +123,8 @@ default: | ||
csv() | ||
.from(input) | ||
.to(function(ouput){ | ||
console.log(ouput); | ||
}, options) | ||
.from( '"1","2","3","4"\n"a","b","c","d"' ) | ||
.to( function(data, count){} ) | ||
The callback is called with 2 arguments:
* data Stringified CSV string
* count Number of stringified records
*/ | ||
@@ -137,3 +143,3 @@ | ||
stream.end = function() { | ||
return callback(data); | ||
return callback(data, csv.state.countWriten); | ||
}; | ||
@@ -140,0 +146,0 @@ csv.pipe(stream); |
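The change above means a `to(callback)` destination now invokes the callback with two arguments, the stringified output and the count of written records. A sketch of that contract, with a hypothetical in-memory destination standing in for the internal string stream:

```javascript
// Hypothetical stand-in for the internal string destination: it buffers the
// stringified records and, on end, calls the user callback with the full
// output and the record count (as `csv.state.countWriten` would be).
function collect(records, callback) {
  var data = records.join('\n');
  callback(data, records.length);
}

collect(['1,2,3,4', 'a,b,c,d'], function (data, count) {
  console.log(data);
  console.log(count + ' records'); // 2 records
});
```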
// Generated by CoffeeScript 1.3.3 | ||
var Transformer, stream; | ||
stream = require('stream'); | ||
/* | ||
@@ -6,7 +10,14 @@ Transforming data | ||
The contract is quite simple, you receive an array of fields for | ||
each record and return the transformed record. The return value | ||
may be an array, an associative array, a string or null. If null, | ||
the record will simply be skipped. | ||
Transformation may occur synchronously or asynchronously depending
on the provided transform callback and its declared argument count.
The callback is called for each line and its arguments are:
* *data* | ||
CSV record | ||
* *index* | ||
Incremented counter | ||
* *callback* | ||
Callback function to be called in asynchronous mode | ||
Unless you specify the `columns` read option, `data` are provided | ||
@@ -16,4 +27,13 @@ as arrays, otherwise they are objects with keys matching columns | ||
When the returned value is an array, the fields are merged in | ||
order. When the returned value is an object, it will search for | ||
In synchronous mode, the contract is quite simple: you receive an array
of fields for each record and return the transformed record.
In asynchronous mode, it is your responsibility to call the callback
provided as the third argument. It must be called with two arguments:
the first is an error, if any, and the second is the transformed record.
Transformed records may be an array, an associative array, a | ||
string or null. If null, the record will simply be skipped. When the | ||
returned value is an array, the fields are merged in order. | ||
When the returned value is an object, it will search for | ||
the `columns` property in the write or in the read options and | ||
@@ -26,21 +46,38 @@ smartly order the values. If no `columns` options are found, | ||
Example of transform returning a string | ||
Transform callback run synchronously: | ||
```javascript | ||
// node samples/transform.js | ||
var csv = require('csv'); | ||
csv() | ||
.from('82,Preisner,Zbigniew\n94,Gainsbourg,Serge') | ||
.to(console.log) | ||
.transform(function(data, index){ | ||
return data.reverse() | ||
}); | ||
// Executing `node samples/transform.js` prints:
// 94,Gainsbourg,Serge\n82,Preisner,Zbigniew | ||
csv() | ||
.from.path(__dirname+'/transform.in') | ||
.to.stream(process.stdout) | ||
.transform(function(data, index){ | ||
return (index>0 ? ',' : '') + data[0] + ":" + data[2] + ' ' + data[1]; | ||
}); | ||
Transform callback run asynchronously: | ||
// Prints something like:
// 82:Zbigniew Preisner,94:Serge Gainsbourg | ||
``` | ||
csv() | ||
.from('82,Preisner,Zbigniew\n94,Gainsbourg,Serge') | ||
.to(console.log) | ||
.transform(function(data, index, callback){ | ||
process.nextTick(function(){ | ||
callback(null, data.reverse()); | ||
}); | ||
}); | ||
// Executing `node samples/transform.js` prints:
// 94,Gainsbourg,Serge\n82,Preisner,Zbigniew | ||
Transform callback returning a string: | ||
csv() | ||
.from('82,Preisner,Zbigniew\n94,Gainsbourg,Serge') | ||
.to(console.log) | ||
.transform(function(data, index){ | ||
return (index>0 ? ',' : '') + data[0] + ":" + data[2] + ' ' + data[1]; | ||
}); | ||
// Executing `node samples/transform.js` prints:
// 82:Zbigniew Preisner,94:Serge Gainsbourg | ||
*/ | ||
var Transformer; | ||
@@ -52,8 +89,10 @@ Transformer = function(csv) { | ||
/* | ||
Transformer.prototype.__proto__ = stream.prototype; | ||
`transform(line)` | ||
----------------- | ||
/* no doc | ||
Call a callback to transform a line. Used by the `parse` function on each | ||
`transformer(csv).transform(line)` | ||
---------------------------------- | ||
Call a callback to transform a line. Called from the `parse` function on each | ||
line. It is responsible for transforming the data and finally calling `write`. | ||
@@ -64,38 +103,106 @@ */ | ||
Transformer.prototype.transform = function(line) { | ||
var column, columns, i, isObject, lineAsObject, _i, _len; | ||
columns = this.csv.options.from.columns; | ||
var column, columns, csv, done, finish, i, lineAsObject, sync, _i, _j, _len, _len1; | ||
csv = this.csv; | ||
columns = csv.options.from.columns; | ||
if (columns) { | ||
if (this.csv.state.count === 0 && columns === true) { | ||
this.csv.options.from.columns = columns = line; | ||
if (typeof columns === 'object' && columns !== null && !Array.isArray(columns)) { | ||
columns = Object.keys(columns); | ||
} | ||
if (csv.state.count === 0 && columns === true) { | ||
csv.options.from.columns = line; | ||
return; | ||
} | ||
lineAsObject = {}; | ||
for (i = _i = 0, _len = columns.length; _i < _len; i = ++_i) { | ||
column = columns[i]; | ||
lineAsObject[column] = line[i] || null; | ||
if (Array.isArray(line)) { | ||
lineAsObject = {}; | ||
for (i = _i = 0, _len = columns.length; _i < _len; i = ++_i) { | ||
column = columns[i]; | ||
lineAsObject[column] = line[i] || null; | ||
} | ||
line = lineAsObject; | ||
} else { | ||
lineAsObject = {}; | ||
for (i = _j = 0, _len1 = columns.length; _j < _len1; i = ++_j) { | ||
column = columns[i]; | ||
lineAsObject[column] = line[column] || null; | ||
} | ||
line = lineAsObject; | ||
} | ||
line = lineAsObject; | ||
} | ||
finish = (function(line) { | ||
var k, v; | ||
if (csv.state.count === 1 && csv.options.to.header === true) { | ||
columns = csv.options.to.columns || csv.options.from.columns; | ||
if (typeof columns === 'object') { | ||
columns = (function() { | ||
var _results; | ||
_results = []; | ||
for (k in columns) { | ||
v = columns[k]; | ||
_results.push(v); | ||
} | ||
return _results; | ||
})(); | ||
} | ||
csv.stringifier.write(columns); | ||
} | ||
csv.stringifier.write(line); | ||
if (csv.state.transforming === 0 && this.closed === true) { | ||
return this.emit('end', csv.state.count); | ||
} | ||
}).bind(this); | ||
csv.state.count++; | ||
if (this.callback) { | ||
this.csv.state.transforming = true; | ||
try { | ||
line = this.callback(line, this.csv.state.count); | ||
} catch (e) { | ||
return this.csv.error(e); | ||
sync = this.callback.length !== 3; | ||
csv.state.transforming++; | ||
done = function(err, line) { | ||
var isObject; | ||
if (err) { | ||
return csv.error(err); | ||
} | ||
isObject = typeof line === 'object' && !Array.isArray(line); | ||
if (csv.options.to.newColumns && !csv.options.to.columns && isObject) { | ||
Object.keys(line).filter(function(column) { | ||
return columns.indexOf(column) === -1; | ||
}).forEach(function(column) { | ||
return columns.push(column); | ||
}); | ||
} | ||
csv.state.transforming--; | ||
return finish(line); | ||
}; | ||
if (sync) { | ||
try { | ||
return done(null, this.callback(line, csv.state.count - 1)); | ||
} catch (err) { | ||
return done(err); | ||
} | ||
} else { | ||
try { | ||
return this.callback(line, csv.state.count - 1, function(err, line) { | ||
return done(err, line); | ||
}); | ||
} catch (_error) {} | ||
} | ||
isObject = typeof line === 'object' && !Array.isArray(line); | ||
if (this.csv.options.to.newColumns && !this.csv.options.to.columns && isObject) { | ||
Object.keys(line).filter(function(column) { | ||
return columns.indexOf(column) === -1; | ||
}).forEach(function(column) { | ||
return columns.push(column); | ||
}); | ||
} | ||
this.csv.state.transforming = false; | ||
} else { | ||
return finish(line); | ||
} | ||
if (this.csv.state.count === 0 && this.csv.options.to.header === true) { | ||
this.csv.stringifier.write(this.csv.options.to.columns || columns); | ||
}; | ||
/* no doc | ||
`transformer(csv).end()` | ||
------------------------ | ||
A transformer instance extends the EventEmitter and | ||
emit the 'end' event when the last callback is called. | ||
*/ | ||
Transformer.prototype.end = function() { | ||
if (this.closed) { | ||
return this.csv.error(new Error('Transformer already closed')); | ||
} | ||
this.csv.stringifier.write(line); | ||
return this.csv.state.count++; | ||
this.closed = true; | ||
if (this.csv.state.transforming === 0) { | ||
return this.emit('end'); | ||
} | ||
}; | ||
@@ -102,0 +209,0 @@ |
{ | ||
"name": "csv", | ||
"version": "0.2.0", | ||
"version": "0.2.1", | ||
"description": "CSV parser with simple api, full of options and tested against large datasets.", | ||
@@ -5,0 +5,0 @@ "author": "David Worms <david@adaltas.com>", |
@@ -0,1 +1,3 @@ | ||
[![Build Status](https://secure.travis-ci.org/wdavidw/node-csv-parser.png)](http://travis-ci.org/wdavidw/node-csv-parser) | ||
<pre> | ||
@@ -11,3 +13,3 @@ _ _ _ _____ _______ __ | ||
[Documentation is for the parser is available here](http://localhost:4000/projects/node-csv/). | ||
[Documentation for the parser is available here](http://www.adaltas.com/projects/node-csv/).
@@ -23,2 +25,44 @@ Important | ||
Quick example | ||
------------- | ||
```javascript | ||
// node samples/string.js | ||
var csv = require('csv'); | ||
csv() | ||
.from( '"1","2","3","4"\n"a","b","c","d"' ) | ||
.to( console.log ) | ||
// Output: | ||
// 1,2,3,4 | ||
// a,b,c,d | ||
``` | ||
Advanced example | ||
---------------- | ||
```javascript | ||
// node samples/sample.js | ||
var csv = require('csv'); | ||
csv() | ||
.from.stream(fs.createReadStream(__dirname+'/sample.in')) | ||
.to.path(__dirname+'/sample.out') | ||
.transform( function(data){ | ||
data.unshift(data.pop()); | ||
return data; | ||
}) | ||
.on('record', function(data,index){ | ||
console.log('#'+index+' '+JSON.stringify(data)); | ||
}) | ||
.on('end', function(count){ | ||
console.log('Number of lines: '+count); | ||
}) | ||
.on('error', function(error){ | ||
console.log(error.message); | ||
}); | ||
// Output: | ||
// #0 ["2000-01-01","20322051544","1979.0","8.8017226E7","ABC","45"] | ||
// #1 ["2050-11-27","28392898392","1974.0","8.8392926E7","DEF","23"] | ||
// Number of lines: 2 | ||
``` | ||
Migration | ||
@@ -49,2 +93,4 @@ --------- | ||
The test suite is run online with [Travis][travis] against Node.js versions 0.6, 0.7, 0.8 and 0.9.
Contributors | ||
@@ -66,2 +112,3 @@ ------------ | ||
* Edmund von der Burg: <https://github.com/evdb> | ||
* Douglas Christopher Wilson: <https://github.com/dougwilson> | ||
@@ -74,1 +121,3 @@ Related projects | ||
[travis]: https://travis-ci.org/#!/wdavidw/node-csv-parser | ||
@@ -8,6 +8,6 @@ | ||
csv() | ||
.fromPath(__dirname+'/columns.in', { | ||
.from.path(__dirname+'/columns.in', { | ||
columns: true | ||
}) | ||
.toStream(process.stdout, { | ||
.to.stream(process.stdout, { | ||
columns: ['id', 'name'], | ||
@@ -14,0 +14,0 @@ end: false |
@@ -19,4 +19,4 @@ // CSV sample - Copyright jon seymour jon.seymour@gmail.com | ||
csv() | ||
.fromStream(process.stdin) | ||
.toStream(process.stdout, {end: false}) | ||
.from.stream(process.stdin) | ||
.to.stream(process.stdout, {end: false}) | ||
.transform(function(data){ | ||
@@ -23,0 +23,0 @@ if (header) { |
@@ -8,6 +8,6 @@ | ||
csv() | ||
.fromPath(__dirname+'/columns.in',{ | ||
.from.path(__dirname+'/columns.in',{ | ||
columns: true | ||
}) | ||
.toStream(process.stdout, { | ||
.to.stream(process.stdout, { | ||
newColumns: true, | ||
@@ -14,0 +14,0 @@ end: false |
@@ -7,4 +7,4 @@ | ||
csv() | ||
.fromPath(__dirname+'/sample.in') | ||
.toPath(__dirname+'/sample.out') | ||
.from.path(__dirname+'/sample.in') | ||
.to.path(__dirname+'/sample.out') | ||
.transform(function(data){ | ||
@@ -14,9 +14,9 @@ data.unshift(data.pop()); | ||
}) | ||
.on('data',function(data,index){ | ||
.on('record', function(data, index){ | ||
console.log('#'+index+' '+JSON.stringify(data)); | ||
}) | ||
.on('end',function(count){ | ||
.on('end', function(count){ | ||
console.log('Number of lines: '+count); | ||
}) | ||
.on('error',function(error){ | ||
.on('error', function(error){ | ||
console.log(error.message); | ||
@@ -23,0 +23,0 @@ }); |
@@ -8,4 +8,4 @@ | ||
csv() | ||
.fromPath(__dirname+'/transform.in') | ||
.toStream(process.stdout) | ||
.from('82,Preisner,Zbigniew\n94,Gainsbourg,Serge') | ||
.to(console.log) | ||
.transform(function(data,index){ | ||
@@ -12,0 +12,0 @@ return (index>0 ? ',' : '') + data[0] + ":" + data[2] + ' ' + data[1]; |