# IPFS unixFS Engine


JavaScript implementation of the layout and chunking mechanisms used by IPFS
**BEWARE BEWARE BEWARE there might be 🐉**

This module has been through several iterations and is still far from a nice, easily understandable codebase. Some features are currently missing.
## Install
With npm installed, run:

```sh
$ npm install ipfs-unixfs-engine
```
## Usage
### Example Importer
Let's create a little directory to import:
```sh
$ cd /tmp
$ mkdir foo
$ echo 'hello' > foo/bar
$ echo 'world' > foo/quux
```
And write the importing logic:
```js
const memStore = require('abstract-blob-store')
const fs = require('fs')
const Repo = require('ipfs-repo')
const BlockService = require('ipfs-block-service')
const MerkleDag = require('ipfs-merkle-dag')
const Importer = require('ipfs-unixfs-engine').Importer

// Set up an in-memory repo, block service and DAG service
const repo = new Repo('', { stores: memStore })
const blockService = new BlockService(repo)
const dagService = new MerkleDag.DAGService(blockService)

const filesAddStream = new Importer(dagService)
const res = []

// Read the files created above
const rs = fs.createReadStream('/tmp/foo/bar')
const rs2 = fs.createReadStream('/tmp/foo/quux')

const input = { path: '/tmp/foo/bar', content: rs }
const input2 = { path: '/tmp/foo/quux', content: rs2 }

// Collect the DAG Node stats emitted for each file and directory
filesAddStream.on('data', (info) => {
  res.push(info)
})

filesAddStream.on('end', () => {
  console.log('Finished adding files!')
})

filesAddStream.write(input)
filesAddStream.write(input2)
filesAddStream.end()
```
When run, the stats of each DAG Node are output on a `data` event, first for each file and then for every directory up to the root:
```
{ multihash: <Buffer 12 20 bd e2 2b 57 3f 6f bd 7c cc 5a 11 7f 28 6c a2 9a 9f c0 90 e1 d4 16 d0 5f 42 81 ec 0c 2a 7f 7f 93>,
  size: 39243,
  path: '/tmp/foo/bar' }

{ multihash: <Buffer 12 20 bd e2 2b 57 3f 6f bd 7c cc 5a 11 7f 28 6c a2 9a 9f c0 90 e1 d4 16 d0 5f 42 81 ec 0c 2a 7f 7f 93>,
  size: 59843,
  path: '/tmp/foo/quux' }

{ multihash: <Buffer 12 20 bd e2 2b 57 3f 6f bd 7c cc 5a 11 7f 28 6c a2 9a 9f c0 90 e1 d4 16 d0 5f 42 81 ec 0c 2a 7f 7f 93>,
  size: 93242,
  path: '/tmp/foo' }

{ multihash: <Buffer 12 20 bd e2 2b 57 3f 6f bd 7c cc 5a 11 7f 28 6c a2 9a 9f c0 90 e1 d4 16 d0 5f 42 81 ec 0c 2a 7f 7f 93>,
  size: 94234,
  path: '/tmp' }
```
### Importer API
```js
const Importer = require('ipfs-unixfs-engine').Importer

const add = new Importer(dag)
```
The importer is an object-mode Transform stream that accepts objects of the form:
```js
{
  path: 'a name',
  content: (Buffer or Readable stream)
}
```
The stream will output IPFS DAG Node stats for the nodes as they are added to
the DAG Service. When stats on a node are emitted they are guaranteed to have
been written into the DAG Service's storage mechanism.
The input's file paths and directory structure will be preserved in the DAG
Nodes.
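For example, here is a minimal sketch (reusing the `dagService` set up in the importer example above; the file name is only illustrative) that adds a single file whose content is an in-memory Buffer rather than a stream:

```js
const Importer = require('ipfs-unixfs-engine').Importer

// Assumes `dagService` was created as in the importer example above
const adder = new Importer(dagService)

adder.on('data', (node) => {
  // Each emitted object carries the node's multihash, size and path
  console.log(node.path, node.size)
})
adder.on('end', () => console.log('done'))

// `content` may be a Buffer instead of a Readable stream
adder.write({ path: 'hello.txt', content: Buffer.from('hello world\n') })
adder.end()
```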
#### Importer options
In the second argument of the importer constructor you can specify the following options:
- `chunker` (string, defaults to `"fixed"`): the chunking strategy. Currently only `"fixed"` is supported.
- `chunkerOptions` (object, optional): the options for the chunker. Defaults to an object with the following properties:
  - `maxChunkSize` (positive integer, defaults to `262144`): the maximum chunk size for the `fixed` chunker.
- `strategy` (string, defaults to `"balanced"`): the DAG builder strategy name. Supports:
  - `flat`: a flat list of chunks
  - `balanced`: builds a balanced tree
  - `trickle`: builds a trickle tree
- `maxChildrenPerNode` (positive integer, defaults to `174`): the maximum number of children per node for the `balanced` and `trickle` DAG builder strategies.
- `layerRepeat` (positive integer, defaults to `4`): the maximum repetition of parent nodes for each layer of the tree (only applies to the `trickle` DAG builder strategy).
- `reduceSingleLeafToSelf` (boolean, defaults to `false`): optimization that, when the set of nodes being reduced contains only a single node, reduces it to that node.
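As an illustrative sketch (the option values here are only examples, not recommendations), the options object is passed as the second constructor argument:

```js
const Importer = require('ipfs-unixfs-engine').Importer

// Assumes `dagService` is set up as in the importer example above.
const importer = new Importer(dagService, {
  chunker: 'fixed',
  chunkerOptions: {
    maxChunkSize: 65536    // split content into 64 KiB chunks
  },
  strategy: 'trickle',     // build a trickle tree instead of a balanced one
  maxChildrenPerNode: 174,
  layerRepeat: 4
})
```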
### Example Exporter
```js
const memStore = require('abstract-blob-store')
const Repo = require('ipfs-repo')
const BlockService = require('ipfs-block-service')
const MerkleDag = require('ipfs-merkle-dag')
const Exporter = require('ipfs-unixfs-engine').Exporter

const repo = new Repo('', { stores: memStore })
const blockService = new BlockService(repo)
const dagService = new MerkleDag.DAGService(blockService)

// Create an export readable object stream with the hash you want to export and a dag service
const filesStream = new Exporter(<multihash>, dagService)

// Pipe each exported file's content to stdout
filesStream.on('data', (file) => {
  file.content.pipe(process.stdout)
})
```
### Exporter API
```js
const Exporter = require('ipfs-unixfs-engine').Exporter

new Exporter(hash, dagService)
```
Uses the given DAG Service to fetch the IPFS UnixFS object(s) identified by the given multihash.
Creates a new readable stream in object mode that outputs objects of the form
```js
{
  path: 'a name',
  content: (Buffer or Readable stream)
}
```
Errors are received as with a normal stream, by listening on the `'error'` event.
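For instance, a minimal sketch (again assuming `dagService` is set up as in the exporter example and `multihash` is the hash of the object to export) that writes each exported file to disk and handles errors:

```js
const fs = require('fs')
const Exporter = require('ipfs-unixfs-engine').Exporter

// Assumes `dagService` and `multihash` are already defined
const exporter = new Exporter(multihash, dagService)

exporter.on('error', (err) => {
  console.error('export failed:', err)
})

exporter.on('data', (file) => {
  // Write each exported file's content under its path's base name
  file.content.pipe(fs.createWriteStream('./' + file.path.split('/').pop()))
})
```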
## Contribute
Feel free to join in. All welcome. Open an issue!
This repository falls under the IPFS Code of Conduct.

## License
MIT