kappa-core


kappa-core - npm Package Compare versions

Comparing version 3.0.1 to 3.0.2

.travis.yml (2 lines changed)

example.js

```diff
@@ -27,3 +27,3 @@ var kappa = require('.')
-core.feed('default', function (err, feed) {
+core.writer('default', function (err, feed) {
   feed.append(1, function (err) {
@@ -30,0 +30,0 @@ core.api.sum.get(function (err, value) {
```

index.js

```diff
@@ -27,10 +27,8 @@ var inherits = require('inherits')
   }
-  var idx = indexer({
+  var idx = indexer(Object.assign({}, view, {
     log: this._logs,
     version: version,
     maxBatch: view.maxBatch || 10,
-    batch: view.map,
-    fetchState: view.fetchState,
-    storeState: view.storeState
-  })
+    batch: view.map
+  }))
   idx.on('error', function (err) {
@@ -37,0 +35,0 @@ self.emit('error', err)
```
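The `Object.assign` change above means the indexer now receives every property of the view object rather than a hand-picked few, so optional hooks such as `fetchState`, `storeState`, and `clearState` pass through automatically. A minimal sketch of the merge semantics (the `view` object and option values here are illustrative stand-ins, not kappa-core's internals):

```js
// Illustrative stand-ins for a view object and the indexer options; this only
// demonstrates how Object.assign merges them, not kappa-core's internals.
var view = {
  map: function (entries, next) { next() },
  fetchState: function (cb) { cb(null, null) },
  maxBatch: 50
}

var opts = Object.assign({}, view, {
  log: 'log-placeholder',
  version: 1,
  maxBatch: view.maxBatch || 10,
  batch: view.map
})

// Extra view properties (fetchState here) now pass through automatically...
console.log(typeof opts.fetchState) // 'function'
// ...and later sources win on duplicate keys, so the view's maxBatch survives
// because it is re-specified with a fallback in the second argument.
console.log(opts.maxBatch) // 50
```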

package.json

```diff
 {
   "name": "kappa-core",
-  "description": "a small core for append-only log based programs",
+  "description": "Minimal peer-to-peer database, based on kappa architecture.",
   "author": "Stephen Whitmore <sww@eight.net>",
-  "version": "3.0.1",
+  "version": "3.0.2",
   "repository": {
@@ -7,0 +7,0 @@ "url": "git://github.com/noffle/kappa-core.git"
```

# kappa-core
> Minimal peer-to-peer database, based on kappa architecture.
A lot like [flumedb][flumedb], but using
[multifeed](https://github.com/noffle/multifeed) as an append-only log base,
which is actually a *set* of append-only logs.
Pronounced *"capricorn"*.

## Status

*Experimental*, but functional.

## Usage

This example sets up an on-disk log store and an in-memory view store. The view
tallies the sum of all of the numbers in the logs, and provides an API for
getting that sum.
```js
var kappa = require('kappa-core')
var view = require('kappa-view')
var memdb = require('memdb')

// Store logs in a directory called "log". Store views in memory.
var core = kappa('./log', { valueEncoding: 'json' })
var store = memdb()

// View definition
var sumview = view(store, function (db) {
  return {
    // Called with a batch of log entries to be processed by the view.
    // No further entries are processed by this view until 'next()' is called.
    map: function (entries, next) {
      db.get('sum', function (err, value) {
        var sum
        if (err && err.notFound) sum = 0
        else if (err) return next(err)
        else sum = value
        entries.forEach(function (entry) {
          if (typeof entry.value === 'number') sum += entry.value
        })
        db.put('sum', sum, next)
      })
    },

    // Whatever is defined in the "api" object is publicly accessible
    api: {
      get: function (core, cb) {
        this.ready(function () { // wait for all views to catch up
          db.get('sum', function (err, value) {
            if (err && err.notFound) value = 0
            else if (err) return cb(err)
            cb(null, value)
          })
        })
      }
    }
  }
})

// the api will be mounted at core.api.sum
core.use('sum', 1, sumview) // name the view 'sum' and consider the 'sumview' logic as version 1

core.writer('default', function (err, writer) {
  writer.append(1, function (err) {
    core.api.sum.get(function (err, value) {
      console.log(value) // 1
    })
  })
})
```
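Stripped of its storage concerns, the view's `map` function above is just a fold over batches of entries. A self-contained sketch of that tallying logic (plain JavaScript, no kappa-core required):

```js
// Plain-JavaScript sketch of the sum view's batching logic: each call
// processes one batch of log entries, ignoring non-numeric values.
function sumBatch (sum, entries) {
  entries.forEach(function (entry) {
    if (typeof entry.value === 'number') sum += entry.value
  })
  return sum
}

var sum = 0
sum = sumBatch(sum, [{ value: 1 }, { value: 'skipped' }])
sum = sumBatch(sum, [{ value: 4 }])
console.log(sum) // 5
```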

- `valueEncoding`: a string describing how the data will be encoded.
- `multifeed`: A preconfigured instance of [multifeed](https://github.com/kappa-db/multifeed)
### core.writer(name, cb)
Get or create a local writable log called `name`. If it already exists, it is
returned, otherwise it is created. A writer is an instance of
[hypercore](https://github.com/mafintosh/hypercore).


Fetch a log / feed by its **public key** (a `Buffer` or hex string).
### var feeds = core.feeds()
An array of all hypercores in the kappa-core. Check a feed's `key` to find the
one you want, or check its `writable` / `readable` properties.

Only populated once `core.ready(fn)` is fired.
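Since `core.feeds()` returns plain hypercore objects, picking out a particular feed is ordinary array filtering. A sketch with stand-in objects (real feeds expose `key` as a `Buffer` alongside the `writable` flag):

```js
// Stand-in objects mimicking the `key` / `writable` shape of hypercore feeds,
// to show how you might locate your local writable feed among core.feeds().
var feeds = [
  { key: 'aa11', writable: false },
  { key: 'bb22', writable: true }
]

var mine = feeds.filter(function (feed) { return feed.writable })
console.log(mine[0].key) // 'bb22'
```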

```js
// All are optional except "map"
{
  // Process each batch of entries
  map: function (entries, next) {
    entries.forEach(function (entry) {
      // ...
    })
    next()
  },

  // Your useful functions for users of this view to call
  api: {
    someSyncFunction: function (core) { return ... },
    someAsyncFunction: function (core, cb) { process.nextTick(cb, ...) }
  },

  // Save progress state so processing can resume on later runs of the program.
  // Not required if you're using the "kappa-view" module, which handles this for you.
  fetchState: function (cb) { ... },
  storeState: function (state, cb) { ... },
  clearState: function (cb) { ... },

  // Runs after each batch of entries is done processing and progress is persisted
  indexed: function (entries) { ... },

  // Number of entries to process in a batch
  maxBatch: 100,
}
```
**NOTE**: The kappa-core instance `core` is always passed as the first
parameter in all of the `api` functions you define.

`version` is an integer that represents what version you want to consider the
view logic as. Whenever you change the view's logic you should bump its
version, so that the underlying data generated by the view is wiped and
regenerated again from scratch. This provides a means to change the logic or
data structure of a view over time in a way that is future-compatible.
The `fetchState`, `storeState`, and `clearState` functions are optional: they
tell the view where to store its state information about what log entries have
been indexed thus far. If not passed in, they will be stored in memory (i.e.
reprocessed on each fresh run of the program). You can use any backend you want
(like leveldb) to store the `Buffer` object `state`. If you use a module like
[kappa-view](https://github.com/kappa-db/kappa-view), it will handle state
management on your behalf.

`indexed` is an optional function to run whenever a new batch of entries has
been indexed and written to storage. Receives an array of entries.
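For illustration, a minimal in-memory version of the three state hooks might look like the following (a sketch only: callbacks fire synchronously here for brevity, and a real application would typically persist the `state` `Buffer` or let kappa-view manage it):

```js
// In-memory state hooks (illustrative). kappa-core hands storeState an opaque
// Buffer describing indexing progress; fetchState must return it later so
// processing can resume where it left off.
var state = null

var stateHooks = {
  storeState: function (s, cb) { state = s; cb(null) },
  fetchState: function (cb) { cb(null, state) },
  clearState: function (cb) { state = null; cb(null) }
}

stateHooks.storeState(Buffer.from('progress'), function (err) {
  stateHooks.fetchState(function (err, s) {
    console.log(s.toString()) // 'progress'
  })
})
```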
### core.ready(viewNames, cb)


Create a duplex replication stream. `opts` are passed in to
[multifeed](https://github.com/kappa-db/multifeed)'s API of the same name.

### core.on('error', function (err) {})

## Useful helper modules

Here are some useful modules that play well with kappa-core for building
materialized views:

- [unordered-materialized-bkd](https://github.com/digidem/unordered-materialized-bkd): spatial index

kappa-core is built atop two major building blocks:

1. [hypercore][hypercore], which is used for (append-only) log storage
2. materialized views, which are built by traversing logs in potentially
   out-of-order sequence

hypercore provides some very useful superpowers:

1. all data is cryptographically associated with a writer's public key
2. partial replication: parts of logs can be selectively sync'd between peers,
instead of all-or-nothing, without loss of cryptographic integrity
Building views in arbitrary sequence is more challenging than when order is
known to be topographic or sorted in some way, but confers some benefits:
1. most programs are only interested in the latest values of data; the long tail
of history can be traversed asynchronously at leisure after the tips of the
logs are processed
2. the views are tolerant of partially available data. Many of the modules

listed in the section below depend on *topographic completeness*: all entries

- [hyperlog](https://github.com/mafintosh/hyperlog)
- a harmonious meshing of ideas with @substack in the south of spain

