# linear-algebra

Efficient, high-performance linear algebra library for node.js and browsers.
This is a low-level algebra library which supports basic vector and matrix operations, and has been designed with machine learning algorithms in mind.

## Features

* Basic vector and matrix operations: dot product, transpose, element-wise arithmetic, and more
* In-place variants of matrix methods to avoid unnecessary allocations
* Optional higher-precision mode via a custom floating-point adder
* Works in node.js and in browsers
* Performance benchmarks included

## Installation

### node.js

Install using npm:

```bash
$ npm install linear-algebra
```

### Browser

Use bower:

```bash
$ bower install linear-algebra
```

In the browser the library is exposed via the `linearAlgebra()` function.
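
For example (the script filename below is an assumption; use whichever path bower placed the built file at):

```html
<script type="text/javascript" src="linear-algebra.js"></script>
<script type="text/javascript">
  // the library still needs to be initialised once loaded
  var linAlg = linearAlgebra(),
    Vector = linAlg.Vector,
    Matrix = linAlg.Matrix;
</script>
```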

## How to use

The examples below assume you are running in node.js. The library needs to be initialised once loaded:

```js
var linearAlgebra = require('linear-algebra')(),   // initialise the library
  Vector = linearAlgebra.Vector,
  Matrix = linearAlgebra.Matrix;
```

Note that both matrices and vectors are represented by `Matrix` instances. The `Vector` object simply contains helpers to create single-row `Matrix` objects.
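
Since `Vector` helpers return ordinary `Matrix` objects, everything documented below for matrices applies to vectors too. A quick sketch (the `instanceof` check is an assumption about the implementation; the point is that the returned object exposes the usual `Matrix` API):

```js
var v = Vector.zero(3);              // a single-row (1x3) matrix of zeroes
console.log( v.rows );               // 1
console.log( v.cols );               // 3
console.log( v instanceof Matrix );  // expected: true
```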

### In-place methods

Matrix operations which result in a new matrix are implemented as two methods: a default method which returns a new `Matrix` instance, and an in-place method which overwrites the original. In some cases you will get better performance by switching to the in-place version; in other cases the default version is faster.

The in-place version of a method has the same name as the default method, with an additional `_` suffix:

```js
var m = new Matrix([ [1, 2, 3], [4, 5, 6] ]);

var m2 = m.mulEach(5);
m2 === m;   // false - a new Matrix instance was returned

var m2 = m.mulEach_(5);
m2 === m;   // true - m itself was overwritten
```
Using the in-place version of a method may not always yield a performance improvement. You can run the performance benchmarks to see examples of this.
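
For example, an iterative routine that repeatedly rescales the same matrix can use the in-place variant to avoid allocating a new `Matrix` on every pass (a sketch, assuming `mulEach_` exists per the `_` naming rule above):

```js
var weights = new Matrix([ [0.5, 0.5, 0.5] ]);

for (var i = 0; i < 1000; i++) {
  // weights = weights.mulEach(0.99);  // default: allocates a new Matrix each pass
  weights.mulEach_(0.99);              // in-place: re-uses the existing storage
}

console.log(weights.data);
```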

## API

```js
var m, m2, m3;

// construct a matrix from an array of rows
m = new Matrix([ [1, 2, 3], [4, 5, 6] ]);
console.log( m.rows );   // 2
console.log( m.cols );   // 3
console.log( m.data );   // [ [1, 2, 3], [4, 5, 6] ]

// identity matrix
m = Matrix.identity(3);
console.log( m.data );   // [ [1, 0, 0], [0, 1, 0], [0, 0, 1] ]

// scalar matrix (9 along the diagonal)
m = Matrix.scalar(3, 9);
console.log( m.data );   // [ [9, 0, 0], [0, 9, 0], [0, 0, 9] ]

// zero vector (a single-row matrix)
m = Vector.zero(5);
console.log( m.data );   // [ [0, 0, 0, 0, 0] ]

// transpose
m = new Matrix([ [1, 2, 3], [4, 5, 6] ]);
m2 = m.trans();
console.log(m2.data);    // [ [1, 4], [2, 5], [3, 6] ]

// dot product (matrix multiplication)
m = new Matrix([ [1, 2, 3], [4, 5, 6] ]);
m2 = new Matrix([ [1, 2], [3, 4], [5, 6] ]);
m3 = m.dot(m2);
console.log(m3.data);    // [ [22, 28], [49, 64] ]

// element-wise multiplication
m = new Matrix([ [10, 20], [30, 40], [50, 60] ]);
m2 = new Matrix([ [1, 2], [3, 4], [5, 6] ]);
m3 = m.mul(m2);
console.log(m3.data);    // [ [10, 40], [90, 160], [250, 360] ]

// element-wise addition
m = new Matrix([ [10, 20], [30, 40], [50, 60] ]);
m2 = new Matrix([ [1, 2], [3, 4], [5, 6] ]);
m3 = m.plus(m2);
console.log(m3.data);    // [ [11, 22], [33, 44], [55, 66] ]

// element-wise subtraction
m = new Matrix([ [10, 20], [30, 40], [50, 60] ]);
m2 = new Matrix([ [1, 2], [3, 4], [5, 6] ]);
m3 = m.minus(m2);
console.log(m3.data);    // [ [9, 18], [27, 36], [45, 54] ]

// natural log of each element
m = new Matrix([ [1, 2], [3, 4], [5, 6] ]);
m2 = m.log();
console.log(m2.data);

// sigmoid of each element
m = new Matrix([ [1, 2], [3, 4], [5, 6] ]);
m2 = m.sigmoid();
console.log(m2.data);

// add a scalar to each element
m = new Matrix([ [1, 2], [3, 4], [5, 6] ]);
m2 = m.plusEach(5);
console.log(m2.data);    // [ [6, 7], [8, 9], [10, 11] ]

// multiply each element by a scalar
m = new Matrix([ [1, 2], [3, 4], [5, 6] ]);
m2 = m.mulEach(5);
console.log(m2.data);    // [ [5, 10], [15, 20], [25, 30] ]

// apply a custom function to each element
m = new Matrix([ [1, 2], [3, 4], [5, 6] ]);
m2 = m.map(function(v) {
  return v - 1;
});
console.log(m2.data);    // [ [0, 1], [2, 3], [4, 5] ]

// sum of all elements
m = new Matrix([ [1, 2], [3, 4], [5, 6] ]);
console.log(m.getSum()); // 21
```
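
Putting several of these operations together, here is a sketch of the kind of computation the library is aimed at - a single forward pass of a logistic-regression-style model. The data and shapes are made up for illustration; only the methods shown above are used:

```js
// 3 training examples, 2 features each
var X = new Matrix([
  [1.0, 2.0],
  [0.5, 1.5],
  [3.0, 0.5]
]);

// one weight per feature, as a 2x1 column matrix
var w = new Matrix([ [0.2], [0.4] ]);

// labels for each example (3x1)
var y = new Matrix([ [1], [0], [1] ]);

// predictions: sigmoid(X . w), a 3x1 matrix of probabilities
var predictions = X.dot(w).sigmoid();

// per-example prediction error
var errors = predictions.minus(y);

console.log(predictions.data);
console.log(errors.data);
```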

## Higher precision

When adding floating-point numbers together, the result can be off by a tiny rounding error (to see this, try `0.1 + 0.2` in your JS console).
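
For example:

```js
0.1 + 0.2;          // 0.30000000000000004
0.1 + 0.2 === 0.3;  // false
```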

This module allows you to supply a custom adder (e.g. the `add` module) as an option to the initialization call.

In node.js:

```js
var linAlg = require('linear-algebra')({
    add: require('add')
  }),
  Vector = linAlg.Vector,
  Matrix = linAlg.Matrix;
```
In the browser you will need to load in the higher-precision version of the library to be able to do this:

```html
<script type="text/javascript" src="add.js"></script>
<script type="text/javascript" src="linear-algebra.precision.js"></script>
<script type="text/javascript">
  var linAlg = linearAlgebra({
      add: add
    }),
    Vector = linAlg.Vector,
    Matrix = linAlg.Matrix;
</script>
```
Note: If you use the higher-precision version of the library with a custom adder then expect performance to drop significantly for some matrix operations.
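
If you would rather not depend on a separate module, you can pass in your own adder. The sketch below assumes the adder is called with an array of numbers and should return their sum (check the library source or the `add` module's documentation to confirm the exact contract); it uses Kahan (compensated) summation to reduce rounding error:

```js
// Kahan summation: keeps a running compensation term to correct for
// the error introduced by each floating-point addition.
function kahanAdd(values) {
  var sum = 0, compensation = 0;
  for (var i = 0; i < values.length; i++) {
    var y = values[i] - compensation;
    var t = sum + y;
    compensation = (t - sum) - y;
    sum = t;
  }
  return sum;
}

var linAlg = require('linear-algebra')({
    add: kahanAdd
  }),
  Vector = linAlg.Vector,
  Matrix = linAlg.Matrix;
```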

## Performance

To run the performance benchmarks:

```bash
$ npm install -g gulp
$ npm install
$ gulp benchmark
```

As mentioned earlier, matrix operations which result in a new matrix are implemented as two methods: a default method which returns a new `Matrix` instance, and an in-place method which overwrites the original.

The in-place versions are provided because, in general, overwriting an existing array is twice as fast as creating a new one. However, this does not hold for every matrix operation in this library.

If you're dealing with large matrices (100+ rows and columns) then you're more likely to see a benefit from using the in-place versions of methods:

```
[14:38:35] Running suite Default (new object) vs in-place modification [/Users/home/dev/js/linear-algebra/benchmark/default-vs-in-place.perf.js]...
[14:38:41] Matrix dot-product (5x5) - default x 1,114,666 ops/sec ±0.94% (96 runs sampled)
[14:38:46] Matrix dot-product (5x5) - in-place x 721,296 ops/sec ±2.95% (94 runs sampled)
[14:38:52] Matrix dot-product (100x100) - default x 269 ops/sec ±3.75% (88 runs sampled)
[14:38:57] Matrix dot-product (100x100) - in-place x 283 ops/sec ±0.94% (93 runs sampled)
[14:39:09] Matrix dot-product (500x500) - default x 1.40 ops/sec ±9.96% (8 runs sampled)
[14:39:20] Matrix dot-product (500x500) - in-place x 1.45 ops/sec ±4.30% (8 runs sampled)
[14:39:26] Matrix transpose (1000x5) - default x 13,770 ops/sec ±3.00% (91 runs sampled)
[14:39:31] Matrix transpose (1000x5) - in-place x 9,736 ops/sec ±2.44% (87 runs sampled)
[14:39:37] Multiple matrix operations - default x 218 ops/sec ±2.57% (88 runs sampled)
[14:39:42] Multiple matrix operations - in-place x 222 ops/sec ±0.71% (89 runs sampled)
```

## Building

To build the code and run the tests:

```bash
$ npm install -g gulp
$ npm install
$ gulp
```

## Contributing

Contributions are welcome! Please see CONTRIBUTING.md.

## License

MIT - see LICENSE.md