@guildofweavers/genstark - npm Package Compare versions

Comparing version 0.5.2 to 0.5.3


genstark.d.ts

@@ -11,3 +11,3 @@ declare module '@guildofweavers/genstark' {

export { FiniteField, createPrimeField, Vector, Matrix } from '@guildofweavers/galois';
- export { MerkleTree, BatchMerkleProof, HashAlgorithm, getHashDigestSize } from '@guildofweavers/merkle';
+ export { MerkleTree, BatchMerkleProof, HashAlgorithm, createHash, Hash } from '@guildofweavers/merkle';

@@ -46,8 +46,17 @@ // STARK

* @param security Security options for the STARK instance
- * @param optimization WebAssembly optimization options; null turns optimization off
+ * @param optimization A flag indicating whether WebAssembly-based optimization should be enabled
* @param logger Optional logger; defaults to console logging; set to null to disable
*/
- constructor(source: string, security?: Partial<SecurityOptions>, optimization?: Partial<OptimizationOptions> | null, logger?: Logger);
+ constructor(source: string, security?: Partial<SecurityOptions>, optimization?: boolean, logger?: Logger);
+ /**
+ * Creates a STARK instance based on the provided parameters
+ * @param source AirScript source for the STARK
+ * @param security Security options for the STARK instance
+ * @param optimization WebAssembly optimization options
+ * @param logger Optional logger; defaults to console logging; set to null to disable
+ */
+ constructor(source: string, security?: Partial<SecurityOptions>, optimization?: Partial<OptimizationOptions>, logger?: Logger);
/**
* Generate a proof of computation for this STARK

@@ -54,0 +63,0 @@ * @param assertions Boundary constraints for the computation


index.js

@@ -11,5 +11,5 @@ "use strict";

exports.MerkleTree = merkle_1.MerkleTree;
- exports.getHashDigestSize = merkle_1.getHashDigestSize;
+ exports.createHash = merkle_1.createHash;
var galois_1 = require("@guildofweavers/galois");
exports.createPrimeField = galois_1.createPrimeField;
//# sourceMappingURL=index.js.map

@@ -11,5 +11,5 @@ "use strict";

// --------------------------------------------------------------------------------------------
- constructor(field, indexGenerator, hasAlgorithm) {
+ constructor(field, indexGenerator, hash) {
this.field = field;
- this.hashAlgorithm = hasAlgorithm;
+ this.hash = hash;
this.indexGenerator = indexGenerator;

@@ -20,4 +20,5 @@ }

prove(lTree, values, domain, maxDegreePlus1) {
+ const componentCount = Math.min(Math.ceil(Math.log2(values.length) / 2) - 4, 0);
const result = {
- components: new Array(),
+ components: new Array(componentCount),
remainder: []

@@ -42,3 +43,3 @@ };

// verify Merkle proof for the column
- if (!merkle_1.MerkleTree.verifyBatch(columnRoot, positions, columnProof, this.hashAlgorithm)) {
+ if (!merkle_1.MerkleTree.verifyBatch(columnRoot, positions, columnProof, this.hash)) {
throw new StarkError_1.StarkError(`Verification of column Merkle proof failed at depth ${depth}`);

@@ -55,3 +56,3 @@ }

// verify Merkle proof for polynomials
- if (!merkle_1.MerkleTree.verifyBatch(lRoot, polyPositions, polyProof, this.hashAlgorithm)) {
+ if (!merkle_1.MerkleTree.verifyBatch(lRoot, polyPositions, polyProof, this.hash)) {
throw new StarkError_1.StarkError(`Verification of polynomial Merkle proof failed at depth ${depth}`);

@@ -100,3 +101,3 @@ }

// check that Merkle root matches up
- const cTree = merkle_1.MerkleTree.create(proof.remainder, this.hashAlgorithm);
+ const cTree = merkle_1.MerkleTree.create(proof.remainder, this.hash);
if (!cTree.root.equals(lRoot)) {

@@ -116,3 +117,3 @@ throw new StarkError_1.StarkError(`Remainder values do not match Merkle root of the last column`);

this.verifyRemainder(values, maxDegreePlus1, rootOfUnity);
- result.remainder = lTree.values;
+ result.remainder = lTree.getLeaves();
return;

@@ -130,4 +131,5 @@ }

// put the resulting column into a merkle tree
- const hashDigestSize = merkle_1.getHashDigestSize(this.hashAlgorithm);
- const cTree = merkle_1.MerkleTree.create(utils_1.vectorToBuffers(column, hashDigestSize), this.hashAlgorithm);
+ const cTree = merkle_1.MerkleTree.create(column, this.hash);
+ // recursively build all other components
+ this.fri(cTree, column, Math.floor(maxDegreePlus1 / 4), depth + 1, domain, result);
// compute spot check positions in the column and corresponding positions in the original values

@@ -143,10 +145,8 @@ const columnLength = column.length;

}
- // build this component of the proof
- result.components.push({
+ // build and add proof component to the result
+ result.components[depth] = {
columnRoot: cTree.root,
columnProof: cTree.proveBatch(positions),
polyProof: lTree.proveBatch(polyPositions)
- });
- // recursively build all other components
- this.fri(cTree, column, Math.floor(maxDegreePlus1 / 4), depth + 1, domain, result);
+ };
}

@@ -153,0 +153,0 @@ verifyRemainder(remainder, maxDegreePlus1, rootOfUnity) {

"use strict";
Object.defineProperty(exports, "__esModule", { value: true });
const merkle_1 = require("@guildofweavers/merkle");
const utils = require("./utils");

@@ -10,6 +9,7 @@ // CLASS DEFINITION

// --------------------------------------------------------------------------------------------
- constructor(config) {
+ constructor(config, hashDigestSize) {
this.fieldElementSize = config.field.elementSize;
this.stateWidth = config.stateWidth;
this.secretInputCount = config.secretInputCount;
+ this.hashDigestSize = hashDigestSize;
}

@@ -46,5 +46,4 @@ // EVALUATION SERIALIZER/PARSER

// --------------------------------------------------------------------------------------------
- serializeProof(proof, hashAlgorithm) {
- const nodeSize = merkle_1.getHashDigestSize(hashAlgorithm);
- const size = utils.sizeOf(proof, hashAlgorithm);
+ serializeProof(proof) {
+ const size = utils.sizeOf(proof, this.hashDigestSize);
const buffer = Buffer.allocUnsafe(size.total);

@@ -57,7 +56,7 @@ let offset = 0;

offset = buffer.writeUInt8(proof.evProof.depth, offset);
- offset = utils.writeMatrix(buffer, offset, proof.evProof.nodes);
+ offset = utils.writeMatrix(buffer, offset, proof.evProof.nodes, this.hashDigestSize);
// lcProof
offset += proof.lcProof.root.copy(buffer, offset);
offset = buffer.writeUInt8(proof.lcProof.depth, offset);
- offset = utils.writeMatrix(buffer, offset, proof.lcProof.nodes);
+ offset = utils.writeMatrix(buffer, offset, proof.lcProof.nodes, this.fieldElementSize);
// ldProof

@@ -68,4 +67,4 @@ offset = buffer.writeUInt8(proof.ldProof.components.length, offset);

offset += component.columnRoot.copy(buffer, offset);
- offset = utils.writeMerkleProof(buffer, offset, component.columnProof, nodeSize);
- offset = utils.writeMerkleProof(buffer, offset, component.polyProof, nodeSize);
+ offset = utils.writeMerkleProof(buffer, offset, component.columnProof, this.fieldElementSize);
+ offset = utils.writeMerkleProof(buffer, offset, component.polyProof, this.fieldElementSize);
}

@@ -76,4 +75,3 @@ offset = utils.writeArray(buffer, offset, proof.ldProof.remainder);

}
- parseProof(buffer, hashAlgorithm) {
- const nodeSize = merkle_1.getHashDigestSize(hashAlgorithm);
+ parseProof(buffer) {
let offset = 0;

@@ -86,14 +84,14 @@ // values

// evProof
- const evRoot = Buffer.allocUnsafe(nodeSize);
- offset += buffer.copy(evRoot, 0, offset, offset + nodeSize);
+ const evRoot = Buffer.allocUnsafe(this.hashDigestSize);
+ offset += buffer.copy(evRoot, 0, offset, offset + this.hashDigestSize);
const evDepth = buffer.readUInt8(offset);
offset += 1;
- const evNodeInfo = utils.readMatrix(buffer, offset, nodeSize);
+ const evNodeInfo = utils.readMatrix(buffer, offset, this.hashDigestSize, this.hashDigestSize);
offset = evNodeInfo.offset;
// lcProof
- const lcRoot = Buffer.allocUnsafe(nodeSize);
- offset += buffer.copy(lcRoot, 0, offset, offset + nodeSize);
+ const lcRoot = Buffer.allocUnsafe(this.hashDigestSize);
+ offset += buffer.copy(lcRoot, 0, offset, offset + this.hashDigestSize);
const lcDepth = buffer.readUInt8(offset);
offset += 1;
- const lcNodeInfo = utils.readMatrix(buffer, offset, nodeSize);
+ const lcNodeInfo = utils.readMatrix(buffer, offset, this.fieldElementSize, this.hashDigestSize);
offset = lcNodeInfo.offset;

@@ -105,11 +103,11 @@ // ldProof

for (let i = 0; i < componentCount; i++) {
- let columnRoot = Buffer.allocUnsafe(nodeSize);
- offset += buffer.copy(columnRoot, 0, offset, offset + nodeSize);
- let columnProofInfo = utils.readMerkleProof(buffer, offset, nodeSize);
+ let columnRoot = Buffer.allocUnsafe(this.hashDigestSize);
+ offset += buffer.copy(columnRoot, 0, offset, offset + this.hashDigestSize);
+ let columnProofInfo = utils.readMerkleProof(buffer, offset, this.fieldElementSize, this.hashDigestSize);
offset = columnProofInfo.offset;
- let polyProofInfo = utils.readMerkleProof(buffer, offset, nodeSize);
+ let polyProofInfo = utils.readMerkleProof(buffer, offset, this.fieldElementSize, this.hashDigestSize);
offset = polyProofInfo.offset;
components[i] = { columnRoot, columnProof: columnProofInfo.proof, polyProof: polyProofInfo.proof };
}
- const remainderInfo = utils.readArray(buffer, offset, nodeSize);
+ const remainderInfo = utils.readArray(buffer, offset, this.fieldElementSize);
offset = remainderInfo.offset;

@@ -116,0 +114,0 @@ // build and return the proof

@@ -15,3 +15,6 @@ "use strict";

const MAX_FRI_QUERY_COUNT = 64;
- const HASH_ALGORITHMS = ['sha256', 'blake2s256', 'wasmBlake2s256'];
+ const WASM_PAGE_SIZE = 65536; // 64 KB
+ const DEFAULT_INITIAL_MEMORY = 32 * 2 ** 20; // 32 MB
+ const DEFAULT_MAXIMUM_MEMORY = 2 * 2 ** 30 - WASM_PAGE_SIZE; // 2 GB - one page
+ const HASH_ALGORITHMS = ['sha256', 'blake2s256'];
const DEFAULT_HASH_ALGORITHM = 'sha256';

@@ -28,8 +31,24 @@ // CLASS DEFINITION

throw new TypeError('Source script cannot be an empty string');
- const vOptions = validateSecurityOptions(security);
- this.air = air_script_1.parseScript(source, undefined, { extensionFactor: vOptions.extensionFactor, wasmOptions: optimization });
- this.indexGenerator = new components_1.QueryIndexGenerator(this.air.extensionFactor, vOptions);
- this.hashAlgorithm = vOptions.hashAlgorithm;
- this.ldProver = new components_1.LowDegreeProver(this.air.field, this.indexGenerator, this.hashAlgorithm);
- this.serializer = new Serializer_1.Serializer(this.air);
+ const sOptions = validateSecurityOptions(security);
+ if (optimization) {
+ const wasmOptions = buildWasmOptions(optimization);
+ // instantiate AIR object
+ this.air = air_script_1.parseScript(source, undefined, { extensionFactor: sOptions.extensionFactor, wasmOptions });
+ if (!this.air.field.isOptimized) {
+ console.warn(`WARNING: WebAssembly optimization is not available for the specified field`);
+ }
+ // instantiate Hash object
+ const wasmOptions2 = buildWasmOptions(optimization); // TODO: use the same options as for AIR
+ this.hash = merkle_1.createHash(sOptions.hashAlgorithm, wasmOptions2);
+ if (!this.hash.isOptimized) {
+ console.warn(`WARNING: WebAssembly optimization is not available for ${sOptions.hashAlgorithm} hash algorithm`);
+ }
+ }
+ else {
+ this.air = air_script_1.parseScript(source, undefined, { extensionFactor: sOptions.extensionFactor });
+ this.hash = merkle_1.createHash(sOptions.hashAlgorithm, false);
+ }
+ this.indexGenerator = new components_1.QueryIndexGenerator(this.air.extensionFactor, sOptions);
+ this.ldProver = new components_1.LowDegreeProver(this.air.field, this.indexGenerator, this.hash);
+ this.serializer = new Serializer_1.Serializer(this.air, this.hash.digestSize);
this.logger = logger || new utils_1.Logger();

@@ -91,10 +110,6 @@ }

// 8 ----- build merkle tree for evaluations of P(x) and S(x)
- const hash = merkle_1.getHashFunction(this.hashAlgorithm);
- const hashedEvaluations = new Array(evaluationDomainSize);
- for (let i = 0; i < evaluationDomainSize; i++) {
- let v = this.serializer.mergeValues(pEvaluations, context.sEvaluations, i);
- hashedEvaluations[i] = hash(v);
- }
+ const eVectors = [...this.air.field.matrixRowsToVectors(pEvaluations), ...context.sEvaluations];
+ const hashedEvaluations = this.hash.mergeVectorRows(eVectors);
this.logger.log(label, 'Serialized evaluations of P(x) and S(x) polynomials');
- const eTree = merkle_1.MerkleTree.create(hashedEvaluations, this.hashAlgorithm);
+ const eTree = merkle_1.MerkleTree.create(hashedEvaluations, this.hash);
this.logger.log(label, 'Built evaluation merkle tree');

@@ -116,5 +131,3 @@ // 9 ----- spot check evaluation tree at pseudo-random positions

// 11 ----- Compute low-degree proof
- const hashDigestSize = merkle_1.getHashDigestSize(this.hashAlgorithm);
- const lEvaluations2 = utils_1.vectorToBuffers(lEvaluations, hashDigestSize);
- const lTree = merkle_1.MerkleTree.create(lEvaluations2, this.hashAlgorithm);
+ const lTree = merkle_1.MerkleTree.create(lEvaluations, this.hash);
this.logger.log(label, 'Built liner combination merkle tree');

@@ -171,3 +184,2 @@ const lcProof = lTree.proveBatch(positions);

const hashedEvaluations = new Array(augmentedPositions.length);
- const hash = merkle_1.getHashFunction(this.hashAlgorithm);
for (let i = 0; i < proof.values.length; i++) {

@@ -179,3 +191,3 @@ let mergedEvaluations = proof.values[i];

sEvaluations.set(position, s);
- hashedEvaluations[i] = hash(mergedEvaluations);
+ hashedEvaluations[i] = this.hash.digest(mergedEvaluations);
}

@@ -190,3 +202,3 @@ this.logger.log(label, `Decoded evaluation spot checks`);

};
- if (!merkle_1.MerkleTree.verifyBatch(eRoot, augmentedPositions, evProof, this.hashAlgorithm)) {
+ if (!merkle_1.MerkleTree.verifyBatch(eRoot, augmentedPositions, evProof, this.hash)) {
throw new StarkError_1.StarkError(`Verification of evaluation Merkle proof failed`);

@@ -230,9 +242,8 @@ }

try {
- const hashDigestSize = merkle_1.getHashDigestSize(this.hashAlgorithm);
const lcProof = {
- values: utils_1.vectorToBuffers(this.air.field.newVectorFrom(lcValues), hashDigestSize),
+ values: utils_1.bigIntsToBuffers(lcValues, this.air.field.elementSize),
nodes: proof.lcProof.nodes,
depth: proof.lcProof.depth
};
- if (!merkle_1.MerkleTree.verifyBatch(proof.lcProof.root, positions, lcProof, this.hashAlgorithm)) {
+ if (!merkle_1.MerkleTree.verifyBatch(proof.lcProof.root, positions, lcProof, this.hash)) {
throw new StarkError_1.StarkError(`Verification of linear combination Merkle proof failed`);

@@ -254,10 +265,10 @@ }

sizeOf(proof) {
- const size = utils_1.sizeOf(proof, this.hashAlgorithm);
+ const size = utils_1.sizeOf(proof, this.hash.digestSize);
return size.total;
}
serialize(proof) {
- return this.serializer.serializeProof(proof, this.hashAlgorithm);
+ return this.serializer.serializeProof(proof);
}
parse(buffer) {
- return this.serializer.parseProof(buffer, this.hashAlgorithm);
+ return this.serializer.parseProof(buffer);
}

@@ -298,2 +309,18 @@ // HELPER METHODS

}
+ function buildWasmOptions(options) {
+ if (typeof options === 'boolean') {
+ return {
+ memory: new WebAssembly.Memory({
+ initial: Math.ceil(DEFAULT_INITIAL_MEMORY / WASM_PAGE_SIZE),
+ maximum: Math.ceil(DEFAULT_MAXIMUM_MEMORY / WASM_PAGE_SIZE)
+ })
+ };
+ }
+ else {
+ const initialMemory = Math.ceil((options.initialMemory || DEFAULT_INITIAL_MEMORY) / WASM_PAGE_SIZE);
+ const maximumMemory = Math.ceil((options.maximumMemory || DEFAULT_MAXIMUM_MEMORY) / WASM_PAGE_SIZE);
+ const memory = new WebAssembly.Memory({ initial: initialMemory, maximum: maximumMemory });
+ return { memory };
+ }
+ }
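The page-size arithmetic in `buildWasmOptions` can be checked in isolation. This sketch reproduces the constants from the diff and the `Math.ceil` conversion from byte counts to 64 KB WASM pages; the helper name `buildMemoryDescriptor` is illustrative, not part of the package:

```javascript
// Constants as defined in the diff above.
const WASM_PAGE_SIZE = 65536;                                // 64 KB per WASM page
const DEFAULT_INITIAL_MEMORY = 32 * 2 ** 20;                 // 32 MB
const DEFAULT_MAXIMUM_MEMORY = 2 * 2 ** 30 - WASM_PAGE_SIZE; // 2 GB minus one page

// WebAssembly.Memory takes page counts, so byte counts are rounded up to pages.
function toPages(bytes) {
  return Math.ceil(bytes / WASM_PAGE_SIZE);
}

// optimization: `true` selects the defaults; an object may override the byte counts.
function buildMemoryDescriptor(options) {
  const initialMemory = (typeof options === 'boolean')
    ? DEFAULT_INITIAL_MEMORY
    : (options.initialMemory || DEFAULT_INITIAL_MEMORY);
  const maximumMemory = (typeof options === 'boolean')
    ? DEFAULT_MAXIMUM_MEMORY
    : (options.maximumMemory || DEFAULT_MAXIMUM_MEMORY);
  return { initial: toPages(initialMemory), maximum: toPages(maximumMemory) };
}

console.log(buildMemoryDescriptor(true)); // { initial: 512, maximum: 32767 }
```

The 2 GB cap sits one page below 2<sup>31</sup> bytes, which keeps the maximum page count (32767) within a signed 16-bit-friendly range and avoids requesting a full 2 GB reservation.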
function validateAssertions(trace, assertions) {

@@ -300,0 +327,0 @@ const registers = trace.rowCount;

@@ -20,2 +20,5 @@ "use strict";

exports.inline = inliners;
+ // CONSTANTS
+ // ================================================================================================
+ const MASK_64B = 0xffffffffffffffffn;
// PUBLIC FUNCTIONS

@@ -32,24 +35,25 @@ // ================================================================================================

exports.isPowerOf2 = isPowerOf2;
- function vectorToBuffers(values, size) {
+ function buffersToBigInts(values) {
const result = new Array(values.length);
- if (values.elementSize > size) {
- throw Error('Cannot convert vector to buffer: vector elements are too large');
- }
for (let i = 0; i < values.length; i++) {
- let buffer = Buffer.alloc(size);
- values.copyValue(i, buffer, 0);
- result[i] = buffer;
+ let buffer = values[i];
+ result[i] = readBigInt(buffer, 0, buffer.byteLength);
}
return result;
}
- exports.vectorToBuffers = vectorToBuffers;
- function buffersToBigInts(values) {
+ exports.buffersToBigInts = buffersToBigInts;
+ function bigIntsToBuffers(values, size) {
const result = new Array(values.length);
- for (let i = 0; i < values.length; i++) {
- let buffer = values[i];
- result[i] = readBigInt(buffer, 0, buffer.byteLength);
+ const limbCount = size >> 3;
+ for (let i = 0; i < result.length; i++) {
+ let offset = 0, value = values[i], buffer = Buffer.allocUnsafe(size);
+ for (let limb = 0; limb < limbCount; limb++, offset += 8) {
+ buffer.writeBigUInt64LE(value & MASK_64B, offset);
+ value = value >> 64n;
+ }
+ result[i] = buffer;
}
return result;
}
- exports.buffersToBigInts = buffersToBigInts;
+ exports.bigIntsToBuffers = bigIntsToBuffers;
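The new `bigIntsToBuffers` packs each field element into little-endian 64-bit limbs, `size` bytes per value; `buffersToBigInts` inverts this via `readBigInt`. A self-contained round trip — `bufferToBigInt` here is an assumed inverse written from the limb layout above, since `readBigInt`'s body is not shown in the diff:

```javascript
const MASK_64B = 0xffffffffffffffffn;

// As in the diff: write `size >> 3` little-endian 64-bit limbs per value.
function bigIntsToBuffers(values, size) {
  const result = new Array(values.length);
  const limbCount = size >> 3;
  for (let i = 0; i < values.length; i++) {
    let offset = 0, value = values[i];
    const buffer = Buffer.allocUnsafe(size);
    for (let limb = 0; limb < limbCount; limb++, offset += 8) {
      buffer.writeBigUInt64LE(value & MASK_64B, offset); // lowest limb first
      value = value >> 64n;
    }
    result[i] = buffer;
  }
  return result;
}

// Assumed inverse: fold limbs back together from the highest offset down.
function bufferToBigInt(buffer) {
  let value = 0n;
  for (let offset = buffer.byteLength - 8; offset >= 0; offset -= 8) {
    value = (value << 64n) | buffer.readBigUInt64LE(offset);
  }
  return value;
}

const v = (1n << 100n) + 7n;
const [buf] = bigIntsToBuffers([v], 16); // two 64-bit limbs
// bufferToBigInt(buf) recovers v
```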
function readBigInt(buffer, offset, elementSize) {

@@ -56,0 +60,0 @@ const blocks = elementSize >> 3;

@@ -6,5 +6,5 @@ "use strict";

// ================================================================================================
- function writeMerkleProof(buffer, offset, proof, nodeSize) {
+ function writeMerkleProof(buffer, offset, proof, leafSize) {
offset = writeArray(buffer, offset, proof.values);
- offset = writeMatrix(buffer, offset, proof.nodes);
+ offset = writeMatrix(buffer, offset, proof.nodes, leafSize);
offset = buffer.writeUInt8(proof.depth, offset);

@@ -14,6 +14,6 @@ return offset;

exports.writeMerkleProof = writeMerkleProof;
- function readMerkleProof(buffer, offset, nodeSize) {
- const valuesInfo = readArray(buffer, offset, nodeSize);
+ function readMerkleProof(buffer, offset, leafSize, nodeSize) {
+ const valuesInfo = readArray(buffer, offset, leafSize);
offset = valuesInfo.offset;
- const nodesInfo = readMatrix(buffer, offset, nodeSize);
+ const nodesInfo = readMatrix(buffer, offset, leafSize, nodeSize);
offset = nodesInfo.offset;

@@ -54,8 +54,14 @@ const depth = buffer.readUInt8(offset);

// ================================================================================================
- function writeMatrix(buffer, offset, matrix) {
+ function writeMatrix(buffer, offset, matrix, leafSize) {
// 1 byte for the number of columns (max 256 written as 0)
offset = buffer.writeUInt8(matrix.length === sizeof_1.MAX_ARRAY_LENGTH ? 0 : matrix.length, offset);
- // then write lengths of each column (1 byte each, max 255)
+ // then write lengths and value type of each column (1 byte each, max 255)
for (let i = 0; i < matrix.length; i++) {
- offset = buffer.writeUInt8(matrix[i].length, offset);
+ let column = matrix[i];
+ let length = column.length;
+ // column type is stored as least significant bit
+ let type = (length > 0 && column[0].byteLength === leafSize)
+ ? 1 /* leaf */
+ : 0 /* node */;
+ offset = buffer.writeUInt8((length << 1) | type, offset);
}

@@ -72,12 +78,19 @@ // then write the actual values

exports.writeMatrix = writeMatrix;
- function readMatrix(buffer, offset, elementSize) {
+ function readMatrix(buffer, offset, leafSize, nodeSize) {
const columnCount = buffer.readUInt8(offset) || sizeof_1.MAX_ARRAY_LENGTH; // 0 means 256
offset += 1;
const matrix = new Array(columnCount);
+ const columnTypes = new Array(columnCount);
for (let i = 0; i < columnCount; i++, offset += 1) {
- matrix[i] = new Array(buffer.readUInt8(offset));
+ let lengthAndType = buffer.readUInt8(offset);
+ matrix[i] = new Array(lengthAndType >>> 1);
+ columnTypes[i] = lengthAndType & 1;
}
+ let elementSize;
for (let i = 0; i < columnCount; i++) {
let column = matrix[i];
+ // set first element type based on column type
+ let firstElementSize = columnTypes[i] === 1 /* leaf */ ? leafSize : nodeSize;
for (let j = 0; j < column.length; j++) {
+ elementSize = (j === 0) ? firstElementSize : nodeSize;
column[j] = Buffer.allocUnsafe(elementSize);

@@ -84,0 +97,0 @@ offset += buffer.copy(column[j], 0, offset, offset + elementSize);
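The new matrix format packs each column's length and a leaf/node flag into a single header byte — length in the upper 7 bits, value type in the least significant bit — which is why `readMatrix` now needs both `leafSize` and `nodeSize`. A minimal sketch of the byte packing (helper names are illustrative, not the package's API):

```javascript
// Pack a column header: length in bits 7..1, leaf/node flag in bit 0,
// mirroring `(length << 1) | type` from writeMatrix above.
function packColumnHeader(length, isLeaf) {
  return (length << 1) | (isLeaf ? 1 : 0);
}

// Inverse, mirroring `lengthAndType >>> 1` and `lengthAndType & 1` from readMatrix.
function unpackColumnHeader(byte) {
  return { length: byte >>> 1, isLeaf: (byte & 1) === 1 };
}

const header = packColumnHeader(5, true); // 0b1011 = 11
```

Since the header is one byte, the length field has only 7 bits, which is why the new `MAX_MATRIX_COLUMN_LENGTH` limit of 127 appears in `sizeof.js` below.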

"use strict";
Object.defineProperty(exports, "__esModule", { value: true });
const merkle_1 = require("@guildofweavers/merkle");
// MODULE VARIABLES
// ================================================================================================
exports.MAX_ARRAY_LENGTH = 256;
+ exports.MAX_MATRIX_COLUMN_LENGTH = 127;
// PUBLIC FUNCTIONS
// ================================================================================================
- function sizeOf(proof, hashAlgorithm) {
- const nodeSize = merkle_1.getHashDigestSize(hashAlgorithm);
+ function sizeOf(proof, hashDigestSize) {
let size = 0;

@@ -19,9 +18,9 @@ // evData

// evProof
- let evProof = nodeSize; // root
- evProof += sizeOfMatrix(proof.evProof.nodes, nodeSize);
+ let evProof = hashDigestSize; // root
+ evProof += sizeOfMatrix(proof.evProof.nodes);
evProof += 1; // evaluation proof depth
size += evProof;
// lcProof
- let lcProof = nodeSize; // root;
- lcProof += sizeOfMatrix(proof.lcProof.nodes, nodeSize);
+ let lcProof = hashDigestSize; // root;
+ lcProof += sizeOfMatrix(proof.lcProof.nodes);
lcProof += 1; // linear combination proof depth

@@ -33,7 +32,7 @@ size += lcProof;

let component = proof.ldProof.components[i];
- ldProof += nodeSize; // column root
- ldProof += sizeOfMerkleProof(component.columnProof, nodeSize);
- ldProof += sizeOfMerkleProof(component.polyProof, nodeSize);
+ ldProof += hashDigestSize; // column root
+ ldProof += sizeOfMerkleProof(component.columnProof);
+ ldProof += sizeOfMerkleProof(component.polyProof);
}
- ldProof += sizeOfArray(proof.ldProof.remainder, nodeSize);
+ ldProof += sizeOfArray(proof.ldProof.remainder);
size += ldProof;

@@ -43,6 +42,6 @@ return { evData, evProof, lcProof, ldProof, total: size };

exports.sizeOf = sizeOf;
- function sizeOfMerkleProof(proof, nodeSize) {
+ function sizeOfMerkleProof(proof) {
let size = 0;
- size += sizeOfArray(proof.values, nodeSize);
- size += sizeOfMatrix(proof.nodes, nodeSize);
+ size += sizeOfArray(proof.values);
+ size += sizeOfMatrix(proof.nodes);
size += 1; // tree depth

@@ -54,11 +53,16 @@ return size;

// ================================================================================================
- function sizeOfArray(array, elementSize) {
- if (array.length > exports.MAX_ARRAY_LENGTH) {
+ function sizeOfArray(array) {
+ if (array.length === 0) {
+ throw new Error(`Array cannot be zero-length`);
+ }
+ else if (array.length > exports.MAX_ARRAY_LENGTH) {
throw new Error(`Array length (${array.length}) cannot exceed ${exports.MAX_ARRAY_LENGTH}`);
}
let size = 1; // 1 byte for array length
- size += array.length * elementSize;
+ for (let i = 0; i < array.length; i++) {
+ size += array[i].length;
+ }
return size;
}
- function sizeOfMatrix(matrix, elementSize) {
+ function sizeOfMatrix(matrix) {
if (matrix.length > exports.MAX_ARRAY_LENGTH) {

@@ -68,9 +72,12 @@ throw new Error(`Matrix column count (${matrix.length}) cannot exceed ${exports.MAX_ARRAY_LENGTH}`);

let size = 1; // 1 byte for number of columns
- size += matrix.length; // 1 byte for length of each column
+ size += matrix.length; // 1 byte for length and type of each column
for (let i = 0; i < matrix.length; i++) {
- let columnLength = matrix[i].length;
- if (columnLength >= exports.MAX_ARRAY_LENGTH) {
- throw new Error(`Matrix column length (${columnLength}) cannot exceed ${exports.MAX_ARRAY_LENGTH - 1}`);
+ let column = matrix[i];
+ let columnLength = column.length;
+ if (columnLength >= exports.MAX_MATRIX_COLUMN_LENGTH) {
+ throw new Error(`Matrix column length (${columnLength}) cannot exceed ${exports.MAX_MATRIX_COLUMN_LENGTH}`);
}
- size += (columnLength * elementSize);
+ for (let j = 0; j < columnLength; j++) {
+ size += column[j].length;
+ }
}

@@ -77,0 +84,0 @@ return size;
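With variable-size leaves, `sizeOfMatrix` can no longer multiply a column length by a fixed element size; it sums each buffer's own byte length. A self-contained version of the new accounting, using the limits from the diff:

```javascript
// Limits as defined in the diff above.
const MAX_ARRAY_LENGTH = 256;
const MAX_MATRIX_COLUMN_LENGTH = 127;

function sizeOfMatrix(matrix) {
  if (matrix.length > MAX_ARRAY_LENGTH) {
    throw new Error(`Matrix column count (${matrix.length}) cannot exceed ${MAX_ARRAY_LENGTH}`);
  }
  let size = 1;          // 1 byte for number of columns
  size += matrix.length; // 1 byte for length and type of each column
  for (const column of matrix) {
    if (column.length >= MAX_MATRIX_COLUMN_LENGTH) {
      throw new Error(`Matrix column length (${column.length}) cannot exceed ${MAX_MATRIX_COLUMN_LENGTH}`);
    }
    for (const buf of column) size += buf.length; // each buffer contributes its own size
  }
  return size;
}

const matrix = [[Buffer.alloc(32), Buffer.alloc(32)], [Buffer.alloc(16)]];
console.log(sizeOfMatrix(matrix)); // 1 + 2 + (32 + 32 + 16) = 83
```

This is what lets the same serialized matrix mix field-element leaves and hash-digest nodes of different sizes.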

{
"name": "@guildofweavers/genstark",
- "version": "0.5.2",
+ "version": "0.5.3",
"description": "zk-STARK generation library",

@@ -32,4 +32,4 @@ "main": "index.js",

"@guildofweavers/galois": "0.4.x",
- "@guildofweavers/merkle": "0.2.x"
+ "@guildofweavers/merkle": "0.3.x"
}
}

@@ -56,31 +56,31 @@ # genSTARK

Starting STARK computation
- Set up evaluation context in 6 ms
- Generated execution trace in 41 ms
+ Set up evaluation context in 7 ms
+ Generated execution trace in 39 ms
Converted execution trace into polynomials and low-degree extended them in 47 ms
Computed Q(x) polynomials in 254 ms
Computed Z(x) polynomial in 3 ms
- Computed Z(x) inverses in 13 ms
- Computed D(x) polynomials in 4 ms
- Computed B(x) polynomials in 46 ms
- Serialized evaluations of P(x) and S(x) polynomials in 152 ms
- Built evaluation merkle tree in 75 ms
- Computed 80 evaluation spot checks in 3 ms
- Computed random linear combination of evaluations in 34 ms
- Built liner combination merkle tree in 116 ms
- Computed low-degree proof in 149 ms
- STARK computed in 949 ms
+ Computed Z(x) inverses in 14 ms
+ Computed D(x) polynomials in 5 ms
+ Computed B(x) polynomials in 45 ms
+ Serialized evaluations of P(x) and S(x) polynomials in 41 ms
+ Built evaluation merkle tree in 48 ms
+ Computed 80 evaluation spot checks in 5 ms
+ Computed random linear combination of evaluations in 31 ms
+ Built liner combination merkle tree in 46 ms
+ Computed low-degree proof in 118 ms
+ STARK computed in 707 ms
--------------------
- Proof serialized in 8 ms; size: 211.99 KB
+ Proof serialized in 10 ms; size: 181.3 KB
--------------------
- Proof parsed in 5 ms
+ Proof parsed in 9 ms
--------------------
Starting STARK verification
- Set up evaluation context in 2 ms
+ Set up evaluation context in 1 ms
Computed positions for evaluation spot checks in 2 ms
- Decoded evaluation spot checks in 1 ms
+ Decoded evaluation spot checks in 2 ms
Verified evaluation merkle proof in 5 ms
- Verified low-degree proof in 46 ms
- Verified transition and boundary constraints in 16 ms
- Verified liner combination merkle proof in 2 ms
- STARK verified in 78 ms
+ Verified low-degree proof in 36 ms
+ Verified transition and boundary constraints in 17 ms
+ Verified liner combination merkle proof in 4 ms
+ STARK verified in 69 ms
--------------------

@@ -106,5 +106,7 @@ ```

| security? | An optional property specifying [security parameters](#Security-options) for the STARK. |
- | optimization? | An optional property specifying [WASM optimization parameters](#Optimization-options) for the STARK. Set this to `null` to turn WASM-optimization off. If you provide this parameter to STARKS over fields that don't support WASM-optimization, an error will be thrown.|
+ | optimization? | An optional property specifying [WASM optimization parameters](#Optimization-options) for the STARK. You can also set this to `true` to turn on WASM optimization with default parameters. |
| logger? | An optional logger. The default logger prints output to the console, but it can be replaced with anything that complies with the Logger interface. |
**Note:** WASM-optimization is available for certain [finite fields](https://github.com/GuildOfWeavers/galois#wasm-optimization) and [hash functions](https://github.com/GuildOfWeavers/merkle#hash). If the field or the hash function you are using does not support WASM-optimization, a warning will be printed and the JavaScript implementation will be used instead. In general, WASM optimization can speed up STARK proof generation by 2x - 5x.
### Security options

@@ -118,6 +120,4 @@ Security options parameter should have the following form:

| friQueryCount? | Number of queries of the columns of low degree proof to include into the proof. This property is optional; the default is 40; the max is 64. |
- | hashAlgorithm? | Hash algorithm to use when building Merkle trees for the proof. Currently, can be one of the following values: `sha256`, `blake2s256`, or `wasmBlake2s256`. This property is optional; the default is `sha256`. |
+ | hashAlgorithm? | Hash algorithm to use when building Merkle trees for the proof. Currently, can be one of the following values: `sha256`, `blake2s256`. This property is optional; the default is `sha256`. |
- **Note:** `wasmBlake2s256` is an experimental implementation of Blake2s algorithm in WebAssembly. On small inputs, it performs about 3x better than Node's native Blake2s implementation (and about 3.5x better than SHA256).
### Optimization options

@@ -128,4 +128,4 @@ Optimization options parameter should have the following form:

| ------------------ | ----------- |
- | initialMemory? | Initial number of bytes to allocate for WASM optimization. |
- | maximumMemory? | Maximum number of bytes to allocate for WASM optimization. |
+ | initialMemory? | Initial number of bytes to allocate for WASM optimization; the default is 32 MB. |
+ | maximumMemory? | Maximum number of bytes to allocate for WASM optimization; the default is 2 GB. |
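Putting the option tables together, a hypothetical configuration might look like this. The field values are examples; only `extensionFactor`, `friQueryCount`, `hashAlgorithm`, `initialMemory`, and `maximumMemory` are taken from the code and tables above, and the `Stark` constructor itself comes from the package, so it is left commented out here:

```javascript
// Security options (see the Security options table; extensionFactor appears in
// the constructor code above).
const security = {
  extensionFactor: 16,
  friQueryCount: 24,          // default is 40, max is 64
  hashAlgorithm: 'blake2s256' // 'sha256' (default) or 'blake2s256'
};

// Optimization options: either an object overriding the memory defaults...
const optimization = {
  initialMemory: 64 * 2 ** 20, // 64 MB instead of the 32 MB default
  maximumMemory: 2 ** 30       // cap at 1 GB instead of the ~2 GB default
};

// ...or simply `true` to accept the defaults:
// const myStark = new Stark(source, security, true);
// const myStark = new Stark(source, security, optimization);
```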

@@ -215,12 +215,12 @@ ## Generating proofs

| ------------------- | :--------: | :----: | :-------: | :------------: | :--------: | :--------: |
- | MiMC | 128 bits | 3 | 1 | 2<sup>13</sup> | 1 sec | 212 KB |
- | MiMC | 128 bits | 3 | 1 | 2<sup>17</sup> | 13.5 sec | 377 KB |
+ | MiMC* | 128 bits | 3 | 1 | 2<sup>13</sup> | 750 ms | 181 KB |
+ | MiMC* | 128 bits | 3 | 1 | 2<sup>17</sup> | 11 sec | 333 KB |
| MiMC | 256 bits | 3 | 1 | 2<sup>13</sup> | 3.8 sec | 213 KB |
| MiMC | 256 bits | 3 | 1 | 2<sup>17</sup> | 64 sec | 384 KB |
- | Merkle Proof (d=8) | 128 bits | 4 | 8 | 2<sup>8</sup> | 650 ms | 88 KB |
- | Merkle Proof (d=16) | 128 bits | 4 | 8 | 2<sup>9</sup> | 1.3 sec | 111 KB |
+ | Merkle Proof (d=8)* | 128 bits | 4 | 8 | 2<sup>8</sup> | 600 ms | 75 KB |
+ | Merkle Proof (d=16)* | 128 bits | 4 | 8 | 2<sup>9</sup> | 1.2 sec | 96 KB |
Merkle proofs are based on a modified version of [Rescue](/examples/rescue) hash function, and in addition to 8 state registers require 1 public input register and 1 secret input register.
At this point, 128-bit fields are able to take advantage of WASM-based optimization, which makes arithmetic operations significantly faster (e.g. more than a 3x improvement over their native JavaScript counterparts). Nevertheless, there is still an opportunity to improve proof time by at least 10x by moving more operations out of JavaScript and leveraging SIMD and multithreading.
**\*** Takes advantage of WebAssembly optimization.

@@ -227,0 +227,0 @@ # References
