fast-base64
Comparing version 0.1.5 to 0.1.6
package.json:

```diff
 {
   "name": "fast-base64",
   "description": "Fastest possible base64 encoding/decoding using WebAssembly",
-  "version": "0.1.5",
+  "version": "0.1.6",
   "author": "gregor <gregor.mitscha-baude@gmx.at>",
@@ -57,3 +57,3 @@ "license": "MIT",
   "scripts": {
-    "build": "mkdir -p dist && node build-lib.js",
+    "build": "node build-lib.js",
     "test": "npx chrode test-base64.js"
@@ -65,4 +65,5 @@ },
     "binaryen": "^101.0.0",
-    "chrode": "^0.2.7",
+    "chrode": "^0.2.8",
     "esbuild": "^0.12.15",
     "esbuild-plugin-inline-worker": "^0.1.0",
     "esbuild-plugin-wat": "^0.1.1",
@@ -69,0 +70,0 @@ "eslint": "^7.31.0",
```
README.md:

```diff
@@ -63,3 +63,3 @@ # fast-base64
-As far as I can tell, the added overhead of slicing up the input, messaging to the workers and back, and recombining the results is bigger than the gains in performing the actual calculation. Base64 in Wasm is simply already faster than some Browser-native functions that are involved, like `postMessage()`, `TextEncoder.encode()` and `Uint8Array.splice()`.
+As far as I can tell, the added overhead of slicing up the input, messaging to the workers and back, and recombining the results is bigger than the gains in performing the actual calculation.
@@ -66,0 +66,0 @@ ## Curious about Base64?
```
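The README change above trims a claim but keeps the core point: splitting the input across workers adds per-chunk costs that eat the Wasm speedup. A minimal sketch of that chunk-and-recombine shape, in plain JavaScript — `toBase64Chunked` is a hypothetical helper, and Node's `Buffer` stands in for the Wasm encoder (fast-base64's real worker code differs):

```javascript
// Sketch of the slice -> encode-per-chunk -> recombine pipeline discussed
// in the README. Each step adds work on top of the actual encoding:
// a subarray view per chunk, an intermediate string per chunk, and a
// final join. With real workers, each chunk would also pay a
// postMessage() round trip.
function toBase64Chunked(bytes, chunkSize = 1 << 16) {
  // Round the chunk size down to a multiple of 3, so every full chunk
  // encodes without padding and the per-chunk results concatenate
  // into the same string as encoding the whole input at once.
  const size = chunkSize - (chunkSize % 3);
  const parts = [];
  for (let i = 0; i < bytes.length; i += size) {
    const slice = bytes.subarray(i, i + size); // extra view per chunk
    // Stand-in for handing the chunk to a worker / Wasm encoder:
    parts.push(Buffer.from(slice).toString('base64'));
  }
  return parts.join(''); // recombination cost
}
```

The multiple-of-3 alignment matters: base64 encodes 3 input bytes into 4 output characters, so only chunks whose length is divisible by 3 can be encoded independently and concatenated without corrupting the result.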