bulksearch
When it comes to raw speed, BulkSearch outperforms every other searching library and also provides flexible search capabilities like multi-word, phonetic or partial matching. Its design is loosely based on how an HDD manages its filesystem. Adding, updating or removing items is as fast as searching for them, but requires some additional memory. When your index does not need to be updated continuously, FlexSearch may be a better choice. BulkSearch also provides an asynchronous processing model to perform index updates in the background.
Benchmark: https://jsperf.com/bulksearch
<html>
<head>
<script src="js/bulksearch.min.js"></script>
</head>
...
Note: Use bulksearch.min.js for production and bulksearch.js for development.
Use the latest version from CDN:
<script src="https://cdn.rawgit.com/nextapps-de/bulksearch/master/bulksearch.min.js"></script>
Install via npm:
npm install bulksearch
In your code, include it as follows:
var BulkSearch = require("bulksearch");
Or pass in options when requiring:
var index = require("bulksearch").create({/* options */});
AMD
var BulkSearch = require("./bulksearch.js");
Global methods:
Index methods:
BulkSearch.create(<options>)
var index = new BulkSearch();
Alternatively, you can also use:
var index = BulkSearch.create();
var index = new BulkSearch({
// default values:
type: "integer",
encode: "icase",
boolean: "and",
size: 4000,
multi: false,
strict: false,
ordered: false,
paging: false,
async: false,
cache: false
});
Read more: Phonetic Search, Phonetic Comparison, Improve Memory Usage
Index.add(id, string)
index.add(10025, "John Doe");
Index.search(string|options, <limit|page>, <callback>)
index.search("John");
Limit the result:
index.search("John", 10);
Perform queries asynchronously:
index.search("John", function(result){
// array of results
});
index.search({
query: "John",
page: '1:1234',
limit: 10,
callback: function(result){
// async
}
});
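For codebases that prefer promises over the callback style shown above, the asynchronous API can be wrapped in a small helper. This is only a sketch, not part of the BulkSearch API; `searchAsync` is a hypothetical name, and `index` can be any object exposing `search(query, callback)`:

```javascript
// Wrap a callback-style search in a Promise.
// Works with any object exposing search(query, callback).
function searchAsync(index, query) {
    return new Promise(function (resolve) {
        index.search(query, function (results) {
            resolve(results);
        });
    });
}
```

Usage: `searchAsync(index, "John").then(function (results) { /* array of results */ });`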
Index.update(id, string)
index.update(10025, "Road Runner");
Index.remove(id)
index.remove(10025);
index.reset();
index.destroy();
Index.init(<options>)
Note: Re-initialization will also destroy the old index!
Initialize (with same options):
index.init();
Initialize with new options:
index.init({
/* options */
});
BulkSearch.addMatcher({KEY: VALUE})
Add global matchers for all instances:
BulkSearch.addMatcher({
'ä': 'a', // replaces all 'ä' with 'a'
'ó': 'o',
'û': 'u'
});
Add private matchers for a specific instance:
index.addMatcher({
'ä': 'a', // replaces all 'ä' with 'a'
'ó': 'o',
'û': 'u'
});
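A matcher table is essentially a set of string replacements applied to terms before they are indexed or searched, so that variant spellings normalize to the same term. A minimal standalone sketch of that idea (not the library's actual internals; `applyMatchers` is a hypothetical helper):

```javascript
// Apply a matcher table to a string: every key is replaced by its value,
// so that e.g. "Bären" and "Baren" normalize to the same term.
function applyMatchers(str, matchers) {
    for (var key in matchers) {
        str = str.replace(new RegExp(key, "g"), matchers[key]);
    }
    return str;
}
```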
Define a private custom encoder during creation/initialization:
var index = new BulkSearch({
encode: function(str){
// do something with str ...
return str;
}
});
BulkSearch.register(name, encoder)
BulkSearch.register('whitespace', function(str){
return str.replace(/ /g, '');
});
Use global encoders:
var index = new BulkSearch({ encode: 'whitespace' });
Call the private encoder of an instance directly:
var encoded = index.encode("sample text");
Call a global encoder directly:
var encoded = BulkSearch.encode("whitespace", "sample text");
BulkSearch.register('mixed', function(str){
str = this.encode("icase", str); // built-in
str = this.encode("whitespace", str); // custom
return str;
});
BulkSearch.register('extended', function(str){
str = this.encode("custom", str);
// do something additional with str ...
return str;
});
index.info();
Returns information about the index, e.g.:
{
"bytes": 103600,
"chunks": 9,
"fragmentation": 0, // in %
"fragments": 0,
"id": 0,
"length": 7798,
"matchers": 0,
"size": 10000,
"status": false
}
Note: When the fragmentation value is about 50% or higher, you should consider optimizing the index.
Optimizing an index frees all fragmented memory and also rebuilds the index by scoring.
index.optimize();
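The relationship between removals, the fragmentation value reported by info(), and an optimize pass can be illustrated with a toy chunk store. This is only a conceptual sketch, not BulkSearch's actual internals:

```javascript
// Toy store: removals leave holes (fast), optimize() compacts them.
function ToyStore() {
    this.slots = []; // null marks a fragmented (dead) slot
}
ToyStore.prototype.add = function (value) {
    this.slots.push(value);
};
ToyStore.prototype.remove = function (value) {
    var i = this.slots.indexOf(value);
    if (i !== -1) this.slots[i] = null; // just mark as dead - no shifting
};
ToyStore.prototype.fragmentation = function () {
    // percentage of dead slots, like the "fragmentation" field of info()
    var dead = this.slots.filter(function (v) { return v === null; }).length;
    return this.slots.length ? Math.round((dead / this.slots.length) * 100) : 0;
};
ToyStore.prototype.optimize = function () {
    // rebuild without holes, freeing the fragmented memory
    this.slots = this.slots.filter(function (v) { return v !== null; });
};
```

Removing entries only marks slots as dead, which keeps removal fast but lets fragmentation grow until an optimize pass rebuilds the store.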
Note: Pagination can reduce query time by up to a factor of 100. Pagination is not yet bi-directional; it is only possible to page forward.
Enable pagination on initialization:
var index = BulkSearch.create({ paging: true });
Perform query and pass a limit (items per page):
index.search("John", 10);
The response will include a pagination object like this:
{
"page": "0:0",
"next": "1:16322",
"results": []
}
Explanation:
Field | Description |
---|---|
"page" | Includes the pointer to the current page. |
"next" | Includes the pointer to the next page. Whenever this field has the value null, there are no more pages left. |
"results" | Array of matched items. |
Fetch the next page by passing the pointer from the previous response:
index.search("John", {
page: "1:16322",
limit: 10
});
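Since paging is forward-only, collecting every match means walking pages via the next pointer until it becomes null. A small helper sketch (hypothetical, not part of the API; `search` is any function that takes a page pointer and returns a `{ page, next, results }` response as shown above):

```javascript
// Collect results from all pages by following "next" until it is null.
function fetchAllPages(search) {
    var all = [];
    var pointer = "0:0"; // pointer to the first page
    do {
        var response = search(pointer);
        all = all.concat(response.results);
        pointer = response.next;
    } while (pointer !== null);
    return all;
}
```

With a real index this could be called as `fetchAllPages(function (p) { return index.search("John", { page: p, limit: 10 }); })`.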
Option | Values | Description |
---|---|---|
type | "byte", "short", "integer", "float", "string" | The data type of passed IDs has to be specified on creation. It is recommended to use the lowest possible data range here, e.g. "short" when IDs are not higher than 65,535. |
encode | false, "icase", "simple", "advanced", "extra", function(string):string | The encoding type. Choose one of the built-ins or pass a custom encoding function. |
boolean | "and", "or" | The boolean model applied when comparing multiple words. Note: when using "or", the first word is still compared with "and". Example: a query with 3 words matches entries containing either words 1 & 2 or words 1 & 3. |
size | 2500 - 10000 | The chunk size. The best value depends on content length: short content (e.g. user names) is faster with a chunk size of 2,500, while longer text runs faster with a chunk size of 10,000. Note: It is recommended to use a minimum chunk size equal to the maximum content length to be indexed, to prevent fragmentation. |
depth | 0 - 6 | The depth of the register. It is recommended to use a value in relation to the number of stored indexes and the content length for an optimal performance-memory trade-off. Note: Increase this option carefully! |
multi | true, false | Enable multi-word processing. |
ordered | true, false | Multiple words have to appear in the same order as in the matched entry. |
strict | true, false | Matches have to start exactly with the query. |
cache | true, false | Enable caching. |
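As an example of reading the table, an index tuned for short user-name content with small numeric IDs might be configured like this (a sketch; the exact values are assumptions chosen to illustrate the options, not recommendations):

```javascript
var options = {
    type: "short",      // IDs stay below 65,535
    encode: "advanced", // phonetic + literal transformations
    boolean: "or",
    size: 2500,         // short content favors small chunks
    multi: true,        // allow multi-word queries
    cache: true
};
// var index = BulkSearch.create(options);
```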
Encoder | Description | Example | False Positives | Compression Level |
---|---|---|---|---|
false | Turn off encoding | Reference: "Björn-Phillipp Mayer", matches: "Phil" | no | no |
"icase" | Case-insensitive encoding | Reference: "Björn-Phillipp Mayer", matches: "phil" | no | no |
"simple" | Phonetic normalizations | Reference: "Björn-Phillipp Mayer", matches: "bjoern fillip" | no | ~ 3% |
"advanced" | Phonetic normalizations + literal transformations | Reference: "Björn-Phillipp Mayer", matches: "filip meier" | no | ~ 25% |
"extra" | Phonetic normalizations + Soundex transformations | Reference: "Björn-Phillipp Mayer", matches: "byorn mair" | yes | ~ 50% |
Reference String: "Björn-Phillipp Mayer"
Query | ElasticSearch | BulkSearch (iCase) | BulkSearch (Simple) | BulkSearch (Adv.) | BulkSearch (Extra) |
---|---|---|---|---|---|
björn | yes | yes | yes | yes | yes |
björ | no | yes | yes | yes | yes |
bjorn | no | no | yes | yes | yes |
bjoern | no | no | no | yes | yes |
philipp | no | no | no | yes | yes |
filip | no | no | no | yes | yes |
björnphillip | no | no | yes | yes | yes |
meier | no | no | no | yes | yes |
björn meier | no | no | no | yes | yes |
meier fhilip | no | no | no | yes | yes |
byorn mair | no | no | no | no | yes |
(false positives) | yes | no | no | no | yes |
Note: The data type of passed IDs has to be specified on creation. It is recommended to use the lowest possible data range here, e.g. use "short" when IDs are not higher than 65,535.
ID Type | Range of Values | Memory usage per ~ 100,000 indexed words |
---|---|---|
Byte | 0 - 255 | 4.5 Mb |
Short | 0 - 65,535 | 5.3 Mb |
Integer | 0 - 4,294,967,295 | 6.8 Mb |
Float | 0 - * (16 digits) | 10 Mb |
String | * (unlimited) | 28.2 Mb |
Author of BulkSearch: Thomas Wilkerling
License: Apache 2.0
Superfast, lightweight and read-write optimized full text search library.