sparql-engine
An open-source framework for building SPARQL query engines in JavaScript.
:warning: In Development :warning:
Installation

npm install --save sparql-engine
Getting started

The sparql-engine framework allows you to build a custom SPARQL query engine on top of any data storage system.

In short, to support SPARQL queries on top of your data storage system, you need to:

1. Implement a subclass of Graph, which provides access to the data storage system.
2. Gather your Graphs into a Dataset (using your own implementation or the default one).
3. Create a PlanBuilder and use it to execute SPARQL queries.

As a starting point, we provide you with two examples of integration: the N3 example and the LevelGraph example.
The first thing to do is to implement a subclass of the Graph abstract class. A Graph represents an RDF Graph and is responsible for inserting, deleting and searching for RDF triples in the database.

The main method to implement is Graph.find(triple), which is used by the framework to find RDF triples matching a triple pattern in the RDF Graph. This method must return an AsyncIterator, which will be consumed to find matching RDF triples. You can find an example of such an implementation in the N3 example.

Similarly, to support the SPARQL UPDATE protocol, you have to provide a graph that implements the Graph.insert(triple) and Graph.delete(triple) methods, which insert and delete an RDF triple from the graph, respectively. These methods must return Promises, which are fulfilled when the insertion/deletion operation is completed.

Finally, the sparql-engine framework also lets you customize how Basic Graph Patterns (BGPs) are evaluated against the RDF graph. By default, the engine uses an implementation based on the Graph.find method and the Index Nested Loop Join algorithm. However, if you wish to supply your own implementation for BGP evaluation, you just have to provide a graph with an evalBGP(triples) method. This method must return an AsyncIterator, like the Graph.find method. You can find an example of such an implementation in the LevelGraph example.
const { Graph } = require('sparql-engine')
class CustomGraph extends Graph {
/**
* Returns an iterator that finds RDF triples matching a triple pattern in the graph.
* @param {Object} triple - Triple pattern to find
* @param {string} triple.subject - Triple pattern's subject
* @param {string} triple.predicate - Triple pattern's predicate
* @param {string} triple.object - Triple pattern's object
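* @param {Object} options - Execution options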
* @return {AsyncIterator} An iterator which finds RDF triples matching a triple pattern
*/
find (triple, options) { /* ... */ }
/**
* Insert an RDF triple into the RDF Graph
* @param {Object} triple - RDF Triple to insert
* @param {string} triple.subject - RDF triple's subject
* @param {string} triple.predicate - RDF triple's predicate
* @param {string} triple.object - RDF triple's object
* @return {Promise} A Promise fulfilled when the insertion has been completed
*/
insert (triple) { /* ... */ }
/**
* Delete an RDF triple from the RDF Graph
* @param {Object} triple - RDF Triple to delete
* @param {string} triple.subject - RDF triple's subject
* @param {string} triple.predicate - RDF triple's predicate
* @param {string} triple.object - RDF triple's object
* @return {Promise} A Promise fulfilled when the deletion has been completed
*/
delete (triple) { /* ... */ }
}
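To make this more concrete, here is a minimal sketch of an in-memory implementation. It assumes triples are plain { subject, predicate, object } objects (as in the skeleton above) and that variables in a triple pattern are '?'-prefixed strings; it relies on the ArrayIterator class from the asynciterator npm package to produce the AsyncIterator the framework expects. Treat it as an illustration, not a production-ready backend.

const { Graph } = require('sparql-engine')
const { ArrayIterator } = require('asynciterator')

// A toy Graph backed by a plain JavaScript array of triples.
// Assumption: variables in triple patterns start with '?' and match any term.
class InMemoryGraph extends Graph {
  constructor () {
    super()
    this._triples = []
  }

  _matchTerm (pattern, term) {
    return pattern.startsWith('?') || pattern === term
  }

  find (triple, options) {
    // scan all triples and keep those matching the pattern
    const matches = this._triples.filter(t =>
      this._matchTerm(triple.subject, t.subject) &&
      this._matchTerm(triple.predicate, t.predicate) &&
      this._matchTerm(triple.object, t.object))
    return new ArrayIterator(matches)
  }

  insert (triple) {
    this._triples.push(triple)
    return Promise.resolve()
  }

  delete (triple) {
    this._triples = this._triples.filter(t =>
      !(t.subject === triple.subject &&
        t.predicate === triple.predicate &&
        t.object === triple.object))
    return Promise.resolve()
  }
}

A real backend would of course use indexes rather than a full scan, but the contract is the same: find returns an AsyncIterator of matching triples, while insert and delete return Promises.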
Once you have your subclass of Graph ready, you need to build a collection of RDF Graphs, called an RDF Dataset. A default implementation, HashMapDataset, is made available by the framework, but you can build your own by subclassing Dataset.
const { HashMapDataset } = require('sparql-engine')
const CustomGraph = require('./custom-graph') // import your Graph subclass (illustrative path)
const GRAPH_A_IRI = 'http://example.org#graph-a'
const GRAPH_B_IRI = 'http://example.org#graph-b'
const graph_a = new CustomGraph(/* ... */)
const graph_b = new CustomGraph(/* ... */)
// graph_a is the default RDF Graph of the dataset
const dataset = new HashMapDataset(GRAPH_A_IRI, graph_a)
// insert graph_b as a Named Graph
dataset.addNamedGraph(GRAPH_B_IRI, graph_b)
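Graphs registered in this way can then be populated through the Graph API shown earlier. For instance, assuming the { subject, predicate, object } triple format from the skeleton above, a hypothetical setup step could look like this:

// populate the default graph before querying (insert returns a Promise)
graph_a.insert({
  subject: 'http://example.org#Ann',
  predicate: 'http://www.w3.org/1999/02/22-rdf-syntax-ns#type',
  object: 'http://xmlns.com/foaf/0.1/Person'
}).then(() => {
  // the triple is now visible to SPARQL queries on the default graph
})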
Finally, to run a SPARQL query on your RDF dataset, you need to use the PlanBuilder class. It is responsible for parsing SPARQL queries and building a pipeline of iterators to evaluate them.
const { PlanBuilder } = require('sparql-engine')
// Get the name of all people in the Default Graph
const query = `
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
PREFIX foaf: <http://xmlns.com/foaf/0.1/>
SELECT ?name
WHERE {
?s a foaf:Person .
?s rdfs:label ?name .
}`
// Creates a plan builder for the RDF dataset
const builder = new PlanBuilder(dataset)
// Get an iterator to evaluate the query
const iterator = builder.build(query)
// Read results
iterator.on('data', b => console.log(b))
iterator.on('error', err => console.error(err))
iterator.on('end', () => {
console.log('Query evaluation complete!')
})
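The result iterator follows the data / error / end event protocol shown above, so it is easy to bridge to Promise-based code. As a sketch, here is a small helper (not part of the framework) that gathers all solution bindings into an array:

// Collect all query results into an array, wrapping the
// event-based iterator in a Promise (helper for illustration only)
function collectResults (iterator) {
  return new Promise((resolve, reject) => {
    const results = []
    iterator.on('data', bindings => results.push(bindings))
    iterator.on('error', reject)
    iterator.on('end', () => resolve(results))
  })
}

// usage: collectResults(builder.build(query)).then(rows => console.log(rows))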
The sparql-engine framework provides support for evaluating federated SPARQL queries, using the SERVICE keyword.

As with a Graph, you simply need to provide an implementation of a ServiceExecutor, a class used as a building block by the engine to evaluate SERVICE clauses. The only method that needs to be implemented is the ServiceExecutor._execute method, as detailed below.
class ServiceExecutor {
/**
* Constructor
* @param {PlanBuilder} builder - PlanBuilder instance
*/
constructor (builder) {}
/**
* Returns an iterator used to evaluate a SERVICE clause
* @param {AsyncIterator} source - Source iterator
* @param {string} iri - IRI of the SERVICE clause
* @param {Object} subquery - Subquery to be evaluated
* @param {Object} options - Execution options
* @return {AsyncIterator} An iterator used to evaluate a SERVICE clause
*/
_execute (source, iri, subquery, options) { /* ... */ }
}
Once your custom ServiceExecutor is ready, you need to install it on a PlanBuilder instance.
const ServiceExecutor = require('sparql-engine').executors.ServiceExecutor
// Suppose a custom ServiceExecutor
class CustomServiceExecutor extends ServiceExecutor { /* ... */ }
const builder = new PlanBuilder()
builder.serviceExecutor = new CustomServiceExecutor(builder)
// Then, use the builder as usual to evaluate Federated SPARQL queries
const iterator = builder.build(/* ... */)
// ...
As introduced before, a PlanBuilder relies on Executors to build the physical query execution plan of a SPARQL query. If you wish to configure how this plan is built, you just have to extend the various executors available. The following table gives you all the information you need about the available executors.
Executors
Base class | Used to handle | PlanBuilder setter
--- | --- | ---
BGPExecutor | Basic Graph Patterns | builder.bgpExecutor = ...
GraphExecutor | SPARQL GRAPH clauses | builder.graphExecutor = ...
ServiceExecutor | SPARQL SERVICE clauses | builder.serviceExecutor = ...
AggregateExecutor | SPARQL aggregates | builder.aggregateExecutor = ...
UpdateExecutor | SPARQL UPDATE protocol | builder.updateExecutor = ...
The following example shows you how to install your custom executors on a PlanBuilder instance.
const BGPExecutor = require('sparql-engine').executors.BGPExecutor
// Suppose a custom BGPExecutor
class CustomBGPExecutor extends BGPExecutor { /* ... */ }
const builder = new PlanBuilder()
builder.bgpExecutor = new CustomBGPExecutor()
// Then, use the builder as usual to evaluate SPARQL queries
const iterator = builder.build(/* ... */)
// ...
To generate the documentation:
git clone https://github.com/Callidon/sparql-engine.git
cd sparql-engine
npm install
npm run doc