ddeep-core
Decentralized real-time peer-to-peer data network
This is an alpha version!
Decentralized real-time peer-to-peer data core, used to save and sync decentralized graph data across connected peers, with features like:
Persistent data storage with recovery checkpoints
Scoped and AI-powered data policies
Real-time low latency connections
Security features like IP whitelists, data policies, and smart listeners so one device can't overload the core with connections
Command-line interactive interface to manage your network
ddeep-core works perfectly with Gun JS.
We recommend you clone this repository, run npm install, and then npm start to start ddeep-core.
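A quick sketch of that flow (the repository URL here is an assumption based on the Docker Hub namespace; use the actual ddeep-core repository):
git clone https://github.com/multineon/ddeep-core.git   # assumed URL
cd ddeep-core
npm install
npm start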
Using npm, you can install ddeep-core globally:
npm install -g ddeep-core
Now create a directory for your project and run:
ddeep-init
This will give you a complete, ready-to-go environment; you can run npm start to start your network.
To start your network using Node, just run:
npm start
or using bytenode:
node start-bytenode
You can run ddeep-core in a Docker container. You can pull the image from Docker Hub:
docker pull multineon/ddeep-core
or build it from the source (recommended):
docker build -t ddeep-core .
and now you can run ddeep-core as a Docker container:
docker run -d ddeep-core
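Since the default port in ddeep.config.js is 9999, you will likely want to publish it when running the container; a minimal sketch, assuming the default port hasn't been changed:
docker run -d -p 9999:9999 ddeep-core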
To build your core again (needed after updating configurations, policies, and extensions):
npm run build
or using bytenode:
npm run bytenode-build
or simply do this:
npm run build-start
to build your new configurations and start the server.
Currently, every time you make a change to your configurations, policies, extensions, or code, you need to build ddeep-core again. Thanks to esbuild, the build will usually be ready in under one second.
In coming versions this won't be the case and you won't need to build the code after every change.
This project is still in alpha testing and in its early stages, so we are testing for possible issues and bugs. If you face any issue, please contact us or report it on GitHub.
In the root directory of your project, you'll find a config file called ddeep.config.js where all your configurations live.
You need to build the code using npm run build every time you update your configurations.
In ddeep.config.js you'll find comments explaining what every option does, and this is an example of all the default options:
module.exports = {
    "storage": false,
    "port": 9999,
    "whitelist": [],
    "hf": null,
    "checkpoint": null,
    "reset_graph": null,
    "reset_listeners": 6000000
}
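For orientation, here is the same default config with short comments; the notes on hf, checkpoint, reset_graph, and reset_listeners are our reading of the options, and the comments in the generated ddeep.config.js itself are authoritative:
module.exports = {
    "storage": false,           // enable persistent data storage
    "port": 9999,               // port the WebSocket server listens on
    "whitelist": [],            // IP whitelist; ignored when empty
    "hf": null,                 // HuggingFace token used by smart policies (assumed)
    "checkpoint": null,         // restore-checkpoint options when storage is enabled
    "reset_graph": null,        // assumed: interval (ms) for clearing the cached graph
    "reset_listeners": 6000000  // assumed: interval (ms) for resetting graph listeners
}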
When you start ddeep-core, it opens a command-line interface where you can manage your network. Let's see the available commands:
list peers: lists all connected peers
list listeners: lists graph listeners and the peers listening to them
peer PEER_ID: shows a peer's info
clear: clears the terminal
clear peers: clears all listening peers
clear graph: clears the cached graph data
clear listeners: clears all the listeners
info: shows the configuration info
run CODE: executes Node.js code inside the core's process
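For example, since run accepts plain Node.js, a hypothetical way to check how long the core has been up:
run console.log(process.uptime())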
You can add policies to the policies.config.js file in the root directory of your project.
You need to build the code using npm run build every time you configure your policies.
Let's first look at the policy schema in ddeep:
POLICY(
    type: 'check'|'smart',
    operations: ['get', 'put'],
    graph: string,
    callback: Function // returns true or false
)
There are two types of policies, check policies and smart policies. Let's see how each one works.
The graph property accepts a string of nodes the policy is applied to. If you apply a policy to people, it's applied to all nodes under people, but if you apply a policy to people/kais, the policy will only be applied to the node kais under people, and so on.
Check policies are based on the check function: if the function returns true, access to the data is granted, and if it returns false, access is denied.
Let's see a simple example:
module.exports = [
    POLICY(
        'check', ['put'], 'people/kais',
        (data) => {
            return (data.name) ? true : false;
        }
    )
]
This policy is applied to put operations on the node kais under people, and it checks whether the data being put has a name. If it does, the operation is granted and the data is added; otherwise the operation is cancelled.
The data argument passed to the checking function contains the data being put if the operation is put, and the data being read if the operation is get.
What matters is that the checking function has to return true or false: if it returns true the operation is processed, and if it returns false the operation is ignored.
For example, this is also a valid check() policy function:
(data) => {
    if (data.plan === "pro") { return true };
    if (data.plan !== "pro") { return false };
}
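Since the callback only needs to return true or false, the same check can also be written more concisely:
(data) => data.plan === "pro"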
You have full freedom to build your own check functions and policies.
WARNING: the text classification model is giving poor class scores and is not accurate; we are currently working on a fix for this.
Smart policies use AI classification to classify inputs and give back an object of classes, with a score from 0.0 to 1.0 for every class or emotion, where 1.0 is the highest score.
First of all, we recommend you add your HuggingFace token to your ddeep.config.js so you don't hit hard rate limits.
You can check whether a class score is greater or less than a certain value. It's super easy; let's see the example below:
module.exports = [
    POLICY(
        'smart', ['get', 'put'], 'posts',
        (classes) => {
            var smart_check = extensions.load('smart_check');
            return smart_check(classes, [
                [ "anger", "<0.5", true ],
                [ "anger", ">0.5", false ]
            ]);
        }
    )
]
The policy above is applied to all nodes under posts, and it blocks all data that contains angry inputs from being added or read.
smart_check extension
With smart policies you need to use the smart_check extension to check the classes and return true or false.
The extension can be loaded using extensions.load, and it's imported into your policies by default. This is how smart_check is used:
var smartCheck = extensions.load('smart_check');
return smartCheck(classes, [
    [class: string, condition: string, return: true|false]
])
Classes: passed to the policy's function if the policy type is set to smart, instead of the data passed in check policies.
Class: has to be a valid class name.
Condition: a string that starts with an operator followed by a value, used to check whether the class score satisfies the condition. Valid operators:
>: the class value is greater than the given value. Example: ">0.3".
<: the class value is less than the given value. Example: "<0.7".
Return: the value the check returns if the condition is met.
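To make this concrete, here is a hypothetical walkthrough of the earlier posts policy (the scores below are made up for illustration):
// hypothetical class scores returned by the model for an angry input
var angryClasses = { anger: 0.8, joy: 0.1 };  // matches [ "anger", ">0.5", false ] -> operation denied
// hypothetical class scores for a calm input
var calmClasses = { anger: 0.2, joy: 0.7 };   // matches [ "anger", "<0.5", true ] -> operation allowed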
You can use extensions to expand the functionality of ddeep-core easily, with full control, real-time listeners, and more...
You add your extensions to the extensions.config.js file found in the root directory of your project.
You need to build the code using npm run build every time you configure your extensions.
{
    "name": string,
    "callback": Function
}
module.exports = [
    {
        name: 'object-keys',
        callback: (obj) => {
            return (typeof obj === 'object') ? Object.keys(obj) : null;
        }
    }
]
This is just a very simple extension that returns the keys of a data object. It might not be very useful, but it shows how to write your own extensions.
Now you can use your extension in your policies or any other file using extensions.load(extension_name). Example:
var get_object_keys = extensions.load('object-keys');
extensions is imported into your policies by default, but if you want to use your extension in other files, you can require it:
var extensions = require('./lib/ext/require');
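For example, a minimal sketch of using the object-keys extension from the earlier example in another file (the file location and printed output are assumptions):
// hypothetical file in the project root, so the relative path above resolves
var extensions = require('./lib/ext/require');

var get_object_keys = extensions.load('object-keys');
console.log(get_object_keys({ name: 'kais', plan: 'pro' })); // -> [ 'name', 'plan' ]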
In ddeep.config.js you can add a list of IP addresses (of peers, servers, or websites) that are able to connect to your core or server.
This can help you prevent cross-site connections to your core; if the list is empty, this option is ignored.
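As a sketch, and assuming the whitelist takes plain IP strings, an entry in ddeep.config.js might look like this (the addresses are placeholders):
module.exports = {
    // ...other default options...
    "whitelist": [ "127.0.0.1", "192.168.1.25" ]
}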
If you are using persistent storage, you can set up a checkpoint in ddeep.config.js so the system will create restore checkpoints based on the options you give it (explained further in the ddeep.config.js file itself).
Now to load data from a restore point, you need to run this:
node ./dev/storage/recover.js -p POINT_ID
You can check the /recover directory to see all available checkpoints and pick a point to load your data from; use the checkpoint directory name as the POINT_ID.
ddeep-core uses fastify to run a WebSocket server, as it's a very efficient WebSocket framework that can handle tens of thousands of requests per second.
If you think you can upgrade the communications structure, jump to development.
ddeep-core uses the HAM conflict resolution algorithm, which is fully implemented into ddeep-core from Gun, so we recommend you check this page for more info.
ddeep-core uses radix to handle the persistent storage functionality when storage: true.
There is nothing really fancy in check policies; they're just true | false callbacks.
Currently we are using the SamLowe/roberta-base-go_emotions model through HuggingFace inference.
We are working to upgrade this to a classification model running locally in the server itself for lower latency and more stability.
This project is part of a big movement to build a decentralized world where developers own their projects and users own their data, and ddeep-core is the core of this world.
Based on some simple benchmarks, ddeep-core can perform ~200K ops/sec on a low-end device with 2GB-4GB of RAM. We are always working to get better performance, and we would be happy to hear about your experience with it on Matrix.
dev directory
All the code lives in the /dev directory, and you can run npm run build to build your code to /dist/build.js. We use esbuild as it's the fastest tool we've ever used to build Node.js code.
If you want to develop this project, distribute it or help us improve it, you're welcome to do that, just check the license and you're good to go.
Some of the files were taken from gun-port, and there is a license notice in the first 3 lines of these files stating whether they were modified or not. We recommend you check Gun's license before using these files in a distributed version.
We want to give our thanks to all the wonderful people helping us to decentralize the world, and also to:
Mark Nadal for building the best decentralized graph engine ever.
esbuild for building the fastest bundler in the world.
fastify for building a great fast web framework for nodeJS.
bytenode for building a great bytecode compiler for NodeJS.
ddeep-core is a complete back-end NodeJS environment to run decentralized real-time databases, peers, and relays.
ddeep-core gives you the full control to configure it, scale it, change it, or do whatever you want... it's yours.
ddeep-core is part of the ddeep ecosystem, a decentralized open-source ecosystem of tools for developers to build stable decentralized projects.
ddeep-core works fine with Gun as a peer, and soon we will release the complete ddeep ecosystem so you get a great API to use with ddeep-core.
The idea of this project was inspired by Gun, and we took it to the next level. The goal is to give developers a secure & stable way to build decentralized projects, so we added policies, extensions, upgraded the connection protocols, added more storage configurations and automation for restore checkpoints, and much more...
If you need any help, have any ideas, or want to code something together, you can always send us a message on Matrix.
Built with ❤️ by Kais Radwan.