
federated-learning-server
Experimental server library for federated learning in TensorFlow.js
This library sets up a simple socket.io-based server for transmitting and receiving TensorFlow.js model weights.
import * as http from 'http';
import * as federated from 'federated-learning-server';
const INIT_MODEL = 'file:///initial/model.json';
const httpServer = http.createServer();
const fedServer = new federated.Server(httpServer, INIT_MODEL);
fedServer.setup().then(() => {
  httpServer.listen(8080);
});
new federated.Server(httpServer, tfModel); // Initialize a federated server from an in-memory tf.Model
new federated.Server(httpServer, 'https://remote.server/tf-model.json'); // or from a URL pointing to one
new federated.Server(httpServer, 'file:///my/local/file/tf-model.json'); // (which can be a file URL in Node)
new federated.Server(httpServer, async () => { // or from an asynchronous function returning one
  const model = await tf.loadModel('file:///transfer/learning/model.json');
  model.layers[0].trainable = false;
  return model;
});
new federated.Server(httpServer, federatedServerModel); // if you need fully custom behavior; see below
The simplest way to set up a federated.Server is to pass a tf.Model. However, you can also pass a string that will be delegated to tf.loadModel (both https?:// and file:// URLs should work), or an asynchronous function that will return a tf.Model. The final option is to define your own FederatedServerModel, which has to implement various saving and loading methods. See its documentation for more details.
Note that by default, different tf.Model versions will be saved as files in subfolders of ${process.cwd()}/saved-models/. If you would like to change this directory, you can pass a modelDir configuration parameter, e.g. new federated.Server(httpServer, model, { modelDir: '/mnt/my-vfs' }).
If you would like to skip the persistence layer, you can instead import FederatedServerInMemoryModel which will update a single model in memory. Furthermore, if you want a version of this library which omits socket.io in favor of a mocked-out version that works in the browser, check out the mock server library.
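As a rough sketch only (the import location and constructor arguments for FederatedServerInMemoryModel are assumptions here, not documented above), skipping persistence might look something like this:
import * as federated from 'federated-learning-server';

// Hypothetical usage: wrap an existing tf.Model in the in-memory variant and
// hand it to the Server constructor, which accepts FederatedServerModel instances.
const inMemoryModel = new federated.FederatedServerInMemoryModel(tfModel); // constructor signature assumed
const fedServer = new federated.Server(httpServer, inMemoryModel);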
new federated.Server(httpServer, model, {
  // These are true server parameters
  serverHyperparams: {
    aggregation: 'mean',      // how to merge weights (only mean supported now)
    minUpdatesPerVersion: 20, // server merges every 20 client weight updates
  },
  // These get broadcast to clients
  clientHyperparams: {
    learningRate: 0.01,    // client takes SGD steps of size 0.01
    epochs: 5,             // client takes 5 SGD steps per weight update
    examplesPerUpdate: 10, // client computes weight updates every 10 examples
    batchSize: 5,          // client subdivides `examplesPerUpdate` into batches
    noiseStddev: 0.001     // client adds N(0, 0.001) noise to their updates
  },
  verbose: false,          // whether to print debugging/timing information
  modelDir: '/mnt/my-vfs', // server stores tf.Model-specific versions here
  modelCompileConfig: {    // tf.Model-specific compile config
    loss: 'categoricalCrossentropy',
    metrics: ['accuracy']
  }
})
Many of these hyperparameters matter a great deal for the efficiency and privacy of learning, but the correct settings depend greatly on the nature of the data, the size of the model being trained, and how consistently the data is distributed across clients. In the future, we hope to support automated (and dynamic) tuning of these hyperparameters.
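For intuition, here is a small illustrative sketch in plain TensorFlow.js (not code from this library, and not how the server is implemented internally) of what a 'mean' aggregation over client weight tensors and a noiseStddev of 0.001 amount to:
import * as tf from '@tensorflow/tfjs';

// Illustrative only: a 'mean' aggregation averages each weight tensor
// across the client updates collected for a version.
const clientUpdates = [
  tf.tensor1d([0.1, 0.2]),
  tf.tensor1d([0.3, 0.4]),
  tf.tensor1d([0.2, 0.6]),
];
const merged = tf.stack(clientUpdates).mean(0); // -> [0.2, 0.4]

// Illustrative only: a noiseStddev of 0.001 means a client perturbs its
// update with N(0, 0.001) noise before uploading it.
const noisy = merged.add(tf.randomNormal(merged.shape, 0, 0.001));
Conceptually, the 'mean' aggregation applies this kind of averaging to each weight tensor in the model, and noiseStddev describes the noise each client adds before uploading.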
You can add an event listener that fires each time a client uploads a new set of weights (and optionally, self-reported metrics of how well the model performed on the examples used in training):
fedServer.onUpload(message => {
  console.log(message.model.version); // version of the model
  console.log(message.model.vars);    // serialized model variables
  console.log(message.clientId);      // self-reported and usually random client ID
  console.log(message.metrics);       // array of performance metrics for the update; only sent by clients configured to `sendMetrics`
});
You can also listen for whenever the server computes a new version of the model:
fedServer.onNewVersion((oldVersion, newVersion) => {
  console.log(`updated model from ${oldVersion} to ${newVersion}`);
});
Future work includes:

Robustness: median and trimmed-mean aggregations (for Byzantine-robustness)

Privacy: