polkadot-secure-validator

polkadot-secure-validator - npm Package Compare versions

Comparing version 3.0.1 to 3.1.0

ansible/roles/polkadot-validator/files/journald.conf


package.json
{
  "name": "polkadot-secure-validator",
- "version": "3.0.1",
+ "version": "3.1.0",
  "main": "src/index.js",

@@ -5,0 +5,0 @@ "repository": "https://github.com/w3f/polkadot-secure-validator",

@@ -6,9 +6,6 @@ [![CircleCI](https://circleci.com/gh/w3f/polkadot-secure-validator.svg?style=svg)](https://circleci.com/gh/w3f/polkadot-secure-validator)

This repo describes a potential setup for a Polkadot validator that aims to
prevent some types of potential attacks, as described in the
[Polkadot Secure Validator approach](https://hackmd.io/QSJlqjZpQBihEU_ojmtR8g).
prevent some types of potential attacks at the TCP layer and below.
The [Workflow](#workflow) section describes the [Platform Layer](#platform-layer)
and the [Application Layer](#application-layer) in more detail.
![Polkadot Secure Network Chart](secure_network_chart.svg)
## Usage

@@ -37,30 +34,12 @@

The secure validator setup is composed of one or more validators and a set of
public nodes connected to it. The validators are isolated from the internet
and only have access to the Polkadot network through
the public nodes.
The secure validator setup is composed of one or more validators that run with a local
instance of NGINX as a reverse TCP proxy in front of them. The validators are instructed to:
* advertise themselves with the public IP of the node and the port where the
reverse proxy is listening.
* bind to the localhost interface, so that they only allow incoming connections from the
proxy.
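The proxy arrangement described above can be sketched as a minimal NGINX `stream` configuration. This is a hypothetical fragment, not taken from the repo's ansible role; the port numbers are assumptions for illustration:

```nginx
# Hypothetical sketch: TCP reverse proxy in front of a validator.
# NGINX listens on the publicly advertised port and forwards to the
# Polkadot p2p port, which is bound to the localhost interface only.
stream {
    server {
        listen 30333;                  # public port advertised to the network
        proxy_pass 127.0.0.1:30334;    # validator listening on localhost
    }
}
```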
The connection between the validator nodes and the public nodes is established by
defining a VPN to which all these nodes belong. The Polkadot instance running on
each validator node is configured to only listen on the VPN-attached interface,
and uses the public node's VPN address in the `--reserved-nodes` parameter. It is
also protected by a firewall that only allows connections on the VPN port.
The setup also configures a firewall in which the default p2p port is closed for
incoming connections and only the proxy port is open.
This way, the only nodes allowed to connect to the validators are the public nodes
through the VPN. Messages sent by other validators can still reach it through
gossiping, and these validators can know the IP address of the secure validator
because of this, but can't directly connect to it without being part of the VPN.
*WARNING*
If you use this tool to create and/or configure your validator setup, or
implement your setup based on this approach, take into account that if you add
public telemetry endpoints to your nodes (either the validator or the public
nodes) then the IP address of the validator will be publicly available too,
given that the contents of the network state RPC call are sent to telemetry.
Even though the secure validator in this setup only has the VPN port open and
WireGuard has a reasonable [approach to mitigate DoS attacks](https://www.wireguard.com/protocol/#dos-mitigation),
we recommend not sending this information to publicly accessible endpoints.
## Workflow

@@ -73,6 +52,5 @@

Both validator and public nodes are created in a similar way using the terraform
modules located in the [terraform](/terraform) directory. We have created code for
several providers, but it is possible to add new ones; please reach out if you
are interested in any provider currently not available.
Validators are created using the terraform modules located in the [terraform](/terraform)
directory. We have created code for several providers, but it is possible to add new
ones; please reach out if you are interested in any provider currently not available.

@@ -84,373 +62,24 @@ Besides the actual machines the terraform modules create the minimum required networking

This is done through the ansible playbook and roles located at [ansible](/ansible); the
configuration applied depends on the type of node:
This is done through the ansible playbook and polkadot-validator role located at
[ansible](/ansible); basically, the role performs these actions:
* Common:
Software firewall setup: for the validator we only allow the proxy, SSH and, if
enabled, node-exporter ports.
* Configure journald to tune log storage.
* Create polkadot user and group.
* Configure NGINX proxy
* Setup polkadot service, including binary download.
* Polkadot session management, create session keys if they are not present.
* Setup node-exporter if the configuration includes it.
Software firewall setup: for the validator we only allow the VPN and SSH
ports; for the public nodes, the VPN, SSH and p2p ports.
# Note about upgrades from the sentries setup
* VPN setup: for the VPN solution we are using [WireGuard](https://github.com/WireGuard/WireGuard),
at this stage we create the private and public keys on each node, making the
public keys available to ansible.
The current version of polkadot-secure-validator doesn't allow creating and configuring
sentry nodes. Although the terraform files and ansible roles of this latest version
can be applied on setups created with previous versions, the validators would be configured
to work without sentries and to connect to the network using the local reverse proxy instead.
* VPN install: we install and configure WireGuard on each host using the public
keys from the previous stage. The configuration for the validator looks like:
```
[Interface]
PrivateKey = <...>
ListenPort = 51820
SaveConfig = true
Address = 10.0.0.1/24
[Peer]
PublicKey = 8R7PTv1CdNLHRsDvrvE58Ac0Inc9vOLY2vFMWIFV/W4=
AllowedIPs = 10.0.0.2/32
Endpoint = 64.93.77.93:51820
[Peer]
PublicKey = ZZW6Wuk+YjJToeLHIUrp0HAqfNozgQfUMo2owC2Imzg=
AllowedIPs = 10.0.0.3/32
Endpoint = 50.81.184.50:51820
[Peer]
PublicKey = LZHKtuGCxz9iCoNNDmQzzNe9eF9aLXj/4yJRkFjCWzM=
AllowedIPs = 10.0.0.4/32
Endpoint = 45.243.244.130:51820
```
* Polkadot setup: create a Polkadot user and group and download the binary.
* Public nodes:
* Start Polkadot service: the public nodes are started and we make the libp2p peer
id of the node available to ansible. The generated systemd unit looks like:
```
[Unit]
Description=Polkadot Node
[Service]
ExecStart=/usr/local/bin/polkadot \
--name sv-public-0 \
--sentry
Restart=always
[Install]
WantedBy=multi-user.target
```
* Private (validator) nodes:
* Start Polkadot service: the private (validator) node is started with the node's VPN address as part
of the listen multiaddr and the multiaddr of the public nodes (with the peer id
from the previous stage and the VPN addresses) as `reserved-nodes`. It looks like:
```
[Unit]
Description=Polkadot Node
[Service]
ExecStart=/usr/local/bin/polkadot \
--name sv-private \
--validator \
--listen-addr=/ip4/10.0.0.1/tcp/30333 \
--reserved-nodes /ip4/10.0.0.2/tcp/30333/p2p/QmNpQbu2nKfHQMySnCue3XC9mAjBfzi8DQ9KvNwUM8jZdx \
--reserved-nodes /ip4/10.0.0.3/tcp/30333/p2p/QmY81TLZKeNj4mGDAhFQE6RrHEJPidAkccgUTsJo7ifNFJ \
--reserved-nodes /ip4/10.0.0.4/tcp/30333/p2p/QmTwMDJDnPyHUHV2fZFcVbNpYzp6Fu7LP6VhhK3Ei13iXr
Restart=always
[Install]
WantedBy=multi-user.target
```
## Scopes
This setup partitions the network into 3 separate kinds of nodes: the secure validator,
its public nodes and the regular network nodes, each group having a different
vision of and accessibility to the rest of the network. To verify this, we'll execute
the `system_networkState` RPC call on nodes of each partition:
```
curl -H "Content-Type: application/json" --data '{ "jsonrpc":"2.0", "method":"system_networkState", "params":[],"id":1 }' localhost:9933
```
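The JSON returned by this call can also be inspected programmatically. The following Node.js sketch lists the peer IDs a node reports as connected; the sample object is a hypothetical stand-in for the live RPC output:

```javascript
// Hypothetical sketch: extract connected peer IDs from a
// system_networkState response. The sample object below stands in
// for the JSON returned by the RPC call above.
const response = {
  jsonrpc: '2.0',
  id: 1,
  result: {
    connectedPeers: {
      QmPjNcWNZjNrjVFzkNYR6jH7HLqyU7j9piczUyNoxce1fD: { open: true },
    },
    notConnectedPeers: {},
  },
};

// The keys of connectedPeers are the libp2p peer IDs.
const connected = Object.keys(response.result.connectedPeers);
console.log(connected); // [ 'QmPjNcWNZjNrjVFzkNYR6jH7HLqyU7j9piczUyNoxce1fD' ]
```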
### Validator
It can only reach and be reached by its public nodes, as shown by the
`system_networkState` RPC call:
```
{
[ ........ ]
"result": {
"connectedPeers": {
[ only validator's public nodes shown here]
"QmPjNcWNZjNrjVFzkNYR6jH7HLqyU7j9piczUyNoxce1fD": {
"enabled": true,
"endpoint": {
"dialing": "/ip4/10.0.0.2/tcp/30333"
},
"knownAddresses": [
"/ip6/::1/tcp/30333",
"/ip4/10.0.0.2/tcp/30333",
"/ip4/127.0.0.1/tcp/30333",
"/ip4/172.26.59.86/tcp/30333",
"/ip4/18.197.157.119/tcp/30333"
],
"latestPingTime": {
"nanos": 256512049,
"secs": 0
},
"open": true,
},
[ ........ ]
},
"notConnectedPeers": {
[ always known regular nodes: boot nodes, other validators, etc ]
"QmP3zYRhAxxw4fDf6Vq5agM8AZt1m2nKpPAEDmyEHPK5go": {
"knownAddresses": [
"/dns4/p2p.testnet-4.kusama.network/tcp/30100"
],
"latestPingTime": null,
"versionString": null
},
[ ........ ]
},
"peerset": {
[ all known nodes shown here, only reported connected to validator's public nodes ]
"QmPjNcWNZjNrjVFzkNYR6jH7HLqyU7j9piczUyNoxce1fD": {
"connected": true,
"reputation": 1114
},
"QmP3zYRhAxxw4fDf6Vq5agM8AZt1m2nKpPAEDmyEHPK5go": {
"connected": false,
"reputation": 0
},
[ ........ ]
"reserved_only": true
}
}
}
```
### Validator's public nodes
They can reach and be reached both by the validator and by the regular network
nodes:
```
{
[ ........ ]
"result": {
"connectedPeers": {
[ secure validator, other secure validator's public nodes and regular nodes]
"QmZSocEssLWHYCY6mqR99DcSFEpMb95fVeMsScrY8jqBm8": {
"enabled": true,
"endpoint": {
"listening": {
"listen_addr": "/ip4/10.0.0.2/tcp/30333",
"send_back_addr": "/ip4/10.0.0.1/tcp/54932"
}
},
"knownAddresses": [
"/ip4/10.0.0.1/tcp/30333",
"/ip4/10.0.1.18/tcp/30333",
"/ip4/147.75.199.231/tcp/30333",
"/ip4/10.0.1.152/tcp/30333"
],
"latestPingTime": {
"nanos": 335876602,
"secs": 0
},
"open": true,
},
"QmP3zYRhAxxw4fDf6Vq5agM8AZt1m2nKpPAEDmyEHPK5go": {
"enabled": true,
"endpoint": {
"listening": {
"listen_addr": "/ip4/172.26.59.86/tcp/30333",
"send_back_addr": "/ip4/191.232.49.216/tcp/3008"
}
},
"knownAddresses": [
"/dns4/p2p.testnet-4.kusama.network/tcp/30100",
"/ip4/127.0.0.1/tcp/30100",
"/ip4/10.244.0.10/tcp/30100",
"/ip4/191.232.49.216/tcp/30100"
],
"latestPingTime": {
"nanos": 603313251,
"secs": 0
},
"open": true,
},
[ ........ ]
},
"notConnectedPeers": {
[ regular nodes ]
"QmW45D6YLfctkSnsjyoqcSxw9qoiXUmAFGn5ea99L6SC7X": {
"knownAddresses": [
"/ip4/10.8.2.14/tcp/30101",
"/ip4/127.0.0.1/tcp/30101",
"/ip4/34.80.190.48/tcp/30101"
],
"latestPingTime": {
"nanos": 571989635,
"secs": 0
},
},
[ ........ ]
},
"peerset": {
[ all known nodes reported as connected here ]
"QmP3zYRhAxxw4fDf6Vq5agM8AZt1m2nKpPAEDmyEHPK5go": {
"connected": true,
"reputation": 1277
},
"QmZSocEssLWHYCY6mqR99DcSFEpMb95fVeMsScrY8jqBm8": {
"connected": true,
"reputation": -571
},
[ ........ ]
"reserved_only": false
}
}
}
```
### Network regular nodes
They can reach and be reached by the validator's public nodes and by other regular
nodes; they don't have access to the validator.
```
{
[ ........ ]
"result": {
"connectedPeers": {
[ secure validator's public nodes and regular nodes ]
"QmPjNcWNZjNrjVFzkNYR6jH7HLqyU7j9piczUyNoxce1fD": {
"enabled": true,
"endpoint": {
"listening": {
"listen_addr": "/ip4/10.44.1.11/tcp/30101",
"send_back_addr": "/ip4/18.197.157.119/tcp/42962"
}
},
"knownAddresses": [
"/ip4/172.26.59.86/tcp/30333",
"/ip4/127.0.0.1/tcp/30333",
"/ip6/::1/tcp/30333",
"/ip4/18.197.157.119/tcp/30333",
"/ip4/10.0.0.2/tcp/30333",
"/ip4/10.0.1.18/tcp/30333"
],
"latestPingTime": {
"nanos": 108101687,
"secs": 0
},
"open": true,
},
"QmP3zYRhAxxw4fDf6Vq5agM8AZt1m2nKpPAEDmyEHPK5go": {
"enabled": true,
"endpoint": {
"listening": {
"listen_addr": "/ip4/10.44.1.11/tcp/30101",
"send_back_addr": "/ip4/191.232.49.216/tcp/3010"
}
},
"knownAddresses": [
"/dns4/p2p.testnet-4.kusama.network/tcp/30100",
"/ip4/127.0.0.1/tcp/30100",
"/ip4/191.232.49.216/tcp/30100",
"/ip4/10.244.0.10/tcp/30100"
],
"latestPingTime": {
"nanos": 717286051,
"secs": 0
},
"open": true,
"versionString": "parity-polkadot/v0.5.0-4e53ad1-x86_64-linux-gnu (unknown)"
},
[ ........ ]
},
"notConnectedPeers": {
[ secure validator ]
"QmZSocEssLWHYCY6mqR99DcSFEpMb95fVeMsScrY8jqBm8": {
"knownAddresses": [
"/ip4/10.0.0.1/tcp/30333",
"/ip4/10.0.1.18/tcp/30333",
"/ip4/10.0.1.152/tcp/30333",
"/ip4/147.75.199.231/tcp/30333"
],
"latestPingTime": {
"nanos": 375552762,
"secs": 0
},
}
[ ........ ]
},
"peerset": {
[ all known nodes shown here, reported connected to all, secure validator with 0 reputation ]
"QmP3zYRhAxxw4fDf6Vq5agM8AZt1m2nKpPAEDmyEHPK5go": {
"connected": true,
"reputation": 1115
},
"QmPjNcWNZjNrjVFzkNYR6jH7HLqyU7j9piczUyNoxce1fD": {
"connected": true,
"reputation": 3500
},
"QmZSocEssLWHYCY6mqR99DcSFEpMb95fVeMsScrY8jqBm8": {
"connected": true,
"reputation": 0
},
[ ........ ]
"reserved_only": false
}
}
}
```
If you created the sentries with a previous version of this tool through terraform following
the complete workflow, then they will not be deleted automatically when running this new version.
In short, the old sentries will no longer be used by the validators and it will be up to you to
remove them manually.

@@ -12,4 +12,6 @@ const { Ansible } = require('./clients/ansible');

for (let counter = 0; counter < ansibleCfg.publicNodes.nodes.length; counter++) {
ansibleCfg.publicNodes.nodes[counter].ipAddresses = platformResult.publicNodesIpAddresses[counter];
if(ansibleCfg.publicNodes) {
  for (let counter = 0; counter < ansibleCfg.publicNodes.nodes.length; counter++) {
    ansibleCfg.publicNodes.nodes[counter].ipAddresses = platformResult.publicNodesIpAddresses[counter];
  }
}

@@ -16,0 +18,0 @@

@@ -42,4 +42,19 @@ const path = require('path');

const target = path.join(buildDir, inventoryFileName);
const validators = this._genTplNodes(this.config.validators);
const publicNodes = this._genTplNodes(this.config.publicNodes, validators.length);
const validatorTelemetryUrl = this.config.validators.telemetryUrl;
const validatorLoggingFilter = this.config.validators.loggingFilter;
const polkadotAdditionalValidatorFlags = this.config.validators.additionalFlags;
let publicNodes = [];
let publicTelemetryUrl = '';
let publicLoggingFilter = '';
let polkadotAdditionalPublicFlags = '';
if (this.config.publicNodes) {
  publicNodes = this._genTplNodes(this.config.publicNodes, validators.length);
  publicTelemetryUrl = this.config.publicNodes.telemetryUrl;
  publicLoggingFilter = this.config.publicNodes.loggingFilter;
  polkadotAdditionalPublicFlags = this.config.publicNodes.additionalFlags;
}
const data = {

@@ -56,7 +71,7 @@ project: this.config.project,

validatorTelemetryUrl: this.config.validators.telemetryUrl,
publicTelemetryUrl: this.config.publicNodes.telemetryUrl,
validatorTelemetryUrl,
publicTelemetryUrl,
validatorLoggingFilter: this.config.validators.loggingFilter,
publicLoggingFilter: this.config.publicNodes.loggingFilter,
validatorLoggingFilter,
publicLoggingFilter,

@@ -66,4 +81,4 @@ buildDir,

polkadotAdditionalCommonFlags: this.config.additionalFlags,
polkadotAdditionalValidatorFlags: this.config.validators.additionalFlags,
polkadotAdditionalPublicFlags: this.config.publicNodes.additionalFlags,
polkadotAdditionalValidatorFlags,
polkadotAdditionalPublicFlags,
};

@@ -70,0 +85,0 @@ if (this.config.nodeExporter && this.config.nodeExporter.enabled) {

@@ -41,6 +41,8 @@ const fs = require('fs-extra');

let publicNodeSyncPromises = [];
try {
publicNodeSyncPromises = await this._create('publicNode', sshKeys.publicNodePublicKey, this.config.publicNodes.nodes, method);
} catch(e) {
console.log(`Could not get publicNodes sync promises: ${e.message}`);
if (this.config.publicNodes) {
  try {
    publicNodeSyncPromises = await this._create('publicNode', sshKeys.publicNodePublicKey, this.config.publicNodes.nodes, method);
  } catch(e) {
    console.log(`Could not get publicNodes sync promises: ${e.message}`);
  }
}

@@ -61,9 +63,10 @@ const syncPromises = validatorSyncPromises.concat(publicNodeSyncPromises)

let publicNodesCleanPromises = []
try {
publicNodesCleanPromises = await this._destroy('publicNode', this.config.publicNodes.nodes);
} catch(e) {
console.log(`Could not get publicNodes clean promises: ${e.message}`);
let publicNodesCleanPromises = [];
if (this.config.publicNodes) {
  try {
    publicNodesCleanPromises = await this._destroy('publicNode', this.config.publicNodes.nodes);
  } catch(e) {
    console.log(`Could not get publicNodes clean promises: ${e.message}`);
  }
}
const cleanPromises = validatorCleanPromises.concat(publicNodesCleanPromises);

@@ -171,4 +174,6 @@

for (let counter = 0; counter < this.config.publicNodes.nodes.length; counter++) {
this._copyTerraformFiles('publicNode', counter, this.config.publicNodes.nodes[counter].provider);
if (this.config.publicNodes) {
  for (let counter = 0; counter < this.config.publicNodes.nodes.length; counter++) {
    this._copyTerraformFiles('publicNode', counter, this.config.publicNodes.nodes[counter].provider);
  }
}

@@ -175,0 +180,0 @@ }

@@ -16,4 +16,6 @@ const asyncUtils = require('./async.js');

const validatorIpAddresses = await this._extractOutput('validator', this.config.validators.nodes);
const publicNodesIpAddresses = await this._extractOutput('publicNode', this.config.publicNodes.nodes);
let publicNodesIpAddresses = [];
if (this.config.publicNodes) {
  publicNodesIpAddresses = await this._extractOutput('publicNode', this.config.publicNodes.nodes);
}
return { validatorIpAddresses, publicNodesIpAddresses };

@@ -20,0 +22,0 @@ }
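The recurring change in these hunks is treating `publicNodes` as an optional section of the configuration. That guard pattern can be illustrated in isolation; the config shape and the `extractOutput` helper below are hypothetical, standing in for the real class methods:

```javascript
// Hypothetical sketch of the optional-publicNodes guard used above:
// when the config has no publicNodes section, fall back to an empty
// list instead of dereferencing undefined.
function extractPublicNodeIps(config, extractOutput) {
  let publicNodesIpAddresses = [];
  if (config.publicNodes) {
    publicNodesIpAddresses = config.publicNodes.nodes.map(
      (node) => extractOutput('publicNode', node)
    );
  }
  return publicNodesIpAddresses;
}

// With no publicNodes section the result is simply empty.
console.log(extractPublicNodeIps({}, () => '203.0.113.5')); // []
// With a publicNodes section each node is resolved through the helper.
console.log(extractPublicNodeIps(
  { publicNodes: { nodes: [{ provider: 'aws' }] } },
  () => '203.0.113.5'
)); // [ '203.0.113.5' ]
```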

Sorry, the diff of this file is not supported yet

