HOPR
A project by the HOPR Association
HOPR is a privacy-preserving messaging protocol that enables the creation of a secure communication network via relay nodes, which are powered by economic incentives in the form of digital tokens.
Getting Started
A good place to start is the Getting Started guide on YouTube, which walks through the following instructions using GitPod.
Install
The following instructions show how to install the latest community release. Adapt them if you want to use the latest development release or an older release.
Install via NPM
Using the hoprd npm package:
mkdir MY_NEW_HOPR_TEST_FOLDER
cd MY_NEW_HOPR_TEST_FOLDER
npm install @hoprnet/hoprd@1.73
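To verify the installation, you can print the version of the locally installed package (npx resolves the binary installed in the folder above):
# Print the version of the locally installed hoprd package
npx hoprd --version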
Install via Docker
All our Docker images can be found in our Google Cloud Container Registry. Each image is named gcr.io/hoprassociation/$PROJECT:$RELEASE. The latest tag represents the master branch, while the latest-constantine tag represents the most recent release/* branch.
You can pull the Docker image like so:
docker pull gcr.io/hoprassociation/hoprd:latest-constantine
For ease of use you can set up a shell alias to run the latest release as a docker container:
alias hoprd='docker run --pull always -ti -v ${HOPRD_DATA_DIR:-$HOME/.hoprd-db}:/app/db -p 9091:9091 -p 3000:3000 -p 3001:3001 gcr.io/hoprassociation/hoprd:latest-constantine'
IMPORTANT: The above command maps the database folder used by hoprd to a local folder called .hoprd-db in your home directory. You can customize the location of that folder by setting HOPRD_DATA_DIR before invoking the alias:
HOPRD_DATA_DIR=${HOME}/.hoprd-better-db-folder eval hoprd
Note that all ports are also mapped to your localhost, assuming you stick to the default port numbers.
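If the default ports collide with other services on your machine, here is a sketch of how you might remap them by invoking docker directly instead of using the alias (the host-side ports below are arbitrary examples):
# Remap host ports while keeping the container-side ports unchanged
docker run --pull always -ti \
  -v ${HOPRD_DATA_DIR:-$HOME/.hoprd-db}:/app/db \
  -p 19091:9091 -p 13000:3000 -p 13001:3001 \
  gcr.io/hoprassociation/hoprd:latest-constantine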
Install via Nix package manager
NOTE: This setup should only be used for development, or if you know what you are doing and don't need further support. Otherwise you should use the npm or Docker setup.
You will need to clone the hoprnet repo first:
git clone https://github.com/hoprnet/hoprnet
If you have direnv set up properly, your nix-shell will be configured automatically upon entering the hoprnet directory and enabling direnv via direnv allow. Otherwise you must enter the nix-shell manually:
nix develop
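For reference, here is a sketch of the direnv flow described above (assuming the repository ships an .envrc file, as the direnv instructions imply):
git clone https://github.com/hoprnet/hoprnet
cd hoprnet
# Trust the repo's .envrc so the nix shell loads automatically on entering the directory
direnv allow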
Now you may follow the instructions in Develop.
Using
The hoprd binary provides various command-line switches to configure its behaviour. For reference, they are documented here as well:
$ hoprd --help
Options:
--help Show help [boolean]
--version Show version number [boolean]
--network Which network to run the HOPR node on [choices: "ETHEREUM"] [default: "ETHEREUM"]
--host The network host to run the HOPR node on. [default: "0.0.0.0:9091"]
--announce Announce public IP to the network [boolean] [default: false]
--admin Run an admin interface on localhost:3000, requires --apiToken [boolean] [default: false]
--rest Run a rest interface on localhost:3001, requires --apiToken [boolean] [default: false]
--restHost Updates the host for the rest server [default: "localhost"]
--restPort Updates the port for the rest server [default: 3001]
--healthCheck Run a health check end point on localhost:8080 [boolean] [default: false]
--healthCheckHost Updates the host for the healthcheck server [default: "localhost"]
--healthCheckPort Updates the port for the healthcheck server [default: 8080]
--forwardLogs Forwards all your node logs to a public available sink [boolean] [default: false]
--forwardLogsProvider A provider url for the logging sink node to use [default: "https://ceramic-clay.3boxlabs.com"]
--password A password to encrypt your keys [default: ""]
--apiToken (experimental) A REST API token and admin panel password for user authentication [string]
--identity The path to the identity file [default: "/home/tbr/.hopr-identity"]
--run Run a single hopr command, same syntax as in hopr-admin [default: ""]
--dryRun List all the options used to run the HOPR node, but quit instead of starting [boolean] [default: false]
--data manually specify the database directory to use [default: ""]
--init initialize a database if it doesn't already exist [boolean] [default: false]
--privateKey A private key to be used for your node wallet, to quickly boot your node [string] [default: undefined]
--adminHost Host to listen to for admin console [default: "localhost"]
--adminPort Port to listen to for admin console [default: 3000]
--environment Environment id to run in [string] [default: defined by release]
--testAnnounceLocalAddresses For testing local testnets. Announce local addresses. [boolean] [default: false]
--testPreferLocalAddresses For testing local testnets. Prefer local peers to remote. [boolean] [default: false]
--testUseWeakCrypto weaker crypto for faster node startup [boolean] [default: false]
--testNoAuthentication (experimental) disable remote authentication
As you might have noticed, running the node without any command-line arguments may not work, depending on the installation method used. Here are examples of running a node with a safe set of configuration options.
Using NPM
The following command assumes you've set up a local installation as described in Install via NPM.
cd MY_NEW_HOPR_TEST_FOLDER
DEBUG=hopr* npx hoprd --admin --init --announce --identity .hopr-identity --password switzerland --forwardLogs --apiToken <MY_TOKEN>
Here is a short breakdown of each argument:
- --admin : run the admin interface on localhost:3000
- --init : initialize a database if it doesn't already exist
- --announce : announce the node's public IP to the network
- --identity .hopr-identity : use (and create) the identity file in the local folder
- --password switzerland : encrypt the identity keys with the password switzerland
- --forwardLogs : forward the node's logs to a publicly available sink
- --apiToken <MY_TOKEN> : use <MY_TOKEN> to authenticate against the admin panel and REST API
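Before launching for real, the --dryRun switch documented above can be used to print the resolved configuration and exit without starting the node:
# List all options the node would run with, then quit
npx hoprd --dryRun --init --identity .hopr-identity --password switzerland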
Using Docker
The following command assumes you've set up an alias as described in Install via Docker.
hoprd --identity /app/db/.hopr-identity --password switzerland --init --announce --host "0.0.0.0:9091" --admin --adminHost 0.0.0.0 --forwardLogs --apiToken <MY_TOKEN> --environment jungfrau
Here is a short breakdown of each argument:
- --identity /app/db/.hopr-identity : keep the identity file inside the mounted database folder so it persists across container restarts
- --password switzerland : encrypt the identity keys with the password switzerland
- --init : initialize a database if it doesn't already exist
- --announce : announce the node's public IP to the network
- --host "0.0.0.0:9091" : listen on all interfaces inside the container so the mapped port is reachable
- --admin : run the admin interface
- --adminHost 0.0.0.0 : bind the admin interface to all interfaces inside the container so it is reachable from the host
- --forwardLogs : forward the node's logs to a publicly available sink
- --apiToken <MY_TOKEN> : use <MY_TOKEN> to authenticate against the admin panel and REST API
- --environment jungfrau : run the node in the jungfrau environment
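Once the container is running, a quick sanity check that the admin interface is reachable through the mapped port (3000 corresponds to the default --adminPort):
# Should print an HTTP status code once the admin interface is up
curl -s -o /dev/null -w '%{http_code}\n' http://localhost:3000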
Migrating between releases
At the moment we DO NOT HAVE backward compatibility between releases.
Below we provide instructions on how to migrate your tokens between releases.
- Set your automatic channel strategy to MANUAL.
- Close all open payment channels.
- Once all payment channels have closed, withdraw your funds to an external wallet.
- Run info and take note of the network name.
- Once funds are confirmed to exist in a different wallet, back up the .hopr-identity and .db folders (see the backup sketch after this list).
- Launch a new HOPRd instance using the latest release; this will create new .hopr-identity and .db folders. Take note of the new account address.
- Only transfer funds to the new HOPRd instance if it operates on the same network as the last release; you can compare the two networks using info.
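A minimal sketch of the backup step, assuming the default folder names used in this README; adjust the paths to wherever your node actually stores its data:
# Archive the identity and database folders (paths are assumptions)
tar czf hopr-backup-$(date +%F).tgz .hopr-identity .db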
Develop
yarn                                # install dependencies
yarn build                          # build all packages
yarn run:network                    # start a local development network
DEBUG=hopr* yarn run:hoprd:alice    # start a first test node with debug logging (separate terminal)
DEBUG=hopr* yarn run:hoprd:bob      # start a second test node with debug logging (separate terminal)
yarn run:faucet:all                 # fund all locally running nodes
Test
Unit testing
We use mocha for our tests. You can run our test suite across all
packages using the following command:
yarn test
To run tests of a single package (e.g. hoprd) execute:
yarn --cwd packages/hoprd test
To run the tests of a single test suite within a package, use mocha's --grep filter. For instance, to run only the Identity test suite in hoprd, run the following:
yarn --cwd packages/hoprd test --grep "Identity"
In a similar fashion, our contracts can be tested in isolation. For now, you need to pass the file to be tested, as hardhat does not support --grep:
yarn test:contracts test/HoprChannels.spec.ts
In case a package you need to test is not included in our package.json, please feel free to update it as needed.
Test-driven development
To make sure we add the least amount of untested code to our codebase, whenever possible all code should come accompanied by a test. To do so, locate the .spec or equivalent test file for your code. If it does not exist, create it in the same folder your code will live in.
Afterwards, ensure you create a breaking test for your feature. For example, the following commit added a test for a then non-existent feature, and the immediately following commit provided the actual feature for that test. Repeat this process for all the code you add to our codebase.
(The code was pushed as an example, but ideally you should only push code that has working tests on your machine, to avoid overusing our CI pipeline with known broken tests.)
Github Actions CI
We run a fair amount of automation using Github Actions. To ease development
of these workflows one can use act to run workflows locally in a
Docker environment.
E.g. running the build workflow:
act -j build
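act can also list the jobs defined in the repository's workflows, which helps with picking the right -j target:
# List all jobs found in .github/workflows
act -l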
For more information please refer to act's documentation.
End-to-End Testing
Running Tests Locally
End-to-end testing is usually performed by the CI, but can also be performed
locally by executing:
./scripts/run-integration-tests-source.sh
Read the full help information of the script in case of questions:
./scripts/run-integration-tests-source.sh --help
That command will spawn multiple hoprd nodes locally from the local source code and run the tests against this cluster of nodes. The tests can be found in the files test/*.sh. The script will clean up all nodes once completed, unless instructed otherwise.
An alternative to using the local source code is running the tests against an NPM package.
./scripts/run-integration-tests-npm.sh
If no parameter is given, the NPM package which correlates to the most recent Git tag will be used; otherwise the first parameter is used as the NPM package version to test.
Read the full help information of the script in case of questions:
./scripts/run-integration-tests-npm.sh --help
Running Tests on Google Cloud Platform
In some rare cases, bugs might not have been picked up by our end-to-end testing and instead only show up when deployed to production. To avoid discovering these only after a time-consuming build, a cluster of nodes can be deployed to Google Cloud Platform and used to run the tests against. A requirement for this setup is a working gcloud configuration locally. The easiest approach is to authenticate with gcloud auth login.
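For reference, a minimal one-time gcloud setup might look like this (the project id is a placeholder):
gcloud auth login
# Select the GCP project to deploy into (placeholder id)
gcloud config set project <YOUR_GCP_PROJECT>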
The cluster creation and tests can be run with:
FUNDING_PRIV_KEY=mysecretaccountprivkey \
./scripts/run-integration-tests-gcloud.sh
The given account private key is used to fund the test nodes so they can perform the tests. Thus the account must have enough funds available.
Read the full help information of the script in case of questions:
./scripts/run-integration-tests-gcloud.sh --help
Deploy
The deployment of nodes and networks is mostly orchestrated through the script files in scripts/, which are executed by the Github Actions CI workflows. Therefore, all common and minimal networks do not require manual steps to be deployed.
Using Google Cloud Platform
However, sometimes it is useful to deploy additional nodes or specific versions of hoprd. To accomplish that, it is possible to create a cluster on GCP using the following script:
./scripts/setup-gcloud-cluster.sh my-custom-cluster-without-name
Read the full help information of the script in case of questions:
./scripts/setup-gcloud-cluster.sh --help
The script requires a few environment variables to be set, but will inform the
user if one is missing. It will create a cluster of 6 nodes. By default these
nodes will use the latest Docker image of hoprd
and run on the Goerli
network. Different versions and different target networks can be configured
through the parameters and environment variables.
To launch nodes using the xDai
network one would execute (with the
placeholders replaced accordingly):
HOPRD_PROVIDER="<URL_TO_AN_XDAI_ENDPOINT>" \
HOPRD_TOKEN_CONTRACT="<ADDRESS_OF_TOKEN_CONTRACT_ON_XDAI>" \
./scripts/setup-gcloud-cluster.sh my-custom-cluster-without-name
A previously started cluster can be destroyed, which includes all running nodes,
by using the same script but setting the cleanup switch:
HOPRD_PERFORM_CLEANUP=true \
./scripts/setup-gcloud-cluster.sh my-custom-cluster-without-name
Using Google Cloud Platform and a Default Topology
The creation of a hoprd cluster on GCP can be enhanced by providing a topology script to the creation script:
./scripts/setup-gcloud-cluster.sh \
my-custom-cluster-without-name \
gcr.io/hoprassociation/hoprd:latest \
`pwd`/scripts/topologies/full_interconnected_cluster.sh
After the normal cluster creation, the topology script will open channels between all nodes so they are fully interconnected. Custom topology scripts can easily be added and used in the same manner. Refer to the referenced script as a guideline on how to get started.
Tooling
As some tools are only partially supported, please tag the respective team member whenever you open an issue about a particular tool.
| Maintainer | Technology |
| --- | --- |
| @jjperezaguinaga | Visual Code |
| @tolbrino | Nix |
Contact
License
GPL v3 © HOPR Association