BitTorrent DHT


Go implementation of the BitTorrent DHT protocol. BitTorrent DHT is the main peer discovery layer for BitTorrent, which allows for trackerless torrents.

Installation

$ go get github.com/attilabuti/bittorrentdht@latest

Usage

For more information, please see the Package Docs.

package main

import (
	"fmt"
)

func main() {
	// See the Package Docs for the full API; this stub only verifies
	// that the module is installed and the program compiles.
	fmt.Println("bittorrentdht installed")
}

KBucket

Kademlia DHT K-bucket implementation as a binary tree. Ported from Tristan Slominski's k-bucket.

Overview

A Distributed Hash Table (DHT) is a decentralized distributed system that provides a lookup table similar to a hash table.

k-bucket is an implementation of a storage mechanism for keys within a DHT. It stores contact objects, which represent the locations and addresses of nodes in the decentralized distributed system. Contact objects are typically identified by a SHA-1 hash; however, this restriction is lifted in this implementation. Additionally, node ids of different lengths can be compared.

This Kademlia DHT k-bucket implementation is meant to be as minimal as possible. It assumes that contact objects consist only of an id. It is useful, and often necessary, to attach other properties to a contact. For example, one may want to attach ip and port properties, which allow an application to send IP traffic to the contact. However, this information is extraneous and irrelevant to the operation of a k-bucket.
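
A contact carrying such extra properties might look like the sketch below. Only Id matters to the k-bucket; the IP and Port fields are illustrative assumptions, not part of the package API.

import "net"

// Only Id is used by the k-bucket itself; IP and Port are hypothetical
// application-level extras used to send traffic to the node.
type Contact struct {
	Id   []byte // node id, e.g. a SHA-1 hash
	IP   net.IP
	Port int
}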

arbiter function

This k-bucket implementation provides a conflict-resolution mechanism based on an arbiter function. The purpose of the arbiter is to choose between two contact objects with the same id but perhaps different properties, and to determine which one should be stored. Since the arbiter function returns the actual object to be stored, it does not need to make an either/or choice; it could instead perform some operation and return the result as a new object to be stored. See kBucket.update(node, index, contact) for the detailed semantics of which contact (incumbent or candidate) is selected.

For example, an arbiter function implementing a VectorClock mechanism would look something like:

// Contact holds a node id plus, in this example, a vector clock.
type Contact struct {
	Id          []byte
	VectorClock int
}

// arbiter keeps whichever contact has the higher vector clock,
// preferring the candidate on a tie.
func arbiter(incumbent Contact, candidate Contact) Contact {
	if incumbent.VectorClock > candidate.VectorClock {
		return incumbent
	}

	return candidate
}
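
With this arbiter, a tie in vector clocks favors the candidate, so re-adding a contact with an unchanged VectorClock replaces the stored object with the newer one.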

Documentation

For more information, please see the Package Docs.

Implementation of a Kademlia DHT k-bucket used for storing contact (peer node) information.

For a step-by-step example of k-bucket operation, you may find the following slideshow useful: Distribute All The Things.

KBucket starts off as a single k-bucket with a capacity of k. Once the (k+1)-th contact is added, the k-bucket is split into two k-buckets. The split happens according to the first bit of the contact node id. The k-bucket that would contain the local node id is the "near" k-bucket, and the other one is the "far" k-bucket. The "far" k-bucket is marked "don't split" to prevent further splitting. The existing contacts are then redistributed across the two new k-buckets, and the old k-bucket becomes an inner node within a tree data structure.

As even more contacts are added to the "near" k-bucket, it will split again once it becomes full, this time along the second bit of the contact node id. Again, the two newly created k-buckets are marked "near" and "far", the "far" k-bucket is marked "don't split", and the contacts in the old bucket are redistributed. This continues as long as nodes are added to the "near" k-bucket, until the number of splits reaches the bit length of the local node id.
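
The bit examined at each split depth can be read from the node id as in this sketch (a hypothetical helper, not part of the package API); at split depth i, contacts whose bit matches the local node id's bit at that position fall into the "near" k-bucket.

// bitAt reports whether bit i of id is set, counting from the most
// significant bit of the first byte (hypothetical helper).
func bitAt(id []byte, i int) bool {
	return id[i/8]&(0x80>>uint(i%8)) != 0
}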

As more contacts are added to the "far" k-bucket and it reaches its capacity, it does not split. Instead, the k-bucket emits a "kbucket.ping" event (register a listener with emitter.On("kbucket.ping", func(oldContacts Contacts, newContact Contact) {...})) that includes a slice of old contacts it hasn't heard from in a while; you are expected to confirm that those contacts still respond (literally respond to a PING RPC). If an old contact still responds, it should be re-added (kBucket.Add(oldContact)) to the k-bucket, which puts it on the "recently heard from" end of the bucket's list of nodes. If the old contact does not respond, it should be removed (kBucket.Remove(oldContact.Id)), making room for the new contact to be stored (kBucket.Add(newContact)).
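
Putting that together, a ping listener might look like the following sketch. pingRPC is a hypothetical placeholder for whatever call your application uses to send a PING RPC; the emitter and kBucket names follow the snippets above.

emitter.On("kbucket.ping", func(oldContacts Contacts, newContact Contact) {
	for _, old := range oldContacts {
		if pingRPC(old) {
			// Still responding: re-add to move it to the
			// "recently heard from" end of the bucket.
			kBucket.Add(old)
		} else {
			// Unresponsive: evict it and store the new contact instead.
			kBucket.Remove(old.Id)
			kBucket.Add(newContact)
			return
		}
	}
})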

Events
  • kbucket.added

    • newContact Contact: The new contact that was added.
    • Emitted only when newContact was added to the bucket and was not stored in it before.
  • kbucket.ping

    • old Contacts: The slice of contacts to ping.
    • new Contact: The new contact to be added if one of old contacts does not respond.
    • Emitted every time a contact is added that would exceed the capacity of a "don't split" k-bucket it belongs to.
  • kbucket.removed

    • contact Contact: The contact that was removed.
    • Emitted when "contact" was removed from the bucket.
  • kbucket.updated

    • old Contact: The contact that was stored prior to the update.
    • new Contact: The new contact that is now stored after the update.
    • Emitted when a previously existing contact ("previously existing" means oldContact.Id equals newContact.Id) was added to the bucket and replaced with newContact.

Krpc

Low-level implementation of the k-rpc protocol. Ported from Mathias Buus's k-rpc.

Documentation

For more information, please see the Package Docs.

Events
  • krpc.listening

    • addr net.Addr: The address where the Krpc is listening.
    • Emitted when the Krpc starts listening for incoming packets.
  • krpc.close

    • err error: The error that caused the connection to close.
    • Emitted when the connection is closed.
  • krpc.ping

    • oldNodes Contacts: The slice of contacts to ping.
    • swapNew func(deadNode Contact): Callback to invoke with the node that failed to respond, swapping it for the newer contact.
    • Emitted when the bucket is full. Ping the oldNodes and, if one of them fails to respond, call swapNew with that node to swap it for a newer one.
  • krpc.node

    • node Contact: The node that was added.
    • Emitted when a node is added to the kbucket.
  • krpc.query

    • msg Message: The query message.
    • addr net.Addr: The address of the node that sent the query.
    • Emitted when a query is received.
  • krpc.error

    • err error: The error that occurred.
    • Emitted when an error has occurred.
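
For example, a caller might subscribe to query and error events as in this sketch; the event names follow the list above, while the emitter field and the Message handling are assumptions.

emitter.On("krpc.query", func(msg Message, addr net.Addr) {
	// Inspect the incoming query and reply via your own handler.
	fmt.Printf("query from %s: %+v\n", addr, msg)
})

emitter.On("krpc.error", func(err error) {
	fmt.Println("krpc error:", err)
})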

Issues

Submit an issue if you find a bug or have a suggestion.

Contribution

Fork the repo and submit pull requests.

License

This package is licensed under the MIT License.
