Package chainhash provides abstracted hash functionality. This package provides a generic hash type and associated functions that allow the specific hash algorithm to be abstracted.
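As a hedged illustration of the abstraction, the sketch below hashes some bytes and round-trips the result through its string form. It assumes a dcrd/btcd-style chainhash API (HashH, NewHashFromStr, IsEqual) and import path, which may differ in your tree.

    package main

    import (
        "fmt"

        "github.com/decred/dcrd/chaincfg/chainhash"
    )

    func main() {
        // Hash arbitrary bytes with the package's configured hash algorithm.
        h := chainhash.HashH([]byte("test data"))
        fmt.Println(h) // hashes print as hex strings

        // Parse a hash back from its hex string representation.
        h2, err := chainhash.NewHashFromStr(h.String())
        if err != nil {
            panic(err)
        }
        fmt.Println(h.IsEqual(h2)) // true
    }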
Provides symmetric authenticated encryption using 256-bit AES-GCM with a random nonce. Provides a recommended hashing algorithm. The hash function is HMAC-SHA512/256, where SHA512/256 is as described in FIPS 180-4. This construction avoids length-extension attacks while maintaining a widely compatible digest size and better performance on 64-bit systems. Password hashing uses bcrypt with a work factor of 14. Provides encoding and decoding routines for various cryptographic structures. Provides message authentication and asymmetric signatures. Message authentication: HMAC-SHA512/256. This is a slight twist on the highly dependable HMAC-SHA256 that gains performance on 64-bit systems and consistency with our hashing recommendation. Asymmetric signatures: ECDSA using P-256 and SHA-256. ECDSA is the best compromise between cryptographic concerns and support for our internal use cases (e.g. RFC 7518). The Go standard library implementation has some protection against entropy problems, but is not deterministic. See https://github.com/golang/go/commit/8d7bf2291b095d3a2ecaa2609e1101be46d80deb Provides a recommended TLS configuration.
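As a concrete illustration of the AES-GCM recommendation above, here is a minimal sketch using only the Go standard library (it is not this package's own API): a 256-bit key, and a random nonce generated per message and prepended to the ciphertext.

    package main

    import (
        "crypto/aes"
        "crypto/cipher"
        "crypto/rand"
        "fmt"
        "io"
    )

    // encrypt seals plaintext with AES-256-GCM using a random nonce,
    // which is prepended to the returned ciphertext.
    func encrypt(plaintext []byte, key *[32]byte) ([]byte, error) {
        block, err := aes.NewCipher(key[:])
        if err != nil {
            return nil, err
        }
        gcm, err := cipher.NewGCM(block)
        if err != nil {
            return nil, err
        }
        nonce := make([]byte, gcm.NonceSize())
        if _, err := io.ReadFull(rand.Reader, nonce); err != nil {
            return nil, err
        }
        return gcm.Seal(nonce, nonce, plaintext, nil), nil
    }

    // decrypt splits off the nonce and opens the ciphertext.
    func decrypt(ciphertext []byte, key *[32]byte) ([]byte, error) {
        block, err := aes.NewCipher(key[:])
        if err != nil {
            return nil, err
        }
        gcm, err := cipher.NewGCM(block)
        if err != nil {
            return nil, err
        }
        if len(ciphertext) < gcm.NonceSize() {
            return nil, fmt.Errorf("ciphertext too short")
        }
        nonce, ct := ciphertext[:gcm.NonceSize()], ciphertext[gcm.NonceSize():]
        return gcm.Open(nil, nonce, ct, nil)
    }

    func main() {
        var key [32]byte
        if _, err := io.ReadFull(rand.Reader, key[:]); err != nil {
            panic(err)
        }
        ct, _ := encrypt([]byte("hello"), &key) // error handling elided for brevity
        pt, _ := decrypt(ct, &key)
        fmt.Println(string(pt))
    }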
Package highwayhash implements the pseudo-random-function (PRF) HighwayHash. HighwayHash is a fast hash function designed to defend against hash-flooding attacks or to authenticate short-lived messages. HighwayHash is not a general purpose cryptographic hash function and does not provide (strong) collision resistance.
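A minimal sketch of keyed hashing with HighwayHash follows, assuming a minio/highwayhash-style API in which New takes a 32-byte key and returns a standard hash.Hash; treat the import path and constructor name as assumptions.

    package main

    import (
        "crypto/rand"
        "encoding/hex"
        "fmt"

        "github.com/minio/highwayhash"
    )

    func main() {
        // HighwayHash is keyed: the 256-bit key must be kept secret for the
        // hash to serve as a flooding defense or message authenticator.
        key := make([]byte, 32)
        if _, err := rand.Read(key); err != nil {
            panic(err)
        }

        h, err := highwayhash.New(key) // 256-bit output
        if err != nil {
            panic(err)
        }
        h.Write([]byte("short-lived message"))
        fmt.Println(hex.EncodeToString(h.Sum(nil)))
    }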
Package cuckoo provides a Cuckoo Filter, a Bloom filter replacement for approximated set-membership queries. While Bloom filters are well-known space-efficient data structures for answering queries like "is item x in the set?", they do not support deletion, and variants that do (like counting Bloom filters) usually require much more space. Cuckoo filters provide the flexibility to add and remove items dynamically. A cuckoo filter is based on cuckoo hashing (hence the name); it is essentially a cuckoo hash table storing each key's fingerprint. Cuckoo hash tables can be highly compact, so a cuckoo filter can use less space than conventional Bloom filters for applications that require low false-positive rates (< 3%). For details about the algorithm and citations please see "Cuckoo Filter: Better Than Bloom" by Bin Fan, Dave Andersen and Michael Kaminsky (https://www.cs.cmu.edu/~dga/papers/cuckoo-conext2014.pdf). Note: This implementation uses a static bucket size of 4 fingerprints and a fingerprint size of 1 byte, based on my understanding of an optimal bucket/fingerprint/size ratio from the aforementioned paper.
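The following sketch (not this package's code) illustrates the partial-key cuckoo hashing idea from the cited paper: each item is reduced to a small fingerprint, and its two candidate buckets are related by an XOR with the hash of that fingerprint, so either bucket can be recomputed from the other during eviction.

    package main

    import (
        "fmt"
        "hash/fnv"
    )

    const numBuckets = 1 << 16 // power of two so XOR indexing stays in range

    func hash64(b []byte) uint64 {
        h := fnv.New64a()
        h.Write(b)
        return h.Sum64()
    }

    // fingerprint derives a 1-byte fingerprint from the item (0 is reserved
    // for "empty slot", mirroring the bucket layout described above).
    func fingerprint(item []byte) byte {
        fp := byte(hash64(item))
        if fp == 0 {
            fp = 1
        }
        return fp
    }

    // candidateBuckets returns the two buckets an item may occupy.  The second
    // index is derived from the first and the fingerprint's hash, so it can be
    // recomputed from either bucket during eviction (partial-key cuckoo hashing).
    func candidateBuckets(item []byte) (uint64, uint64, byte) {
        fp := fingerprint(item)
        i1 := hash64(item) % numBuckets
        i2 := (i1 ^ hash64([]byte{fp})) % numBuckets
        return i1, i2, fp
    }

    func main() {
        i1, i2, fp := candidateBuckets([]byte("hello"))
        fmt.Printf("fingerprint=%#x buckets=%d,%d\n", fp, i1, i2)
        // The same relation holds starting from i2, because numBuckets is a power of two.
        fmt.Println(i1 == (i2^hash64([]byte{fp}))%numBuckets) // true
    }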
Package dcrutil provides decred-specific convenience functions and types. A Block defines a Decred block that provides easier and more efficient manipulation of raw wire protocol blocks. It also memoizes hashes for the block and its transactions on their first access so subsequent accesses don't have to repeat the relatively expensive hashing operations. A Tx defines a Decred transaction that provides more efficient manipulation of raw wire protocol transactions. It memoizes the hash for the transaction on its first access so subsequent accesses don't have to repeat the relatively expensive hashing operations. The Address interface provides an abstraction for a Decred address. While the most common type is a pay-to-pubkey-hash, Decred already supports others and may well support more in the future. This package currently provides implementations for the pay-to-pubkey, pay-to-pubkey-hash, and pay-to-script-hash address types. To decode/encode an address:
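The doc above ends where a decoding example would normally appear; the hedged sketch below shows the general shape. Note that DecodeAddress's exact signature varies across dcrutil versions (newer releases also take network parameters), so both the import path and the single-argument call are assumptions to check against the version in use.

    package main

    import (
        "fmt"

        "github.com/decred/dcrd/dcrutil"
    )

    func main() {
        // Decode an address from its string form.  The placeholder string below
        // is not a real address and will fail checksum validation.
        addr, err := dcrutil.DecodeAddress("DsPlaceholderAddressxxxxxxxxxxxxxx")
        if err != nil {
            fmt.Println("invalid address:", err)
            return
        }

        // Encoding is the reverse: String returns the address's encoded form.
        fmt.Println(addr.String())
    }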
Package txscript implements the Decred transaction script language. This package provides data structures and functions to parse and execute decred transaction scripts. Decred transaction scripts are written in a stack-based, FORTH-like language. The Decred script language consists of a number of opcodes which fall into several categories such as pushing and popping data to and from the stack, performing basic and bitwise arithmetic, conditional branching, comparing hashes, and checking cryptographic signatures. Scripts are processed from left to right and intentionally do not provide loops. The vast majority of Decred scripts at the time of this writing are of several standard forms which consist of a spender providing a public key and a signature which proves the spender owns the associated private key. This information is used to prove the spender is authorized to perform the transaction. One benefit of using a scripting language is added flexibility in specifying what conditions must be met in order to spend decreds. Errors returned by this package are of type txscript.Error. This allows the caller to programmatically determine the specific error by examining the ErrorCode field of the type asserted txscript.Error while still providing rich error messages with contextual information. A convenience function named IsErrorCode is also provided to allow callers to easily check for a specific error code. See ErrorCode in the package documentation for a full list.
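A hedged sketch of the error-inspection pattern described above follows; it relies only on the ErrorCode field and IsErrorCode helper mentioned in the text, while the specific ErrEvalFalse code and the import path are assumptions.

    package main

    import (
        "fmt"

        "github.com/decred/dcrd/txscript"
    )

    // inspectScriptErr reports whether err came from the script engine and,
    // if so, which error code it carries.
    func inspectScriptErr(err error) {
        if serr, ok := err.(txscript.Error); ok {
            fmt.Printf("script error code %v: %v\n", serr.ErrorCode, serr)
        }

        // The convenience helper avoids the type assertion when only one
        // specific code matters (ErrEvalFalse is used purely as an example).
        if txscript.IsErrorCode(err, txscript.ErrEvalFalse) {
            fmt.Println("script evaluated to false")
        }
    }

    func main() { inspectScriptErr(nil) }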
Package ripemd160 implements the RIPEMD-160 hash algorithm.
Package dcrutil provides decred-specific convenience functions and types. A Block defines a Decred block that provides easier and more efficient manipulation of raw wire protocol blocks. It also memoizes hashes for the block and its transactions on their first access so subsequent accesses don't have to repeat the relatively expensive hashing operations. A Tx defines a Decred transaction that provides more efficient manipulation of raw wire protocol transactions. It memoizes the hash for the transaction on its first access so subsequent accesses don't have to repeat the relatively expensive hashing operations. The Address interface provides an abstraction for a Decred address. While the most common type is a pay-to-pubkey-hash, Decred already supports others and may well support more in the future. This package currently provides implementations for the pay-to-pubkey, pay-to-pubkey-hash, and pay-to-script-hash address types.
Package txscript implements the Decred transaction script language. This package provides data structures and functions to parse and execute decred transaction scripts. Decred transaction scripts are written in a stack-based, FORTH-like language. The Decred script language consists of a number of opcodes which fall into several categories such as pushing and popping data to and from the stack, performing basic and bitwise arithmetic, conditional branching, comparing hashes, and checking cryptographic signatures. Scripts are processed from left to right and intentionally do not provide loops. The vast majority of Decred scripts at the time of this writing are of several standard forms which consist of a spender providing a public key and a signature which proves the spender owns the associated private key. This information is used to prove the spender is authorized to perform the transaction. One benefit of using a scripting language is added flexibility in specifying what conditions must be met in order to spend decreds. Errors returned by this package are of type txscript.Error. This allows the caller to programmatically determine the specific error by examining the ErrorCode field of the type asserted txscript.Error while still providing rich error messages with contextual information. A convenience function named IsErrorCode is also provided to allow callers to easily check for a specific error code. See ErrorCode in the package documentation for a full list.
Package vugu provides core functionality including vugu->go codegen and in-browser DOM syncing running in WebAssembly. See http://www.vugu.org/ Since Vugu projects can have both client-side (running in WebAssembly) as well as server-side functionality, many of the items in this package are available in both environments. Some, however, are either only available or only generally useful in one environment. Common functionality includes the ComponentType interface and the ComponentInst struct corresponding to an instantiated component. VGNode and related structs are used to represent a virtual Document Object Model. It is based on golang.org/x/net/html but with additional fields needed for Vugu. Data hashing is performed by ComputeHash() and can be customized by implementing the DataHasher interface. Client-side code uses JSEnv to maintain a render loop and regenerate virtual DOM and efficiently synchronize it with the browser as needed. DOMEvent is a wrapper around events from the browser and EventEnv is used to synchronize data access when writing event handler code that spawns goroutines. Where appropriate, server-side stubs are available so components can be compiled for both client (WebAssembly) and server (server-side rendering and testing). Server-side code can use ParserGo and ParserGoPkg to parse .vugu files and code generate a corresponding .go file. StaticHTMLEnv can be used to generate static HTML, similar to the output of JSEnv, but can be run on the server. Supported features are approximately the same, minus event handling, which is not applicable to static output.
Package txscript implements the Decred transaction script language. This package provides data structures and functions to parse and execute decred transaction scripts. Decred transaction scripts are written in a stack-based, FORTH-like language. The Decred script language consists of a number of opcodes which fall into several categories such as pushing and popping data to and from the stack, performing basic and bitwise arithmetic, conditional branching, comparing hashes, and checking cryptographic signatures. Scripts are processed from left to right and intentionally do not provide loops. The vast majority of Decred scripts at the time of this writing are of several standard forms which consist of a spender providing a public key and a signature which proves the spender owns the associated private key. This information is used to prove the spender is authorized to perform the transaction. One benefit of using a scripting language is added flexibility in specifying what conditions must be met in order to spend decred. Errors returned by this package are of type txscript.ErrorKind wrapped by txscript.Error which has full support for the standard library errors.Is and errors.As functions. This allows the caller to programmatically determine the specific error while still providing rich error messages with contextual information. See the constants defined with ErrorKind in the package documentation for a full list.
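A hedged sketch of the errors.Is / errors.As pattern described above; the ErrEvalFalse kind and the module import path are assumptions.

    package main

    import (
        "errors"
        "fmt"

        "github.com/decred/dcrd/txscript/v4"
    )

    // classifyScriptErr demonstrates programmatic error inspection.
    func classifyScriptErr(err error) {
        // Match a specific kind directly.
        if errors.Is(err, txscript.ErrEvalFalse) {
            fmt.Println("script evaluated to false")
        }

        // Or unwrap the full error for its contextual message.
        var serr txscript.Error
        if errors.As(err, &serr) {
            fmt.Println("script error:", serr)
        }
    }

    func main() { classifyScriptErr(nil) }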
Package multihash is the Go implementation of https://github.com/multiformats/multihash, or self-describing hashes.
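A brief hedged sketch of wrapping and unwrapping a digest, assuming the go-multihash API (Sum, SHA2_256, Decode); names and import path are assumptions to verify against the module version in use.

    package main

    import (
        "fmt"

        "github.com/multiformats/go-multihash"
    )

    func main() {
        // Wrap a SHA2-256 digest of the input in a multihash: the output is
        // prefixed with the hash function code and the digest length, which
        // is what makes it self-describing.  Passing -1 uses the default length.
        mh, err := multihash.Sum([]byte("multihash"), multihash.SHA2_256, -1)
        if err != nil {
            panic(err)
        }
        fmt.Println(mh.B58String())

        // Decode recovers the function name, length, and raw digest.
        dec, err := multihash.Decode(mh)
        if err != nil {
            panic(err)
        }
        fmt.Println(dec.Name, dec.Length)
    }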
Package digest provides a generalized type to opaquely represent message digests and their operations within the registry. The Digest type is designed to serve as a flexible identifier in a content-addressable system. More importantly, it provides tools and wrappers to work with hash.Hash-based digests with little effort. The format of a digest is simply a string with two parts, dubbed the "algorithm" and the "digest", separated by a colon: algorithm:digest. For example, a sha256 digest is the string "sha256:" followed by the hex-encoded digest bytes. The "algorithm" portion defines both the hashing algorithm used to calculate the digest and the encoding of the resulting digest, which defaults to "hex" if not otherwise specified. Currently, all supported algorithms have their digests encoded in hex strings. In such an example, the string "sha256" is the algorithm and the hex bytes are the "digest". Because the Digest type is simply a string, once a valid Digest is obtained, comparisons are cheap, quick and simple to express with the standard equality operator. The main benefit of using the Digest type is simple verification against a given digest. The Verifier interface, modeled after the stdlib hash.Hash interface, provides a common write sink for digest verification. After writing is complete, calling the Verifier.Verified method will indicate whether or not the stream of bytes matches the target digest. In addition to the above, we intend to add the following features to this package: 1. A Digester type that supports write sink digest calculation. 2. Suspend and resume of ongoing digest calculations to support efficient digest verification in the registry.
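To make the format and the write-sink verification idea concrete, here is a minimal standard-library sketch; it mirrors the described behaviour rather than this package's own Digest and Verifier types.

    package main

    import (
        "crypto/sha256"
        "encoding/hex"
        "fmt"
        "io"
        "strings"
    )

    // fromBytes builds a digest string in the "algorithm:hex" format described above.
    func fromBytes(p []byte) string {
        sum := sha256.Sum256(p)
        return "sha256:" + hex.EncodeToString(sum[:])
    }

    // verify mimics the Verifier pattern: stream the content into a hash and
    // compare the result with the target digest once writing is complete.
    func verify(target string, r io.Reader) (bool, error) {
        h := sha256.New()
        if _, err := io.Copy(h, r); err != nil {
            return false, err
        }
        got := "sha256:" + hex.EncodeToString(h.Sum(nil))
        return got == target, nil
    }

    func main() {
        content := []byte("layer contents")
        dgst := fromBytes(content)
        fmt.Println(dgst)

        ok, _ := verify(dgst, strings.NewReader("layer contents"))
        fmt.Println(ok) // true: a cheap string comparison, as noted above
    }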
Amino is an encoding library that can handle interfaces (like protobuf "oneof") well. This is achieved by prefixing bytes before each "concrete type". A concrete type is some non-interface value (generally a struct) which implements the interface to be (de)serialized. Not all structures need to be registered as concrete types -- only when they will be stored in interface type fields (or interface type slices) do they need to be registered. All interfaces and the concrete types that implement them must be registered. When registering, an interface is represented by a nil pointer of the interface type. Structures that must be deserialized as pointer values must be registered with a pointer value as well. It's OK to (de)serialize such structures in non-pointer (value) form, but when deserializing such structures into an interface field, they will always be deserialized as pointers. All registered concrete types are encoded with leading 4 bytes (called "prefix bytes"), even when it's not held in an interface field/element. In this way, Amino ensures that concrete types (almost) always have the same canonical representation. The first byte of the prefix bytes must not be a zero byte, so there are 2**(8*4)-2**(8*3) possible values. When there are 4096 types registered at once, the probability of there being a conflict is ~ 0.2%. See https://instacalc.com/51189 for estimation. This is assuming that all registered concrete types have unique natural names (e.g. prefixed by a unique entity name such as "com.tendermint/", and not "mined/grinded" to produce a particular sequence of "prefix bytes"). TODO Update instacalc.com link with 255/256 since 0x00 is an escape. Do not mine/grind to produce a particular sequence of prefix bytes, and avoid using dependencies that do so. Since 4 bytes are not sufficient to ensure no conflicts, sometimes it is necessary to prepend more than the 4 prefix bytes for disambiguation. Like the prefix bytes, the disambiguation bytes are also computed from the registered name of the concrete type. There are 3 disambiguation bytes, and in binary form they always precede the prefix bytes. The first byte of the disambiguation bytes must not be a zero byte, so there are 2**(8*3)-2**(8*2) possible values. The prefix bytes never start with a zero byte, so the disambiguation bytes are escaped with 0x00. Notice that the 4 prefix bytes always immediately precede the binary encoding of the concrete type. To compute the disambiguation bytes, we take `hash := sha256(concreteTypeName)`, and drop the leading 0x00 bytes. If, for example, the hash has two leading 0x00 bytes, we drop both. The first 3 bytes are called the "disambiguation bytes". The next 4 bytes are called the "prefix bytes".
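A sketch of the byte derivation described above (sha256 of the registered name, leading 0x00 bytes dropped, then 3 disambiguation bytes followed by 4 prefix bytes). The real library applies further rules (for example, zero bytes are also skipped before the prefix bytes), so treat this purely as an illustration of the idea.

    package main

    import (
        "crypto/sha256"
        "fmt"
    )

    // nameToDisambPrefix derives the disambiguation and prefix bytes from a
    // registered concrete type name, following the description above: hash the
    // name, drop leading 0x00 bytes, then take 3 bytes and the following 4 bytes.
    func nameToDisambPrefix(name string) (disamb [3]byte, prefix [4]byte) {
        sum := sha256.Sum256([]byte(name))
        h := sum[:]
        for len(h) > 0 && h[0] == 0x00 {
            h = h[1:]
        }
        copy(disamb[:], h[:3])
        copy(prefix[:], h[3:7])
        return disamb, prefix
    }

    func main() {
        d, p := nameToDisambPrefix("com.tendermint/ExampleConcreteType")
        fmt.Printf("disambiguation bytes: % x\nprefix bytes: % x\n", d, p)
    }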
Package consistent provides a consistent hashing function with bounded loads. This implementation also adds partitioning logic on top of the original algorithm. For more information about the underlying algorithm, please take a look at https://research.googleblog.com/2017/04/consistent-hashing-with-bounded-loads.html Example use: create a new Consistent instance with New, which can take an initial list of members. Add adds a new Member to the consistent hash ring; a member is just a Go value that implements the Member interface. Remove removes a member from the consistent hash ring. Note that modifying the consistent hash ring redistributes partitions among members using the algorithm described on the Google Research Blog. LocateKey hashes the key and calculates the partition ID with a modulo operation: MOD(hash result, partition count). The owner of each partition is already calculated by New/Add/Remove, so LocateKey just returns the member that is responsible for the key (see the sketch below).
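A hedged sketch of the flow described above, assuming the buraksezer/consistent-style API (Config, Member, Hasher, New, Add, Remove, LocateKey); the field names, import paths, and the xxhash-based Hasher are assumptions.

    package main

    import (
        "fmt"

        "github.com/buraksezer/consistent"
        "github.com/cespare/xxhash"
    )

    // member satisfies the Member interface: anything with a String method.
    type member string

    func (m member) String() string { return string(m) }

    // hasher satisfies the Hasher interface used to place partitions on the ring.
    type hasher struct{}

    func (h hasher) Sum64(data []byte) uint64 { return xxhash.Sum64(data) }

    func main() {
        cfg := consistent.Config{
            PartitionCount:    271,
            ReplicationFactor: 20,
            Load:              1.25, // bound on the load of any member
            Hasher:            hasher{},
        }
        c := consistent.New(nil, cfg)

        c.Add(member("node1.example.com"))
        c.Add(member("node2.example.com"))

        // LocateKey: MOD(hash of key, partition count) selects the partition,
        // whose owner was already computed by New/Add/Remove.
        owner := c.LocateKey([]byte("user-42"))
        fmt.Println(owner.String())

        c.Remove("node2.example.com")
    }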
Package merkletree implements a Merkle Tree capable of storing arbitrary content. A Merkle Tree is a hash tree that provides an efficient way to verify that the contents of a set of data are present and untampered with. At its core, a Merkle Tree is a list of items representing the data that should be verified. Each of these items is inserted into a leaf node and a tree of hashes is constructed bottom up using, at each node, a hash of its left and right children's hashes. This means that the root node will effectively be a hash of all other nodes (hashes) in the tree. This property allows the tree to be reproduced and thus verified using only the hash of the root node of the tree. The benefit of the tree structure is that verifying any single content entry in the tree requires only log2(n) steps in the worst case. Creating a new merkletree requires that the type that the tree will be constructed from implements the Content interface. A slice of the Content items should be created and then passed to the NewTree method. t represents the Merkle Tree and can be verified and manipulated with the API methods described below.
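A hedged sketch of that flow follows: a type implementing the Content interface, a slice of items passed to NewTree, and verification of the tree and of a single entry. The method names (CalculateHash, Equals, MerkleRoot, VerifyTree, VerifyContent) follow the cbergoon/merkletree-style API and are assumptions.

    package main

    import (
        "crypto/sha256"
        "fmt"

        "github.com/cbergoon/merkletree"
    )

    // TestContent implements the Content interface needed to build a tree.
    type TestContent struct {
        s string
    }

    // CalculateHash returns the hash of the underlying data.
    func (t TestContent) CalculateHash() ([]byte, error) {
        h := sha256.Sum256([]byte(t.s))
        return h[:], nil
    }

    // Equals reports whether two content items are identical.
    func (t TestContent) Equals(other merkletree.Content) (bool, error) {
        return t.s == other.(TestContent).s, nil
    }

    func main() {
        list := []merkletree.Content{
            TestContent{s: "Hello"},
            TestContent{s: "Hi"},
            TestContent{s: "Hey"},
        }

        t, err := merkletree.NewTree(list)
        if err != nil {
            panic(err)
        }

        fmt.Printf("root: %x\n", t.MerkleRoot())

        ok, _ := t.VerifyTree() // re-derive the root and compare
        fmt.Println("tree valid:", ok)

        ok, _ = t.VerifyContent(list[0]) // prove a single leaf is present
        fmt.Println("content present:", ok)
    }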
Package graph contains generic implementations of basic graph algorithms. The algorithms in this library can be applied to any graph data structure implementing the two Iterator methods: Order, which returns the number of vertices, and Visit, which iterates over the neighbors of a vertex. All algorithms operate on directed graphs with a fixed number of vertices, labeled from 0 to n-1, and edges with integer cost. An undirected edge {v, w} of cost c is represented by the two directed edges (v, w) and (w, v), both of cost c. A self-loop, an edge connecting a vertex to itself, is both directed and undirected. The type Mutable represents a directed graph with a fixed number of vertices and weighted edges that can be added or removed. The implementation uses hash maps to associate each vertex in the graph with its adjacent vertices. This gives constant time performance for all basic operations. The type Immutable is a compact representation of an immutable graph. The implementation uses lists to associate each vertex in the graph with its adjacent vertices. This makes for fast and predictable iteration: the Visit method produces its elements by reading from a fixed sorted precomputed list. This type supports multigraphs. The subpackage graph/build offers a tool for building virtual graphs. In a virtual graph no vertices or edges are stored in memory, they are instead computed as needed. New virtual graphs are constructed by composing and filtering a set of standard graphs, or by writing functions that describe the edges of a graph. The Basics example shows how to build a plain graph and how to efficiently use the Visit iterator, the key abstraction of this package. The DFS example contains a full implementation of depth-first search. Build a plain graph and visit all of its edges. Show how to use this package by implementing a complete depth-first search.
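A hedged sketch of the two iterator methods in use, assuming the yourbasic/graph-style API (New, AddCost, AddBoth, Visit, Order); the import path and function names are assumptions.

    package main

    import (
        "fmt"

        "github.com/yourbasic/graph"
    )

    func main() {
        // A directed graph with 4 vertices labeled 0..3.
        g := graph.New(4)
        g.AddCost(0, 1, 5) // directed edge 0->1 with cost 5
        g.AddCost(1, 2, 1)
        g.AddBoth(2, 3) // undirected edge {2,3}: both directions

        // Visit iterates over the neighbors of vertex 0; returning true
        // from the callback would abort the iteration early.
        g.Visit(0, func(w int, c int64) bool {
            fmt.Printf("edge 0->%d cost %d\n", w, c)
            return false
        })

        fmt.Println("order:", g.Order()) // number of vertices
    }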
Package safebrowsing implements a client for the Safe Browsing API v4. API v4 emphasizes efficient usage of the network for bandwidth-constrained applications such as mobile devices. It achieves this by maintaining a small portion of the server state locally such that some queries can be answered immediately without any network requests. Thus, fewer API calls are made and less bandwidth is used. At a high level, the implementation does the following. Essentially, the query is presented to three major components: the database, the cache, and the API. Each of these may satisfy the query immediately, or may say that it does not know and that the query should be satisfied by the next component. The goal of the database and cache is to satisfy as many queries as possible to avoid using the API. Starting with a user query, a hash of the query is computed to preserve privacy regarding the exact nature of the query. For example, if the query was for a URL, then this would be the SHA256 hash of the URL in question. Given a query hash, we first check the local database (which is periodically synced with the global Safe Browsing API servers). This database will either tell us that the query is definitely safe, or that it does not have enough information. If we are unsure about the query, we check the local cache, which can be used to satisfy queries immediately if the same query had been made recently. The cache will tell us that the query is either safe, unsafe, or unknown (because it's not in the cache or the entry expired). If we are still unsure about the query, then we finally query the API server, which is guaranteed to return an authoritative answer, assuming no networking failures. For more information, see the API developer's guide:
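The lookup pipeline described above can be sketched generically as follows; this is not the package's actual API, just an illustration of hashing the query and walking the database, cache, and API tiers in order.

    package main

    import (
        "crypto/sha256"
        "fmt"
    )

    // verdict is the answer a lookup tier can give for a query hash.
    type verdict int

    const (
        verdictUnknown verdict = iota // tier cannot decide; ask the next one
        verdictSafe
        verdictUnsafe
    )

    // tier is any component that can try to answer a query: the locally synced
    // database, the cache of recent results, or the remote API.
    type tier interface {
        check(queryHash [32]byte) verdict
    }

    // lookup hashes the query (preserving privacy about its exact contents) and
    // walks the tiers in order until one gives an authoritative answer.
    func lookup(query string, tiers []tier) verdict {
        h := sha256.Sum256([]byte(query))
        for _, t := range tiers {
            if v := t.check(h); v != verdictUnknown {
                return v
            }
        }
        // The final tier (the API) is expected to be authoritative, so reaching
        // this point would indicate a network failure in a real client.
        return verdictUnknown
    }

    type alwaysSafe struct{}

    func (alwaysSafe) check([32]byte) verdict { return verdictSafe }

    func main() {
        fmt.Println(lookup("https://example.com/", []tier{alwaysSafe{}}) == verdictSafe)
    }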
Package immutable provides immutable collection types. Immutable collections provide an efficient, safe way to share collections of data while minimizing locks. The collections in this package provide List, Map, and SortedMap implementations. These act similarly to slices and maps, respectively, except that altering a collection returns a new copy of the collection with that change. Because collections are unable to change, they are safe for multiple goroutines to read from at the same time without a mutex. However, these types of collections come with increased CPU & memory usage as compared with Go's built-in collection types, so please evaluate for your specific use. The List type provides an API similar to Go slices. They allow appending, prepending, and updating of elements. Elements can also be fetched by index or iterated over using a ListIterator. The Map & SortedMap types provide an API similar to Go maps. They allow values to be assigned to unique keys and allow for the deletion of keys. Values can be fetched by key and key/value pairs can be iterated over using the appropriate iterator type. Both map types provide the same API. The SortedMap, however, provides iteration over sorted keys while the Map provides iteration over unsorted keys. Maps provide improved performance and memory usage as compared to SortedMaps. Map types require the use of a Hasher implementation to calculate hashes for their keys and check for key equality. SortedMaps require the use of a Comparer implementation to sort keys in the map. These collection types automatically provide built-in hasher and comparers for int, string, and byte slice keys. If you are using one of these key types then simply pass a nil into the constructor. Otherwise you will need to implement a custom Hasher or Comparer type. Please see the provided implementations for reference.
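A hedged sketch of the copy-on-write behaviour described above, using the pre-generics interface{}-based API (NewList, NewMap with a nil hasher for built-in key types); exact signatures vary between releases, so treat the calls as assumptions.

    package main

    import (
        "fmt"

        "github.com/benbjohnson/immutable"
    )

    func main() {
        // Lists: every mutation returns a new list; the original is unchanged.
        l := immutable.NewList()
        l2 := l.Append("foo")
        l3 := l2.Append("bar")
        fmt.Println(l.Len(), l2.Len(), l3.Len()) // 0 1 2

        // Maps: a nil Hasher is fine for int, string, and []byte keys,
        // as noted above; custom key types need a custom Hasher.
        m := immutable.NewMap(nil)
        m2 := m.Set("answer", 42)
        if v, ok := m2.Get("answer"); ok {
            fmt.Println(v) // 42
        }
        fmt.Println(m.Len()) // 0: the original map is untouched
    }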
Package txscript implements the Decred transaction script language. This package provides data structures and functions to parse and execute decred transaction scripts. Decred transaction scripts are written in a stack-based, FORTH-like language. The Decred script language consists of a number of opcodes which fall into several categories such as pushing and popping data to and from the stack, performing basic and bitwise arithmetic, conditional branching, comparing hashes, and checking cryptographic signatures. Scripts are processed from left to right and intentionally do not provide loops. The vast majority of Decred scripts at the time of this writing are of several standard forms which consist of a spender providing a public key and a signature which proves the spender owns the associated private key. This information is used to prove the spender is authorized to perform the transaction. One benefit of using a scripting language is added flexibility in specifying what conditions must be met in order to spend decred. The errors returned by this package are of type txscript.ErrorKind wrapped by txscript.Error which has full support for the standard library errors.Is and errors.As functions. This allows the caller to programmatically determine the specific error while still providing rich error messages with contextual information. See the constants defined with ErrorKind in the package documentation for a full list.
Package fastsha256 implements the SHA224 and SHA256 hash algorithms as defined in FIPS 180-4.
Package jump implements Google's Jump Consistent Hash. From the paper "A Fast, Minimal Memory, Consistent Hash Algorithm" by John Lamping, Eric Veach (2014). http://arxiv.org/abs/1406.2294
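The algorithm is small enough to show in full; this sketch follows the pseudocode in the cited paper rather than this package's exported API, whose exact signature may differ.

    package main

    import "fmt"

    // Hash maps key to a bucket in [0, numBuckets), following the pseudocode in
    // "A Fast, Minimal Memory, Consistent Hash Algorithm" (Lamping & Veach, 2014).
    // When numBuckets grows from n to n+1, only about 1/(n+1) of keys move.
    func Hash(key uint64, numBuckets int32) int32 {
        var b int64 = -1
        var j int64
        for j < int64(numBuckets) {
            b = j
            key = key*2862933555777941757 + 1
            j = int64(float64(b+1) * (float64(int64(1)<<31) / float64((key>>33)+1)))
        }
        return int32(b)
    }

    func main() {
        fmt.Println(Hash(0xdeadbeef, 10)) // deterministic bucket in [0, 10)
    }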
Package blake2b implements BLAKE2b cryptographic hash function.
Package dht implements a Distributed Hash Table (DHT) part of the BitTorrent protocol, as specified by BEP 5: http://www.bittorrent.org/beps/bep_0005.html BitTorrent uses a "distributed hash table" (DHT) for storing peer contact information for "trackerless" torrents. In effect, each peer becomes a tracker. The protocol is based on the Kademlia DHT protocol and is implemented over UDP. Please note the terminology used to avoid confusion. A "peer" is a client/server listening on a TCP port that implements the BitTorrent protocol. A "node" is a client/server listening on a UDP port implementing the distributed hash table protocol. The DHT is composed of nodes and stores the location of peers. BitTorrent clients include a DHT node, which is used to contact other nodes in the DHT to get the location of peers to download from using the BitTorrent protocol. Standard use involves creating a Server, and calling Announce on it with the details of your local torrent client and infohash of interest.
Package bloom provides data structures and methods for creating Bloom filters. A Bloom filter is a representation of a set of _n_ items, where the main requirement is to make membership queries; _i.e._, whether an item is a member of a set. A Bloom filter has two parameters: _m_, a maximum size (typically a reasonably large multiple of the cardinality of the set to represent) and _k_, the number of hashing functions on elements of the set. (The actual hashing functions are important, too, but this is not a parameter for this implementation). A Bloom filter is backed by a BitSet; a key is represented in the filter by setting the bits at each value of the hashing functions (modulo _m_). Set membership is done by _testing_ whether the bits at each value of the hashing functions (again, modulo _m_) are set. If so, the item is in the set. If the item is actually in the set, a Bloom filter will never fail (the true positive rate is 1.0); but it is susceptible to false positives. The art is to choose _k_ and _m_ correctly. In this implementation, the hashing function used is murmurhash, a non-cryptographic hashing function. This implementation accepts keys for setting and testing as []byte. Thus, to add a string item such as "Love", convert it to a []byte and pass it to Add; similarly, test whether "Love" is in the filter with Test. For numeric data, I recommend that you look into the encoding/binary library to convert values to []byte first. Finally, there is a method to estimate the false positive rate of a particular Bloom filter for a set of size _n_; given the particular hashing scheme, it's best to be empirical about this. Note that estimating the FP rate will clear the Bloom filter. The sketch below puts these pieces together.
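A hedged sketch putting the above together; the constructor and method names (New, Add, Test, EstimateFalsePositiveRate) follow the willf/bloom-style API and should be verified against the release in use.

    package main

    import (
        "encoding/binary"
        "fmt"

        "github.com/willf/bloom"
    )

    func main() {
        n := uint(1000) // expected number of items

        // m = 20*n bits and k = 5 hash functions.
        filter := bloom.New(20*n, 5)

        // Add a string item by converting it to []byte.
        filter.Add([]byte("Love"))

        // Test membership the same way.
        fmt.Println(filter.Test([]byte("Love"))) // true
        fmt.Println(filter.Test([]byte("Hate"))) // false (with high probability)

        // Numeric data: encode it with encoding/binary first.
        buf := make([]byte, 4)
        binary.BigEndian.PutUint32(buf, 100)
        filter.Add(buf)

        // Empirically estimate the false positive rate for n items.
        // Note: as mentioned above, this clears the filter.
        fmt.Println(filter.EstimateFalsePositiveRate(n))
    }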
Package blake3 implements the BLAKE3 cryptographic hash function.
Package ringpop is a library that maintains a consistent hash ring atop a gossip-based membership protocol. It can be used by applications to arbitrarily shard data in a scalable and fault-tolerant manner.
Package redisc implements a redis cluster client on top of the redigo client package. It supports all commands that can be executed on a redis cluster, including pub-sub, scripts and read-only connections to read data from replicas. See http://redis.io/topics/cluster-spec for details. The package defines two main types: Cluster and Conn. Both are described in more detail below, but the Cluster manages the mapping of keys (or more exactly, hash slots computed from keys) to a group of nodes that form a redis cluster, and a Conn manages a connection to this cluster. The package is designed such that for simple uses, or when keys have been carefully named to play well with a redis cluster, a Cluster value can be used as a drop-in replacement for a redis.Pool from the redigo package. Similarly, the Conn type implements redigo's redis.Conn interface (and the augmented redis.ConnWithTimeout one too), so the API to execute commands is the same - in fact the redisc package uses the redigo package as its only third-party dependency. When more control is needed, the package offers some extra behaviour specific to working with a redis cluster: Slot and SplitBySlot functions to compute the slot for a given key and to split a list of keys into groups of keys from the same slot, so that each group can safely be handled using the same connection. *Conn.Bind (or the BindConn package-level helper function) to explicitly specify the keys that will be used with the connection so that the right node is selected, instead of relying on the automatic detection based on the first parameter of the command. *Conn.ReadOnly (or the ReadOnlyConn package-level helper function) to mark a connection as read-only, allowing commands to be served by a replica instead of the master. RetryConn to wrap a connection into one that automatically follows redirections when the cluster moves slots around. Helper functions to deal with cluster-specific errors. The Cluster type manages a redis cluster and offers an interface compatible with redigo's redis.Pool, along with some additional methods specific to a cluster. If the CreatePool function field is set, then a redis.Pool is created to manage connections to each of the cluster's nodes. A call to Get returns a connection from that pool. The Dial method, on the other hand, guarantees that the returned connection will not be managed by a pool, even if CreatePool is set. It calls redigo's redis.Dial function to create the unpooled connection, passing along any DialOptions set on the cluster. If the cluster's CreatePool field is nil, Get behaves the same as Dial. The Refresh method refreshes the cluster's internal mapping of hash slots to nodes. It should typically be called only once, after the cluster is created and before it is used, so that the first connections already benefit from smart routing. It is automatically kept up-to-date based on the redis MOVED responses afterwards. The EachNode method visits each node in the cluster and calls the provided function with a connection to that node, which may be useful to run diagnostics commands on each node or to collect keys across the whole cluster. The Stats method returns the pool statistics for each node, with the node's address as key of the map. A cluster must be closed once it is no longer used to release its resources. The connection returned from Get or Dial is a redigo redis.Conn interface (that also implements redis.ConnWithTimeout), with a concrete type of *Conn.
In addition to the interface's required methods, *Conn adds the methods described below. The returned connection is not yet connected to any node; it is "bound" to a specific node only when a call to Do, Send, Receive or Bind is made. For Do, Send and Receive, the node selection is implicit, it uses the first parameter of the command, and computes the hash slot assuming that first parameter is a key. It then binds the connection to the node corresponding to that slot. If there are no parameters for the command, or if there is no command (e.g. in a call to Receive), a random node is selected. Bind is explicit, it gives control to the caller over which node to select by specifying a list of keys that the caller wishes to handle with the connection. All keys must belong to the same slot, and the connection must not already be bound to a node, otherwise an error is returned. On success, the connection is bound to the node holding the slot of the specified key(s). Because the connection is returned as a redis.Conn interface, a type assertion must be used to access the underlying *Conn and to be able to call Bind; the BindConn package-level function is provided as a helper for this common use-case. The ReadOnly method marks the connection as read-only, meaning that it will attempt to connect to a replica instead of the master node for its slot. Once bound to a node, the READONLY redis command is sent automatically, so it doesn't have to be sent explicitly before use. ReadOnly must be called before the connection is bound to a node, otherwise an error is returned. For the same reason as for Bind, a type assertion must be used to call ReadOnly on a *Conn, so a package-level helper function is also provided, ReadOnlyConn. There is no ReadWrite method, because it can be sent as a normal redis command and will essentially end that connection (all commands will now return MOVED errors). If the connection was wrapped in a RetryConn call, then it will automatically follow the redirection to the master node (see the Redirections section). The connection must be closed after use, to release the underlying resources. The redis cluster may return MOVED and ASK errors when the node that received the command doesn't currently hold the slot corresponding to the key. The package cannot reliably handle those redirections automatically because the redirection error may be returned for a pipeline of commands, some of which may have succeeded. However, a connection can be wrapped by a call to RetryConn, which returns a redis.Conn interface where only calls to Do, Close and Err can succeed. That means pipelining is not supported, and only a single command can be executed at a time, but it will automatically handle MOVED and ASK replies, as well as TRYAGAIN errors. Note that even if RetryConn is not used, the cluster always updates its mapping of slots to nodes automatically by keeping track of MOVED replies. The concurrency model is similar to that of the redigo package: Cluster methods are safe to call concurrently (like redis.Pool). Connections do not support concurrent calls to write methods (Send, Flush) or concurrent calls to the read method (Receive). Connections do allow a concurrent reader and writer. Because the Do method combines the functionality of Send, Flush and Receive, it cannot be called concurrently with other methods. The Bind and ReadOnly methods are safe to call concurrently, but there is not much point in doing so, as both will fail if the connection is already bound. Create and use a cluster.
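A hedged sketch of creating and using a cluster, based on the types and methods described above (Cluster fields, Refresh, Get, RetryConn); the import paths and the exact CreatePool signature are assumptions.

    package main

    import (
        "fmt"
        "time"

        "github.com/gomodule/redigo/redis"
        "github.com/mna/redisc"
    )

    func main() {
        cluster := &redisc.Cluster{
            StartupNodes: []string{":7000", ":7001", ":7002"},
            DialOptions:  []redis.DialOption{redis.DialConnectTimeout(5 * time.Second)},
            CreatePool:   createPool,
        }
        defer cluster.Close()

        // Refresh the slot-to-node mapping once, up front, so the first
        // connections already benefit from smart routing.
        if err := cluster.Refresh(); err != nil {
            fmt.Println("refresh failed:", err)
            return
        }

        // Get returns a pooled connection; the node is selected lazily from
        // the first key of the first command.
        conn := cluster.Get()
        defer conn.Close()

        // Wrap it so MOVED/ASK redirections are followed automatically
        // (up to 3 attempts, pausing before TRYAGAIN retries).
        rc, err := redisc.RetryConn(conn, 3, 100*time.Millisecond)
        if err != nil {
            fmt.Println("retry conn:", err)
            return
        }

        if _, err := rc.Do("SET", "user:1", "alice"); err != nil {
            fmt.Println("set failed:", err)
        }
    }

    // createPool builds a redigo pool for a single cluster node.
    func createPool(addr string, opts ...redis.DialOption) (*redis.Pool, error) {
        return &redis.Pool{
            MaxIdle:     5,
            IdleTimeout: time.Minute,
            Dial: func() (redis.Conn, error) {
                return redis.Dial("tcp", addr, opts...)
            },
        }, nil
    }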
Package update provides functionality to implement secure, self-updating Go programs (or other single-file targets). For complete updating solutions please see Equinox (https://equinox.io) and go-tuf (https://github.com/flynn/go-tuf). This example shows how to update a program remotely from a URL. Go binaries can often be large. It can be advantageous to only ship a binary patch to a client instead of the complete program text of a new version. This example shows how to update a program with a bsdiff binary patch. Other patch formats may be applied by implementing the Patcher interface. Updating executable code on a computer can be a dangerous operation unless you take the appropriate steps to guarantee the authenticity of the new code. While checksum verification is important, it should always be combined with signature verification (next section) to guarantee that the code came from a trusted party. selfupdate validates SHA256 checksums by default, but this is pluggable via the Hash property on the Options struct. This example shows how to guarantee that the newly-updated binary is verified to have an appropriate checksum (that was otherwise retrieved via a secure channel) specified as a hex string. Cryptographic verification of new code from an update is an extremely important way to guarantee the security and integrity of your updates. Verification is performed by validating the signature of a hash of the new file. This means nothing changes if you apply your update with a patch. This example shows how to add signature verification to your updates. To make all of this work an application distributor must first create a public/private key pair and embed the public key into their application. When they issue a new release, the issuer must sign the new executable file with the private key and distribute the signature along with the update. In order to update a Go application with selfupdate, you must distribute it as a single executable. This is often easy, but some applications require static assets (like HTML and CSS asset files or TLS certificates). In order to update applications like these, you'll want to make sure to embed those asset files into the distributed binary with a tool like go-bindata (my favorite): https://github.com/jteeuwen/go-bindata Mechanisms and protocols for determining whether an update should be applied and, if so, which one are out of scope for this package. Please consult go-tuf (https://github.com/flynn/go-tuf) or Equinox (https://equinox.io) for more complete solutions. selfupdate only works for self-updating applications that are distributed as a single binary, i.e. applications that do not have additional assets or dependency files. Updating applications that are distributed as multiple on-disk files is out of scope, although this may change in future versions of this library.
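A hedged sketch of fetching a new binary over HTTPS and applying it with checksum verification through the Options struct mentioned above; the Apply signature and field names follow the go-update-style API and are assumptions, and the URL and checksum are placeholders.

    package main

    import (
        "crypto"
        _ "crypto/sha256" // register SHA-256 so crypto.SHA256 is available
        "encoding/hex"
        "fmt"
        "net/http"

        "github.com/inconshreveable/go-update"
    )

    // doUpdate downloads the new binary from url and replaces the running
    // executable, refusing to apply it if the SHA-256 checksum does not match
    // the hex value obtained over a separate, trusted channel.
    func doUpdate(url, hexChecksum string) error {
        checksum, err := hex.DecodeString(hexChecksum)
        if err != nil {
            return err
        }

        resp, err := http.Get(url)
        if err != nil {
            return err
        }
        defer resp.Body.Close()

        return update.Apply(resp.Body, update.Options{
            Hash:     crypto.SHA256,
            Checksum: checksum,
        })
    }

    func main() {
        // Placeholder URL and checksum (the well-known SHA-256 of the empty
        // string); in practice both come from your release infrastructure.
        err := doUpdate("https://example.com/myapp/latest",
            "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855")
        if err != nil {
            fmt.Println("update failed:", err)
        }
    }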
Package identicon generates random images from hash values. There is no unified standard for identicons; they are typically used at user registration time: data such as the user's email address or client IP (or any other data) is hashed, and an image is generated from that hash value, giving each user a unique avatar without leaking private information. In identicon, the image is divided into nine parts: parts 1, 3, 9 and 7 are the same image at different angles (each rotated a further 90 degrees), and likewise for parts 2, 6, 8 and 4, which keeps the image symmetric and visually pleasing. Part 5 uses a separate image. NOTE: go test generates a large number of random images in the testdata folder under the current directory. To run the tests, this folder must exist and have the appropriate write permissions.
Package graph is a library for creating generic graph data structures and modifying, analyzing, and visualizing them. A graph consists of vertices of type T, which are identified by a hash value of type K. The hash value for a given vertex is obtained using the hashing function passed to New. A hashing function takes a T and returns a K. For primitive types like integers, you may use a predefined hashing function such as IntHash, a function that takes an integer and uses that integer as the hash value at the same time. For storing custom data types, you need to provide your own hashing function; for example, one that takes a City instance and returns its name as the hash value (see the sketch below). Creating a graph using this hashing function will yield a graph of vertices of type City identified by hash values of type string. Adding vertices to a graph of integers is simple. graph.Graph.AddVertex takes a vertex and adds it to the graph. Most functions accept and return only hash values instead of entire instances of the vertex type T. For example, graph.Graph.AddEdge creates an edge between two vertices and accepts the hash values of those vertices. Because this graph uses the IntHash hashing function, the vertex values and hash values are the same. All operations that modify the graph itself are methods of Graph. All other operations are top-level functions provided by this library. For detailed usage examples, take a look at the README.
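A hedged sketch of the calls mentioned above (New with IntHash, AddVertex, AddEdge) plus a custom City hashing function; the import path and exact signatures follow the generic dominikbraun/graph-style API and are assumptions.

    package main

    import (
        "fmt"

        "github.com/dominikbraun/graph"
    )

    // City is a custom vertex type identified by its name.
    type City struct {
        Name string
    }

    func main() {
        // Integers hash to themselves via the predefined IntHash function.
        g := graph.New(graph.IntHash)
        _ = g.AddVertex(1)
        _ = g.AddVertex(2)
        _ = g.AddEdge(1, 2) // AddEdge takes the hash values of the two vertices

        // A custom hashing function: take a City, return its name as the hash.
        cityHash := func(c City) string { return c.Name }
        cities := graph.New(cityHash)
        _ = cities.AddVertex(City{Name: "London"})
        _ = cities.AddVertex(City{Name: "Paris"})
        _ = cities.AddEdge("London", "Paris")

        fmt.Println("graph built: London -> Paris")
    }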
Package pointer implements Andersen's analysis, an inclusion-based pointer analysis algorithm first described in (Andersen, 1994). A pointer analysis relates every pointer expression in a whole program to the set of memory locations to which it might point. This information can be used to construct a call graph of the program that precisely represents the destinations of dynamic function and method calls. It can also be used to determine, for example, which pairs of channel operations operate on the same channel. The package allows the client to request a set of expressions of interest for which the points-to information will be returned once the analysis is complete. In addition, the client may request that a callgraph is constructed. The example program in example_test.go demonstrates both of these features. Clients should not request more information than they need since it may increase the cost of the analysis significantly. Our algorithm is INCLUSION-BASED: the points-to sets for x and y will be related by pts(y) ⊇ pts(x) if the program contains the statement y = x. It is FLOW-INSENSITIVE: it ignores all control flow constructs and the order of statements in a program. It is therefore a "MAY ALIAS" analysis: its facts are of the form "P may/may not point to L", not "P must point to L". It is FIELD-SENSITIVE: it builds separate points-to sets for distinct fields, such as x and y in struct { x, y *int }. It is mostly CONTEXT-INSENSITIVE: most functions are analyzed once, so values can flow in at one call to the function and return out at another. Only some smaller functions are analyzed with consideration of their calling context. It has a CONTEXT-SENSITIVE HEAP: objects are named by both allocation site and context, so the objects returned by two distinct calls to f are distinguished up to the limits of the calling context. It is a WHOLE PROGRAM analysis: it requires SSA-form IR for the complete Go program and summaries for native code. See the (Hind, PASTE'01) survey paper for an explanation of these terms. The analysis is fully sound when invoked on pure Go programs that do not use reflection or unsafe.Pointer conversions. In other words, if there is any possible execution of the program in which pointer P may point to object O, the analysis will report that fact. By default, the "reflect" library is ignored by the analysis, as if all its functions were no-ops, but if the client enables the Reflection flag, the analysis will make a reasonable attempt to model the effects of calls into this library. However, this comes at a significant performance cost, and not all features of that library are yet implemented. In addition, some simplifying approximations must be made to ensure that the analysis terminates; for example, reflection can be used to construct an infinite set of types and values of those types, but the analysis arbitrarily bounds the depth of such types. Most but not all reflection operations are supported. In particular, addressable reflect.Values are not yet implemented, so operations such as (reflect.Value).Set have no analytic effect. The pointer analysis makes no attempt to understand aliasing between the operand x and result y of an unsafe.Pointer conversion; it is as if the conversion allocated an entirely new object. The analysis cannot model the aliasing effects of functions written in languages other than Go, such as runtime intrinsics in C or assembly, or code accessed via cgo. The result is as if such functions are no-ops.
However, various important intrinsics are understood by the analysis, along with built-ins such as append. The analysis currently provides no way for users to specify the aliasing effects of native code. ------------------------------------------------------------------------ The remaining documentation is intended for package maintainers and pointer analysis specialists. Maintainers should have a solid understanding of the referenced papers (especially those by H&L and PKH) before making significant changes. The implementation is similar to that described in (Pearce et al, PASTE'04). Unlike many algorithms which interleave constraint generation and solving, constructing the callgraph as they go, this implementation for the most part observes a phase ordering (generation before solving), with only simple (copy) constraints being generated during solving. (The exception is reflection, which creates various constraints during solving as new types flow to reflect.Value operations.) This improves the traction of presolver optimisations, but imposes certain restrictions, e.g. potential context sensitivity is limited since all variants must be created a priori. A type is said to be "pointer-like" if it is a reference to an object. Pointer-like types include pointers and also interfaces, maps, channels, functions and slices. We occasionally use C's x->f notation to distinguish the case where x is a struct pointer from x.f where x is a struct value. Pointer analysis literature (and our comments) often uses the notation dst=*src+offset to mean something different than what it means in Go. It means: for each node index p in pts(src), the node index p+offset is in pts(dst). Similarly *dst+offset=src is used for store constraints and dst=src+offset for offset-address constraints. Nodes are the key datastructure of the analysis, and have a dual role: they represent both constraint variables (equivalence classes of pointers) and members of points-to sets (things that can be pointed at, i.e. "labels"). Nodes are naturally numbered. The numbering enables compact representations of sets of nodes such as bitvectors (or BDDs); and the ordering enables a very cheap way to group related nodes together. For example, passing n parameters consists of generating n parallel constraints from caller+i to callee+i for 0<=i<n. The zero nodeid means "not a pointer". For simplicity, we generate flow constraints even for non-pointer types such as int. The pointer equivalence (PE) presolver optimization detects which variables cannot point to anything; this includes not only all variables of non-pointer types (such as int) but also variables of pointer-like types if they are always nil, or are parameters to a function that is never called. Each node represents a scalar part of a value or object. Aggregate types (structs, tuples, arrays) are recursively flattened out into a sequential list of scalar component types, and all the elements of an array are represented by a single node. (The flattening of a basic type is a list containing a single node.) Nodes are connected into a graph with various kinds of labelled edges: simple edges (or copy constraints) represent value flow. Complex edges (load, store, etc) trigger the creation of new simple edges during the solving phase. Conceptually, an "object" is a contiguous sequence of nodes denoting an addressable location: something that a pointer can point to.
The first node of an object has a non-nil obj field containing information about the allocation: its size, context, and ssa.Value. Objects include: Many objects have no Go types. For example, the func, map and chan type kinds in Go are all varieties of pointers, but their respective objects are actual functions (executable code), maps (hash tables), and channels (synchronized queues). Given the way we model interfaces, they too are pointers to "tagged" objects with no Go type. And an *ssa.Global denotes the address of a global variable, but the object for a Global is the actual data. So the type of an ssa.Value that creates an object is "off by one indirection": a pointer to the object. The individual nodes of an object are sometimes referred to as "labels". For uniformity, all objects have a non-zero number of fields, even those of the empty type struct{}. (All arrays are treated as if of length 1, so there are no empty arrays. The empty tuple is never address-taken, so is never an object.) A tagged object has the following layout: The T node's typ field is the dynamic type of the "payload": the value v which follows, flattened out. The T node's obj has the otTagged flag. Tagged objects are needed when generalizing across types: interfaces, reflect.Values, reflect.Types. Each of these three types is modelled as a pointer that exclusively points to tagged objects. Tagged objects may be indirect (obj.flags ⊇ {otIndirect}) meaning that the value v is not of type T but *T; this is used only for reflect.Values that represent lvalues. (These are not implemented yet.) Variables of the following "scalar" types may be represented by a single node: basic types, pointers, channels, maps, slices, 'func' pointers, interfaces. Pointers: Nothing to say here, oddly. Basic types (bool, string, numbers, unsafe.Pointer): Currently all fields in the flattening of a type, including non-pointer basic types such as int, are represented in objects and values. Though non-pointer nodes within values are uninteresting, non-pointer nodes in objects may be useful (if address-taken) because they permit the analysis to deduce, for example, that p points to s.x. If we ignored such object fields, we could only say that p points somewhere within s. All other basic types are ignored. Expressions of these types have zero nodeid, and fields of these types within aggregate other types are omitted. unsafe.Pointers are not modelled as pointers, so a conversion of an unsafe.Pointer to *T is (unsoundly) treated equivalent to new(T). Channels: An expression of type 'chan T' is a kind of pointer that points exclusively to channel objects, i.e. objects created by MakeChan (or reflection). 'chan T' is treated like *T. *ssa.MakeChan is treated as equivalent to new(T). *ssa.Send and receive (*ssa.UnOp(ARROW)) are equivalent to store and load, respectively. Maps: An expression of type 'map[K]V' is a kind of pointer that points exclusively to map objects, i.e. objects created by MakeMap (or reflection). map[K]V is treated like *M where M = struct{k K; v V}. *ssa.MakeMap is equivalent to new(M). *ssa.MapUpdate is equivalent to *y=x where *y and x have type M. *ssa.Lookup is equivalent to y=x.v where x has type *M. Slices: A slice []T, which dynamically resembles a struct{array *T, len, cap int}, is treated as if it were just a *T pointer; the len and cap fields are ignored. *ssa.MakeSlice is treated like new([1]T): an allocation of a [1]T. *ssa.Index on a slice is equivalent to a load.
*ssa.IndexAddr on a slice returns the address of the sole element of the slice, i.e. the same address. *ssa.Slice is treated as a simple copy. Functions: An expression of type 'func...' is a kind of pointer that points exclusively to function objects. A function object has the following layout: There may be multiple function objects for the same *ssa.Function due to context-sensitive treatment of some functions. The first node is the function's identity node. Associated with every callsite is a special "targets" variable, whose pts() contains the identity node of each function to which the call may dispatch. Identity words are not otherwise used during the analysis, but we construct the call graph from the pts() solution for such nodes. The following block of contiguous nodes represents the flattened-out types of the parameters ("P-block") and results ("R-block") of the function object. The treatment of free variables of closures (*ssa.FreeVar) is like that of global variables; it is not context-sensitive. *ssa.MakeClosure instructions create copy edges to Captures. A Go value of type 'func' (i.e. a pointer to one or more functions) is a pointer whose pts() contains function objects. The valueNode() for an *ssa.Function returns a singleton for that function. Interfaces: An expression of type 'interface{...}' is a kind of pointer that points exclusively to tagged objects. All tagged objects pointed to by an interface are direct (the otIndirect flag is clear) and concrete (the tag type T is not itself an interface type). The associated ssa.Value for an interface's tagged objects may be an *ssa.MakeInterface instruction, or nil if the tagged object was created by an intrinsic (e.g. reflection). Constructing an interface value causes generation of constraints for all of the concrete type's methods; we can't tell a priori which ones may be called. TypeAssert y = x.(T) is implemented by a dynamic constraint triggered by each tagged object O added to pts(x): a typeFilter constraint if T is an interface type, or an untag constraint if T is a concrete type. A typeFilter tests whether O.typ implements T; if so, O is added to pts(y). An untagFilter tests whether O.typ is assignable to T, and if so, a copy edge O.v -> y is added. ChangeInterface is a simple copy because the representation of tagged objects is independent of the interface type (in contrast to the "method tables" approach used by the gc runtime). y := Invoke x.m(...) is implemented by allocating contiguous P/R blocks for the callsite and adding a dynamic rule triggered by each tagged object added to pts(x). The rule adds param/results copy edges to/from each discovered concrete method. (Q. Why do we model an interface as a pointer to a pair of type and value, rather than as a pair of a pointer to type and a pointer to value? A. Control-flow joins would merge interfaces ({T1}, {V1}) and ({T2}, {V2}) to make ({T1,T2}, {V1,V2}), leading to the infeasible and type-unsafe combination (T1,V2). Treating the value and its concrete type as inseparable makes the analysis type-safe.) Type parameters: Type parameters are not directly supported by the analysis. Calls to generic functions will be left as if they had empty bodies. Users of the package are expected to use the ssa.InstantiateGenerics builder mode when building code that uses or depends on code containing generics. reflect.Value: A reflect.Value is modelled very similar to an interface{}, i.e. as a pointer exclusively to tagged objects, but with two generalizations. 1.
a reflect.Value that represents an lvalue points to an indirect (obj.flags ⊇ {otIndirect}) tagged object, which has a similar layout to a tagged object except that the value is a pointer to the dynamic type. Indirect tagged objects preserve the correct aliasing so that mutations made by (reflect.Value).Set can be observed. Indirect objects only arise when an lvalue is derived from an rvalue by indirection. Whether indirect or not, the concrete type of the tagged object corresponds to the user-visible dynamic type, and the existence of a pointer is an implementation detail. (NB: indirect tagged objects are not yet implemented) 2. The dynamic type tag of a tagged object pointed to by a reflect.Value may be an interface type; it need not be concrete. This arises, for example, when a reflect.Value eface is created for the zero value of an interface type: pts(eface) is a singleton containing an interface{}-tagged object. That tagged object's payload is an interface{} value, i.e. the pts of the payload contains only concrete-tagged objects, although in this example it's the zero interface{} value, so its pts is empty. reflect.Type: Just as in the real "reflect" library, we represent a reflect.Type as an interface whose sole implementation is the concrete type, *reflect.rtype. (This choice is forced on us by go/types: clients cannot fabricate types with arbitrary method sets.) rtype instances are canonical: there is at most one per dynamic type. (rtypes are in fact large structs but since identity is all that matters, we represent them by a single node.) The payload of each *rtype-tagged object is an *rtype pointer that points to exactly one such canonical rtype object. We exploit this by setting the node.typ of the payload to the dynamic type, not '*rtype'. This saves us an indirection in each resolution rule. As an optimisation, *rtype-tagged objects are canonicalized too. Aggregate types: Aggregate types are treated as if all directly contained aggregates are recursively flattened out. Structs: *ssa.Field y = x.f creates a simple edge to y from x's node at f's offset. *ssa.FieldAddr y = &x->f requires a dynamic closure rule to create simple edges for each struct discovered in pts(x). The nodes of a struct consist of a special 'identity' node (whose type is that of the struct itself), followed by the nodes for all the struct's fields, recursively flattened out. A pointer to the struct is a pointer to its identity node. That node allows us to distinguish a pointer to a struct from a pointer to its first field. Field offsets are logical field offsets (plus one for the identity node), so the sizes of the fields can be ignored by the analysis. (The identity node is non-traditional but enables the distinction described above, which is valuable for code comprehension tools. Typical pointer analyses for C, whose purpose is compiler optimization, must soundly model unsafe.Pointer (void*) conversions, and this requires fidelity to the actual memory layout using physical field offsets.) Arrays: We model an array by an identity node (whose type is that of the array itself) followed by a node representing all the elements of the array; the analysis does not distinguish elements with different indices. Effectively, an array is treated like struct{elem T}, a load y=x[i] like y=x.elem, and a store x[i]=y like x.elem=y; the index i is ignored. A pointer to an array is a pointer to its identity node. (A slice is also a pointer to an array's identity node.)
The identity node allows us to distinguish a pointer to an array from a pointer to one of its elements, but it is rather costly because it introduces more offset constraints into the system. Furthermore, sound treatment of unsafe.Pointer would require us to dispense with this node. Arrays may be allocated by Alloc, by make([]T), by calls to append, and via reflection. Tuples (T, ...): Tuples are treated like structs with naturally numbered fields. *ssa.Extract is analogous to *ssa.Field. However, tuples have no identity field since, by construction, they cannot be address-taken. There are three kinds of function call: (1) static "call"-mode calls of functions, (2) dynamic "call"-mode calls of functions, and (3) dynamic "invoke"-mode calls of interface methods. Cases 1 and 2 apply equally to methods and standalone functions. Static calls: A static call consists of three steps: finding the function object of the callee; creating copy edges from the actual parameter value nodes to the P-block in the function object (this includes the receiver if the callee is a method); and creating copy edges from the R-block in the function object to the value nodes for the result of the call. A static function call is little more than two struct value copies between the P/R blocks of caller and callee: Context sensitivity: Static calls (alone) may be treated context sensitively, i.e. each callsite may cause a distinct re-analysis of the callee, improving precision. Our current context-sensitivity policy treats all intrinsics and getter/setter methods in this manner since such functions are small and seem like an obvious source of spurious confluences, though this has not yet been evaluated. Dynamic function calls: Dynamic calls work in a similar manner except that the creation of copy edges occurs dynamically, in a similar fashion to a pair of struct copies in which the callee is indirect: (Recall that the function object's P- and R-blocks are contiguous.) Interface method invocation: For invoke-mode calls, we create a params/results block for the callsite and attach a dynamic closure rule to the interface. For each new tagged object that flows to the interface, we look up the concrete method, find its function object, and connect its P/R blocks to the callsite's P/R blocks, adding copy edges to the graph during solving. Recording call targets: The analysis notifies its clients of each callsite it encounters, passing a CallSite interface. Among other things, the CallSite contains a synthetic constraint variable ("targets") whose points-to solution includes the set of all function objects to which the call may dispatch. It is via this mechanism that the callgraph is made available. Clients may also elect to be notified of callgraph edges directly; internally this just iterates all "targets" variables' pts(·)s. We implement Hash-Value Numbering (HVN), a pre-solver constraint optimization described in Hardekopf & Lin, SAS'07. This is documented in more detail in hvn.go. We intend to add its cousins HR and HU in future. The solver is currently a naive Andersen-style implementation; it does not perform online cycle detection, though we plan to add solver optimisations such as Hybrid- and Lazy-Cycle Detection from (Hardekopf & Lin, PLDI'07). It uses difference propagation (Pearce et al, SQC'04) to avoid redundant re-triggering of closure rules for values already seen. Points-to sets are represented using sparse bit vectors (similar to those used in LLVM and gcc), which are more space- and time-efficient than sets based on Go's built-in map type or dense bit vectors. Nodes are permuted prior to solving so that object nodes (which may appear in points-to sets) are lower numbered than non-object (var) nodes. This improves the density of the set over which the PTSs range, and thus the efficiency of the representation. Partly thanks to avoiding map iteration, the execution of the solver is 100% deterministic, a great help during debugging. Andersen, L. O. 1994.
Program analysis and specialization for the C programming language. Ph.D. dissertation. DIKU, University of Copenhagen. David J. Pearce, Paul H. J. Kelly, and Chris Hankin. 2004. Efficient field-sensitive pointer analysis for C. In Proceedings of the 5th ACM SIGPLAN-SIGSOFT workshop on Program analysis for software tools and engineering (PASTE '04). ACM, New York, NY, USA, 37-42. http://doi.acm.org/10.1145/996821.996835 David J. Pearce, Paul H. J. Kelly, and Chris Hankin. 2004. Online Cycle Detection and Difference Propagation: Applications to Pointer Analysis. Software Quality Control 12, 4 (December 2004), 311-337. http://dx.doi.org/10.1023/B:SQJO.0000039791.93071.a2 David Grove and Craig Chambers. 2001. A framework for call graph construction algorithms. ACM Trans. Program. Lang. Syst. 23, 6 (November 2001), 685-746. http://doi.acm.org/10.1145/506315.506316 Ben Hardekopf and Calvin Lin. 2007. The ant and the grasshopper: fast and accurate pointer analysis for millions of lines of code. In Proceedings of the 2007 ACM SIGPLAN conference on Programming language design and implementation (PLDI '07). ACM, New York, NY, USA, 290-299. http://doi.acm.org/10.1145/1250734.1250767 Ben Hardekopf and Calvin Lin. 2007. Exploiting pointer and location equivalence to optimize pointer analysis. In Proceedings of the 14th international conference on Static Analysis (SAS'07), Hanne Riis Nielson and Gilberto Filé (Eds.). Springer-Verlag, Berlin, Heidelberg, 265-280. Atanas Rountev and Satish Chandra. 2000. Off-line variable substitution for scaling points-to analysis. In Proceedings of the ACM SIGPLAN 2000 conference on Programming language design and implementation (PLDI '00). ACM, New York, NY, USA, 47-56. http://doi.acm.org/10.1145/349299.349310 This program demonstrates how to use the pointer analysis to obtain a conservative call-graph of a Go program. It also shows how to compute the points-to set of a variable, in this case, (C).f's ch parameter.
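The original example program is not reproduced here. What follows is a rough sketch of driving the analysis, assuming the golang.org/x/tools/go/pointer, go/ssa and go/packages APIs (packages.Load, ssautil.AllPackages, ssautil.MainPackages, pointer.Analyze); the import path example.com/myprog is a placeholder for a main package, and points-to queries for specific values such as a parameter would be registered with Config.AddQuery before calling Analyze.

    package main

    import (
        "fmt"

        "golang.org/x/tools/go/packages"
        "golang.org/x/tools/go/pointer"
        "golang.org/x/tools/go/ssa"
        "golang.org/x/tools/go/ssa/ssautil"
    )

    func main() {
        // Load and type-check the target program (placeholder import path).
        cfg := &packages.Config{Mode: packages.LoadAllSyntax}
        pkgs, err := packages.Load(cfg, "example.com/myprog")
        if err != nil {
            panic(err)
        }

        // Build SSA form; InstantiateGenerics is recommended for generic code.
        prog, _ := ssautil.AllPackages(pkgs, ssa.InstantiateGenerics)
        prog.Build()

        // Analyse the main packages and request the call graph.
        conf := &pointer.Config{
            Mains:          ssautil.MainPackages(prog.AllPackages()),
            BuildCallGraph: true,
        }
        // conf.AddQuery(v) would request pts(v) for a pointer-like ssa.Value.
        result, err := pointer.Analyze(conf)
        if err != nil {
            panic(err)
        }

        // Each edge reflects one entry in a callsite's "targets" points-to set.
        for _, n := range result.CallGraph.Nodes {
            for _, e := range n.Out {
                fmt.Println(e.Caller.Func, "->", e.Callee.Func)
            }
        }
    }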
Package dht implements the Distributed Hash Table (DHT) part of the BitTorrent protocol, as specified by BEP 5: http://www.bittorrent.org/beps/bep_0005.html BitTorrent uses a "distributed hash table" (DHT) for storing peer contact information for "trackerless" torrents. In effect, each peer becomes a tracker. The protocol is based on the Kademlia DHT protocol and is implemented over UDP. Please note the terminology used to avoid confusion. A "peer" is a client/server listening on a TCP port that implements the BitTorrent protocol. A "node" is a client/server listening on a UDP port implementing the distributed hash table protocol. The DHT is composed of nodes and stores the location of peers. BitTorrent clients include a DHT node, which is used to contact other nodes in the DHT to get the location of peers to download from using the BitTorrent protocol. Standard use involves creating a Server, and calling Announce on it with the details of your local torrent client and the infohash of interest.
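A rough sketch of that flow is below, assuming the github.com/anacrolix/dht/v2 API (NewServer, NewDefaultServerConfig, Server.Announce, Announce.Peers); signatures vary between versions of the package, and the infohash and port shown are placeholders.

    package main

    import (
        "fmt"

        "github.com/anacrolix/dht/v2"
    )

    func main() {
        s, err := dht.NewServer(dht.NewDefaultServerConfig())
        if err != nil {
            panic(err)
        }
        defer s.Close()

        var infoHash [20]byte // fill in the 20-byte infohash of interest

        // Announce our torrent client's TCP port and collect peers as the
        // traversal discovers them.
        a, err := s.Announce(infoHash, 6881, false)
        if err != nil {
            panic(err)
        }
        defer a.Close()

        for v := range a.Peers {
            for _, p := range v.Peers {
                fmt.Println("peer:", p)
            }
        }
    }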
Package argon2id provides a convenience wrapper around Go's golang.org/x/crypto/argon2 implementation, making it simpler to securely hash and verify passwords using Argon2. It enforces use of the Argon2id algorithm variant and cryptographically-secure random salts.
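A brief sketch of typical use, assuming the github.com/alexedwards/argon2id API (CreateHash, ComparePasswordAndHash, DefaultParams); the password literal is illustrative only.

    package main

    import (
        "fmt"

        "github.com/alexedwards/argon2id"
    )

    func main() {
        // CreateHash generates a random salt and returns an encoded hash in
        // the standard $argon2id$... format.
        hash, err := argon2id.CreateHash("pa$$word", argon2id.DefaultParams)
        if err != nil {
            panic(err)
        }

        // ComparePasswordAndHash re-derives the key with the parameters and
        // salt embedded in the hash and compares in constant time.
        match, err := argon2id.ComparePasswordAndHash("pa$$word", hash)
        if err != nil {
            panic(err)
        }
        fmt.Println("match:", match)
    }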
Goversion scans a directory tree and, for every executable it finds, prints the Go version used to build that executable. Usage: The list of paths can be individual files or directories; if the latter, goversion scans all files in the directory tree, not following symlinks. Goversion scans inside of tar or gzipped tar archives that it finds (named *.tar, *.tar.gz, or *.tgz), but not recursively. The -crypto flag causes goversion to print additional information about the crypto libraries linked into each executable. The -m flag causes goversion to print the list of modules found in the executable, along with version information. The -mh flag causes goversion to print the list of modules found in the executable, along with version and hash information. The -v flag causes goversion to print information about every file it considers. Scan /usr/bin for Go binaries and print their versions:
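For instance, the scan described above would be invoked roughly as follows (output varies by system; the second path and tool name are placeholders for a module-aware scan):

    goversion /usr/bin
    goversion -m /usr/local/bin/sometool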
Package redigomock is a mock for the redigo library (a Redis client). Redigomock registers commands together with their expected results in an internal global variable. When a command is executed via the Conn interface, the mock looks up the corresponding result in this global variable. To start a mocked connection just do the following: Now you can inject it wherever your system needs a redigo.Conn because it satisfies all interface requirements. Before running your tests you need, besides mocking the connection, to register the expected results. For that you can generate commands with the expected results. As the Expect method from Command receives anything (interface{}), another method was created to easily map the result to your structure. For that use ExpectMap: You should also test the error cases, and you can do it in the same way as a normal result. Sometimes you will want to register a command regardless of the arguments, and you can do it with the method GenericCommand (mainly with HMSET). All commands are registered in a global variable, so they will be there until all your test cases end. As good practice in test writing, you should therefore clear the mock state at the beginning of each test case. Let's see a full test example. Imagine a Person structure and a function that retrieves this person from Redis using the redigo library (file person.go): Now we need to test it, so let's create the corresponding test with redigomock (file person_test.go): When you use redis as a persistent list, you might want to call the same redis command multiple times. For example: To test it, you can chain redis responses. Let's write a test case: In the first iteration of the loop redigomock would return "www.some.url.com", then "www.another.url.com" and finally redis.ErrNil. Sometimes providing expected arguments to redigomock at compile time can be too constraining. Let's imagine you use redis hash sets to store some data, along with the timestamp of the last data update. Let's expand our Person struct: And add a function updating personal data (the phone number, for example). Please notice that the update timestamp can't be determined at compile time: Unit test: As you can see, at the position of the current timestamp redigomock is told to match the AnyInt struct created by the NewAnyInt() method. The AnyInt struct will match any integer passed to redigomock from the tested method. Please see the fuzzyMatch.go file for more details.
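The code referenced above is not reproduced here; the following is a condensed sketch of such a test, assuming the github.com/rafaeljusto/redigomock API (NewConn, Command, ExpectMap, Clear) and the github.com/gomodule/redigo/redis package (import paths vary by version). The Person type, field values, and the retrievePerson helper are illustrative stand-ins for the code under test.

    package person

    import (
        "testing"

        "github.com/gomodule/redigo/redis"
        "github.com/rafaeljusto/redigomock"
    )

    type Person struct {
        Name string `redis:"name"`
        Age  int    `redis:"age"`
    }

    // retrievePerson stands in for the function under test: it loads a Person
    // hash from Redis through any redis.Conn, including the mock.
    func retrievePerson(conn redis.Conn, id string) (Person, error) {
        var p Person
        values, err := redis.Values(conn.Do("HGETALL", "person:"+id))
        if err != nil {
            return p, err
        }
        err = redis.ScanStruct(values, &p)
        return p, err
    }

    func TestRetrievePerson(t *testing.T) {
        conn := redigomock.NewConn()
        defer conn.Clear() // reset registered commands so other tests start clean

        // Register the expected command and map the reply onto our struct fields.
        conn.Command("HGETALL", "person:1").ExpectMap(map[string]string{
            "name": "Mr. Johson",
            "age":  "42",
        })

        p, err := retrievePerson(conn, "1")
        if err != nil {
            t.Fatal(err)
        }
        if p.Name != "Mr. Johson" || p.Age != 42 {
            t.Errorf("unexpected person: %+v", p)
        }
    }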
Package pgpwordlist provides the PGP word list for representing hashes and seeds in an easier, human-readable form. It provides the word list for use when generating a human-readable hash or seed, as well as functions for working with it.
Package structhash creates hash strings from arbitrary Go data structures.
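A small sketch assuming the github.com/cnf/structhash API (Hash, which takes a value and a version number and returns a versioned hash string); the struct and its field values are illustrative.

    package main

    import (
        "fmt"

        "github.com/cnf/structhash"
    )

    type Options struct {
        Name    string
        Retries int
        Tags    []string
    }

    func main() {
        opts := Options{Name: "job", Retries: 3, Tags: []string{"a", "b"}}

        // Hash returns a deterministic, versioned string for the structure,
        // so equal structures always produce the same hash.
        h, err := structhash.Hash(opts, 1)
        if err != nil {
            panic(err)
        }
        fmt.Println(h)
    }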
An implementation of Consistent Hashing and Consistent Hashing With Bounded Loads. https://en.wikipedia.org/wiki/Consistent_hashing https://research.googleblog.com/2017/04/consistent-hashing-with-bounded-loads.html
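No particular library API is shown here; the following is a minimal, self-contained sketch of the classic consistent-hashing ring (without bounded loads) to illustrate the idea: each node is hashed onto a ring at several virtual positions, and a key is assigned to the first node position clockwise from the key's hash.

    package main

    import (
        "fmt"
        "hash/fnv"
        "sort"
        "strconv"
    )

    type Ring struct {
        replicas int               // virtual nodes per real node
        hashes   []uint32          // sorted ring positions
        owner    map[uint32]string // ring position -> node name
    }

    func hashOf(s string) uint32 {
        h := fnv.New32a()
        h.Write([]byte(s))
        return h.Sum32()
    }

    func NewRing(replicas int) *Ring {
        return &Ring{replicas: replicas, owner: map[uint32]string{}}
    }

    func (r *Ring) Add(node string) {
        for i := 0; i < r.replicas; i++ {
            h := hashOf(node + "#" + strconv.Itoa(i))
            r.hashes = append(r.hashes, h)
            r.owner[h] = node
        }
        sort.Slice(r.hashes, func(i, j int) bool { return r.hashes[i] < r.hashes[j] })
    }

    func (r *Ring) Get(key string) string {
        h := hashOf(key)
        // First ring position >= h, wrapping to position 0 off the end.
        i := sort.Search(len(r.hashes), func(i int) bool { return r.hashes[i] >= h })
        if i == len(r.hashes) {
            i = 0
        }
        return r.owner[r.hashes[i]]
    }

    func main() {
        ring := NewRing(10)
        ring.Add("node-a")
        ring.Add("node-b")
        ring.Add("node-c")
        fmt.Println("user:42 ->", ring.Get("user:42"))
    }

Virtual replicas spread each node around the ring, so adding or removing a node only remaps the keys that fell on its arcs; the bounded-loads variant additionally caps how many keys any single node may own.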
Package sortedset provides a data structure that allows fast access to elements in a set by key or by score (order). It is inspired by the Sorted Set from Redis. Every node in the set is associated with these properties: each node has a key; while keys are unique, scores may be repeated. Nodes are kept in order (from low score to high score) as they are inserted rather than being sorted afterwards; if scores are equal, nodes are ordered by key in lexicographic order. Each node in the set can also be accessed by rank, which represents its position in the sorted set. The Sorted Set is implemented internally with a skip list and a hash map. With sorted sets you can add, remove, or update nodes very quickly (in time proportional to the logarithm of the number of nodes). You can also get ranges by score or by rank (position) very quickly. Accessing the middle of a sorted set is also fast, so you can use a Sorted Set as a smart list of non-repeating nodes where you can quickly access everything you need: nodes in order, fast existence tests, and fast access to nodes in the middle. A typical use case for a sorted set is a leader board in a massive online game, where every time a new score is submitted you update it using the AddOrUpdate() method. You can easily take the top users using the GetByRankRange() method, and, given a user id, you can return its rank in the listing using the FindRank() method; see the sketch below. Using FindRank() and GetByRankRange() together you can show users with a score similar to a given user. All very quickly. Examples
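A leader-board sketch assuming the github.com/wangjia184/sortedset API (New, AddOrUpdate, GetByRankRange, FindRank and the node accessors Key/Score); player names, scores, and the Redis-style negative rank convention are assumptions worth checking against the package you use.

    package main

    import (
        "fmt"

        "github.com/wangjia184/sortedset"
    )

    func main() {
        board := sortedset.New()

        // Submit or update scores; the key is unique, the score orders nodes.
        board.AddOrUpdate("alice", 2500, "Alice")
        board.AddOrUpdate("bob", 1800, "Bob")
        board.AddOrUpdate("carol", 3200, "Carol")

        // Top 2 players, highest score first (assuming negative ranks count
        // from the highest-scoring end, as in Redis).
        for _, n := range board.GetByRankRange(-1, -2, false) {
            fmt.Println(n.Key(), n.Score())
        }

        // Rank of a single player (rank 1 is the lowest score).
        fmt.Println("bob's rank:", board.FindRank("bob"))
    }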
Package murmur3 provides an amd64-native implementation (with a generic Go fallback) of the murmur3 hash algorithm for strings and slices. Assembly is provided for amd64 on Go 1.5+; pull requests are welcome for other architectures.
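A tiny sketch assuming a murmur3 package that exposes Sum32 and Sum64 over a byte slice, as the common Go murmur3 implementations do (the import path shown is one such implementation).

    package main

    import (
        "fmt"

        "github.com/twmb/murmur3"
    )

    func main() {
        data := []byte("hello, murmur3")
        // Non-cryptographic digests; fast, but no collision resistance.
        fmt.Printf("32-bit: %#x\n", murmur3.Sum32(data))
        fmt.Printf("64-bit: %#x\n", murmur3.Sum64(data))
    }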
Package passlib provides a simple password hashing and verification interface abstracting multiple password hashing schemes. After initialisation, most users need only concern themselves with the functions Hash and Verify, which use the default context and sensible defaults. You should initialise the library before using it with a single call; see func UseDefaults for details and the sketch below.
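A short sketch assuming the gopkg.in/hlandau/passlib.v1 API (UseDefaults, Hash, Verify); the defaults constant shown is one of the documented policy dates and may need adjusting for the version you depend on.

    package main

    import (
        "fmt"

        "gopkg.in/hlandau/passlib.v1"
    )

    func main() {
        // Initialise the library with a pinned defaults policy before use.
        if err := passlib.UseDefaults(passlib.Defaults20180601); err != nil {
            panic(err)
        }

        hash, err := passlib.Hash("correct horse battery staple")
        if err != nil {
            panic(err)
        }

        // Verify returns a non-empty newHash when the stored hash should be
        // upgraded to the currently preferred scheme.
        newHash, err := passlib.Verify("correct horse battery staple", hash)
        if err != nil {
            panic(err)
        }
        fmt.Println("verified; upgrade needed:", newHash != "")
    }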
Amino is an encoding library that can handle interfaces (like protobuf "oneof") well. This is achieved by prefixing bytes before each "concrete type". A concrete type is some non-interface value (generally a struct) which implements the interface to be (de)serialized. Not all structures need to be registered as concrete types -- only when they will be stored in interface type fields (or interface type slices) do they need to be registered. All interfaces and the concrete types that implement them must be registered. Notice that an interface is represented by a nil pointer. Structures that must be deserialized as pointer values must be registered with a pointer value as well. It's OK to (de)serialize such structures in non-pointer (value) form, but when deserializing such structures into an interface field, they will always be deserialized as pointers. All registered concrete types are encoded with leading 4 bytes (called "prefix bytes"), even when it's not held in an interface field/element. In this way, Amino ensures that concrete types (almost) always have the same canonical representation. The first byte of the prefix bytes must not be a zero byte, so there are 2**(8*4)-2**(8*3) possible values. When there are 4096 types registered at once, the probability of there being a conflict is ~ 0.2%. See https://instacalc.com/51189 for estimation. This is assuming that all registered concrete types have unique natural names (e.g. prefixed by a unique entity name such as "com.tendermint/", and not "mined/grinded" to produce a particular sequence of "prefix bytes"). TODO Update instacalc.com link with 255/256 since 0x00 is an escape. Do not mine/grind to produce a particular sequence of prefix bytes, and avoid using dependencies that do so. Since 4 bytes are not sufficient to ensure no conflicts, sometimes it is necessary to prepend more than the 4 prefix bytes for disambiguation. Like the prefix bytes, the disambiguation bytes are also computed from the registered name of the concrete type. There are 3 disambiguation bytes, and in binary form they always precede the prefix bytes. The first byte of the disambiguation bytes must not be a zero byte, so there are 2**(8*3)-2**(8*2) possible values. The prefix bytes never start with a zero byte, so the disambiguation bytes are escaped with 0x00. Notice that the 4 prefix bytes always immediately precede the binary encoding of the concrete type. To compute the disambiguation bytes, we take `hash := sha256(concreteTypeName)`, and drop the leading 0x00 bytes. In the example above, hash has two leading 0x00 bytes, so we drop them. The first 3 bytes are called the "disambiguation bytes" (in angle brackets). The next 4 bytes are called the "prefix bytes" (in square brackets).
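The derivation of these bytes can be sketched directly from the description above. The following is a minimal, self-contained sketch (not the Amino library itself): it hashes an illustrative registered name with SHA-256, drops leading 0x00 bytes, takes the 3 disambiguation bytes, then drops leading 0x00 bytes again (so the prefix never starts with a zero byte) and takes the 4 prefix bytes.

    package main

    import (
        "crypto/sha256"
        "fmt"
    )

    // dropLeadingZeros removes any leading 0x00 bytes.
    func dropLeadingZeros(b []byte) []byte {
        for len(b) > 0 && b[0] == 0x00 {
            b = b[1:]
        }
        return b
    }

    // nameToDisambPrefix derives the disambiguation and prefix bytes from a
    // registered concrete type name, per the rule described above. In
    // practice SHA-256 output always leaves enough non-zero bytes for this.
    func nameToDisambPrefix(name string) (disamb [3]byte, prefix [4]byte) {
        h := sha256.Sum256([]byte(name))
        rest := dropLeadingZeros(h[:])
        copy(disamb[:], rest[:3])
        rest = dropLeadingZeros(rest[3:])
        copy(prefix[:], rest[:4])
        return
    }

    func main() {
        // The registered name below is illustrative only.
        d, p := nameToDisambPrefix("com.tendermint/MyConcreteType")
        fmt.Printf("disambiguation bytes: % X\nprefix bytes: % X\n", d, p)
    }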
Package nodb is a high performance embedded NoSQL database. nodb supports various data structures such as kv, list, hash and zset, like Redis. Other features include binlog replication and data with a limited time-to-live. First create a nodb instance before use: cfg is a Config instance which contains configuration for nodb, such as DataDir (the root directory where nodb stores its data). After you create a nodb instance, you can select a DB to store your data: a DB must be selected by an index; nodb supports only 16 databases, so the index range is [0-15]. KV is the most basic nodb type, like any other key-value database. List is simply a list of values, sorted by insertion order. You can push or pop values at the list head (left) or tail (right). Hash is a map between fields and values. ZSet is a sorted collection of values. Every member of a zset is associated with a score, an int64 value used to sort members from smallest to greatest score. Members are unique, but scores may repeat. nodb supports binlog, so you can sync the binlog to another server for replication. If you want to enable binlog support, set UseBinLog to true in the config.
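A quick sketch of the flow above, assuming the github.com/lunny/nodb API (Open, Select, and the KV methods Set/Get) and its config subpackage (NewConfigDefault, DataDir); the data directory and keys are illustrative.

    package main

    import (
        "fmt"

        "github.com/lunny/nodb"
        "github.com/lunny/nodb/config"
    )

    func main() {
        cfg := config.NewConfigDefault()
        cfg.DataDir = "/tmp/nodb-demo" // root directory for nodb's data files

        l, err := nodb.Open(cfg)
        if err != nil {
            panic(err)
        }
        defer l.Close()

        // Select one of the 16 databases (index 0-15).
        db, err := l.Select(0)
        if err != nil {
            panic(err)
        }

        if err := db.Set([]byte("greeting"), []byte("hello")); err != nil {
            panic(err)
        }
        v, err := db.Get([]byte("greeting"))
        if err != nil {
            panic(err)
        }
        fmt.Println(string(v))
    }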