Package lane provides queue, priority queue, stack and deque data structure implementations. It was designed with simplicity, performance, and concurrent usage in mind.
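As an illustration, here is a minimal sketch of enqueueing and dequeueing, assuming the pre-generics (v1) lane API with NewQueue and NewPQueue; names differ in newer, generic releases:

    queue := lane.NewQueue()
    queue.Enqueue("job-1")
    item := queue.Dequeue() // "job-1"

    pq := lane.NewPQueue(lane.MINPQ)
    pq.Push("low-priority", 10)
    pq.Push("high-priority", 1)
    value, priority := pq.Pop() // "high-priority", 1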
Package goncurses is a new curses (ncurses) library for the Go programming language. It implements all the ncurses extension libraries: form, menu and panel. Minimal operation would consist of initializing the display: It is important to always call End() before your program exits. If you fail to do so, the terminal will not perform properly and will either need to be reset or restarted completely. CAUTION: Calls to ncurses functions are normally not atomic nor reentrant and therefore extreme care should be taken to ensure ncurses functions are not called concurrently. Specifically, never write data to the same window concurrently nor accept input and send output to the same window as both alter the underlying C data structures in a non safe manner. Ideally, you should structure your program to ensure all ncurses related calls happen in a single goroutine. This is probably most easily achieved via channels and Go's built-in select. Alternatively, or additionally, you can use a mutex to protect any calls in multiple goroutines from happening concurrently. Failure to do so will result in unpredictable and undefined behaviour in your program. The examples directory contains demonstrations of many of the capabilities goncurses can provide.
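A minimal sketch of that initialization flow, assuming the github.com/rthornton128/goncurses import path and the conventional gc alias:

    import (
        "log"

        gc "github.com/rthornton128/goncurses"
    )

    func main() {
        stdscr, err := gc.Init()
        if err != nil {
            log.Fatal(err)
        }
        defer gc.End() // always restore the terminal state before exiting

        stdscr.Print("Hello, ncurses!")
        stdscr.Refresh()
        stdscr.GetChar() // wait for a key press
    }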
Package goraph implements graph data structure and algorithms.
Package badgerhold is an indexing and querying layer on top of a Badger DB. The goal is to allow easy, persistent storage and retrieval of Go types. BadgerDB is an embedded key-value store, and badgerhold serves a similar use case, but with a higher-level interface for common uses of Badger. BadgerHold deals directly with Go types. When inserting data, you pass in your structure directly. When querying data, you pass in a pointer to a slice of the type you want to return. By default, Gob encoding is used. You can put multiple different types into the same DB file and they (and their indexes) will be stored separately. BadgerHold will automatically create an index for any struct field tagged with "badgerholdIndex". The first field specified in a query will be used as the index (if one exists). Queries are built by chaining together criteria that apply to a set of fields:
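For example, a sketch of an indexed insert and a chained query, loosely following the badgerhold README (the Item type and field names are illustrative):

    type Item struct {
        ID       uint64
        Category string `badgerholdIndex:"Category"`
        Created  time.Time
    }

    options := badgerhold.DefaultOptions
    options.Dir = "data"
    options.ValueDir = "data"

    store, err := badgerhold.Open(options)
    if err != nil {
        log.Fatal(err)
    }
    defer store.Close()

    err = store.Insert(item.ID, &item) // gob-encoded; the Category index is updated automatically

    var result []Item
    err = store.Find(&result,
        badgerhold.Where("Category").Eq("blog").And("Created").Ge(time.Now().AddDate(0, -1, 0)))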
Package testdeep allows extremely flexible deep comparison. It is built for testing. It is a Go rewrite and adaptation of the wonderful Test::Deep Perl module. In Go, comparing data structures is usually done using reflect.DeepEqual or a package that uses this function behind the scenes. This function works very well, but it is not flexible: both compared structures must match exactly. The purpose of go-testdeep is to do its best to introduce this missing flexibility using "operators" when the expected value (or one of its components) cannot be matched exactly. The testdeep package should not be used in new code, even though it remains available for backward compatibility; use the td package instead. All variables and types of the testdeep package are aliases of the corresponding functions and types of the td package, and exist only for compatibility purposes. For easy HTTP API testing, see the tdhttp package. For test suites that are just as easy, see the tdsuite package.
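A short sketch of a test using the td package and an operator where an exact match is not possible (GetPerson and Person are hypothetical names for the code under test):

    import (
        "testing"

        "github.com/maxatome/go-testdeep/td"
    )

    func TestGetPerson(t *testing.T) {
        got := GetPerson("Bob") // hypothetical function under test

        // Exact comparison of the whole structure.
        td.Cmp(t, got, Person{Name: "Bob", Age: 42})

        // Flexible comparison using an operator.
        td.Cmp(t, got.Age, td.Between(40, 45))
    }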
Package nodb is a high performance embedded NoSQL store. nodb supports various data structures such as kv, list, hash and zset, much like redis. Other features include binlog replication and data with a limited time-to-live. First create a nodb instance before use: cfg is a Config instance which contains configuration for nodb, like DataDir (the root directory where nodb stores its data). After you create a nodb instance, you can select a DB to store your data: a DB must be selected by an index; nodb supports only 16 databases, so the index range is [0-15]. KV is the most basic nodb type, like any other key-value database. A list is simply a list of values, sorted by insertion order. You can push or pop values at the list head (left) or tail (right). A hash is a map between fields and values. A zset is a sorted collection of values. Every member of a zset is associated with a score, an int64 value used for sorting from smallest to greatest. Members are unique, but scores may repeat. nodb supports binlog, so you can sync the binlog to another server for replication. If you want to enable binlog support, set UseBinLog to true in the config.
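A rough sketch of that flow; the constructor and method names below (Open, Select, Set, Get) are assumptions based on typical usage of this kind of store and may not match the current API exactly:

    cfg := new(config.Config) // assumed config type from the nodb config subpackage
    cfg.DataDir = "./nodb-data"

    l, err := nodb.Open(cfg) // assumed constructor
    if err != nil {
        log.Fatal(err)
    }

    db, err := l.Select(0) // select database 0 (valid indexes are 0-15)
    if err != nil {
        log.Fatal(err)
    }

    _ = db.Set([]byte("key"), []byte("value")) // basic KV usage
    v, _ := db.Get([]byte("key"))
    _ = v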
This code is for loading database data that maps IP addresses to countries for collecting and presenting statistics on snowflake use that might alert us to censorship events. The functions here are heavily based on how Tor maintains and searches its geoip database. The tables used for geoip data must be structured as follows. Recognized line format for IPv4 is: Note that the IPv4 line format is not currently supported. Recognized line format for IPv6 is: It also recognizes, and skips over, blank lines and lines that start with '#' (comments).
Golog aspires to be an ISO Prolog interpreter. It currently supports a small subset of the standard. Any deviations from the standard are bugs. Typical usage looks something like this: This sample highlights a few key aspects of using Golog. To start, Golog data structures are immutable. NewMachine() creates an empty Golog machine containing just the standard library. Consult() creates another new machine with some extra code loaded. The original, empty machine is untouched. It's common to build a large Golog machine during init() and then add extra rules to it at runtime. Because Golog machines are immutable, multiple goroutines can access, run and "modify" machines in parallel. This design also opens possibilities for and-parallel and or-parallel execution. Most methods, like Consult(), can accept Prolog code in several forms. The example above shows Prolog as a string. We could have used any io.Reader instead. Error handling is one oddity. Golog methods follow Go convention by returning an error value to indicate that something went wrong. However, in many cases the caller knows that an error is highly improbable and doesn't want extra code to deal with the common case. For most methods, Golog offers one with a trailing underscore, like ByName_(), which panics on error instead of returning an error value. See also:
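Something like the following, adapted from the Golog README (the Prolog facts are illustrative):

    m := golog.NewMachine().Consult(`
        father(john).
        father(jacob).
        mother(rebecca).
        mother(sarah).
        parent(X) :- father(X).
        parent(X) :- mother(X).
    `)
    if m.CanProve(`father(john).`) {
        fmt.Println("john is a father")
    }

    solutions := m.ProveAll(`parent(X).`)
    for _, solution := range solutions {
        fmt.Println(solution.ByName_("X")) // john, jacob, rebecca, sarah
    }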
Package redigomock is a mock for the redigo library (a Redis client). Redigomock basically registers commands with expected results in an internal global variable. When a command is executed via the Conn interface, the mock looks up this global variable to retrieve the corresponding result. To start a mocked connection just do the following: Now you can inject it wherever your system needs a redigo.Conn because it satisfies all interface requirements. Before running your tests you need, beyond mocking the connection, to register the expected results. For that you can generate commands with the expected results. As the Expect method from Command receives anything (interface{}), another method was created to easily map the result to your structure. For that use ExpectMap: You should also test the error cases, and you can do it in the same way as a normal result. Sometimes you will want to register a command regardless of the arguments, and you can do it with the method GenericCommand (mainly with HMSET). All commands are registered in a global variable, so they will be there until all your test cases end. So, as good practice in test writing, you should clear the mock state at the beginning of each test case. Let's see a full test example. Imagine a Person structure and a function that picks up this person in Redis using the redigo library (file person.go): Now we need to test it, so let's create the corresponding test with redigomock (file person_test.go): When you use Redis as a persistent list, you might want to call the same Redis command multiple times. For example: To test it, you can chain Redis responses. Let's write a test case: In the first iteration of the loop redigomock would return "www.some.url.com", then "www.another.url.com" and finally redis.ErrNil. Sometimes providing expected arguments to redigomock at compile time could be too constraining. Let's imagine you use Redis hash sets to store some data, along with the timestamp of the last data update. Let's expand our Person struct: And add a function updating personal data (the phone number, for example). Please notice that the update timestamp can't be determined at compile time: Unit test: As you can see, at the position of the current timestamp redigomock is told to match the AnyInt struct created by the NewAnyInt() method. The AnyInt struct will match any integer passed to redigomock from the tested method. Please see the fuzzyMatch.go file for more details. The interface of Conn which matches redigo.Conn is safe for concurrent use, but the mock-only methods and fields, like Command and Errors, should not be accessed concurrently with such calls.
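A condensed sketch of the pattern described above: create the mock connection, register expected commands, and clear state between cases (the Person-style keys are illustrative):

    conn := redigomock.NewConn()

    // A plain expected result.
    conn.Command("GET", "person:name:1").Expect("Mr. Johson")

    // Map a result onto a structure-friendly reply.
    conn.Command("HGETALL", "person:1").ExpectMap(map[string]string{
        "name": "Mr. Johson",
        "age":  "42",
    })

    // An error case.
    conn.Command("GET", "person:name:2").ExpectError(fmt.Errorf("simulated error"))

    // Match a command regardless of its arguments.
    conn.GenericCommand("HMSET").Expect("ok")

    // Reset registered commands between test cases.
    conn.Clear()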
Package duit is a pure Go, cross-platform, MIT-licensed UI toolkit for developers. The examples/ directory has small code examples for working with duit and its UIs. Examples are the recommended starting point. Start with NewDUI to create a DUI: essentially a window and all the UI state. The user interface consists of a hierarchy of "UIs" like Box, Scroll, Button, Label, etc. They are called UIs, after the interface UI they all implement. The zero structs for UIs have sane default behaviour so you only have to fill in the fields you need. UIs are kept/wrapped in a Kid, to track their layout/draw state. Use NewKids() to build up the UIs for your application. You won't see much of the Kid-types/functions otherwise, unless you implement a new UI. You are in charge of the main event loop, receiving mouse/keyboard/window events from the dui.Inputs channel, and typically passing them on unchanged to dui.Input. All callbacks and functions on UIs are called from inside dui.Input. From there you can also safely change the UIs, no locking required. After changing a UI you are responsible for calling MarkLayout or MarkDraw to tell duit the UI needs a new layout or draw. This may sound like more work, but this tradeoff keeps the API small and easy to use. If you need to change the UI from a goroutine outside of the main loop, e.g. for blocking calls, you can send a function that makes those modifications on the dui.Call channel, which will be run on the main channel through dui.Inputs. After handling an input, duit will layout or draw as necessary, no need to render explicitly. Embedding a UI into your own data structure is often an easy way to build up UI hierarchies. Scroll and Edit show a scrollbar. Use button 1 on the scrollbar to scroll up, button 3 to scroll down. If you click nearer the top, you scroll less; nearer the bottom, more. Button 2 scrolls to the absolute place where you clicked. Button 4 and 5 are wheel up and wheel down, and also scroll less/more depending on position in the UI.
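A rough sketch of that main loop; treat the exact signatures (NewDUI, Render, the Inputs/Input pair) as approximations of the real API rather than a definitive example:

    dui, err := duit.NewDUI("hello", nil)
    if err != nil {
        log.Fatal(err)
    }

    dui.Top.UI = &duit.Label{Text: "hello, duit"}
    dui.Render()

    for {
        select {
        case e := <-dui.Inputs:
            dui.Input(e) // callbacks on UIs run from inside this call
        }
    }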
Package dom provides Go bindings for the JavaScript DOM APIs. This package is an in progress effort of providing idiomatic Go bindings for the DOM, wrapping the JavaScript DOM APIs. The API is neither complete nor frozen yet, but a great amount of the DOM is already usable. While the package tries to be idiomatic Go, it also tries to stick closely to the JavaScript APIs, so that one does not need to learn a new set of APIs if one is already familiar with it. One decision that hasn't been made yet is what parts exactly should be part of this package. It is, for example, possible that the canvas APIs will live in a separate package. On the other hand, types such as StorageEvent (the event that gets fired when the HTML5 storage area changes) will be part of this package, simply due to how the DOM is structured – even if the actual storage APIs might live in a separate package. This might require special care to avoid circular dependencies. The documentation for some of the identifiers is based on the MDN Web Docs by Mozilla Contributors (https://developer.mozilla.org/en-US/docs/Web/API), licensed under CC-BY-SA 2.5 (https://creativecommons.org/licenses/by-sa/2.5/). The usual entry point of using the dom package is by using the GetWindow() function which will return a Window, from which you can get things such as the current Document. The DOM has a big amount of different element and event types, but they all follow three interfaces. All functions that work on or return generic elements/events will return one of the three interfaces Element, HTMLElement or Event. In these interface values there will be concrete implementations, such as HTMLParagraphElement or FocusEvent. It's also not unusual that values of type Element also implement HTMLElement. In all cases, type assertions can be used. Example: Several functions in the JavaScript DOM return "live" collections of elements, that is collections that will be automatically updated when elements get removed or added to the DOM. Our bindings, however, return static slices of elements that, once created, will not automatically reflect updates to the DOM. This is primarily done so that slices can actually be used, as opposed to a form of iterator, but also because we think that magically changing data isn't Go's nature and that snapshots of state are a lot easier to reason about. This does not, however, mean that all objects are snapshots. Elements, events and generally objects that aren't slices or maps are simple wrappers around JavaScript objects, and as such attributes as well as method calls will always return the most current data. To reflect this behavior, these bindings use pointers to make the semantics clear. Consider the following example: The above example will print `true`. Some objects in the JS API have two versions of attributes, one that returns a string and one that returns a DOMTokenList to ease manipulation of string-delimited lists. Some other objects only provide DOMTokenList, sometimes DOMSettableTokenList. To simplify these bindings, only the DOMTokenList variant will be made available, by the type TokenList. In cases where the string attribute was the only way to completely replace the value, our TokenList will provide Set([]string) and SetString(string) methods, which will be able to accomplish the same. Additionally, our TokenList will provide methods to convert it to strings and slices. This package has a relatively stable API. However, there will be backwards incompatible changes from time to time. 
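For instance, a brief sketch of the type-assertion pattern mentioned above (the element ID is illustrative):

    d := dom.GetWindow().Document()
    el := d.GetElementByID("name") // returns the generic Element interface
    if input, ok := el.(*dom.HTMLInputElement); ok {
        fmt.Println(input.Value)
    }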
This is because the package isn't complete yet, as well as because the DOM is a moving target, and APIs do change sometimes. While an attempt is made to reduce changing function signatures to a minimum, it can't always be guaranteed. Sometimes mistakes in the bindings are found that require changing arguments or return values. Interfaces defined in this package may also change on a semi-regular basis, as new methods are added to them. This happens because the bindings aren't complete and can never really be, as new features are added to the DOM.
This package reads and writes pickled data. The format is the same as the Python "pickle" module. Protocols 0, 1, and 2 are implemented. These are the versions written by the Python 2.x series. Python 3 defines newer protocol versions, but can write the older protocol versions so they are readable by this package. To read data, see stalecucumber.Unpickle. To write data, see stalecucumber.NewPickler. Read a pickled string or unicode object Read a pickled integer Read a pickled list of numbers into a structure Read a pickled dictionary into a structure Pickle a struct You can pickle recursive objects like so Python's pickler is intelligent enough not to emit an infinite data structure when a recursive object is pickled. I recommend against pickling recursive objects in the first place, but this library handles unpickling them without a problem. The result of unpickling the above is map[interface{}]interface{} with a key "a" that contains a reference to itself. Attempting to unpack the result of the above python code into a structure with UnpackInto would either fail or recurse forever. The Python Pickle module can pickle most Python objects. By default, some Python objects such as the set type and bytearray type are automatically supported by this library. To support unpickling custom Python objects, you need to implement a resolver. A resolver meets the PythonResolver interface, which is just this function The module and name are the class name. So if you have a class called "Foo" in the module "bar" the first argument would be "bar" and the second would be "Foo". You can pass in your custom resolver by calling The third argument of the Resolve function is originally a Python tuple, so it is a slice of anything. For most user defined objects this is just a Python dictionary. However, if a Python object implements the __reduce__ method it could be anything. If your resolver can't identify the type named by module & name, just return stalecucumber.ErrUnresolvablePythonGlobal. Otherwise convert the args into whatever you want and return that as the value from the function with nil for the error. To avoid reimplementing the same logic over and over, you can chain resolvers together. You can use your resolver in addition to the default resolver by doing the following If the version of Python you are using supports protocol version 1 or 2, you should always specify that protocol version. By default the "pickle" and "cPickle" modules in Python write using protocol 0. Protocol 0 requires much more space to represent the same values and is much slower to parse. The pickle format is incredibly flexible and as a result has some features that are impractical or unimportant when implementing a reader in another language. Each set of opcodes is listed below by protocol version with the impact. Protocol 0 This opcode is used to reference concrete definitions of objects between a pickler and an unpickler by an ID number. The pickle protocol doesn't define what a persistent ID means. This opcode is unlikely to ever be supported by this package. Protocol 1 These opcodes are used in recreating pickled python objects. That is currently not supported by this package. This opcode will be supported in a future revision to this package that allows the unpickling of instances of Python classes. This opcode is equivalent to PERSID in protocol 0 and won't be supported for the same reason. Protocol 2 These opcodes are used in recreating pickled python objects. That is currently not supported by this package.
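A compact sketch of the reading helpers and UnpackInto, following the fluent style described above (the Person type and reader are illustrative):

    // Read a pickled string.
    s, err := stalecucumber.String(stalecucumber.Unpickle(reader))

    // Read a pickled integer.
    i, err := stalecucumber.Int(stalecucumber.Unpickle(reader))

    // Read a pickled dictionary into a structure.
    type Person struct {
        Name string
        Age  int64
    }
    var p Person
    err = stalecucumber.UnpackInto(&p).From(stalecucumber.Unpickle(reader))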
This opcode will be supported in a future revision to this package that allows the unpickling of instances of Python classes. These opcodes allow using a registry of popular objects that are pickled by name, typically classes. It is envisioned that through a global negotiation and registration process, third parties can set up a mapping between ints and object names. These opcodes are unlikely to ever be supported by this package.
Gotalk is a complete multi-peer real-time messaging library. See https://github.com/rsms/gotalk#readme for a more in-depth explanation of what Gotalk is and what it can do for you. Most commonly Gotalk is used for rich web app development, as an alternative to HTTP APIs, when the web app mainly runs client-side rather than using a traditional "request a new page" style. Here is an example of a minimal but fully functional web server with Gotalk over websocket: Here is a matching HTML document; a very basic web app: Gotalk can be thought of as being composed of four layers: You can make use of only some parts. For example you could write and read structured data in files using the message protocol and basic file I/O, or use the high-level request-response API with some custom transport.
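As a sketch of that websocket setup (the handler name and port are illustrative):

    // Register a request handler for the "echo" operation.
    gotalk.Handle("echo", func(in string) (string, error) {
        return in, nil
    })

    // Serve Gotalk over websocket alongside regular HTTP content.
    http.Handle("/gotalk/", gotalk.WebSocketHandler())
    http.Handle("/", http.FileServer(http.Dir(".")))
    log.Fatal(http.ListenAndServe("localhost:1234", nil))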
Packages in the Plugin directory contain plugins for [goa v3](https://godoc.org/goa.design/goa/v3). Plugins can extend the goa DSL, generate new artifacts and modify the output of existing generators. There are currently two plugins in the directory: The [goakit](https://godoc.org/goa.design/plugins/goakit) plugin generates code that integrates with the [go-kit](https://github.com/go-kit/kit) library. The [cors](https://godoc.org/goa.design/plugins/cors) plugin adds new DSL to define CORS policies. The plugin generates HTTP server code that responds to CORS requests in compliance with the policies defined in the design. Writing a plugin consists of two steps: 1. Writing the functions that get called by the goa tool during code generation. 2. Registering the functions with the goa tool. A plugin may implement the "gen" goa command, the "example" goa command or both. In each case a plugin may register one or two functions: the first function, called "Prepare", gets called prior to any code generation actually happening. The function can modify the design data structures before goa uses them to generate code. The second function, called "Generate", does the actual code generation. The signature of the Generate function is: where: The function must return the entire set of generated files (even the files that the plugin does not modify). The functions must then be registered with the goa code generator tool using the "RegisterPlugin" function of the "codegen" package. This is typically done in a package "init" function, for example: The first argument of RegisterPlugin must be one of "gen" or "example" and specifies the "goa" command that triggers the call to the plugin functions. The second argument is the Prepare function if any, nil otherwise. The last argument is the code generator function if any, nil otherwise. A plugin may introduce new DSL "keywords" (typically Go package functions). The DSL functions initialize the content of a design root object (an object that implements the "eval.Root" interface), for example: The [eval](https://godoc.org/goa.design/goa/v3/codegen/eval) package contains a number of functions that can be leveraged to implement the DSL such as error reporting functions. The plugin design root object is then given to the plugin Generate function (if there is one), which may take advantage of it to generate the code.
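A sketch of such a registration; the exact RegisterPlugin signature varies between goa releases (newer versions also take a plugin name as the first argument), so treat the argument list below as an approximation of the description above:

    package myplugin

    import (
        "goa.design/goa/v3/codegen"
        "goa.design/goa/v3/eval"
    )

    func init() {
        // Trigger on the "gen" command, no Prepare function, Generate does the work.
        codegen.RegisterPlugin("gen", nil, Generate)
    }

    // Generate receives the generation import path, the design roots and the
    // files produced so far, and must return the complete set of files.
    func Generate(genpkg string, roots []eval.Root, files []*codegen.File) ([]*codegen.File, error) {
        // Inspect or modify files here.
        return files, nil
    }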
Ristretto is a fast, fixed size, in-memory cache with a dual focus on throughput and hit ratio performance. You can easily add Ristretto to an existing system and keep the most valuable data where you need it. This package includes multiple probabilistic data structures needed for admission/eviction metadata. Most are Counting Bloom Filter variations, but a caching-specific feature that is also required is a "freshness" mechanism, which basically serves as a "lifetime" process. This freshness mechanism was described in the original TinyLFU paper, but other mechanisms may be better suited for certain data distributions.
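A brief sketch of typical usage, along the lines of the Ristretto README:

    cache, err := ristretto.NewCache(&ristretto.Config{
        NumCounters: 1e7,     // number of keys to track frequency of (10M)
        MaxCost:     1 << 30, // maximum cost of the cache (1GB)
        BufferItems: 64,      // number of keys per Get buffer
    })
    if err != nil {
        log.Fatal(err)
    }

    cache.Set("key", "value", 1) // cost of 1
    cache.Wait()                 // wait for the value to pass through the buffers

    if value, found := cache.Get("key"); found {
        fmt.Println(value)
    }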
Fully persistent data structures. A persistent data structure is a data structure that always preserves the previous version of itself when it is modified. Such data structures are effectively immutable, as their operations do not update the structure in-place, but instead always yield a new structure. Persistent data structures typically share structure among themselves. This allows operations to avoid copying the entire data structure.
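To make the idea concrete, here is a small illustrative (not package-specific) persistent list in Go: prepending returns a new list that shares its tail with the original, so neither version is ever modified in place:

    package main

    // List is an immutable singly linked list; the empty list is nil.
    type List struct {
        head int
        tail *List
    }

    // Cons returns a new list with v prepended; l is shared, not copied.
    func Cons(v int, l *List) *List {
        return &List{head: v, tail: l}
    }

    func main() {
        a := Cons(2, Cons(1, nil)) // a is [2 1]
        b := Cons(3, a)            // b is [3 2 1] and shares [2 1] with a
        _ = b                      // a is still [2 1]: neither version was modified
    }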
Package forest is a library for creating nodes in the Arbor Forest data structure. The specification for the Arbor Forest can be found here: https://github.com/arborchat/protocol/blob/forest/spec/Forest.md NOTE: this package requires using a fork of golang.org/x/crypto, and you must therefore include the following in your go.mod: All nodes in the Arbor Forest are cryptographically signed by an Identity node. Identity nodes sign themselves. To create a new identity, first create or load an OpenPGP private key using golang.org/x/crypto/openpgp. Then you can use that key and name to create an identity. Identities (and their private keys) can be used to create other nodes with the Builder type. You can create community nodes using a builder like so: Builders can also create reply nodes: The Builder type can also be used fluently like so:
Package kdtree implements a k-d tree data structure.
Package rpc is a fork of the stdlib net/rpc which is frozen. It adds support for context.Context on the client and server, including propagating cancellation. See the README at https://github.com/keegancsmith/rpc for motivation why this exists. The API is exactly the same, except Client.Call takes a context.Context, and Server methods are expected to take a context.Context as the first argument. The following is the original rpc godoc updated to include context.Context. Additionally the wire protocol is unchanged, so it is backwards compatible with net/rpc clients. Package rpc provides access to the exported methods of an object across a network or other I/O connection. A server registers an object, making it visible as a service with the name of the type of the object. After registration, exported methods of the object will be accessible remotely. A server may register multiple objects (services) of different types but it is an error to register multiple objects of the same type. Only methods that satisfy these criteria will be made available for remote access; other methods will be ignored: In effect, the method must look schematically like where T1 and T2 can be marshaled by encoding/gob. These requirements apply even if a different codec is used. (In the future, these requirements may soften for custom codecs.) The method's second argument represents the arguments provided by the caller; the third argument represents the result parameters to be returned to the caller. The method's return value, if non-nil, is passed back as a string that the client sees as if created by errors.New. If an error is returned, the reply parameter will not be sent back to the client. The server may handle requests on a single connection by calling ServeConn. More typically it will create a network listener and call Accept or, for an HTTP listener, HandleHTTP and http.Serve. A client wishing to use the service establishes a connection and then invokes NewClient on the connection. The convenience function Dial (DialHTTP) performs both steps for a raw network connection (an HTTP connection). The resulting Client object has two methods, Call and Go, that specify the service and method to call, a pointer containing the arguments, and a pointer to receive the result parameters. The Call method waits for the remote call to complete while the Go method launches the call asynchronously and signals completion using the Call structure's Done channel. Unless an explicit codec is set up, package encoding/gob is used to transport the data. Here is a simple example. A server wishes to export an object of type Arith: The server calls (for HTTP service): At this point, clients can see a service "Arith" with methods "Arith.Multiply" and "Arith.Divide". To invoke one, a client first dials the server: Then it can make a remote call: or A server implementation will often provide a simple, type-safe wrapper for the client. The net/rpc package is frozen and is not accepting new features.
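A condensed sketch of the Arith example above, adjusted for the context.Context-aware signatures of this fork:

    type Args struct{ A, B int }

    type Arith int

    func (t *Arith) Multiply(ctx context.Context, args *Args, reply *int) error {
        *reply = args.A * args.B
        return nil
    }

    // Server side:
    rpc.Register(new(Arith))
    rpc.HandleHTTP()
    go http.ListenAndServe(":1234", nil)

    // Client side:
    client, err := rpc.DialHTTP("tcp", "localhost:1234")
    if err != nil {
        log.Fatal(err)
    }
    var reply int
    err = client.Call(context.Background(), "Arith.Multiply", &Args{7, 8}, &reply)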
Package circllhist provides an implementation of Circonus' fixed log-linear histogram data structure. This allows tracking of histograms in a composable way such that accurate error can be reasoned about.
Package logfmt implements utilities to marshal and unmarshal data in the logfmt format. The logfmt format records key/value pairs in a way that balances readability for humans and simplicity of computer parsing. It is most commonly used as a more human friendly alternative to JSON for structured logging.
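For instance, a small sketch of encoding and decoding a record with this package:

    // Encoding: write key=value pairs to an io.Writer.
    enc := logfmt.NewEncoder(os.Stdout)
    enc.EncodeKeyval("at", "info")
    enc.EncodeKeyval("msg", "listening")
    enc.EncodeKeyval("port", 8080)
    enc.EndRecord() // prints: at=info msg=listening port=8080

    // Decoding: scan records and key/value pairs back out.
    dec := logfmt.NewDecoder(strings.NewReader("at=info msg=listening port=8080\n"))
    for dec.ScanRecord() {
        for dec.ScanKeyval() {
            fmt.Printf("%s=%s ", dec.Key(), dec.Value())
        }
    }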
Package jsonvalue is for JSON parsing and setting. It is used in situations that Go structures cannot handle, or where "map[string]any" does not work properly. As a quick start: If you want to parse raw JSON data, use Unmarshal(). The jsonvalue package handles JSON parsing (deserialization) and encoding (serialization). Normally a struct is used to handle structured JSON, but when a struct is inconvenient or insufficient, Go code typically falls back to "map[string]any", which has many inconveniences of its own; this package is intended to replace those approaches in such situations. Quick start: to deserialize raw JSON text, use Unmarshal():
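A short sketch of that quick start (the JSON paths are illustrative):

    raw := []byte(`{"someObject": {"someKey": "hello, jsonvalue!"}}`)

    v, err := jsonvalue.Unmarshal(raw)
    if err != nil {
        log.Fatal(err)
    }

    s, err := v.GetString("someObject", "someKey")
    fmt.Println(s, err) // hello, jsonvalue! <nil>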
Package hamt provides a reference implementation of the IPLD HAMT used in the Filecoin blockchain. It includes some optional flexibility such that it may be used for other purposes outside of Filecoin. HAMT is a "hash array mapped trie" https://en.wikipedia.org/wiki/Hash_array_mapped_trie. This implementation extends the standard form by including buckets for the key/value pairs at storage leaves and CHAMP mutation semantics https://michael.steindorfer.name/publications/oopsla15.pdf. The CHAMP invariant and mutation rules provide us with the ability to maintain canonical forms given any set of keys and their values, regardless of insertion order and intermediate data insertion and deletion. Therefore, for any given set of keys and their values, a HAMT using the same parameters and CHAMP semantics, the root node should always produce the same content identifier (CID). The HAMT algorithm hashes incoming keys and uses incrementing subsections of that hash digest at each level of its tree structure to determine the placement of either the entry or a link to a child node of the tree. A `bitWidth` determines the number of bits of the hash to use for index calculation at each level of the tree such that the root node takes the first `bitWidth` bits of the hash to calculate an index and as we move lower in the tree, we move along the hash by `depth x bitWidth` bits. In this way, a sufficiently randomizing hash function will generate a hash that provides a new index at each level of the data structure. An index comprising `bitWidth` bits will generate index values of `[ 0, 2^bitWidth )`. So a `bitWidth` of 8 will generate indexes of 0 to 255 inclusive. Each node in the tree can therefore hold up to `2^bitWidth` elements of data, which we store in an array. In this HAMT and the IPLD HashMap we store entries in buckets. For a `Set(key, value)` mutation where the index generated at the root node for the hash of the key denotes an array index that does not yet contain an entry, we create a new bucket and insert the key/value pair entry. In this way, a single node can theoretically hold up to `2^bitWidth x bucketSize` entries, where `bucketSize` is the maximum number of elements a bucket is allowed to contain ("collisions"). In practice, indexes do not distribute with perfect randomness so this maximum is theoretical. Entries stored in the node's buckets are stored in key-sorted order. This HAMT implementation:
• Fixes the `bucketSize` to 3.
• Defaults the `bitWidth` to 8, however within Filecoin it uses 5
• Defaults the hash algorithm to the 64-bit variant of Murmur3-x64
The algorithm used here is identical to that of the IPLD HashMap algorithm specified at https://github.com/ipld/specs/blob/master/data-structures/hashmap.md. The specific parameters used by Filecoin and the DAG-CBOR block layout differ from the specification and are defined at https://github.com/ipld/specs/blob/master/data-structures/hashmap.md#Appendix-Filecoin-hamt-variant.
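To illustrate the index calculation only (this is not this package's API), the following sketch extracts the `bitWidth`-bit index for a given depth from a hash digest, treating the digest as a big-endian bit string:

    // indexAt returns the array index used at the given depth of the tree,
    // taking bits [depth*bitWidth, (depth+1)*bitWidth) of the hash digest.
    func indexAt(digest []byte, depth, bitWidth int) int {
        index := 0
        start := depth * bitWidth
        for i := 0; i < bitWidth; i++ {
            bitPos := start + i
            byteVal := digest[bitPos/8]
            bit := (byteVal >> (7 - uint(bitPos%8))) & 1
            index = index<<1 | int(bit)
        }
        return index // in [0, 2^bitWidth)
    }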
Package gi is the top-level repository for the GoGi GUI framework. All of the code is in the sub-packages within this repository:

* gist: css-based styling settings, including Color
* girl: rendering library, can be used standalone, SVG compliant
* gi: the main 2D GUI Node, Widgets, and RenderWin
* giv: more complex Views of Go data structures, supporting Model-View paradigm.
* svg: full SVG rendering framework, used for Icons in gi.
* gi3d: 3D rendering of a Scene within 2D windows -- full interactive 3D scenegraph.
* histyle: text syntax-based highlighting styles -- used in textview.View
* python: access all of GoGi from within Python using GoPy system.
Package bit provides a bit array implementation. A bit set, or bit array, is an efficient set data structure that consists of an array of 64-bit words. Because it uses bit-level parallelism, limits memory access, and efficiently uses the data cache, a bit set often outperforms other data structures. The Basics example shows how to create, combine, compare and print bit sets. Primes contains a short and simple, but still efficient, implementation of a prime number sieve. Union is a more advanced example demonstrating how to build an efficient variadic Union function using the SetOr method. Create, combine, compare and print bit sets. Create the set of all primes less than n in O(n log log n) time. Try the code with n equal to a few hundred million and be pleasantly surprised. Implement an efficient variadic Union function using SetOr.
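A little sketch of the basics described above, assuming the package's constructor-style API (New, Add, Contains, SetOr):

    s := bit.New(2, 3, 5, 7) // the small primes
    s.Add(11)
    fmt.Println(s.Contains(5)) // true

    t := bit.New(5, 7, 11, 13)

    // Union into a fresh set using SetOr.
    var u bit.Set
    u.SetOr(s, t)
    fmt.Println(u.Contains(13)) // true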
Package tdigest provides a highly accurate mergeable data-structure for quantile estimation. Typical T-Digest use cases involve accumulating metrics on several distinct nodes of a cluster and then merging them together to get a system-wide quantile overview. Things such as: sensory data from IoT devices, quantiles over enormous document datasets (think ElasticSearch), performance metrics for distributed systems, etc. After you create (and configure, if desired) the digest: You can then use it for registering measurements: Estimating quantiles: And merging with another digest:
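A sketch of that workflow; the constructor details vary between releases of the package (newer versions return an error from New), so take the exact signatures as approximate:

    t, err := tdigest.New()
    if err != nil {
        log.Fatal(err)
    }

    // Register measurements.
    for _, v := range []float64{42.0, 48.5, 51.2, 39.9} {
        t.Add(v)
    }

    // Estimate a quantile.
    fmt.Println(t.Quantile(0.99))

    // Merge with another digest (e.g. from another node in the cluster).
    other, _ := tdigest.New()
    other.Add(55.1)
    t.Merge(other)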
Package restruct implements packing and unpacking of raw binary formats. Structures can be created with struct tags annotating the on-disk or in-memory layout of the structure, using the "struct" struct tag, like so: To unpack data in memory to this structure, simply use Unpack with a byte slice:
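For instance, a small sketch of a tagged structure and an Unpack call (the field layout is illustrative):

    type Header struct {
        Magic   [4]byte `struct:"[4]byte"`
        Version uint16  `struct:"uint16"`
        Count   uint32  `struct:"uint32,sizeof=Items"`
        Items   []uint32
    }

    var h Header
    if err := restruct.Unpack(data, binary.LittleEndian, &h); err != nil {
        log.Fatal(err)
    }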
Package dag implements a directed acyclic graph data structure (DAG), a finite directed graph with no directed cycles. A DAG consists of finitely many vertices and edges, with each edge directed from one vertex to another, such that there is no way to start at any vertex v and follow a consistently-directed sequence of edges that eventually loops back to v again. Equivalently, a DAG is a directed graph that has a topological order, a sequence of vertices such that every edge is directed from earlier to later in the sequence. Example
Package iotanalytics provides the API client, operations, and parameter types for AWS IoT Analytics. IoT Analytics allows you to collect large amounts of device data, process messages, and store them. You can then query the data and run sophisticated analytics on it. IoT Analytics enables advanced data exploration through integration with Jupyter Notebooks and data visualization through integration with Amazon QuickSight. Traditional analytics and business intelligence tools are designed to process structured data. IoT data often comes from devices that record noisy processes (such as temperature, motion, or sound). As a result the data from these devices can have significant gaps, corrupted messages, and false readings that must be cleaned up before analysis can occur. Also, IoT data is often only meaningful in the context of other data from external sources. IoT Analytics automates the steps required to analyze data from IoT devices. IoT Analytics filters, transforms, and enriches IoT data before storing it in a time-series data store for analysis. You can set up the service to collect only the data you need from your devices, apply mathematical transforms to process the data, and enrich the data with device-specific metadata such as device type and location before storing it. Then, you can analyze your data by running queries using the built-in SQL query engine, or perform more complex analytics and machine learning inference. IoT Analytics includes pre-built models for common IoT use cases so you can answer questions like which devices are about to fail or which customers are at risk of abandoning their wearable devices.
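As with other AWS SDK for Go v2 service packages, the client is constructed from a loaded configuration; for example (the region and operation are illustrative):

    cfg, err := config.LoadDefaultConfig(context.TODO(), config.WithRegion("us-east-1"))
    if err != nil {
        log.Fatal(err)
    }

    client := iotanalytics.NewFromConfig(cfg)

    // List the channels configured in the account.
    out, err := client.ListChannels(context.TODO(), &iotanalytics.ListChannelsInput{})
    if err != nil {
        log.Fatal(err)
    }
    _ = out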
Package workdocs provides the API client, operations, and parameter types for Amazon WorkDocs. The Amazon WorkDocs API is designed for the following use cases:

File Migration: File migration applications are supported for users who want to migrate their files from an on-premises or off-premises file system or service. Users can insert files into a user directory structure, as well as allow for basic metadata changes, such as modifications to the permissions of files.

Security: Support security applications are supported for users who have additional security needs, such as antivirus or data loss prevention. The API actions, along with CloudTrail, allow these applications to detect when changes occur in Amazon WorkDocs. Then, the application can take the necessary actions and replace the target file. If the target file violates the policy, the application can also choose to email the user.

eDiscovery/Analytics: General administrative applications are supported, such as eDiscovery and analytics. These applications can choose to mimic or record the actions in an Amazon WorkDocs site, along with CloudTrail, to replicate data for eDiscovery, backup, or analytical applications.

All Amazon WorkDocs API actions are Amazon authenticated and certificate-signed. They not only require the use of the Amazon Web Services SDK, but also allow for the exclusive use of IAM users and roles to help facilitate access, trust, and permission policies. By creating a role and allowing an IAM user to access the Amazon WorkDocs site, the IAM user gains full administrative visibility into the entire Amazon WorkDocs site (or as set in the IAM policy). This includes, but is not limited to, the ability to modify file permissions and upload any file to any user. This allows developers to perform the three use cases above, as well as give users the ability to grant access on a selective basis using the IAM model. The pricing for Amazon WorkDocs APIs varies depending on the API call type for these actions:

READ (Get*)

WRITE (Activate*, Add*, Create*, Deactivate*, Initiate*, Update*)

LIST (Describe*)

DELETE*, CANCEL

For information about Amazon WorkDocs API pricing, see Amazon WorkDocs Pricing.
Package asn1 implements encoding and decoding of ASN.1 data structures using both the Basic Encoding Rules (BER) and its subset, the Distinguished Encoding Rules (DER). This package is highly inspired by the Go standard package "encoding/asn1" while supporting additional features such as BER encoding and decoding and ASN.1 CHOICE types. By default, and for convenience, the package uses DER for encoding and BER for decoding. However it's possible to use a Context object to set the desired encoding and decoding rules as well as other options. Restrictions: BER allows STRING types, such as OCTET STRING and BIT STRING, to be encoded as constructed types containing inner elements that should be concatenated to form the complete string. The package does not support that, but decoding of constructed strings should be included in the future.
Package go-succinct-data-structure-trie implements trie with succinct data structure in Go.
Package avro provides encoding and decoding for the Avro binary data format. The format uses out-of-band schemas to determine the encoding, with a schema migration scheme to allow data written with one schema to be read using another schema. See here for more information on the format: https://avro.apache.org/docs/1.9.1/spec.html This package provides a mapping from regular Go types to Avro schemas. See the TypeOf function for more details. There is also a code generation tool that can generate Go data structures from Avro schemas. See https://pkg.go.dev/github.com/heetch/avro/cmd/avrogo for details.
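For example, a sketch of deriving an Avro schema from a Go type with TypeOf (the record type is illustrative; see the package docs for the exact marshaling entry points):

    type Comment struct {
        Author string
        Body   string
        Score  int
    }

    avroType, err := avro.TypeOf(Comment{})
    if err != nil {
        log.Fatal(err)
    }
    _ = avroType // the Avro schema derived from the Go type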
Package duplo provides tools to efficiently query large sets of images for visual duplicates. The technique is based on the paper "Fast Multiresolution Image Querying" by Charles E. Jacobs, Adam Finkelstein, and David H. Salesin, with a few modifications and additions, such as the addition of a width-to-height ratio, the dHash metric by Dr. Neal Krawetz, as well as some histogram-based metrics. Querying the data structure will return a list of potential matches, sorted by the score described in the main paper. The user can make searching for duplicates stricter, however, by filtering based on the additional metrics. Package example.
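A rough sketch of building the store and querying for potential matches; treat the exact names used here (CreateHash, Add, Query, the Match fields) as assumptions from typical usage rather than the definitive API:

    store := duplo.New()

    // Index images by an identifier of your choosing (img1, img2, query are image.Image values).
    hash1, _ := duplo.CreateHash(img1)
    store.Add("image-1", hash1)
    hash2, _ := duplo.CreateHash(img2)
    store.Add("image-2", hash2)

    // Query with a candidate image; matches come back sorted by score.
    queryHash, _ := duplo.CreateHash(query)
    matches := store.Query(queryHash)
    for _, match := range matches {
        fmt.Println(match.ID, match.Score)
    }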
Package geoip2 provides an easy-to-use API for the MaxMind GeoIP2 and GeoLite2 databases; this package does not support GeoIP Legacy databases. The structs provided by this package match the internal structure of the data in the MaxMind databases. See github.com/oschwald/maxminddb-golang for more advanced use cases. Example provides a basic example of using the API. Use of the Country method is analogous to that of the City method.
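A brief sketch of the basic flow (the database path and IP address are illustrative):

    db, err := geoip2.Open("GeoLite2-City.mmdb")
    if err != nil {
        log.Fatal(err)
    }
    defer db.Close()

    record, err := db.City(net.ParseIP("81.2.69.142"))
    if err != nil {
        log.Fatal(err)
    }
    fmt.Println(record.Country.IsoCode, record.City.Names["en"])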
Package rtcp implements encoding and decoding of RTCP packets according to RFC 3550. RTCP is a sister protocol of the Real-time Transport Protocol (RTP). Its basic functionality and packet structure is defined in RFC 3550. RTCP provides out-of-band statistics and control information for an RTP session. It partners with RTP in the delivery and packaging of multimedia data, but does not transport any media data itself. The primary function of RTCP is to provide feedback on the quality of service (QoS) in media distribution by periodically sending statistics information such as transmitted octet and packet counts, packet loss, packet delay variation, and round-trip delay time to participants in a streaming multimedia session. An application may use this information to control quality of service parameters, perhaps by limiting flow, or using a different codec. Decoding RTCP packets: Encoding RTCP packets:
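A small sketch of decoding and re-encoding with this package (assuming the compound-packet Unmarshal/Marshal helpers):

    // Decoding: a UDP payload may contain a compound RTCP packet.
    packets, err := rtcp.Unmarshal(rawRTCP) // rawRTCP is a received []byte
    if err != nil {
        log.Fatal(err)
    }
    for _, p := range packets {
        fmt.Printf("%T\n", p) // e.g. *rtcp.SenderReport, *rtcp.ReceiverReport
    }

    // Encoding: marshal one or more packets back into wire format.
    out, err := rtcp.Marshal([]rtcp.Packet{
        &rtcp.Goodbye{Sources: []uint32{1234}},
    })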
Package starr is a library for interacting with the APIs in Radarr, Lidarr, Sonarr and Readarr. It consists of the main starr package and one sub package for each starr application. In basic use, you create a starr Config that contains an API key and an App URL. Pass this into one of the other packages (like radarr), and it's used as an interface to make API calls. You can either call starr.New() to build an http.Client for you, or create a starr.Config that contains one you control. If you pass a starr.Config into a sub package without an http Client, it will be created for you. There are many options for setting this up, from simple and easy to more advanced. The sub packages contain methods and data structures for a number of API endpoints. Each app has somewhere between 50 and 100 API endpoints. This library currently covers about 10% of those. You can retrieve things like movies, albums, series and books. You can retrieve server status, authors, artists and items in queues. You can also add new media to each application with this library.
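Roughly, usage looks like the following; the names here (starr.New, radarr.New, GetSystemStatus) are assumptions based on typical usage and may differ from the current API:

    // Build a starr config with an API key and an App URL.
    c := starr.New("abc1234apikey", "http://localhost:7878", 0)

    // Hand it to a sub package, e.g. radarr.
    r := radarr.New(c)

    status, err := r.GetSystemStatus() // assumed method name
    if err != nil {
        log.Fatal(err)
    }
    fmt.Println(status)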
Package gcnotifier provides a way to receive notifications after every garbage collection (GC) cycle. This can be useful, in long-running programs, to instruct your code to free additional memory resources that you may be using. A common use case for this is when you have custom data structures (e.g. buffers, caches, rings, trees, pools, ...): instead of setting a maximum size to your data structure you can leave it unbounded and then drop all (or some) of the allocated-but-unused slots after every GC run (e.g. sync.Pool drops all allocated-but-unused objects in the pool during GC). To minimize the load on the GC the code that runs after receiving the notification should try to avoid allocations as much as possible, or at the very least make sure that the amount of new memory allocated is significantly smaller than the amount of memory that has been "freed" in response to the notification. GCNotifier guarantees to send a notification after every GC cycle completes. Note that the Go runtime does not guarantee that the GC will run: specifically there is no guarantee that a GC will run before the program terminates. Example implements a simple time-based buffering io.Writer: data sent over dataCh is buffered for up to 100ms, then flushed out in a single call to out.Write and the buffer is reused. If GC runs, the buffer is flushed and then discarded so that it can be collected during the next GC run. The example is necessarily simplistic, a real implementation would be more refined (e.g. on GC flush or resize the buffer based on a threshold, perform asynchronous flushes, properly signal completions and propagate errors, adaptively preallocate the buffer based on the previous capacity, etc.)
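The core pattern is a loop over the notification channel, as in this sketch (dropUnusedBuffers is a hypothetical cleanup function):

    gcn := gcnotifier.New()
    defer gcn.Close()

    go func() {
        for range gcn.AfterGC() {
            // A GC cycle just completed: drop allocated-but-unused slots,
            // ideally without allocating much new memory in the process.
            dropUnusedBuffers()
        }
    }()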
Grohl is an opinionated library for gathering metrics and data about how your applications are running in production. It does this by writing logs in a key=value structure. It also provides interfaces for sending stacktraces or metrics to external services. This is a Go version of https://github.com/asenchi/scrolls. The name for this library came from mashing the words "go" and "scrolls" together. Also, Dave Grohl (lead singer of Foo Fighters) is passionate about event driven metrics. Grohl treats logs as the central authority for how an application is behaving. Logs are written in a key=value structure so that they are easily parsed. If you use a set of common log keys, you can relate events from various services together. Here's an example log that you might write: The output would look something like: Note: Other examples leave out the "now" keyword for clarity. A *grohl.Context stores a map of keys and values that are written with every log message. You can set common keys for every request, or create a new context per new request or connection. You can add more context to the example above by setting up the app name and deployed environment. This changes the output from above to: You can also create scoped Context objects. For instance, a network server may want a scoped Context for each request or connection. This is the output (taking the global context above into consideration): As you can see we have some standard nomenclature around logging. Here's a cheat sheet for some of the methods we use: By default, all *grohl.Context objects write to STDOUT. Grohl includes support for both io and channel loggers. If you are writing to *grohl.Context objects in separate goroutines, a channel logger can be used for concurrency. Grohl provides a grohl.Statter interface based on https://github.com/peterbourgon/g2s: Without any setup, this outputs: If you import "github.com/peterbourgon/g2s", you can dial into a statsd server over a udp socket: Once set up, the statter functions above will not output any logs. Grohl makes it easy to measure the run time of a portion of code. This would output: You can change the time unit that Grohl uses to "milliseconds" (the default is "seconds"): You can also write to a custom Statter: You can also set all *grohl.Timer objects to use the same statter. Grohl can report Go errors: Without any ErrorReporter set, this logs the following: You can set the default ErrorReporter too:
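A compressed sketch of the pieces described above (key names are illustrative):

    // A bare log line.
    grohl.Log(grohl.Data{"fn": "trap", "signal": "TERM", "at": "exit", "status": 0})

    // A context whose keys are merged into every message it writes.
    context := grohl.NewContext(grohl.Data{"app": "myapp", "deploy": "production"})
    context.Log(grohl.Data{"at": "boot"})

    // Timing a block of code.
    timer := context.Timer(grohl.Data{"fn": "handle_request"})
    // ... do work ...
    timer.Finish()

    // Reporting an error with context.
    grohl.Report(err, grohl.Data{"at": "handle_request"})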
Package blocks contains the lowest level of IPLD data structures. A block is raw data accompanied by a CID. The CID contains the multihash corresponding to the block.
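A minimal sketch:

    b := blocks.NewBlock([]byte("some raw data"))
    fmt.Println(b.Cid())          // the CID derived from the data's multihash
    fmt.Println(len(b.RawData())) // the raw bytes themselves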
Package retag provides an ability to change tags of structures' fields at runtime without copying the data. It may be helpful in the following cases: Features: The package requires go1.7+. The package is still experimental and subject to change. The package may be broken by a future release of Go.
Package btcec implements support for the elliptic curves needed for litecoin. Litecoin uses elliptic curve cryptography using koblitz curves (specifically secp256k1) for cryptographic functions. See http://www.secg.org/collateral/sec2_final.pdf for details on the standard. This package provides the data structures and functions implementing the crypto/elliptic Curve interface in order to permit using these curves with the standard crypto/ecdsa package provided with go. Helper functionality is provided to parse signatures and public keys from standard formats. It was designed for use with ltcd, but should be general enough for other uses of elliptic curve crypto. It was originally based on some initial work by ThePiachu, but has significantly diverged since then.
Package btcec implements support for the elliptic curves needed for bitcoin. Bitcoin uses elliptic curve cryptography using koblitz curves (specifically secp256k1) for cryptographic functions. See http://www.secg.org/collateral/sec2_final.pdf for details on the standard. This package provides the data structures and functions implementing the crypto/elliptic Curve interface in order to permit using these curves with the standard crypto/ecdsa package provided with go. Helper functionality is provided to parse signatures and public keys from standard formats. It was designed for use with btcd, but should be general enough for other uses of elliptic curve crypto. It was originally based on some initial work by ThePiachu, but has significantly diverged since then.
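For illustration, a sketch of key generation, signing and verification using the pre-v2 btcec API (newer major versions move ECDSA signing into a subpackage):

    privKey, err := btcec.NewPrivateKey(btcec.S256())
    if err != nil {
        log.Fatal(err)
    }
    pubKey := privKey.PubKey()

    hash := sha256.Sum256([]byte("test message"))

    sig, err := privKey.Sign(hash[:])
    if err != nil {
        log.Fatal(err)
    }
    fmt.Println(sig.Verify(hash[:], pubKey)) // true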
Package ldclient is the main package for the LaunchDarkly SDK. This package contains the types and methods for the SDK client (LDClient) and its overall configuration. Subpackages in the same repository provide additional functionality for specific features of the client. Most applications that need to change any configuration settings will use the ldcomponents package (https://pkg.go.dev/gopkg.in/launchdarkly/go-server-sdk.v5/ldcomponents). The SDK also uses types from the go-sdk-common.v2 repository and its subpackages (https://pkg.go.dev/gopkg.in/launchdarkly/go-sdk-common.v2) that represent standard data structures in the LaunchDarkly model. All applications that evaluate feature flags will use the lduser package (https://pkg.go.dev/gopkg.in/launchdarkly/go-sdk-common.v2/lduser); for some features such as custom attributes, the ldvalue package is also helpful. For more information and code examples, see the Go SDK Reference: https://docs.launchdarkly.com/sdk/server-side/go
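A short sketch of constructing the client and evaluating a flag with the v5 API (the SDK key and flag key are illustrative):

    client, err := ld.MakeClient("sdk-key-123", 5*time.Second)
    if err != nil {
        log.Fatal(err)
    }
    defer client.Close()

    user := lduser.NewUser("user-key-abc")

    showFeature, err := client.BoolVariation("my-flag-key", user, false)
    if err != nil {
        log.Println(err)
    }
    if showFeature {
        // serve the new behavior
    }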
Package tiff implements structures and functionality for working with TIFF data structures. References:
Package pq implements a priority queue data structure on top of container/heap. In addition to regular operations, it allows an update of an item's priority, allowing the queue to be used in graph search algorithms like Dijkstra's algorithm. Computational complexities of operations are mainly determined by container/heap. In addition, a map of items is maintained, allowing O(1) lookup needed for priority updates, which themselves are O(log n).