Package trie presents a simple, clean, black and white radix trie interface. For more information on what kind of data structure a radix trie is, please see http://en.wikipedia.org/wiki/Radix_tree. Two types of data structures are available: a black and white only trie (the default), which merely records the existence of keys added to the data structure, and a key-value black and white trie, which stores an arbitrary value attached to each key. Because the value is stored separately from the existence of the key, you can store nil values in the trie and tell them apart from keys that do not exist. The data structures inside this package are NOT synchronized; you'll want to add a sync.Mutex or sync.RWMutex to your code if it needs to be thread safe. At this point in time I do consider the structures to be *mostly* safe to use concurrently without locking, as long as you're willing to end up in a part of the trie that existed when you started making your call but not by the time the call has ended. For most read operations this is "OK" (please know the guarantees required by your application), but it is absolutely not OK for write operations and you should absolutely synchronize those or risk fairly massive corruption. Example BW Trie usage
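A minimal usage sketch of the black and white trie follows. The constructor and method names used here (New, Add, Exists) are assumptions for illustration only, not the package's verified API; the mutex shows the external synchronization recommended above.

    var mu sync.RWMutex
    t := New() // hypothetical constructor for the black and white trie

    mu.Lock() // writes must be synchronized
    t.Add("team")
    t.Add("tea")
    mu.Unlock()

    mu.RLock() // reads, locked here for full safety
    _ = t.Exists("tea") // true
    _ = t.Exists("te")  // false: only whole keys are recorded
    mu.RUnlock()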
Package trie implements a trie (or prefix tree) data structure useful for things like prefix search/autocompletion. For now, it supports the Insert, HasWord, HasPrefix and WordsByPrefix methods.
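A short usage sketch based on the methods listed above; the New constructor name is an assumption for illustration:

    t := New()
    t.Insert("car")
    t.Insert("cart")
    t.Insert("care")

    _ = t.HasWord("car")       // true: "car" was inserted as a word
    _ = t.HasPrefix("ca")      // true: at least one word starts with "ca"
    _ = t.WordsByPrefix("car") // e.g. []string{"car", "cart", "care"}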
This package is the root package of the govmomi library. The library is structured as follows: The minimal usable functionality is available through the vim25 package. It contains subpackages that contain generated types, managed objects, and all available methods. The vim25 package is entirely independent of the other packages in the govmomi tree -- it has no dependencies on its peers. The vim25 package itself contains a client structure that is passed around throughout the entire library. It abstracts a session and its immutable state. See the vim25 package for more information. The session package contains an abstraction for the session manager that allows a user to login and logout. It also provides access to the current session (i.e. to determine if the user is in fact logged in). The object package contains wrappers for a selection of managed objects. The constructors of these objects all take a *vim25.Client, which they pass along to derived objects, if applicable. The govc package contains the govc CLI. The code in this tree is not intended to be used as a library. Any functionality that govc contains that _could_ be used as a library function but isn't, _should_ live in a root-level package. Other packages, such as "event", "guest", or "license", provide wrappers for the respective subsystems. They are typically not needed in normal workflows, so they are kept outside the object package.
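A minimal sketch of the flow described above, connecting and obtaining the underlying *vim25.Client; the URL and credentials are placeholders:

    import (
        "context"
        "log"
        "net/url"

        "github.com/vmware/govmomi"
    )

    func example() {
        ctx := context.Background()
        u, err := url.Parse("https://user:pass@vcenter.example.com/sdk") // placeholder
        if err != nil {
            log.Fatal(err)
        }
        // NewClient logs in through the session manager and returns a client
        // that embeds a *vim25.Client.
        c, err := govmomi.NewClient(ctx, u, true)
        if err != nil {
            log.Fatal(err)
        }
        // c.Client is the *vim25.Client that the object package constructors accept.
        _ = c.Client
    }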
Implementation of an R-Way Trie data structure. A Trie has a root Node which is the base of the tree. Each subsequent Node has a letter and children, which are nodes that have letter values associated with them.
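An illustrative sketch of the structure described above (not the package's exported API): each node stores a letter and a map of child nodes.

    type node struct {
        letter   rune
        children map[rune]*node
        terminal bool // a complete key ends at this node
    }

    func insert(root *node, key string) {
        cur := root
        for _, r := range key {
            child, ok := cur.children[r]
            if !ok {
                child = &node{letter: r, children: map[rune]*node{}}
                cur.children[r] = child
            }
            cur = child
        }
        cur.terminal = true
    }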
Package geyser provides HTTP clients for Steam API and Steam API-like endpoints. NOTE: To avoid confusion between Go terminology and abstract API entities (terms like "interface" and "method"), API entity names are expressed using capitalized words. Concrete Go names and statements are surrounded by backticks. Steam APIs are structured as trees, with the base endpoint being the root node. The root node has a set of Interface nodes as children. Interfaces can be parameterized by AppID, so one Interface with 2 different AppIDs results in 2 different nodes. Each Interface has a collection of Method leaf nodes. Methods can be versioned, so a Method with 2 different versions results in 2 different nodes, and can contain a set of Parameters (HTTP request parameters). Schemas and client code are generated automatically by sub-package "apigen". NOTE: A non-partner API key is used to fetch the API schema, so only non-partner Interfaces are generated. Types in sub-package "schema" represent the tree structure of an API schema: A schema starts with a root `Schema` node that contains API information and an `Interfaces` collection. `Interfaces` is a collection of `Interface` nodes. It also provides helper methods to group `Interface`s by name, extract AppIDs and so on. `Interface` holds the specification of the Interface and a `Methods` collection. `Methods` is a collection of `Method` nodes and provides helper methods to group `Method`s by name, extract versions and so on. `Method` represents the specification of an Interface Method. It also contains a collection of parameters, `MethodParams`. `MethodParams` is a collection of `MethodParam`. It also provides helpers for parameter validation. `MethodParam` holds the specification of an Interface Method Parameter. The specification for each Interface is exposed through `Schema<InterfaceName>` public variables. These can also be used directly by users for any other purpose, including instantiating individual interface structs directly. Node types provide JSON tags to encode/decode to/from JSON format. All of the collection types provide JSON encoding/decoding methods. A root `Schema` node is then well-suited to encode/decode API schemas to/from JSON format (i.e., when consuming "ISteamWebAPIUtil/GetSupportedAPIList"). Interfaces are grouped by their base name, with the AppID (if present) removed. For example: all "IGCVersion_<ID>" Interfaces are grouped into a single struct named "IGCVersion" that is instantiated by passing the AppID as a parameter (`NewIGCVersion(client, appID)` or `client.IGCVersion(appID)`). Interfaces that don't have an AppID do not require an AppID parameter (e.g.: "ISteamApps" -> `NewISteamApps(client)` or `client.NewISteamApps()`). Interfaces that have only one AppID still require an AppID parameter. The same grouping is done for Interface Methods. All Methods with the same name but different versions are grouped by name and the version is parameterized (`iface.MethodName(version)`). Methods that have a single version do not require a version parameter (`iface.MethodName()`). The workflow to perform an HTTP request to an Interface Method is: Client is the root element. Interfaces are instantiated with methods in the `Client` struct (possibly requiring an AppID parameter). Interface Methods are struct methods of the Interface struct (possibly requiring a version parameter). Parameters to the request must be set in the `Request` configuration step (see `Request.SetOptions`).
Parameters are validated automatically, conforming to the Interface Method specification, and the request execution will return an error if the validation fails. The meaning and functionality of each Parameter for all Interface Methods is not well known, nor are they well documented (here, in official documentation or anywhere else). The user will have to do some experimenting and research to figure it out. That said, whenever Parameters are known, there may exist manually implemented helper functions to generate values (either query parameters or form data, as `net/url.Values`). These helper functions are named by concatenating the Interface struct name, the Method name and either "Params" or "FormData", depending on the HTTP method (e.g.: "ISteamRemoteStorage/GetCollectionDetails" -> "ISteamRemoteStorageGetCollectionDetailsFormData"). Results of requests can be obtained in two ways: parsing the response body manually, or configuring the result object in the `Request` before executing. Requests are always sent using `format=json`, so response bodies are (or seem to be) always in JSON format. For manual parsing, checking the "Content-Type" response header is recommended. When setting the result object in the `Request` (see `Request.SetResult`), a JSON-unmarshalable object is expected. Given that each response has a specific format and result structs cannot be automatically generated, decoding of a specific response is left to the user. When a response format is known, there will be a manually implemented struct with the appropriate fields. These result structs are named by concatenating the Interface struct name and the Method name (e.g.: "ISteamApps/GetAppList" -> "ISteamAppsGetAppList"). Look for these result structs for each Interface Method to see if any is available at the time. Contributions to implement proper result structs are welcome! Since not all similarly named Interfaces with different AppIDs are necessarily identical, the Interface/Method grouping can result in generated Interface struct methods that are not present in a given Interface. For example, given a (pseudo) schema: the resulting (pseudo) structure would be: GetName and GetAge are generated struct methods of IUsers, regardless of AppID and version. Accessing `IUsers(100).GetName(1)`, `IUsers(100).GetName(2)`, `IUsers(200).GetName(2)` and `IUsers(200).GetAge()` is valid. But accessing `IUsers(100).GetAge()` and `IUsers(200).GetName(1)` is invalid (returns an error). APIs that are available on different hosts and have different schemas are generated under self-contained sub-packages. These sub-packages contain their own `Client` structs that are thin wrappers around the base `Client`. Currently, these are available: Steam Web API documentation can be found at: https://partner.steamgames.com/doc/webapi_overview https://developer.valvesoftware.com/wiki/Steam_Web_API https://wiki.teamfortress.com/wiki/WebAPI There are several Interfaces and Methods not present in the official API schema (returned by "ISteamWebAPIUtil/GetSupportedAPIList"), including game-specific schemas. These undocumented Interfaces are fetched from third-party sources and are also generated along with the official ones. Specifications of undocumented Methods don't define any Parameters, so no validation is performed and no documentation is generated. More importantly, they also don't define the HTTP method to be used. For now, these Methods default to the GET HTTP method.
When an Interface or Method originates from an undocumented source, it will have a comment indicating so. A comprehensive list of Interfaces and Methods, documented and undocumented, can be found at https://steamapi.xpaw.me
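A sketch of the request workflow using the names mentioned above; the constructor, the Execute method and the exact signatures are assumptions and may differ from the generated code:

    client, err := geyser.New() // assumed constructor
    if err != nil {
        log.Fatal(err)
    }
    iface, err := client.ISteamApps() // Interface without an AppID
    if err != nil {
        log.Fatal(err)
    }
    req, err := iface.GetAppList() // Method with a single version
    if err != nil {
        log.Fatal(err)
    }
    var result geyser.ISteamAppsGetAppList // result struct named as described above
    req.SetResult(&result)                 // see Request.SetResult
    if err := req.Execute(); err != nil {  // assumed execution method; Parameters are validated here
        log.Fatal(err)
    }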
Package merkletree implements a Merkle Tree capable of storing arbitrary content. A Merkle Tree is a hash tree that provides an efficient way to verify that the contents of a set of data are present and untampered with. At its core, a Merkle Tree is a list of items representing the data that should be verified. Each of these items is inserted into a leaf node and a tree of hashes is constructed bottom up using a hash of each node's left and right children's hashes. This means that the root node will effectively be a hash of all other nodes (hashes) in the tree. This property allows the tree to be reproduced, and thus verified, using only the hash of the root node of the tree. The benefit of the tree structure is that verifying any single content entry in the tree will require only log2(n) steps in the worst case. Creating a new merkletree requires that the type that the tree will be constructed from implements the Content interface. A slice of the Content items should be created and then passed to the NewTree method. t represents the Merkle Tree and can be verified and manipulated with the API methods described below.
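A sketch of implementing Content and building a tree; the Content method set is assumed here to be CalculateHash and Equals, so verify it against the package before relying on it:

    type TestContent struct {
        x string
    }

    func (t TestContent) CalculateHash() ([]byte, error) {
        h := sha256.New()
        if _, err := h.Write([]byte(t.x)); err != nil {
            return nil, err
        }
        return h.Sum(nil), nil
    }

    func (t TestContent) Equals(other merkletree.Content) (bool, error) {
        o, ok := other.(TestContent)
        if !ok {
            return false, errors.New("value is not of type TestContent")
        }
        return t.x == o.x, nil
    }

    list := []merkletree.Content{TestContent{x: "Hello"}, TestContent{x: "Hi"}}
    t, err := merkletree.NewTree(list)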
Package jin Copyright (c) 2020 eco. License that can be found in the LICENSE file. "your wish is my command" Jin is a comprehensive JSON manipulation tool bundle. All functions are tested with random data with the help of Node.js. All test-path and test-value creation is automated with Node.js. Jin provides parse, interpret, build and format tools for JSON. Third-party packages are used only for the benchmarks. No dependencies are needed for the core functions. We ran some benchmarks against other packages like Jin. As a result, Jin is the fastest (op/ns) and more memory-friendly than the others (B/op). For more information please take a look at the BENCHMARK section below. WHAT IS NEW? 7 new functions tested and added to the package. - GetMap() get objects as a map[string]string structure with key-value pairs - GetAll() get only specific keys' values - GetAllMap() get only specific keys with a map[string]string structure - GetKeys() get an object's keys as a string array - GetValues() get an object's values as a string array - GetKeysValues() get an object's keys and values as separate string arrays - Length() get the length of a JSON array. 06.04.2020 INSTALLATION And you are good to go. Import and start using. The major difference between parsing and interpreting is that the parser has to read all the data before answering your needs. On the other hand, the interpreter reads only up to the data you need. With the parser, once parsing is complete you can access any data in no time. But there is a time cost to parse all the data, and this cost can increase as the data content grows. If you need to access all keys of a JSON then we simply recommend using the Parser. But if you need to access only some keys of a JSON then we strongly recommend using the Interpreter; it will be much faster and much more memory-friendly than the parser. The Interpreter is the core element of this package; there is no need to create an Interpreter type, just call whichever function you want. First let's look at the general function parameters. We are going to use the Get() function to access the value the path points to. In this case, 'jin'. Path values can consist of hard-coded values. The Get() function's return type is []byte, but all other return-type variations are implemented as different functions. For example, if you need the value as a string, use GetString(). The Parser is another alternative for JSON manipulation. We recommend using this structure when you need to access all or most of the keys in the JSON. The Parser constructor needs only one parameter. We can parse it with the Parse() function. Let's look at Parser.Get(). For path values, see above. All return-type variations of Parser.Get() exist, just like for the Interpreter. To return a string use Parser.GetString(), like this. Other useful functions of Jin: Add(), AddKeyValue(), Set(), SetKey(), Delete(), Insert(), IterateArray(), IterateKeyValue(), Tree(). Let's look at the IterateArray() function. There are two formatting functions, Flatten() and Indent(). Indent() adds indentation to JSON for nicer visualization and Flatten() removes this indentation. Control functions are a simple and easy way to check the value type at any path. For example, IsArray(). Or you can use GetType(). There are lots of JSON build functions in this package and all of them have their own examples. We just want to mention a couple of them. Scheme is a simple and powerful tool for creating JSON schemes. Testing is a very important thing for this type of package and it shows how reliable it is. For that reason we use Node.js for unit testing. Let's look at the folder arrangement and working principle.
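A minimal sketch, assuming Get and GetString take the JSON document as a []byte followed by a variadic path (check the package for the exact signatures):

    json := []byte(`{"name":{"first":"jin"},"tags":["fast","small"]}`)

    value, err := jin.Get(json, "name", "first") // []byte("jin")
    if err != nil {
        log.Fatal(err)
    }
    _ = value

    s, err := jin.GetString(json, "tags", "0") // "fast", assuming array indices are addressed as strings
    if err != nil {
        log.Fatal(err)
    }
    _ = s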
- test/ folder: test-json.json, this is a temporary file for testing. All other test-cases are copied here with this name so they can be processed by test-case-creator.js. test-case-creator.js is the core path & value creation mechanism. When it is executed with the executeNode() function, it reads the test-json.json file and generates the paths and values from this file's content. With command line arguments it can generate different paths and values. As a result, two files are created by this process. The first of these files is test-json-paths.json and the second is test-json-values.json. test-json-paths.json has all the path values. test-json-values.json has all the values corresponding to those path values. - tests/ folder: All files in this folder are test-cases. But that doesn't mean you can't change anything; on the contrary, all test-cases are created automatically based on this folder's content. You can add or remove any .json file that you want. All Go-side test-case automation functions are in the core_test.go file. This package was developed with Node.js v13.7.0. Please make sure that your machine has a valid version of Node.js before testing. All functions and methods are tested with complicated, randomly generated .json files, like this one. Most JSON packages don't even run properly with this kind of JSON stream. We didn't see such packages as competitors to ourselves, and that's because we didn't even bother to benchmark against them. Benchmark results: - The Benchmark prefix is removed from function names to make room for results. - The benchmark between 'buger/jsonparser' and 'ecoshub/jin' uses the same payload (JSON test-cases) that the 'buger/jsonparser' package uses for benchmarking itself. We are currently working on: - Marshal() and Unmarshal() functions. - http.Request parser/interpreter - Builder functions for http.ResponseWriter If you want to contribute to this work feel free to fork it. We want to fill this section with contributors.
Package gographviz provides parsing for the DOT grammar into an abstract syntax tree representing a graph, analysis of the abstract syntax tree into a more usable structure, and writing back of this structure into the DOT format.
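A sketch of the parse, analyse and write-back cycle described above (verify against the current API before use):

    graphAst, err := gographviz.ParseString(`digraph G { a -> b; }`)
    if err != nil {
        log.Fatal(err)
    }
    graph := gographviz.NewGraph()
    if err := gographviz.Analyse(graphAst, graph); err != nil {
        log.Fatal(err)
    }
    graph.AddNode("G", "c", nil) // mutate the analysed structure
    graph.AddEdge("b", "c", true, nil)
    fmt.Println(graph.String()) // write the structure back out as DOT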
Package html implements an HTML5-compliant tokenizer and parser. Tokenization is done by creating a Tokenizer for an io.Reader r. It is the caller's responsibility to ensure that r provides UTF-8 encoded HTML. Given a Tokenizer z, the HTML is tokenized by repeatedly calling z.Next(), which parses the next token and returns its type, or an error: There are two APIs for retrieving the current token. The high-level API is to call Token; the low-level API is to call Text or TagName / TagAttr. Both APIs allow optionally calling Raw after Next but before Token, Text, TagName, or TagAttr. In EBNF notation, the valid call sequence per token is: Token returns an independent data structure that completely describes a token. Entities (such as "<") are unescaped, tag names and attribute keys are lower-cased, and attributes are collected into a []Attribute. For example: The low-level API performs fewer allocations and copies, but the contents of the []byte values returned by Text, TagName and TagAttr may change on the next call to Next. For example, to extract an HTML page's anchor text: Parsing is done by calling Parse with an io.Reader, which returns the root of the parse tree (the document element) as a *Node. It is the caller's responsibility to ensure that the Reader provides UTF-8 encoded HTML. For example, to process each anchor node in depth-first order: The relevant specifications include: https://html.spec.whatwg.org/multipage/syntax.html and https://html.spec.whatwg.org/multipage/syntax.html#tokenization
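A short example in the spirit of the elided ones above, printing the href of each anchor element in depth-first order; r stands in for any io.Reader of UTF-8 encoded HTML:

    doc, err := html.Parse(r)
    if err != nil {
        log.Fatal(err)
    }
    var visit func(n *html.Node)
    visit = func(n *html.Node) {
        if n.Type == html.ElementNode && n.Data == "a" {
            for _, a := range n.Attr {
                if a.Key == "href" {
                    fmt.Println(a.Val)
                }
            }
        }
        for c := n.FirstChild; c != nil; c = c.NextSibling {
            visit(c)
        }
    }
    visit(doc)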
Package hamt is just a trivial front door to the hamt32 and hamt64 packages which really contain the HAMT implementations. Those HAMT implementations are identical in every way but the size of the computed hash, called Hashval. Those are either uint32 or uint64 values for hamt32 and hamt64 respectively. This package merely implements the New(), New32() and New64() functions and the table option constants FixedTables, SparseTables, HybridTables, and the map TableOptionName (eg. hamt.TableOptionName[hamt.FixedTables] == "FixedTables"). There are several choices to make: Hashval hamt32 versus hamt64, FixedTables versus SparseTables versus HybridTables, and Functional versus Transient. Then there is a hidden choice; you can change the source code constant, NumIndexBits, to a value other than the current setting of 5. The New() function makes all the recommended choices for you. That is, it uses the 64 bit hashVal (aka hamt64), functional behavior, and hybrid tables. Just use hamt64. I implemented both before I really understood HAMT. I was conflating 32 bit hash values with a 32 wide branching factor (that was just a feature of the other implementations I was looking at). While 32bit FNV hash values are still pretty random, I have seen plenty of collisions in my tests. I have never seen 64bit FNV hash values collide, and in the current state of computing 64bit CPUs are the norm. I recommend using hamt64. If you are on 32bit CPUs then maybe you could choose hamt32. Tables is the name I use for the interior (aka non-leaf) nodes of the hamt data structure. Those tables being the indexed arrays from the Hash-indexed Array Mapped Tries (HAMT) name. This is the classic speed versus memory choice with a twist. The facts to consider are: The tree is indexed by essentially random values (the parts of the hash value of the key), so the tree is going to be "balanced" to a statistical likelihood. The inner branching nodes will be very densely populated, and the outer branching nodes will be very sparsely populated. FixedTables are fastest to access and modify, because it is a simple matter of getting and setting preallocated fixed sized arrays. However, they will be wasting most of their allocated space most of the time. SparseTables are slowest because we always have to calculate bit counts of bit arrays for get or set type operations. Further, inserting or removing values is a matter of manipulating slice values. On the other hand, given the way slice memory allocation works, they will usually waste less than half their allocated memory. According to tests, the HybridTables setting behaves precisely the way we want it to behave. For a test set of data with 3,149,824 KeyVal pairs, the distribution of tables comes in two groups: tables with 25-32 entries and tables with 1-11 entries. There are no tables outside those two groupings. The 25-32 entry tables are all fixed tables and the 1-11 entry tables are all sparse tables. Of the sparse tables, 40.1% have 1 or 2 entries, 85.4% have 4 or fewer and 99.7% have 8 or fewer entries. Given that sparse tables start at a capacity of 2 and capacity grows by doubling, the sparse tables are efficiently packed. The conclusion from this test data is that the HybridTables setting is an excellent fit for memory efficiency. Benchmarks show that fixed table hamts are slower than hybrid table hamts for Put operations and comparable for Get & Del operations. I conclude that this is due to the massive overuse of memory in fixed tables.
This conclusion is partly due to the fact that the Put operation differential between fixed and hybrid table hamts is twice as large for functional versus transient hamt behavior. Clearly the HybridTables table option for my HAMT data structure is the best choice. The bottom line is that writing with transient behavior from multiple threads almost guarantees problems unless you implement a locking solution (and that can be hard to do in a performant manner). On the other hand, given that HamtFunctional behavior returns a new HamtFunctional data structure upon any modification, HamtFunctional data structures are inherently thread safe. On your third hand, the copy-on-write strategy of HamtFunctional is inherently slower than the modify-in-place strategy of HamtTransient. How much slower? For large hamt data structures (~3 million key/value pairs) the transient Put operation takes ~1000ns, where the functional Put op takes ~3200ns. Which really isn't that bad, because they are within the same order of magnitude and it is already fast. Using the transient behavior begs the question: why not use the Go builtin map? Of course, the reason is obvious if your key must be a slice; Go map keys can not be slices. The ability to implement a reasonably efficient functional behavior for HAMTs is the point of this library. The hamt transient speed is definitely slower than Go's builtin map: 435 vs 130 ns/Get; 950 vs 235 ns/Put; 900 vs 175 ns/Del; 235 vs 23 ns/KeyVal iterate. The point of this library is to implement the functional map-like behavior, so that is what I assume you will use it for. The transient behavior is useful for faster single threaded bulk operations, which can then be transformed back to the functional behavior. Both hamt32 and hamt64 have a constant NumIndexBits which determines all the other constants defining the HAMT structures. For both hamt32 and hamt64, the NumIndexBits constant is set to 5. You can manually change the source code to set NumIndexBits to some uint other than 5. NumIndexBits is set to 5 because that is how other people do it. NumIndexBits determines the branching factor (IndexLimit) and the depth (DepthLimit) of the HAMT data structure. Given NumIndexBits=5, IndexLimit=32, and DepthLimit=6 for hamt32 and DepthLimit=12 for hamt64.
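A worked restatement of those constants, assuming the hash is consumed NumIndexBits at a time (30 of 32 bits for hamt32 and 60 of 64 bits for hamt64, which matches the DepthLimit values above):

    const NumIndexBits = 5
    const IndexLimit = 1 << NumIndexBits   // 32-way branching factor
    const DepthLimit32 = 30 / NumIndexBits // 6 levels for hamt32
    const DepthLimit64 = 60 / NumIndexBits // 12 levels for hamt64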
Package kdtree implements a k-d tree data structure.
Package btree implements in-memory B-Trees of arbitrary degree. btree implements an in-memory B-Tree for use as an ordered data structure. It is not meant for persistent storage solutions. It has a flatter structure than an equivalent red-black or other binary tree, which in some cases yields better memory usage and/or performance. See some discussion on the matter here: Note, though, that this project is in no way related to the C++ B-Tree implementation written about there. Within this tree, each node contains a slice of items and a (possibly nil) slice of children. For basic numeric values or raw structs, this can cause efficiency differences when compared to equivalent C++ template code that stores values in arrays within the node: These issues don't tend to matter, though, when working with strings or other heap-allocated structures, since C++-equivalent structures also must store pointers and also distribute their values across the heap. This implementation is designed to be a drop-in replacement for gollrb.LLRB trees (http://github.com/petar/gollrb), an excellent and probably the most widely used ordered tree implementation in the Go ecosystem currently. Its functions, therefore, exactly mirror those of llrb.LLRB where possible. Unlike gollrb, though, we currently don't support storing multiple equivalent values.
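A short usage sketch with the classic Item-based API (newer versions also ship a generics-based variant):

    tr := btree.New(32) // degree 32
    for i := 0; i < 10; i++ {
        tr.ReplaceOrInsert(btree.Int(i))
    }
    if item := tr.Get(btree.Int(7)); item != nil {
        fmt.Println("found:", item) // found: 7
    }
    tr.Delete(btree.Int(7))
    fmt.Println(tr.Len()) // 9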
Multicopy is a multi-threaded URL retriever. You provide login credentials to an instance of Salsa Classic. Multicopy walks the directory tree in the images and files repository and saves files to disk. Files are stored in the same structure on disk as they appear in the repository. Installation: go get github.com/salsalabs/multicopy go install github.com/salsalabs/multicopy Execution: multicopy --login [YAML file] --dir [DIR] Help: multicopy --help
Package treenode provides a set of functions for working with the https://leetcode.com/ binary tree structure TreeNode.
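For reference, the LeetCode-style node definition this package works with (field names follow the common LeetCode convention):

    type TreeNode struct {
        Val   int
        Left  *TreeNode
        Right *TreeNode
    }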
Package splaytree implements the splay tree data structure, which is a self-balancing binary search tree, that is, it adapts its internal tree structure to how it is being used in order to optimize future operations. The tree contains only unique keys in sorted order. The self-balancing (called splaying) is done for every insert, lookup or remove operation. Splaying is heavily optimized in a single loop. The key which is inserted/looked up/removed is splayed upwards to the root, by means of rotations over two or three nodes. The effect is that future accesses to this key and to its neighbors become cheap, as they will be at or near the root of the tree. Accesses to other non-neighboring keys diminish this benefit over time. On average the cost of accesses is optimal: O(log N). Splay trees are especially beneficial in applications which exhibit locality of reference. I.e. when accesses to the tree are related in location or time. This happens for instance for sequential (sorted) or clustered access patterns. See https://en.wikipedia.org/wiki/Splay_tree for details. Package splaytree defines several iterators for SplayTree. Splay tree iterators traverse the tree in sorted order. For each call to the iterator the current item is returned and the iterator advances to the next tree node. When using iterators one must observe two rules: (1) One must not call any other tree operations which internally cause splaying, like insert, remove or delete. This would mess up the internal state of the iterator. If such an access must be done then abandon the iterator. (2) When one abandons (stops using) an iterator before it has returned nil, in order to preserve optimal theoretical properties, one must do a tree lookup on the last returned item. The iterator examples illustrate this.
Package cm contains generic "complicated maps": multi-level maps, dual-key maps, and maps containing sets. This package provides no locking in the data structures. All locking is the responsibility of code using these maps. This code panics analogously to normal map behaviors. Where there is no existing map behavior to guide it, it tries to match the logic Go normally uses. This is justified because these are just wrappers around maps, rather than independent data structures. Most or all of the places where this library panics are places where the code was going to panic anyhow; the panic calls in the code simply offer more direct guidance on the problem rather than panicking deep in library code. Similarly, none of these data structures are thread-safe on their own, just like conventional Go maps. Multi-level maps are maps that have other maps as their values. This provides convenience functions for interacting with multi-level maps. It is intended to work harmoniously with golang.org/x/maps, and tries not to replicate any functionality already found there. For instance, to get the first level of keys of these maps, simply pass them as normal maps to maps.Keys. The internal maps are exported so normal map operations work, and redundant operations already provided by range and such are not implemented. It is safe to write to these maps directly; no constraints maintained by this code will be violated. The delete methods provided by the multi-level maps will also clean up any higher-level maps left emptied by a delete. Directly executing deletes on the lower-level maps yourself will not automatically clean these maps up, which may also cause spurious keys to appear in the KeySlice method. Otherwise it too is safe. In theory, you can drop this into any existing multi-level map you already have, and it should continue to work, give or take any type conversions as you pass it around. You just also have the additional methods added by this type. Unlike single-level maps, where a sequence of the key values is the only sensible representation of the keys, multi-level maps have more than one useful representation. You can either look at the set of keys as a set of tuples for all levels, or you can look at them as a tree. Each representation has its costs and benefits, so this package provides both. As multi-level maps are just Go maps under the hood, they scale the same as Go maps do in general. A dual-keyed map is a map that allows you to look up values by either of two keys. Under the hood, it is simply both possible maps, plus functions for setting and deleting by both keys. For your convenience, the two maps are left exported so you can efficiently read from them. Writing directly to them will violate the guarantees provided by this implementation and should generally not be done. Values are stored as given in both maps. This means that a dual-keyed map consumes twice the resources of a normal map. This is targeted at cases where a dual-key map is very convenient but not large by modern standards. As you scale up needs like this, you eventually need a database. Because this simply stores the maps in both directions, you may want to double-check before using pointer types for either key type. It is legal in Go to key maps with pointers, but it may not have the desired or expected result, as it will result in one of the two directions keying off of object identity rather than value. This has its uses too, though. For dual-key maps, it is obvious how to store them, with a reasonable penalty.
As you get into needs for three or more keys, the cost of this technique multiplies resource consumption by the number of permutations of the keys, which at three keys is already six times a single map. So this package stops at dual-key maps. A MapSet is a map whose values are sets. Several convenience functions are provided for manipulating such values. As a consequence of offering this functionality, this package also provides a Set implementation. Each of these structures implements the ability to get a data structure representing the set of all keys, or all keys and values, as a single static data structure. It is an anti-pattern to use them as such: This causes the needless instantiation of a data structure in memory. This should be written as Normal use of KeySlice and KeyTree would be sorting the result somehow before iterating, or possibly serializing it somewhere.
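A conceptual illustration of the dual-keyed map design described above (not the package's actual API): values are stored under both keys, so lookups work in either direction at the cost of roughly twice the storage.

    type DualKeyMap[K1, K2 comparable, V any] struct {
        ByFirst  map[K1]V // read from these directly; write through Set
        BySecond map[K2]V
    }

    func (m *DualKeyMap[K1, K2, V]) Set(k1 K1, k2 K2, v V) {
        m.ByFirst[k1] = v
        m.BySecond[k2] = v
    }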
Package sops manages JSON, YAML and BINARY documents to be encrypted or decrypted. This package should not be used directly. Instead, Sops users should install the command line client via `go get -u go.mozilla.org/sops/v3/cmd/sops`, or use the decryption helper provided at `go.mozilla.org/sops/v3/decrypt`. We do not guarantee API stability for any package other than `go.mozilla.org/sops/v3/decrypt`. A Sops document is a Tree composed of a data branch with arbitrary key/value pairs and a metadata branch with encryption and integrity information. In JSON and YAML formats, the structure of the cleartext tree is preserved, keys are stored in cleartext and only values are encrypted. Keeping the values in cleartext provides better readability when storing Sops documents in version control, and allows for merging competing changes on documents. This is a major difference between Sops and other encryption tools that store documents as encrypted blobs. In BINARY format, the cleartext data is treated as a single blob and the encrypted document is in JSON format with a single `data` key and a single encrypted value. Sops allows operators to encrypt their documents with multiple master keys. Each of the master keys defined in the document is able to decrypt it, allowing users to share documents amongst themselves without sharing keys, or to use a PGP key as a backup for KMS. In practice, this is achieved by generating a data key for each document that is used to encrypt all values, and encrypting the data key with each master key defined. Being able to decrypt the data key gives access to the document. The integrity of each document is guaranteed by calculating a Message Authentication Code (MAC) that is stored encrypted by the data key. When decrypting a document, the MAC should be recalculated and compared with the MAC stored in the document to verify that no fraudulent changes have been applied. The MAC covers keys and values as well as their ordering.
histogram_sketch is an implementation of the Histogram Sketch data structure described in Ben-Haim and Tom-Tov's "A Streaming Parallel Decision Tree Algorithm" in Journal of Machine Learning Research 11 (http://www.jmlr.org/papers/volume11/ben-haim10a/ben-haim10a.pdf). Modifications from Ben-Haim and Tom-Tov's original description in this implementation include: Adaptation of the "Uniform" function described in the paper into a "Quantile" function here. Allowing the initial set of centroids to be bootstrapped with the optimal 1-D centroid decomposition based on squared distance to centroid mean via dynamic programming (see Bellman's "A note on cluster analysis and dynamic programming" in Mathematical Biosciences, 18(3-4):311-312, 1973, or Haizhou Wang and Mingzhou Song's "Ckmeans.1d.dp: Optimal k-means clustering in one dimension by dynamic programming" in R Journal, 3(2), 2011, http://journal.r-project.org/archive/2011-2/RJournal_2011-2_Wang+Song.pdf). Storing the min and max value for better estimation of extreme quantiles. Returning exact values for Sum and Quantile when they're known (before any centroid merging happens). Improvements in handling some boundary conditions.
Package skiplist implements skip list based maps and sets. Skip lists are a data structure that can be used in place of balanced trees. Skip lists use probabilistic balancing rather than strictly enforced balancing, and as a result the algorithms for insertion and deletion in skip lists are much simpler and significantly faster than equivalent algorithms for balanced trees. Skip lists were first described in Pugh, William (June 1990). "Skip lists: a probabilistic alternative to balanced trees". Communications of the ACM 33 (6): 668–676. Redis-like sorted set.
Package treemux provides an HTTP request multiplexer that routes using a tree structure. Wildcards ("*") are used to indicate flexible path elements in a resource URL, which can then be mapped to a single Handler (function). Example: With the following route: Paths like these will be handled by `handleCity`: Wildcards cannot be used for parts of an element (i.e. `/foo*/bar` will not work).
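A hypothetical illustration of the wildcard routing described above; the Handle method name and handler signature are assumptions, not the package's verified API:

    mux.Handle("/countries/*/cities/*", handleCity)

    // Requests such as these would then be routed to handleCity:
    //   /countries/NL/cities/Amsterdam
    //   /countries/DE/cities/Berlin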
Package itc implements the interval tree clock as described in the paper 'Interval Tree Clocks: A Logical Clock for Dynamic Systems' by Paulo Sergio Almeida, Carlos Baquero and Victor Fonte. (http://gsd.di.uminho.pt/members/cbm/ps/itc2008.pdf) Causality tracking mechanisms can be modeled by a set of core operations: fork, event and join, that act on stamps (logical clocks) whose structure is a pair (i, e), formed by an id and an event component that encodes causally known events.
Package qlookup is a tool for looking up data in a tree-like structure and creating a new structure for every matched key.
Package skipper provides an HTTP routing library with flexible configuration as well as runtime updates of the routing rules. Skipper works as an HTTP reverse proxy that is responsible for mapping incoming requests to multiple HTTP backend services, based on routes that are selected by the request attributes. At the same time, both the requests and the responses can be augmented by a filter chain that is specifically defined for each route. Optionally, it can provide a circuit breaker mechanism individually for each backend host. Skipper can load and update the route definitions from multiple data sources without being restarted. It provides a default executable command with a few built-in filters, however, its primary use case is to be extended with custom filters, predicates or data sources. For further information read 'Extending Skipper'. Skipper took the core design and inspiration from Vulcand: https://github.com/mailgun/vulcand. Skipper is 'go get' compatible. If needed, create a 'go workspace' first: Get the Skipper packages: Create a file with a route: Optionally, verify the syntax of the file: Start Skipper and make an HTTP request: The core of Skipper's request processing is implemented by a reverse proxy in the 'proxy' package. The proxy receives the incoming request and forwards it to the routing engine in order to receive the most specific matching route. When a route matches, the request is forwarded to all filters defined by it. The filters can modify the request or execute any kind of program logic. Once the request has been processed by all the filters, it is forwarded to the backend endpoint of the route. The response from the backend goes once again through all the filters in reverse order. Finally, it is mapped as the response of the original incoming request. Besides the default proxying mechanism, it is possible to define routes without a real network backend endpoint. One of these cases is called a 'shunt' backend, in which case one of the filters needs to handle the request providing its own response (e.g. the 'static' filter). Actually, filters themselves can instruct the request flow to shunt by calling the Serve(*http.Response) method of the filter context. Another case of a route without a network backend is the 'loopback'. A loopback route can be used to match a request, modified by filters, against the lookup tree with different conditions and then execute a different route. One example scenario is to use a single route as an entry point to execute some calculation to get an A/B testing decision and then match the updated request metadata for the actual destination route. This way the calculation can be executed for only those requests that don't contain information about a previously calculated decision. For further details, see the 'proxy' and 'filters' package documentation. Finding a request's route happens by matching the request attributes to the conditions in the route's definitions. Such definitions may have the following conditions: - method - path (optionally with wildcards) - path regular expressions - host regular expressions - headers - header regular expressions It is also possible to create custom predicates with any other matching criteria. The relation between the conditions in a route definition is 'and', meaning that a request must fulfill each condition to match a route. For further details, see the 'routing' package documentation. Filters are applied in order of definition to the request and in reverse order to the response. 
They are used to modify request and response attributes, such as headers, or execute background tasks, like logging. Some filters may handle the requests without proxying them to service backends. Filters, depending on their implementation, may accept/require parameters that are set specifically for the route. For further details, see the 'filters' package documentation. Each route has one of the following backends: HTTP endpoint, shunt or loopback. Backend endpoints can be any HTTP service. They are specified by their network address, including the protocol scheme, the domain name or the IP address, and optionally the port number: e.g. "https://www.example.org:4242". (The path and query are sent from the original request, or set by filters.) A shunt route means that Skipper handles the request alone and doesn't make requests to a backend service. In this case, it is the responsibility of one of the filters to generate the response. A loopback route executes the routing mechanism on the current state of the request from the start, including the route lookup. This way it serves as a form of an internal redirect. Route definitions consist of the following: - request matching conditions (predicates) - filter chain (optional) - backend (either an HTTP endpoint or a shunt) The eskip package implements the in-memory and text representations of route definitions, including a parser. (Note to contributors: in order to stay compatible with 'go get', the generated part of the parser is stored in the repository. When changing the grammar, 'go generate' needs to be executed explicitly to update the parser.) For further details, see the 'eskip' package documentation. Skipper has filter implementations of basic auth and OAuth2. It can be integrated with tokeninfo based OAuth2 providers. For details, see: https://godoc.org/github.com/zalando/skipper/filters/auth. Skipper's route definitions are loaded from one or more data sources. It can receive incremental updates from those data sources at runtime. It provides four different data clients: - Kubernetes: Skipper can be used as part of a Kubernetes Ingress Controller implementation together with https://github.com/zalando-incubator/kube-ingress-aws-controller . In this scenario, Skipper uses the Kubernetes API's Ingress extensions as a source for routing. For a complete deployment example, see more details in: https://github.com/zalando-incubator/kubernetes-on-aws/ . - Innkeeper: the Innkeeper service implements a storage for large sets of Skipper routes, with an HTTP+JSON API, OAuth2 authentication and role management. See the 'innkeeper' package and https://github.com/zalando/innkeeper. - etcd: Skipper can load routes and receive updates from etcd clusters (https://github.com/coreos/etcd). See the 'etcd' package. - static file: package eskipfile implements a simple data client, which can load route definitions from a static file in eskip format. Currently, it loads the routes on startup. It doesn't support runtime updates. Skipper can use additional data sources, provided by extensions. Sources must implement the DataClient interface in the routing package. Skipper provides circuit breakers, configured either globally, based on backend hosts or based on individual routes. It supports two types of circuit breaker behavior: open on N consecutive failures, or open on N failures out of M requests. For details, see: https://godoc.org/github.com/zalando/skipper/circuit. 
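To illustrate the text representation of route definitions mentioned above, here is a sketch of a route with predicates, one filter and a network backend, parsed with the eskip package (setRequestHeader is used as an example of a built-in filter; adjust to the filters available in your deployment):

    package main

    import (
        "fmt"

        "github.com/zalando/skipper/eskip"
    )

    func main() {
        // One route per definition: predicates before "->", an optional
        // filter chain in the middle, and the backend at the end.
        // A "<shunt>" backend would mean Skipper answers the request itself.
        const routes = `
            hello: Path("/hello") && Method("GET")
                -> setRequestHeader("X-Passed-Through", "skipper")
                -> "https://www.example.org";
        `

        parsed, err := eskip.Parse(routes)
        if err != nil {
            panic(err)
        }
        for _, r := range parsed {
            fmt.Println(r.Id, r.Backend)
        }
    }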
Skipper can be started with the default executable command 'skipper', or as a library built into an application. The easiest way to start Skipper as a library is to execute the 'Run' function of the current, root package. Each option accepted by the 'Run' function is wired in the default executable as well, as a command line flag. E.g. EtcdUrls becomes -etcd-urls as a comma-separated list. For command line help, enter: An additional utility, eskip, can be used to verify, print, update and delete routes from/to files or etcd (Innkeeper on the roadmap). See the cmd/eskip command package, and/or enter in the command line: Skipper doesn't use dynamically loaded plugins; however, it can be used as a library, and it can be extended with custom predicates, filters and/or custom data sources. To create a custom predicate, one needs to implement the PredicateSpec interface in the routing package. Instances of the PredicateSpec are used internally by the routing package to create the actual Predicate objects as referenced in eskip routes, with concrete arguments. Example, randompredicate.go: In the above example, a custom predicate is created that can be referenced in eskip definitions with the name 'Random': To create a custom filter, we need to implement the Spec interface of the filters package. 'Spec' is the specification of a filter, and it is used to create concrete filter instances while the raw route definitions are processed. Example, hellofilter.go: The above example creates a filter specification, and in the routes where it is included, the filter instances will set the 'X-Hello' header on each and every response. The name of the filter is 'hello', and in a route definition it is referenced as: The easiest way to create a custom Skipper variant is to implement the required filters (as in the example above) by importing the Skipper package, and starting it with the 'Run' command. Example, hello.go: A file containing the routes, routes.eskip: Start the custom router: The 'Run' function in the root Skipper package starts its own listener but it doesn't provide the best composability. The proxy package, however, provides a standard http.Handler, so it is possible to use it in a more complex solution as a building block for routing. Skipper provides detailed logging of failures, and access logs in Apache log format. Skipper also collects detailed performance metrics, and exposes them on a separate listener endpoint for pulling snapshots. For details, see the 'logging' and 'metrics' packages documentation. The router's performance depends on the environment and on the used filters. Under ideal circumstances, and without filters, the biggest time factor is the route lookup. Skipper is able to scale to thousands of routes with logarithmic performance degradation. However, this comes at the cost of increased memory consumption, due to storing the whole lookup tree in a single structure. Benchmarks for the tree lookup can be run by: If more aggressive scaling is needed, it is possible to set up Skipper in a cascade model, with multiple Skipper instances for specific route segments.
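A hedged sketch in the spirit of the hellofilter.go and hello.go examples referenced above; the skipper.Options field names used here (Address, RoutesFile, CustomFilters) are assumptions and may differ between Skipper versions:

    package main

    import (
        "fmt"
        "log"

        "github.com/zalando/skipper"
        "github.com/zalando/skipper/filters"
    )

    type helloSpec struct{}

    type helloFilter struct{ who string }

    // Name is how the filter is referenced in route definitions: hello("world").
    func (s *helloSpec) Name() string { return "hello" }

    // CreateFilter builds a concrete filter instance from the route's arguments.
    func (s *helloSpec) CreateFilter(args []interface{}) (filters.Filter, error) {
        if len(args) != 1 {
            return nil, filters.ErrInvalidFilterParameters
        }
        who, ok := args[0].(string)
        if !ok {
            return nil, filters.ErrInvalidFilterParameters
        }
        return &helloFilter{who: who}, nil
    }

    func (f *helloFilter) Request(ctx filters.FilterContext) {}

    // Response runs on the way back and decorates every response of the route.
    func (f *helloFilter) Response(ctx filters.FilterContext) {
        ctx.Response().Header.Set("X-Hello", fmt.Sprintf("Hello, %s!", f.who))
    }

    func main() {
        // Field names are assumptions; check skipper.Options for your version.
        log.Fatal(skipper.Run(skipper.Options{
            Address:       ":9090",
            RoutesFile:    "routes.eskip",
            CustomFilters: []filters.Spec{&helloSpec{}},
        }))
    }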
Package mafsa implements Minimal Acyclic Finite State Automata (MA-FSA) in a space-optimized way as described by Daciuk, Mihov, Watson, and Watson in their paper, "Incremental Construction of Minimal Acyclic Finite-State Automata" (2000). It also implements Minimal Perfect Hashing (MPH) as described by Lucchesi and Kowaltowski in their paper, "Applications of Finite Automata Representing Large Vocabularies" (1992). Unscientifically speaking, this package lets you store large amounts of strings (with Unicode) in memory so that membership queries, prefix lookups, and fuzzy searches are fast. And because minimal perfect hashing is included, you can associate each entry in the tree with more data used by your application. See the README or the end of this documentation for a brief tutorial. MA-FSA structures are a specific type of Deterministic Acyclic Finite State Automaton (DAFSA) which fold equivalent state transitions into each other starting from the suffix of each entry. Typical construction algorithms involve building out the entire tree first, then minimizing the completed tree. However, the method described in the paper above allows the tree to be minimized after every word insertion, provided the insertions are performed in lexicographical order, which drastically reduces memory usage compared to regular prefix trees ("tries"). The goal of this package is to provide a simple, useful, and correct implementation of MA-FSA. Though more complex algorithms exist for removal of items and unordered insertion, these features may be outside the scope of this package. Membership queries are on the order of O(n), where n is the length of the input string, so the cost is effectively constant with respect to the number of entries stored. It is advisable to keep n small since long entries without much in common, especially in the beginning or end of the string, will quickly overrun the optimizations that are available. In those cases, n-gram implementations might be preferable, though these will use more CPU. This package provides two kinds of MA-FSA implementations. One, the BuildTree, facilitates the construction of an optimized tree and allows ordered insertions. The other, MinTree, is effectively read-only but uses significantly less memory and is ideal for production environments where only reads will be occurring. Usually your build process will be separate from your production application, which will make heavy use of reading the structure. To use this package, create a BuildTree and insert your items in lexicographical order: The tree is now compressed to a minimum number of nodes and is ready to be saved. In your production application, then, you can read the file into a MinTree directly: The mt variable is a *MinTree which has the same data as the original BuildTree, but without all the extra "scaffolding" that was required for adding new elements. The package provides some basic read mechanisms.
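A sketch of the build-then-read workflow described above, with method names as recalled from the package's README (verify against the current API):

    package main

    import (
        "fmt"

        "github.com/smartystreets/mafsa"
    )

    func main() {
        // Build phase: insertions must be made in lexicographical order.
        bt := mafsa.New()
        for _, w := range []string{"cities", "city", "pities", "pity"} {
            if err := bt.Insert(w); err != nil {
                panic(err)
            }
        }
        bt.Finish() // minimize the tree; no insertions are allowed after this

        if err := bt.Save("words.mafsa"); err != nil {
            panic(err)
        }

        // Read phase (typically a separate production process).
        mt, err := mafsa.Load("words.mafsa")
        if err != nil {
            panic(err)
        }
        fmt.Println(mt.Contains("cities"))  // true
        fmt.Println(mt.Contains("pitiful")) // false
    }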
Package mimetype uses magic number signatures to detect the MIME type of a file. mimetype stores the list of MIME types in a tree structure with "application/octet-stream" at the root of the hierarchy. The hierarchy approach minimizes the number of checks that need to be done on the input and allows for more precise results once the base type of file has been identified. To check if some bytes/reader/file has a specific MIME type, first perform a detect on the input and then test against the MIME. Compared to a plain string comparison, e.g. mime.String() == "application/zip", the mime.Is("application/zip") method has the following advantages: it handles MIME aliases, is case insensitive, ignores optional MIME parameters, and ignores any leading and trailing whitespace. To find the MIME type of some input, perform a detect. In addition to the basic Detect, there are shortcuts for detecting from a reader: or from a file: Considering the definition of a binary file as "a computer file that is not a text file", the two can be differentiated by searching for the text/plain MIME in the file's MIME hierarchy.
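For example, detecting a file's MIME type and testing it with Is rather than comparing strings (sketched against the detection API described above):

    package main

    import (
        "fmt"

        "github.com/gabriel-vasile/mimetype"
    )

    func main() {
        mime, err := mimetype.DetectFile("archive.zip")
        if err != nil {
            panic(err)
        }
        fmt.Println(mime.String(), mime.Extension())

        // Is handles aliases, case, optional parameters and whitespace,
        // unlike a raw comparison against mime.String().
        fmt.Println(mime.Is("application/zip"))

        // Walking towards the application/octet-stream root of the hierarchy
        // tells text content apart from binary: text files have text/plain
        // somewhere among their ancestors.
        for m := mime; m != nil; m = m.Parent() {
            if m.Is("text/plain") {
                fmt.Println("text content")
            }
        }
    }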
Package parsing provides basic search and comparison of HTML documents. To limit storage of references, it uses the net/html package and its Node type to structure HTML. It covers searching for a tag in a Node with options, three ways to print a node tree, and some additional usage notes.
Package appdash provides a Go app performance tracing suite. Appdash allows you to trace the end-to-end performance of hierarchically structured applications. You can, for example, measure the time and see the detailed information of each HTTP request and SQL query made by an entire distributed web application. The cmd/appdash tool launches a web front-end which displays a web UI for viewing collected app traces. It is effectively a remote collector to which your application can connect and send events. Timing and application-specific metadata information can be viewed in a nice timeline view for each span (e.g. an HTTP request) and its children. The web front-end can also be embedded in your own Go HTTP server by utilizing the traceapp sub-package, which is effectively what cmd/appdash serves internally. Sub-packages for HTTP and SQL event tracing are provided for use with appdash, which allow it to function equivalently to Google's Dapper and Twitter's Zipkin performance tracing suites. The most high-level structure is a Trace, which represents the performance of an application from start to finish (in an HTTP application, for example, the loading of a web page). A Trace is a tree structure made up of several spans, which are just IDs (in an HTTP application, these IDs are passed through the stack via a few special headers). Each span ID has a set of Events that directly correspond to it inside a Collector. These events can be any combination of message, log, time-span, or time-stamped events (the cmd/appdash web UI displays these events as appropriate). Inside your application, a Recorder is used to send events to a Collector, which can be a remote HTTP(S) collector, a local in-memory or persistent collector, etc. Additionally, you can implement the Collector interface yourself and store events however you like.
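A rough in-process sketch of the Recorder/Collector relationship (constructor and method names as recalled from the appdash API; a remote HTTP(S) collector pointed at cmd/appdash would replace the local one in a distributed setup):

    package main

    import (
        "fmt"

        "sourcegraph.com/sourcegraph/appdash"
    )

    func main() {
        // An in-memory store behind a local collector.
        store := appdash.NewMemoryStore()
        collector := appdash.NewLocalCollector(store)

        // One trace, identified by its root span ID.
        span := appdash.NewRootSpanID()

        // The Recorder sends events for that span to the collector.
        rec := appdash.NewRecorder(span, collector)
        rec.Name("GET /home")
        rec.Msg("rendered template")

        fmt.Println("recorded trace", span.Trace)
    }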
Package peg implements Parsing Expression Grammars, inspired by LPeg. This package implements Parsing Expression Grammars (PEGs), a powerful tool for pattern matching and writing top-down parsers. PEGs were designed to focus on expressing the matching or parsing process, rather than describing what text should be matched as regexps do. The package was strongly influenced by LPeg for Lua, see: http://www.inf.puc-rio.br/~roberto/lpeg/. Take a look at it for further reading. There are four methods for PEG pattern matching: The most general one is `config.Match(pat, text)`, which returns a `*Result` typed match result and an error if any error occurred. The config sets the maximum recursion level, the maximum repetition count, and whether grouping or capturing is enabled. The default config enables both grouping and capturing, while the limits for recursion and repetition are set to DefaultCallstackLimit and DefaultRepeatLimit. The result of `config.Match(pat, text)` contains: whether the pattern was matched, how many bytes were matched, the saved groups and the parser captures. Saved groups are text pieces captured with an optional name. Parser captures are parse trees or user defined structures constructed during the parsing process. Note that both `MatchedPrefix` and `IsFullMatched` disable capturing. That is, the side effects of user defined constructors won't be triggered. Basic patterns, which match a single rune or a piece of text, are listed below: Patterns are combined by sequence or alternation: Predicators test if a pattern would be matched, but consume no text: Available pattern qualifiers are: Patterns where items are separated by sep can be expressed using: Functionalities for groups, references, triggers and injectors: Functionalities for grammars and parsing captures: Greedy qualifiers: The qualifiers are designed to be greedy. Thus, considering the pattern `Seq(Q0(A), B)`, text supposed to be matched by `B` could be swallowed ahead of time by the preceding `A`, which is usually unexpected. It is recommended to wrap `A` with an additional assertion to avoid this. For example, `Seq(Q0(R('0', '9')), S("02468"), T(" is even"))` is incorrect, because the greedy `Q0(R('0', '9'))` would consume the last digit, so the following `S("02468")` would always fail to match. To make everything right, `Q0(R('0', '9'))` should be replaced by a pattern like `Q0(Seq(R('0', '9'), Test(R('0', '9'))))` (which only consumes a digit when another digit follows), so the last digit is left for `S("02468")`. Unreachable branches: Branches of `Seq` or `Alt` can be unreachable, considering that Seq stops at the first failing pattern in the sequence, while Alt takes the first matching choice. Thus, a pattern like `Alt(T("match"), T("match more"))` would get an unexpected match result, because longer patterns are not tried first. Infinite loops: Any pattern which could match an empty string should not be nested inside qualifiers like `Q0`, `Q1`, `Qn`, as this would cause infinite loops. For example, `Q1(True)` or `Q0(Q0(T("not empty")))` would loop until `config.RepeatLimit` is reached. Left recursion: PEG parsers are top-down, that is, the grammar rules are expanded immediately, so a left recursion never terminates until `config.CallstackLimit` is reached. For example, `Let(map[string]Pattern{"var": Seq(T("A"), V("var"))}, V("var"))` terminates, while `Let(map[string]Pattern{"var": Seq(V("var"), T("A"))}, V("var"))` won't terminate until `CallstackLimit` is reached.
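The greedy-qualifier pitfall written out with the combinators named above (the import path is a guess, and the exact IsFullMatched signature is assumed):

    package main

    import (
        "fmt"

        "github.com/hucsmn/peg" // import path assumed
    )

    func main() {
        digits := peg.R('0', '9')

        // Incorrect: the greedy Q0 consumes the last digit, so S("02468")
        // never gets a chance to match it.
        bad := peg.Seq(peg.Q0(digits), peg.S("02468"), peg.T(" is even"))

        // Correct: only consume a digit when another digit follows, leaving
        // the final digit for S("02468").
        good := peg.Seq(
            peg.Q0(peg.Seq(digits, peg.Test(digits))),
            peg.S("02468"),
            peg.T(" is even"),
        )

        fmt.Println(peg.IsFullMatched(bad, "42 is even"))  // false
        fmt.Println(peg.IsFullMatched(good, "42 is even")) // true
    }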
Package btree implements in-memory B-Trees of arbitrary degree. This implementation is based on google/btree (http://github.com/google/btree), and much of the code is taken from there. But the API has been changed significantly, particularly around iteration, and support for indexing by position has been added. btree implements an in-memory B-Tree for use as an ordered data structure. It is not meant for persistent storage solutions. It has a flatter structure than an equivalent red-black or other binary tree, which in some cases yields better memory usage and/or performance. See some discussion on the matter here: Note, though, that this project is in no way related to the C++ B-Tree implementation written about there. Within this tree, each node contains a slice of items and a (possibly nil) slice of children. For basic numeric values or raw structs, this can cause efficiency differences when compared to equivalent C++ template code that stores values in arrays within the node: These issues don't tend to matter, though, when working with strings or other heap-allocated structures, since C++-equivalent structures also must store pointers and also distribute their values across the heap.
Package ksi implements functionality for interacting with the KSI service, including core functions such as signing data, and extending and verifying KSI signatures. Note that the following tutorial is incremental, meaning the parameter names used in example code blocks are defined in previous example blocks. The subpackage log defines the logging interface type log.Logger and a basic logger implementation for writing lines to a file. By default, logging is disabled. In order to enable logging of the API internals, a logger implementation has to be registered in the log package, e.g. by setting the default logger: In order to disable logging, set the logger to nil. Almost every method of the API returns an error parameter alongside a value (if applicable). All returned errors are of type errors.KsiError. For troubleshooting, the KsiError provides the following information: Example usage of the KsiError: It is strongly advised to verify the returned error. In case it is not nil, it most probably indicates a fatal state and requires some sort of recovery logic. Furthermore, all foreseen panics are wrapped into KsiError and returned via a function's error return parameter. For simplicity, the error handling in this tutorial is mostly omitted. A signature instance can be created in several ways by providing a suitable initializer type to the signature constructor. The following initializers are quite straightforward and do not require further explanation. However, the following initializers will be explained in more detail. A low-level signing (aggregation) request is answered by the Aggregator server with an aggregation response. In order to initialize a signature instance from an aggregation response use: A low-level extending request is answered by the Extender server with an extending response. In order to initialize a signature instance from an extending response use: In order to initialize a signature instance from a locally aggregated tree use: For a more detailed description of the initializers, refer to the individual documentation. Note that signature.BuildNoVerify must be used with care as the returned signature instance will not be verified for internal consistency. The common use case would be to initialize an erroneous KSI signature for troubleshooting. To save the signature to a file or database, the signature content has to be serialized first. Let's assume the data is provided by an io.Reader implementation (e.g. os.File). KSI defines an imprint structure, which basically represents a hash value and consists of a one-octet hash function identifier concatenated with the hash value itself. The subpackage hash provides such a structure as the type hash.Imprint. As only the hash of the original document is signed, we need to create a hash.Imprint object. This can be achieved by using a hash.DataHasher object. It can be created from any registered hash algorithm. We will use hash.Default. For more detailed information about hash algorithms and hashing, see the subpackage hash documentation. A publications file of type publications.File can be constructed using the publications.NewFile() method with an appropriate initializer of type publications.FileBuilder. A more common use case would be to construct a publications file handler by calling publications.NewFileHandler() with the desired options of type publications.FileHandlerSetting. To create a new KSI signature for a document hash, a new service.Signer instance has to be constructed. Signing of multiple imprints can be performed in parallel using goroutines. 
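A hedged sketch of the document-hashing step described above; the hash.Default.New and Imprint calls, and the assumption that DataHasher implements io.Writer, follow the pattern of the hash subpackage as recalled here and should be verified against the actual goksi API:

    package main

    import (
        "fmt"
        "io"
        "os"

        "github.com/guardtime/goksi/hash"
    )

    func main() {
        f, err := os.Open("document.txt")
        if err != nil {
            panic(err)
        }
        defer f.Close()

        // Create a DataHasher from the default registered hash algorithm.
        hasher, err := hash.Default.New()
        if err != nil {
            panic(err)
        }

        // Only the hash of the document is signed, so stream the file into
        // the hasher rather than loading it into memory.
        if _, err := io.Copy(hasher, f); err != nil {
            panic(err)
        }

        // The imprint is the one-octet algorithm identifier followed by the digest.
        imprint, err := hasher.Imprint()
        if err != nil {
            panic(err)
        }
        fmt.Println(imprint)
    }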
To extend an existing KSI signature, a new service.Extender instance has to be constructed. Extending of multiple signatures can be performed in parallel using goroutines. Signatures are verified according to one or more policies. A verification policy is a set of ordered rules that verify relevant signature properties. Verifying a signature according to a policy results in one of three possible outcomes: The SDK provides the following predefined policies for verification: Internal policy. This policy verifies the consistency of various internal components of the signature without requiring any additional data from the user. The verified components are the aggregation chain, calendar chain (optional), calendar authentication record (optional) and publication record (optional). Additionally, if a document hash is provided, the signature is verified against it. User-provided publication string based policy. This policy verifies the signature's publication record against the publication string. If necessary (and permitted), the signature is extended to the user publication. For conclusive results the signature must either contain a publication record with a suitable publication or signature extending must be allowed. Additionally, a publication string must be provided and an Extender should be configured (in case extending is permitted). Publications file based policy. This policy verifies the signature's publication record against a publication in the publications file. If necessary (and permitted), the signature is extended to the publication. For conclusive results the signature must either contain a publication record with a suitable publication or signature extending must be allowed. Additionally, a publications file must be provided for lookup and an Extender should be configured (in case extending is permitted). Key-based policy. This policy verifies the PKI signature and calendar chain data in the calendar authentication record of the signature. For conclusive results, a calendar hash chain and calendar authentication record must be present in the signature. A trusted publications file must be provided for performing a lookup of a matching certificate. Calendar-based policy. This policy verifies the signature's calendar hash chain against the calendar database. If the calendar hash chain does not exist, the signature is extended to the calendar head and its match with the received calendar hash chain is verified. For conclusive results the Extender must be configured. Note that the input signature is not changed. Default policy. This policy uses the previously mentioned policies in the specified order. Verification starts off with internal verification and, if successful, continues with publication-based and/or key-based verification, depending on the availability of the calendar chain, calendar authentication record or publication record in the signature. The default policy tries all available verification policies until the signature's correctness is proved or disproved, and is thus the recommended policy for verification unless some restriction dictates the use of a specific verification policy. Note that all of the policies perform internal verification as a prerequisite to the specific verification, and a policy will never result in a success if internal verification fails. Note that the provided signature is never modified. In case any verification step requires signature extending, only the extended calendar hash chain is retrieved from the Extender service and used for further validation. 
For the most basic verification, the returned error parameter of signature.(Signature).Verify() can be checked. However, the result will most probably be an error because of a lack of essential data. The key to conclusive verification is to provide as much data as possible without assuming too much from the signature itself. For most cases this means that a publications file (or handler) and an Extender should be provided. In some cases a permission for using the Extender has to be set as well. If the signature needs to be verified against a specific publication, a publication string has to be provided, etc. In order to specify optional parameters, signature.VerCtxOption should be used: Note that the constructor of a new signature object (signature.New()) will perform verification based on the Internal policy by default, unless signature.BuildNoVerify is used. For a detailed verification result, signature.(Policy).Verify() can be used. In this case a verification context must be set up first. To use a proxy, you need to configure the proxy on your operating system. Set the system environment variable: `http_proxy=user:pass@server:port` In the Windows Control Panel: On Linux, add the system variable to `/etc/bashrc`: Configuring authentication is not supported by the Windows Control Panel and Registry. A more redundant connection to the gateway can be achieved using the HA feature of the service package. The HA service combines multiple other services, sends requests to all of them in parallel and gives back the first successful one. To configure an HA service, you have to wrap the individual service endpoint configuration options into the service.OptHighAvailability option. Further interaction with the constructed haSigner is exactly the same as with the basic signer described in previous chapters. The example shows the configuration of an HA signer. However, similar steps apply to the Extender service configuration as well. This product includes the package github.com/fullsailor/pkcs7.