Package flow is a helper library around "iter.Seq" types. The library is intended to provide the wheels missing from the standard library and/or "x/exp/xiter". For example, "Empty" and "Pack" are provided to build sequences of zero and one item, and the boolean short-circuits "Any" and "All" are also provided. "Map", "Filter" and "Reduce" are not provided, since those are planned for "x/exp/xiter"; likewise, transformations from/to slices and maps are in the standard library "slices" and "maps" packages. Every function Xxx comes with an Xxx2 version to cover usage with both "iter.Seq" and "iter.Seq2", where reasonable. Functions with an immediate transformation, e.g. by key, are not provided, since users can already achieve that with another "Map" operation. Hopefully someday tuples will be usable as a primitive generic type, so we won't have to write all these Xxx2 variants.
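A minimal sketch of such helpers on top of "iter.Seq" (illustrative only; the real package may differ in detail):

    package main

    import (
        "fmt"
        "iter"
    )

    // Empty returns a sequence that yields no items.
    func Empty[T any]() iter.Seq[T] {
        return func(yield func(T) bool) {}
    }

    // Pack returns a sequence that yields exactly one item.
    func Pack[T any](v T) iter.Seq[T] {
        return func(yield func(T) bool) { yield(v) }
    }

    // Any reports whether any item satisfies pred, stopping early.
    func Any[T any](seq iter.Seq[T], pred func(T) bool) bool {
        for v := range seq {
            if pred(v) {
                return true
            }
        }
        return false
    }

    func main() {
        fmt.Println(Any(Pack(3), func(n int) bool { return n > 2 })) // true
    }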
Package codec provides a High Performance, Feature-Rich Idiomatic Go 1.4+ codec/encoding library for binc, msgpack, cbor, json. Supported Serialization formats are: To install: This package will carefully use 'unsafe' for performance reasons in specific places. You can build without unsafe use by passing the safe or appengine tag, i.e. 'go install -tags=safe ...'. Note that unsafe is only supported for the last 3 go sdk versions, e.g. if the current go release is go 1.9, we support unsafe use only from go 1.7+. This is because supporting unsafe requires knowledge of implementation details. For detailed usage information, read the primer at http://ugorji.net/blog/go-codec-primer . The idiomatic Go support is as seen in other encoding packages in the standard library (i.e. json, xml, gob, etc.). Rich Feature Set includes: Users can register a function to handle the encoding or decoding of their custom types. There are no restrictions on what the custom type can be. Some examples: As an illustration, MyStructWithUnexportedFields would normally be encoded as an empty map because it has no exported fields, while UUID would be encoded as a string. However, with extension support, you can encode any of these however you like. This package maintains symmetry in the encoding and decoding halves. We determine how to encode or decode by walking this decision tree. This symmetry is important to reduce the chance of issues arising from the encoding and decoding sides being out of sync, e.g. decoded via the very specific encoding.TextUnmarshaler but encoded via the kind-specific generalized mode. Consequently, if a type only defines one half of the symmetry (e.g. it implements UnmarshalJSON() but not MarshalJSON()), then that type doesn't satisfy the check and we will continue walking down the decision tree. RPC Client and Server Codecs are implemented, so the codecs can be used with the standard net/rpc package. The Handle is SAFE for concurrent READ, but NOT SAFE for concurrent modification. The Encoder and Decoder are NOT safe for concurrent use. Consequently, the usage model is basically: Sample usage model: To run tests, use the following: To run the full suite of tests, use the following: You can use the tag 'safe' to run tests or build in safe mode, e.g. Please see http://github.com/ugorji/go-codec-bench . Struct fields matching the following are ignored during encoding and decoding. Every other field in a struct will be encoded/decoded. Embedded fields are encoded as if they exist in the top-level struct, with some caveats. See the Encode documentation.
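A minimal usage sketch following the model above (one shared Handle per format, a fresh Encoder and Decoder per use):

    package main

    import (
        "bytes"
        "fmt"

        "github.com/ugorji/go/codec"
    )

    func main() {
        // A Handle is safe for concurrent reads, so share one per format.
        var h codec.JsonHandle

        // Encoders and Decoders are NOT safe for concurrent use:
        // create one per use (or pool them).
        var buf bytes.Buffer
        enc := codec.NewEncoder(&buf, &h)
        if err := enc.Encode(map[string]int{"answer": 42}); err != nil {
            panic(err)
        }

        var out map[string]int
        dec := codec.NewDecoder(&buf, &h)
        if err := dec.Decode(&out); err != nil {
            panic(err)
        }
        fmt.Println(out["answer"]) // 42
    }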
Package form implements primitives that reduce form boilerplate by allowing the caller to specify their fields exactly once. All values are processed via a chain of transformations that map text into a structured value, and vice versa. Each transformation is encapsulated in a `form.Value` implementation; for instance, a `value.Int` will transform text into a Go integer and signal any errors that occur during that transformation. Forms are initialized once with all their fields via a call to `form.Load`. Each field binds an input to a value. By convention, value objects depend on pointer variables, which means you can simply point into a predefined "model" struct. Once the form is submitted, the model will contain the validated values ready to use. However, this is only a convention; a value object can handle its internal state arbitrarily. The following is an example of one way to use the form:
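Since the example itself is not reproduced here, the following is a hypothetical sketch of the pattern described above; the interface and type names are illustrative stand-ins, not the package's actual API:

    package main

    import "fmt"

    // model is a predefined struct the form's values point into.
    type model struct {
        Age int
    }

    // Value stands in for the form.Value transformation interface:
    // it maps text into a structured value and reports errors.
    type Value interface {
        Set(text string) error
    }

    // intValue mimics a value.Int: it parses text into the int it
    // points to, following the pointer-variable convention above.
    type intValue struct{ p *int }

    func (v intValue) Set(text string) error {
        _, err := fmt.Sscanf(text, "%d", v.p)
        return err
    }

    func main() {
        var m model
        age := intValue{&m.Age} // the field is specified exactly once
        if err := age.Set("42"); err != nil {
            panic(err)
        }
        fmt.Println(m.Age) // the model now holds the validated value
    }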
gomrjob - a Go library for Hadoop MapReduce jobs. It provides a lightweight framework for writing map and reduce steps, as well as a Runner that submits jobs and puts the steps together.
Package esquery provides a non-obtrusive, idiomatic and easy-to-use query and aggregation builder for the official Go client (https://github.com/elastic/go-elasticsearch) for the ElasticSearch database (https://www.elastic.co/products/elasticsearch). esquery alleviates the need to use extremely nested maps (map[string]interface{}) and to serialize queries to JSON manually. It also helps eliminate common mistakes such as misspelling query types, as everything is statically typed. Using `esquery` can make your code much easier to write, read and maintain, and can significantly reduce its size. esquery provides a method chaining-style API for building and executing queries and aggregations. It does not wrap the official Go client, nor does it require you to change your existing code in order to integrate the library. Queries can be built directly with `esquery` and executed by passing an `*elasticsearch.Client` instance (with optional search parameters). Results are returned as-is from the official client (e.g. `*esapi.Response` objects). Getting started is extremely simple: esquery currently supports version 7 of the ElasticSearch Go client. The library cannot currently generate "short queries". For example, whereas ElasticSearch can accept this: { "query": { "term": { "user": "Kimchy" } } } The library will always generate the longer, fully-qualified form. This is also true for queries such as "bool", where fields like "must" can either receive one query object or an array of query objects. `esquery` will generate an array even if there's only one query object.
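As an illustration of the method-chaining style, here is a sketch based on the package's documented usage (the index name and field are placeholders, and minor builder details may differ):

    package main

    import (
        "context"
        "log"

        "github.com/aquasecurity/esquery"
        elasticsearch "github.com/elastic/go-elasticsearch/v7"
    )

    func main() {
        es, err := elasticsearch.NewDefaultClient()
        if err != nil {
            log.Fatal(err)
        }

        // Build a statically-typed term query and execute it; the
        // response comes back as-is from the official client.
        res, err := esquery.Search().
            Query(esquery.Term("user", "Kimchy")).
            Run(es,
                es.Search.WithContext(context.Background()),
                es.Search.WithIndex("users"),
            )
        if err != nil {
            log.Fatal(err)
        }
        defer res.Body.Close()
    }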
Package warnings implements mechanisms for capturing diagnostics using context.Context. It is designed to provide an easy way to capture warnings without modifying existing function signatures. To start capturing warnings, attach a Collector to the context using the Attach function. This creates a new context with the collector attached; all warnings written to that context will be captured. Use the Warn or Warnf functions to write warnings to the context. To read all the warnings from the collector, use the ReadAll function, or use the Scanner function to read warnings one by one. If you need a new context that no longer collects warnings, use the Detach function. Use the Map, Filter, Reduce or Tap helper functions to apply transformations, filters or side effects to the warnings. The Example demonstrates how to use the warnings package to read and write warnings.
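The mechanism can be sketched as follows; the Collector layout and the Attach/Warnf signatures here are hypothetical stand-ins for the package's actual API:

    package main

    import (
        "context"
        "fmt"
    )

    // Collector accumulates warnings written to a context.
    type Collector struct{ warnings []string }

    type ctxKey struct{}

    // Attach returns a new context with a Collector attached.
    func Attach(ctx context.Context) (context.Context, *Collector) {
        c := &Collector{}
        return context.WithValue(ctx, ctxKey{}, c), c
    }

    // Warnf writes a warning to the context's collector, if present,
    // so deeply nested callees need no signature changes.
    func Warnf(ctx context.Context, format string, args ...any) {
        if c, ok := ctx.Value(ctxKey{}).(*Collector); ok {
            c.warnings = append(c.warnings, fmt.Sprintf(format, args...))
        }
    }

    func main() {
        ctx, c := Attach(context.Background())
        Warnf(ctx, "disk usage at %d%%", 91)
        fmt.Println(c.warnings) // read back all captured warnings
    }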
Package par provides utilities for parallelizing computations. Most implementations are built on parallelization via partitioning, i.e. data is divided into partitions, the partitions are mapped to intermediate representations in parallel, then the intermediate representations are combined (reduced) in parallel (where possible) to produce the desired result. This approach to parallelization provides a few key benefits: As with every performance-oriented tool, measure before applying. Most of the provided functionality is only beneficial if the datasets are large enough or the computations are expensive.
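To make the partition/map/reduce shape concrete, here is a plain-Go sketch of the approach (illustrative of the technique only, not this package's API):

    package main

    import (
        "fmt"
        "sync"
    )

    // parallelSum partitions data, maps each partition to a partial sum
    // in parallel, then reduces the partials into the final result.
    func parallelSum(data []int, partitions int) int {
        partials := make([]int, partitions)
        size := (len(data) + partitions - 1) / partitions
        var wg sync.WaitGroup
        for p := 0; p < partitions; p++ {
            lo := p * size
            hi := min(lo+size, len(data))
            if lo >= hi {
                continue
            }
            wg.Add(1)
            go func(p, lo, hi int) { // map one partition
                defer wg.Done()
                for _, v := range data[lo:hi] {
                    partials[p] += v
                }
            }(p, lo, hi)
        }
        wg.Wait()
        total := 0
        for _, v := range partials { // reduce the intermediates
            total += v
        }
        return total
    }

    func main() {
        fmt.Println(parallelSum([]int{1, 2, 3, 4, 5, 6, 7, 8}, 3)) // 36
    }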
Package codec provides a High Performance, Feature-Rich Idiomatic Go 1.4+ codec/encoding library for binc, msgpack, cbor, json. Supported Serialization formats are: This package will carefully use 'package unsafe' for performance reasons in specific places. You can build without unsafe use by passing the safe or appengine tag, i.e. 'go install -tags=safe ...'. Note that unsafe is only supported for the last 4 go releases, e.g. if the current go release is go 1.12, we support unsafe use only from go 1.9+. This is because supporting unsafe requires knowledge of implementation details. For detailed usage information, read the primer at http://ugorji.net/blog/go-codec-primer . The idiomatic Go support is as seen in other encoding packages in the standard library (i.e. json, xml, gob, etc.). Rich Feature Set includes: Users can register a function to handle the encoding or decoding of their custom types. There are no restrictions on what the custom type can be. Some examples: As an illustration, MyStructWithUnexportedFields would normally be encoded as an empty map because it has no exported fields, while UUID would be encoded as a string. However, with extension support, you can encode any of these however you like. This package maintains symmetry in the encoding and decoding halves. We determine how to encode or decode by walking this decision tree. This symmetry is important to reduce the chance of issues arising from the encoding and decoding sides being out of sync, e.g. decoded via the very specific encoding.TextUnmarshaler but encoded via the kind-specific generalized mode. Consequently, if a type only defines one half of the symmetry (e.g. it implements UnmarshalJSON() but not MarshalJSON()), then that type doesn't satisfy the check and we will continue walking down the decision tree. RPC Client and Server Codecs are implemented, so the codecs can be used with the standard net/rpc package. The Handle is SAFE for concurrent READ, but NOT SAFE for concurrent modification. The Encoder and Decoder are NOT safe for concurrent use. Consequently, the usage model is basically: Sample usage model: To run tests, use the following: To run the full suite of tests, use the following: You can use the tag 'safe' to run tests or build in safe mode, e.g. For running benchmarks, please see http://github.com/ugorji/go-codec-bench . This package adds some size to any binary that depends on it, because it includes an auto-generated file, `fast-path.generated.go`, to help with performance when encoding/decoding slices and maps of built-in numeric, boolean, string and interface{} types. Prior to 2019-05-16, this package could add about 11MB to the size of your binaries. We have now trimmed that in half, and the package contributes about 5.5MB. You can override this by building (or running tests and benchmarks) with the tag `notfastpath`. With the tag `notfastpath`, we trim that size to about 2.9MB. Be aware that, at least in our representative microbenchmarks for cbor (for example), passing the `notfastpath` tag causes up to a 33% slowdown in decoding and a 50% slowdown in encoding. YMMV. Struct fields matching the following are ignored during encoding and decoding. Every other field in a struct will be encoded/decoded. Embedded fields are encoded as if they exist in the top-level struct, with some caveats. See the Encode documentation.
Command yy processes yacc source code and produces three output files: - A Go file containing definitions of AST nodes. - A Go file containing documentation examples[0] of productions defined by the yacc grammar. - A new yacc file with automatic actions instantiating the AST nodes. To install yy: http://godoc.org/modernc.org/yy Invocation: Flags handled by the yy command: 2017-10-23: Added the case directive. A partial example: see the testdata directory and files. The three output files were generated by A more complete, working project using yy can be found at http://godoc.org/modernc.org/pl0 Every rule is turned into a definition of a struct type in ast.go (adjust using the -ast flag). The fields of the type are a sum of all productions (cases) of the rule. The generated type will be something like In the above, the Foo and Bar fields will be non-nil when Case is 0, and the Foo and Baz fields will be non-nil when Case is 1. The above holds when both Foo and Bar are non-terminal symbols. If the production(s) also contain terminal symbols, all those symbols are turned into fields named Token, with an optional numeric suffix when more than one terminal appears in any of the production(s). The generated type will be like In the above, Token will capture '+' when Case is 0. For Case 1, Token will capture '[', Token2 NUMBER and Token3 ']'. MyTokenType is the type defined in the yacc %union as in It is assumed that the lexer passed as an argument to yyParse instantiates the lval.Token field with additional token information, like the lexeme value, starting position in the file, etc. There's a direct mapping, though not in the same order, between the yacc pseudo-variables $1, $2, ... and the fields of the generated node types. For every production not disabled by the yy:ignore directive, yy injects code for instantiating the AST node when the production is reduced. For example, this rule from input.y, having no semantic action, is turned into in output.y. The default yacc type of AST nodes is 'node' and can be changed using the -node flag. Option-like rules, for example as in in output.y, i.e. the empty case, do not produce a &RuleOpt{}, but nil instead, to conserve space. Generated examples depend on a user-supplied function, by default named exampleAST, with a signature This function is called with the production number, as assigned by goyacc, and an example string generated by yy. exampleAST should parse the example string and return the AST created when the production rule is reduced. When the project's parser is not yet working, a dummy exampleAST function that always returns nil is a workaround. yy inspects rule actions found in the input file. If the action code mentions the identifier lx, yy assumes it refers to the yyLexer passed to yyParse. In that case code like is injected near the beginning of the semantic action. The specific type into which the yylex parameter is type-asserted is adjustable using the -yylex flag. Similarly, when the identifier lhs is mentioned, a short variable definition of variable lhs, like is injected into the output.y action, replacing the default generated action (see "Concepts"). For example, an action in input.y produces in output.y.
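Reconstructing from the description above, a generated node type looks roughly like this (a sketch; the actual generated code may differ):

    package ast

    // Bar, Baz and MyTokenType stand in for other generated node types
    // and for the token type declared in the yacc %union.
    type (
        Bar         struct{}
        Baz         struct{}
        MyTokenType struct{ Lexeme string }
    )

    // Foo is the kind of type yy generates in ast.go for rule Foo: the
    // fields are the sum of all productions (cases) of the rule, and
    // Case records which production was reduced.
    type Foo struct {
        Case   int
        Foo    *Foo         // non-nil for Case 0 and Case 1
        Bar    *Bar         // non-nil for Case 0
        Baz    *Baz         // non-nil for Case 1
        Token  *MyTokenType // captures a terminal, e.g. '+' or '['
        Token2 *MyTokenType // further terminals get numeric suffixes
        Token3 *MyTokenType
    }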
The AST examples generator depends on the presence of the yy:token directive for all non-constant terminal symbols, or on the presence of the constant token value as in this example The AST examples yy generates must be post-processed using the fe command (http://godoc.org/modernc.org/fe), for example One of the reasons why this is not done automatically by yy is that the above command will succeed only after your project has a _working_ scanner/parser combination. That's not the case in the early stages. yy recognizes specially formatted comments within the input as directives. All directives have the format Note that the directive must immediately follow the comment opening. There must be no empty line(s) between the directive and the production it applies to. For example The argument of the example directive is a double-quoted Go string. The string is used instead of an automatically generated example. For example The argument of the field directive is the text up to the end of the comment. The argument is added to the automatically generated fields of the node type of Rule. For example The ignore directive has no arguments. The directive disables generating of the node type of Rule as well as generating code instantiating such a node. For example The list directive has no arguments. yy by default detects all left-recursive rules. When such a rule has a name with the suffix 'List', yy automatically generates proper reversing of the rule items. Using the list directive enables the same when such a left-recursive rule does not have the suffix 'List' in its name. For example The argument of the token directive is a double-quoted Go string. The string is passed to a fmt.Sprintf call with a numeric argument chosen by yy that falls within the range of small ASCII letters. The resulting string is used to generate textual token values in examples. For example The argument of the case directive is an identifier, which is appended to the rule name to produce a symbolic and typed case number value. The type name is <RuleName>Case.
Package store provides a disk-backed data structure for use in storing []byte values referenced by 128 bit keys, with options for replication. It can handle billions of keys (as memory allows) and full concurrent access across many cores. All location information about each key is stored in memory for speed, but values are stored on disk, with the exception of recently written data being buffered first and batched to disk later. This has been written with SSDs in mind, but spinning drives should work also; though storing toc files (Table Of Contents, key location information) on a separate disk from values files is recommended in that case. Each key consists of two uint64 values, known as keyA and keyB. These are usually created by a hashing function of the key name, but that duty is left outside this package. Each modification is recorded with an int64 timestamp that is the number of microseconds since the Unix epoch (see github.com/gholt/brimtime.TimeToUnixMicro). With a write and delete for the exact same timestamp, the delete wins. This allows a delete to be issued for a specific write without fear of deleting any newer write. Internally, each modification is stored with a uint64 timestamp that is equivalent to (brimtime.TimeToUnixMicro(time.Now())<<8), with the lowest 8 bits used to indicate deletions and other bookkeeping items. This means that the allowable time range is 1970-01-01 00:00:00 +0000 UTC (+1 microsecond, because all zeroes indicates a missing item) to 4253-05-31 22:20:37.927935 +0000 UTC. There are constants TIMESTAMPMICRO_MIN and TIMESTAMPMICRO_MAX available for bounding usage. There are background tasks for: * TombstoneDiscard: This will discard older tombstones (deletion markers). Tombstones are kept for Config.TombstoneAge seconds and are used to ensure a replicated older value doesn't resurrect a deleted value. But, keeping all tombstones for all time is a waste of resources, so they are discarded over time. Config.TombstoneAge controls how long they should be kept and should be set to an amount greater than several replication passes. * PullReplication: This will continually send out pull replication requests for all the partitions the ValueStore is responsible for, as determined by the Config.MsgRing. The other responsible parties will respond to these requests with data they have that was missing from the pull replication request. Bloom filters are used to reduce bandwidth, which has the downside that a very small percentage of items may be missed each pass. A moving salt is used with each bloom filter so that after a few passes there is an exceptionally high probability that all items will be accounted for. * PushReplication: This will continually send out any data for any partitions the ValueStore is *not* responsible for, as determined by the Config.MsgRing. The responsible parties will respond to these requests with acknowledgements of the data they received, allowing the requester to discard the out of place data. * Compaction: TODO description. * Audit: This will verify that the data on disk has not been corrupted. It will slowly read data over time and validate checksums. If it finds issues, it will try to remove the affected entries from the in-memory location map so that replication from other stores will send the information they have and the values will get re-stored locally.
In cases where the affected entries cannot be determined, it will make a callback requesting the store be shut down and restarted; this restart will result in the affected keys being missing and therefore replicated in by other stores. Note that if the disk gets filled past a configurable threshold, any external writes other than deletes will result in an error. Internal writes such as compaction and removing successfully push-replicated data will continue. There is also a modified form of ValueStore called GroupStore that expands the primary key to two 128 bit keys and offers a Lookup method which retrieves all matching items for the first key.
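The internal timestamp encoding described above can be sketched as follows (the deletion-flag bit chosen here is illustrative; the package reserves the low 8 bits for deletions and other bookkeeping):

    package main

    import (
        "fmt"
        "time"
    )

    // The low 8 bits of the internal uint64 timestamp hold bookkeeping
    // flags; which bit marks deletion is an assumption here.
    const deletionFlag uint64 = 1

    // packTimestamp shifts a microsecond Unix timestamp left by 8 bits
    // and ors in the bookkeeping flags.
    func packTimestamp(t time.Time, flags uint64) uint64 {
        return uint64(t.UnixMicro())<<8 | (flags & 0xff)
    }

    // unpackTimestamp recovers the microsecond timestamp and whether
    // the deletion flag is set.
    func unpackTimestamp(ts uint64) (micro uint64, deleted bool) {
        return ts >> 8, ts&deletionFlag != 0
    }

    func main() {
        ts := packTimestamp(time.Now(), deletionFlag)
        micro, deleted := unpackTimestamp(ts)
        fmt.Println(micro, deleted) // microseconds since the epoch, true
    }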
Package emacs contains infrastructure to write dynamic modules for Emacs in Go. See Emacs Dynamic Modules and Writing Dynamically-Loaded Modules for background on Emacs modules. To build an Emacs module, you have to build your Go code as a shared C library, e.g., using go build ‑buildmode=c‑shared. If you import the emacs package, the shared library is loadable as an Emacs module. This package contains high-level as well as lower-level functions. The high-level functions help reduce boilerplate when exporting functions to Emacs and calling Emacs functions from Go. The lower-level functions are more type-safe, support more exotic use cases, and have less overhead. At the highest level, use the Export function to export Go functions to Emacs, and the Import function to import Emacs functions so that they can be called from Go. These functions automatically convert between Go and Emacs types as necessary. This export functionality is unrelated to exported Go names or the Cgo export functionality. Functions exported to Emacs don’t have to be exported in the Go or Cgo sense. The automatic type conversion behaves as follows. Go bool values become the Emacs symbols nil and t. When converting to Go bool, only nil becomes false; any other value becomes true. This matches the Emacs convention that all non-nil values represent a logically true value. Go integral values become Emacs integer values and vice versa. Go floating-point values become Emacs floating-point values and vice versa. Go strings become Emacs strings and vice versa. Go []byte arrays and slices become Emacs unibyte strings. Emacs unibyte strings become Go []byte slices. Other Go arrays and slices become Emacs vectors. Emacs vectors become Go slices. Go maps become Emacs hash tables and vice versa. All types that implement In can be converted to Emacs. All types that implement Out can be converted from Emacs. You can implement In or Out yourself to extend the type conversion machinery. A reflect.Value behaves like its underlying value. Functions exported via Export don’t have a documentation string by default. To add one, pass a Doc value to Export. Since argument names aren’t available at runtime, the documentation by default lacks argument names. Use Usage to add argument names. As an alternative to Import, you can call functions directly using Env.Invoke. Env.Invoke uses the same autoconversion rules as Import, but allows you to specify an arbitrary function value. At a slightly lower level, you can use Env.Call and Env.CallOut to call Emacs functions. These functions use the In and Out interfaces to convert from and to Emacs values. The primary disadvantage of this approach is that you can’t use primitive types like int or string directly. Use wrapper types like Int and String instead. On the other hand, Env.Call and Env.CallOut are more type-safe than Env.Invoke. If you use Env.Call or Env.CallOut, the compiler will detect unsupported types. By contrast, when using Export, Import, or Env.Invoke, unsupported types will only be detected at runtime and cause runtime panics or errors. To reduce boilerplate when using Env.Call and Env.CallOut, this package contains several convenience types that implement In or Out. Most primitive types have corresponding wrapper types, such as Int, Float, or String. Types such as List, Cons, or Hash allow you to pass common Lisp structures without much boilerplate. There are also some destructuring types such as ListOut or Uncons.
At an even lower level, you can use ExportFunc, ImportFunc, and Env.Funcall as alternatives to Export, Import, and Env.Call, respectively. They have the same behavior, but don’t do any type conversion at all. The fundamental types for interacting with Emacs are Env and Value. They represent Emacs module environments and values as described in Writing Module Functions. These types are opaque, and their zero values are invalid. You can’t use Env and Value values once they are no longer live. This is described in Writing Module Functions and Conversion Between Lisp and Module Values. As a best practice, don’t let these values escape exported functions. You also can’t interact with Emacs from other threads, cf. Writing Module Functions. These rules are a bit subtle, but you are usually on the safe side if you don’t store Env and Value values in struct fields or global variables, and don’t pass them to other goroutines. All functions in this package translate between Go errors and Emacs nonlocal exits. See Nonlocal Exits in Modules. This package represents Emacs nonlocal exits as ordinary Go errors. Each call to a function fetches and clears nonlocal exit information after the actual call and converts it to an error of type Signal or Throw. This means that the Go bindings don’t exhibit the saturating error behavior described at Nonlocal Exits in Modules. Instead, they behave like normal Go functions: an erroneous return doesn’t affect future function calls. When returning from an exported function, this package converts errors back to Emacs nonlocal exits. If you return a Signal or Error, Emacs will raise a signal using the signal function. If you return a Throw, Emacs will throw to a catch using the throw function. If you return any other type of error, Emacs will signal an error of type go‑error, with the error string as signal data. You can define your own error symbols using DefineError. There are also a couple of factory functions for builtin errors such as WrongTypeArgument and OverflowError. You can use Var to define a dynamic variable. This package intentionally doesn’t support wrapping pointers to arbitrary Go values in Emacs user pointer objects. Attempting to do that wouldn’t work well with Go’s garbage collection and CGo’s pointer-passing rules; see Passing pointers. Instead, prefer using handles, e.g. simple integers as map keys. See the “Handles” example. A long-running operation should periodically call Env.ProcessInput to process pending input and to check whether the user wants to quit the operation. If so, you should cancel the operation as soon as possible. See the documentation of Env.ProcessInput for a concrete example. As an alternative, this package provides limited support for asynchronous operations. Such operations are represented using the AsyncHandle type. You can use the Async type to create and manage asynchronous operations. Async requires a way to notify Emacs about a pending asynchronous result; this package supports notification using pipes or sockets. If you want to run code while Emacs is loading the module, use OnInit to register initialization functions. Loading the module will call all initialization functions in order. You can use ERTTest to define ERT tests backed by Go functions. This works similarly to Export, but defines ERT tests instead of functions.
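A sketch of the high-level export flow (the import path and the exact Export/Doc call shape are assumptions based on the description above, not confirmed signatures):

    package main

    import (
        "math"

        emacs "github.com/phst/emacs" // assumed import path
    )

    // hypot is a plain Go function; Export wraps it so Emacs can call
    // it, converting the float arguments and result automatically.
    func hypot(x, y float64) float64 { return math.Sqrt(x*x + y*y) }

    func init() {
        // Doc attaches a documentation string, as described above; the
        // exact option mechanism is assumed here.
        emacs.Export(hypot, emacs.Doc("Return the hypotenuse of X and Y."))
    }

    // Build as a shared C library so Emacs can load it, e.g.:
    //   go build -buildmode=c-shared -o example-module.so
    func main() {}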
Package hashring implements a consistent hashing ring data structure. In general, consistent hashing is about mapping objects from a very large set of values (e.g. request IDs) to objects from a rather small set (e.g. server addresses). The word "consistent" means that it can produce consistent mappings on different machines or processes without additional state exchange and communication. For more theory about the subject, please see this great document: https://theory.stanford.edu/~tim/s16/l/l1.pdf There are two goals for this hashring implementation: 1) To be efficient in highly concurrent applications by blocking read operations for the least possible time. 2) To correctly handle very rare but still possible hash collisions, which might otherwise break your eventually consistent application. To reach the first goal, hashring uses an immutable AVL tree internally, so read operations (getting the item for an object) block only for the tiny amount of time needed to swap the ring's tree root after a write operation (insertion or deletion). The second goal is reached by using a ring of 2^64-1 points, which dramatically reduces the probability of hash collisions (the greater the number of items on the ring, the higher the probability of collisions), and by an implementation that handles collisions.
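The immutable-tree read path described above can be illustrated generically with an atomically swapped root pointer (a sketch of the technique, not this package's API):

    package main

    import (
        "fmt"
        "sync/atomic"
    )

    // node is an immutable tree node: updates build new nodes and swap
    // the root, so readers never block on writers.
    type node struct {
        key         uint64
        val         string
        left, right *node
    }

    type ring struct{ root atomic.Pointer[node] }

    // get walks the current snapshot of the tree without any locking.
    func (r *ring) get(key uint64) (string, bool) {
        for n := r.root.Load(); n != nil; {
            switch {
            case key < n.key:
                n = n.left
            case key > n.key:
                n = n.right
            default:
                return n.val, true
            }
        }
        return "", false
    }

    // insert copies the path to the new node; a real implementation
    // would also rebalance (AVL rotations). The old tree stays valid
    // for in-flight readers until the root is swapped.
    func insert(n *node, key uint64, val string) *node {
        if n == nil {
            return &node{key: key, val: val}
        }
        c := *n
        switch {
        case key < n.key:
            c.left = insert(n.left, key, val)
        case key > n.key:
            c.right = insert(n.right, key, val)
        default:
            c.val = val
        }
        return &c
    }

    func main() {
        var r ring
        r.root.Store(insert(r.root.Load(), 42, "server-a"))
        fmt.Println(r.get(42)) // server-a true
    }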
Pipeline is a functional programming package for the Go language. With Pipeline, developers can use functional principles such as map, reduce or filter on their collection types. Pipeline is written in Go and inspired by underscore.js, lodash.js and Martin Fowler's pipelines: http://martinfowler.com/articles/collection-pipeline/ author mparaiso <mparaiso@online.fr> copyrights 2014 license GPL-3.0 version 0.1

## Installing

- Install the Go language
- Use 'go get' with a command line interface

## Examples

### Counting words

```go
```

### Calculating the total cost of a customer order

```go
```

(A plain-Go sketch of the counting-words idea follows the list of pipelines below.)

## Implemented pipelines

- Chunk
- Compact
- Concat
- Difference
- Equals
- Every
- Filter
- First
- Flatten
- GroupBy
- Head
- IndexOf
- Intersection
- Last
- LastIndexOf
- Map
- Push
- Reduce
- ReduceRight
- Reverse
- Slice
- Some
- Sort
- Splice
- Tail
- ToMap
- Union
- Unique
- Unshift
- Without
- Xor
- Zip
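To ground the idea, here is a plain-Go sketch of the word-counting example in the map/filter/reduce style the operations above implement (illustrative only, not Pipeline's actual API):

```go
package main

import (
	"fmt"
	"strings"
)

// Map, Filter and Reduce as plain generic helpers, mirroring the
// pipeline operations listed above.
func Map[T, U any](in []T, f func(T) U) []U {
	out := make([]U, len(in))
	for i, v := range in {
		out[i] = f(v)
	}
	return out
}

func Filter[T any](in []T, keep func(T) bool) []T {
	var out []T
	for _, v := range in {
		if keep(v) {
			out = append(out, v)
		}
	}
	return out
}

func Reduce[T, U any](in []T, acc U, f func(U, T) U) U {
	for _, v := range in {
		acc = f(acc, v)
	}
	return acc
}

func main() {
	words := strings.Fields("The quick brown fox and the lazy dog")
	lower := Map(words, strings.ToLower)
	noStop := Filter(lower, func(w string) bool { return w != "the" && w != "and" })
	count := Reduce(noStop, 0, func(n int, _ string) int { return n + 1 })
	fmt.Println(count) // 5
}
```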
Package rel implements relational algebra, a set of operations on sets of tuples which result in relations, as defined by E. F. Codd. What follows is a brief introduction to relational algebra. For a more complete introduction, please read C. J. Date's book "Database in Depth". This package uses the same terminology. Relations are sets of named tuples with identical attributes. The primitive operations which define the relational algebra are: Union, which adds two sets together. Diff, which removes all elements from one set which exist in another. Restrict, which removes values from a relation that do not satisfy a predicate. Project, which removes zero or more attributes from the tuples the relation is defined on. Rename, which changes the names of the attributes in a relation. Join, which can multiply two relations together (which may have different types of tuples) by returning all combinations of tuples in the two relations where all attributes in one relation are equal to the attributes in the other where the names are the same. This is sometimes called a natural join. This package represents tuples as structs with no unexported or anonymous fields. The fields of the struct are the attributes of the tuple it represents. Attributes are strings with some additional methods that are useful for constructing predicates and candidate keys. They have to be valid field names in Go. Predicates are functions which take a tuple and return a boolean, and are used as an input for Restrict expressions. Candidate keys are the sets of attributes which define unique tuples in a relation. Every relation has at least one candidate key, because every relation only contains unique tuples. Some relations may contain several candidate keys. Relations in this package can be either literal, such as a relation from a map of tuples, or an expression of other relations, such as a join between two source relations. Literal relations can be defined using the rel.New function. Given a slice, map, or channel of tuples, the New function constructs a new "essential" relation, with those values as tuples. Other packages can create essential relations from other sources of data, such as the github.com/jonlawlor/relcsv package, or the github.com/jonlawlor/relsql package. Relational expressions are generated when one of the methods Project, Restrict, Union, Diff, Join, Rename, Map, or GroupBy is called. During their construction, the rel package checks to see if they can be distributed over the source relations that they are being called on, and if so, it attempts to push the expressions down the tree of relations as far as they can go, with the end goal of getting pushed all the way to the "essential" source relations. In this way, relational expressions can (hopefully) reduce the amount of computation done in total and/or done in the Go runtime.
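A sketch of how a literal relation might be constructed (the rel.New candidate-key argument and the struct layout here are assumptions for illustration, not confirmed signatures):

    package main

    import (
        "fmt"

        "github.com/jonlawlor/rel"
    )

    // supplier is a tuple type: the exported fields are its attributes.
    type supplier struct {
        SNO    int
        SName  string
        Status int
    }

    func main() {
        tuples := []supplier{
            {1, "Smith", 20},
            {2, "Jones", 10},
        }
        // Construct an "essential" relation; SNO is assumed to be the
        // candidate key that uniquely identifies each tuple.
        suppliers := rel.New(tuples, [][]string{{"SNO"}})

        // Expressions such as Restrict (with a predicate over supplier)
        // or Project would be built from this relation and pushed down
        // toward the essential source where possible.
        _ = suppliers
        fmt.Println("constructed a relation over", len(tuples), "tuples")
    }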