Package esquery provides a non-obtrusive, idiomatic and easy-to-use query and aggregation builder for the official Go client (https://github.com/elastic/go-elasticsearch) for the ElasticSearch database (https://www.elastic.co/products/elasticsearch). esquery alleviates the need to use extremely nested maps (map[string]interface{}) and serialize queries to JSON manually. It also helps eliminate common mistakes such as misspelling query types, as everything is statically typed. Using `esquery` can make your code much easier to write, read and maintain, and significantly reduce the amount of code you write. esquery provides a method chaining-style API for building and executing queries and aggregations. It does not wrap the official Go client nor does it require you to change your existing code in order to integrate the library. Queries can be directly built with `esquery`, and executed by passing an `*elasticsearch.Client` instance (with optional search parameters). Results are returned as-is from the official client (e.g. `*esapi.Response` objects). Getting started is extremely simple: esquery currently supports version 7 of the ElasticSearch Go client. The library cannot currently generate "short queries". For example, whereas ElasticSearch can accept this: { "query": { "term": { "user": "Kimchy" } } } The library will always generate this: This is also true for queries such as "bool", where fields like "must" can either receive one query object, or an array of query objects. `esquery` will generate an array even if there's only one query object.
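To illustrate what a builder spares you from, here is the raw nested-map construction of the "long form" term query above, using only the standard library. This is the approach esquery replaces, not its API, and the exact generated shape (an explicit "value" object) is an assumption based on the example:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// buildTermQuery builds the "long form" term query by hand, using the
// deeply nested maps that a typed builder lets you avoid.
func buildTermQuery(field, value string) map[string]interface{} {
	return map[string]interface{}{
		"query": map[string]interface{}{
			"term": map[string]interface{}{
				field: map[string]interface{}{"value": value},
			},
		},
	}
}

// termQueryJSON serializes the query to JSON, the second manual step a
// builder removes.
func termQueryJSON(field, value string) string {
	b, _ := json.Marshal(buildTermQuery(field, value))
	return string(b)
}

func main() {
	fmt.Println(termQueryJSON("user", "Kimchy"))
	// {"query":{"term":{"user":{"value":"Kimchy"}}}}
}
```

Note that a typo in any of the string keys ("term", "query") would only surface at query time; with a typed builder it would not compile.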
Package par provides utilities for parallelizing computations. Most implementations are built on parallelization via partitioning, i.e. data is divided into partitions, the partitions are mapped to intermediate representations in parallel, then the intermediate representations are combined (reduced) in parallel (where possible) to produce the desired result. This approach to parallelization provides a few key benefits: As with every performance-oriented tool, measure before applying. Most of the provided functionality is only beneficial if the datasets are large enough or the computations are expensive.
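The partition/map/reduce scheme described above can be sketched with a parallel sum (plain Go, not this package's API): the data is divided into partitions, each partition is summed in its own goroutine, and the partial sums are combined at the end.

```go
package main

import (
	"fmt"
	"sync"
)

// parallelSum splits xs into nparts partitions, sums each partition
// concurrently (the "map" step), then combines the partial sums (the
// "reduce" step).
func parallelSum(xs []int, nparts int) int {
	if nparts < 1 {
		nparts = 1
	}
	partials := make([]int, nparts)
	size := (len(xs) + nparts - 1) / nparts
	var wg sync.WaitGroup
	for p := 0; p < nparts; p++ {
		lo := p * size
		if lo >= len(xs) {
			break
		}
		hi := lo + size
		if hi > len(xs) {
			hi = len(xs)
		}
		wg.Add(1)
		go func(p, lo, hi int) {
			defer wg.Done()
			for _, x := range xs[lo:hi] {
				partials[p] += x // each goroutine writes only its own slot
			}
		}(p, lo, hi)
	}
	wg.Wait()
	total := 0
	for _, s := range partials {
		total += s
	}
	return total
}

func main() {
	fmt.Println(parallelSum([]int{1, 2, 3, 4, 5, 6, 7, 8}, 3)) // 36
}
```

As the package docs note, the goroutine overhead only pays off when the data is large or the per-element work is expensive; measure first.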
Package store provides a disk-backed data structure for use in storing []byte values referenced by 128 bit keys with options for replication. It can handle billions of keys (as memory allows) and full concurrent access across many cores. All location information about each key is stored in memory for speed, but values are stored on disk, with the exception of recently written data, which is buffered first and batched to disk later. This has been written with SSDs in mind, but spinning drives should work as well; though in that case storing toc files (Table Of Contents, key location information) on a separate disk from values files is recommended. Each key is a pair of 64-bit values, known as keyA and keyB. These are usually created by a hashing function of the key name, but that duty is left outside this package. Each modification is recorded with an int64 timestamp that is the number of microseconds since the Unix epoch (see github.com/gholt/brimtime.TimeToUnixMicro). With a write and delete for the exact same timestamp, the delete wins. This allows a delete to be issued for a specific write without fear of deleting any newer write. Internally, each modification is stored with a uint64 timestamp that is equivalent to (brimtime.TimeToUnixMicro(time.Now())<<8) with the lowest 8 bits used to indicate deletions and other bookkeeping items. This means that the allowable time range is 1970-01-01 00:00:00 +0000 UTC (+1 microsecond because all zeroes indicates a missing item) to 4253-05-31 22:20:37.927935 +0000 UTC. There are constants TIMESTAMPMICRO_MIN and TIMESTAMPMICRO_MAX available for bounding usage. There are background tasks for: * TombstoneDiscard: This will discard older tombstones (deletion markers). Tombstones are kept for Config.TombstoneAge seconds and are used to ensure a replicated older value doesn't resurrect a deleted value. But, keeping all tombstones for all time is a waste of resources, so they are discarded over time. 
Config.TombstoneAge controls how long they should be kept and should be set to a duration greater than several replication passes. * PullReplication: This will continually send out pull replication requests for all the partitions the ValueStore is responsible for, as determined by the Config.MsgRing. The other responsible parties will respond to these requests with data they have that was missing from the pull replication request. Bloom filters are used to reduce bandwidth, which has the downside that a very small percentage of items may be missed each pass. A moving salt is used with each bloom filter so that after a few passes there is an exceptionally high probability that all items will be accounted for. * PushReplication: This will continually send out any data for any partitions the ValueStore is *not* responsible for, as determined by the Config.MsgRing. The responsible parties will respond to these requests with acknowledgements of the data they received, allowing the requester to discard the out of place data. * Compaction: TODO description. * Audit: This will verify the data on disk has not been corrupted. It will slowly read data over time and validate checksums. If it finds issues, it will try to remove the affected entries from the in-memory location map so that replication from other stores will send the information they have and the values will get re-stored locally. In cases where the affected entries cannot be determined, it will make a callback requesting the store be shutdown and restarted; this restart will result in the affected keys being missing and therefore replicated in by other stores. Note that if the disk gets filled past a configurable threshold, any external writes other than deletes will result in an error. Internal writes such as compaction and removing successfully push-replicated data will continue. 
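The internal timestamp encoding described above (the external microsecond timestamp shifted left 8 bits, with the lowest 8 bits reserved for deletion markers and other bookkeeping) can be sketched as follows. Which particular low bit marks a deletion is an assumption for illustration:

```go
package main

import "fmt"

// deletionBit is assumed here to be one of the 8 low bookkeeping bits; the
// actual bit assignments are internal to the store package.
const deletionBit = 1

// internalTimestamp packs a microseconds-since-epoch value and a deletion
// flag into the uint64 form described in the package docs.
func internalTimestamp(unixMicro int64, deleted bool) uint64 {
	t := uint64(unixMicro) << 8
	if deleted {
		t |= deletionBit
	}
	return t
}

// externalTimestamp recovers the external microsecond timestamp by
// discarding the bookkeeping bits.
func externalTimestamp(internal uint64) int64 {
	return int64(internal >> 8)
}

func main() {
	w := internalTimestamp(1_500_000_000_000_000, false) // a write
	d := internalTimestamp(1_500_000_000_000_000, true)  // a delete, same micro
	fmt.Println(externalTimestamp(w) == externalTimestamp(d)) // true: same external time
	fmt.Println(d > w)                                        // true: at equal timestamps, the delete wins
}
```

Comparing the raw uint64 values also shows why "the delete wins" at an identical external timestamp: the deletion bit makes the internal value strictly larger.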
There is also a modified form of ValueStore called GroupStore that expands the primary key to two 128 bit keys and offers a Lookup method which retrieves all matching items for the first key.
Package emacs contains infrastructure to write dynamic modules for Emacs in Go. See Emacs Dynamic Modules and Writing Dynamically-Loaded Modules for background on Emacs modules. To build an Emacs module, you have to build your Go code as a shared C library, e.g., using go build ‑buildmode=c‑shared. If you import the emacs package, the shared library is loadable as an Emacs module. This package contains high-level as well as lower-level functions. The high-level functions help reduce boilerplate when exporting functions to Emacs and calling Emacs functions from Go. The lower-level functions are more type-safe, support more exotic use cases, and have less overhead. At the highest level, use the Export function to export Go functions to Emacs, and the Import function to import Emacs functions so that they can be called from Go. These functions automatically convert between Go and Emacs types as necessary. This export functionality is unrelated to exported Go names or the Cgo export functionality. Functions exported to Emacs don’t have to be exported in the Go or Cgo sense. The automatic type conversion behaves as follows. Go bool values become the Emacs symbols nil and t. When converting to Go bool, only nil becomes false; any other value becomes true. This matches the Emacs convention that all non-nil values represent a logically true value. Go integral values become Emacs integer values and vice versa. Go floating-point values become Emacs floating-point values and vice versa. Go strings become Emacs strings and vice versa. Go []byte arrays and slices become Emacs unibyte strings. Emacs unibyte strings become Go []byte slices. Other Go arrays and slices become Emacs vectors. Emacs vectors become Go slices. Go maps become Emacs hash tables and vice versa. All types that implement In can be converted to Emacs. All types that implement Out can be converted from Emacs. You can implement In or Out yourself to extend the type conversion machinery. 
A reflect.Value behaves like its underlying value. Functions exported via Export don’t have a documentation string by default. To add one, pass a Doc value to Export. Since argument names aren’t available at runtime, the documentation by default lacks argument names. Use Usage to add argument names. As an alternative to Import, you can call functions directly using Env.Invoke. Env.Invoke uses the same autoconversion rules as Import, but allows you to specify an arbitrary function value. At a slightly lower level, you can use Env.Call and Env.CallOut to call Emacs functions. These functions use the In and Out interfaces to convert from and to Emacs values. The primary disadvantage of this approach is that you can’t use primitive types like int or string directly. Use wrapper types like Int and String instead. On the other hand, Env.Call and Env.CallOut are more type-safe than [Invoke]. If you use [Call] or [CallOut], the compiler will detect unsupported types. By contrast, when using Export, Import, or [Invoke], they will only be detected at runtime and cause runtime panics or errors. To reduce boilerplate when using Env.Call and Env.CallOut, this package contains several convenience types that implement In or Out. Most primitive types have corresponding wrapper types, such as Int, Float, or String. Types such as List, Cons, or Hash allow you to pass common Lisp structures without much boilerplate. There are also some destructuring types such as ListOut or Uncons. At an even lower level, you can use ExportFunc, ImportFunc, and Env.Funcall as alternatives to Export, Import, and Env.Call, respectively. They have the same behavior, but don’t do any type conversion at all. The fundamental types for interacting with Emacs are Env and Value. They represent Emacs module environments and values as described in Writing Module Functions. These types are opaque, and their zero values are invalid. You can’t use Env and Value values once they are no longer live. 
This is described in Writing Module Functions and Conversion Between Lisp and Module Values. As a best practice, don’t let these values escape exported functions. You also can’t interact with Emacs from other threads, cf. Writing Module Functions. These rules are a bit subtle, but you are usually on the safe side if you don’t store Env and Value values in struct fields or global variables, and don’t pass them to other goroutines. All functions in this package translate between Go errors and Emacs nonlocal exits. See Nonlocal Exits in Modules. This package represents Emacs nonlocal exits as ordinary Go errors. Each call to a function fetches and clears nonlocal exit information after the actual call and converts it to an error of type Signal or Throw. This means that the Go bindings don’t exhibit the saturating error behavior described at Nonlocal Exits in Modules. Instead, they behave like normal Go functions: an erroneous return doesn’t affect future function calls. When returning from an exported function, this package converts errors back to Emacs nonlocal exits. If you return a Signal or Error, Emacs will raise a signal using the signal function. If you return a Throw, Emacs will throw to a catch using the throw function. If you return any other type of error, Emacs will signal an error of type go‑error, with the error string as signal data. You can define your own error symbols using DefineError. There are also a couple of factory functions for builtin errors such as WrongTypeArgument and OverflowError. You can use Var to define a dynamic variable. This package intentionally doesn’t support wrapping pointers to arbitrary Go values in Emacs user pointer objects. Attempting to do that wouldn’t work well with Go’s garbage collection and CGo’s pointer-passing rules; see Passing pointers. Instead, prefer using handles, e.g. simple integers as map keys. See the “Handles” example. 
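The "Handles" pattern recommended above can be sketched as a small Go-side registry: Emacs only ever sees an opaque integer, while the real Go value stays in a map on the Go side, which keeps Go's garbage collector and cgo's pointer-passing rules happy. The names here are hypothetical, not part of this package:

```go
package main

import (
	"fmt"
	"sync"
)

// handleRegistry maps opaque integer handles to Go values. The handle, not
// the value, is what would be passed to Emacs.
type handleRegistry struct {
	mu     sync.Mutex
	next   int64
	values map[int64]interface{}
}

func newHandleRegistry() *handleRegistry {
	return &handleRegistry{values: make(map[int64]interface{})}
}

// Put stores v and returns a fresh handle for it.
func (r *handleRegistry) Put(v interface{}) int64 {
	r.mu.Lock()
	defer r.mu.Unlock()
	r.next++
	r.values[r.next] = v
	return r.next
}

// Get resolves a handle back to its Go value.
func (r *handleRegistry) Get(h int64) (interface{}, bool) {
	r.mu.Lock()
	defer r.mu.Unlock()
	v, ok := r.values[h]
	return v, ok
}

// Delete releases a handle so the value can be garbage-collected.
func (r *handleRegistry) Delete(h int64) {
	r.mu.Lock()
	defer r.mu.Unlock()
	delete(r.values, h)
}

func main() {
	reg := newHandleRegistry()
	h := reg.Put("some Go value")
	v, ok := reg.Get(h)
	fmt.Println(h, v, ok)
}
```

An exported function would accept the integer handle as an ordinary argument and look the value up; Delete must be called explicitly (e.g. from a "close" function exposed to Emacs), since the registry otherwise keeps the value alive.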
A long-running operation should periodically call Env.ProcessInput to process pending input and to check whether the user wants to quit the operation. If so, you should cancel the operation as soon as possible. See the documentation of Env.ProcessInput for a concrete example. As an alternative, this package provides limited support for asynchronous operations. Such operations are represented using the AsyncHandle type. You can use the Async type to create and manage asynchronous operations. Async requires a way to notify Emacs about a pending asynchronous result; this package supports notification using pipes or sockets. If you want to run code while Emacs is loading the module, use OnInit to register initialization functions. Loading the module will call all initialization functions in order. You can use ERTTest to define ERT tests backed by Go functions. This works similarly to Export, but defines ERT tests instead of functions.
Package hashring implements a consistent hashing ring data structure. In general, consistent hashing is about mapping objects from a very large set of values (e.g. request IDs) to objects from a much smaller set (e.g. server addresses). The word "consistent" means that the mapping remains consistent across different machines or processes without additional state exchange and communication. For more theory about the subject please see this great document: https://theory.stanford.edu/~tim/s16/l/l1.pdf There are two goals for this hashring implementation: 1) To be efficient in highly concurrent applications by blocking read operations for the least possible time. 2) To correctly handle very rare but still possible hash collisions, which could otherwise break your eventually consistent application. To reach the first goal hashring uses an immutable AVL tree internally, making read operations (getting the item for an object) blocked only for the tiny amount of time needed to swap the ring's tree root after a write operation (insertion or deletion). The second goal is reached by using a ring of 2^64-1 points, which dramatically reduces the probability of hash collisions (the greater the number of items on the ring, the higher the probability of collisions), and by an implementation that handles collisions.
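A minimal sketch of the consistent-hashing idea described above (plain Go with a sorted slice and binary search, not this package's AVL-tree implementation or API): items are hashed onto a 64-bit ring of points, and an object maps to the first item at or after its own hash, wrapping around at the end.

```go
package main

import (
	"fmt"
	"hash/fnv"
	"sort"
)

// ring places items on a 2^64-point ring. The real package uses an
// immutable AVL tree for lock-light concurrent reads; a sorted slice is
// enough to show the lookup rule.
type ring struct {
	points []uint64          // sorted hash points of the items
	items  map[uint64]string // point -> item
}

func hash64(s string) uint64 {
	h := fnv.New64a()
	h.Write([]byte(s))
	return h.Sum64()
}

func newRing(items ...string) *ring {
	r := &ring{items: make(map[uint64]string)}
	for _, it := range items {
		p := hash64(it)
		r.points = append(r.points, p)
		r.items[p] = it
	}
	sort.Slice(r.points, func(i, j int) bool { return r.points[i] < r.points[j] })
	return r
}

// Get returns the item responsible for the object: the first point at or
// after the object's hash, wrapping around the ring.
func (r *ring) Get(object string) string {
	h := hash64(object)
	i := sort.Search(len(r.points), func(i int) bool { return r.points[i] >= h })
	if i == len(r.points) {
		i = 0 // wrap around the ring
	}
	return r.items[r.points[i]]
}

func main() {
	r := newRing("10.0.0.1:80", "10.0.0.2:80", "10.0.0.3:80")
	fmt.Println(r.Get("request-42"))
}
```

The consistency property follows from the rule itself: any process that builds the same ring from the same items computes the same mapping, with no coordination.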
Pipeline is a functional programming package for the Go language. With Pipeline developers can use functional principles such as map, reduce or filter on their collection types. Pipeline is written in Go and inspired by underscore.js, lodash.js and Martin Fowler's pipelines: http://martinfowler.com/articles/collection-pipeline/ author mparaiso <mparaiso@online.fr> copyrights 2014 license GPL-3.0 version 0.1 ## Installing: - Install the Go language - Use 'go get' with a command line interface ## Examples: ### Counting words ```go ``` ### Calculating the total cost of a customer order ```go ``` ## Implemented pipelines - Chunk - Compact - Concat - Difference - Equals - Every - Filter - First - Flatten - GroupBy - Head - IndexOf - Intersection - Last - LastIndexOf - Map - Push - Reduce - ReduceRight - Reverse - Slice - Some - Sort - Splice - Tail - ToMap - Union - Unique - Unshift - Without - Xor - Zip
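The flavor of the Map / Filter / Reduce pipelines listed above can be sketched in plain Go generics on the word-counting theme; these are hypothetical standalone helpers, not the package's actual signatures:

```go
package main

import (
	"fmt"
	"strings"
)

// Map applies f to every element, producing a new slice.
func Map[T, U any](xs []T, f func(T) U) []U {
	out := make([]U, 0, len(xs))
	for _, x := range xs {
		out = append(out, f(x))
	}
	return out
}

// Filter keeps only the elements for which keep returns true.
func Filter[T any](xs []T, keep func(T) bool) []T {
	var out []T
	for _, x := range xs {
		if keep(x) {
			out = append(out, x)
		}
	}
	return out
}

// Reduce folds the slice into a single value, left to right.
func Reduce[T, U any](xs []T, acc U, f func(U, T) U) U {
	for _, x := range xs {
		acc = f(acc, x)
	}
	return acc
}

func main() {
	words := strings.Fields("the quick brown fox jumps over the lazy dog")
	longWords := Filter(words, func(w string) bool { return len(w) > 3 })
	lengths := Map(longWords, func(w string) int { return len(w) })
	total := Reduce(lengths, 0, func(a, n int) int { return a + n })
	fmt.Println(len(longWords), total) // 5 23
}
```

Each stage consumes the previous stage's output, which is the collection-pipeline idea from the Fowler article linked above.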
Package rel implements relational algebra, a set of operations on sets of tuples which result in relations, as defined by E. F. Codd. What follows is a brief introduction to relational algebra. For a more complete introduction, please read C. J. Date's book "Database in Depth". This package uses the same terminology. Relations are sets of named tuples with identical attributes. The primitive operations which define the relational algebra are: Union, which adds two sets together. Diff, which removes all elements from one set which exist in another. Restrict, which removes values from a relation that do not satisfy a predicate. Project, which removes zero or more attributes from the tuples the relation is defined on. Rename, which changes the names of the attributes in a relation. Join, which can multiply two relations together (which may have different types of tuples) by returning all combinations of tuples in the two relations where all attributes in one relation are equal to the attributes in the other where the names are the same. This is sometimes called a natural join. This package represents tuples as structs with no unexported or anonymous fields. The fields of the struct are the attributes of the tuple it represents. Attributes are strings with some additional methods that are useful for constructing predicates and candidate keys. They have to be valid field names in Go. Predicates are functions which take a tuple and return a boolean, and are used as an input for Restrict expressions. Candidate keys are the sets of attributes which define unique tuples in a relation. Every relation has at least one candidate key, because every relation only contains unique tuples. Some relations may contain several candidate keys. Relations in this package can be either literal, such as a relation from a map of tuples, or an expression of other relations, such as a join between two source relations. Literal Relations can be defined using the rel.New function. 
Given a slice, map, or channel of tuples, the New function constructs a new "essential" relation, with those values as tuples. Other packages can create essential relations from other sources of data, such as the github.com/jonlawlor/relcsv package, or the github.com/jonlawlor/relsql package. Relational Expressions are generated when one of the methods Project, Restrict, Union, Diff, Join, Rename, Map, or GroupBy is called. During their construction, the rel package checks to see if they can be distributed over the source relations that they are being called on, and if so, it attempts to push the expressions down the tree of relations as far as they can go, with the end goal of getting pushed all the way to the "essential" source relations. In this way, relational expressions can (hopefully) reduce the amount of computation done in total and/or done in the Go runtime.
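The tuples-as-structs and predicate conventions described above can be sketched in plain Go; the restrict helper here is a hypothetical stand-in for the package's Restrict expression, shown only to make the convention concrete:

```go
package main

import "fmt"

// Order is a tuple type in the sense described above: a struct with only
// exported, non-anonymous fields, whose fields are the relation's attributes.
type Order struct {
	PartNo int
	Qty    int
}

// restrict keeps the tuples satisfying the predicate, mirroring what a
// Restrict expression computes. A predicate is simply func(Order) bool.
func restrict(tuples []Order, pred func(Order) bool) []Order {
	var out []Order
	for _, t := range tuples {
		if pred(t) {
			out = append(out, t)
		}
	}
	return out
}

func main() {
	orders := []Order{
		{PartNo: 1, Qty: 10},
		{PartNo: 2, Qty: 3},
		{PartNo: 3, Qty: 25},
	}
	big := restrict(orders, func(t Order) bool { return t.Qty >= 10 })
	fmt.Println(len(big)) // tuples with Qty >= 10
}
```

In the real package the predicate is passed to a Restrict method on a relation, and the library may push the restriction down toward the essential source relation rather than filtering eagerly as above.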
GKES (Go Kafka Event Source) attempts to fill the gaps in the Go/Kafka library ecosystem. It supplies Exactly Once Semantics (EOS), local state stores and incremental consumer rebalancing to Go Kafka consumers, making it a viable alternative to a traditional Kafka Streams application written in Java. GKES is a Go/Kafka library tailored towards the development of Event Sourcing applications, by providing a high-throughput, low-latency Kafka client framework. Using Kafka transactions, it provides for EOS, data integrity and high availability. If you wish to use GKES as a straight Kafka consumer, it will fit the bill as well, though there are plenty of libraries for that, and researching which best fits your use case is time well spent. GKES is not an all-in-one, do-everything black box. Some elements, in particular the StateStore, have been left without comprehensive implementations. A useful and performant local state store rarely has a flat data structure. If your state store does, there are some convenient implementations provided. However, to achieve optimum performance, you will not only need to write a StateStore implementation, but will also need to understand what the proper data structures are for your use case (trees, heaps, maps, disk-based LSM trees or combinations thereof). You can use the provided github.com/aws/go-kafka-event-source/streams/stores.SimpleStore as a starting point. GKES purposefully does not provide a pre-canned way of exposing StateStore data, other than producing to another Kafka topic. There are as many ways to vend data as there are web applications. Rather than putting effort into inventing yet another one, GKES provides the mechanisms to query StateStores via Interjections. This mechanism can be plugged into whatever request/response mechanism suits your use case (gRPC, RESTful HTTP service...any number of web frameworks already in the Go ecosystem). 
[TODO: provide a simple http example] For those familiar with the Kafka Streams API, GKES provides for stream `Punctuators`, but we call them `Interjections` (because it sounds cool). Interjections allow you to insert actions into your EventSource at a specified interval per assigned partition via streams.EventSource.ScheduleInterjection, or at any time via streams.EventSource.Interject. This is useful for bookkeeping activities, aggregated metric production or even error handling. Interjections have full access to the StateStore associated with an EventSource and can interact with output topics like any other EventProcessor. One issue that Kafka consumer applications have long suffered from is latency spikes during a consumer rebalance. The cooperative sticky rebalancing introduced by Kafka and implemented by kgo helps resolve this issue. However, once StateStores are thrown into the mix, things get a bit more complicated, because initializing the StateStore on a host involves consuming a compacted TopicPartition from start to end. GKES solves this with the IncrementalRebalancer, and takes it one step further. The IncrementalRebalancer rebalances consumer partitions in a controlled fashion, minimizing latency spikes and limiting the blast radius of a bad deployment. GKES provides conventions for asynchronously processing events on the same Kafka partition while still maintaining data/stream integrity. The AsyncBatcher and AsyncJobScheduler allow you to split a TopicPartition into sub-streams by key, ensuring all events for a particular key are processed in order, allowing for parallel processing on a given TopicPartition. For more details, see Async Processing Examples. A Kafka transaction is a powerful tool which allows for Exactly Once Semantics (EOS) by linking a consumer offset commit to one or more records that are being produced by your application (a StateStore record for example). 
The history of Kafka EOS is a long and complicated one, with varying degrees of performance and efficiency. Early iterations required one producer transaction per consumer partition, which was very inefficient, as a Topic with 1000 partitions would also require 1000 clients in order to provide EOS. This has since been addressed, but depending on client implementations, there is a high risk of running into "producer fenced" errors as well as reduced throughput. In a traditional Java Kafka Streams application, transactions are committed according to the auto-commit frequency, which defaults to 100ms. This means that your application will only produce readable records every 100ms per partition. The effect of this is that no matter what you do, your tail latency will be at least 100ms and downstream consumers will receive records in bursts rather than a steady stream. For many use cases, this is unacceptable. GKES solves this issue by using a configurable transactional producer pool and a type of "Nagle's algorithm". Uncommitted offsets are added to the transaction pool in sequence. Once a producer has reached its record limit, or enough time has elapsed (10ms by default), the head transaction will wait for any incomplete events to finish, then flush and commit. While this transaction is committing, GKES continues to process events and optimistically begins a new transaction and produces records on the next producer in the pool. Since transactions are produced in sequence, there is no danger of commit offset overlap or duplicate message processing in the case of a failure. To ensure EOS, your EventSource must use either the IncrementalRebalancer, or kgo's cooperative sticky implementation. If you're using a StateStore, though, the IncrementalRebalancer should be used to avoid lengthy periods of inactivity during application deployments. Rather than create yet another Kafka driver, GKES is built on top of kgo. 
This Kafka client was chosen as it (in our testing) has superior throughput and latency profiles compared to other client libraries currently available to Go developers. One other key advantage is that it provides a migration path to cooperative consumer rebalancing, required for our EOS implementation. Other Go Kafka libraries provide cooperative rebalancing, but do not allow you to migrate from a non-cooperative rebalancing strategy (range, sticky etc.). This is a major roadblock for existing deployments, as the only migration paths are an entirely new consumer group, or bringing your application completely down and re-deploying with a new rebalance strategy. These migration plans, to put it mildly, are a big challenge for zero-downtime/live applications. The kgo package now makes this migration possible with zero downtime. Kgo also has the proper hooks needed to implement the IncrementalGroupRebalancer, which is necessary for safe deployments when using a local state store. Kudos to kgo!
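The per-key sub-stream idea described above (one partition fanned out to parallel workers while each key stays in order) can be sketched with plain goroutines and channels. This is an illustration of the concept only, not the AsyncBatcher/AsyncJobScheduler API:

```go
package main

import (
	"fmt"
	"hash/fnv"
	"sync"
)

type event struct {
	Key, Value string
}

// process fans events out to n workers by hashing the key: events for the
// same key always land on the same channel, so they are handled in order,
// while different keys can proceed in parallel.
func process(events []event, n int, handle func(event)) {
	chans := make([]chan event, n)
	var wg sync.WaitGroup
	for i := range chans {
		chans[i] = make(chan event)
		wg.Add(1)
		go func(ch chan event) {
			defer wg.Done()
			for e := range ch {
				handle(e) // single worker per channel => per-key FIFO
			}
		}(chans[i])
	}
	for _, e := range events {
		h := fnv.New32a()
		h.Write([]byte(e.Key))
		chans[h.Sum32()%uint32(n)] <- e
	}
	for _, ch := range chans {
		close(ch)
	}
	wg.Wait()
}

// collectByKey records the order in which values were handled per key.
func collectByKey(events []event, n int) map[string][]string {
	var mu sync.Mutex
	seen := map[string][]string{}
	process(events, n, func(e event) {
		mu.Lock()
		seen[e.Key] = append(seen[e.Key], e.Value)
		mu.Unlock()
	})
	return seen
}

func main() {
	seen := collectByKey([]event{{"a", "1"}, {"b", "1"}, {"a", "2"}, {"b", "2"}}, 2)
	fmt.Println(seen["a"], seen["b"]) // [1 2] [1 2]
}
```

A real implementation additionally has to tie completion of these sub-streams back to the offset commit so the transaction only commits once all in-flight events for the partition have finished.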
Package codec provides a High Performance, Feature-Rich Idiomatic Go 1.4+ codec/encoding library for binc, msgpack, cbor, json. Supported Serialization formats are: To install: This package will carefully use 'unsafe' for performance reasons in specific places. You can build without unsafe use by passing the safe or appengine tag, i.e. 'go install -tags=safe ...'. Note that unsafe is only supported for the last 3 Go SDK versions, e.g. if the current Go release is go 1.9, then we support unsafe use only from go 1.7+. This is because supporting unsafe requires knowledge of implementation details. For detailed usage information, read the primer at http://ugorji.net/blog/go-codec-primer . The idiomatic Go support is as seen in other encoding packages in the standard library (i.e. json, xml, gob, etc.). Rich Feature Set includes: Users can register a function to handle the encoding or decoding of their custom types. There are no restrictions on what the custom type can be. Some examples: As an illustration, MyStructWithUnexportedFields would normally be encoded as an empty map because it has no exported fields, while UUID would be encoded as a string. However, with extension support, you can encode any of these however you like. This package maintains symmetry in the encoding and decoding halves. We determine how to encode or decode by walking this decision tree. This symmetry is important to reduce the chances of issues happening because the encoding and decoding sides are out of sync, e.g. decoded via a very specific encoding.TextUnmarshaler but encoded via kind-specific generalized mode. Consequently, if a type only defines one half of the symmetry (e.g. it implements UnmarshalJSON() but not MarshalJSON()), then that type doesn't satisfy the check and we will continue walking down the decision tree. RPC Client and Server Codecs are implemented, so the codecs can be used with the standard net/rpc package. The Handle is SAFE for concurrent READ, but NOT SAFE for concurrent modification. 
The Encoder and Decoder are NOT safe for concurrent use. Consequently, the usage model is basically: Sample usage model: To run tests, use the following: To run the full suite of tests, use the following: You can use the tag 'safe' to run tests or build in safe mode. e.g. Please see http://github.com/ugorji/go-codec-bench . Struct fields matching the following are ignored during encoding and decoding. Every other field in a struct will be encoded/decoded. Embedded fields are encoded as if they exist in the top-level struct, with some caveats. See Encode documentation.
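The symmetry point above (define both halves, or the codec falls through to kind-specific handling) can be illustrated with the standard library's analogous text-marshaling interfaces; this is a stdlib sketch of the principle, not this package's extension API:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Celsius defines BOTH halves of the text-marshaling symmetry, so a value
// survives an encode/decode round trip as "21.5C". Defining only
// UnmarshalText would leave encoding to fall through to the kind-specific
// float64 representation, putting the two sides out of sync.
type Celsius float64

func (c Celsius) MarshalText() ([]byte, error) {
	return []byte(fmt.Sprintf("%gC", float64(c))), nil
}

func (c *Celsius) UnmarshalText(b []byte) error {
	var f float64
	if _, err := fmt.Sscanf(string(b), "%gC", &f); err != nil {
		return err
	}
	*c = Celsius(f)
	return nil
}

// roundTrip encodes c to JSON and decodes it back.
func roundTrip(c Celsius) (string, Celsius) {
	out, _ := json.Marshal(c) // uses MarshalText
	var back Celsius
	json.Unmarshal(out, &back) // uses UnmarshalText
	return string(out), back
}

func main() {
	s, back := roundTrip(Celsius(21.5))
	fmt.Println(s, back == 21.5)
}
```

This package applies the same idea more strictly: a type implementing only UnmarshalJSON (and not MarshalJSON) does not satisfy the symmetry check, and the decision-tree walk continues past it.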
Package cslb provides transparent HTTP/HTTPS Client Side Load Balancing for Go programs. Cslb intercepts "net/http" Dial Requests and re-directs them to a preferred set of target hosts based on the load balancing configuration expressed in DNS SRV and TXT Resource Records (RRs). Only one trivial change is required to client applications to benefit from cslb, which is to import this package and (if needed) enable it for non-default http.Transport instances. Cslb processing is triggered by the presence of SRV RRs. If no SRVs exist cslb is benign, which means you can deploy your application with cslb and independently activate and deactivate cslb processing for each service at any time. No server-side changes are required at all - apart from possibly dispensing with your server-side load-balancers! Importing cslb automatically enables interception for http.DefaultTransport. In this program snippet: the Dial Request made by http.Get is intercepted and processed by cslb. If the application uses its own http.Transport then cslb processing needs to be activated by calling the cslb.Enable() function, i.e.: The cslb.Enable() function replaces http.Transport.DialContext with its own intercept function. Server-side load-balancers are no panacea. They add deployment and diagnostic complexity, cost, throughput constraints and become an additional point of possible failure. Cslb can help you achieve good load-balancing and fail-over behaviour without the need for *any* server-side load-balancers. This is particularly useful in enterprise and micro-service deployments as well as smaller application deployments where configuring and managing load-balancers is a significant resource drain. Cslb can be used to load-balance across geographically dispersed targets or where "hot stand-by" systems are purposely deployed on diverse infrastructure. When cslb intercepts a http.Transport Dial Request to port 80 or port 443 it looks up SRV RRs as prescribed by RFC2782. 
That is, _http._tcp.$domain and _https._tcp.$domain respectively. Cslb directs the Dial Request to the highest preference target based on the SRV algorithm. If that Dial Request fails, it tries the next lower preference target until a successful connection is returned or all unique targets fail or it runs out of time. Cslb caches the SRV RRs (or their non-existence) as well as the result of Dial Requests to the SRV targets to optimize subsequent intercepted calls and the selection of preferred targets. If no SRV RRs exist, cslb passes the Dial Request on to net.DialContext. Cslb has specific rules about when interception occurs. It normally only considers intercepting port 80 and port 443; however, if the "cslb_allports" environment variable is set, cslb intercepts non-standard HTTP ports and maps them to numeric service names. For example http://example.net:8080 gets mapped to _8080._tcp.example.net as the SRV name to resolve. While cslb runs passively by caching the results of previous Dial Requests, it can also run actively by periodically performing health checks on targets. This is useful as an administrator can control health check behaviour to move a target "in and out of rotation" without changing DNS entries and waiting for TTLs to age out. Health checks are also likely to make the application a little more responsive as they are less likely to make a dial attempt to a target that is not working. Active health checking is enabled by the presence of a TXT RR in the sub-domain "_$port._cslb" of the target. E.g. if the SRV target is "s1.example.net:80" then cslb looks for the TXT RR at "_80._cslb.s1.example.net". If that TXT RR contains a URL then it becomes the health check URL. If no TXT RR exists or the contents do not form a valid URL then no active health check is performed for that target. The health check URL does not have to be related to the target in any particular way. 
It could be a URL to a central monitoring system which performs complicated application-level tests and performance monitoring. Or it could be a URL on the target system itself. A health check is considered successful when a GET of the URL returns a 200 status and the content contains the uppercase text "OK" somewhere in the body (see the "cslb_hc_ok" environment variable for how this can be modified). Unless both those conditions are met the target is considered unavailable. Active health checks cease once a target becomes idle for too long, and health check Dial Requests are *not* intercepted by cslb. If your current service exists on a single server called "s1.example.net" and you want to spread the load across the additional servers "s2.example.net" and "s3.example.net", and assuming you've added the "cslb" package to your application, then the following DNS changes activate cslb processing: Current DNS Additional DNS A number of observations about this DNS setup: Cslb maintains a cache of SRV lookups and the health status of targets. Cache entries automatically age out as a form of garbage collection. Removed cache entries stop any associated active health checks. Unfortunately the cache ageing does not have access to the DNS TTLs associated with the SRV RRs, so it makes a best guess at reasonable time-to-live values. The important point to note is that *all* values get periodically refreshed from the DNS. Nothing persists internally forever, regardless of the level of activity. This means you can be sure that any changes to your DNS will be noticed by cslb in due course. Cslb optionally runs a web server which presents internal statistics on its performance and activity. This web service has *no* access controls so it's best to only run it on a loopback address. Setting the environment variable "cslb_listen" to a listen address activates the status server.
E.g.: On initialization the cslb package examines the "cslb_options" environment variable for single-letter options which have the following meanings: An example of how this might be used from a shell: Many internal configuration values can be overridden with environment variables as shown in this table: Any values which are invalid or fall outside a reasonable range are ignored. Cslb only knows about the results of network connection attempts made by DialContext and the results of any configured health checks. If a service is accepting network connections but not responding to HTTP requests - or responding negatively - the client experiences failures but cslb will be unaware of them. The result is that cslb will continue to direct future Dial Requests to that faulty service in accordance with the SRV priorities. If your service is vulnerable to this scenario, active health checks are recommended. This could be something as simple as an on-service health check which responds based on recent "200 OK" responses in the service log file. Alternatively, an on-service monitor which closes the listen socket will also work. In general, defining a failing service is a complicated matter that only the application truly understands. For this reason health checks are used as an intermediary which does understand application-level failures and converts them to simple language which cslb groks. While every service is different, there are a few general guidelines which apply to most services when using cslb. First of all, run simple health checks if you can and configure them for use by cslb. Second, have each target configured with both IPv4 and IPv6 addresses. This affords two potentially independent network paths to the targets. Furthermore, net.Dialer attempts both IPv4 and IPv6 connections simultaneously, which maximizes responsiveness for the client. Third, consider a "canary" target as a low-preference (highest numeric value SRV priority) target.
If this "canary" target is accessed by cslb clients it tells you they are having trouble reaching their "real" targets. Being able to run a "canary" service is one of the side-benefits of cslb and SRVs. When analyzing the Status Web Page or watching the Run Time Control output, observers need to be aware of caching by the http (and possibly other) packages. For example, not every call to http.Get() results in a Dial Request, as http.Client tries to re-use connections. In a similar vein, if you change a DNS entry and don't believe cslb has noticed this change within an appropriate TTL amount of time, be aware that on some platforms the intervening recursive resolvers adjust TTLs as they see fit. For example, some home-gamer routers are known to increase short TTLs to values they believe to be more "appropriate" in an attempt to reduce their cache churn. Perhaps the biggest caveat of all is that cslb relies on being enabled for all http.Transports in use by your application. If you are importing a package (either directly or indirectly) which constructs its own http.Transports then you'll need to modify that package to call cslb.Enable(), otherwise those HTTP requests will not be intercepted. Of course, if the package is making requests incidental to the core functionality of your application then maybe it doesn't matter and you can leave them be. Something to be aware of. -----
Package maduse is an implementation of the functional concepts filter, map and reduce found in other languages like Python, JavaScript, etc. This package purposely diverges from core principles of how Go code should be written; you should therefore think twice before you consider using this package - in most cases, for loops are the way to go. The reason for the existence of this package is that it allows for better composability and allows datasets to be more easily explored and evaluated in Go. It's specifically designed as a tool for experimenting with datasets, not as a library intended for production use where performance is critical. The API of the maduse package is completely dynamic, which has the downside of no compile-time guarantees about the function signatures given to filter, map or reduce. Each method on a maduse.Collection has a description of the handlers it supports. Because Go doesn't support generics yet, I have created my own notation where <Type> can be replaced with whatever type you want. The <Type> in the function argument has to be the same as in the collection. The output type can be something else or the same as the input; it depends on what you want to achieve. This package is heavily based on reflection and type assertions, which can result in runtime panics if used wrongly. TODO(@kvartborg): would like to experiment with a streaming implementation based on the io.Reader interface at some point.
Package gopatch allows structures to be patched in a multitude of configurable ways. Patching is accomplished via Patchers, and a default is initialized for immediate use, obtained by calling `gopatch.Default()`. All initialized patchers can be used multiple times, and are thread safe. Use the default patcher... Or configure your own! Currently, gopatch cannot patch maps, and cannot replace maps not of the same key AND value types. Additionally, gopatch cannot patch or replace slices/arrays. Maps not of the same key and value types, as well as all slices, are currently skipped without error. However, it's easy to hook in your own patch/replace logic by adding a custom Updater function to `gopatch.Updaters`. Note that these functions are run first to last, so you'll need to inject your function like so: `gopatch.Updaters = append(myUpdater, gopatch.Updaters...)`. Suggestions on how to remove these limitations are welcome. Please add an issue or make a pull request! Struct field names in Go often differ from the field names of counterpart object representations such as JSON and BSON. This package refers to these representations as "Field Name Sources". In the above examples, the User struct's EmailAddress has a "struct" field name source (resulting in the field name "EmailAddress"), and a "json" field name source (resulting in the field name "email_address"). Thus, if you want to patch a struct from a map built from JSON bytes, you need to add json tags AND configure your Patcher to use the struct's "json" field name sources. In the second example above, you can see the Patcher has been configured with the "json" field name source and thus can use a JSON-derived map to patch. Patch operations return results, or an error when they fail. These results can be used for purposes ranging from logging suspicious activity to persisting the changes to a database row or document.
The results consist of an array of fields which were successfully patched, an array of fields which were found in the patch yet not permitted, and a map of the successfully updated fields and their values. Take the above example of patching a nefarious user's account. If `UnpermittedErrors` were false, the patch would succeed and result would not be nil; however, the Unpermitted array would contain "IsBanned", because the patching of that field wasn't permitted. Meanwhile, the "Fields" array would contain "Username" because it was permitted, and Map would contain the same data as `nefariousPatchRequest`, but without "is_banned". Patching behavior can be enforced while defining the structure by using the "gopatch" tag, which overrides configuration. This way, restrictions on how your database model can be patched can be limited while designing the model itself, rather than while designing the endpoint that patches it, reducing the chance of unexpected behavior. When the gopatch tag "patch" is used, the PatchResult's Map field will contain the struct field's values flattened with dot-notation keys created using the absolute path to the struct patched. For example, if the above User struct's `Profile.Motto` field is patched, the result's Map field would contain the following data: `"profile.motto": "..."`. This facilitates the patch-embedded-fields behavior of embedded objects in database servers such as MongoDB. When the gopatch tag "replace" is used, the PatchResult's Map field will contain the struct field's values inside the struct field's patch key, exactly as presented to the Patcher. For example, if the above User struct's `BanData.Length` field is patched, the result's Map field would contain the following data: `"ban_data": map[string]interface{}{ "length": 30 }`. This facilitates the patch-whole-object behavior of embedded objects in database servers such as MongoDB.
Package codec provides a High Performance, Feature-Rich Idiomatic Go 1.4+ codec/encoding library for binc, msgpack, cbor, json. Supported Serialization formats are: To install: This package will carefully use 'unsafe' for performance reasons in specific places. You can build without unsafe use by passing the safe or appengine tag, i.e. 'go install -tags=safe ...'. Note that unsafe is only supported for the last 3 Go SDK versions; e.g. if the current Go release is go 1.9, then we support unsafe use only from go 1.7+. This is because supporting unsafe requires knowledge of implementation details. For detailed usage information, read the primer at http://ugorji.net/blog/go-codec-primer . The idiomatic Go support is as seen in other encoding packages in the standard library (i.e. json, xml, gob, etc). Rich Feature Set includes: Users can register a function to handle the encoding or decoding of their custom types. There are no restrictions on what the custom type can be. Some examples: As an illustration, MyStructWithUnexportedFields would normally be encoded as an empty map because it has no exported fields, while UUID would be encoded as a string. However, with extension support, you can encode any of these however you like. This package maintains symmetry in the encoding and decoding halves. We determine how to encode or decode by walking this decision tree: This symmetry is important to reduce the chance of issues arising because the encoding and decoding sides are out of sync, e.g. decoded via the very specific encoding.TextUnmarshaler but encoded via the kind-specific generalized mode. Consequently, if a type only defines one half of the symmetry (e.g. it implements UnmarshalJSON() but not MarshalJSON()), then that type doesn't satisfy the check and we will continue walking down the decision tree. RPC Client and Server Codecs are implemented, so the codecs can be used with the standard net/rpc package. The Handle is SAFE for concurrent READ, but NOT SAFE for concurrent modification.
The Encoder and Decoder are NOT safe for concurrent use. Consequently, the usage model is basically: Sample usage model: To run tests, use the following: To run the full suite of tests, use the following: You can use the tag 'safe' to run tests or build in safe mode, e.g. Please see http://github.com/ugorji/go-codec-bench . Struct fields matching the following are ignored during encoding and decoding. Every other field in a struct will be encoded/decoded. Embedded fields are encoded as if they exist in the top-level struct, with some caveats. See Encode documentation.
Package osquery provides a non-obtrusive, idiomatic and easy-to-use query and aggregation builder for the official Go client (https://github.com/elastic/go-elasticsearch) for the ElasticSearch database (https://www.elastic.co/products/elasticsearch). osquery alleviates the need to use extremely nested maps (map[string]interface{}) and serialize queries to JSON manually. It also helps eliminate common mistakes such as misspelling query types, as everything is statically typed. Using `osquery` can make your code much easier to write, read and maintain, and significantly reduce the amount of code you write. osquery provides a method chaining-style API for building and executing queries and aggregations. It does not wrap the official Go client, nor does it require you to change your existing code in order to integrate the library. Queries can be directly built with `osquery`, and executed by passing an `*opensearch.Client` instance (with optional search parameters). Results are returned as-is from the official client (e.g. `*opensearchapi.Response` objects). Getting started is extremely simple: osquery currently supports version 7 of the ElasticSearch Go client. The library cannot currently generate "short queries". For example, whereas ElasticSearch can accept this: { "query": { "term": { "user": "Kimchy" } } } The library will always generate this: This is also true for queries such as "bool", where fields like "must" can either receive one query object, or an array of query objects. `osquery` will generate an array even if there's only one query object.
Package dom provides GopherJS bindings for the JavaScript DOM APIs. This package is an in-progress effort at providing idiomatic Go bindings for the DOM, wrapping the JavaScript DOM APIs. The API is neither complete nor frozen yet, but a great amount of the DOM is already usable. While the package tries to be idiomatic Go, it also tries to stick closely to the JavaScript APIs, so that one does not need to learn a new set of APIs if one is already familiar with them. One decision that hasn't been made yet is what parts exactly should be part of this package. It is, for example, possible that the canvas APIs will live in a separate package. On the other hand, types such as StorageEvent (the event that gets fired when the HTML5 storage area changes) will be part of this package, simply due to how the DOM is structured – even if the actual storage APIs might live in a separate package. This might require special care to avoid circular dependencies. The documentation for some of the identifiers is based on the MDN Web Docs by Mozilla Contributors (https://developer.mozilla.org/en-US/docs/Web/API), licensed under CC-BY-SA 2.5 (https://creativecommons.org/licenses/by-sa/2.5/). The usual entry point for using the dom package is the GetWindow() function, which returns a Window, from which you can get things such as the current Document. The DOM has a large number of different element and event types, but they all follow three interfaces. All functions that work on or return generic elements/events will return one of the three interfaces Element, HTMLElement or Event. In these interface values there will be concrete implementations, such as HTMLParagraphElement or FocusEvent. It's also not unusual that values of type Element also implement HTMLElement. In all cases, type assertions can be used.
Example: Several functions in the JavaScript DOM return "live" collections of elements, that is collections that will be automatically updated when elements get removed or added to the DOM. Our bindings, however, return static slices of elements that, once created, will not automatically reflect updates to the DOM. This is primarily done so that slices can actually be used, as opposed to a form of iterator, but also because we think that magically changing data isn't Go's nature and that snapshots of state are a lot easier to reason about. This does not, however, mean that all objects are snapshots. Elements, events and generally objects that aren't slices or maps are simple wrappers around JavaScript objects, and as such attributes as well as method calls will always return the most current data. To reflect this behaviour, these bindings use pointers to make the semantics clear. Consider the following example: The above example will print `true`. Some objects in the JS API have two versions of attributes, one that returns a string and one that returns a DOMTokenList to ease manipulation of string-delimited lists. Some other objects only provide DOMTokenList, sometimes DOMSettableTokenList. To simplify these bindings, only the DOMTokenList variant will be made available, by the type TokenList. In cases where the string attribute was the only way to completely replace the value, our TokenList will provide Set([]string) and SetString(string) methods, which will be able to accomplish the same. Additionally, our TokenList will provide methods to convert it to strings and slices. This package has a relatively stable API. However, there will be backwards incompatible changes from time to time. This is because the package isn't complete yet, as well as because the DOM is a moving target, and APIs do change sometimes. While an attempt is made to reduce changing function signatures to a minimum, it can't always be guaranteed. 
Sometimes mistakes in the bindings are found that require changing arguments or return values. Interfaces defined in this package may also change on a semi-regular basis, as new methods are added to them. This happens because the bindings aren't complete and can never really be, as new features are added to the DOM. If you depend on none of the APIs changing unexpectedly, you're advised to vendor this package.
Package couchdb provides components to work with CouchDB 2.x with Go. Resource provides the low-level wrapper functions around HTTP methods used for communicating with the CouchDB server. Server contains all the functions to work with a CouchDB server, including some basic functions that facilitate the basic user management it provides. Database contains all the functions to work with a CouchDB database, such as manipulating and querying documents. ViewResults represents the results produced by design document views. Calling any of its functions, such as Offset(), TotalRows(), UpdateSeq() or Rows(), performs a query against the views on the server side and returns the results as a slice of Row. ViewDefinition is a definition of a view stored in a specific design document; you can define your own map-reduce functions and Sync them with the database. Document represents a document object in a database. Any struct that can be mapped into a CouchDB document must have it embedded. For example: Then you can call Store(db, &user) to store it into CouchDB, or Load(db, user.GetID(), &anotherUser) to get the data from the database. ViewField represents a view definition value bound to Document. tools/replicate is a command-line tool for replicating databases from one CouchDB server to another. This is mainly for backup purposes, but you can also use the -continuous option to set up automatic replication.
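The embedding requirement described above can be sketched as follows. This is a stand-alone illustration using stdlib JSON only; the Document and GetID names mirror the docs, but the field layout here is an assumption, not the package's actual definition.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Document is a stand-in for the package's Document type; the real type
// carries the CouchDB _id and _rev fields plus the methods (such as GetID)
// that Store and Load rely on.
type Document struct {
	ID  string `json:"_id,omitempty"`
	Rev string `json:"_rev,omitempty"`
}

// GetID mirrors the accessor mentioned in the docs.
func (d *Document) GetID() string { return d.ID }

// User embeds Document, so it can be mapped into a CouchDB document:
// the embedded fields are inlined alongside User's own fields.
type User struct {
	Document
	Name string `json:"name"`
	Age  int    `json:"age"`
}

func main() {
	u := User{Name: "alice", Age: 30}
	u.ID = "user:alice"
	b, _ := json.Marshal(u)
	fmt.Println(string(b)) // _id is inlined next to name and age
	// With the real package you would now call Store(db, &u) and later
	// Load(db, u.GetID(), &anotherUser).
}
```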
Package codec provides a High Performance, Feature-Rich Idiomatic Go 1.4+ codec/encoding library for binc, msgpack, cbor, json. Supported Serialization formats are: To install: This package will carefully use 'unsafe' for performance reasons in specific places. You can build without unsafe use by passing the safe or appengine tag, i.e. 'go install -tags=safe ...'. Note that unsafe is only supported for the last 3 Go SDK versions; e.g. if the current Go release is go 1.9, then we support unsafe use only from go 1.7+. This is because supporting unsafe requires knowledge of implementation details. For detailed usage information, read the primer at http://ugorji.net/blog/go-codec-primer . The idiomatic Go support is as seen in other encoding packages in the standard library (i.e. json, xml, gob, etc.). The rich feature set includes: Users can register a function to handle the encoding or decoding of their custom types. There are no restrictions on what the custom type can be. Some examples: As an illustration, MyStructWithUnexportedFields would normally be encoded as an empty map because it has no exported fields, while UUID would be encoded as a string. However, with extension support, you can encode any of these however you like. This package maintains symmetry in the encoding and decoding halves. We determine how to encode or decode by walking this decision tree. This symmetry is important to reduce the chance of issues arising because the encoding and decoding sides are out of sync, e.g. decoded via a very specific encoding.TextUnmarshaler but encoded via the kind-specific generalized mode. Consequently, if a type only defines one half of the symmetry (e.g. it implements UnmarshalJSON() but not MarshalJSON()), then that type doesn't satisfy the check and we continue walking down the decision tree. RPC Client and Server Codecs are implemented, so the codecs can be used with the standard net/rpc package. The Handle is SAFE for concurrent READ, but NOT SAFE for concurrent modification.
The Encoder and Decoder are NOT safe for concurrent use. Consequently, the usage model is basically: Sample usage model: To run tests, use the following: To run the full suite of tests, use the following: You can use the tag 'safe' to run tests or build in safe mode, e.g. For benchmarks, please see http://github.com/ugorji/go-codec-bench . Struct fields matching the following are ignored during encoding and decoding. Every other field in a struct will be encoded/decoded. Embedded fields are encoded as if they exist in the top-level struct, with some caveats. See the Encode documentation.
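The symmetry check described above can be sketched with plain type assertions. This is an illustrative stand-in, not the library's actual decision tree: a format-agnostic text path is only chosen when a type implements BOTH halves; a one-sided type falls through to the kind-specific generalized mode.

```go
package main

import (
	"encoding"
	"fmt"
)

// encodePath walks a drastically simplified version of the decision tree:
// the text-codec path is taken only if the type defines both halves of
// the symmetry (MarshalText AND UnmarshalText).
func encodePath(v interface{}) string {
	_, canMarshal := v.(encoding.TextMarshaler)
	_, canUnmarshal := v.(encoding.TextUnmarshaler)
	if canMarshal && canUnmarshal {
		return "text"
	}
	return "generalized"
}

// half implements only MarshalText, so it fails the symmetry check.
type half struct{}

func (half) MarshalText() ([]byte, error) { return []byte("half"), nil }

// full implements both halves of the symmetry.
type full struct{}

func (full) MarshalText() ([]byte, error) { return []byte("full"), nil }
func (*full) UnmarshalText([]byte) error  { return nil }

func main() {
	fmt.Println(encodePath(half{}))  // only one half: falls through
	fmt.Println(encodePath(&full{})) // both halves: text codec chosen
}
```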
Package ctxmap implements a registry of global context.Context values for use in web applications. Based on work from github.com/gorilla/context, this package simplifies the storage by mapping a pointer to an http.Request to a context.Context. This allows applications to use Google's standard context mechanism to pass state around their web applications, while sticking to the standard http.HandlerFunc signature for their middleware. As a result of the simplification, the runtime overhead of the package is reduced by 30 to 40 percent in my tests. However, it would be common to store a map of values or a pointer to a structure in the Context object, and my testing does not account for time taken beyond calling Context.Value().
Package bigtable_access_layer is a library designed to ease reading data from Bigtable. It features: This library is a good fit when you want to store time-series data in Bigtable, like: In those use cases, each row will be a logical set of events, with its row key built in a way that it can be easily identified, and it will contain a manageable number of events. For instance, a row key could include the region of the weather station, the year and the week number, separated with `#`, to look like `europe-west1#2021#week1`. Each event is a set of cells sharing the same timestamp, so when the access layer turns a row into a set of events, it groups cells by timestamp to end up with one event per timestamp. Here's an example from Google's documentation: https://cloud.google.com/bigtable/docs/schema-design-time-series#time-buckets Bigtable treats column qualifiers as data, not metadata, meaning that each character in a column qualifier counts. So the longer a column qualifier is, the more space it uses. As a consequence, Google recommends using the column qualifier as data or, if that's not possible, using short but meaningful column names. This saves space and reduces the amount of transferred data. The mapping system is here to turn short column names into their human-readable equivalents. It can also be used when the column qualifier contains data, provided it is an "enum" as defined in the mapping. Here's an example of a mapping: And here is how to use it in the mapper: The repository embeds the mapper to have easy access to mapped data. It also provides a search engine that performs all the required logic to search filtered data and collect all properties for each event.
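The cells-to-events grouping described above can be sketched as follows. Cell and Event here are illustrative stand-ins, not the library's actual types, and the real access layer would also apply the column-qualifier mapping at this point:

```go
package main

import (
	"fmt"
	"sort"
)

// Cell is a stand-in for a Bigtable cell: a qualifier, a timestamp
// and a value.
type Cell struct {
	Qualifier string
	Timestamp int64
	Value     string
}

// Event is one logical event: all cells of a row sharing one timestamp.
type Event struct {
	Timestamp int64
	Props     map[string]string
}

// eventsFromRow groups a row's cells by timestamp, producing exactly one
// event per distinct timestamp, sorted chronologically.
func eventsFromRow(cells []Cell) []Event {
	byTS := map[int64]map[string]string{}
	for _, c := range cells {
		if byTS[c.Timestamp] == nil {
			byTS[c.Timestamp] = map[string]string{}
		}
		byTS[c.Timestamp][c.Qualifier] = c.Value
	}
	events := make([]Event, 0, len(byTS))
	for ts, props := range byTS {
		events = append(events, Event{Timestamp: ts, Props: props})
	}
	sort.Slice(events, func(i, j int) bool { return events[i].Timestamp < events[j].Timestamp })
	return events
}

func main() {
	// Imagine these cells live under a row key such as
	// europe-west1#2021#week1, as in the example above.
	cells := []Cell{
		{"t", 1, "12.5"}, {"h", 1, "0.40"}, // two cells, one event at ts=1
		{"t", 2, "13.1"}, // one event at ts=2
	}
	for _, e := range eventsFromRow(cells) {
		fmt.Println(e.Timestamp, e.Props)
	}
}
```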
This package reads and writes pickled data. The format is the same as the Python "pickle" module. Protocols 0, 1 and 2 are implemented. These are the versions written by the Python 2.x series. Python 3 defines newer protocol versions, but can write the older protocol versions so that they are readable by this package. To read data, see stalecucumber.Unpickle. To write data, see stalecucumber.NewPickler. Read a pickled string or unicode object Read a pickled integer Read a pickled list of numbers into a structure Read a pickled dictionary into a structure Pickle a struct You can pickle recursive objects like so Python's pickler is intelligent enough not to emit an infinite data structure when a recursive object is pickled. I recommend against pickling recursive objects in the first place, but this library handles unpickling them without a problem. The result of unpickling the above is a map[interface{}]interface{} with a key "a" that contains a reference to itself. Attempting to unpack the result of the above Python code into a structure with UnpackInto would either fail or recurse forever. The Python pickle module can pickle most Python objects. By default, some Python objects such as the set type and bytearray type are automatically supported by this library. To support unpickling custom Python objects, you need to implement a resolver. A resolver meets the PythonResolver interface, which is just this function The module and name are the class name. So if you have a class called "Foo" in the module "bar", the first argument would be "bar" and the second would be "Foo". You can pass in your custom resolver by calling The third argument of the Resolve function is originally a Python tuple, so it is a slice of anything. For most user-defined objects this is just a Python dictionary. However, if a Python object implements the __reduce__ method it could be anything.
If your resolver can't identify the type named by module & name, just return stalecucumber.ErrUnresolvablePythonGlobal. Otherwise, convert the args into whatever you want and return that as the value from the function, with nil for the error. To avoid reimplementing the same logic over and over, you can chain resolvers together. You can use your resolver in addition to the default resolver by doing the following If the version of Python you are using supports protocol version 1 or 2, you should always specify that protocol version. By default the "pickle" and "cPickle" modules in Python write using protocol 0. Protocol 0 requires much more space to represent the same values and is much slower to parse. The pickle format is incredibly flexible and as a result has some features that are impractical or unimportant when implementing a reader in another language. Each set of opcodes is listed below by protocol version with the impact. Protocol 0 This opcode is used to reference concrete definitions of objects between a pickler and an unpickler by an ID number. The pickle protocol doesn't define what a persistent ID means. This opcode is unlikely to ever be supported by this package. Protocol 1 This opcode is used in recreating pickled Python objects. That is currently not supported by this package. This opcode will be supported in a future revision to this package that allows the unpickling of instances of Python classes. This opcode is equivalent to PERSID in protocol 0 and won't be supported for the same reason. Protocol 2 This opcode is used in recreating pickled Python objects. That is currently not supported by this package. This opcode will be supported in a future revision to this package that allows the unpickling of instances of Python classes. These opcodes allow using a registry of popular objects that are pickled by name, typically classes.
It is envisioned that through a global negotiation and registration process, third parties can set up a mapping between ints and object names. These opcodes are unlikely to ever be supported by this package.
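The resolver mechanism described above can be sketched as follows. The interface shape, the error sentinel, and the chaining type here are local stand-ins mirroring the docs, not the package's actual declarations:

```go
package main

import (
	"errors"
	"fmt"
)

// errUnresolvable stands in for stalecucumber.ErrUnresolvablePythonGlobal:
// a resolver returns it when it does not recognize the module & name.
var errUnresolvable = errors.New("unresolvable python global")

// Resolver mirrors the shape of the PythonResolver interface described
// above: given the module, the class name, and the reduction arguments,
// it either builds a Go value or signals that it cannot.
type Resolver interface {
	Resolve(module, name string, args []interface{}) (interface{}, error)
}

// fooResolver handles only the class "Foo" in the module "bar".
type fooResolver struct{}

func (fooResolver) Resolve(module, name string, args []interface{}) (interface{}, error) {
	if module == "bar" && name == "Foo" {
		return map[string]interface{}{"type": "Foo", "args": args}, nil
	}
	return nil, errUnresolvable
}

// chain tries each resolver in turn, which is how a custom resolver can be
// combined with the default one.
type chain []Resolver

func (c chain) Resolve(module, name string, args []interface{}) (interface{}, error) {
	for _, r := range c {
		if v, err := r.Resolve(module, name, args); err == nil {
			return v, nil
		}
	}
	return nil, errUnresolvable
}

func main() {
	r := chain{fooResolver{}}
	v, err := r.Resolve("bar", "Foo", []interface{}{1, 2})
	fmt.Println(v, err)
	_, err = r.Resolve("baz", "Qux", nil)
	fmt.Println(err) // falls through the whole chain
}
```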
Package setpso is a collection of Set-based Particle Swarm Optimisers (SPSO) designed for cost functions that map binary patterns to *big.Int cost values. The binary pattern, called the Parameters, is also encoded as a *big.Int. The SPSO is a swarm of entities called Particles that together iteratively hunt for better solutions. The update iteration of the swarm mimics the spirit of the continuous case and is based on set operations. It also includes experimental enhancements to improve the discrete case. For a brief introduction, context of use and planned future development, read the Readme file at https://github.com/mathrgo/setpso Package setpso lives in a directory that is at the top of a hierarchy of packages. Package setpso contains two working SPSOs: GPso and CLPso, which depend on Pso for all interfaces needed in package psokit except Update(). Packages in setpso/fun are where cost functions that interface with Pso are usually placed, including any helper packages for such cost functions. Package psokit enables a high-level multiple-run interface where elements for the run are referred to by name, to be used in setting up runs of various SPSO and cost-function combinations and in searching for good heuristics. While exploring Parameters to find a reduced cost as returned by the independent cost function, the Particle keeps a record of the best Parameters achieved so far, called the Personal-best, with a corresponding best cost. The Personal-best status is checked after each update. The Particle represents the update Velocity as a vector of weights giving the probability of flipping the corresponding bit at the update iteration. At the beginning of the update the velocity is calculated without flipping bits, and then the bits are flipped with a probability given by the computed velocity component.
During the update, once a bit has been flipped the corresponding probability is set to zero, thus avoiding flipping back and keeping the velocity as a vector of flips that are requested, with a given probability, to move from a given position to a desired one that may improve performance. During the calculation of the velocity of a particle, probabilities are combined using an operation called pseudo-adding, whereby, by default, probabilities p, q are pseudo-added to give p+q-pq. Alternatives may be considered in the future, such as max(p,q), if only to show which is best. The particles are split up into groups, with each group containing its own heuristic settings and a list of particles, called Targets, for it to tend to move towards. Each Particle in the group also uses its Personal-best to move towards. Various strategies for targeting other Particles' personal-best Parameters or adapting heuristics can be explored: Pso is not used by itself, since it has no targets, but forms most common interfaces and has a function PUpdate() that does the common velocity update. To create a functioning SPSO, extra code is added before PUpdate() to choose Targets and Heuristics; this code is added by the derived working SPSOs to generate the total update iteration function, Update(). GPso and CLPso are examples of such derived working SPSOs. It is important to note that the collection of groups is stored as a mapping from strings to pointers to groups, so groups can be accessed by name if necessary, although each particle knows which group it belongs to without using the name reference. Also, groups can have no particles that belong to them. At start-up there is only one group, called "root", which contains all the particles. Additional groups can be formed during initialisation or even during iteration, and particles can be moved between groups as and when required.
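The default pseudo-adding rule stated above (p+q-pq) is small enough to show directly. Only the function name is invented here; the formula is the one given in the text:

```go
package main

import "fmt"

// pseudoAdd combines two flip probabilities as described above: by default
// p and q are pseudo-added to give p+q-pq, which stays in [0,1] whenever
// p and q do.
func pseudoAdd(p, q float64) float64 {
	return p + q - p*q
}

func main() {
	fmt.Println(pseudoAdd(0.5, 0.5)) // 0.75
	fmt.Println(pseudoAdd(0.0, 0.3)) // 0.3: pseudo-adding zero is a no-op
	fmt.Println(pseudoAdd(1.0, 0.3)) // 1: a certain flip stays certain
}
```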
setpso can be used for low-level coding, while higher-level run management is provided by the psokit toolkit package. You can quickly run an example by going to the setpso/example/runkit1 directory in a terminal and then executing
proto gives Go operations like Map, Reduce, Filter, De/Multiplex, etc. without sacrificing idiomatic harmony or speed. The `Proto` type is a stand-in approximation for dynamic typing. Due to Go's powerful casting and type inference idioms, we can approximate the flexibility of dynamic typing even though Go is a statically typed language. Doing so sacrifices some of the benefits of static typing AND some of the benefits of dynamic typing, but this sacrifice is fundamentally required by Go until such time as a true 'Generic' type is implemented. In order to use a Proto-typed variable (from here on out, simply a 'Proto'), you will generally have to cast it to a type that you will know to use based on the semantics of your program. This package (specifically, the other files in this package) provides operations on Proto variables, as well as some that make Proto variables out of 'traditionally typed' variables. Many of the operations will require the use of higher-order functions which you will need to provide, and those functions commonly will need you to manually "unbox" (cast-from-Proto) the variable to perform useful operations. Examples of the use of this package can be found in the "*_test.go" files, which contain testing code. A good example of a higher-order function which will commonly need manual unboxing is the `Filter` function, found in "filter.go". `Filter` takes as its first argument a filter function which will almost certainly require you to unbox the Proto channel values that it receives to perform the filtering action. Finally, a word on the entire point of this package: while it is named after the Proto type that pervades it and guides its syntax, the true nature of the `proto` package lies in cascading channels, rather than in dynamic typing. In fact this package might be more appropriately named after channels. Maybe `canal` would have been a better name.
I wanted to bring the syntax and familiar patterns of functional programming idioms to the power and scalability of Go's goroutines and channels, and found that the syntax made this task very simple. You may find, as I did, that the majority of the code in this package is very 'obvious'. At first I was concerned by this - much of the code is very trivial - but now I feel pleased by the re-usability and natural 'correctness' of `proto`. Look at this package not as some monumental time-saving framework, but rather as a light scaffold for a useful and idiomatic style of programming within the existing constructs of Go. Ultimately, though, you're going to be typing the word Proto an awful lot, and thus the type became the eponym.
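A Filter over Proto channels, with the manual unboxing the text describes, can be sketched like this. The exact signature is an assumption for illustration; see "filter.go" in the package for the real one:

```go
package main

import "fmt"

// Proto is the stand-in approximation for dynamic typing described above.
type Proto interface{}

// Filter forwards only the values the predicate accepts, cascading one
// channel into another via a goroutine.
func Filter(fn func(Proto) bool, in <-chan Proto) <-chan Proto {
	out := make(chan Proto)
	go func() {
		defer close(out)
		for v := range in {
			if fn(v) {
				out <- v
			}
		}
	}()
	return out
}

func main() {
	in := make(chan Proto)
	go func() {
		for _, n := range []int{1, 2, 3, 4, 5} {
			in <- n
		}
		close(in)
	}()
	// The predicate must manually unbox the Proto value before it can
	// do anything useful with it.
	evens := Filter(func(v Proto) bool { return v.(int)%2 == 0 }, in)
	for v := range evens {
		fmt.Println(v)
	}
}
```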
Package radix implements all functionality needed to work with redis and all things related to it, including redis cluster, pubsub, sentinel, scanning, lua scripting, and more. For a single node redis instance use NewPool to create a connection pool. The connection pool is thread-safe and will automatically create, reuse, and recreate connections as needed: If you're using sentinel or cluster you should use NewSentinel or NewCluster (respectively) to create your client instead. Any redis command can be performed by passing a Cmd into a Client's Do method. Each Cmd should only be used once. The return from the Cmd can be captured into any appropriate Go primitive type, or a slice, map, or struct, if the command returns an array. FlatCmd can also be used if you wish to use non-string arguments like integers, slices, maps, or structs, and have them automatically be flattened into a single string slice. Cmd and FlatCmd can unmarshal results into a struct. The results must be a key/value array, such as that returned by HGETALL. Exported field names will be used as keys, unless the fields have the "redis" tag: Embedded structs will inline that struct's fields into the parent's: The same rules for field naming apply when a struct is passed into FlatCmd as an argument. Cmd and FlatCmd both implement the Action interface. Other Actions include Pipeline, WithConn, and EvalScript.Cmd. Any of these may be passed into any Client's Do method. There are two ways to perform transactions in redis. The first is with the MULTI/EXEC commands, which can be done using the WithConn Action (see its example). The second is using EVAL with lua scripting, which can be done using the EvalScript Action (again, see its example). EVAL with lua scripting is recommended in almost all cases.
It only requires a single round-trip, it's infinitely more flexible than MULTI/EXEC, it's simpler to code, and for complex transactions, which would otherwise need a WATCH statement with MULTI/EXEC, it's significantly faster. All the client creation functions (e.g. NewPool) take in either a ConnFunc or a ClientFunc via their options. These can be used to set up timeouts on connections, perform authentication commands, or even implement custom pools. All interfaces in this package were designed such that they could have custom implementations. There is no dependency within radix that demands any interface be implemented by a particular underlying type, so feel free to create your own Pools or Conns or Actions or whatever makes your life easier. Errors returned from redis can be explicitly checked for using the resp2.Error type. Note that the errors.As function, introduced in go 1.13, should be used. Use the golang.org/x/xerrors package if you're using an older version of go. Implicit pipelining is an optimization implemented and enabled in the default Pool implementation (and therefore also used by Cluster and Sentinel) which involves delaying concurrent Cmds and FlatCmds a small amount of time and sending them to redis in a single batch, similar to manually using a Pipeline. By doing this radix significantly reduces the I/O and CPU overhead for concurrent requests. Note that only commands which do not block are eligible for implicit pipelining. See the documentation on Pool for more information about the current implementation of implicit pipelining and for how to configure or disable the feature. For a performance comparison between Clients with and without implicit pipelining, see the benchmark results in the README.md.
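The field-naming rule for struct unmarshalling described above (exported field names as keys unless a "redis" tag overrides them) can be sketched with stdlib reflection. This is an illustrative sketch, not the library's implementation:

```go
package main

import (
	"fmt"
	"reflect"
)

// fieldKeys returns the redis keys a struct's fields would map to:
// exported field names, unless a "redis" tag overrides them; unexported
// fields are skipped entirely. Tag options are ignored in this sketch.
func fieldKeys(v interface{}) []string {
	t := reflect.TypeOf(v)
	var keys []string
	for i := 0; i < t.NumField(); i++ {
		f := t.Field(i)
		if f.PkgPath != "" { // unexported field: never used as a key
			continue
		}
		if tag, ok := f.Tag.Lookup("redis"); ok {
			keys = append(keys, tag)
			continue
		}
		keys = append(keys, f.Name)
	}
	return keys
}

type user struct {
	Name  string
	Email string `redis:"email_addr"`
	note  string // unexported, so ignored
}

func main() {
	fmt.Println(fieldKeys(user{})) // [Name email_addr]
}
```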
Package dom provides GopherJS bindings for the JavaScript DOM APIs. This package is an in-progress effort to provide idiomatic Go bindings for the DOM, wrapping the JavaScript DOM APIs. The API is neither complete nor frozen yet, but a great amount of the DOM is already usable. While the package tries to be idiomatic Go, it also tries to stick closely to the JavaScript APIs, so that one does not need to learn a new set of APIs if one is already familiar with them. One decision that hasn't been made yet is what parts exactly should be part of this package. It is, for example, possible that the canvas APIs will live in a separate package. On the other hand, types such as StorageEvent (the event that gets fired when the HTML5 storage area changes) will be part of this package, simply due to how the DOM is structured – even if the actual storage APIs might live in a separate package. This might require special care to avoid circular dependencies. The documentation for some of the identifiers is based on the MDN Web Docs by Mozilla Contributors (https://developer.mozilla.org/en-US/docs/Web/API), licensed under CC-BY-SA 2.5 (https://creativecommons.org/licenses/by-sa/2.5/). The usual entry point for using the dom package is the GetWindow() function, which will return a Window, from which you can get things such as the current Document. The DOM has a large number of different element and event types, but they all follow three interfaces. All functions that work on or return generic elements/events will return one of the three interfaces Element, HTMLElement or Event. In these interface values there will be concrete implementations, such as HTMLParagraphElement or FocusEvent. It's also not unusual that values of type Element also implement HTMLElement. In all cases, type assertions can be used.
Example: Several functions in the JavaScript DOM return "live" collections of elements, that is collections that will be automatically updated when elements get removed or added to the DOM. Our bindings, however, return static slices of elements that, once created, will not automatically reflect updates to the DOM. This is primarily done so that slices can actually be used, as opposed to a form of iterator, but also because we think that magically changing data isn't Go's nature and that snapshots of state are a lot easier to reason about. This does not, however, mean that all objects are snapshots. Elements, events and generally objects that aren't slices or maps are simple wrappers around JavaScript objects, and as such attributes as well as method calls will always return the most current data. To reflect this behaviour, these bindings use pointers to make the semantics clear. Consider the following example: The above example will print `true`. Some objects in the JS API have two versions of attributes, one that returns a string and one that returns a DOMTokenList to ease manipulation of string-delimited lists. Some other objects only provide DOMTokenList, sometimes DOMSettableTokenList. To simplify these bindings, only the DOMTokenList variant will be made available, by the type TokenList. In cases where the string attribute was the only way to completely replace the value, our TokenList will provide Set([]string) and SetString(string) methods, which will be able to accomplish the same. Additionally, our TokenList will provide methods to convert it to strings and slices. This package has a relatively stable API. However, there will be backwards incompatible changes from time to time. This is because the package isn't complete yet, as well as because the DOM is a moving target, and APIs do change sometimes. While an attempt is made to reduce changing function signatures to a minimum, it can't always be guaranteed. 
Sometimes mistakes in the bindings are found that require changing arguments or return values. Interfaces defined in this package may also change on a semi-regular basis, as new methods are added to them. This happens because the bindings aren't complete and can never really be, as new features are added to the DOM. If you depend on none of the APIs changing unexpectedly, you're advised to vendor this package.
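The type assertions described above can be sketched with local stand-ins. Element, HTMLElement and HTMLParagraphElement here only mirror the shape of the package's interfaces, to keep the example self-contained:

```go
package main

import "fmt"

// Element and HTMLElement are local stand-ins mirroring the shape of the
// dom package's interfaces; Focus and TagName are illustrative methods.
type Element interface {
	TagName() string
}

type HTMLElement interface {
	Element
	Focus()
}

// HTMLParagraphElement is a concrete implementation, as in the package.
type HTMLParagraphElement struct{ focused bool }

func (p *HTMLParagraphElement) TagName() string { return "P" }
func (p *HTMLParagraphElement) Focus()          { p.focused = true }

func main() {
	// A generic function would hand back an Element...
	var el Element = &HTMLParagraphElement{}

	// ...which, as the docs note, often also implements HTMLElement.
	if he, ok := el.(HTMLElement); ok {
		he.Focus()
	}

	// Asserting to the concrete type works too.
	if p, ok := el.(*HTMLParagraphElement); ok {
		fmt.Println(p.TagName(), p.focused)
	}
}
```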