Package sessions provides tools to manage cookie-based web sessions. Special emphasis is placed on security by implementing OWASP recommendations, specifically the following features: In addition, the package provides the following functionality: While simple to use, the package offers a number of extensively documented configuration variables. It also does not assume specific backend technologies. That is, any session storage system may be used simply by implementing the PersistenceLayer interface (or parts of it). This package is currently not written to be run on multiple machines in a distributed fashion without a load balancer that implements sticky sessions. This may change in the future. Although some more configuration needs to happen for production readiness, the package's defaults allow you to get started very quickly. To get access to the current session, simply call Start(): By providing "true" instead of "false" to the Start() function, you can force the creation of a session, even if there previously was none. Once you have a session, you can identify a user across multiple HTTP requests. You may add values to the session, attach a user to it, cause its session ID to change, or destroy it again. For more extensive user-centered functions (for example, signing up, logging in and out, changing passwords etc.), see the subdirectory "users". Before putting your application into production, you must implement the NewSessionCookie function: You may choose a different expiry date, domain, and path, but the other fields are mandatory (given that you are using TLS, which you certainly should). You can change the name of the cookie by changing the SessionCookie variable. The default is the inconspicuous string "id". The following timeout values may be adjusted according to the requirements of your application: To further reduce the risk of session hijacking attacks, this package checks client IP addresses as well as user agent strings and destroys sessions if changes in these properties are detected. Refer to the AcceptRemoteIP and AcceptChangingUserAgent variables for more information. Sessions are stored in a local RAM cache (which is a simple map) whose size is defined by the MaxSessionCacheSize variable. If you set this variable to 0, no sessions are held locally. The SessionCacheExpiry controls when a session will be purged from the cache based on the last time it was used. The cache is write-through (except for session last access times). That is, every time a change is made to a session, that change is forwarded to the package's persistence layer to be saved. The persistence layer is a collection of functions which allow the storage and retrieval of objects from a permanent data store. For example, you may use an SQL database or a key-value store. See the documentation of PersistenceLayer for details on the functions to be implemented. If you need to implement only some of the functions, you may use ExtendablePersistenceLayer instead of creating your own class. The package default is to do nothing. That is, sessions are not persisted and therefore will get lost when purged from the local cache or when the application exits. Session objects implement gob.GobEncoder/gob.GobDecoder and json.Marshaler/json.Unmarshaler. While encoding to JSON allows you to easily inspect session attributes in your database, GOB serialization is preferred as it will restore session objects precisely.
(For example, the JSON package always unmarshals numbers into floats even if they were originally integers.) It is recommended that you purge expired sessions from your data store from time to time, e.g. by using a cron job, because users may abandon your website, which will leave old sessions in your store. It is also recommended to call PurgeSessions() before exiting the program. This will cause session last access times to be updated. This package provides a number of utility functions which may be useful in the context of session and user management. The CUID() function generates Base-62 "compact unique identifiers" suitable for user IDs. The RandomID() function generates random Base-62 strings of any length. The ReasonablePassword() function checks the strength of a password based on the recommendations of NIST SP 800-63B.
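The following is a minimal sketch of the session flow described above. The import path and the exact signatures of Start, Set, CUID, RandomID, and ReasonablePassword are assumptions based on this description and should be checked against the package documentation.

    package main

    import (
        "net/http"

        "github.com/rivo/sessions" // import path assumed
    )

    func handler(w http.ResponseWriter, r *http.Request) {
        // Passing "true" forces creation of a session if none exists yet.
        session, err := sessions.Start(w, r, true)
        if err != nil {
            http.Error(w, err.Error(), http.StatusInternalServerError)
            return
        }
        // Attach an application value to the session (method name assumed).
        session.Set("theme", "dark")
    }

    func main() {
        // Utility helpers described above (signatures assumed):
        //   id := sessions.CUID()                        // compact unique Base-62 ID
        //   token := sessions.RandomID(22)               // random Base-62 string of length 22
        //   ok := sessions.ReasonablePassword("hunter2") // NIST SP 800-63B strength check
        http.HandleFunc("/", handler)
        http.ListenAndServe(":8080", nil)
    }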
Package radix implements all functionality needed to work with redis and all things related to it, including redis cluster, pubsub, sentinel, scanning, lua scripting, and more. For a single node redis instance use NewPool to create a connection pool. The connection pool is thread-safe and will automatically create, reuse, and recreate connections as needed: If you're using sentinel or cluster you should use NewSentinel or NewCluster (respectively) to create your client instead. Any redis command can be performed by passing a Cmd into a Client's Do method. Each Cmd should only be used once. The return from the Cmd can be captured into any appropriate Go primitive type, or a slice, map, or struct, if the command returns an array. FlatCmd can also be used if you wish to use non-string arguments like integers, slices, maps, or structs, and have them automatically be flattened into a single string slice. Cmd and FlatCmd can unmarshal results into a struct. The results must be a key/value array, such as that returned by HGETALL. Exported field names will be used as keys, unless the fields have the "redis" tag: Embedded structs will inline that struct's fields into the parent's: The same rules for field naming apply when a struct is passed into FlatCmd as an argument. Cmd and FlatCmd both implement the Action interface. Other Actions include Pipeline, WithConn, and EvalScript.Cmd. Any of these may be passed into any Client's Do method. There are two ways to perform transactions in redis. The first is with the MULTI/EXEC commands, which can be done using the WithConn Action (see its example). The second is using EVAL with lua scripting, which can be done using the EvalScript Action (again, see its example). EVAL with lua scripting is recommended in almost all cases. It only requires a single round-trip, it's infinitely more flexible than MULTI/EXEC, it's simpler to code, and for complex transactions, which would otherwise need a WATCH statement with MULTI/EXEC, it's significantly faster. All the client creation functions (e.g. NewPool) take in either a ConnFunc or a ClientFunc via their options. These can be used in order to set up timeouts on connections, perform authentication commands, or even implement custom pools. All interfaces in this package were designed such that they could have custom implementations. There is no dependency within radix that demands any interface be implemented by a particular underlying type, so feel free to create your own Pools or Conns or Actions or whatever makes your life easier. Errors returned from redis can be explicitly checked for using the resp2.Error type. Note that the errors.As function, introduced in Go 1.13, should be used. Use the golang.org/x/xerrors package if you're using an older version of Go. Implicit pipelining is an optimization implemented and enabled in the default Pool implementation (and therefore also used by Cluster and Sentinel) which involves delaying concurrent Cmds and FlatCmds a small amount of time and sending them to redis in a single batch, similar to manually using a Pipeline. By doing this radix significantly reduces the I/O and CPU overhead for concurrent requests. Note that only commands which do not block are eligible for implicit pipelining. See the documentation on Pool for more information about the current implementation of implicit pipelining and for how to configure or disable the feature. For a performance comparison between Clients with and without implicit pipelining see the benchmark results in the README.md.
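As a hedged illustration of the points above (pool creation, FlatCmd flattening non-string arguments, and unmarshalling an HGETALL reply into a struct via "redis" tags), here is a short sketch; it assumes radix v3 and its usual import path.

    package main

    import (
        "log"

        "github.com/mediocregopher/radix/v3" // import path assumed (radix v3)
    )

    // Fields are matched against the key/value pairs returned by HGETALL;
    // the "redis" tag overrides the exported field name, as described above.
    type userInfo struct {
        Name  string `redis:"name"`
        Score int    `redis:"score"`
    }

    func main() {
        // Thread-safe pool of 10 connections to a single redis node.
        pool, err := radix.NewPool("tcp", "127.0.0.1:6379", 10)
        if err != nil {
            log.Fatal(err)
        }
        defer pool.Close()

        // FlatCmd flattens non-string arguments (here an int) into the command.
        if err := pool.Do(radix.FlatCmd(nil, "HSET", "user:1", "name", "kim", "score", 42)); err != nil {
            log.Fatal(err)
        }

        // Cmd unmarshals the key/value array returned by HGETALL into the struct.
        var u userInfo
        if err := pool.Do(radix.Cmd(&u, "HGETALL", "user:1")); err != nil {
            log.Fatal(err)
        }
        log.Printf("%+v", u)
    }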
Package funcy implements functional favorites like filter, map, and reduce. You'll get a compile error if you try something that doesn't make sense. For example, using map to run strings.ToLower on a slice of ints: will get you an error like
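To illustrate the compile-time checking described above, here is a self-contained sketch with its own generic Map helper; it is not necessarily funcy's actual function name or signature.

    package main

    import (
        "fmt"
        "strings"
    )

    // Map is an illustrative generic helper in the spirit of the package
    // (not necessarily funcy's actual API).
    func Map[T, R any](f func(T) R, s []T) []R {
        out := make([]R, 0, len(s))
        for _, v := range s {
            out = append(out, f(v))
        }
        return out
    }

    func main() {
        fmt.Println(Map(strings.ToUpper, []string{"a", "b"})) // [A B]

        // This would be rejected at compile time, because strings.ToLower
        // takes a string while the slice elements are ints:
        //   Map(strings.ToLower, []int{1, 2, 3})
    }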
This package reads and writes pickled data. The format is the same as the Python "pickle" module. Protocols 0, 1, and 2 are implemented. These are the versions written by the Python 2.x series. Python 3 defines newer protocol versions, but can write the older protocol versions so they are readable by this package. To read data, see stalecucumber.Unpickle. To write data, see stalecucumber.NewPickler. Read a pickled string or unicode object Read a pickled integer Read a pickled list of numbers into a structure Read a pickled dictionary into a structure Pickle a struct You can pickle recursive objects like so Python's pickler is intelligent enough not to emit an infinite data structure when a recursive object is pickled. I recommend against pickling recursive objects in the first place, but this library handles unpickling them without a problem. The result of unpickling the above is map[interface{}]interface{} with a key "a" that contains a reference to itself. Attempting to unpack the result of the above Python code into a structure with UnpackInto would either fail or recurse forever. The Python Pickle module can pickle most Python objects. By default, some Python objects such as the set type and bytearray type are automatically supported by this library. To support unpickling custom Python objects, you need to implement a resolver. A resolver meets the PythonResolver interface, which is just this function The module and name are the class name. So if you have a class called "Foo" in the module "bar" the first argument would be "bar" and the second would be "Foo". You can pass in your custom resolver by calling The third argument of the Resolve function is originally a Python tuple, so it is a slice of anything. For most user-defined objects this is just a Python dictionary. However, if a Python object implements the __reduce__ method it could be anything. If your resolver can't identify the type named by module & name, just return stalecucumber.ErrUnresolvablePythonGlobal. Otherwise convert the args into whatever you want and return that as the value from the function with nil for the error. To avoid reimplementing the same logic over and over, you can chain resolvers together. You can use your resolver in addition to the default resolver by doing the following If the version of Python you are using supports protocol version 1 or 2, you should always specify that protocol version. By default the "pickle" and "cPickle" modules in Python write using protocol 0. Protocol 0 requires much more space to represent the same values and is much slower to parse. The pickle format is incredibly flexible and as a result has some features that are impractical or unimportant when implementing a reader in another language. Each set of opcodes is listed below by protocol version with the impact. Protocol 0 This opcode is used to reference concrete definitions of objects between a pickler and an unpickler by an ID number. The pickle protocol doesn't define what a persistent ID means. This opcode is unlikely to ever be supported by this package. Protocol 1 This opcode is used in recreating pickled Python objects. That is currently not supported by this package. This opcode will be supported in a future revision to this package that allows the unpickling of instances of Python classes. This opcode is equivalent to PERSID in protocol 0 and won't be supported for the same reason. Protocol 2 This opcode is used in recreating pickled Python objects. That is currently not supported by this package.
This opcode will be supported in a future revision to this package that allows the unpickling of instances of Python classes. These opcodes allow using a registry of popular objects that are pickled by name, typically classes. It is envisioned that through a global negotiation and registration process, third parties can set up a mapping between ints and object names. These opcodes are unlikely to ever be supported by this package.
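A short, hedged sketch of the reading helpers mentioned above (Unpickle, String, UnpackInto); the import path and exact helper signatures reflect common usage of this library but should be checked against its documentation.

    package main

    import (
        "io"
        "log"

        "github.com/hydrogen18/stalecucumber" // import path assumed
    )

    // Dest mirrors a pickled Python dictionary such as {"Name": ..., "Age": ...}.
    type Dest struct {
        Name string
        Age  int64
    }

    // readString reads a single pickled str/unicode object from r.
    func readString(r io.Reader) (string, error) {
        return stalecucumber.String(stalecucumber.Unpickle(r))
    }

    // readStruct unpacks a pickled dictionary from r into a Go struct.
    func readStruct(r io.Reader) (Dest, error) {
        var d Dest
        err := stalecucumber.UnpackInto(&d).From(stalecucumber.Unpickle(r))
        return d, err
    }

    func main() {
        // In practice the readers above would wrap a file or network stream
        // written by Python's pickle module.
        log.Println("see readString and readStruct")
    }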
Package esquery provides a non-obtrusive, idiomatic and easy-to-use query and aggregation builder for the official Go client (https://github.com/elastic/go-elasticsearch) for the ElasticSearch database (https://www.elastic.co/products/elasticsearch). esquery alleviates the need to use extremely nested maps (map[string]interface{}) and to serialize queries to JSON manually. It also helps eliminate common mistakes such as misspelling query types, as everything is statically typed. Using `esquery` can make your code much easier to write, read and maintain, and significantly reduce the amount of code you write. esquery provides a method chaining-style API for building and executing queries and aggregations. It does not wrap the official Go client nor does it require you to change your existing code in order to integrate the library. Queries can be directly built with `esquery`, and executed by passing an `*elasticsearch.Client` instance (with optional search parameters). Results are returned as-is from the official client (e.g. `*esapi.Response` objects). Getting started is extremely simple: esquery currently supports version 7 of the ElasticSearch Go client. The library cannot currently generate "short queries". For example, whereas ElasticSearch can accept this: { "query": { "term": { "user": "Kimchy" } } } The library will always generate this: This is also true for queries such as "bool", where fields like "must" can either receive one query object, or an array of query objects. `esquery` will generate an array even if there's only one query object.
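To make the chaining style concrete, here is a hedged sketch; the esquery import path and the builder names used (Search, Term, Run) reflect my understanding of the library and may need checking against its documentation.

    package main

    import (
        "context"
        "log"

        "github.com/aquasecurity/esquery" // import path assumed
        "github.com/elastic/go-elasticsearch/v7"
    )

    func main() {
        es, err := elasticsearch.NewDefaultClient()
        if err != nil {
            log.Fatal(err)
        }

        // Build the query with method chaining and execute it through the
        // official client; the response is a plain *esapi.Response.
        res, err := esquery.Search().
            Query(esquery.Term("user", "Kimchy")).
            Run(es,
                es.Search.WithContext(context.Background()),
                es.Search.WithIndex("users"),
            )
        if err != nil {
            log.Fatal(err)
        }
        defer res.Body.Close()
        log.Println(res.Status())
    }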
Package elasticclient provides a non-obtrusive, idiomatic and easy-to-use query and aggregation builder for the official Go client (https://github.com/elastic/go-elasticsearch) for the ElasticSearch database (https://www.elastic.co/products/elasticsearch). elasticclient alleviates the need to use extremely nested maps (map[string]interface{}) and to serialize queries to JSON manually. It also helps eliminate common mistakes such as misspelling query types, as everything is statically typed. Using `elasticclient` can make your code much easier to write, read and maintain, and significantly reduce the amount of code you write. elasticclient provides a method chaining-style API for building and executing queries and aggregations. It does not wrap the official Go client nor does it require you to change your existing code in order to integrate the library. Queries can be directly built with `elasticclient`, and executed by passing an `*elasticsearch.Client` instance (with optional search parameters). Results are returned as-is from the official client (e.g. `*esapi.Response` objects). Getting started is extremely simple: elasticclient currently supports version 7 of the ElasticSearch Go client. The library cannot currently generate "short queries". For example, whereas ElasticSearch can accept this: { "query": { "term": { "user": "Kimchy" } } } The library will always generate this: This is also true for queries such as "bool", where fields like "must" can either receive one query object, or an array of query objects. `elasticclient` will generate an array even if there's only one query object.
Package pargo provides functions and data structures for expressing parallel algorithms. While Go is primarily designed for concurrent programming, it is also usable to some extent for parallel programming, and this library provides convenience functionality to turn otherwise sequential algorithms into parallel algorithms, with the goal of improving performance. For documentation that provides a more structured overview than is possible with Godoc, see the wiki at https://github.com/exascience/pargo/wiki Pargo provides the following subpackages: pargo/parallel provides simple functions for executing a series of thunks or predicates, as well as thunks, predicates, or reducers over ranges in parallel. See also https://github.com/ExaScience/pargo/wiki/TaskParallelism pargo/speculative provides speculative implementations of most of the functions from pargo/parallel. These implementations not only execute in parallel, but also attempt to terminate early as soon as the final result is known. See also https://github.com/ExaScience/pargo/wiki/TaskParallelism pargo/sequential provides sequential implementations of all functions from pargo/parallel, for testing and debugging purposes. pargo/sort provides parallel sorting algorithms. pargo/sync provides an efficient parallel map implementation. pargo/pipeline provides functions and data structures to construct and execute parallel pipelines. Pargo has been influenced to various extents by ideas from Cilk, Threading Building Blocks, and Java's java.util.concurrent and java.util.stream packages. See http://supertech.csail.mit.edu/papers/steal.pdf for some theoretical background, and the sample chapter at https://mitpress.mit.edu/books/introduction-algorithms for a more practical overview of the underlying concepts.
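As an illustration of the kind of convenience functionality described above, here is a hedged sketch using pargo/parallel; the Range signature shown (low, high, a goroutine-count hint, and a range function) is an assumption and may differ from the actual API.

    package main

    import (
        "fmt"

        "github.com/exascience/pargo/parallel" // import path assumed
    )

    func main() {
        data := make([]int, 1000)

        // Assumed signature: Range(low, high, n int, f func(low, high int)),
        // where n hints how many goroutines to use (0 = automatic).
        // Each invocation of f receives a disjoint sub-range to work on.
        parallel.Range(0, len(data), 0, func(low, high int) {
            for i := low; i < high; i++ {
                data[i] = i * i
            }
        })

        fmt.Println(data[10]) // 100
    }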
Package couchdb provides components to work with CouchDB 2.x with Go. Resource provides the low-level wrapper functions of HTTP methods used for communicating with the CouchDB server. Server contains all the functions to work with the CouchDB server, including some basic functions to facilitate the basic user management provided by it. Database contains all the functions to work with a CouchDB database, such as manipulating and querying documents. ViewResults represents the results produced by design document views. When calling any of its functions like Offset(), TotalRows(), UpdateSeq() or Rows(), it will perform a query on the views on the server side and return the results as a slice of Row. ViewDefinition is a definition of a view stored in a specific design document; you can define your own map-reduce functions and Sync them with the database. Document represents a document object in the database. Every struct that can be mapped into a CouchDB document must have it embedded. For example: Then you can call Store(db, &user) to store it into CouchDB or Load(db, user.GetID(), &anotherUser) to get the data from the database. ViewField represents a view definition value bound to Document. tools/replicate is a command-line tool for replicating databases from one CouchDB server to another. This is mainly for backup purposes, but you can also use the -continuous option to set up automatic replication.
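The struct-embedding example referred to above is not shown, so here is a hedged sketch; the Store, Load and GetID names come from this description, while the import path, the NewDatabase constructor and the field tags are assumptions.

    package main

    import (
        "log"

        couchdb "github.com/leesper/couchdb-golang" // import path assumed
    )

    // User embeds couchdb.Document so it can be stored and loaded as a document.
    type User struct {
        couchdb.Document
        Name string `json:"name"`
        Age  int    `json:"age"`
    }

    func main() {
        db, err := couchdb.NewDatabase("http://localhost:5984/users") // constructor name assumed
        if err != nil {
            log.Fatal(err)
        }

        user := User{Name: "kim", Age: 30}
        // Store the struct as a CouchDB document, then load it back by ID.
        if err := couchdb.Store(db, &user); err != nil {
            log.Fatal(err)
        }
        var another User
        if err := couchdb.Load(db, user.GetID(), &another); err != nil {
            log.Fatal(err)
        }
        log.Printf("%+v", another)
    }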
Package dom provides GopherJS bindings for the JavaScript DOM APIs. This package is an in-progress effort to provide idiomatic Go bindings for the DOM, wrapping the JavaScript DOM APIs. The API is neither complete nor frozen yet, but a great amount of the DOM is already usable. While the package tries to be idiomatic Go, it also tries to stick closely to the JavaScript APIs, so that one does not need to learn a new set of APIs if one is already familiar with it. One decision that hasn't been made yet is what parts exactly should be part of this package. It is, for example, possible that the canvas APIs will live in a separate package. On the other hand, types such as StorageEvent (the event that gets fired when the HTML5 storage area changes) will be part of this package, simply due to how the DOM is structured – even if the actual storage APIs might live in a separate package. This might require special care to avoid circular dependencies. The documentation for some of the identifiers is based on the MDN Web Docs by Mozilla Contributors (https://developer.mozilla.org/en-US/docs/Web/API), licensed under CC-BY-SA 2.5 (https://creativecommons.org/licenses/by-sa/2.5/). The usual entry point of using the dom package is by using the GetWindow() function which will return a Window, from which you can get things such as the current Document. The DOM has a large number of different element and event types, but they all follow three interfaces. All functions that work on or return generic elements/events will return one of the three interfaces Element, HTMLElement or Event. In these interface values there will be concrete implementations, such as HTMLParagraphElement or FocusEvent. It's also not unusual that values of type Element also implement HTMLElement. In all cases, type assertions can be used. Example: Several functions in the JavaScript DOM return "live" collections of elements, that is collections that will be automatically updated when elements get removed or added to the DOM. Our bindings, however, return static slices of elements that, once created, will not automatically reflect updates to the DOM. This is primarily done so that slices can actually be used, as opposed to a form of iterator, but also because we think that magically changing data isn't Go's nature and that snapshots of state are a lot easier to reason about. This does not, however, mean that all objects are snapshots. Elements, events and generally objects that aren't slices or maps are simple wrappers around JavaScript objects, and as such attributes as well as method calls will always return the most current data. To reflect this behaviour, these bindings use pointers to make the semantics clear. Consider the following example: The above example will print `true`. Some objects in the JS API have two versions of attributes, one that returns a string and one that returns a DOMTokenList to ease manipulation of string-delimited lists. Some other objects only provide DOMTokenList, sometimes DOMSettableTokenList. To simplify these bindings, only the DOMTokenList variant will be made available, by the type TokenList. In cases where the string attribute was the only way to completely replace the value, our TokenList will provide Set([]string) and SetString(string) methods, which will be able to accomplish the same. Additionally, our TokenList will provide methods to convert it to strings and slices. This package has a relatively stable API. However, there will be backwards incompatible changes from time to time.
This is because the package isn't complete yet, as well as because the DOM is a moving target, and APIs do change sometimes. While an attempt is made to reduce changing function signatures to a minimum, it can't always be guaranteed. Sometimes mistakes in the bindings are found that require changing arguments or return values. Interfaces defined in this package may also change on a semi-regular basis, as new methods are added to them. This happens because the bindings aren't complete and can never really be, as new features are added to the DOM. If you depend on none of the APIs changing unexpectedly, you're advised to vendor this package.
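A small, hedged sketch of the entry point and type-assertion pattern described above; it assumes a GopherJS build, and the element ID and concrete element type are purely illustrative.

    package main

    import (
        "honnef.co/go/js/dom" // import path assumed
    )

    func main() {
        d := dom.GetWindow().Document()

        // Generic lookups return the Element interface; assert to a concrete
        // HTML*Element (or to HTMLElement) to reach the specific API.
        el := d.GetElementByID("greeting")
        if p, ok := el.(*dom.HTMLParagraphElement); ok {
            p.SetTextContent("Hello, DOM")
        }
    }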
Package proto gives Go operations like Map, Reduce, Filter, De/Multiplex, etc. without sacrificing idiomatic harmony or speed. The `Proto` type is a stand-in approximation for dynamic typing. Due to Go's powerful casting and type inference idioms, we can approximate the flexibility of dynamic typing even though Go is a statically typed language. Doing so sacrifices some of the benefits of static typing AND some of the benefits of dynamic typing, but this sacrifice is fundamentally required by Go until such time as a true 'Generic' type is implemented. In order to use a Proto-typed variable (from here on out, simply a 'Proto'), you will generally have to cast it to a type that you will know to use based on the semantics of your program. This package (specifically, the other files in this package) provides operations on Proto variables as well as some that make Proto variables out of 'traditionally typed' variables. Many of the operations will require the use of higher-order functions which you will need to provide, and those functions commonly will need you to manually "unbox" (cast-from-Proto) the variable to perform useful operations. Examples of the use of this package can be found in the "*_test.go" files, which contain testing code. A good example of a higher-order function which will commonly need manual unboxing is the `Filter` function, found in "filter.go". `Filter` takes as its first argument a filter-function which will almost certainly require you to un-box the Proto channel values that it receives to perform the filtering action. Finally, a word on the entire point of this package: while it is named after the Proto type that pervades it and guides its syntax, the true nature of the `proto` package lies in cascading channels, rather than in dynamic typing. In fact this package might be more appropriately named after channels. Maybe `canal` would have been a better name. I wanted to bring the syntax and familiar patterns of functional programming idioms to the power and scalability of Go's goroutines and channels, and found that the syntax made this task very simple. You may find, as I did, that the majority of the code in this package is very 'obvious'. At first I was concerned by this - much of the code is very trivial - but now I feel pleased by the re-usability and natural 'correctness' of `proto`. Look at this package not as some monumental time-saving framework, but rather as a light scaffold for a useful and idiomatic style of programming within the existing constructs of Go. Ultimately, though, you're going to be typing the word Proto an awful lot, and thus the type became the eponym.
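Purely as an illustration of the manual unboxing that a filter-function must perform, here is a self-contained sketch; it defines its own stand-in Proto type and is not the package's actual API.

    package main

    import "fmt"

    // Proto stands in for the package's Proto type (an empty interface), used
    // here only to illustrate the unboxing a Filter predicate must do.
    type Proto interface{}

    // keepEvens is the kind of predicate a Filter would take: it unboxes the
    // Proto value to the type the program knows it holds, then decides keep/drop.
    func keepEvens(v Proto) bool {
        n, ok := v.(int)
        return ok && n%2 == 0
    }

    func main() {
        fmt.Println(keepEvens(4), keepEvens(3)) // true false
    }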
Package bigtable_access_layer is a library designed to ease reading data from Big Table. It features: This library fits well when you want to store time series data in Big Table, like: In those use-cases, each row will be a logical set of events, with its row key built in a way it can be easily identified and will contain a manageable number of events. For instance, a row key could include the region of the weather station, the year and the week number separated with `#` to look like `europe-west1#2021#week1`. Each event is a set of cells sharing the same timestamp, so when the access-layer turns a row into a set of events, it groups cells by timestamp to end up with one event per timestamp. Here's an example from Google's documentation: https://cloud.google.com/bigtable/docs/schema-design-time-series#time-buckets Big Table treats column qualifiers as data, not metadata, meaning that each character in a column qualifier counts. So the longer a column qualifier is, the more space it will use. As a consequence, Google recommends using the column qualifier as data or, if that's not possible, using short but meaningful column names. It will save space and reduce the amount of transferred data. The mapping system is here to turn short column names into their human-readable equivalents. It can also be used when the column qualifier contains data, provided it is an "enum" as defined in the mapping. Here's an example of a mapping: And now how to use it in the mapper: The repository embeds the mapper to have easy access to mapped data. It also provides a search engine that performs all the required logic to search filtered data and collect all properties for each event.
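Since the mapping example above is not shown, here is a stand-alone illustrative sketch of the idea only; it does not use this library's actual types or mapping schema.

    package main

    import "fmt"

    // Illustrative only: short column qualifiers are translated into readable
    // property names when a row's cells are grouped into events.
    func main() {
        // raw cell qualifiers and values as stored in Big Table
        cells := map[string]string{"t": "21.5", "h": "40"}

        // mapping from short qualifier to human-readable property name
        names := map[string]string{"t": "temperature", "h": "humidity"}

        event := map[string]string{}
        for q, v := range cells {
            if readable, ok := names[q]; ok {
                event[readable] = v
            }
        }
        fmt.Println(event) // map[humidity:40 temperature:21.5]
    }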
Package ctxmap implements a registry for global context.Context for use in web applications. Based on work from github.com/gorilla/context, this package simplifies the storage by mapping a pointer to an http.Request to a context.Context. This allows applications to use Google's standard context mechanism to pass state around their web applications, while sticking to the standard http.HandlerFunc implementation for their middleware implementations. As a result of the simplification, the runtime overhead of the package is reduced by 30 to 40 percent in my tests. However, it would be common to store a map of values or a pointer to a structure in the Context object, and my testing does not account for time taken beyond calling Context.Value().
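For clarity, here is an illustrative sketch of the mechanism described above (a registry keyed by the *http.Request pointer); it is not ctxmap's actual API, and a real implementation would also remove the entry when the request finishes.

    package main

    import (
        "context"
        "net/http"
        "sync"
    )

    type ctxKey string

    var (
        mu       sync.RWMutex
        registry = map[*http.Request]context.Context{}
    )

    // set associates a context with a request; typically called by middleware.
    func set(r *http.Request, ctx context.Context) {
        mu.Lock()
        registry[r] = ctx
        mu.Unlock()
    }

    // get returns the request's context, falling back to context.Background().
    func get(r *http.Request) context.Context {
        mu.RLock()
        defer mu.RUnlock()
        if ctx, ok := registry[r]; ok {
            return ctx
        }
        return context.Background()
    }

    func main() {
        r, _ := http.NewRequest("GET", "http://example.com/", nil)
        set(r, context.WithValue(context.Background(), ctxKey("user"), "kim"))
        _ = get(r).Value(ctxKey("user")) // "kim"
    }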
Package couchdb provides components to work with CouchDB 2.x with Go. Resource provides the low-level wrapper functions of HTTP methods used for communicating with the CouchDB server. Server contains all the functions to work with the CouchDB server, including some basic functions to facilitate the basic user management provided by it. Database contains all the functions to work with a CouchDB database, such as manipulating and querying documents. ViewResults represents the results produced by design document views. When calling any of its functions like Offset(), TotalRows(), UpdateSeq() or Rows(), it will perform a query on the views on the server side and return the results as a slice of Row. ViewDefinition is a definition of a view stored in a specific design document; you can define your own map-reduce functions and Sync them with the database. Document represents a document object in the database. Every struct that can be mapped into a CouchDB document must have it embedded. For example: Then you can call Store(db, &user) to store it into CouchDB or Load(db, user.GetID(), &anotherUser) to get the data from the database. ViewField represents a view definition value bound to Document.
Package cslb provides transparent HTTP/HTTPS Client Side Load Balancing for Go programs. Cslb intercepts "net/http" Dial Requests and redirects them to a preferred set of target hosts based on the load balancing configuration expressed in DNS SRV and TXT Resource Records (RRs). Only one trivial change is required to client applications to benefit from cslb, which is to import this package and (if needed) enable it for non-default http.Transport instances. Cslb processing is triggered by the presence of SRV RRs. If no SRVs exist, cslb is benign, which means you can deploy your application with cslb and independently activate and deactivate cslb processing for each service at any time. No server-side changes are required at all - apart from possibly dispensing with your server-side load-balancers! Importing cslb automatically enables interception for http.DefaultTransport. In this program snippet: the Dial Request made by http.Get is intercepted and processed by cslb. If the application uses its own http.Transport then cslb processing needs to be activated by calling the cslb.Enable() function, i.e.: The cslb.Enable() function replaces http.Transport.DialContext with its own intercept function. Server-side load-balancers are no panacea. They add deployment and diagnostic complexity, cost, throughput constraints and become an additional point of possible failure. Cslb can help you achieve good load-balancing and fail-over behaviour without the need for *any* server-side load-balancers. This is particularly useful in enterprise and micro-service deployments as well as smaller application deployments where configuring and managing load-balancers is a significant resource drain. Cslb can be used to load-balance across geographically dispersed targets or where "hot stand-by" systems are purposely deployed on diverse infrastructure. When cslb intercepts an http.Transport Dial Request to port 80 or port 443 it looks up SRV RRs as prescribed by RFC2782. That is, _http._tcp.$domain and _https._tcp.$domain respectively. Cslb directs the Dial Request to the highest preference target based on the SRV algorithm. If that Dial Request fails, it tries the next lower preference target until a successful connection is returned or all unique targets fail or it runs out of time. Cslb caches the SRV RRs (or their non-existence) as well as the result of Dial Requests to the SRV targets to optimize subsequent intercepted calls and the selection of preferred targets. If no SRV RRs exist, cslb passes the Dial Request on to net.DialContext. Cslb has specific rules about when interception occurs. It normally only considers intercepting ports 80 and 443; however, if the "cslb_allports" environment variable is set, cslb intercepts non-standard HTTP ports and maps them to numeric service names. For example http://example.net:8080 gets mapped to _8080._tcp.example.net as the SRV name to resolve. While cslb runs passively by caching the results of previous Dial Requests, it can also run actively by periodically performing health checks on targets. This is useful as an administrator can control health check behaviour to move a target "in and out of rotation" without changing DNS entries and waiting for TTLs to age out. Health checks are also likely to make the application a little more responsive as they are less likely to make a dial attempt to a target that is not working. Active health checking is enabled by the presence of a TXT RR in the sub-domain "_$port._cslb" of the target. E.g. 
if the SRV target is "s1.example.net:80" then cslb looks for the TXT RR at "_80._cslb.s1.example.net". If that TXT RR contains a URL then it becomes the health check URL. If no TXT RR exists or the contents do not form a valid URL then no active health check is performed for that target. The health check URL does not have to be related to the target in any particular way. It could be a URL to a central monitoring system which performs complicated application level tests and performance monitoring. Or it could be a URL on the target system itself. A health check is considered successful when a GET of the URL returns a 200 status and the content contains the uppercase text "OK" somewhere in the body (See the "cslb_hc_ok" environment variable for how this can be modified). Unless both those conditions are met the target is considered unavailable. Active health checks cease once a target becomes idle for too long, and health check Dial Requests are *not* intercepted by cslb. If your current service exists on a single server called "s1.example.net" and you want to spread the load across additional servers "s2.example.net" and "s3.example.net", and assuming you've added the "cslb" package to your application, then the following DNS changes activate cslb processing: Current DNS Additional DNS A number of observations about this DNS setup: Cslb maintains a cache of SRV lookups and the health status of targets. Cache entries automatically age out as a form of garbage collection. Removed cache entries stop any associated active health checks. Unfortunately the cache ageing does not have access to the DNS TTLs associated with the SRV RRs so it makes a best-guess at reasonable time-to-live values. The important point to note is that *all* values get periodically refreshed from the DNS. Nothing persists internally forever regardless of the level of activity. This means you can be sure that any changes to your DNS will be noticed by cslb in due course. Cslb optionally runs a web server which presents internal statistics on its performance and activity. This web service has *no* access controls so it's best to only run it on a loopback address. Setting the environment variable "cslb_listen" to a listen address activates the status server. E.g.: On initialization the cslb package examines the "cslb_options" environment variable for single letter options which have the following meaning: An example of how this might be used from a shell: Many internal configuration values can be over-ridden with environment variables as shown in this table: Any values which are invalid or fall outside a reasonable range are ignored. Cslb only knows about the results of network connection attempts made by DialContext and the results of any configured health checks. If a service is accepting network connections but not responding to HTTP requests - or responding negatively - the client experiences failures but cslb will be unaware of these failures. The result is that cslb will continue to direct future Dial Requests to that faulty service in accordance with the SRV priorities. If your service is vulnerable to this scenario, active health checks are recommended. This could be something as simple as an on-service health check which responds based on recent "200 OK" responses in the service log file. Alternatively an on-service monitor which closes the listen socket will also work. In general, defining a failing service is a complicated matter that only the application truly understands. 
For this reason health checks are used as an intermediary which does understand application level failures and converts them to simple language which cslb groks. While every service is different there are a few general guidelines which apply to most services when using cslb. First of all, run simple health checks if you can and configure them for use by cslb. Second, have each target configured with both ipv4 and ipv6 addresses. This affords two potentially independent network paths to the targets. Furthermore, net.Dialer attempts both ipv4 and ipv6 connections simultaneously which maximizes responsiveness for the client. Third, consider a "canary" target as a low preference (highest numeric value SRV priority) target. If this "canary" target is accessed by cslb clients, it tells you they are having trouble reaching their "real" targets. Being able to run a "canary" service is one of the side-benefits of cslb and SRVs. When analyzing the Status Web Page or watching the Run Time Control output, observers need to be aware of caching by the http (and possibly other) packages. For example not every call to http.Get() results in a Dial Request as httpClient tries to re-use connections. In a similar vein if you change a DNS entry and don't believe cslb has noticed this change within an appropriate TTL amount of time, be aware that on some platforms the intervening recursive resolvers adjust TTLs as they see fit. For example some home-gamer routers are known to increase short TTLs to values they believe to be more "appropriate" in an attempt to reduce their cache churn. Perhaps the biggest caveat of all is that cslb relies on being enabled for all http.Transports in use by your application. If you are importing a package (either directly or indirectly) which constructs its own http.Transports then you'll need to modify that package to call cslb.Enable() otherwise those http requests will not be intercepted. Of course if the package is making requests incidental to the core functionality of your application then maybe it doesn't matter and you can leave them be. Something to be aware of.
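Relating to the activation step described above for non-default transports, here is a hedged sketch; the import path is assumed, and the assumption that Enable accepts the *http.Transport whose DialContext is to be replaced should be checked against the package's documentation.

    package main

    import (
        "net/http"
        "time"

        "github.com/markdingo/cslb" // import path assumed
    )

    func main() {
        // Importing cslb already intercepts http.DefaultTransport. For a
        // custom transport, Enable must be called explicitly; the signature
        // shown here (taking the transport to modify) is an assumption.
        transport := &http.Transport{
            MaxIdleConns:    10,
            IdleConnTimeout: 30 * time.Second,
        }
        cslb.Enable(transport)

        client := &http.Client{Transport: transport}
        _, _ = client.Get("http://example.net/") // Dial Requests now go through cslb
    }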
Package osquery provides a non-obtrusive, idiomatic and easy-to-use query and aggregation builder for the official Go client (https://github.com/elastic/go-elasticsearch) for the ElasticSearch database (https://www.elastic.co/products/elasticsearch). osquery alleviates the need to use extremely nested maps (map[string]interface{}) and to serialize queries to JSON manually. It also helps eliminate common mistakes such as misspelling query types, as everything is statically typed. Using `osquery` can make your code much easier to write, read and maintain, and significantly reduce the amount of code you write. osquery provides a method chaining-style API for building and executing queries and aggregations. It does not wrap the official Go client nor does it require you to change your existing code in order to integrate the library. Queries can be directly built with `osquery`, and executed by passing an `*opensearch.Client` instance (with optional search parameters). Results are returned as-is from the official client (e.g. `*opensearchapi.Response` objects). Getting started is extremely simple: osquery currently supports version 7 of the ElasticSearch Go client. The library cannot currently generate "short queries". For example, whereas ElasticSearch can accept this: { "query": { "term": { "user": "Kimchy" } } } The library will always generate this: This is also true for queries such as "bool", where fields like "must" can either receive one query object, or an array of query objects. `osquery` will generate an array even if there's only one query object.
Package maduse is an implementation of the functional concepts filter, map and reduce found in other languages like Python, JavaScript, etc. This package purposely diverges from core principles of how Go code should be written, so you should think twice before you consider using this package; in most cases for loops are the way to go. The reason for the existence of this package is that it allows for better composability and allows datasets to be more easily explored and evaluated in Go. It's specifically designed as a tool to be used for experimenting with datasets and not as a library intended for production use where performance is critical. The API of the maduse package is completely dynamic, which has the downside of no compile-time guarantees about the function signatures given to filter, map or reduce. Each method on a maduse.Collection has a description of the handlers it supports. Because Go doesn't support generics yet, I have created my own notation where <Type> can be replaced with whatever type you want. The <Type> in the function argument has to be the same as in the collection. The output type could be something else or the same as the input; it depends on what you want to achieve. This package is heavily based on reflection and type assertions, which can result in runtime panics if used wrongly. TODO(@kvartborg): would like to experiment with a streaming implementation based on the io.Reader interface at some point.
Package acceptable is a library that handles headers for content negotiation and conditional requests in web applications written in Go. Content negotiation is specified by RFC (http://tools.ietf.org/html/rfc7231) and, less formally, by Ajax (https://en.wikipedia.org/wiki/XMLHttpRequest). * contenttype, headername - bundles of useful constants * data - for holding response data & metadata prior to rendering the response, also allowing lazy evaluation * header - for parsing and representing certain HTTP headers * offer - for enumerating offers to be matched against requests * templates - for rendering Go templates Server-based content negotiation is essentially simple: the user agent sends a request including some preferences (accept headers), then the server selects one of several possible ways of sending the response. Finding the best match depends on you listing your available response representations. This is all rolled up into a simple-to-use function `acceptable.RenderBestMatch`. What this does is described in detail in [RFC-7231](https://tools.ietf.org/html/rfc7231#section-5.3), but it's easy to use in practice. For example The RenderBestMatch function searches for the offer that best matches the request headers. If none match, the response will be 406-Not Acceptable. If you need to have a catch-all case, include offer.Of(p, contenttype.TextAny) or offer.Of(p, contenttype.Any) last in the list. Note that contenttype.TextAny is "text/*" and will typically return "text/plain"; contenttype.Any is "*/*" and will likewise return "application/octet-stream". Each offer will (usually) have a suitable offer.Processor, which is a rendering function. Several are provided (for JSON, XML etc), but you can also provide your own. Also, the templates sub-package provides Go template support. Offers are restricted both by content-type matching and by language matching. The `With` method provides data and specifies its content language. Use it as many times as you need to. The language(s) is matched against the Accept-Language header using the basic prefix algorithm. This means for example that if you specify "en" it will match "en", "en-GB" and everything else beginning with "en-", but if you specify "en-GB", it only matches "en-GB" and "en-GB-*", but won't match "en-US" or even "en". (This implements the basic filtering language matching algorithm defined in https://tools.ietf.org/html/rfc4647.) If your data doesn't need to specify a language, the With method should simply use the "*" wildcard instead. For example, myOffer.With(data, "*") attaches data to myOffer and doesn't restrict the offer to any particular language. The language wildcard could also be used as a catch-all case if it comes after one or more With with a specified language. However, the standard (RFC-7231) advises that a response should be returned even when language matching has failed; RenderBestMatch will do this by picking the first language listed as a fallback, so the catch-all case is only necessary if its data is different to that of the first case. The response data (en and fr above) can be structs, slices, maps, or other values that the rendering processors accept. They will be wrapped as data.Data values, which you can provide explicitly. These allow for lazy evaluation of the content and also support conditional requests. This comes into its own when there are several offers each with their own data model - if these were all to be read from the database before selection of the best match, all but one would be wasted. 
Lazy evaluation of the selected data easily overcomes this problem. Besides the data and error returned values, some metadata can optionally be returned. This is the basis for easy support for conditional requests (see [RFC-7232](https://tools.ietf.org/html/rfc7232)). If the metadata is nil, it is simply ignored. However, if it contains a hash of the data (e.g. via MD5) known as the entity tag or etag, then the response will have an ETag header. User agents that recognise this will later repeat the request along with an If-None-Match header. If present, If-None-Match is recognised before rendering starts and a successful match will avoid the need for any rendering. Due to the lazy content fetching, it can reduce unnecessary database traffic etc. The metadata can also carry the last-modified timestamp of the data, if this is known. When present, this becomes the Last-Modified header and is checked on subsequent requests using the If-Modified-Since header. The template and language parameters are used for templated/web content data; otherwise they are ignored. Sequences of data can also be produced. This is done with data.Sequence() and this takes the same supplier function as used by data.Lazy(). The difference is that, in a sequence, the supplier function will be called repeatedly until its result value is nil. All the values will be streamed in the response (how this is done depends on the rendering processor). Most responses will be UTF-8, sometimes UTF-16. All other character sets (e.g. Windows-1252) are now strongly deprecated. However, legacy support for other character sets is provided. Transcoding is implemented by Match.ApplyHeaders so that the Accept-Charset content negotiation can be implemented. This depends on finding an encoder in golang.org/x/text/encoding/htmlindex (this has an extensive list, however no other encoders are supported). Whenever possible, responses will be UTF-8. Not only is this strongly recommended, it also avoids any transcoding processing overhead. It means for example that "Accept-Charset: iso-8859-1, utf-8" will ignore the iso-8859-1 preference because it can use UTF-8. Conversely, "Accept-Charset: iso-8859-1" will always have to transcode into ISO-8859-1 because there is no UTF-8 option.
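A hedged sketch assembled from the identifiers used above (RenderBestMatch, offer.Of, With, contenttype.TextAny); the import paths, the processor placeholder p, the chainability of With, and the exact argument list of RenderBestMatch are assumptions rather than confirmed signatures.

    package main

    import (
        "net/http"

        "github.com/rickb777/acceptable"             // import path assumed
        "github.com/rickb777/acceptable/contenttype" // import path assumed
        "github.com/rickb777/acceptable/offer"       // import path assumed
    )

    // handler negotiates between an English and a French representation, with
    // a catch-all text offer; p is one of the provided offer.Processor
    // rendering functions (e.g. the JSON processor), wired up by the caller.
    func handler(p offer.Processor) http.HandlerFunc {
        return func(w http.ResponseWriter, r *http.Request) {
            en := map[string]string{"greeting": "Hello"}
            fr := map[string]string{"greeting": "Bonjour"}

            // One offer restricted by content type and language; the first
            // listed language ("en") also acts as the fallback.
            o := offer.Of(p, "application/json").With(en, "en").With(fr, "fr")

            // Catch-all so text clients still get a response instead of 406.
            catchAll := offer.Of(p, contenttype.TextAny)

            _ = acceptable.RenderBestMatch(w, r, o, catchAll) // argument list assumed
        }
    }

    func main() {
        // Wiring a concrete processor and routes is omitted in this sketch.
    }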
This is because the package isn't complete yet, as well as because the DOM is a moving target, and APIs do change sometimes. While an attempt is made to reduce changing function signatures to a minimum, it can't always be guaranteed. Sometimes mistakes in the bindings are found that require changing arguments or return values. Interfaces defined in this package may also change on a semi-regular basis, as new methods are added to them. This happens because the bindings aren't complete and can never really be, as new features are added to the DOM. If you depend on none of the APIs changing unexpectedly, you're advised to vendor this package.
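As a rough illustration of the type-assertion idiom described above, assuming the honnef.co/go/js/dom import path and a page that contains an input element with id "name" (both assumptions, not part of this documentation):

	package main

	import "honnef.co/go/js/dom"

	func main() {
		doc := dom.GetWindow().Document()
		el := doc.GetElementByID("name") // returns the generic Element interface
		// Use a type assertion to reach the concrete element type.
		if input, ok := el.(*dom.HTMLInputElement); ok {
			println(input.Value)
		}
	}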
Package setpso is a collection of set-based Particle Swarm Optimisers (SPSO) designed for cost functions that map binary patterns to *big.Int cost values. The binary pattern, called the Parameters, is also encoded as a *big.Int. An SPSO is a swarm of entities called Particles that together iteratively hunt for better solutions. The update iteration of the swarm mimics the spirit of the continuous case and is based on set operations. It also includes experimental enhancements to improve the discrete case. For a brief introduction, the context of use and planned future development, read the Readme file at https://github.com/mathrgo/setpso Package setpso lives in a directory at the top of a hierarchy of packages. Package setpso contains two working SPSOs, GPso and CLPso, which depend on Pso for all the interfaces needed by package psokit except Update(). The setpso/fun packages are where cost functions that interface with Pso are usually placed, together with any helper packages for such cost functions. Package psokit provides a high-level multiple-run interface in which the elements of a run are referred to by name; it is used for setting up runs of various SPSO and cost-function combinations and for searching for good heuristics. While exploring Parameters to find a reduced cost, as returned by the independent cost function, each Particle keeps a record of the best Parameters achieved so far, called the Personal-best, with a corresponding best cost. The Personal-best status is checked after each update. A Particle represents its update Velocity as a vector of weights giving the probability of flipping the corresponding bit at the update iteration. At the beginning of the update, the velocity is calculated without flipping bits; the bits are then flipped with the probability given by the computed velocity component. During the update, once a bit has been flipped, the corresponding probability is set to zero, thus avoiding flipping back and keeping the velocity as a vector of flips that are requested with a given probability to move from a given position to a desired one that may improve performance. During the calculation of a Particle's velocity, probabilities are combined using an operation called pseudo-adding, whereby, by default, probabilities p and q are pseudo-added to give p+q-pq. Alternatives such as max(p,q) may be considered in the future, if only to show which is best. The Particles are split into groups, with each group containing its own heuristic settings and a list of Particles, called Targets, that its members tend to move towards. Each Particle in a group also moves towards its own Personal-best. Various strategies for targeting other Particles' Personal-best Parameters or for adapting heuristics can be explored: Pso is not used by itself, since it has no Targets; instead it provides most of the common interfaces and a function PUpdate() that performs the common velocity update. To create a functioning SPSO, extra code to choose Targets and heuristics is added before PUpdate(); the derived working SPSOs add this code to produce the complete update iteration function, Update(). GPso and CLPso are examples of such derived working SPSOs. Note that the collection of groups is stored as a mapping from strings to pointers to groups, so groups can be accessed by name if necessary, although each Particle knows which group it belongs to without using the name reference. A group may also have no Particles belonging to it. At start-up there is only one group, called "root", which contains all the Particles.
Additional groups can be formed during initialisation, or even during iteration, and Particles can be moved between groups as and when required. setpso can be used for low-level coding; higher-level run management is provided by the psokit toolkit package. You can quickly run an example by going to the setpso/example/runkit1 directory in a terminal and then executing
Package codec provides a High Performance, Feature-Rich Idiomatic Go 1.4+ codec/encoding library for binc, msgpack, cbor, json. Supported Serialization formats are: To install: This package will carefully use 'unsafe' for performance reasons in specific places. You can build without unsafe use by passing the safe or appengine tag, i.e. 'go install -tags=safe ...'. Note that unsafe is only supported for the last 3 Go SDK versions; e.g. the current Go release is go 1.9, so we support unsafe use only from go 1.7+. This is because supporting unsafe requires knowledge of implementation details. For detailed usage information, read the primer at http://ugorji.net/blog/go-codec-primer . The idiomatic Go support is as seen in other encoding packages in the standard library (i.e. json, xml, gob, etc). Rich Feature Set includes: Users can register a function to handle the encoding or decoding of their custom types. There are no restrictions on what the custom type can be. Some examples: As an illustration, MyStructWithUnexportedFields would normally be encoded as an empty map because it has no exported fields, while UUID would be encoded as a string. However, with extension support, you can encode any of these however you like. This package maintains symmetry in the encoding and decoding halves. We determine how to encode or decode by walking this decision tree: This symmetry is important to reduce the chances of issues caused by the encoding and decoding sides being out of sync, e.g. decoded via a very specific encoding.TextUnmarshaler but encoded via kind-specific generalized mode. Consequently, if a type only defines one half of the symmetry (e.g. it implements UnmarshalJSON() but not MarshalJSON()), then that type doesn't satisfy the check and we will continue walking down the decision tree. RPC Client and Server Codecs are implemented, so the codecs can be used with the standard net/rpc package. The Handle is SAFE for concurrent READ, but NOT SAFE for concurrent modification. The Encoder and Decoder are NOT safe for concurrent use. Consequently, the usage model is basically: Sample usage model: To run tests, use the following: To run the full suite of tests, use the following: You can use the tag 'safe' to run tests or build in safe mode, e.g. For benchmarks, please see http://github.com/ugorji/go-codec-bench . Struct fields matching the following are ignored during encoding and decoding: Every other field in a struct will be encoded/decoded. Embedded fields are encoded as if they exist in the top-level struct, with some caveats. See the Encode documentation.
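As a hedged sketch of that usage model, creating a Handle once and then an Encoder and Decoder per use, something along the following lines should work; see the primer linked above for authoritative examples:

	package main

	import (
		"fmt"

		"github.com/ugorji/go/codec"
	)

	func main() {
		type Person struct {
			Name string
			Age  int
		}

		var jh codec.JsonHandle // a Handle is safe for concurrent reads once configured

		// Encode into a byte slice.
		var b []byte
		enc := codec.NewEncoderBytes(&b, &jh)
		if err := enc.Encode(Person{Name: "Ada", Age: 36}); err != nil {
			panic(err)
		}

		// Decode back out.
		var p Person
		dec := codec.NewDecoderBytes(b, &jh)
		if err := dec.Decode(&p); err != nil {
			panic(err)
		}
		fmt.Println(p.Name, p.Age)
	}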
Package couchdb provides components to work with CouchDB 2.x with Go. Resource provides the low-level wrapper functions around the HTTP methods used for communicating with the CouchDB server. Server contains all the functions to work with a CouchDB server, including some basic functions to facilitate the basic user management it provides. Database contains all the functions to work with a CouchDB database, such as manipulating and querying documents. ViewResults represents the results produced by design document views. When calling any of its functions, such as Offset(), TotalRows(), UpdateSeq() or Rows(), it performs a query against the views on the server side and returns the results as a slice of Row. ViewDefinition is a definition of a view stored in a specific design document; you can define your own map-reduce functions and Sync them with the database. Document represents a document object in the database. Any struct that is to be mapped to a CouchDB document must have it embedded. For example: Then you can call Store(db, &user) to store it in CouchDB or Load(db, user.GetID(), &anotherUser) to get the data back from the database. ViewField represents a view definition value bound to Document. tools/replicate is a command-line tool for replicating databases from one CouchDB server to another. This is mainly for backup purposes, but you can also use the -continuous option to set up automatic replication.
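A sketch of the embedding pattern described above; the import path, the User fields and the json tags are assumptions made for illustration, while Document, Store, Load and GetID come from the text:

	package example

	import (
		couchdb "github.com/leesper/couchdb-golang" // import path is an assumption
	)

	// User embeds couchdb.Document so it can be stored as a CouchDB document.
	type User struct {
		couchdb.Document
		Name  string `json:"name"`
		Email string `json:"email"`
	}

	// saveAndReload stores a user and then loads it back by its document ID.
	func saveAndReload(db *couchdb.Database, user *User) (*User, error) {
		if err := couchdb.Store(db, user); err != nil {
			return nil, err
		}
		var again User
		if err := couchdb.Load(db, user.GetID(), &again); err != nil {
			return nil, err
		}
		return &again, nil
	}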
Package esquery provides a non-obtrusive, idiomatic and easy-to-use query and aggregation builder for the official Go client (https://github.com/elastic/go-elasticsearch) for the ElasticSearch database (https://www.elastic.co/products/elasticsearch). esquery alleviates the need to use extremely nested maps (map[string]interface{}) and to serialize queries to JSON manually. It also helps eliminate common mistakes such as misspelling query types, as everything is statically typed. Using `esquery` can make your code much easier to write, read and maintain, and significantly reduce the amount of code you write. esquery provides a method-chaining API for building and executing queries and aggregations. It does not wrap the official Go client, nor does it require you to change your existing code in order to integrate the library. Queries can be built directly with `esquery` and executed by passing an `*elasticsearch.Client` instance (with optional search parameters). Results are returned as-is from the official client (e.g. `*esapi.Response` objects). Getting started is extremely simple: esquery currently supports version 7 of the ElasticSearch Go client. The library cannot currently generate "short queries". For example, whereas ElasticSearch can accept this: { "query": { "term": { "user": "Kimchy" } } } The library will always generate this: This is also true for queries such as "bool", where fields like "must" can receive either one query object or an array of query objects. `esquery` will generate an array even if there is only one query object.
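As a rough sketch of the chaining style, based on the project's README; the import paths, the index name and the query value are assumptions:

	package example

	import (
		"github.com/aquasecurity/esquery" // import path is an assumption
		"github.com/elastic/go-elasticsearch/v7"
	)

	// search builds a term query and executes it against the "users" index.
	func search(es *elasticsearch.Client) error {
		res, err := esquery.Search().
			Query(esquery.Term("user", "Kimchy")).
			Run(es, es.Search.WithIndex("users"))
		if err != nil {
			return err
		}
		defer res.Body.Close()
		return nil
	}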
Package tdigest provides a simple and (memory) efficient way to compute distribution quantiles on the fly from a potentially large number of data points. It is (freely) inspired by the paper by T. Dunning: https://github.com/tdunning/t-digest/blob/master/docs/t-digest-paper/histo.pdf As new data points are added, the key parameter is the choice of Sizer, which determines how aggressively the buckets used to aggregate the data are merged. The merging process happens regularly as data points are added, and before anything is computed. A map-reduce approach is also achievable, since TD structures can be computed in parallel and then merged. When merging, both Sizers are expected to be identical.
Package dom provides GopherJS and Go bindings for the JavaScript DOM APIs. This package is an in progress effort of providing idiomatic Go bindings for the DOM, wrapping the JavaScript DOM APIs. The API is neither complete nor frozen yet, but a great amount of the DOM is already useable. While the package tries to be idiomatic Go, it also tries to stick closely to the JavaScript APIs, so that one does not need to learn a new set of APIs if one is already familiar with it. One decision that hasn't been made yet is what parts exactly should be part of this package. It is, for example, possible that the canvas APIs will live in a separate package. On the other hand, types such as StorageEvent (the event that gets fired when the HTML5 storage area changes) will be part of this package, simply due to how the DOM is structured – even if the actual storage APIs might live in a separate package. This might require special care to avoid circular dependencies. The documentation for some of the identifiers is based on the MDN Web Docs by Mozilla Contributors (https://developer.mozilla.org/en-US/docs/Web/API), licensed under CC-BY-SA 2.5 (https://creativecommons.org/licenses/by-sa/2.5/). The usual entry point of using the dom package is by using the GetWindow() function which will return a Window, from which you can get things such as the current Document. The DOM has a big amount of different element and event types, but they all follow three interfaces. All functions that work on or return generic elements/events will return one of the three interfaces Element, HTMLElement or Event. In these interface values there will be concrete implementations, such as HTMLParagraphElement or FocusEvent. It's also not unusual that values of type Element also implement HTMLElement. In all cases, type assertions can be used. Example: Several functions in the JavaScript DOM return "live" collections of elements, that is collections that will be automatically updated when elements get removed or added to the DOM. Our bindings, however, return static slices of elements that, once created, will not automatically reflect updates to the DOM. This is primarily done so that slices can actually be used, as opposed to a form of iterator, but also because we think that magically changing data isn't Go's nature and that snapshots of state are a lot easier to reason about. This does not, however, mean that all objects are snapshots. Elements, events and generally objects that aren't slices or maps are simple wrappers around JavaScript objects, and as such attributes as well as method calls will always return the most current data. To reflect this behaviour, these bindings use pointers to make the semantics clear. Consider the following example: The above example will print `true`. Some objects in the JS API have two versions of attributes, one that returns a string and one that returns a DOMTokenList to ease manipulation of string-delimited lists. Some other objects only provide DOMTokenList, sometimes DOMSettableTokenList. To simplify these bindings, only the DOMTokenList variant will be made available, by the type TokenList. In cases where the string attribute was the only way to completely replace the value, our TokenList will provide Set([]string) and SetString(string) methods, which will be able to accomplish the same. Additionally, our TokenList will provide methods to convert it to strings and slices. This package has a relatively stable API. However, there will be backwards incompatible changes from time to time. 
This is because the package isn't complete yet, as well as because the DOM is a moving target, and APIs do change sometimes. While an attempt is made to reduce changing function signatures to a minimum, it can't always be guaranteed. Sometimes mistakes in the bindings are found that require changing arguments or return values. Interfaces defined in this package may also change on a semi-regular basis, as new methods are added to them. This happens because the bindings aren't complete and can never really be, as new features are added to the DOM. If you depend on none of the APIs changing unexpectedly, you're advised to vendor this package.
This package reads and writes pickled data. The format is the same as the Python "pickle" module. Protocols 0, 1 and 2 are implemented. These are the versions written by the Python 2.x series. Python 3 defines newer protocol versions, but can write the older protocol versions so that they are readable by this package. To read data, see stalecucumber.Unpickle. To write data, see stalecucumber.NewPickler. Read a pickled string or unicode object Read a pickled integer Read a pickled list of numbers into a structure Read a pickled dictionary into a structure Pickle a struct You can pickle recursive objects like so Python's pickler is intelligent enough not to emit an infinite data structure when a recursive object is pickled. I recommend against pickling recursive objects in the first place, but this library handles unpickling them without a problem. The result of unpickling the above is a map[interface{}]interface{} with a key "a" that contains a reference to itself. Attempting to unpack the result of the above Python code into a structure with UnpackInto would either fail or recurse forever. The Python pickle module can pickle most Python objects. By default, some Python objects such as the set type and bytearray type are automatically supported by this library. To support unpickling custom Python objects, you need to implement a resolver. A resolver meets the PythonResolver interface, which is just this function The module and name are the class name. So if you have a class called "Foo" in the module "bar", the first argument would be "bar" and the second would be "Foo". You can pass in your custom resolver by calling The third argument of the Resolve function is originally a Python tuple, so it is a slice of anything. For most user-defined objects this is just a Python dictionary. However, if a Python object implements the __reduce__ method it could be anything. If your resolver can't identify the type named by module & name, just return stalecucumber.ErrUnresolvablePythonGlobal. Otherwise, convert the args into whatever you want and return that as the value from the function, with nil for the error. To avoid reimplementing the same logic over and over, you can chain resolvers together. You can use your resolver in addition to the default resolver by doing the following If the version of Python you are using supports protocol version 1 or 2, you should always specify that protocol version. By default the "pickle" and "cPickle" modules in Python write using protocol 0. Protocol 0 requires much more space to represent the same values and is much slower to parse. The pickle format is incredibly flexible and as a result has some features that are impractical or unimportant when implementing a reader in another language. Each set of opcodes is listed below by protocol version, with its impact. Protocol 0: This opcode is used to reference concrete definitions of objects between a pickler and an unpickler by an ID number. The pickle protocol doesn't define what a persistent ID means. This opcode is unlikely to ever be supported by this package. Protocol 1: These opcodes are used in recreating pickled Python objects. That is currently not supported by this package. This opcode will be supported in a future revision of this package that allows the unpickling of instances of Python classes. This opcode is equivalent to PERSID in protocol 0 and won't be supported for the same reason. Protocol 2: These opcodes are used in recreating pickled Python objects. That is currently not supported by this package.
This opcode will be supported in a future revision of this package that allows the unpickling of instances of Python classes. These opcodes allow using a registry of popular objects that are pickled by name, typically classes. It is envisioned that through a global negotiation and registration process, third parties can set up a mapping between ints and object names. These opcodes are unlikely to ever be supported by this package.
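As a brief, hedged sketch of the read and write entry points mentioned earlier (NewPickler to write, Unpickle plus a coercion helper such as String to read); the exact return values may differ slightly from this illustration:

	package main

	import (
		"bytes"
		"fmt"

		"github.com/hydrogen18/stalecucumber"
	)

	func main() {
		// Pickle a Go value into an in-memory buffer.
		var buf bytes.Buffer
		if _, err := stalecucumber.NewPickler(&buf).Pickle("hello"); err != nil {
			panic(err)
		}

		// Unpickle it again; String coerces the generic result into a Go string.
		s, err := stalecucumber.String(stalecucumber.Unpickle(&buf))
		if err != nil {
			panic(err)
		}
		fmt.Println(s)
	}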
gowrapmx4j is a base library of types to assist in unmarshalling and querying MX4J data. MX4J is a very useful service which makes JMX data accessible via HTTP. Unfortunately, little is done to improve the data's representation: it is returned as dense raw XML via an API fraught with perilous query variables that are poorly documented. The types and unmarshalling structures defined here sort out some of the XML sadness returned from MX4J and make it easier to operate on the data structures. Why? Java databases are still industry standard and there is a lot of mindshare built around them. Sadly, their tools can be very arcane or non-existent. This library is built specifically to help surface useful information from Cassandra's MX4J endpoint to assist in debugging, monitoring, and management. Basic API primer: the Types* are the basic structs created to aid in interacting with and querying MX4J and in unmarshalling data from its XML endpoints. The Registry is a concurrency-safe map of MX4J data which is updated when queries are made; this reduces the number of calls to MX4J when multiple goroutines want to access the data. The Distill* API aids in cleaning up the data structures created from unmarshalling the XML API. DistillAttribute and DistillAttributeTypes are the main functions which return clean data structures for HTTP endpoints. Example: showcases some ways to use the features of gowrapmx4j
Package maduce is an implementation of the functional concepts filter, map and reduce found in other languages like Python, JavaScript, etc. This package purposely diverges from core principles of how Go code should be written; you should therefore think twice before you consider using this package, since in most cases for loops are the way to go. The reason this package exists is that it allows for better composability and lets datasets be more easily explored and evaluated in Go. It is specifically designed as a tool for experimenting with datasets, not as a library intended for production use where performance is critical. The API of the maduce package is completely dynamic, which has the downside of providing no compile-time guarantees about the function signatures given to filter, map or reduce. Each method on a maduce.Collection has a description of the handlers it supports. Because Go doesn't support generics yet, I have created my own notation where <Type> can be replaced with whatever type you want. The <Type> in the function argument has to be the same as in the collection. The output type can be something else or the same as the input; it depends on what you want to achieve. This package is heavily based on reflection and type assertions, which can result in runtime panics if used wrongly. TODO(@kvartborg): would like to experiment with a streaming implementation based on the io.Reader interface at some point.
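Since the package API is dynamic, the following plain-Go snippet only illustrates the handler shapes the <Type> notation refers to (a filter handler, a map handler and a conventional reduce handler); it does not use the maduce API, and maduce's actual reduce shape may differ:

	package main

	import "fmt"

	func main() {
		nums := []int{1, 2, 3, 4, 5}

		// filter handler shape: func(<Type>) bool
		keep := func(n int) bool { return n%2 == 0 }

		// map handler shape: func(<Type>) <OtherType>
		toLabel := func(n int) string { return fmt.Sprintf("item-%d", n) }

		// reduce handler shape (conventional): func(<Acc>, <Type>) <Acc>
		sum := func(acc, n int) int { return acc + n }

		var labels []string
		total := 0
		for _, n := range nums {
			if keep(n) {
				labels = append(labels, toLabel(n))
				total = sum(total, n)
			}
		}
		fmt.Println(labels, total)
	}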
Package diff calculates the differences between two sequences. It implements the algorithm from "An Algorithm for Differential File Comparison" by Hunt and McIlroy: https://www.cs.dartmouth.edu/~doug/diff.pdf For flexibility, the algorithm itself operates on a sequence of integers. This allows you to compare arbitrary sequences, as long as you can map their elements to a uint64. To generate a diff for text, the inputs need to be split and hashed. Splitting should be done to reduce algorithmic complexity (which is O(m•n•log(m)) in the worst case). It also creates diffs that are better suited for human consumption. Hashing means that collisions are possible, but they should be rare enough in practice not to matter. If they do happen, the resulting diff might be suboptimal.
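A small sketch of the split-and-hash preparation step described above, mapping each line of a text to a uint64 with the standard library's FNV-1a hash; the diff package's own API is not shown:

	package main

	import (
		"fmt"
		"hash/fnv"
		"strings"
	)

	// hashLines splits text into lines and maps each line to a uint64,
	// the integer representation the diff algorithm operates on.
	func hashLines(text string) []uint64 {
		lines := strings.Split(text, "\n")
		out := make([]uint64, len(lines))
		for i, line := range lines {
			h := fnv.New64a()
			h.Write([]byte(line))
			out[i] = h.Sum64()
		}
		return out
	}

	func main() {
		a := hashLines("one\ntwo\nthree")
		b := hashLines("one\ntwo\nfour")
		fmt.Println(a, b) // these two sequences would then be handed to the diff algorithm
	}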
Package dom provides GopherJS bindings for the JavaScript DOM APIs. This package is an in progress effort of providing idiomatic Go bindings for the DOM, wrapping the JavaScript DOM APIs. The API is neither complete nor frozen yet, but a great amount of the DOM is already useable. While the package tries to be idiomatic Go, it also tries to stick closely to the JavaScript APIs, so that one does not need to learn a new set of APIs if one is already familiar with it. One decision that hasn't been made yet is what parts exactly should be part of this package. It is, for example, possible that the canvas APIs will live in a separate package. On the other hand, types such as StorageEvent (the event that gets fired when the HTML5 storage area changes) will be part of this package, simply due to how the DOM is structured – even if the actual storage APIs might live in a separate package. This might require special care to avoid circular dependencies. The usual entry point of using the dom package is by using the GetWindow() function which will return a Window, from which you can get things such as the current Document. The DOM has a big amount of different element and event types, but they all follow three interfaces. All functions that work on or return generic elements/events will return one of the three interfaces Element, HTMLElement or Event. In these interface values there will be concrete implementations, such as HTMLParagraphElement or FocusEvent. It's also not unusual that values of type Element also implement HTMLElement. In all cases, type assertions can be used. Example: Several functions in the JavaScript DOM return "live" collections of elements, that is collections that will be automatically updated when elements get removed or added to the DOM. Our bindings, however, return static slices of elements that, once created, will not automatically reflect updates to the DOM. This is primarily done so that slices can actually be used, as opposed to a form of iterator, but also because we think that magically changing data isn't Go's nature and that snapshots of state are a lot easier to reason about. This does not, however, mean that all objects are snapshots. Elements, events and generally objects that aren't slices or maps are simple wrappers around JavaScript objects, and as such attributes as well as method calls will always return the most current data. To reflect this behaviour, these bindings use pointers to make the semantics clear. Consider the following example: The above example will print `true`. Some objects in the JS API have two versions of attributes, one that returns a string and one that returns a DOMTokenList to ease manipulation of string-delimited lists. Some other objects only provide DOMTokenList, sometimes DOMSettableTokenList. To simplify these bindings, only the DOMTokenList variant will be made available, by the type TokenList. In cases where the string attribute was the only way to completely replace the value, our TokenList will provide Set([]string) and SetString(string) methods, which will be able to accomplish the same. Additionally, our TokenList will provide methods to convert it to strings and slices. This package has a relatively stable API. However, there will be backwards incompatible changes from time to time. This is because the package isn't complete yet, as well as because the DOM is a moving target, and APIs do change sometimes. While an attempt is made to reduce changing function signatures to a minimum, it can't always be guaranteed. 
Sometimes mistakes in the bindings are found that require changing arguments or return values. Interfaces defined in this package may also change on a semi-regular basis, as new methods are added to them. This happens because the bindings aren't complete and can never really be, as new features are added to the DOM. If you depend on none of the APIs changing unexpectedly, you're advised to vendor this package.
Package mutex provides a collection of thread-safe data structures using generics in Go. It offers a Value type for lock-protected values, a Numeric type for thread-safe numeric operations, and a Map type for a concurrent map with type safety. These structures are designed to be easy to use, providing a simple and familiar interface similar to well known atomic.Value and sync.Map, but with added type safety and the flexibility of generics. The package aims to simplify concurrent programming by ensuring safe access to shared data and reducing the boilerplate code associated with mutexes.
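Purely as an illustration of the idea of a generic, lock-protected value (not the package's actual implementation or method set):

	package main

	import (
		"fmt"
		"sync"
	)

	// Value is a minimal generic, lock-protected value in the spirit of the
	// Value type described above; the real package's API may differ.
	type Value[T any] struct {
		mu sync.Mutex
		v  T
	}

	func (v *Value[T]) Store(x T) { v.mu.Lock(); defer v.mu.Unlock(); v.v = x }
	func (v *Value[T]) Load() T   { v.mu.Lock(); defer v.mu.Unlock(); return v.v }

	func main() {
		var n Value[int]
		n.Store(42)
		fmt.Println(n.Load()) // prints 42
	}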