Package gojay implements encoding and decoding of JSON as defined in RFC 7159. The mapping between JSON and Go values is described in the documentation for the Marshal and Unmarshal functions. It aims at performance and usability by relying on simple interfaces to decode and encode structures, slices, arrays and even channels. On top of those interfaces, gojay provides many helpers to decode and encode a number of different types natively, such as big.Int, sql.NullString or time.Time.
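A minimal sketch of the two interfaces at the core of the package (assuming the github.com/francoispqt/gojay import path; the user type is made up):

    package main

    import (
        "fmt"

        "github.com/francoispqt/gojay"
    )

    type user struct {
        id   int
        name string
    }

    // MarshalJSONObject implements gojay.MarshalerJSONObject.
    func (u *user) MarshalJSONObject(enc *gojay.Encoder) {
        enc.IntKey("id", u.id)
        enc.StringKey("name", u.name)
    }

    // IsNil lets the encoder skip nil objects.
    func (u *user) IsNil() bool { return u == nil }

    // UnmarshalJSONObject implements gojay.UnmarshalerJSONObject.
    func (u *user) UnmarshalJSONObject(dec *gojay.Decoder, key string) error {
        switch key {
        case "id":
            return dec.Int(&u.id)
        case "name":
            return dec.String(&u.name)
        }
        return nil
    }

    // NKeys tells the decoder how many keys to expect.
    func (u *user) NKeys() int { return 2 }

    func main() {
        u := &user{}
        if err := gojay.UnmarshalJSONObject([]byte(`{"id":1,"name":"gopher"}`), u); err != nil {
            panic(err)
        }
        b, err := gojay.MarshalJSONObject(u)
        if err != nil {
            panic(err)
        }
        fmt.Println(string(b)) // {"id":1,"name":"gopher"}
    }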
Package bitset implements bitsets, a mapping between non-negative integers and boolean values. It should be more efficient than map[uint]bool. It provides methods for setting, clearing, flipping, and testing individual integers, as well as set intersection, union, difference, complement, and symmetric-difference operations, tests to check whether any, all, or no bits are set, and queries for a bitset's current length and number of set bits. BitSets are expanded to the size of the largest set bit; the memory allocation is approximately Max bits, where Max is the largest set bit. BitSets are never shrunk. On creation, a hint can be given for the number of bits that will be used. Many of the methods, including Set, Clear, and Flip, return a BitSet pointer, which allows for chaining. An example use is sketched below. As an alternative to BitSets, one should check out the 'big' package, which provides a (less set-theoretical) view of bitsets.
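A sketch of the chaining and set operations (assuming the bits-and-blooms/bitset import path; the package was previously published as willf/bitset):

    package main

    import (
        "fmt"

        "github.com/bits-and-blooms/bitset"
    )

    func main() {
        // New takes a size hint; Set returns the BitSet, so calls chain.
        b := bitset.New(100).Set(10).Set(30)
        fmt.Println(b.Test(10)) // true
        fmt.Println(b.Count())  // 2 set bits
        b.Clear(30)

        // The set grows past the hint as larger bits are set.
        other := bitset.New(100).Set(10).Set(64)
        fmt.Println(b.Union(other).Count()) // 2: bits 10 and 64
    }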
Package linq provides methods for querying and manipulating slices, arrays, maps, strings, channels and collections. Authors: Alexander Kalankhodzhaev (kalan), Ahmet Alp Balkan, Cleiton Marques Souza.
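A small query sketch (assuming the ahmetb/go-linq/v3 import path):

    package main

    import (
        "fmt"

        "github.com/ahmetb/go-linq/v3"
    )

    func main() {
        var squares []int
        // Filter to even numbers, square them, and collect the results.
        linq.From([]int{1, 2, 3, 4, 5}).
            Where(func(v interface{}) bool { return v.(int)%2 == 0 }).
            Select(func(v interface{}) interface{} { return v.(int) * v.(int) }).
            ToSlice(&squares)
        fmt.Println(squares) // [4 16]
    }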
Package validator implements value validations for structs and individual fields based on tags. It can also handle Cross Field and Cross Struct validation for nested structs. A simple usage, including error handling, is sketched below.

Both StructErrors and FieldError implement the Error interface, but their intended use is for development and debugging, not as production error messages. Why not a better error message? Because this library intends for you to handle your own error messages. Why should I handle my own errors? Many reasons; for us, building an internationalized application, I needed to know the field and which validation failed so that I could provide an error in the user's specific language. The hierarchical error structure is hard to work with sometimes. Agreed; the Flatten function comes to the rescue. Flatten will return a map of FieldError's, but the field name will be namespaced.

Custom functions can be added. Cross Field Validation can be implemented, for example Start & End date range validation. Multiple validators on a field will process in the order defined. Bad validator definitions are not handled by the library.

NOTE: Baked-in cross-field validation only compares fields on the same struct; if cross-field + cross-struct validation is needed, your own custom validator should be implemented.

NOTE2: Comma is the default separator of validation tags. If you wish to have a comma included within the parameter, i.e. excludesall=,, you will need to use the UTF-8 hex representation 0x2C, which is replaced in the code as a comma, so the above becomes excludesall=0x2C.

See the package documentation for the list of the current built-in validators and per-validator notes. This package panics when bad input is provided; this is by design, as bad code like that should not make it to production.
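A usage sketch, assuming a v9-style API with validator.New and ValidationErrors (the StructErrors/FieldError/Flatten API described above belongs to earlier releases, so details vary by version):

    package main

    import (
        "fmt"
        "time"

        validator "gopkg.in/go-playground/validator.v9"
    )

    type Event struct {
        Name  string    `validate:"required"`
        Start time.Time `validate:"required"`
        // gtfield is a baked-in cross-field rule on the same struct:
        // End must be greater than Start.
        End time.Time `validate:"required,gtfield=Start"`
    }

    func main() {
        v := validator.New()
        err := v.Struct(Event{Name: "launch", Start: time.Now()})
        if err != nil {
            // Handle your own messages: the library reports which
            // field and which validation tag failed.
            for _, fe := range err.(validator.ValidationErrors) {
                fmt.Println(fe.Field(), "failed on", fe.Tag())
            }
        }
    }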
Package bimap provides a threadsafe bidirectional map
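This package's exact method set isn't shown here, so the following is a conceptual sketch only (none of these names come from the package) of what a thread-safe bidirectional map maintains: two mirrored indexes guarded by a lock, so lookups work by key or by value.

    package main

    import (
        "fmt"
        "sync"
    )

    // Conceptual sketch only; not this package's API.
    type biMap struct {
        mu  sync.RWMutex
        fwd map[string]int // forward index: key -> value
        inv map[int]string // inverse index: value -> key
    }

    func (m *biMap) insert(k string, v int) {
        m.mu.Lock()
        defer m.mu.Unlock()
        m.fwd[k] = v
        m.inv[v] = k
    }

    func (m *biMap) byValue(v int) (string, bool) {
        m.mu.RLock()
        defer m.mu.RUnlock()
        k, ok := m.inv[v]
        return k, ok
    }

    func main() {
        m := &biMap{fwd: map[string]int{}, inv: map[int]string{}}
        m.insert("a", 1)
        fmt.Println(m.byValue(1)) // a true
    }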
Package model provides utilities to transform from the OpenTelemetry OTLP data model to the Datadog Agent data model. This module is used in the Datadog Agent and the OpenTelemetry Collector Datadog exporter. End-user behavior is stable, but there are no stability guarantees on its public Go API. Nonetheless, when editing, try to avoid breaking API changes if possible, and double-check API usage on all module dependents. The 'attributes' packages provide utilities for semantic conventions mapping, while the translator package translates telemetry signals (currently only metrics are translated).
Package docopt parses command-line arguments based on a help message. Given a conventional command-line help message, docopt processes the arguments. See https://github.com/docopt/docopt#help-message-format for a description of the help message format. This package exposes three different APIs, depending on the level of control required. The first, simplest way to parse your docopt usage is a single call that uses os.Args[1:] as the argv slice and the default parser options. The second lets you provide your own version string and args; if the last parameter (version) is a non-empty string, it will be printed when --version is given in the argv slice. Finally, you can instantiate your own docopt.Parser, which gives you control over how help messages are printed and whether to exit after displaying usage messages, etc. In particular, setting your own custom HelpHandler function makes unit testing your own docs with example command line invocations much more enjoyable. All three of these return a map of option names to the values parsed from argv, and an error or nil. You can get the values using the helpers, or just treat it as a regular map. Additionally, you can `Bind` these to a struct, assigning option values to the exported fields of that struct, all at once. A usage sketch appears below.
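A sketch of the map helpers and struct binding, assuming docopt.go's ParseArgs/Opts API (the usage text is a trimmed version of the classic Naval Fate example):

    package main

    import (
        "fmt"

        "github.com/docopt/docopt-go"
    )

    const usage = `Naval Fate.

    Usage:
      naval_fate ship <name> move <x> <y> [--speed=<kn>]

    Options:
      --speed=<kn>  Speed in knots [default: 10].`

    func main() {
        // A hard-coded argv keeps the sketch self-contained; ParseDoc
        // would use os.Args[1:] instead.
        argv := []string{"ship", "Guardian", "move", "10", "50"}
        opts, err := docopt.ParseArgs(usage, argv, "Naval Fate 2.0")
        if err != nil {
            panic(err)
        }

        // Helper accessors on the returned map.
        name, _ := opts.String("<name>")
        fmt.Println(name) // Guardian

        // Or bind the options to a struct, all at once.
        var conf struct {
            Name  string `docopt:"<name>"`
            Speed int    `docopt:"--speed"`
        }
        opts.Bind(&conf)
        fmt.Println(conf.Speed) // 10 (from the default)
    }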
Package gabs implements a wrapper around creating and parsing unknown or dynamic map structures resulting from JSON parsing.
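A sketch of parsing and building dynamic JSON (assuming the Jeffail/gabs v2 import path; paths and values are made up):

    package main

    import (
        "fmt"

        "github.com/Jeffail/gabs/v2"
    )

    func main() {
        parsed, err := gabs.ParseJSON([]byte(`{"outer":{"inner":{"value":10}}}`))
        if err != nil {
            panic(err)
        }
        // Dot-separated path lookup into the dynamic structure.
        fmt.Println(parsed.Path("outer.inner.value").Data()) // 10

        // Building a document dynamically.
        out := gabs.New()
        out.SetP("world", "greeting.hello")
        fmt.Println(out.String()) // {"greeting":{"hello":"world"}}
    }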
bindata converts any file into manageable Go source code. Useful for embedding binary data into a Go program. The file data is optionally gzip compressed before being converted to a raw byte slice. The following paragraphs cover some of the customization options which can be specified in the Config struct, which must be passed into the Translate() call.

When used with the `Debug` option, the generated code does not actually include the asset data. Instead, it generates function stubs which load the data from the original file on disk. The asset API remains identical between debug and release builds, so your code will not have to change. This is useful during development when you expect the assets to change often. The host application using these assets uses the same API in both cases and does not have to care where the actual data comes from. An example is a Go webserver with some embedded, static web content like HTML, JS and CSS files. While developing it, you do not want to rebuild the whole server and restart it every time you make a change to a bit of javascript. You just want to build and launch the server once, then press refresh in the browser to see those changes. Embedding the assets with the `debug` flag allows you to do just that. When you are finished developing and ready for deployment, re-invoke `go-bindata` without the `-debug` flag. It will then embed the latest version of the assets.

The `NoMemCopy` option alters the way the output file is generated. It employs a hack that allows the file data to be read directly from the compiled program's `.rodata` section. This ensures that when we call our generated function, we omit unnecessary memcopies. The downside is that it requires dependencies on the `reflect` and `unsafe` packages. These may be restricted on platforms like AppEngine and thus prevent you from using this mode. Another disadvantage is that the byte slice we create is strictly read-only. For most use-cases this is not a problem, but if you ever try to alter the returned byte slice, a runtime panic is thrown. Use this mode only on target platforms where memory constraints are an issue. The default behaviour is to use the old code generation method, which prevents the two previously mentioned issues but employs at least one extra memcopy, and thus increases memory requirements: the default mode gives a safe implementation without dependencies on `reflect` and `unsafe`, while the `.rodata` hack provides the same functionality through a byte slice that cannot be written to without generating a runtime error.

The NoCompress option indicates that the supplied assets are *not* GZIP compressed before being turned into Go code. The data should still be accessed through a function call, so nothing changes in the API. This feature is useful if you do not care for compression, or the supplied resource is already compressed. Doing it again would not add any value and may even increase the size of the data. The default behaviour of the program is to use compression.

The keys used in the `_bindata` map are the same as the input file name passed to `go-bindata`. This includes the path. In most cases, this is not desirable, as it puts potentially sensitive information in your code base. For this purpose, the tool supplies another command line flag, `-prefix`. This accepts a portion of a path name, which is stripped off the map keys and function names. With the optional Tags field, you can specify any go build tags that must be fulfilled for the output file to be included in a build. This is useful when including binary data in multiple formats, where the desired format is specified at build time with the appropriate tags. The tags are appended to a `// +build` line in the beginning of the output file and must follow the build tags syntax specified by the go tool.
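A sketch of driving the library programmatically, assuming the jteeuwen/go-bindata Config/Translate API (the directory, package, and tag names are made up):

    package main

    import "github.com/jteeuwen/go-bindata"

    func main() {
        cfg := bindata.NewConfig()
        cfg.Package = "static"
        cfg.Output = "assets_gen.go"
        cfg.Prefix = "assets"  // stripped from map keys and function names
        cfg.NoCompress = false // keep the default gzip compression
        cfg.Tags = "release"   // emitted on a // +build line
        cfg.Input = []bindata.InputConfig{{Path: "assets", Recursive: true}}

        if err := bindata.Translate(cfg); err != nil {
            panic(err)
        }
    }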
Package securecookie encodes and decodes authenticated and optionally encrypted cookie values. Secure cookies can't be forged, because their values are validated using HMAC. When encrypted, the content is also inaccessible to malicious eyes. To use it, first create a new SecureCookie instance. The hashKey is required and is used to authenticate the cookie value using HMAC. It is recommended to use a key with 32 or 64 bytes. The blockKey is optional, used to encrypt the cookie value -- set it to nil to not use encryption. If set, the length must correspond to the block size of the encryption algorithm. For AES, used by default, valid lengths are 16, 24, or 32 bytes to select AES-128, AES-192, or AES-256. Strong keys can be created using the convenience function GenerateRandomKey(). Once a SecureCookie instance is created, use it to encode a cookie value, and later use the same instance to decode and validate it, as sketched below. A map[string]string is used there, but secure cookies can hold any value that can be encoded using encoding/gob. To store custom types, they must be registered first using gob.Register(). For basic types this is not needed; it works out of the box.
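A sketch of the flow described above (key sizes follow the recommendations; the cookie name "session" is arbitrary):

    package main

    import (
        "fmt"

        "github.com/gorilla/securecookie"
    )

    func main() {
        hashKey := securecookie.GenerateRandomKey(64)  // authenticates via HMAC
        blockKey := securecookie.GenerateRandomKey(32) // selects AES-256
        s := securecookie.New(hashKey, blockKey)

        value := map[string]string{"foo": "bar"}
        encoded, err := s.Encode("session", value)
        if err != nil {
            panic(err)
        }

        // Later, the same instance decodes and validates the value.
        decoded := map[string]string{}
        if err := s.Decode("session", encoded, &decoded); err != nil {
            panic(err)
        }
        fmt.Println(decoded["foo"]) // bar
    }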
Package expression provides types and functions to create Amazon DynamoDB Expression strings, ExpressionAttributeNames maps, and ExpressionAttributeValues maps. The package represents the various DynamoDB Expressions as structs named accordingly. For example, ConditionBuilder represents a DynamoDB Condition Expression, an UpdateBuilder represents a DynamoDB Update Expression, and so on. In order to retrieve the formatted DynamoDB Expression strings, call the getter methods on the Expression struct. To create the Expression struct, call the Build() method on the Builder struct. Because some input structs, such as QueryInput, can have multiple DynamoDB Expressions, multiple structs representing various DynamoDB Expressions can be added to the Builder struct. The ExpressionAttributeNames and ExpressionAttributeValues members of the input struct must always be assigned when using the Expression struct, because all item attribute names and values are aliased. If those members are not assigned from the corresponding Names() and Values() methods, the DynamoDB operation will run into a logic error. A generic usage of the whole package is sketched below.
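A generic usage sketch (the table and attribute names are made up):

    package main

    import (
        "fmt"

        "github.com/aws/aws-sdk-go/aws"
        "github.com/aws/aws-sdk-go/service/dynamodb"
        "github.com/aws/aws-sdk-go/service/dynamodb/expression"
    )

    func main() {
        // Build a filter condition, then the Expression struct.
        filt := expression.Name("Artist").Equal(expression.Value("No One You Know"))
        expr, err := expression.NewBuilder().WithFilter(filt).Build()
        if err != nil {
            panic(err)
        }

        // Names() and Values() must always be wired up, since every
        // attribute name and value in the built strings is aliased.
        input := &dynamodb.ScanInput{
            TableName:                 aws.String("Music"),
            FilterExpression:          expr.Filter(),
            ExpressionAttributeNames:  expr.Names(),
            ExpressionAttributeValues: expr.Values(),
        }
        fmt.Println(input)
    }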
Package toml provides facilities for decoding and encoding TOML configuration files via reflection. There is also support for delaying decoding with the Primitive type, and querying the set of keys in a TOML document with the MetaData type. The specification implemented: https://github.com/toml-lang/toml The sub-command github.com/BurntSushi/toml/cmd/tomlv can be used to verify whether a file is a valid TOML document. It can also be used to print the type of each key in a TOML document. There are two important types of tests used for this package. The first is contained inside '*_test.go' files and uses the standard Go unit testing framework. These tests are primarily devoted to holistically testing the decoder and encoder. The second type of testing is used to verify the implementation's adherence to the TOML specification. These tests have been factored into their own project: https://github.com/BurntSushi/toml-test The reason the tests are in a separate project is so that they can be used by any implementation of TOML. Namely, it is language agnostic. Example StrictDecoding shows how to detect whether there are keys in the TOML document that weren't decoded into the value given. This is useful for returning an error to the user if they've included extraneous fields in their configuration. Example UnmarshalTOML shows how to implement a struct type that knows how to unmarshal itself. The struct must take full responsibility for mapping the values passed into the struct. The method may be used with interfaces in a struct in cases where the actual type is not known until the data is examined. Example Unmarshaler shows how to decode TOML strings into your own custom data type.
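A small decoding sketch (struct and key names are made up; MetaData.Undecoded is what the strict-decoding example builds on):

    package main

    import (
        "fmt"

        "github.com/BurntSushi/toml"
    )

    type config struct {
        Title string
        Owner struct {
            Name string
        }
    }

    func main() {
        doc := `
    title = "example"

    [owner]
    name = "gopher"
    `
        var conf config
        md, err := toml.Decode(doc, &conf)
        if err != nil {
            panic(err)
        }
        fmt.Println(conf.Title, conf.Owner.Name) // example gopher

        // Undecoded reports keys in the document that were not
        // mapped onto conf; useful for rejecting extraneous fields.
        fmt.Println(md.Undecoded())
    }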
Package inject provides utilities for mapping and injecting dependencies in various ways.
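A minimal sketch, assuming the codegangsta/inject-style API of New, Map and Invoke:

    package main

    import (
        "fmt"

        "github.com/codegangsta/inject"
    )

    func main() {
        inj := inject.New()
        inj.Map("hello, injected world") // mapped by concrete type (string)

        // Invoke resolves the function's arguments from the mapped values.
        if _, err := inj.Invoke(func(msg string) {
            fmt.Println(msg)
        }); err != nil {
            panic(err)
        }
    }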