Package codec provides a High Performance, Feature-Rich Idiomatic Go 1.4+ codec/encoding library for binc, msgpack, cbor, json. Supported Serialization formats are: This package will carefully use 'package unsafe' for performance reasons in specific places. You can build without unsafe use by passing the safe or appengine tag, i.e. 'go install -tags=codec.safe ...'. This library works with both the standard `gc` and the `gccgo` compilers. For detailed usage information, read the primer at http://ugorji.net/blog/go-codec-primer . The idiomatic Go support is as seen in other encoding packages in the standard library (i.e. json, xml, gob, etc). Rich Feature Set includes: Users can register a function to handle the encoding or decoding of their custom types. There are no restrictions on what the custom type can be. Some examples: As an illustration, MyStructWithUnexportedFields would normally be encoded as an empty map because it has no exported fields, while UUID would be encoded as a string. However, with extension support, you can encode any of these however you like. There is also seamless support provided for registering an extension (with a tag) but letting the encoding mechanism default to the standard way. This package maintains symmetry in the encoding and decoding halves. We determine how to encode or decode by walking this decision tree. This symmetry is important to reduce the chances of issues happening because the encoding and decoding sides are out of sync, e.g. decoded via a very specific encoding.TextUnmarshaler but encoded via kind-specific generalized mode. Consequently, if a type only defines one half of the symmetry (e.g. it implements UnmarshalJSON() but not MarshalJSON()), then that type doesn't satisfy the check and we will continue walking down the decision tree. RPC Client and Server Codecs are implemented, so the codecs can be used with the standard net/rpc package. The Handle is SAFE for concurrent READ, but NOT SAFE for concurrent modification. The Encoder and Decoder are NOT safe for concurrent use. Consequently, the usage model is basically: Sample usage model: To run tests, use the following: To run the full suite of tests, use the following: You can use the tag 'codec.safe' to run tests or build in safe mode, e.g. Running Benchmarks: Please see http://github.com/ugorji/go-codec-bench . Struct fields matching the following are ignored during encoding and decoding. Every other field in a struct will be encoded/decoded. Embedded fields are encoded as if they exist in the top-level struct, with some caveats. See Encode documentation.
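As a rough sketch of the usage model described above (a shared Handle, with a fresh Encoder/Decoder per use), the following assumes the github.com/ugorji/go/codec import path; the TestStruct type is a placeholder for illustration only:

```go
package main

import (
	"fmt"

	"github.com/ugorji/go/codec"
)

// TestStruct is a hypothetical type used only for illustration.
type TestStruct struct {
	Name  string
	Count int
}

func main() {
	var (
		v   = TestStruct{Name: "example", Count: 3}
		out []byte
		jh  codec.JsonHandle // Handles are safe for concurrent reads, so one can be shared
	)

	// Encoders and Decoders are not safe for concurrent use; create them per use/goroutine.
	enc := codec.NewEncoderBytes(&out, &jh)
	if err := enc.Encode(v); err != nil {
		panic(err)
	}

	var decoded TestStruct
	dec := codec.NewDecoderBytes(out, &jh)
	if err := dec.Decode(&decoded); err != nil {
		panic(err)
	}
	fmt.Println(decoded.Name, decoded.Count)
}
```

MsgpackHandle, CborHandle, or BincHandle can be swapped in for JsonHandle in the same way.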
Package stats is a well tested and comprehensive statistics library package with no dependencies. Example Usage: MIT License Copyright (c) 2014-2020 Montana Flynn (https://montanaflynn.com)
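A minimal usage sketch follows; the montanaflynn/stats import path and the data values are assumptions for illustration:

```go
package main

import (
	"fmt"

	"github.com/montanaflynn/stats"
)

func main() {
	data := []float64{1, 2, 3, 4, 4, 5}

	mean, _ := stats.Mean(data)            // arithmetic mean
	median, _ := stats.Median(data)        // middle value
	sd, _ := stats.StandardDeviation(data) // population standard deviation

	fmt.Println(mean, median, sd)
}
```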
Package testify is a set of packages that provide many tools for testifying that your code will behave as you intend. testify contains the following packages: The assert package provides a comprehensive set of assertion functions that tie in to the Go testing system. The http package contains tools to make it easier to test http activity using the Go testing system. The mock package provides a system by which it is possible to mock your objects and verify calls are happening as expected. The suite package provides a basic structure for using structs as testing suites, and methods on those structs as tests. It includes setup/teardown functionality in the way of interfaces.
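A brief sketch of the assert and require packages inside an ordinary Go test (the compared values are arbitrary):

```go
package example

import (
	"testing"

	"github.com/stretchr/testify/assert"
	"github.com/stretchr/testify/require"
)

func TestSomething(t *testing.T) {
	// assert records a failure and lets the test continue.
	assert.Equal(t, 123, 123, "they should be equal")
	assert.NotNil(t, "this value is not nil")

	// require stops the test immediately on failure.
	require.NoError(t, nil)
}
```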
test.go is a "Go script" for running Vitess tests. It runs each test in its own Docker container for hermeticity and (potentially) parallelism. If a test fails, this script will save the output in _test/ and continue with other tests. Before using it, you should have Docker 1.5+ installed, and have your user in the group that lets you run the docker command without sudo. The first time you run against a given flavor, it may take some time for the corresponding bootstrap image (vitess/bootstrap:<flavor>) to be downloaded. It is meant to be run from the Vitess root, like so: For a list of options, run:
Package mapset implements a simple and generic set collection. Items stored within it are unordered and unique. It supports typical set operations: membership testing, intersection, union, difference, symmetric difference and cloning. Package mapset provides two implementations of the Set interface. The default implementation is safe for concurrent access, but a non-thread-safe implementation is also provided for programs that can benefit from the slight speed improvement and that can enforce mutual exclusion through other means.
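A short sketch of typical set operations, assuming the generic API of mapset v2 (older releases use a non-generic API with interface{} values, but the calls are analogous):

```go
package main

import (
	"fmt"

	mapset "github.com/deckarep/golang-set/v2"
)

func main() {
	required := mapset.NewSet[string]("biology", "math") // thread-safe by default
	taken := mapset.NewThreadUnsafeSet[string]("math")   // faster; caller handles locking

	fmt.Println(required.Contains("math"))           // true
	fmt.Println(required.Difference(taken))          // Set{biology}
	fmt.Println(required.Union(taken).Cardinality()) // 2
}
```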
Package seelog implements logging functionality with flexible dispatching, filtering, and formatting. To create a logger, use one of the following constructors: Example: The "defer" line is important because if you are using asynchronous logger behavior, without this line you may end up losing some messages when you close your application because they are processed in another non-blocking goroutine. To avoid that, you explicitly defer flushing all messages before closing. A logger created using one of the LoggerFrom* funcs can be used directly by calling one of the main log funcs. Example: Having loggers as variables is convenient if you are writing your own package with internal logging or if you have several loggers with different options. But for most standalone apps it is more convenient to use package level funcs and vars. There is a package level var 'Current' made for it. You can replace it with another logger using 'ReplaceLogger' and then use package level funcs: The last lines do the same as In this example the 'Current' logger was replaced using a 'ReplaceLogger' call and became equal to the 'logger' variable created from config. This way you are able to use package level funcs instead of passing the logger variable. The main point of seelog is to configure the logger via config files rather than in code. The configuration is read by the LoggerFrom* funcs. These funcs read xml configuration from different sources and try to create a logger using it. All the configuration features are covered in detail in the official wiki: https://github.com/cihub/seelog/wiki. There are many sections covering different aspects of seelog, but the most important for understanding configs are: After you understand these concepts, check the 'Reference' section on the main wiki page to get the up-to-date list of dispatchers, receivers, formats, and logger types. Here is an example config with all these features: This config represents a logger with adaptive timeout between log messages (check logger types reference) which logs to console, all.log, and errors.log depending on the log level. Its output formats also depend on log level. This logger will only use log level 'debug' and higher (minlevel is set) for all files with names that don't start with 'test'. For files starting with 'test' this logger prohibits all levels below 'error'. Although configuration using code is not recommended, it is sometimes needed and it is possible to do with seelog. Basically, what you need to do to get started is to create constraints, exceptions and a dispatcher tree (same as with config). Most of the New* functions in this package are used to provide such capabilities. Here is an example of configuration in code that demonstrates an async loop logger that logs to a simple split dispatcher with a console receiver using a specified format and is filtered using a top-level min-max constraint and one exception for the 'main.go' file. So, this is basically a demonstration of configuration of most of the features: To learn seelog features faster you should check the examples package: https://github.com/cihub/seelog-examples It contains many example configs and use cases.
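A minimal sketch of the constructor-plus-deferred-flush pattern described above; the config file name is a placeholder assumption:

```go
package main

import (
	log "github.com/cihub/seelog"
)

func main() {
	// Build a logger from an xml config file (path is a placeholder).
	logger, err := log.LoggerFromConfigAsFile("seelog.xml")
	if err != nil {
		panic(err)
	}

	// Replace the package-level 'Current' logger so package-level funcs use it.
	log.ReplaceLogger(logger)

	// Flush queued messages before exit; important for asynchronous loggers.
	defer log.Flush()

	log.Info("application started")
	log.Debugf("value is %d", 42)
}
```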
Package gotests contains the core logic for generating table-driven tests.
Package bloom provides data structures and methods for creating Bloom filters. A Bloom filter is a representation of a set of _n_ items, where the main requirement is to make membership queries; _i.e._, whether an item is a member of a set. A Bloom filter has two parameters: _m_, a maximum size (typically a reasonably large multiple of the cardinality of the set to represent) and _k_, the number of hashing functions on elements of the set. (The actual hashing functions are important, too, but this is not a parameter for this implementation). A Bloom filter is backed by a BitSet; a key is represented in the filter by setting the bits at each value of the hashing functions (modulo _m_). Set membership is done by _testing_ whether the bits at each value of the hashing functions (again, modulo _m_) are set. If so, the item is in the set. If the item is actually in the set, a Bloom filter will never fail (the true positive rate is 1.0); but it is susceptible to false positives. The art is to choose _k_ and _m_ correctly. In this implementation, the hashing function used is murmurhash, a non-cryptographic hashing function. This implementation accepts keys for setting and testing as []byte. Thus, to add a string item, "Love": Similarly, to test if "Love" is in bloom: For numeric data, I recommend that you look into the encoding/binary library. But, for example, to add a uint32 to the filter: Finally, there is a method to estimate the false positive rate of a particular Bloom filter for a set of size _n_: Given the particular hashing scheme, it's best to be empirical about this. Note that estimating the FP rate will clear the Bloom filter.
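A short sketch covering the operations mentioned above (adding a string, testing membership, and adding a uint32 via encoding/binary). The bits-and-blooms/bloom/v3 import path and the sizing numbers are assumptions; older copies of this package live under other paths:

```go
package main

import (
	"encoding/binary"
	"fmt"

	"github.com/bits-and-blooms/bloom/v3"
)

func main() {
	// m and k are derived from an estimated item count and a target error rate.
	filter := bloom.NewWithEstimates(1000000, 0.01)

	// Keys are []byte, so strings are added via a conversion.
	filter.Add([]byte("Love"))
	fmt.Println(filter.Test([]byte("Love"))) // true

	// Numeric data is encoded with encoding/binary before adding.
	n := make([]byte, 4)
	binary.BigEndian.PutUint32(n, 100)
	filter.Add(n)
}
```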
Package bloom provides data structures and methods for creating Bloom filters. A Bloom filter is a representation of a set of _n_ items, where the main requirement is to make membership queries; _i.e._, whether an item is a member of a set. A Bloom filter has two parameters: _m_, a maximum size (typically a reasonably large multiple of the cardinality of the set to represent) and _k_, the number of hashing functions on elements of the set. (The actual hashing functions are important, too, but this is not a parameter for this implementation). A Bloom filter is backed by a BitSet; a key is represented in the filter by setting the bits at each value of the hashing functions (modulo _m_). Set membership is done by _testing_ whether the bits at each value of the hashing functions (again, modulo _m_) are set. If so, the item is in the set. If the item is actually in the set, a Bloom filter will never fail (the true positive rate is 1.0); but it is susceptible to false positives. The art is to choose _k_ and _m_ correctly. In this implementation, the hashing function used is murmurhash, a non-cryptographic hashing function. This implementation accepts keys for setting and testing as []byte. Thus, to add a string item, "Love": Similarly, to test if "Love" is in bloom: For numeric data, I recommend that you look into the encoding/binary library. But, for example, to add a uint32 to the filter: Finally, there is a method to estimate the false positive rate of a Bloom filter with _m_ bits and _k_ hashing functions for a set of size _n_: You can use it to validate the computed m, k parameters: or You would expect ActualfpRate to be close to the desired fp in these cases. The EstimateFalsePositiveRate function creates a temporary Bloom filter. It is also relatively expensive and only meant for validation.
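A sketch of the parameter-validation workflow described above, using EstimateParameters and EstimateFalsePositiveRate; the import path and the n/fp values are assumptions:

```go
package main

import (
	"fmt"

	"github.com/bits-and-blooms/bloom/v3"
)

func main() {
	n := uint(100000) // expected number of items
	fp := 0.01        // desired false positive rate

	// Compute m (bits) and k (hash functions) for the desired rate.
	m, k := bloom.EstimateParameters(n, fp)

	// Empirically validate the choice; note this builds a temporary filter
	// internally and is relatively expensive, so use it for validation only.
	actual := bloom.EstimateFalsePositiveRate(m, k, n)
	fmt.Printf("m=%d k=%d actual fp≈%f (target %f)\n", m, k, actual, fp)
}
```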
Package miniredis is a pure Go Redis test server, for use in Go unit tests. There are no dependencies on system binaries, and every server you start will be empty. import "github.com/alicebob/miniredis/v2" Start a server with `s := miniredis.RunT(t)`; it'll be shut down via t.Cleanup(). Or do everything manually: `s, err := miniredis.Run(); defer s.Close()` Point your Redis client to `s.Addr()` or `s.Host(), s.Port()`. Set keys directly via s.Set(...) and similar commands, or use a Redis client. For direct use you can select a Redis database with either `s.Select(12); s.Get("foo")` or `s.DB(12).Get("foo")`.
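A compact test sketch of the RunT flow described above (the key and value are arbitrary):

```go
package example

import (
	"testing"

	"github.com/alicebob/miniredis/v2"
)

func TestWithMiniredis(t *testing.T) {
	s := miniredis.RunT(t) // shut down automatically via t.Cleanup

	// Seed state directly, no Redis client needed.
	if err := s.Set("foo", "bar"); err != nil {
		t.Fatal(err)
	}

	// A real Redis client could be pointed at s.Addr() here instead.
	got, err := s.Get("foo")
	if err != nil || got != "bar" {
		t.Fatalf("got %q, err %v", got, err)
	}
}
```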
Package saml contains a partial implementation of the SAML standard in golang. SAML is a standard for identity federation, i.e. either allowing a third party to authenticate your users or allowing third parties to rely on us to authenticate their users. In SAML parlance an Identity Provider (IDP) is a service that knows how to authenticate users. A Service Provider (SP) is a service that delegates authentication to an IDP. If you are building a service where users log in with someone else's credentials, then you are a Service Provider. This package supports implementing both service providers and identity providers. The core package contains the implementation of SAML. The package samlsp provides helper middleware suitable for use in Service Provider applications. The package samlidp provides a rudimentary IDP service that is useful for testing or as a starting point for other integrations. Version 0.4.0 introduces a few breaking changes to the _samlsp_ package in order to make the package more extensible, and to clean up the interfaces a bit. The default behavior remains the same, but you can now provide interface implementations of _RequestTracker_ (which tracks pending requests), _Session_ (which handles maintaining a session) and _OnError_ which handles reporting errors. Public fields of _samlsp.Middleware_ have changed, so some usages may require adjustment. See [issue 231](https://github.com/crewjam/saml/issues/231) for details. The option to provide an IDP metadata URL has been deprecated. Instead, we recommend that you use the `FetchMetadata()` function, or fetch the metadata yourself and use the new `ParseMetadata()` function, and pass the metadata in _samlsp.Options.IDPMetadata_. Similarly, the _HTTPClient_ field is now deprecated because it was only used for fetching metadata, which is no longer directly implemented. The fields that manage how cookies are set are deprecated as well. To customize how cookies are managed, provide a custom implementation of _RequestTracker_ and/or _Session_, perhaps by extending the default implementations. The deprecated fields have not been removed from the Options structure, but will be in the future. In particular we have deprecated the following fields in _samlsp.Options_: - `Logger` - This was used to emit errors while validating, which is an anti-pattern. - `IDPMetadataURL` - Instead use `FetchMetadata()` - `HTTPClient` - Instead pass httpClient to FetchMetadata - `CookieMaxAge` - Instead assign a custom CookieRequestTracker or CookieSessionProvider - `CookieName` - Instead assign a custom CookieRequestTracker or CookieSessionProvider - `CookieDomain` - Instead assign a custom CookieRequestTracker or CookieSessionProvider Let us assume we have a simple web application to protect. We'll modify this application so it uses SAML to authenticate users. ```golang package main import ( ) ``` Each service provider must have a self-signed X.509 key pair established. You can generate your own with something like this: We will use `samlsp.Middleware` to wrap the endpoint we want to protect. Middleware provides both an `http.Handler` to serve the SAML specific URLs and a set of wrappers to require the user to be logged in. We also provide the URL where the service provider can fetch the metadata from the IDP at startup. In our case, we'll use [samltest.id](https://samltest.id/), an identity provider designed for testing.
```golang package main import ( ) ``` Next we'll have to register our service provider with the identity provider to establish trust from the service provider to the IDP. For [samltest.id](https://samltest.id/), you can do something like: Navigate to https://samltest.id/upload.php and upload the file you fetched. Now you should be able to authenticate. The flow should look like this: 1. You browse to `localhost:8000/hello` 1. The middleware redirects you to `https://samltest.id/idp/profile/SAML2/Redirect/SSO` 1. samltest.id prompts you for a username and password. 1. samltest.id returns you an HTML document which contains an HTML form set up to POST to `localhost:8000/saml/acs`. The form is automatically submitted if you have JavaScript enabled. 1. The local service validates the response, issues a session cookie, and redirects you to the original URL, `localhost:8000/hello`. 1. This time when `localhost:8000/hello` is requested there is a valid session and so the main content is served. Please see `example/idp/` for a substantially complete example of how to use the library and helpers to be an identity provider. The SAML standard is huge and complex with many dark corners and strange, unused features. This package implements the most commonly used subset of these features required to provide a single sign-on experience. The package supports at least the subset of SAML known as [interoperable SAML](http://saml2int.org). This package supports the Web SSO profile. Message flows from the service provider to the IDP are supported using the HTTP Redirect binding and the HTTP POST binding. Message flows from the IDP to the service provider are supported via the HTTP POST binding. The package can produce signed SAML assertions, and can validate both signed and encrypted SAML assertions. It does not support signed or encrypted requests. The _RelayState_ parameter allows you to pass user state information across the authentication flow. The most common use for this is to allow a user to request a deep link into your site, be redirected through the SAML login flow, and upon successful completion, be directed to the originally requested link, rather than the root. Unfortunately, _RelayState_ is less useful than it could be. Firstly, it is not authenticated, so anything you supply must be signed to avoid XSS or CSRF. Secondly, it is limited to 80 bytes in length, which precludes signing. (See section 3.6.3.1 of SAMLProfiles.) The SAML specification is a collection of PDFs (sadly): - [SAMLCore](http://docs.oasis-open.org/security/saml/v2.0/saml-core-2.0-os.pdf) defines data types. - [SAMLBindings](http://docs.oasis-open.org/security/saml/v2.0/saml-bindings-2.0-os.pdf) defines the details of the HTTP requests in play. - [SAMLProfiles](http://docs.oasis-open.org/security/saml/v2.0/saml-profiles-2.0-os.pdf) describes data flows. - [SAMLConformance](http://docs.oasis-open.org/security/saml/v2.0/saml-conformance-2.0-os.pdf) includes a support matrix for various parts of the protocol. [SAMLtest](https://samltest.id/) is a testing ground for SAML service and identity providers. Please do not report security issues in the issue tracker. Rather, please contact me directly at ross@kndr.org ([PGP Key `78B6038B3B9DFB88`](https://keybase.io/crewjam)).
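A condensed sketch of the service provider wiring described in the two sections above, based on the samlsp helpers; the certificate/key file names, the localhost URL, and the hello handler are placeholder assumptions:

```go
package main

import (
	"context"
	"crypto/rsa"
	"crypto/tls"
	"crypto/x509"
	"fmt"
	"net/http"
	"net/url"

	"github.com/crewjam/saml/samlsp"
)

func hello(w http.ResponseWriter, r *http.Request) {
	fmt.Fprintf(w, "Hello, %s!", samlsp.AttributeFromContext(r.Context(), "displayName"))
}

func main() {
	// Self-signed key pair for this service provider (placeholder file names).
	keyPair, err := tls.LoadX509KeyPair("myservice.cert", "myservice.key")
	if err != nil {
		panic(err)
	}
	keyPair.Leaf, err = x509.ParseCertificate(keyPair.Certificate[0])
	if err != nil {
		panic(err)
	}

	// Fetch IDP metadata at startup instead of using the deprecated IDPMetadataURL option.
	idpMetadataURL, _ := url.Parse("https://samltest.id/saml/idp")
	idpMetadata, err := samlsp.FetchMetadata(context.Background(), http.DefaultClient, *idpMetadataURL)
	if err != nil {
		panic(err)
	}

	rootURL, _ := url.Parse("http://localhost:8000")
	samlSP, _ := samlsp.New(samlsp.Options{
		URL:         *rootURL,
		Key:         keyPair.PrivateKey.(*rsa.PrivateKey),
		Certificate: keyPair.Leaf,
		IDPMetadata: idpMetadata,
	})

	http.Handle("/hello", samlSP.RequireAccount(http.HandlerFunc(hello)))
	http.Handle("/saml/", samlSP)
	http.ListenAndServe(":8000", nil)
}
```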
Package govpp provides the entry point to govpp functionality. It provides the API for connecting the govpp core to VPP either using the default VPP adapter, or using the adapter previously set by SetAdapter function (useful mostly just for unit/integration tests with mocked VPP adapter). To create a connection to VPP, use govpp.Connect function: Make sure you close the connection after using it. If the connection is not closed, it will leak resources. Please note that only one VPP connection is allowed for a single process. In case you need to mock the connection to VPP (e.g. for testing), use the govpp.SetAdapter function before calling govpp.Connect. Once connected to VPP, use the functions from the api package to communicate with it.
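A rough sketch of the connect/close pattern described above; the import path, socket path, and the exact Connect signature are assumptions that vary between govpp versions:

```go
package main

import (
	"log"

	"go.fd.io/govpp"
)

func main() {
	// Connect to VPP (the binary API socket path is a placeholder assumption).
	conn, err := govpp.Connect("/run/vpp/api.sock")
	if err != nil {
		log.Fatal("failed to connect to VPP:", err)
	}
	// Always close the connection, otherwise resources leak.
	defer conn.Disconnect()

	// Create an API channel and use the api package to talk to VPP.
	ch, err := conn.NewAPIChannel()
	if err != nil {
		log.Fatal(err)
	}
	defer ch.Close()
}
```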
Package codebuild provides the API client, operations, and parameter types for AWS CodeBuild. CodeBuild is a fully managed build service in the cloud. CodeBuild compiles your source code, runs unit tests, and produces artifacts that are ready to deploy. CodeBuild eliminates the need to provision, manage, and scale your own build servers. It provides prepackaged build environments for the most popular programming languages and build tools, such as Apache Maven, Gradle, and more. You can also fully customize build environments in CodeBuild to use your own build tools. CodeBuild scales automatically to meet peak build requests. You pay only for the build time you consume. For more information about CodeBuild, see the CodeBuild User Guide.
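A small sketch of constructing the client and making a call with the AWS SDK for Go v2; credentials and region come from the default configuration chain:

```go
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go-v2/config"
	"github.com/aws/aws-sdk-go-v2/service/codebuild"
)

func main() {
	ctx := context.TODO()

	cfg, err := config.LoadDefaultConfig(ctx)
	if err != nil {
		log.Fatal(err)
	}

	client := codebuild.NewFromConfig(cfg)

	// List build project names in the account/region.
	out, err := client.ListProjects(ctx, &codebuild.ListProjectsInput{})
	if err != nil {
		log.Fatal(err)
	}
	for _, name := range out.Projects {
		fmt.Println(name)
	}
}
```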
Package bitset implements bitsets, a mapping between non-negative integers and boolean values. It should be more efficient than map[uint] bool. It provides methods for setting, clearing, flipping, and testing individual integers. But it also provides set intersection, union, difference, complement, and symmetric operations, as well as tests to check whether any, all, or no bits are set, and querying a bitset's current length and number of positive bits. BitSets are expanded to the size of the largest set bit; the memory allocation is approximately Max bits, where Max is the largest set bit. BitSets are never shrunk. On creation, a hint can be given for the number of bits that will be used. Many of the methods, including Set, Clear, and Flip, return a BitSet pointer, which allows for chaining. Example use: As an alternative to BitSets, one should check out the 'big' package, which provides a (less set-theoretical) view of bitsets.
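A quick sketch of typical use, including the chaining mentioned above; the bits-and-blooms/bitset import path and the bit positions are assumptions:

```go
package main

import (
	"fmt"

	"github.com/bits-and-blooms/bitset"
)

func main() {
	var b bitset.BitSet // the zero value is an empty, usable bitset

	// Set, Clear, and Flip return the BitSet, which allows chaining.
	b.Set(10).Set(11).Clear(10)

	fmt.Println(b.Test(11)) // true
	fmt.Println(b.Test(10)) // false
	fmt.Println(b.Count())  // 1

	// A creation hint avoids re-allocations when the size is roughly known.
	c := bitset.New(1000)
	c.Set(999)
	fmt.Println(b.Union(c).Count()) // 2
}
```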
This tests more than one function. It tests all of the functions needed in order to retrieve a SafeArray populated with Strings.
Package secp256k1 implements optimized secp256k1 elliptic curve operations in pure Go. This package provides an optimized pure Go implementation of elliptic curve cryptography operations over the secp256k1 curve as well as data structures and functions for working with public and private secp256k1 keys. See https://www.secg.org/sec2-v2.pdf for details on the standard. In addition, sub packages are provided to produce, verify, parse, and serialize ECDSA signatures and EC-Schnorr-DCRv0 (a custom Schnorr-based signature scheme specific to Decred) signatures. See the README.md files in the relevant sub packages for more details about those aspects. An overview of the features provided by this package is as follows: It also provides an implementation of the Go standard library crypto/elliptic Curve interface via the S256 function so that it may be used with other packages in the standard library such as crypto/tls, crypto/x509, and crypto/ecdsa. However, in the case of ECDSA, it is highly recommended to use the ecdsa sub package of this package instead since it is optimized specifically for secp256k1 and is significantly faster as a result. Although this package was primarily written for dcrd, it has intentionally been designed so it can be used as a standalone package for any projects needing to use optimized secp256k1 elliptic curve cryptography. Finally, a comprehensive suite of tests is provided to provide a high level of quality assurance. At the time of this writing, the primary public key cryptography in widespread use on the Decred network used to secure coins is based on elliptic curves defined by the secp256k1 domain parameters. This example demonstrates use of GenerateSharedSecret to encrypt a message for a recipient's public key, and subsequently decrypt the message using the recipient's private key.
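A pared-down sketch of the GenerateSharedSecret flow mentioned above, showing that both parties derive the same ECDH secret; the v4 import path is an assumption, and a real application would feed the secret into a KDF and symmetric cipher:

```go
package main

import (
	"bytes"
	"fmt"

	"github.com/decred/dcrd/dcrec/secp256k1/v4"
)

func main() {
	// Each party generates its own private key.
	alicePriv, _ := secp256k1.GeneratePrivateKey()
	bobPriv, _ := secp256k1.GeneratePrivateKey()

	// Each side combines its private key with the other side's public key.
	secretA := secp256k1.GenerateSharedSecret(alicePriv, bobPriv.PubKey())
	secretB := secp256k1.GenerateSharedSecret(bobPriv, alicePriv.PubKey())

	// Both shared secrets are identical and can seed a symmetric key.
	fmt.Println(bytes.Equal(secretA, secretB)) // true
}
```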
Package dockertest contains helper functions for setting up and tearing down docker containers to aid in testing. dockertest supports spinning up MySQL, PostgreSQL and MongoDB out of the box. Dockertest provides two environment variables
Package semver provides the ability to work with Semantic Versions (http://semver.org) in Go. Specifically it provides the ability to: There are two functions that can parse semantic versions. The `StrictNewVersion` function only parses valid version 2 semantic versions as outlined in the specification. The `NewVersion` function attempts to coerce a version into a semantic version and parse it. For example, if there is a leading v or a version listed without all 3 parts (e.g. 1.2) it will attempt to coerce it into a valid semantic version (e.g., 1.2.0). In both cases a `Version` object is returned that can be sorted, compared, and used in constraints. When parsing a version an optional error can be returned if there is an issue parsing the version. For example, The version object has methods to get the parts of the version, compare it to other versions, convert the version back into a string, and get the original string. For more details please see the documentation at https://godoc.org/github.com/Masterminds/semver. A set of versions can be sorted using the `sort` package from the standard library. For example, There are two methods for comparing versions. One uses comparison methods on `Version` instances and the other is using Constraints. There are some important differences to note between these two methods of comparison: the comparison methods on `Version` follow the specification, while comparison ranges are not part of the specification. Different packages and tools have taken it upon themselves to come up with range rules. This has resulted in differences. For example, npm/js and Cargo/Rust follow similar patterns, while PHP has a different pattern for ^. The comparison features in this package follow the npm/js and Cargo/Rust lead because applications using it have followed similar patterns with their versions. Checking a version against version constraints is one of the most featureful parts of the package. There are two elements to the comparisons. First, a comparison string is a list of comma or space separated AND comparisons. These are then separated by || (OR) comparisons. For example, `">= 1.2 < 3.0.0 || >= 4.2.3"` is looking for a comparison that's greater than or equal to 1.2 and less than 3.0.0 or is greater than or equal to 4.2.3. This can also be written as `">= 1.2, < 3.0.0 || >= 4.2.3"` The basic comparisons are: There are multiple methods to handle ranges and the first is hyphen ranges. These look like: The `x`, `X`, and `*` characters can be used as a wildcard character. This works for all comparison operators. When used on the `=` operator it falls back to the tilde operation. For example, Tilde Range Comparisons (Patch) The tilde (`~`) comparison operator is for patch level ranges when a minor version is specified and major level changes when the minor number is missing. For example, Caret Range Comparisons (Major) The caret (`^`) comparison operator is for major level changes once a stable (1.0.0) release has occurred. Prior to a 1.0.0 release the minor version acts as the API stability level. This is useful when comparing API versions, as a major change is API breaking. For example, In addition to testing a version against a constraint, a version can be validated against a constraint. When validation fails a slice of errors containing why a version didn't meet the constraint is returned. For example,
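A compact sketch of parsing versions and checking them against a constraint string of the kind described above; the v3 import path and the concrete version strings are assumptions:

```go
package main

import (
	"fmt"
	"log"

	"github.com/Masterminds/semver/v3"
)

func main() {
	// NewVersion coerces loose inputs (e.g. "v1.2") into full semantic versions.
	v, err := semver.NewVersion("1.3.0")
	if err != nil {
		log.Fatal(err)
	}

	// Comma/space separated comparisons are ANDed; || separates OR groups.
	c, err := semver.NewConstraint(">= 1.2, < 3.0.0 || >= 4.2.3")
	if err != nil {
		log.Fatal(err)
	}

	fmt.Println(c.Check(v)) // true: 1.3.0 satisfies the first OR group

	// Validate additionally reports why a version fails the constraint.
	ok, errs := c.Validate(v)
	fmt.Println(ok, errs)
}
```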
Package check is a rich testing extension for Go's testing package. For details about the project, see:
Package docopt parses command-line arguments based on a help message. Given a conventional command-line help message, docopt processes the arguments. See https://github.com/docopt/docopt#help-message-format for a description of the help message format. This package exposes three different APIs, depending on the level of control required. The first, simplest way to parse your docopt usage is to just call: This will use os.Args[1:] as the argv slice, and use the default parser options. If you want to provide your own version string and args, then use: If the last parameter (version) is a non-empty string, it will be printed when --version is given in the argv slice. Finally, we can instantiate our own docopt.Parser which gives us control over how things like help messages are printed and whether to exit after displaying usage messages, etc. In particular, setting your own custom HelpHandler function makes unit testing your own docs with example command line invocations much more enjoyable. All three of these return a map of option names to the values parsed from argv, and an error or nil. You can get the values using the helpers, or just treat it as a regular map: Additionally, you can `Bind` these to a struct, assigning option values to the exported fields of that struct, all at once.
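A minimal sketch of the simplest API level (ParseDoc) with a hypothetical usage message; the typed helpers and Bind follow the same pattern:

```go
package main

import (
	"fmt"

	"github.com/docopt/docopt-go"
)

func main() {
	usage := `Naval Fate.

Usage:
  naval_fate ship <name> move <x> <y> [--speed=<kn>]
  naval_fate -h | --help

Options:
  -h --help      Show this screen.
  --speed=<kn>   Speed in knots [default: 10].`

	// Uses os.Args[1:] and the default parser options.
	opts, _ := docopt.ParseDoc(usage)

	// Read values with the typed helpers...
	name, _ := opts.String("<name>")
	speed, _ := opts.Int("--speed")
	fmt.Println(name, speed)

	// ...or bind everything onto a struct at once.
	var conf struct {
		Name  string `docopt:"<name>"`
		Speed int    `docopt:"--speed"`
	}
	opts.Bind(&conf)
}
```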
Package godog is the official Cucumber BDD framework for Golang; it merges specification and test documentation into one cohesive whole. Godog does not intervene with the standard "go test" command and its behavior. You can leverage both frameworks to functionally test your application while maintaining all test related source code in *_test.go files. Godog acts similarly to the go test command. It uses the go compiler and linker tool in order to produce a test executable. Godog contexts need to be exported, the same as Test functions for go test. For example, imagine you're about to create the famous UNIX ls command. Before you begin, you describe how the feature should work; see the example below. Example: Now, wouldn't it be cool if something could read this sentence and use it to actually run a test against the ls command? Hey, that's exactly what this package does! As you'll see, Godog is easy to learn, quick to use, and will put the fun back into tests. Godog was inspired by Behat and Cucumber; the above description is taken from their documentation.
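A tiny sketch of exporting step definitions in a *_test.go file, assuming the current ScenarioContext API and the github.com/cucumber/godog import path (older releases used different paths and hooks); the step text and function are hypothetical:

```go
package main

import (
	"fmt"

	"github.com/cucumber/godog"
)

func thereAreFiles(count int) error {
	// Arrange the state the scenario step describes (hypothetical example).
	if count < 0 {
		return fmt.Errorf("invalid count %d", count)
	}
	return nil
}

// InitializeScenario is exported so godog can discover it, much like go test
// discovers Test functions.
func InitializeScenario(sc *godog.ScenarioContext) {
	sc.Step(`^there are (\d+) files in the directory$`, thereAreFiles)
}
```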
Package httpexpect helps with end-to-end HTTP and REST API testing. See example directory: There are two common ways to test an API with httpexpect: The second approach works only if the server is a Go module and its handler can be imported in tests. Concrete behaviour is determined by the Client implementation passed to the Config struct. If you're using http.Client, set its Transport field (http.RoundTripper) to one of the following: Note that the http handler can usually be obtained from the http framework you're using. E.g., the echo framework provides either http.Handler or fasthttp.RequestHandler. You can also provide your own implementation of RequestFactory (creates http.Request), or Client (gets http.Request and returns http.Response). If you're starting a server from tests, it's very handy to use net/http/httptest. Whenever values are checked for equality in httpexpect, they are converted to "canonical form": This is equivalent to subsequently calling json.Marshal() and json.Unmarshal() on the value, and is currently implemented that way. When some check fails, a failure is reported. If non-fatal failures are used (see Reporter interface), execution continues and the instance that was checked is marked as failed. If a specific instance is marked as failed, all subsequent checks are ignored for this instance and for any child instances retrieved after the failure. Example:
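A brief test sketch of the httptest-based approach described above, assuming the v2 API; the handler and route are placeholders, and with the in-process approach you would instead point the Client's Transport at a binder for your handler:

```go
package example

import (
	"net/http"
	"net/http/httptest"
	"testing"

	"github.com/gavv/httpexpect/v2"
)

func TestUsers(t *testing.T) {
	// A placeholder handler standing in for the application under test.
	handler := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		w.Header().Set("Content-Type", "application/json")
		w.Write([]byte(`[{"name":"alice"}]`))
	})

	server := httptest.NewServer(handler)
	defer server.Close()

	e := httpexpect.Default(t, server.URL)

	e.GET("/users").
		Expect().
		Status(http.StatusOK).
		JSON().Array().NotEmpty()
}
```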
Package lru implements generic least-recently-used caches with near O(1) perf. A least-recently-used (LRU) cache is a cache that holds a limited number of items with an eviction policy such that when the capacity of the cache is exceeded, the least-recently-used item is automatically removed when inserting a new item. The meaning of 'used' in this implementation is either accessing the item via a lookup or adding the item into the cache, including when the item already exists. This package has intentionally been designed so it can be used as a standalone package for any projects needing to make use of a well-tested least-recently-used cache with near O(1) performance characteristics for lookups, inserts, and deletions. This example demonstrates creating a new kv cache instance, inserting items into the cache, causing an eviction of the least-recently-used item, and removing an item. This example demonstrates creating a new cache instance, inserting items into the cache, causing an eviction of the least-recently-used item, and removing an item.
Package demotools provides a set of tools that help you write code examples. **FileSystem** The filesystem is used to abstract the os package and allow for variations to be swapped out, such as mocks for testing. Package demotools provides a set of tools that help you write code examples. **Pausable** The pausable interface creates an easy-to-mock pausing object for testing. Package demotools provides a set of tools that help you write code examples. **Questioner** The questioner is used in interactive examples to ask for input from the user at a command prompt, validate the answer, and ask the question again, if needed. It is exposed through an interface so that it can be mocked for unit testing. A pre-written mock is provided in the testtools package.
Package color is a command-line color library. It supports rich color rendering output, provides a universal API, and is compatible with Windows. Source code and other details for the project are available at GitHub: For more usage, please see the README and tests.
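A couple of representative calls as a sketch, assuming the gookit/color import path; actual output depends on terminal support:

```go
package main

import (
	"github.com/gookit/color"
)

func main() {
	// Quick use of a basic color.
	color.Red.Println("simple message in red")
	color.Green.Printf("%s\n", "formatted message in green")

	// Build a custom style from foreground, background, and options.
	color.New(color.FgWhite, color.BgBlack, color.OpBold).Println("custom style")

	// Theme helpers for common message levels.
	color.Info.Println("info message")
	color.Error.Println("error message")
}
```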
Package toml provides facilities for decoding and encoding TOML configuration files via reflection. There is also support for delaying decoding with the Primitive type, and querying the set of keys in a TOML document with the MetaData type. The specification implemented: https://github.com/toml-lang/toml The sub-command github.com/BurntSushi/toml/cmd/tomlv can be used to verify whether a file is a valid TOML document. It can also be used to print the type of each key in a TOML document. There are two important types of tests used for this package. The first is contained inside '*_test.go' files and uses the standard Go unit testing framework. These tests are primarily devoted to holistically testing the decoder and encoder. The second type of testing is used to verify the implementation's adherence to the TOML specification. These tests have been factored into their own project: https://github.com/BurntSushi/toml-test The reason the tests are in a separate project is so that they can be used by any implementation of TOML. Namely, it is language agnostic. Example StrictDecoding shows how to detect whether there are keys in the TOML document that weren't decoded into the value given. This is useful for returning an error to the user if they've included extraneous fields in their configuration. Example UnmarshalTOML shows how to implement a struct type that knows how to unmarshal itself. The struct must take full responsibility for mapping the values passed into the struct. The method may be used with interfaces in a struct in cases where the actual type is not known until the data is examined. Example Unmarshaler shows how to decode TOML strings into your own custom data type.
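A small decoding sketch using struct tags; the config structure and TOML snippet are hypothetical:

```go
package main

import (
	"fmt"
	"log"

	"github.com/BurntSushi/toml"
)

type Config struct {
	Title  string   `toml:"title"`
	Port   int      `toml:"port"`
	Owners []string `toml:"owners"`
}

func main() {
	data := `
title = "example"
port = 8080
owners = ["alice", "bob"]
`
	var conf Config
	// Decode reflects the TOML document into the struct via the toml tags.
	if _, err := toml.Decode(data, &conf); err != nil {
		log.Fatal(err)
	}
	fmt.Printf("%+v\n", conf)
}
```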
Package tableflip implements zero downtime upgrades. An upgrade spawns a new copy of argv[0] and passes file descriptors of used listening sockets to the new process. The old process exits once the new process signals readiness. Thus new code can use sockets allocated in the old process. This is similar to the approach used by nginx, but as a library. At any point in time there are one or two processes, with at most one of them in non-ready state. A successful upgrade fully replaces all old configuration and code. To use this library with systemd you need to use the PIDFile option in the service file. Then pass /path/to/pid-file to New. You can use systemd-run to test your implementation: systemd-run will print a unit name, which you can use with systemctl to inspect the service. NOTES: Requires at least Go 1.9, since there is a race condition on the pipes used for communication between parent and child. If you're seeing "can't start process: no such file or directory", you're probably using "go run main.go"; for graceful reloads to work, you'll need to use "go build main.go". Tableflip does not work on Windows, because Windows does not have the mechanisms required to support this method of graceful restarting. It is still possible to include this package in code that runs on Windows, which may be necessary in certain development circumstances, but it will not provide zero downtime upgrades when running on Windows. See the `testing` package for an example of how to use it. This shows how to use the upgrader with the graceful shutdown facilities of net/http. This shows how to use the Upgrader with a listener based service.
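A skeletal sketch of the upgrade flow described above; the pid file path, listen address, and SIGHUP-as-upgrade-signal are assumptions, and most error handling is trimmed:

```go
package main

import (
	"log"
	"net/http"
	"os"
	"os/signal"
	"syscall"

	"github.com/cloudflare/tableflip"
)

func main() {
	upg, err := tableflip.New(tableflip.Options{
		PIDFile: "/path/to/pid-file", // matches the systemd PIDFile option
	})
	if err != nil {
		log.Fatal(err)
	}
	defer upg.Stop()

	// Trigger an upgrade when SIGHUP is received.
	go func() {
		sig := make(chan os.Signal, 1)
		signal.Notify(sig, syscall.SIGHUP)
		for range sig {
			upg.Upgrade()
		}
	}()

	// Listeners created through the upgrader are handed to the new process.
	ln, err := upg.Listen("tcp", "localhost:8080")
	if err != nil {
		log.Fatal(err)
	}
	go http.Serve(ln, nil)

	// Signal readiness; the old process exits once the new one is ready.
	if err := upg.Ready(); err != nil {
		log.Fatal(err)
	}
	<-upg.Exit()
}
```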
Package miniredis is a pure Go Redis test server, for use in Go unit tests. There are no dependencies on system binaries, and every server you start will be empty. Start a server with `s, err := miniredis.Run()`. Stop it with `defer s.Close()`. Point your Redis client to `s.Addr()` or `s.Host(), s.Port()`. Set keys directly via s.Set(...) and similar commands, or use a Redis client. For direct use you can select a Redis database with either `s.Select(12); s.Get("foo")` or `s.DB(12).Get("foo")`.
Package csrf (gorilla/csrf) provides Cross Site Request Forgery (CSRF) prevention middleware for Go web applications & services. It includes: * The `csrf.Protect` middleware/handler provides CSRF protection on routes attached to a router or a sub-router. * A `csrf.Token` function that provides the token to pass into your response, whether that be a HTML form or a JSON response body. * ... and a `csrf.TemplateField` helper that you can pass into your `html/template` templates to replace a `{{ .csrfField }}` template tag with a hidden input field. gorilla/csrf is easy to use: add the middleware to individual handlers with the below: ... and then collect the token with `csrf.Token(r)` before passing it to the template, JSON body or HTTP header (you pick!). gorilla/csrf inspects the form body (first) and HTTP headers (second) on subsequent POST/PUT/PATCH/DELETE/etc. requests for the token. Note that the authentication key passed to `csrf.Protect([]byte(key))` should be 32-bytes long and persist across application restarts. Generating a random key won't allow you to authenticate existing cookies and will break your CSRF validation. Here's the common use-case: HTML forms you want to provide CSRF protection for, in order to protect against malicious POST requests being made: Note that the CSRF middleware will (by necessity) consume the request body if the token is passed via POST form values. If you need to consume this in your handler, insert your own middleware earlier in the chain to capture the request body. You can also send the CSRF token in the response header. This approach is useful if you're using a front-end JavaScript framework like Ember or Angular, or are providing a JSON API: If you're writing a client that's supposed to mimic browser behavior, make sure to send back the CSRF cookie (the default name is _gorilla_csrf, but this can be changed with the CookieName Option) along with either the X-CSRF-Token header or the gorilla.csrf.Token form field. In addition: getting CSRF protection right is important, so here's some background: * This library generates unique-per-request (masked) tokens as a mitigation against the BREACH attack (http://breachattack.com/). * The 'base' (unmasked) token is stored in the session, which means that multiple browser tabs won't cause a user problems as their per-request token is compared with the base token. * Operates on a "whitelist only" approach where safe (non-mutating) HTTP methods (GET, HEAD, OPTIONS, TRACE) are the *only* methods where token validation is not enforced. * The design is based on the battle-tested Django (https://docs.djangoproject.com/en/1.8/ref/csrf/) and Ruby on Rails (http://api.rubyonrails.org/classes/ActionController/RequestForgeryProtection.html) approaches. * Cookies are authenticated and based on the securecookie (https://github.com/gorilla/securecookie) library. They're also Secure (issued over HTTPS only) and are HttpOnly by default, because sane defaults are important. * Go's `crypto/rand` library is used to generate the 32 byte (256 bit) tokens and the one-time-pad used for masking them. This library does not seek to be adventurous.
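A trimmed sketch of wiring the middleware and exposing the token to an HTML form; the auth key literal, routes, and template are placeholders, and a real key must be 32 random bytes that persist across restarts:

```go
package main

import (
	"html/template"
	"net/http"

	"github.com/gorilla/csrf"
)

var form = template.Must(template.New("signup").Parse(
	`<form method="POST" action="/signup">{{ .csrfField }}<input type="submit"></form>`))

func showSignupForm(w http.ResponseWriter, r *http.Request) {
	// TemplateField injects the hidden input carrying the per-request token.
	form.Execute(w, map[string]interface{}{
		"csrfField": csrf.TemplateField(r),
	})
}

func main() {
	mux := http.NewServeMux()
	mux.HandleFunc("/signup", showSignupForm)

	// Placeholder key; use 32 persistent random bytes in a real deployment.
	protect := csrf.Protect([]byte("32-byte-long-auth-key-aaaaaaaaaa"))
	http.ListenAndServe(":8000", protect(mux))
}
```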
Package selenium provides a client to drive web browser-based automation and testing. See the example below for how to get started with this API. This package can depend on several binaries being available, depending on which browsers will be used and how. To avoid needing to manage these dependencies, use a cloud-based browser testing environment, like Sauce Labs, BrowserStack or similar. Otherwise, use the methods provided by this API to specify the paths to the dependencies, which will have to be downloaded separately. This example shows how to navigate to the http://play.golang.org page, input a short program, run it, and inspect its output. If you want to actually run this example:
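A condensed sketch of driving a remote browser; the WebDriver URL and browser name are assumptions, and a Selenium server or driver must already be listening at that address:

```go
package main

import (
	"fmt"
	"log"

	"github.com/tebeka/selenium"
)

func main() {
	// Connect to a running WebDriver server (address is a placeholder).
	caps := selenium.Capabilities{"browserName": "firefox"}
	wd, err := selenium.NewRemote(caps, "http://localhost:4444/wd/hub")
	if err != nil {
		log.Fatal(err)
	}
	defer wd.Quit()

	// Navigate and inspect the page.
	if err := wd.Get("https://play.golang.org/?simple=1"); err != nil {
		log.Fatal(err)
	}
	title, _ := wd.Title()
	fmt.Println(title)
}
```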
Package godog is the official Cucumber BDD framework for Golang; it merges specification and test documentation into one cohesive whole. Godog does not intervene with the standard "go test" command and its behavior. You can leverage both frameworks to functionally test your application while maintaining all test related source code in *_test.go files. Godog acts similarly to the go test command. It uses the go compiler and linker tool in order to produce a test executable. Godog contexts need to be exported, the same as Test functions for go test. For example, imagine you’re about to create the famous UNIX ls command. Before you begin, you describe how the feature should work; see the example below. Example: Now, wouldn’t it be cool if something could read this sentence and use it to actually run a test against the ls command? Hey, that’s exactly what this package does! As you’ll see, Godog is easy to learn, quick to use, and will put the fun back into tests. Godog was inspired by Behat and Cucumber; the above description is taken from their documentation.
Package validator implements value validations based on struct tags. In code it is often necessary to validate that a given value is valid before using it for something. A typical example might be something like this. This is a simple enough example, but it can get significantly more complex, especially when dealing with structs. You get the idea. Package validator allows one to define valid values as struct tags when defining a new struct type. Then validating a variable of type NewUserRequest becomes trivial. Here is the list of validator functions built into the package. Note that there are no tests to prevent conflicting validator parameters. For instance, these fields will never be valid. It is possible to define custom validation functions by using SetValidationFunc. First, one needs to create a validation function. Then one needs to add it to the list of validation funcs and give it a "tag" name. Then it is possible to use the notzz validation tag. This will print "Field A error: value cannot be ZZ" To use parameters, it is very similar. And then the code below should print "Field A error: value cannot be ABC". As well, it is possible to overwrite built-in validation functions. And you can delete a validation function by setting it to nil. Using a non-existing validation func in a field tag will always fail with the error validate.ErrUnknownTag. Finally, package validator also provides a helper function that can be used to validate simple variables/values. In case there is a reason why one would not wish to use tag 'validate' (maybe due to a conflict with a different package), it is possible to tell the package to use a different tag. Then. SetTag is permanent. The new tag name will be used until it is again changed with a new call to SetTag. A way to temporarily use a different tag exists. You may often need to have a different set of validation rules for different situations. In all the examples above, we only used the default validator but you could create a new one and set specific rules for it. For instance, you might use the same struct to decode incoming JSON for a REST API but your needs will change when you're using it to, say, create a new instance in storage vs. when you need to change something. Maybe when creating a new user, you need to make sure all values in the struct are filled, but then you use the same struct to handle incoming requests to, say, change the password, in which case you only need the Username and the Password and don't care for the others. You might use two different validators. It is also possible to do all of that using only the default validator as long as SetTag is always called before calling validator.Validate() or you chain the call with WithTag().
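A condensed sketch of struct-tag validation as described above; the NewUserRequest fields and tag values are illustrative, and the gopkg.in/validator.v2 import path is an assumption:

```go
package main

import (
	"fmt"

	"gopkg.in/validator.v2"
)

type NewUserRequest struct {
	Username string `validate:"min=3,max=40,regexp=^[a-zA-Z]*$"`
	Name     string `validate:"nonzero"`
	Age      int    `validate:"min=21"`
	Password string `validate:"min=8"`
}

func main() {
	nur := NewUserRequest{Username: "something", Age: 20}

	// Validate walks the struct tags and returns nil or a map of field errors.
	if errs := validator.Validate(nur); errs != nil {
		fmt.Println(errs) // values not valid, deal with errors here
	}
}
```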
Package codepipeline provides the API client, operations, and parameter types for AWS CodePipeline. This is the CodePipeline API Reference. This guide provides descriptions of the actions and data types for CodePipeline. Some functionality for your pipeline can only be configured through the API. For more information, see the CodePipeline User Guide. You can use the CodePipeline API to work with pipelines, stages, actions, and transitions. Pipelines are models of automated release processes. Each pipeline is uniquely named, and consists of stages, actions, and transitions. You can work with pipelines by calling: CreatePipeline DeletePipeline GetPipeline GetPipelineExecution GetPipelineState ListActionExecutions ListPipelines ListPipelineExecutions StartPipelineExecution StopPipelineExecution UpdatePipeline Pipelines include stages. Each stage contains one or more actions that must complete before the next stage begins. A stage results in success or failure. If a stage fails, the pipeline stops at that stage and remains stopped until either a new version of an artifact appears in the source location, or a user takes action to rerun the most recent artifact through the pipeline. You can call GetPipelineState, which displays the status of a pipeline, including the status of stages in the pipeline, or GetPipeline, which returns the entire structure of the pipeline, including the stages of that pipeline. For more information about the structure of stages and actions, see CodePipeline Pipeline Structure Reference. Pipeline stages include actions that are categorized into categories such as source or build actions performed in a stage of a pipeline. For example, you can use a source action to import artifacts into a pipeline from a source such as Amazon S3. Like stages, you do not work with actions directly in most cases, but you do define and interact with actions when working with pipeline operations such as CreatePipeline and GetPipelineState. Valid action categories are: Source Build Test Deploy Approval Invoke Compute Pipelines also include transitions, which allow the transition of artifacts from one stage to the next in a pipeline after the actions in one stage complete. You can work with transitions by calling: DisableStageTransition EnableStageTransition For third-party integrators or developers who want to create their own integrations with CodePipeline, the expected sequence varies from the standard API user. To integrate with CodePipeline, developers need to work with the following items: Jobs, which are instances of an action. For example, a job for a source action might import a revision of an artifact from a source. You can work with jobs by calling: AcknowledgeJob GetJobDetails PollForJobs PutJobFailureResult PutJobSuccessResult Third party jobs, which are instances of an action created by a partner action and integrated into CodePipeline. Partner actions are created by members of the Amazon Web Services Partner Network. You can work with third party jobs by calling: AcknowledgeThirdPartyJob GetThirdPartyJobDetails PollForThirdPartyJobs PutThirdPartyJobFailureResult PutThirdPartyJobSuccessResult
Package appconfig provides the API client, operations, and parameter types for Amazon AppConfig. AppConfig feature flags and dynamic configurations help software builders quickly and securely adjust application behavior in production environments without full code deployments. AppConfig speeds up software release frequency, improves application resiliency, and helps you address emergent issues more quickly. With feature flags, you can gradually release new capabilities to users and measure the impact of those changes before fully deploying the new capabilities to all users. With operational flags and dynamic configurations, you can update block lists, allow lists, throttling limits, logging verbosity, and perform other operational tuning to quickly respond to issues in production environments. AppConfig is a capability of Amazon Web Services Systems Manager. Despite the fact that application configuration content can vary greatly from application to application, AppConfig supports the following use cases, which cover a broad spectrum of customer needs: Feature flags and toggles - Safely release new capabilities to your customers in a controlled environment. Instantly roll back changes if you experience a problem. Application tuning - Carefully introduce application changes while testing the impact of those changes with users in production environments. Allow list or block list - Control access to premium features or instantly block specific users without deploying new code. Centralized configuration storage - Keep your configuration data organized and consistent across all of your workloads. You can use AppConfig to deploy configuration data stored in the AppConfig hosted configuration store, Secrets Manager, Systems Manager, Parameter Store, or Amazon S3. This section provides a high-level description of how AppConfig works and how you get started. 1. Identify configuration values in code you want to manage in the cloud Before you start creating AppConfig artifacts, we recommend you identify configuration data in your code that you want to dynamically manage using AppConfig. Good examples include feature flags or toggles, allow and block lists, logging verbosity, service limits, and throttling rules, to name a few. If your configuration data already exists in the cloud, you can take advantage of AppConfig validation, deployment, and extension features to further streamline configuration data management. 2. Create an application namespace To create a namespace, you create an AppConfig artifact called an application. An application is simply an organizational construct like a folder. 3. Create environments For each AppConfig application, you define one or more environments. An environment is a logical grouping of targets, such as applications in a Beta or Production environment, Lambda functions, or containers. You can also define environments for application subcomponents, such as the Web, Mobile, and Back-end components. You can configure Amazon CloudWatch alarms for each environment. The system monitors alarms during a configuration deployment. If an alarm is triggered, the system rolls back the configuration. 4. Create a configuration profile A configuration profile includes, among other things, a URI that enables AppConfig to locate your configuration data in its stored location and a profile type. AppConfig supports two configuration profile types: feature flags and freeform configurations.
Feature flag configuration profiles store their data in the AppConfig hosted configuration store and the URI is simply hosted. For freeform configuration profiles, you can store your data in the AppConfig hosted configuration store or any Amazon Web Services service that integrates with AppConfig, as described in Creating a free form configuration profile in the AppConfig User Guide. A configuration profile can also include optional validators to ensure your configuration data is syntactically and semantically correct. AppConfig performs a check using the validators when you start a deployment. If any errors are detected, the deployment rolls back to the previous configuration data. 5. Deploy configuration data When you create a new deployment, you specify the following: An application ID A configuration profile ID A configuration version An environment ID where you want to deploy the configuration data A deployment strategy ID that defines how fast you want the changes to take effect When you call the StartDeployment API action, AppConfig performs the following tasks: Retrieves the configuration data from the underlying data store by using the location URI in the configuration profile. Verifies the configuration data is syntactically and semantically correct by using the validators you specified when you created your configuration profile. Caches a copy of the data so it is ready to be retrieved by your application. This cached copy is called the deployed data. 6. Retrieve the configuration You can configure AppConfig Agent as a local host and have the agent poll AppConfig for configuration updates. The agent calls the StartConfigurationSession and GetLatestConfiguration API actions and caches your configuration data locally. To retrieve the data, your application makes an HTTP call to the localhost server. AppConfig Agent supports several use cases, as described in Simplified retrieval methods in the AppConfig User Guide. If AppConfig Agent isn't supported for your use case, you can configure your application to poll AppConfig for configuration updates by directly calling the StartConfigurationSession and GetLatestConfiguration API actions. This reference is intended to be used with the AppConfig User Guide.
Package httpexpect helps with end-to-end HTTP and REST API testing. See example directory: There are two common ways to test an API with httpexpect: The second approach works only if the server is a Go module and its handler can be imported in tests. Concrete behaviour is determined by the Client implementation passed to the Config struct. If you're using http.Client, set its Transport field (http.RoundTripper) to one of the following: Note that the http handler can usually be obtained from the http framework you're using. E.g., the echo framework provides either http.Handler or fasthttp.RequestHandler. You can also provide your own implementation of RequestFactory (creates http.Request), or Client (gets http.Request and returns http.Response). If you're starting a server from tests, it's very handy to use net/http/httptest. Whenever values are checked for equality in httpexpect, they are converted to "canonical form": This is equivalent to subsequently calling json.Marshal() and json.Unmarshal() on the value, and is currently implemented that way. When some check fails, a failure is reported. If non-fatal failures are used (see Reporter interface), execution continues and the instance that was checked is marked as failed. If a specific instance is marked as failed, all subsequent checks are ignored for this instance and for any child instances retrieved after the failure. Example: If you want to be informed about every assertion made, successful or failed, you can use the AssertionHandler interface. The default implementation of this interface ignores successful assertions and reports failed assertions using the Formatter and Reporter objects. A custom AssertionHandler can handle all assertions (e.g. dump them in JSON format) and is free to use or not use Formatter and Reporter at its sole discretion.