Package handlers is a collection of handlers (aka "HTTP middleware") for use with Go's net/http package (or any framework supporting http.Handler). The package includes handlers for logging in standardised formats, compressing HTTP responses, validating content types and other useful tools for manipulating requests and responses.
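A minimal sketch of wrapping a standard net/http mux with logging and compression middleware (the import path assumes the gorilla/handlers module; adapt the handlers to your needs):

```golang
package main

import (
	"net/http"
	"os"

	"github.com/gorilla/handlers"
)

func main() {
	mux := http.NewServeMux()
	mux.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		w.Write([]byte("hello"))
	})

	// Log requests in Apache Combined Log Format and gzip-compress responses.
	wrapped := handlers.CombinedLoggingHandler(os.Stdout, handlers.CompressHandler(mux))
	http.ListenAndServe(":8080", wrapped)
}
```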
Gnostic is a tool for building better REST APIs through knowledge. Gnostic reads declarative descriptions of REST APIs that conform to the OpenAPI Specification, reports errors, resolves internal dependencies, and puts the results in a binary form that can be used in any language that is supported by the Protocol Buffer tools. Gnostic models are validated and typed. This allows API tool developers to focus on their product and not worry about input validation and type checking. Gnostic calls plugins that implement a variety of API implementation and support features including generation of client and server support code.
Package update provides functionality to implement secure, self-updating Go programs (or other single-file targets). For complete updating solutions please see Equinox (https://equinox.io) and go-tuf (https://github.com/flynn/go-tuf). This example shows how to update a program remotely from a URL. Go binaries can often be large. It can be advantageous to only ship a binary patch to a client instead of the complete program text of a new version. This example shows how to update a program with a bsdiff binary patch. Other patch formats may be applied by implementing the Patcher interface. Updating executable code on a computer can be a dangerous operation unless you take the appropriate steps to guarantee the authenticity of the new code. While checksum verification is important, it should always be combined with signature verification (next section) to guarantee that the code came from a trusted party. go-update validates SHA256 checksums by default, but this is pluggable via the Hash property on the Options struct. This example shows how to guarantee that the newly-updated binary is verified to have an appropriate checksum (that was otherwise retrieved via a secure channel) specified as a hex string. Cryptographic verification of new code from an update is an extremely important way to guarantee the security and integrity of your updates. Verification is performed by validating the signature of a hash of the new file. This means nothing changes if you apply your update with a patch. This example shows how to add signature verification to your updates. To make all of this work, an application distributor must first create a public/private key pair and embed the public key into their application. When they issue a new release, the issuer must sign the new executable file with the private key and distribute the signature along with the update. In order to update a Go application with go-update, you must distribute it as a single executable. This is often easy, but some applications require static assets (like HTML and CSS asset files or TLS certificates). In order to update applications like these, you'll want to make sure to embed those asset files into the distributed binary with a tool like go-bindata (my favorite): https://github.com/jteeuwen/go-bindata Mechanisms and protocols for determining whether an update should be applied and, if so, which one are out of scope for this package. Please consult go-tuf (https://github.com/flynn/go-tuf) or Equinox (https://equinox.io) for more complete solutions. go-update only works for self-updating applications that are distributed as a single binary, i.e. applications that do not have additional assets or dependency files. Updating applications that are distributed as multiple on-disk files is out of scope, although this may change in future versions of this library.
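A hedged sketch of the remote-update-with-checksum flow described above (the import path assumes the inconshreveable/go-update module; the URL and checksum are placeholders):

```golang
package main

import (
	"encoding/hex"
	"net/http"

	"github.com/inconshreveable/go-update"
)

// doUpdate downloads a new binary from url and applies it over the running
// executable, verifying an expected SHA256 checksum supplied as a hex string.
func doUpdate(url, hexChecksum string) error {
	checksum, err := hex.DecodeString(hexChecksum)
	if err != nil {
		return err
	}

	resp, err := http.Get(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()

	// SHA256 is the default hash; the Hash option on Options can change it.
	return update.Apply(resp.Body, update.Options{Checksum: checksum})
}

func main() {
	_ = doUpdate("https://example.com/myapp-latest", "deadbeef") // placeholder values
}
```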
Package otp implements both HOTP and TOTP based one time passcodes in a Google Authenticator compatible manner. When adding a TOTP for a user, you must store the "secret" value persistently. It is recommended to store the secret in an encrypted field in your datastore. Due to how TOTP works, it is not possible to store a hash for the secret value like you would a password. To enroll a user, you must first generate an OTP for them. Google Authenticator supports using a QR code as an enrollment method: Validating a TOTP passcode is very easy: just prompt the user for a passcode and retrieve the associated user's previously stored secret.
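A rough sketch of enrollment and validation with the totp subpackage (the import path assumes the pquerna/otp module; issuer and account names are placeholders):

```golang
package main

import (
	"fmt"

	"github.com/pquerna/otp/totp"
)

func main() {
	// Enrollment: generate a key, store key.Secret() encrypted, and present
	// key.URL() (or key.Image()) to the user as a QR code.
	key, err := totp.Generate(totp.GenerateOpts{
		Issuer:      "Example.com",       // placeholder
		AccountName: "alice@example.com", // placeholder
	})
	if err != nil {
		panic(err)
	}
	fmt.Println("provisioning URI:", key.URL())

	// Validation: compare a user-supplied passcode against the stored secret.
	passcode := "123456" // normally read from the user
	fmt.Println("valid:", totp.Validate(passcode, key.Secret()))
}
```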
Package bloom provides data structures and methods for creating Bloom filters. A Bloom filter is a representation of a set of _n_ items, where the main requirement is to make membership queries; _i.e._, whether an item is a member of a set. A Bloom filter has two parameters: _m_, a maximum size (typically a reasonably large multiple of the cardinality of the set to represent) and _k_, the number of hashing functions on elements of the set. (The actual hashing functions are important, too, but this is not a parameter for this implementation). A Bloom filter is backed by a BitSet; a key is represented in the filter by setting the bits at each value of the hashing functions (modulo _m_). Set membership is done by _testing_ whether the bits at each value of the hashing functions (again, modulo _m_) are set. If so, the item is in the set. If the item is actually in the set, a Bloom filter will never fail (the true positive rate is 1.0); but it is susceptible to false positives. The art is to choose _k_ and _m_ correctly. In this implementation, the hashing function used is murmurhash, a non-cryptographic hashing function. This implementation accepts keys for setting and testing as []byte. Thus, to add a string item, "Love": Similarly, to test if "Love" is in bloom: For numeric data, I recommend that you look into the encoding/binary library. But, for example, to add a uint32 to the filter: Finally, there is a method to estimate the false positive rate of a Bloom filter with _m_ bits and _k_ hashing functions for a set of size _n_: You can use it to validate the computed m, k parameters: or You would expect ActualfpRate to be close to the desired fp in these cases. The EstimateFalsePositiveRate function creates a temporary Bloom filter. It is also relatively expensive and only meant for validation.
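A brief sketch tying the above together (the import path assumes the bits-and-blooms/bloom module; the sizing parameters are illustrative):

```golang
package main

import (
	"encoding/binary"
	"fmt"

	"github.com/bits-and-blooms/bloom/v3"
)

func main() {
	// m and k chosen for roughly 1,000,000 items at a 1% false-positive rate.
	filter := bloom.NewWithEstimates(1000000, 0.01)

	// Add and test a string key.
	filter.Add([]byte("Love"))
	fmt.Println(filter.Test([]byte("Love"))) // true

	// Add a uint32 by encoding it with encoding/binary first.
	n1 := make([]byte, 4)
	binary.BigEndian.PutUint32(n1, 100)
	filter.Add(n1)
}
```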
Package govalidator is a package of validators and sanitizers for strings, structs and collections.
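For illustration, a couple of the string validators plus struct validation via `valid` tags (the import path assumes the asaskevich/govalidator module; tag names shown are common ones from that package):

```golang
package main

import (
	"fmt"

	"github.com/asaskevich/govalidator"
)

type Post struct {
	Title    string `valid:"alphanum,required"`
	Message  string `valid:"ascii"`
	AuthorIP string `valid:"ipv4"`
}

func main() {
	fmt.Println(govalidator.IsEmail("user@example.com")) // true
	fmt.Println(govalidator.IsURL("not a url"))          // false

	ok, err := govalidator.ValidateStruct(Post{Title: "MyPost", Message: "hi", AuthorIP: "10.0.0.1"})
	fmt.Println(ok, err)
}
```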
Package blackfriday is a Markdown processor. It translates plain text with simple formatting rules into HTML or LaTeX. Blackfriday includes an algorithm for creating sanitized anchor names corresponding to a given input text. This algorithm is used to create anchors for headings when EXTENSION_AUTO_HEADER_IDS is enabled. The algorithm is specified below, so that other packages can create compatible anchor names and links to those anchors. The algorithm iterates over the input text, interpreted as UTF-8, one Unicode code point (rune) at a time. All runes that are letters (category L) or numbers (category N) are considered valid characters. They are mapped to lower case, and included in the output. All other runes are considered invalid characters. Invalid characters that precede the first valid character, as well as invalid characters that follow the last valid character, are dropped completely. All other sequences of invalid characters between two valid characters are replaced with a single dash character '-'. SanitizedAnchorName exposes this functionality, and can be used to create compatible links to the anchor names generated by blackfriday. This algorithm is also implemented in a small standalone package at github.com/shurcooL/sanitized_anchor_name. It can be useful for clients that want a small package and don't need full functionality of blackfriday.
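A small sketch of rendering Markdown and generating a compatible anchor name, shown against the v1 API (which is where the EXTENSION_AUTO_HEADER_IDS flag above lives); v2 exposes an equivalent SanitizedAnchorName:

```golang
package main

import (
	"fmt"

	"github.com/russross/blackfriday"
)

func main() {
	// Render Markdown to HTML with the common set of extensions enabled.
	html := blackfriday.MarkdownCommon([]byte("# Heading One\n\nSome *text*.\n"))
	fmt.Println(string(html))

	// Produce the same anchor name blackfriday would generate for the heading.
	fmt.Println(blackfriday.SanitizedAnchorName("Heading One")) // "heading-one"
}
```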
Package saml contains a partial implementation of the SAML standard in golang. SAML is a standard for identity federation, i.e. either allowing a third party to authenticate your users or allowing third parties to rely on us to authenticate their users. In SAML parlance an Identity Provider (IDP) is a service that knows how to authenticate users. A Service Provider (SP) is a service that delegates authentication to an IDP. If you are building a service where users log in with someone else's credentials, then you are a Service Provider. This package supports implementing both service providers and identity providers. The core package contains the implementation of SAML. The package samlsp provides helper middleware suitable for use in Service Provider applications. The package samlidp provides a rudimentary IDP service that is useful for testing or as a starting point for other integrations. Version 0.4.0 introduces a few breaking changes to the _samlsp_ package in order to make the package more extensible, and to clean up the interfaces a bit. The default behavior remains the same, but you can now provide interface implementations of _RequestTracker_ (which tracks pending requests), _Session_ (which handles maintaining a session) and _OnError_ which handles reporting errors. Public fields of _samlsp.Middleware_ have changed, so some usages may require adjustment. See [issue 231](https://github.com/crewjam/saml/issues/231) for details. The option to provide an IDP metadata URL has been deprecated. Instead, we recommend that you use the `FetchMetadata()` function, or fetch the metadata yourself and use the new `ParseMetadata()` function, and pass the metadata in _samlsp.Options.IDPMetadata_. Similarly, the _HTTPClient_ field is now deprecated because it was only used for fetching metadata, which is no longer directly implemented. The fields that manage how cookies are set are deprecated as well. To customize how cookies are managed, provide custom implementation of _RequestTracker_ and/or _Session_, perhaps by extending the default implementations. The deprecated fields have not been removed from the Options structure, but will be in the future. In particular we have deprecated the following fields in _samlsp.Options_: - `Logger` - This was used to emit errors while validating, which is an anti-pattern. - `IDPMetadataURL` - Instead use `FetchMetadata()` - `HTTPClient` - Instead pass httpClient to FetchMetadata - `CookieMaxAge` - Instead assign a custom CookieRequestTracker or CookieSessionProvider - `CookieName` - Instead assign a custom CookieRequestTracker or CookieSessionProvider - `CookieDomain` - Instead assign a custom CookieRequestTracker or CookieSessionProvider Let us assume we have a simple web application to protect. We'll modify this application so it uses SAML to authenticate users. ```golang package main import ( ) ``` Each service provider must have a self-signed X.509 key pair established. You can generate your own with something like this: We will use `samlsp.Middleware` to wrap the endpoint we want to protect. Middleware provides both an `http.Handler` to serve the SAML specific URLs and a set of wrappers to require the user to be logged in. We also provide the URL where the service provider can fetch the metadata from the IDP at startup. In our case, we'll use [samltest.id](https://samltest.id/), an identity provider designed for testing.
```golang package main import ( ) ``` Next we'll have to register our service provider with the identity provider to establish trust from the service provider to the IDP. For [samltest.id](https://samltest.id/), you can do something like: Navigate to https://samltest.id/upload.php and upload the file you fetched. Now you should be able to authenticate. The flow should look like this: 1. You browse to `localhost:8000/hello` 1. The middleware redirects you to `https://samltest.id/idp/profile/SAML2/Redirect/SSO` 1. samltest.id prompts you for a username and password. 1. samltest.id returns you an HTML document which contains an HTML form set up to POST to `localhost:8000/saml/acs`. The form is automatically submitted if you have JavaScript enabled. 1. The local service validates the response, issues a session cookie, and redirects you to the original URL, `localhost:8000/hello`. 1. This time when `localhost:8000/hello` is requested there is a valid session and so the main content is served. Please see `example/idp/` for a substantially complete example of how to use the library and helpers to be an identity provider. The SAML standard is huge and complex with many dark corners and strange, unused features. This package implements the most commonly used subset of these features required to provide a single sign on experience. The package supports at least the subset of SAML known as [interoperable SAML](http://saml2int.org). This package supports the Web SSO profile. Message flows from the service provider to the IDP are supported using the HTTP Redirect binding and the HTTP POST binding. Message flows from the IDP to the service provider are supported via the HTTP POST binding. The package can produce signed SAML assertions, and can validate both signed and encrypted SAML assertions. It does not support signed or encrypted requests. The _RelayState_ parameter allows you to pass user state information across the authentication flow. The most common use for this is to allow a user to request a deep link into your site, be redirected through the SAML login flow, and upon successful completion, be directed to the originally requested link, rather than the root. Unfortunately, _RelayState_ is less useful than it could be. Firstly, it is not authenticated, so anything you supply must be signed to avoid XSS or CSRF. Secondly, it is limited to 80 bytes in length, which precludes signing. (See section 3.6.3.1 of SAMLProfiles.) The SAML specification is a collection of PDFs (sadly): - [SAMLCore](http://docs.oasis-open.org/security/saml/v2.0/saml-core-2.0-os.pdf) defines data types. - [SAMLBindings](http://docs.oasis-open.org/security/saml/v2.0/saml-bindings-2.0-os.pdf) defines the details of the HTTP requests in play. - [SAMLProfiles](http://docs.oasis-open.org/security/saml/v2.0/saml-profiles-2.0-os.pdf) describes data flows. - [SAMLConformance](http://docs.oasis-open.org/security/saml/v2.0/saml-conformance-2.0-os.pdf) includes a support matrix for various parts of the protocol. [SAMLtest](https://samltest.id/) is a testing ground for SAML service and identity providers. Please do not report security issues in the issue tracker. Rather, please contact me directly at ross@kndr.org ([PGP Key `78B6038B3B9DFB88`](https://keybase.io/crewjam)).
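Putting the walkthrough above together, a hedged sketch of a service provider wrapped with samlsp (the file paths, URLs and key handling are illustrative; calls follow the 0.4.x samlsp API described above):

```golang
package main

import (
	"context"
	"crypto/rsa"
	"crypto/tls"
	"crypto/x509"
	"fmt"
	"net/http"
	"net/url"

	"github.com/crewjam/saml/samlsp"
)

func hello(w http.ResponseWriter, r *http.Request) {
	fmt.Fprintln(w, "Hello, World!")
}

func main() {
	// Self-signed key pair for this service provider (generated beforehand).
	keyPair, err := tls.LoadX509KeyPair("myservice.cert", "myservice.key")
	if err != nil {
		panic(err)
	}
	keyPair.Leaf, err = x509.ParseCertificate(keyPair.Certificate[0])
	if err != nil {
		panic(err)
	}

	// Fetch the IDP metadata from samltest.id at startup.
	idpMetadataURL, _ := url.Parse("https://samltest.id/saml/idp")
	idpMetadata, err := samlsp.FetchMetadata(context.Background(), http.DefaultClient, *idpMetadataURL)
	if err != nil {
		panic(err)
	}

	rootURL, _ := url.Parse("http://localhost:8000")
	samlSP, _ := samlsp.New(samlsp.Options{
		URL:         *rootURL,
		Key:         keyPair.PrivateKey.(*rsa.PrivateKey),
		Certificate: keyPair.Leaf,
		IDPMetadata: idpMetadata,
	})

	// Require a SAML session for /hello and serve the SAML endpoints under /saml/.
	http.Handle("/hello", samlSP.RequireAccount(http.HandlerFunc(hello)))
	http.Handle("/saml/", samlSP)
	http.ListenAndServe(":8000", nil)
}
```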
Package validator implements value validations for structs and individual fields based on tags. It can also handle Cross Field and Cross Struct validation for nested structs. Validate A simple example usage: The error can be used like so Both StructErrors and FieldError implement the Error interface but its intended use is for development + debugging, not a production error message. Why not a better error message? Because this library intends for you to handle your own error messages. Why should I handle my own errors? Many reasons; for us, building an internationalized application, I needed to know the field and which validation failed so that I could provide an error in the user's specific language. The hierarchical error structure is hard to work with sometimes... Agreed, and the Flatten function comes to the rescue! Flatten will return a map of FieldError's but the field name will be namespaced. Custom functions can be added. Cross Field Validation can be implemented, for example Start & End Date range validation. Multiple validators on a field will process in the order defined. Bad Validator definitions are not handled by the library. NOTE: Baked In Cross field validation only compares fields on the same struct; if cross field + cross struct validation is needed, your own custom validator should be implemented. NOTE2: comma is the default separator of validation tags; if you wish to have a comma included within the parameter i.e. excludesall=, you will need to use the UTF-8 hex representation 0x2C, which is replaced in the code as a comma, so the above will become excludesall=0x2C. Here is a list of the current built-in validators: Validator notes: This package panics when bad input is provided; this is by design, as bad code like that should not make it to production.
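To make the struct-tag pattern concrete, here is a sketch against the current go-playground/validator (v10-style) API; the older version documented above returns StructErrors with a Flatten helper instead, so the error handling differs, but the tag usage is the same idea:

```golang
package main

import (
	"fmt"

	"github.com/go-playground/validator/v10"
)

type User struct {
	Email string `validate:"required,email"`
	Age   uint8  `validate:"gte=0,lte=130"`
}

func main() {
	validate := validator.New()

	err := validate.Struct(User{Email: "not-an-email", Age: 200})
	if err != nil {
		// Inspect which field failed and on which tag, e.g. to build a localized message.
		for _, fe := range err.(validator.ValidationErrors) {
			fmt.Println(fe.Field(), "failed on the", fe.Tag(), "tag")
		}
	}
}
```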
Package accessanalyzer provides the API client, operations, and parameter types for Access Analyzer. Identity and Access Management Access Analyzer helps you to set, verify, and refine your IAM policies by providing a suite of capabilities. Its features include findings for external and unused access, basic and custom policy checks for validating policies, and policy generation to generate fine-grained policies. To start using IAM Access Analyzer to identify external or unused access, you first need to create an analyzer. External access analyzers help identify potential risks of accessing resources by enabling you to identify any resource policies that grant access to an external principal. It does this by using logic-based reasoning to analyze resource-based policies in your Amazon Web Services environment. An external principal can be another Amazon Web Services account, a root user, an IAM user or role, a federated user, an Amazon Web Services service, or an anonymous user. You can also use IAM Access Analyzer to preview public and cross-account access to your resources before deploying permissions changes. Unused access analyzers help identify potential identity access risks by enabling you to identify unused IAM roles, unused access keys, unused console passwords, and IAM principals with unused service and action-level permissions. Beyond findings, IAM Access Analyzer provides basic and custom policy checks to validate IAM policies before deploying permissions changes. You can use policy generation to refine permissions by attaching a policy generated using access activity logged in CloudTrail logs. This guide describes the IAM Access Analyzer operations that you can call programmatically. For general information about IAM Access Analyzer, see Identity and Access Management Access Analyzer in the IAM User Guide.
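A minimal client sketch with the AWS SDK for Go v2 that lists analyzers (credentials and region come from the default config chain; output field names follow the generated types and may need checking against your SDK version):

```golang
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go-v2/config"
	"github.com/aws/aws-sdk-go-v2/service/accessanalyzer"
)

func main() {
	cfg, err := config.LoadDefaultConfig(context.TODO())
	if err != nil {
		log.Fatal(err)
	}

	client := accessanalyzer.NewFromConfig(cfg)

	// List the analyzers in the account and print their names and status.
	out, err := client.ListAnalyzers(context.TODO(), &accessanalyzer.ListAnalyzersInput{})
	if err != nil {
		log.Fatal(err)
	}
	for _, a := range out.Analyzers {
		fmt.Println(*a.Name, a.Status)
	}
}
```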
Package semver provides the ability to work with Semantic Versions (http://semver.org) in Go. Specifically it provides the ability to: There are two functions that can parse semantic versions. The `StrictNewVersion` function only parses valid version 2 semantic versions as outlined in the specification. The `NewVersion` function attempts to coerce a version into a semantic version and parse it. For example, if there is a leading v or a version listed without all 3 parts (e.g. 1.2) it will attempt to coerce it into a valid semantic version (e.g., 1.2.0). In both cases a `Version` object is returned that can be sorted, compared, and used in constraints. When parsing a version an optional error can be returned if there is an issue parsing the version. For example, The version object has methods to get the parts of the version, compare it to other versions, convert the version back into a string, and get the original string. For more details please see the documentation at https://godoc.org/github.com/Masterminds/semver. A set of versions can be sorted using the `sort` package from the standard library. For example, There are two methods for comparing versions. One uses comparison methods on `Version` instances and the other is using Constraints. There are some important differences to note between these two methods of comparison. The differences exist because the comparison methods on `Version` follow the specification while comparison ranges are not part of the specification. Different packages and tools have taken it upon themselves to come up with range rules. This has resulted in differences. For example, npm/js and Cargo/Rust follow similar patterns while PHP has a different pattern for ^. The comparison features in this package follow the npm/js and Cargo/Rust lead because applications using it have followed similar patterns with their versions. Checking a version against version constraints is one of the most featureful parts of the package. There are two elements to the comparisons. First, a comparison string is a list of comma or space separated AND comparisons. These are then separated by || (OR) comparisons. For example, `">= 1.2 < 3.0.0 || >= 4.2.3"` is looking for a comparison that's greater than or equal to 1.2 and less than 3.0.0 or is greater than or equal to 4.2.3. This can also be written as `">= 1.2, < 3.0.0 || >= 4.2.3"` The basic comparisons are: There are multiple methods to handle ranges and the first is hyphen ranges. These look like: The `x`, `X`, and `*` characters can be used as a wildcard character. This works for all comparison operators. When used on the `=` operator it falls back to the tilde operation. For example, Tilde Range Comparisons (Patch) The tilde (`~`) comparison operator is for patch level ranges when a minor version is specified and major level changes when the minor number is missing. For example, Caret Range Comparisons (Major) The caret (`^`) comparison operator is for major level changes once a stable (1.0.0) release has occurred. Prior to a 1.0.0 release the minor versions act as the API stability level. This is useful when comparing API versions, as a major change is API breaking. For example, In addition to testing a version against a constraint, a version can be validated against a constraint. When validation fails a slice of errors containing why a version didn't meet the constraint is returned. For example,
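A compact sketch covering parsing, sorting, constraint checking and validation (the constraint string mirrors the example above; the module path shown is the v3 one):

```golang
package main

import (
	"fmt"
	"sort"

	"github.com/Masterminds/semver/v3"
)

func main() {
	// NewVersion coerces "1.2" style inputs; StrictNewVersion would reject them.
	v, err := semver.NewVersion("1.2")
	if err != nil {
		panic(err)
	}

	// Sort a set of versions using the standard sort package.
	raw := []string{"1.2.3", "1.0.0", "1.3.0-beta.1"}
	vs := make([]*semver.Version, len(raw))
	for i, r := range raw {
		vs[i], _ = semver.NewVersion(r)
	}
	sort.Sort(semver.Collection(vs))

	// Check a version against a constraint expression.
	c, err := semver.NewConstraint(">= 1.2, < 3.0.0 || >= 4.2.3")
	if err != nil {
		panic(err)
	}
	fmt.Println(c.Check(v)) // true

	// Validate returns the reasons a version fails a constraint.
	ok, errs := c.Validate(semver.MustParse("0.9.0"))
	fmt.Println(ok, errs)
}
```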
Package securecookie encodes and decodes authenticated and optionally encrypted cookie values. Secure cookies can't be forged, because their values are validated using HMAC. When encrypted, the content is also inaccessible to malicious eyes. To use it, first create a new SecureCookie instance: The hashKey is required, used to authenticate the cookie value using HMAC. It is recommended to use a key with 32 or 64 bytes. The blockKey is optional, used to encrypt the cookie value -- set it to nil to not use encryption. If set, the length must correspond to the block size of the encryption algorithm. For AES, used by default, valid lengths are 16, 24, or 32 bytes to select AES-128, AES-192, or AES-256. Strong keys can be created using the convenience function GenerateRandomKey(). Once a SecureCookie instance is set, use it to encode a cookie value: Later, use the same SecureCookie instance to decode and validate a cookie value: We stored a map[string]string, but secure cookies can hold any value that can be encoded using encoding/gob. To store custom types, they must be registered first using gob.Register(). For basic types this is not needed; it works out of the box.
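The encode/decode round trip sketched out (key handling is simplified; real keys would be loaded from configuration rather than generated at startup so cookies survive restarts):

```golang
package main

import (
	"fmt"
	"net/http"

	"github.com/gorilla/securecookie"
)

var s = securecookie.New(
	securecookie.GenerateRandomKey(64), // hash key (HMAC)
	securecookie.GenerateRandomKey(32), // block key (AES-256); nil disables encryption
)

func SetCookieHandler(w http.ResponseWriter, r *http.Request) {
	value := map[string]string{"foo": "bar"}
	if encoded, err := s.Encode("cookie-name", value); err == nil {
		http.SetCookie(w, &http.Cookie{
			Name:     "cookie-name",
			Value:    encoded,
			Path:     "/",
			Secure:   true,
			HttpOnly: true,
		})
	}
}

func ReadCookieHandler(w http.ResponseWriter, r *http.Request) {
	if cookie, err := r.Cookie("cookie-name"); err == nil {
		value := make(map[string]string)
		if err = s.Decode("cookie-name", cookie.Value, &value); err == nil {
			fmt.Fprintf(w, "The value of foo is %q", value["foo"])
		}
	}
}

func main() {
	http.HandleFunc("/set", SetCookieHandler)
	http.HandleFunc("/read", ReadCookieHandler)
	http.ListenAndServe(":8080", nil)
}
```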
Package demotools provides a set of tools that help you write code examples. **FileSystem** The filesystem is used to abstract the os module and allow for variations to be swapped out, such as mocks for testing. **Pausable** The pausable interface creates an easy-to-mock pausing object for testing. **Questioner** The questioner is used in interactive examples to ask for input from the user at a command prompt, validate the answer, and ask the question again, if needed. It is exposed through an interface so that it can be mocked for unit testing. A pre-written mock is provided in the testtools package.
Package toml provides facilities for decoding and encoding TOML configuration files via reflection. There is also support for delaying decoding with the Primitive type, and querying the set of keys in a TOML document with the MetaData type. The specification implemented: https://github.com/toml-lang/toml The sub-command github.com/BurntSushi/toml/cmd/tomlv can be used to verify whether a file is a valid TOML document. It can also be used to print the type of each key in a TOML document. There are two important types of tests used for this package. The first is contained inside '*_test.go' files and uses the standard Go unit testing framework. These tests are primarily devoted to holistically testing the decoder and encoder. The second type of testing is used to verify the implementation's adherence to the TOML specification. These tests have been factored into their own project: https://github.com/BurntSushi/toml-test The reason the tests are in a separate project is so that they can be used by any implementation of TOML. Namely, it is language agnostic. Example StrictDecoding shows how to detect whether there are keys in the TOML document that weren't decoded into the value given. This is useful for returning an error to the user if they've included extraneous fields in their configuration. Example UnmarshalTOML shows how to implement a struct type that knows how to unmarshal itself. The struct must take full responsibility for mapping the values passed into the struct. The method may be used with interfaces in a struct in cases where the actual type is not known until the data is examined. Example Unmarshaler shows how to decode TOML strings into your own custom data type.
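A quick decoding sketch (the TOML document and struct are illustrative):

```golang
package main

import (
	"fmt"

	"github.com/BurntSushi/toml"
)

type Config struct {
	Age  int
	Cats []string
}

func main() {
	const data = `
age = 13
cats = ["Cauchy", "Plato"]
`
	var conf Config
	// Decode also returns a MetaData value, which reports undecoded keys and key types.
	md, err := toml.Decode(data, &conf)
	if err != nil {
		panic(err)
	}
	fmt.Println(conf.Age, conf.Cats, md.Undecoded())
}
```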
Package memguard implements a secure software enclave for the storage of sensitive information in memory. There are two main container objects exposed in this API. Enclave objects encrypt data and store the ciphertext whereas LockedBuffers are more like guarded memory allocations. There is a limit on the maximum number of LockedBuffer objects that can exist at any one time, imposed by the system's mlock limits. There is no limit on Enclaves. The general workflow is to store sensitive information in Enclaves when it is not immediately needed and decrypt it when and where it is needed. After use, the LockedBuffer should be destroyed. If you need access to the data inside a LockedBuffer in a type not covered by any methods provided by this API, you can type-cast the allocation's memory to whatever type you want. This is of course an unsafe operation and so care must be taken to ensure that the cast is valid and does not result in memory unsafety. Further examples of code and interesting use-cases can be found in the examples subpackage. Several functions exist to make the mass purging of data very easy. It is recommended to make use of them when appropriate. Core dumps are disabled by default. If you absolutely require them, you can enable them by using unix.Setrlimit to set RLIMIT_CORE to an appropriate value.
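A short sketch of the Enclave/LockedBuffer workflow described above (the import path assumes the awnumar/memguard module; the key size is illustrative):

```golang
package main

import (
	"fmt"

	"github.com/awnumar/memguard"
)

func main() {
	// Purge the session on interrupt and wipe sensitive state before exiting.
	memguard.CatchInterrupt()
	defer memguard.Purge()

	// Keep a 32-byte secret encrypted in an Enclave while it is not in use.
	key := memguard.NewEnclaveRandom(32)

	// Decrypt into a guarded LockedBuffer only when the plaintext is needed.
	buf, err := key.Open()
	if err != nil {
		memguard.SafePanic(err)
	}
	defer buf.Destroy()

	fmt.Println("secret length:", len(buf.Bytes()))
}
```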
Package csrf (gorilla/csrf) provides Cross Site Request Forgery (CSRF) prevention middleware for Go web applications & services. It includes: * The `csrf.Protect` middleware/handler provides CSRF protection on routes attached to a router or a sub-router. * A `csrf.Token` function that provides the token to pass into your response, whether that be an HTML form or a JSON response body. * ... and a `csrf.TemplateField` helper that you can pass into your `html/template` templates to replace a `{{ .csrfField }}` template tag with a hidden input field. gorilla/csrf is easy to use: add the middleware to individual handlers with the below: ... and then collect the token with `csrf.Token(r)` before passing it to the template, JSON body or HTTP header (you pick!). gorilla/csrf inspects the form body (first) and HTTP headers (second) on subsequent POST/PUT/PATCH/DELETE/etc. requests for the token. Note that the authentication key passed to `csrf.Protect([]byte(key))` should be 32-bytes long and persist across application restarts. Generating a random key won't allow you to authenticate existing cookies and will break your CSRF validation. Here's the common use-case: HTML forms you want to provide CSRF protection for, in order to protect against malicious POST requests being made: Note that the CSRF middleware will (by necessity) consume the request body if the token is passed via POST form values. If you need to consume this in your handler, insert your own middleware earlier in the chain to capture the request body. You can also send the CSRF token in the response header. This approach is useful if you're using a front-end JavaScript framework like Ember or Angular, or are providing a JSON API: If you're writing a client that's supposed to mimic browser behavior, make sure to send back the CSRF cookie (the default name is _gorilla_csrf, but this can be changed with the CookieName Option) along with either the X-CSRF-Token header or the gorilla.csrf.Token form field. In addition: getting CSRF protection right is important, so here's some background: * This library generates unique-per-request (masked) tokens as a mitigation against the BREACH attack (http://breachattack.com/). * The 'base' (unmasked) token is stored in the session, which means that multiple browser tabs won't cause a user problems as their per-request token is compared with the base token. * Operates on a "whitelist only" approach where safe (non-mutating) HTTP methods (GET, HEAD, OPTIONS, TRACE) are the *only* methods where token validation is not enforced. * The design is based on the battle-tested Django (https://docs.djangoproject.com/en/1.8/ref/csrf/) and Ruby on Rails (http://api.rubyonrails.org/classes/ActionController/RequestForgeryProtection.html) approaches. * Cookies are authenticated and based on the securecookie (https://github.com/gorilla/securecookie) library. They're also Secure (issued over HTTPS only) and are HttpOnly by default, because sane defaults are important. * Go's `crypto/rand` library is used to generate the 32 byte (256 bit) tokens and the one-time-pad used for masking them. This library does not seek to be adventurous.
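The form-based pattern from above, sketched with plain net/http (the 32-byte key shown is a placeholder and, as noted, must persist across restarts; the route names are illustrative):

```golang
package main

import (
	"html/template"
	"net/http"

	"github.com/gorilla/csrf"
)

var form = template.Must(template.New("signup").Parse(
	`<form method="POST" action="/signup/submit">{{ .csrfField }}<input type="submit" value="Sign up"></form>`))

func ShowSignupForm(w http.ResponseWriter, r *http.Request) {
	// TemplateField renders the hidden input carrying the per-request token.
	form.Execute(w, map[string]interface{}{
		"csrfField": csrf.TemplateField(r),
	})
}

func SubmitSignupForm(w http.ResponseWriter, r *http.Request) {
	// If we got this far, the middleware already validated the token.
	w.Write([]byte("signed up!"))
}

func main() {
	mux := http.NewServeMux()
	mux.HandleFunc("/signup", ShowSignupForm)
	mux.HandleFunc("/signup/submit", SubmitSignupForm)

	// Placeholder key: use a stable, secret 32-byte value in a real deployment.
	CSRF := csrf.Protect([]byte("32-byte-long-auth-key-goes-here!"))
	http.ListenAndServe(":8000", CSRF(mux))
}
```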
Package logr defines a general-purpose logging API and abstract interfaces to back that API. Packages in the Go ecosystem can depend on this package, while callers can implement logging with whatever backend is appropriate. Logging is done using a Logger instance. Logger is a concrete type with methods, which defers the actual logging to a LogSink interface. The main methods of Logger are Info() and Error(). Arguments to Info() and Error() are key/value pairs rather than printf-style formatted strings, emphasizing "structured logging". With Go's standard log package, we might write: With logr's structured logging, we'd write: Errors are much the same. Instead of: We'd write: Info() and Error() are very similar, but they are separate methods so that LogSink implementations can choose to do things like attach additional information (such as stack traces) on calls to Error(). Error() messages are always logged, regardless of the current verbosity. If there is no error instance available, passing nil is valid. Often we want to log information only when the application is in "verbose mode". To write log lines that are more verbose, Logger has a V() method. The higher the V-level of a log line, the less critical it is considered. Log-lines with V-levels that are not enabled (as per the LogSink) will not be written. Level V(0) is the default, and logger.V(0).Info() has the same meaning as logger.Info(). Negative V-levels have the same meaning as V(0). Error messages do not have a verbosity level and are always logged. Where we might have written: We can write: Logger instances can have name strings so that all messages logged through that instance have additional context. For example, you might want to add a subsystem name: The WithName() method returns a new Logger, which can be passed to constructors or other functions for further use. Repeated use of WithName() will accumulate name "segments". These name segments will be joined in some way by the LogSink implementation. It is strongly recommended that name segments contain simple identifiers (letters, digits, and hyphen), and do not contain characters that could muddle the log output or confuse the joining operation (e.g. whitespace, commas, periods, slashes, brackets, quotes, etc). Logger instances can store any number of key/value pairs, which will be logged alongside all messages logged through that instance. For example, you might want to create a Logger instance per managed object: With the standard log package, we might write: With logr we'd write: Logger has very few hard rules, with the goal that LogSink implementations might have a lot of freedom to differentiate. There are, however, some things to consider. The log message consists of a constant message attached to the log line. This should generally be a simple description of what's occurring, and should never be a format string. Variable information can then be attached using named values. Keys are arbitrary strings, but should generally be constant values. Values may be any Go value, but how the value is formatted is determined by the LogSink implementation. Logger instances are meant to be passed around by value. Code that receives such a value can call its methods without having to check whether the instance is ready for use. The zero logger (= Logger{}) is identical to Discard() and discards all log entries. Code that receives a Logger by value can simply call it, the methods will never crash. For cases where passing a logger is optional, a pointer to Logger should be used.
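A small usage sketch of the Logger API described above (a real program would wrap a concrete LogSink from an adapter such as zapr or stdr; logr.Discard() stands in here so the example is self-contained):

```golang
package main

import (
	"errors"

	"github.com/go-logr/logr"
)

func doWork(log logr.Logger) {
	// Name segments and key/value pairs are carried by every subsequent line.
	log = log.WithName("worker").WithValues("queue", "emails")

	log.Info("starting batch", "size", 42)

	if err := errors.New("connection refused"); err != nil {
		log.Error(err, "failed to flush batch", "retryIn", "5s")
	}

	// Verbose detail, only emitted when the sink enables V(1).
	log.V(1).Info("per-item detail", "item", 7)
}

func main() {
	doWork(logr.Discard())
}
```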
Keys are not strictly required to conform to any specification or regex, but it is recommended that they: These guidelines help ensure that log data is processed properly regardless of the log implementation. For example, log implementations will try to output JSON data or will store data for later database (e.g. SQL) queries. While users are generally free to use key names of their choice, it's generally best to avoid using the following keys, as they're frequently used by implementations: Implementations are encouraged to make use of these keys to represent the above concepts, when necessary (for example, in a pure-JSON output form, it would be necessary to represent at least message and timestamp as ordinary named values). Implementations may choose to give callers access to the underlying logging implementation. The recommended pattern for this is: Logger grants access to the sink to enable type assertions like this: Custom `With*` functions can be implemented by copying the complete Logger struct and replacing the sink in the copy: Don't use New to construct a new Logger with a LogSink retrieved from an existing Logger. Source code attribution might not work correctly and unexported fields in Logger get lost. Beware that the same LogSink instance may be shared by different logger instances. Calling functions that modify the LogSink will affect all of those.
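To illustrate the implementation-access pattern, a hedged sketch in which a hypothetical sink exposes its backend through an Underlier-style interface (the interface and backend type here are made up for the example; logr itself only provides GetSink on the Logger):

```golang
package main

import (
	"fmt"
	"os"

	"github.com/go-logr/logr"
)

// fileBackend stands in for whatever concrete machinery a real sink wraps.
// Both it and the Underlier interface below are hypothetical.
type fileBackend struct{ out *os.File }

type Underlier interface {
	GetUnderlying() *fileBackend
}

func tune(logger logr.Logger) {
	// GetSink exposes the LogSink; a type assertion reaches the implementation.
	if u, ok := logger.GetSink().(Underlier); ok {
		backend := u.GetUnderlying()
		fmt.Println("sink writes to:", backend.out.Name())
	}
}

func main() {
	// With Discard() the assertion simply fails, which is the safe default.
	tune(logr.Discard())
}
```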
Package arg parses command line arguments using the fields from a struct. For example, defines two command line arguments, which can be set using any of The fastest way to see how to use go-arg is to read the examples below. Fields can be bool, string, any float type, or any signed or unsigned integer type. They can also be slices of any of the above, or slices of pointers to any of the above. Tags can be specified using the `arg` and `help` tag names: Any tag string that starts with a single hyphen is the short form for an argument (e.g. `./example -d`), and any tag string that starts with two hyphens is the long form for the argument (instead of the field name). Other valid tag strings are `positional` and `required`. Fields can be excluded from processing with `arg:"-"`. This example demonstrates basic usage This example demonstrates arguments that have default values This example shows the error string generated by go-arg when an invalid option is provided This example shows the error string generated by go-arg when an invalid option is provided This example shows the usage string generated by go-arg with customized placeholders This example shows the usage string generated by go-arg This example shows the usage string generated by go-arg when using subcommands This example shows the usage string generated by go-arg when using subcommands This example demonstrates arguments with keys and values separated by commas This example demonstrates arguments with keys and values This example demonstrates multiple value arguments that can be mixed with other arguments. This example demonstrates arguments that have multiple values This example demonstrates positional arguments This example demonstrates arguments that are required This example demonstrates use of subcommands This example shows how to print help for an explicit subcommand This example shows how to print help for a subcommand that is nested several levels deep
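A basic sketch of declaring arguments as struct fields (the import path assumes the alexflint/go-arg module; field names and defaults are illustrative):

```golang
package main

import (
	"fmt"

	"github.com/alexflint/go-arg"
)

func main() {
	var args struct {
		Verbose bool     `arg:"-v" help:"enable verbose output"`
		Workers int      `help:"number of workers"`
		Input   []string `arg:"positional"`
	}
	args.Workers = 4 // default value: set the field before parsing

	arg.MustParse(&args)
	fmt.Println(args.Verbose, args.Workers, args.Input)
}
```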
Package validator implements value validations based on struct tags. In code it is often necessary to validate that a given value is valid before using it for something. A typical example might be something like this. This is a simple enough example, but it can get significantly more complex, especially when dealing with structs. You get the idea. Package validator allows one to define valid values as struct tags when defining a new struct type. Then validating a variable of type NewUserRequest becomes trivial. Here is the list of validator functions built into the package. Note that there are no tests to prevent conflicting validator parameters. For instance, these fields will never be valid. It is possible to define custom validation functions by using SetValidationFunc. First, one needs to create a validation function. Then one needs to add it to the list of validation funcs and give it a "tag" name. Then it is possible to use the notzz validation tag. This will print "Field A error: value cannot be ZZ" To use parameters, it is very similar. And then the code below should print "Field A error: value cannot be ABC". As well, it is possible to overwrite builtin validation functions. And you can delete a validation function by setting it to nil. Using a non-existing validation func in a field tag will always return false and with error validate.ErrUnknownTag. Finally, package validator also provides a helper function that can be used to validate simple variables/values. In case there is a reason why one would not wish to use tag 'validate' (maybe due to a conflict with a different package), it is possible to tell the package to use a different tag with SetTag. SetTag is permanent. The new tag name will be used until it is again changed with a new call to SetTag. A way to temporarily use a different tag exists. You may often need to have a different set of validation rules for different situations. In all the examples above, we only used the default validator but you could create a new one and set specific rules for it. For instance, you might use the same struct to decode incoming JSON for a REST API but your needs will change when you're using it to, say, create a new instance in storage vs. when you need to change something. Maybe when creating a new user, you need to make sure all values in the struct are filled, but then you use the same struct to handle incoming requests to, say, change the password, in which case you only need the Username and the Password and don't care for the others. You might use two different validators. It is also possible to do all of that using only the default validator as long as SetTag is always called before calling validator.Validate() or you chain the call with WithTag().
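A sketch of tag-based validation plus a custom validation function, following the API described above (the import path assumes gopkg.in/validator.v2; the notzz example mirrors the text):

```golang
package main

import (
	"errors"
	"fmt"
	"reflect"

	"gopkg.in/validator.v2"
)

type NewUserRequest struct {
	Username string `validate:"min=3,max=40,regexp=^[a-zA-Z]*$"`
	Name     string `validate:"nonzero"`
	Age      int    `validate:"min=18"`
	Password string `validate:"min=8"`
}

// notZZ is a custom validation function registered under the "notzz" tag.
func notZZ(v interface{}, param string) error {
	st := reflect.ValueOf(v)
	if st.Kind() != reflect.String {
		return errors.New("notzz only validates strings")
	}
	if st.String() == "ZZ" {
		return errors.New("value cannot be ZZ")
	}
	return nil
}

func main() {
	validator.SetValidationFunc("notzz", notZZ)

	nur := NewUserRequest{Username: "something", Age: 20}
	if errs := validator.Validate(nur); errs != nil {
		// errs describes which fields failed and why.
		fmt.Println(errs)
	}
}
```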
Package codepipeline provides the API client, operations, and parameter types for AWS CodePipeline. This is the CodePipeline API Reference. This guide provides descriptions of the actions and data types for CodePipeline. Some functionality for your pipeline can only be configured through the API. For more information, see the CodePipeline User Guide. You can use the CodePipeline API to work with pipelines, stages, actions, and transitions. Pipelines are models of automated release processes. Each pipeline is uniquely named, and consists of stages, actions, and transitions. You can work with pipelines by calling: CreatePipeline DeletePipeline GetPipeline GetPipelineExecution GetPipelineState ListActionExecutions ListPipelines ListPipelineExecutions StartPipelineExecution StopPipelineExecution UpdatePipeline Pipelines include stages. Each stage contains one or more actions that must complete before the next stage begins. A stage results in success or failure. If a stage fails, the pipeline stops at that stage and remains stopped until either a new version of an artifact appears in the source location, or a user takes action to rerun the most recent artifact through the pipeline. You can call GetPipelineState, which displays the status of a pipeline, including the status of stages in the pipeline, or GetPipeline, which returns the entire structure of the pipeline, including the stages of that pipeline. For more information about the structure of stages and actions, see CodePipeline Pipeline Structure Reference. Pipeline stages include actions that are categorized as source or build actions performed in a stage of a pipeline. For example, you can use a source action to import artifacts into a pipeline from a source such as Amazon S3. Like stages, you do not work with actions directly in most cases, but you do define and interact with actions when working with pipeline operations such as CreatePipeline and GetPipelineState. Valid action categories are: Source Build Test Deploy Approval Invoke Compute Pipelines also include transitions, which allow the transition of artifacts from one stage to the next in a pipeline after the actions in one stage complete. You can work with transitions by calling: DisableStageTransition EnableStageTransition For third-party integrators or developers who want to create their own integrations with CodePipeline, the expected sequence varies from the standard API user. To integrate with CodePipeline, developers need to work with the following items: Jobs, which are instances of an action. For example, a job for a source action might import a revision of an artifact from a source. You can work with jobs by calling: AcknowledgeJob GetJobDetails PollForJobs PutJobFailureResult PutJobSuccessResult Third party jobs, which are instances of an action created by a partner action and integrated into CodePipeline. Partner actions are created by members of the Amazon Web Services Partner Network. You can work with third party jobs by calling: AcknowledgeThirdPartyJob GetThirdPartyJobDetails PollForThirdPartyJobs PutThirdPartyJobFailureResult PutThirdPartyJobSuccessResult
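A minimal client sketch with the AWS SDK for Go v2 that lists pipelines (credentials and region come from the default config chain; output field names follow the generated types and may need checking against your SDK version):

```golang
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go-v2/config"
	"github.com/aws/aws-sdk-go-v2/service/codepipeline"
)

func main() {
	cfg, err := config.LoadDefaultConfig(context.TODO())
	if err != nil {
		log.Fatal(err)
	}

	client := codepipeline.NewFromConfig(cfg)

	// List the pipelines in the account and print their names.
	out, err := client.ListPipelines(context.TODO(), &codepipeline.ListPipelinesInput{})
	if err != nil {
		log.Fatal(err)
	}
	for _, p := range out.Pipelines {
		fmt.Println(*p.Name)
	}
}
```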
Package appconfig provides the API client, operations, and parameter types for Amazon AppConfig. AppConfig feature flags and dynamic configurations help software builders quickly and securely adjust application behavior in production environments without full code deployments. AppConfig speeds up software release frequency, improves application resiliency, and helps you address emergent issues more quickly. With feature flags, you can gradually release new capabilities to users and measure the impact of those changes before fully deploying the new capabilities to all users. With operational flags and dynamic configurations, you can update block lists, allow lists, throttling limits, logging verbosity, and perform other operational tuning to quickly respond to issues in production environments. AppConfig is a capability of Amazon Web Services Systems Manager. Despite the fact that application configuration content can vary greatly from application to application, AppConfig supports the following use cases, which cover a broad spectrum of customer needs: Feature flags and toggles - Safely release new capabilities to your customers in a controlled environment. Instantly roll back changes if you experience a problem. Application tuning - Carefully introduce application changes while testing the impact of those changes with users in production environments. Allow list or block list - Control access to premium features or instantly block specific users without deploying new code. Centralized configuration storage - Keep your configuration data organized and consistent across all of your workloads. You can use AppConfig to deploy configuration data stored in the AppConfig hosted configuration store, Secrets Manager, Systems Manager, Parameter Store, or Amazon S3. This section provides a high-level description of how AppConfig works and how you get started. 1. Identify configuration values in code you want to manage in the cloud. Before you start creating AppConfig artifacts, we recommend you identify configuration data in your code that you want to dynamically manage using AppConfig. Good examples include feature flags or toggles, allow and block lists, logging verbosity, service limits, and throttling rules, to name a few. If your configuration data already exists in the cloud, you can take advantage of AppConfig validation, deployment, and extension features to further streamline configuration data management. 2. Create an application namespace. To create a namespace, you create an AppConfig artifact called an application. An application is simply an organizational construct like a folder. 3. Create environments. For each AppConfig application, you define one or more environments. An environment is a logical grouping of targets, such as applications in a Beta or Production environment, Lambda functions, or containers. You can also define environments for application subcomponents, such as the Web, Mobile, and Back-end. You can configure Amazon CloudWatch alarms for each environment. The system monitors alarms during a configuration deployment. If an alarm is triggered, the system rolls back the configuration. 4. Create a configuration profile. A configuration profile includes, among other things, a URI that enables AppConfig to locate your configuration data in its stored location and a profile type. AppConfig supports two configuration profile types: feature flags and freeform configurations.
Feature flag configuration profiles store their data in the AppConfig hosted configuration store and the URI is simply hosted. For freeform configuration profiles, you can store your data in the AppConfig hosted configuration store or any Amazon Web Services service that integrates with AppConfig, as described in Creating a free form configuration profile in the AppConfig User Guide. A configuration profile can also include optional validators to ensure your configuration data is syntactically and semantically correct. AppConfig performs a check using the validators when you start a deployment. If any errors are detected, the deployment rolls back to the previous configuration data. 5. Deploy configuration data. When you create a new deployment, you specify the following: An application ID A configuration profile ID A configuration version An environment ID where you want to deploy the configuration data A deployment strategy ID that defines how fast you want the changes to take effect When you call the StartDeployment API action, AppConfig performs the following tasks: Retrieves the configuration data from the underlying data store by using the location URI in the configuration profile. Verifies the configuration data is syntactically and semantically correct by using the validators you specified when you created your configuration profile. Caches a copy of the data so it is ready to be retrieved by your application. This cached copy is called the deployed data. 6. Retrieve the configuration. You can configure AppConfig Agent as a local host and have the agent poll AppConfig for configuration updates. The agent calls the StartConfigurationSession and GetLatestConfiguration API actions and caches your configuration data locally. To retrieve the data, your application makes an HTTP call to the localhost server. AppConfig Agent supports several use cases, as described in Simplified retrieval methods in the AppConfig User Guide. If AppConfig Agent isn't supported for your use case, you can configure your application to poll AppConfig for configuration updates by directly calling the StartConfigurationSession and GetLatestConfiguration API actions. This reference is intended to be used with the AppConfig User Guide.
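A minimal client sketch with the AWS SDK for Go v2 that lists AppConfig applications, i.e. the namespaces created in step 2 above (configuration retrieval itself goes through AppConfig Agent or the session APIs mentioned above; output field names follow the generated types and may need checking against your SDK version):

```golang
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go-v2/config"
	"github.com/aws/aws-sdk-go-v2/service/appconfig"
)

func main() {
	cfg, err := config.LoadDefaultConfig(context.TODO())
	if err != nil {
		log.Fatal(err)
	}

	client := appconfig.NewFromConfig(cfg)

	// List the AppConfig applications in the account and print their IDs and names.
	out, err := client.ListApplications(context.TODO(), &appconfig.ListApplicationsInput{})
	if err != nil {
		log.Fatal(err)
	}
	for _, app := range out.Items {
		fmt.Println(*app.Id, *app.Name)
	}
}
```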