Package testify is a set of packages that provide many tools for testifying that your code will behave as you intend. testify contains the following packages: The assert package provides a comprehensive set of assertion functions that tie in to the Go testing system. The http package contains tools to make it easier to test http activity using the Go testing system. The mock package provides a system by which it is possible to mock your objects and verify calls are happening as expected. The suite package provides a basic structure for using structs as testing suites, and methods on those structs as tests. It includes setup/teardown functionality in the way of interfaces.
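As a brief, hedged illustration of the assert package (the test name and values below are invented for the example and would live in a _test.go file):

```go
package example

import (
	"testing"

	"github.com/stretchr/testify/assert"
)

// TestSomething shows the typical assert style: each assertion reports a
// failure through *testing.T but lets the test continue running.
func TestSomething(t *testing.T) {
	result := 1 + 2
	assert.Equal(t, 3, result, "they should be equal")
	assert.NotNil(t, &result, "pointer should not be nil")
}
```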
Package cloudflare implements the Cloudflare v4 API. This file contains helper methods for accepting variants (pointers, values, slices, etc.) of a particular type and returning them as another; a common use is converting values to pointers and back. Most follow the convention of (where <type> is a Go type such as Bool): <type>Ptr: Accepts a value and returns a pointer. <type>: Accepts a pointer and returns a value. <type>PtrSlice: Accepts a slice of values and returns a slice of pointers. <type>Slice: Accepts a slice of pointers and returns a slice of values. <type>PtrMap: Accepts a string map of values and returns a string map of pointers. <type>Map: Accepts a string map of pointers and returns a string map of values. Not all Go types are covered here, only those that are commonly used.
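A minimal hand-written sketch of the convention described above (these helpers are written out here for illustration, assuming a Bool variant, and are not copied from the package itself):

```go
package example

// BoolPtr accepts a value and returns a pointer to it (the <type>Ptr form).
func BoolPtr(v bool) *bool { return &v }

// Bool accepts a pointer and returns its value, or the zero value for a nil
// pointer (the <type> form).
func Bool(p *bool) bool {
	if p == nil {
		return false
	}
	return *p
}

// BoolPtrSlice accepts a slice of values and returns a slice of pointers
// (the <type>PtrSlice form).
func BoolPtrSlice(vs []bool) []*bool {
	ps := make([]*bool, len(vs))
	for i := range vs {
		ps[i] = &vs[i]
	}
	return ps
}
```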
Package sns provides the API client, operations, and parameter types for Amazon Simple Notification Service. Amazon Simple Notification Service (Amazon SNS) is a web service that enables you to build distributed web-enabled applications. Applications can use Amazon SNS to easily push real-time notification messages to interested subscribers over multiple delivery protocols. For more information about this product see the Amazon SNS product page. For detailed information about Amazon SNS features and their associated API calls, see the Amazon SNS Developer Guide. For information on the permissions you need to use this API, see Identity and access management in Amazon SNS in the Amazon SNS Developer Guide. We also provide SDKs that enable you to access Amazon SNS from your preferred programming language. The SDKs contain functionality that automatically takes care of tasks such as: cryptographically signing your service requests, retrying requests, and handling error responses. For a list of available SDKs, go to Tools for Amazon Web Services.
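A hedged sketch of publishing a message with this client (the topic ARN and message text are placeholders):

```go
package main

import (
	"context"
	"log"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/config"
	"github.com/aws/aws-sdk-go-v2/service/sns"
)

func main() {
	ctx := context.Background()

	// Load credentials and region from the environment/shared config.
	cfg, err := config.LoadDefaultConfig(ctx)
	if err != nil {
		log.Fatal(err)
	}
	client := sns.NewFromConfig(cfg)

	// Publish a message to a topic (the ARN below is a placeholder).
	out, err := client.Publish(ctx, &sns.PublishInput{
		TopicArn: aws.String("arn:aws:sns:us-east-1:123456789012:my-topic"),
		Message:  aws.String("hello from SNS"),
	})
	if err != nil {
		log.Fatal(err)
	}
	log.Println("message id:", aws.ToString(out.MessageId))
}
```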
Package spanprocessor contains logic to modify top level settings of a span, such as its name.
Package dig provides an opinionated way of resolving object dependencies. STABLE. No breaking changes will be made in this major version. Dig exposes type Container as an object capable of resolving a directed acyclic dependency graph. Use the New function to create one. Constructors for different types are added to the container by using the Provide method. A constructor can declare a dependency on another type by simply adding it as a function parameter. Dependencies for a type can be added to the graph both before and after the type was added. Multiple constructors can rely on the same type. The container creates a singleton for each retained type, instantiating it at most once when requested directly or as a dependency of another type. Constructors can declare any number of dependencies as parameters and, optionally, return errors. Constructors can also return multiple results to add multiple types to the container. Constructors that accept a variadic number of arguments are treated as if they don't have those arguments: the constructor will be called with all other dependencies and no variadic arguments. Types added to the container may be consumed by using the Invoke method. Invoke accepts any function that accepts one or more parameters and, optionally, returns an error. Dig calls the function with the requested type, instantiating only those types that were requested by the function. The call fails if any type or its dependencies (both direct and transitive) were not available in the container. Any error returned by the invoked function is propagated back to the caller. Constructors declare their dependencies as function parameters. This can very quickly become unreadable if the constructor has a lot of dependencies. A pattern employed to improve readability in a situation like this is to create a struct that lists all the parameters of the function as fields and to change the function to accept that struct instead. This is referred to as a parameter object. Dig has first class support for parameter objects: any struct embedding dig.In gets treated as a parameter object, and a constructor accepting one is equivalent to a constructor that declares the same dependencies as individual parameters. Handlers can receive any combination of parameter objects and parameters. Result objects are the flip side of parameter objects. These are structs that represent multiple outputs from a single function as fields in the struct. Structs embedding dig.Out get treated as result objects. Constructors often don't have a hard dependency on some types and are able to operate in a degraded state when that dependency is missing. Dig supports declaring dependencies as optional by adding an `optional:"true"` tag to fields of a dig.In struct. Fields in a dig.In struct that have the `optional:"true"` tag are treated as optional by Dig. If an optional field is not available in the container, the constructor will receive a zero value for the field. Constructors that declare dependencies as optional MUST handle the case of those dependencies being absent. The optional tag also allows adding new dependencies without breaking existing consumers of the constructor. Some use cases call for multiple values of the same type. Dig allows adding multiple values of the same type to the container with the use of Named Values. Named Values can be produced by passing the dig.Name option when a constructor is provided. All values produced by that constructor will have the given name.
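A small hedged sketch of the basic Provide/Invoke flow (the constructors here are invented for illustration):

```go
package main

import (
	"log"
	"net/http"

	"go.uber.org/dig"
)

// NewMux is a constructor; dig calls it at most once when *http.ServeMux
// is requested directly or as a dependency of another type.
func NewMux() *http.ServeMux { return http.NewServeMux() }

// NewServer declares its dependency simply by taking it as a parameter.
func NewServer(mux *http.ServeMux) *http.Server {
	return &http.Server{Addr: ":8080", Handler: mux}
}

func main() {
	c := dig.New()
	if err := c.Provide(NewMux); err != nil {
		log.Fatal(err)
	}
	if err := c.Provide(NewServer); err != nil {
		log.Fatal(err)
	}
	// Invoke instantiates only the types the function asks for.
	if err := c.Invoke(func(s *http.Server) {
		log.Println("server configured for", s.Addr)
	}); err != nil {
		log.Fatal(err)
	}
}
```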
You can provide *sql.DB into a Container under different names by passing the dig.Name option when providing its constructors. Alternatively, you can produce a dig.Out struct and tag its fields with `name:".."` to have the corresponding value added to the graph under the specified name. Regardless of how a Named Value was produced, it can be consumed by another constructor by accepting a dig.In struct that has exported fields with the same name AND type that you provided. The name tag may be combined with the optional tag to declare the dependency optional. Added in Dig 1.2, Dig provides value groups to allow producing and consuming many values of the same type. Value groups allow constructors to send values to a named, unordered collection in the container. Other constructors can request all values in this collection as a slice. Constructors can send values into value groups by returning a dig.Out struct tagged with `group:".."`. Any number of constructors may provide values to this named collection. Other constructors can request all values for this collection by requesting a slice tagged with `group:".."`. This will execute all constructors that provide a value to that group in an unspecified order. Note that values in a value group are unordered; Dig makes no guarantees about the order in which these values will be produced. Value groups can be used to provide multiple values for a group from a dig.Out using slices; however, since groups are retrieved by requesting a slice, this implies that the values must be retrieved using a slice of slices. As of dig v1.9.0, if you want to provide individual elements to the group instead of the slice itself, you can add the `flatten` modifier to the group from a dig.Out.
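A hedged sketch of named values consumed through parameter and result objects (the "rw"/"ro" names and the Store type are invented for the example):

```go
package example

import (
	"database/sql"

	"go.uber.org/dig"
)

// ConnResult is a result object: each field is added to the graph under
// the name given in its tag.
type ConnResult struct {
	dig.Out

	ReadWrite *sql.DB `name:"rw"`
	ReadOnly  *sql.DB `name:"ro"`
}

// ConnParams is a parameter object: fields are resolved from the container
// by name; the optional tag makes the read-only handle non-mandatory.
type ConnParams struct {
	dig.In

	ReadWrite *sql.DB `name:"rw"`
	ReadOnly  *sql.DB `name:"ro" optional:"true"`
}

type Store struct {
	rw, ro *sql.DB
}

// NewStore consumes the named values by accepting the dig.In struct.
func NewStore(p ConnParams) *Store {
	return &Store{rw: p.ReadWrite, ro: p.ReadOnly}
}
```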
Package bluemonday provides a way of describing an allowlist of HTML elements and attributes as a policy, and for that policy to be applied to untrusted strings from users that may contain markup. All elements and attributes not on the allowlist will be stripped. The default bluemonday.UGCPolicy().Sanitize() strips dangerous markup such as scripts and event handlers, while allowing benign content such as ordinary links to pass through mostly unaltered (they gain a rel="nofollow"). The primary purpose of bluemonday is to take potentially unsafe user generated content (from things like Markdown, HTML WYSIWYG tools, etc) and make it safe for you to put on your website. It protects sites against XSS (http://en.wikipedia.org/wiki/Cross-site_scripting) and other malicious content that a user interface may deliver. There are many vectors for an XSS attack (https://www.owasp.org/index.php/XSS_Filter_Evasion_Cheat_Sheet) and the safest thing to do is to sanitize user input against a known safe list of HTML elements and attributes. Note: You should always run bluemonday after any other processing. If you use blackfriday (https://github.com/russross/blackfriday) or Pandoc (http://johnmacfarlane.net/pandoc/) then bluemonday should be run after these steps. This ensures that no insecure HTML is introduced later in your process. bluemonday is heavily inspired by both the OWASP Java HTML Sanitizer (https://code.google.com/p/owasp-java-html-sanitizer/) and the HTML Purifier (http://htmlpurifier.org/). We ship two default policies. One is bluemonday.StrictPolicy(), which can be thought of as equivalent to stripping all HTML elements and their attributes, as it has nothing on its allowlist. The other is bluemonday.UGCPolicy(), which allows a broad selection of HTML elements and attributes that are safe for user generated content. Note that this policy does not allow iframes, object, embed, styles, script, etc. The essence of building a policy is to determine which HTML elements and attributes are considered safe for your scenario. OWASP provides an XSS prevention cheat sheet (https://www.google.com/search?q=xss+prevention+cheat+sheet) to help explain the risks.
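For instance, a hedged sketch of sanitizing user-supplied HTML with the UGC policy (the input string is an invented example):

```go
package main

import (
	"fmt"

	"github.com/microcosm-cc/bluemonday"
)

func main() {
	p := bluemonday.UGCPolicy()

	// The event handler is stripped; the link survives and gains
	// rel="nofollow".
	unsafe := `<a onblur="alert(secret)" href="http://www.google.com">Google</a>`
	fmt.Println(p.Sanitize(unsafe))
}
```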
Package qrcode implements a QR Code encoder. A QR Code is a matrix (two-dimensional) barcode. Arbitrary content may be encoded. A QR Code contains error recovery information to aid reading damaged or obscured codes. There are four levels of error recovery: qrcode.{Low, Medium, High, Highest}. QR Codes with a higher recovery level are more robust to damage, at the cost of being physically larger. Three functions cover most use cases: - Create a PNG image: - Create a PNG image and write to a file: - Create a PNG image with custom colors and write to file: All examples use the qrcode.Medium error Recovery Level and create a fixed 256x256px size QR Code. The last function creates a white on black instead of black on white QR Code. To generate a variable sized image instead, specify a negative size (in place of the 256 above), such as -4 or -5. Larger negative numbers create larger images: A size of -5 sets each module (QR Code "pixel") to be 5px wide/high. - Create a PNG image (variable size, with minimum white padding) and write to a file: The maximum capacity of a QR Code varies according to the content encoded and the error recovery level. The maximum capacity is 2,953 bytes, 4,296 alphanumeric characters, 7,089 numeric digits, or a combination of these. This package implements a subset of QR Code 2005, as defined in ISO/IEC 18004:2006.
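A hedged sketch of the three calls described above, using the skip2/go-qrcode function names as I understand them (content and filenames are placeholders):

```go
package main

import (
	"image/color"
	"log"

	qrcode "github.com/skip2/go-qrcode"
)

func main() {
	// Create a 256x256px PNG in memory.
	png, err := qrcode.Encode("https://example.org", qrcode.Medium, 256)
	if err != nil {
		log.Fatal(err)
	}
	_ = png

	// Create a PNG and write it straight to a file.
	if err := qrcode.WriteFile("https://example.org", qrcode.Medium, 256, "qr.png"); err != nil {
		log.Fatal(err)
	}

	// White-on-black variant with custom colors.
	if err := qrcode.WriteColorFile("https://example.org", qrcode.Medium, 256,
		color.Black, color.White, "qr-inverted.png"); err != nil {
		log.Fatal(err)
	}
}
```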
test.go is a "Go script" for running Vitess tests. It runs each test in its own Docker container for hermeticity and (potentially) parallelism. If a test fails, this script will save the output in _test/ and continue with other tests. Before using it, you should have Docker 1.5+ installed, and have your user in the group that lets you run the docker command without sudo. The first time you run against a given flavor, it may take some time for the corresponding bootstrap image (vitess/bootstrap:<flavor>) to be downloaded. It is meant to be run from the Vitess root, like so: For a list of options, run:
Package kubebuilder contains pkged files compiled into the go binaries.
Package gophercloud provides a multi-vendor interface to OpenStack-compatible clouds. The library has a three-level hierarchy: providers, services, and resources. Provider structs represent the cloud providers that offer and manage a collection of services. You will generally want to create one Provider client per OpenStack cloud. Use your OpenStack credentials to create a Provider client. The IdentityEndpoint is typically referred to as "auth_url" or "OS_AUTH_URL" in information provided by the cloud operator. Additionally, the cloud may refer to TenantID or TenantName as project_id and project_name. Credentials are specified like so: You can authenticate with a token by doing: You may also use the openstack.AuthOptionsFromEnv() helper function. This function reads in standard environment variables frequently found in an OpenStack `openrc` file. Again note that Gophercloud currently uses "tenant" instead of "project". Service structs are specific to a provider and handle all of the logic and operations for a particular OpenStack service. Examples of services include: Compute, Object Storage, Block Storage. In order to define one, you need to pass in the parent provider, like so: Resource structs are the domain models that services make use of in order to work with and represent the state of API resources: Intermediate Result structs are returned for API operations, which allow generic access to the HTTP headers, response body, and any errors associated with the network transaction. To turn a result into a usable resource struct, you must call the Extract method which is chained to the response, or an Extract function from an applicable extension: All requests that enumerate a collection return a Pager struct that is used to iterate through the results one page at a time. Use the EachPage method on that Pager to handle each successive Page in a closure, then use the appropriate extraction method from that request's package to interpret that Page as a slice of results: If you want to obtain the entire collection of pages without doing any intermediary processing on each page, you can use the AllPages method: This top-level package contains utility functions and data types that are used throughout the provider and service packages. Of particular note for end users are the AuthOptions and EndpointOpts structs. An example retry backoff function, which respects the 429 HTTP response code and a "Retry-After" header:
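Setting the retry backoff aside, here is a hedged sketch of the provider and service client flow described earlier (endpoint, credentials, and region are placeholders):

```go
package main

import (
	"log"

	"github.com/gophercloud/gophercloud"
	"github.com/gophercloud/gophercloud/openstack"
)

func main() {
	// Provider client: one per OpenStack cloud.
	opts := gophercloud.AuthOptions{
		IdentityEndpoint: "https://openstack.example.com:5000/v3",
		Username:         "demo",
		Password:         "secret",
		TenantName:       "demo-project",
		DomainName:       "Default",
	}
	provider, err := openstack.AuthenticatedClient(opts)
	if err != nil {
		log.Fatal(err)
	}

	// Service client: scoped to a particular service and region.
	compute, err := openstack.NewComputeV2(provider, gophercloud.EndpointOpts{
		Region: "RegionOne",
	})
	if err != nil {
		log.Fatal(err)
	}
	log.Println("compute endpoint:", compute.Endpoint)
}
```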
Package lambda provides the API client, operations, and parameter types for AWS Lambda. Lambda is a compute service that lets you run code without provisioning or managing servers. Lambda runs your code on a high-availability compute infrastructure and performs all of the administration of the compute resources, including server and operating system maintenance, capacity provisioning and automatic scaling, code monitoring and logging. With Lambda, you can run code for virtually any type of application or backend service. For more information about the Lambda service, see What is Lambda in the Lambda Developer Guide. The Lambda API Reference provides information about each of the API methods, including details about the parameters in each API request and response. You can use Software Development Kits (SDKs), Integrated Development Environment (IDE) Toolkits, and command line tools to access the API. For installation instructions, see Tools for Amazon Web Services. For a list of Region-specific endpoints that Lambda supports, see Lambda endpoints and quotas in the Amazon Web Services General Reference. When making the API calls, you will need to authenticate your request by providing a signature. Lambda supports signature version 4. For more information, see Signature Version 4 signing process in the Amazon Web Services General Reference. Because Amazon Web Services SDKs use the CA certificates from your computer, changes to the certificates on the Amazon Web Services servers can cause connection failures when you attempt to use an SDK. You can prevent these failures by keeping your computer's CA certificates and operating system up-to-date. If you encounter this issue in a corporate environment and do not manage your own computer, you might need to ask an administrator to assist with the update process. The following list shows minimum operating system and Java versions: Microsoft Windows versions that have updates from January 2005 or later installed contain at least one of the required CAs in their trust list. Mac OS X 10.4 with Java for Mac OS X 10.4 Release 5 (February 2007), Mac OS X 10.5 (October 2007), and later versions contain at least one of the required CAs in their trust list. Red Hat Enterprise Linux 5 (March 2007), 6, and 7 and CentOS 5, 6, and 7 all contain at least one of the required CAs in their default trusted CA list. Java 1.4.2_12 (May 2006), 5 Update 2 (March 2005), and all later versions, including Java 6 (December 2006), 7, and 8, contain at least one of the required CAs in their default trusted CA list. When accessing the Lambda management console or Lambda API endpoints, whether through browsers or programmatically, you will need to ensure your client machines support any of the following CAs: Amazon Root CA 1 Starfield Services Root Certificate Authority - G2 Starfield Class 2 Certification Authority Root certificates from the first two authorities are available from Amazon trust services, but keeping your computer up-to-date is the more straightforward solution. To learn more about ACM-provided certificates, see Amazon Web Services Certificate Manager FAQs.
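A hedged sketch of invoking a function with this client (the function name and payload are placeholders):

```go
package main

import (
	"context"
	"log"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/config"
	"github.com/aws/aws-sdk-go-v2/service/lambda"
)

func main() {
	ctx := context.Background()

	cfg, err := config.LoadDefaultConfig(ctx)
	if err != nil {
		log.Fatal(err)
	}
	client := lambda.NewFromConfig(cfg)

	// Invoke a function synchronously; the SDK signs the request (SigV4).
	out, err := client.Invoke(ctx, &lambda.InvokeInput{
		FunctionName: aws.String("my-function"),
		Payload:      []byte(`{"name":"world"}`),
	})
	if err != nil {
		log.Fatal(err)
	}
	log.Printf("status %d, response %s", out.StatusCode, out.Payload)
}
```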
Package pubsublite provides an easy way to publish and receive messages using the Pub/Sub Lite service. Google Pub/Sub services are designed to provide reliable, many-to-many, asynchronous messaging between applications. Publisher applications can send messages to a topic and other applications can subscribe to that topic to receive the messages. By decoupling senders and receivers, Google Pub/Sub allows developers to communicate between independently written applications. Compared to Cloud Pub/Sub, Pub/Sub Lite provides partitioned data storage with predefined throughput and storage capacity. Guidance on how to choose between Cloud Pub/Sub and Pub/Sub Lite is available at https://cloud.google.com/pubsub/docs/choosing-pubsub-or-lite. More information about Pub/Sub Lite is available at https://cloud.google.com/pubsub/lite. See https://pkg.go.dev/cloud.google.com/go for authentication, timeouts, connection pooling and similar aspects of this package. Examples can be found at https://pkg.go.dev/cloud.google.com/go/pubsublite#pkg-examples and https://pkg.go.dev/cloud.google.com/go/pubsublite/pscompat#pkg-examples. Complete sample programs can be found at https://github.com/GoogleCloudPlatform/golang-samples/tree/master/pubsublite. The cloud.google.com/go/pubsublite/pscompat subpackage contains clients for publishing and receiving messages, which have similar interfaces to their pubsub.Topic and pubsub.Subscription counterparts in cloud.google.com/go/pubsub. The following examples demonstrate how to declare common interfaces: https://pkg.go.dev/cloud.google.com/go/pubsublite/pscompat#example-NewPublisherClient-Interface and https://pkg.go.dev/cloud.google.com/go/pubsublite/pscompat#example-NewSubscriberClient-Interface. The following imports are required for code snippets below: Messages are published to topics. Pub/Sub Lite topics may be created like so: Close must be called to release resources when an AdminClient is no longer required. See https://cloud.google.com/pubsub/lite/docs/topics for more information about how Pub/Sub Lite topics are configured. See https://cloud.google.com/pubsub/lite/docs/locations for the list of locations where Pub/Sub Lite is available. Pub/Sub Lite uses gRPC streams extensively for high throughput. For more differences, see https://pkg.go.dev/cloud.google.com/go/pubsublite/pscompat. To publish messages to a topic, first create a PublisherClient: Then call Publish: Publish queues the message for publishing and returns immediately. When enough messages have accumulated, or enough time has elapsed, the batch of messages is sent to the Pub/Sub Lite service. Thresholds for batching can be configured in PublishSettings. Publish returns a PublishResult, which behaves like a future; its Get method blocks until the message has been sent (or has failed to be sent) to the service: Once you've finished publishing all messages, call Stop to flush all messages to the service and close gRPC streams. The PublisherClient can no longer be used after it has been stopped or has terminated due to a permanent error. PublisherClients are expected to be long-lived and used for the duration of the application, rather than for publishing small batches of messages. Stop must be called to release resources when a PublisherClient is no longer required. See https://cloud.google.com/pubsub/lite/docs/publishing for more information about publishing. To receive messages published to a topic, create a subscription to the topic.
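Before moving on to subscriptions, here is a hedged sketch of the publish flow described above (the project, zone, and topic IDs are placeholders):

```go
package main

import (
	"context"
	"log"

	"cloud.google.com/go/pubsub"
	"cloud.google.com/go/pubsublite/pscompat"
)

func main() {
	ctx := context.Background()
	const topic = "projects/my-project/locations/us-central1-a/topics/my-topic"

	publisher, err := pscompat.NewPublisherClient(ctx, topic)
	if err != nil {
		log.Fatal(err)
	}
	// Stop flushes queued messages and closes gRPC streams.
	defer publisher.Stop()

	// Publish queues the message and returns immediately.
	result := publisher.Publish(ctx, &pubsub.Message{Data: []byte("hello")})

	// Get blocks until the message has been sent (or has failed).
	id, err := result.Get(ctx)
	if err != nil {
		log.Fatal(err)
	}
	log.Println("published message:", id)
}
```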
There may be more than one subscription per topic; each message that is published to the topic will be delivered to all of its subscriptions. Pub/Sub Lite subscriptions may be created like so: See https://cloud.google.com/pubsub/lite/docs/subscriptions for more information about how subscriptions are configured. To receive messages for a subscription, first create a SubscriberClient: Messages are then consumed from a subscription via callback. The callback may be invoked concurrently by multiple goroutines (one per partition that the subscriber client is connected to). Receive blocks until either the context is canceled or a permanent error occurs. To terminate a call to Receive, cancel its context: Clients must call pubsub.Message.Ack() or pubsub.Message.Nack() for every message received. Pub/Sub Lite does not have ACK deadlines. Pub/Sub Lite also does not actually have the concept of NACK. The default behavior terminates the SubscriberClient. In Pub/Sub Lite, only a single subscriber for a given subscription is connected to any partition at a time, and there is no other client that may be able to handle messages. See https://cloud.google.com/pubsub/lite/docs/subscribing for more information about receiving messages. Pub/Sub Lite utilizes gRPC streams extensively. gRPC allows a maximum of 100 streams per connection. Internally, the library uses a default connection pool size of 8, which supports up to 800 topic partitions. To alter the connection pool size, pass a ClientOption to pscompat.NewPublisherClient and pscompat.NewSubscriberClient:
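Setting the connection pool option aside, here is a hedged sketch of the basic receive flow (the subscription path is a placeholder):

```go
package main

import (
	"context"
	"log"
	"time"

	"cloud.google.com/go/pubsub"
	"cloud.google.com/go/pubsublite/pscompat"
)

func main() {
	const subscription = "projects/my-project/locations/us-central1-a/subscriptions/my-sub"

	// Cancel the context (here via a timeout) to terminate Receive.
	ctx, cancel := context.WithTimeout(context.Background(), time.Minute)
	defer cancel()

	subscriber, err := pscompat.NewSubscriberClient(ctx, subscription)
	if err != nil {
		log.Fatal(err)
	}

	// The callback may run concurrently, one goroutine per connected partition.
	if err := subscriber.Receive(ctx, func(ctx context.Context, msg *pubsub.Message) {
		log.Printf("got message: %s", msg.Data)
		msg.Ack() // every received message must be acked
	}); err != nil {
		log.Fatal(err)
	}
}
```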
Package rds provides the API client, operations, and parameter types for Amazon Relational Database Service. Amazon Relational Database Service (Amazon RDS) is a web service that makes it easier to set up, operate, and scale a relational database in the cloud. It provides cost-efficient, resizeable capacity for an industry-standard relational database and manages common database administration tasks, freeing up developers to focus on what makes their applications and businesses unique. Amazon RDS gives you access to the capabilities of a MySQL, MariaDB, PostgreSQL, Microsoft SQL Server, Oracle, Db2, or Amazon Aurora database server. These capabilities mean that the code, applications, and tools you already use today with your existing databases work with Amazon RDS without modification. Amazon RDS automatically backs up your database and maintains the database software that powers your DB instance. Amazon RDS is flexible: you can scale your DB instance's compute resources and storage capacity to meet your application's demand. As with all Amazon Web Services, there are no up-front investments, and you pay only for the resources you use. This interface reference for Amazon RDS contains documentation for a programming or command line interface you can use to manage Amazon RDS. Amazon RDS is asynchronous, which means that some interfaces might require techniques such as polling or callback functions to determine when a command has been applied. In this reference, the parameter descriptions indicate whether a command is applied immediately, on the next instance reboot, or during the maintenance window. The reference structure is as follows, and related topics from the user guide are listed after it. Amazon RDS API Reference For the alphabetical list of API actions, see API Actions. For the alphabetical list of data types, see Data Types. For a list of common query parameters, see Common Parameters. For descriptions of the error codes, see Common Errors. Amazon RDS User Guide For a summary of the Amazon RDS interfaces, see Available RDS Interfaces. For more information about how to use the Query API, see Using the Query API.
Package duration contains routines to parse standard units of time.
This is a repository containing Go bindings for writing FUSE file systems. Go to https://godoc.org/github.com/hanwen/go-fuse/fs for the in-depth documentation for this library. Older, deprecated APIs are available at https://godoc.org/github.com/hanwen/go-fuse/fuse/pathfs and https://godoc.org/github.com/hanwen/go-fuse/fuse/nodefs.
Package redsync provides a Redis-based distributed mutual exclusion lock implementation as described in the post http://redis.io/topics/distlock. Values containing the types defined in this package should not be copied.
Package seelog implements logging functionality with flexible dispatching, filtering, and formatting. To create a logger, use one of the following constructors: Example: The "defer" line is important because if you are using asynchronous logger behavior, without this line you may end up losing some messages when you close your application because they are processed in another non-blocking goroutine. To avoid that, you explicitly defer flushing all messages before closing. A logger created using one of the LoggerFrom* funcs can be used directly by calling one of the main log funcs. Example: Having loggers as variables is convenient if you are writing your own package with internal logging or if you have several loggers with different options. But for most standalone apps it is more convenient to use package level funcs and vars. There is a package level var 'Current' made for it. You can replace it with another logger using 'ReplaceLogger' and then use package level funcs. In this example the 'Current' logger was replaced using a 'ReplaceLogger' call and became equal to the 'logger' variable created from config. This way you are able to use package level funcs instead of passing the logger variable. The main point of seelog is to configure the logger via config files rather than code. The configuration is read by LoggerFrom* funcs. These funcs read XML configuration from different sources and try to create a logger using it. All the configuration features are covered in detail in the official wiki: https://github.com/cihub/seelog/wiki. There are many sections covering different aspects of seelog, but the most important for understanding configs are: After you understand these concepts, check the 'Reference' section on the main wiki page to get the up-to-date list of dispatchers, receivers, formats, and logger types. Here is an example config with all these features: This config represents a logger with adaptive timeout between log messages (check logger types reference) which logs to console, all.log, and errors.log depending on the log level. Its output formats also depend on log level. This logger will only use log level 'debug' and higher (minlevel is set) for all files with names that don't start with 'test'. For files starting with 'test' this logger prohibits all levels below 'error'. Although configuration using code is not recommended, it is sometimes needed and it is possible to do with seelog. Basically, what you need to do to get started is to create constraints, exceptions and a dispatcher tree (same as with config). Most of the New* functions in this package are used to provide such capabilities. Here is an example of configuration in code that demonstrates an async loop logger which logs to a simple split dispatcher with a console receiver using a specified format and is filtered using top-level min-max constraints and one exception for the 'main.go' file. So, this is basically a demonstration of configuration of most of the features: To learn seelog features faster you should check the examples package: https://github.com/cihub/seelog-examples. It contains many example configs and use cases.
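A hedged sketch of loading an XML config and using the package-level funcs (the config string is a minimal invented example):

```go
package main

import (
	log "github.com/cihub/seelog"
)

func main() {
	// Flush queued messages before exit; important for async logger types.
	defer log.Flush()

	config := `
<seelog minlevel="debug">
    <outputs formatid="main">
        <console/>
    </outputs>
    <formats>
        <format id="main" format="%Date %Time [%LEVEL] %Msg%n"/>
    </formats>
</seelog>`

	logger, err := log.LoggerFromConfigAsString(config)
	if err != nil {
		panic(err)
	}
	// Replace the package-level 'Current' logger, then use package-level funcs.
	if err := log.ReplaceLogger(logger); err != nil {
		panic(err)
	}

	log.Debug("debug message")
	log.Infof("hello, %s", "seelog")
}
```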
Package gotests contains the core logic for generating table-driven tests.
Package fuse enables writing FUSE file systems on Linux and FreeBSD. There are two approaches to writing a FUSE file system. The first is to speak the low-level message protocol, reading from a Conn using ReadRequest and writing using the various Respond methods. This approach is closest to the actual interaction with the kernel and can be the simplest one in contexts such as protocol translators. Servers of synthesized file systems tend to share common bookkeeping abstracted away by the second approach, which is to call fs.Serve to serve the FUSE protocol using an implementation of the service methods in the interfaces FS* (file system), Node* (file or directory), and Handle* (opened file or directory). There are a daunting number of such methods that can be written, but few are required. The specific methods are described in the documentation for those interfaces. The examples/hellofs subdirectory contains a simple illustration of the fs.Serve approach. The required and optional methods for the FS, Node, and Handle interfaces have the general form where Op is the name of a FUSE operation. Op reads request parameters from req and writes results to resp. An operation whose only result is the error result omits the resp parameter. Multiple goroutines may call service methods simultaneously; the methods being called are responsible for appropriate synchronization. The operation must not hold on to the request or response, including any []byte fields such as WriteRequest.Data or SetxattrRequest.Xattr. Operations can return errors. The FUSE interface can only communicate POSIX errno error numbers to file system clients; the error message itself is not visible to them. The returned error can implement ErrorNumber to control the errno returned. Without ErrorNumber, a generic errno (EIO) is returned. Error messages will be visible in the debug log as part of the response. In some file systems, some operations may take an undetermined amount of time. For example, a Read waiting for a network message or a matching Write might wait indefinitely. If the request is cancelled and no longer needed, the context will be cancelled. Blocking operations should select on a receive from ctx.Done() and attempt to abort the operation early if the receive succeeds (meaning the channel is closed). To indicate that the operation failed because it was aborted, return syscall.EINTR. If an operation does not block for an indefinite amount of time, supporting cancellation is not necessary. All request types embed a Header, meaning that the method can inspect req.Pid, req.Uid, and req.Gid as necessary to implement permission checking. The kernel FUSE layer normally prevents other users from accessing the FUSE file system (to change this, see AllowOther), but does not enforce access modes (to change this, see DefaultPermissions). Behavior and metadata of the mounted file system can be changed by passing MountOption values to Mount.
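As a hedged sketch of the Op form described above, here is a hypothetical opened-file handle implementing only a Read operation (the greetingHandle type and its data are invented):

```go
package example

import (
	"context"
	"syscall"

	"bazil.org/fuse"
)

// greetingHandle is a hypothetical opened-file handle holding static data.
type greetingHandle struct {
	data []byte
}

// Read has the general Op form: request parameters arrive in req, results are
// written to resp, and the only other result is an error.
func (h *greetingHandle) Read(ctx context.Context, req *fuse.ReadRequest, resp *fuse.ReadResponse) error {
	// Abort early if the kernel has cancelled the request.
	select {
	case <-ctx.Done():
		return syscall.EINTR
	default:
	}

	// Copy the requested byte range into the response without retaining
	// req or resp after returning.
	if req.Offset >= int64(len(h.data)) {
		resp.Data = nil
		return nil
	}
	end := req.Offset + int64(req.Size)
	if end > int64(len(h.data)) {
		end = int64(len(h.data))
	}
	resp.Data = append(resp.Data[:0], h.data[req.Offset:end]...)
	return nil
}
```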
Package ecs provides the API client, operations, and parameter types for Amazon EC2 Container Service. Amazon Elastic Container Service (Amazon ECS) is a highly scalable, fast container management service. It makes it easy to run, stop, and manage Docker containers. You can host your cluster on a serverless infrastructure that's managed by Amazon ECS by launching your services or tasks on Fargate. For more control, you can host your tasks on a cluster of Amazon Elastic Compute Cloud (Amazon EC2) or External (on-premises) instances that you manage. Amazon ECS makes it easy to launch and stop container-based applications with simple API calls. This makes it easy to get the state of your cluster from a centralized service, and gives you access to many familiar Amazon EC2 features. You can use Amazon ECS to schedule the placement of containers across your cluster based on your resource needs, isolation policies, and availability requirements. With Amazon ECS, you don't need to operate your own cluster management and configuration management systems. You also don't need to worry about scaling your management infrastructure.
Package contrib is a collection of extensions for the opentelemetry-go project. It provides 3rd party resource detectors, propagators, samplers, bridges, and instrumentation as submodules. Package contrib contains common values used across all instrumentation, exporter, and detector contributions.
Package kstatus contains libraries for computing status of kubernetes resources. Status: get status and/or conditions for resources based on resources already read from a cluster, i.e. it will not fetch resources from a cluster. Wait: get status and/or conditions for resources by fetching them from a cluster. This supports specifying a set of resources as an Inventory or as a list of manifests/unstructureds. This also supports polling the state of resources until they all reach a specific status. A common use case for this can be to wait for a set of resources to all finish reconciling after an apply.
Package automaxprocs automatically sets GOMAXPROCS to match the Linux container CPU quota, if any.
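Typical usage is roughly a blank import for its side effect:

```go
package main

import (
	"fmt"
	"runtime"

	// Imported for its side effect: adjusts GOMAXPROCS to the container's
	// CPU quota at program start.
	_ "go.uber.org/automaxprocs"
)

func main() {
	fmt.Println("GOMAXPROCS:", runtime.GOMAXPROCS(0))
}
```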
Package elasticsearchexporter contains an opentelemetry-collector exporter for Elasticsearch.
Package negroni is an idiomatic approach to web middleware in Go. It is tiny, non-intrusive, and encourages use of net/http Handlers. If you like the idea of Martini, but you think it contains too much magic, then Negroni is a great fit. For a full guide visit http://github.com/urfave/negroni
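A minimal hedged sketch of wrapping a net/http mux with the classic middleware stack:

```go
package main

import (
	"fmt"
	"log"
	"net/http"

	"github.com/urfave/negroni"
)

func main() {
	mux := http.NewServeMux()
	mux.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		fmt.Fprintln(w, "Welcome to the home page!")
	})

	// Classic() bundles panic recovery, logging, and static file serving.
	n := negroni.Classic()
	n.UseHandler(mux)

	log.Fatal(http.ListenAndServe(":3000", n))
}
```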
Package cloud contains a library and tools for open cloud development in Go. The Go Cloud Project allows application developers to seamlessly deploy cloud applications on any combination of cloud providers. It does this by providing stable, idiomatic interfaces for common uses like storage and databases. Think `database/sql` for cloud products. At the core of the project are common types implemented by cloud providers. For example, the blob.Bucket type can be created using gcsblob.OpenBucket, s3blob.OpenBucket, or any other provider. Then, the blob.Bucket can be used throughout your application without worrying about the underlying implementation. This project works well with a code generator called Wire (https://github.com/google/wire/blob/master/README.md). It creates human-readable code that only imports the cloud SDKs for providers you use. This allows Go Cloud to grow to support any number of cloud providers, without increasing compile times or binary sizes, and avoiding any side effects from `init()` functions. For sample applications and a tutorial, see the samples directory (https://github.com/google/go-cloud/tree/master/samples).
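As a hedged illustration of the portable-type idea, opening a bucket through the blob API with a URL (the bucket URL is a placeholder and the s3blob driver is just one choice):

```go
package main

import (
	"context"
	"log"

	"gocloud.dev/blob"
	// The blank import registers the driver's URL opener; swap it for
	// gcsblob, azureblob, fileblob, etc. without changing the code below.
	_ "gocloud.dev/blob/s3blob"
)

func main() {
	ctx := context.Background()

	// The provider is chosen by the URL scheme, not by the code.
	bucket, err := blob.OpenBucket(ctx, "s3://my-bucket")
	if err != nil {
		log.Fatal(err)
	}
	defer bucket.Close()

	if err := bucket.WriteAll(ctx, "greeting.txt", []byte("hello"), nil); err != nil {
		log.Fatal(err)
	}
	data, err := bucket.ReadAll(ctx, "greeting.txt")
	if err != nil {
		log.Fatal(err)
	}
	log.Printf("read back: %s", data)
}
```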
Package elasticexporter contains an opentelemetry-collector exporter for Elastic APM.
Package dbus implements bindings to the D-Bus message bus system. To use the message bus API, you first need to connect to a bus (usually the session or system bus). The acquired connection then can be used to call methods on remote objects and emit or receive signals. Using the Export method, you can arrange D-Bus method calls to be directly translated to method calls on a Go value. For outgoing messages, Go types are automatically converted to the corresponding D-Bus types. See the official specification at https://dbus.freedesktop.org/doc/dbus-specification.html#type-system for more information on the D-Bus type system. The following types are directly encoded as their respective D-Bus equivalents: Slices and arrays encode as ARRAYs of their element type. Maps encode as DICTs, provided that their key type can be used as a key for a DICT. Structs other than Variant and Signature encode as a STRUCT containing their exported fields in order. Fields whose tags contain `dbus:"-"` and unexported fields will be skipped. Pointers encode as the value they point to. Types convertible to one of the base types above will be mapped as the base type. Trying to encode any other type or a slice, map or struct containing an unsupported type will result in an InvalidTypeError. For incoming messages, the inverse of these rules is used, with the exception of STRUCTs. Incoming STRUCTs are represented as a slice of empty interfaces containing the struct fields in the correct order. The Store function can be used to convert such values to Go structs. Handling Unix file descriptors deserves special mention. To use them, you should first check that they are supported on a connection by calling SupportsUnixFDs. If it returns true, all methods of Connection will translate messages containing UnixFDs to messages that are accompanied by the given file descriptors, with the UnixFD values being substituted by the correct indices. Similarly, the indices of incoming messages are automatically resolved. It shouldn't be necessary to use UnixFDIndex.
Package redshift provides the API client, operations, and parameter types for Amazon Redshift. This is an interface reference for Amazon Redshift. It contains documentation for one of the programming or command line interfaces you can use to manage Amazon Redshift clusters. Note that Amazon Redshift is asynchronous, which means that some interfaces may require techniques, such as polling or asynchronous callback handlers, to determine when a command has been applied. In this reference, the parameter descriptions indicate whether a change is applied immediately, on the next instance reboot, or during the next maintenance window. For a summary of the Amazon Redshift cluster management interfaces, go to Using the Amazon Redshift Management Interfaces. Amazon Redshift manages all the work of setting up, operating, and scaling a data warehouse: provisioning capacity, monitoring and backing up the cluster, and applying patches and upgrades to the Amazon Redshift engine. You can focus on using your data to acquire new insights for your business and customers. If you are a first-time user of Amazon Redshift, we recommend that you begin by reading the Amazon Redshift Getting Started Guide. If you are a database developer, the Amazon Redshift Database Developer Guide explains how to design, build, query, and maintain the databases that make up your data warehouse.
Package redis is a client for the Redis database. The Redigo FAQ (https://github.com/gomodule/redigo/wiki/FAQ) contains more documentation about this package. The Conn interface is the primary interface for working with Redis. Applications create connections by calling the Dial, DialWithTimeout or NewConn functions. In the future, functions will be added for creating sharded and other types of connections. The application must call the connection Close method when the application is done with the connection. The Conn interface has a generic method for executing Redis commands: The Redis command reference (http://redis.io/commands) lists the available commands. An example of using the Redis APPEND command is: The Do method converts command arguments to bulk strings for transmission to the server as follows: Redis command reply types are represented using the following Go types: Use type assertions or the reply helper functions to convert from interface{} to the specific Go type for the command result. Connections support pipelining using the Send, Flush and Receive methods. Send writes the command to the connection's output buffer. Flush flushes the connection's output buffer to the server. Receive reads a single reply from the server. The following example shows a simple pipeline. The Do method combines the functionality of the Send, Flush and Receive methods. The Do method starts by writing the command and flushing the output buffer. Next, the Do method receives all pending replies including the reply for the command just sent by Do. If any of the received replies is an error, then Do returns the error. If there are no errors, then Do returns the last reply. If the command argument to the Do method is "", then the Do method will flush the output buffer and receive pending replies without sending a command. Use the Send and Do methods to implement pipelined transactions. Connections support one concurrent caller to the Receive method and one concurrent caller to the Send and Flush methods. No other concurrency is supported including concurrent calls to the Do and Close methods. For full concurrent access to Redis, use the thread-safe Pool to get, use and release a connection from within a goroutine. Connections returned from a Pool have the concurrency restrictions described in the previous paragraph. Use the Send, Flush and Receive methods to implement Pub/Sub subscribers. The PubSubConn type wraps a Conn with convenience methods for implementing subscribers. The Subscribe, PSubscribe, Unsubscribe and PUnsubscribe methods send and flush a subscription management command. The Receive method converts a pushed message to convenient types for use in a type switch. The Bool, Int, Bytes, String, Strings and Values functions convert a reply to a value of a specific type. To allow convenient wrapping of calls to the connection Do and Receive methods, the functions take a second argument of type error. If the error is non-nil, then the helper function returns the error. If the error is nil, the function converts the reply to the specified type: The Scan function converts elements of an array reply to Go types: Connection methods return error replies from the server as type redis.Error. Call the connection Err() method to determine if the connection encountered a non-recoverable error such as a network error or protocol parsing error. If Err() returns a non-nil value, then the connection is not usable and should be closed.
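A short hedged sketch of the Do method and of Send/Flush/Receive pipelining (the address and keys are placeholders):

```go
package main

import (
	"fmt"
	"log"

	"github.com/gomodule/redigo/redis"
)

func main() {
	c, err := redis.Dial("tcp", "localhost:6379")
	if err != nil {
		log.Fatal(err)
	}
	defer c.Close()

	// Do writes the command, flushes the buffer, and reads the reply.
	if _, err := c.Do("APPEND", "key", "value"); err != nil {
		log.Fatal(err)
	}
	// The String helper converts the reply and propagates any error.
	s, err := redis.String(c.Do("GET", "key"))
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(s)

	// Pipelining: buffer commands, flush once, then read each reply.
	c.Send("SET", "foo", "bar")
	c.Send("GET", "foo")
	c.Flush()
	c.Receive() // reply from SET
	v, err := redis.String(c.Receive()) // reply from GET
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(v)
}
```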
This example implements ZPOP as described at http://redis.io/topics/transactions using WATCH/MULTI/EXEC and scripting.
Package aztables can access an Azure Storage or CosmosDB account. The Azure Data Tables library allows you to interact with two types of resources: * the tables in your account * the entities within those tables. Interaction with these resources starts with an instance of a client. To create a client object, you will need the account's table service endpoint URL and a credential that allows you to access the account. The clients support different forms of authentication. The aztables library supports any of the `azcore.TokenCredential` interfaces, authorization via a Connection String, or authorization with a Shared Access Signature token. To use an account shared key (aka account key or access key), provide the key as a string. This can be found in your storage account in the Azure Portal under the "Access Keys" section. Use the key as the credential parameter to authenticate the client. Using a Connection String: Depending on your use case and authorization method, you may prefer to initialize a client instance with a connection string instead of providing the account URL and credential separately. To do this, pass the connection string to the client's connection-string constructor. The connection string can be found in your storage account in the Azure Portal under the "Access Keys" section or with the following Azure CLI command: Using a Shared Access Signature: To use a shared access signature (SAS) token, provide the token at the end of your service URL. You can generate a SAS token from the Azure Portal under Shared Access Signature or use the ServiceClient.GetAccountSASToken() or Client.GetTableSASToken() functions. Common uses of the Table service include: * Storing TBs of structured data capable of serving web scale applications * Storing datasets that do not require complex joins, foreign keys, or stored procedures and can be de-normalized for fast access * Quickly querying data using a clustered index * Accessing data using the OData protocol and LINQ filter expressions The following components make up the Azure Data Tables Service: * The account * A table within the account, which contains a set of entities * An entity within a table, as a dictionary The Azure Data Tables client library for Go allows you to interact with each of these components through the use of a dedicated client object. Two different clients are provided to interact with the various components of the Table Service: 1. **`ServiceClient`** - operations at the account and table level, such as creating, listing, and deleting tables. 2. **`Client`** - operations on the entities within a single table. Entities are similar to rows. An entity has a PartitionKey, a RowKey, and a set of properties. A property is a name value pair, similar to a column. Every entity in a table does not need to have the same properties. Entities are returned as JSON, allowing developers to use JSON marshalling and unmarshalling techniques. Additionally, you can use the aztables.EDMEntity to ensure proper round-trip serialization of all properties. The following sections provide several code snippets covering some of the most common Table tasks, including creating a table, creating entities, and querying entities. Create a table in your account and get a `Client` to perform operations on the newly created table, as in the sketch below.
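A rough sketch, under the assumption of shared-key authentication (account name, key, and table name are placeholders, and exact constructor names may differ between SDK versions):

```go
package main

import (
	"context"
	"encoding/json"
	"log"

	"github.com/Azure/azure-sdk-for-go/sdk/data/aztables"
)

func main() {
	ctx := context.Background()

	// Shared-key authentication; the account name and key are placeholders.
	cred, err := aztables.NewSharedKeyCredential("myaccount", "<account-key>")
	if err != nil {
		log.Fatal(err)
	}
	service, err := aztables.NewServiceClientWithSharedKey(
		"https://myaccount.table.core.windows.net", cred, nil)
	if err != nil {
		log.Fatal(err)
	}

	// Create a table and get a Client scoped to it.
	if _, err := service.CreateTable(ctx, "Inventory", nil); err != nil {
		log.Fatal(err)
	}
	client := service.NewClient("Inventory")

	// Entities have a PartitionKey, a RowKey, and arbitrary properties.
	entity := aztables.EDMEntity{
		Entity:     aztables.Entity{PartitionKey: "widgets", RowKey: "0001"},
		Properties: map[string]interface{}{"Price": 3.99, "InStock": true},
	}
	payload, err := json.Marshal(entity)
	if err != nil {
		log.Fatal(err)
	}
	if _, err := client.AddEntity(ctx, payload, nil); err != nil {
		log.Fatal(err)
	}
}
```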
Package saml contains a partial implementation of the SAML standard in golang. SAML is a standard for identity federation, i.e. either allowing a third party to authenticate your users or allowing third parties to rely on us to authenticate their users. In SAML parlance an Identity Provider (IDP) is a service that knows how to authenticate users. A Service Provider (SP) is a service that delegates authentication to an IDP. If you are building a service where users log in with someone else's credentials, then you are a Service Provider. This package supports implementing both service providers and identity providers. The core package contains the implementation of SAML. The package samlsp provides helper middleware suitable for use in Service Provider applications. The package samlidp provides a rudimentary IDP service that is useful for testing or as a starting point for other integrations. Version 0.4.0 introduces a few breaking changes to the _samlsp_ package in order to make the package more extensible, and to clean up the interfaces a bit. The default behavior remains the same, but you can now provide interface implementations of _RequestTracker_ (which tracks pending requests), _Session_ (which handles maintaining a session) and _OnError_ (which handles reporting errors). Public fields of _samlsp.Middleware_ have changed, so some usages may require adjustment. See [issue 231](https://github.com/crewjam/saml/issues/231) for details. The option to provide an IDP metadata URL has been deprecated. Instead, we recommend that you use the `FetchMetadata()` function, or fetch the metadata yourself and use the new `ParseMetadata()` function, and pass the metadata in _samlsp.Options.IDPMetadata_. Similarly, the _HTTPClient_ field is now deprecated because it was only used for fetching metadata, which is no longer directly implemented. The fields that manage how cookies are set are deprecated as well. To customize how cookies are managed, provide custom implementations of _RequestTracker_ and/or _Session_, perhaps by extending the default implementations. The deprecated fields have not been removed from the Options structure, but will be removed in the future. In particular we have deprecated the following fields in _samlsp.Options_: - `Logger` - This was used to emit errors while validating, which is an anti-pattern. - `IDPMetadataURL` - Instead use `FetchMetadata()` - `HTTPClient` - Instead pass httpClient to FetchMetadata - `CookieMaxAge` - Instead assign a custom CookieRequestTracker or CookieSessionProvider - `CookieName` - Instead assign a custom CookieRequestTracker or CookieSessionProvider - `CookieDomain` - Instead assign a custom CookieRequestTracker or CookieSessionProvider Let us assume we have a simple web application to protect. We'll modify this application so it uses SAML to authenticate users. ```golang package main import ( ) ``` Each service provider must have a self-signed X.509 key pair established. You can generate your own with something like this: We will use `samlsp.Middleware` to wrap the endpoint we want to protect. Middleware provides both an `http.Handler` to serve the SAML specific URLs and a set of wrappers to require the user to be logged in. We also provide the URL where the service provider can fetch the metadata from the IDP at startup. In our case, we'll use [samltest.id](https://samltest.id/), an identity provider designed for testing.
```golang package main import ( ) ``` Next we'll have to register our service provider with the identity provider to establish trust from the service provider to the IDP. For [samltest.id](https://samltest.id/), you can do something like: Navigate to https://samltest.id/upload.php and upload the file you fetched. Now you should be able to authenticate. The flow should look like this: 1. You browse to `localhost:8000/hello` 1. The middleware redirects you to `https://samltest.id/idp/profile/SAML2/Redirect/SSO` 1. samltest.id prompts you for a username and password. 1. samltest.id returns you an HTML document which contains an HTML form set up to POST to `localhost:8000/saml/acs`. The form is automatically submitted if you have JavaScript enabled. 1. The local service validates the response, issues a session cookie, and redirects you to the original URL, `localhost:8000/hello`. 1. This time when `localhost:8000/hello` is requested there is a valid session and so the main content is served. Please see `example/idp/` for a substantially complete example of how to use the library and helpers to be an identity provider. The SAML standard is huge and complex with many dark corners and strange, unused features. This package implements the most commonly used subset of these features required to provide a single sign-on experience. The package supports at least the subset of SAML known as [interoperable SAML](http://saml2int.org). This package supports the Web SSO profile. Message flows from the service provider to the IDP are supported using the HTTP Redirect binding and the HTTP POST binding. Message flows from the IDP to the service provider are supported via the HTTP POST binding. The package can produce signed SAML assertions, and can validate both signed and encrypted SAML assertions. It does not support signed or encrypted requests. The _RelayState_ parameter allows you to pass user state information across the authentication flow. The most common use for this is to allow a user to request a deep link into your site, be redirected through the SAML login flow, and upon successful completion, be directed to the originally requested link, rather than the root. Unfortunately, _RelayState_ is less useful than it could be. Firstly, it is not authenticated, so anything you supply must be signed to avoid XSS or CSRF. Secondly, it is limited to 80 bytes in length, which precludes signing. (See section 3.6.3.1 of SAMLProfiles.) The SAML specification is a collection of PDFs (sadly): - [SAMLCore](http://docs.oasis-open.org/security/saml/v2.0/saml-core-2.0-os.pdf) defines data types. - [SAMLBindings](http://docs.oasis-open.org/security/saml/v2.0/saml-bindings-2.0-os.pdf) defines the details of the HTTP requests in play. - [SAMLProfiles](http://docs.oasis-open.org/security/saml/v2.0/saml-profiles-2.0-os.pdf) describes data flows. - [SAMLConformance](http://docs.oasis-open.org/security/saml/v2.0/saml-conformance-2.0-os.pdf) includes a support matrix for various parts of the protocol. [SAMLtest](https://samltest.id/) is a testing ground for SAML service and identity providers. Please do not report security issues in the issue tracker. Rather, please contact me directly at ross@kndr.org ([PGP Key `78B6038B3B9DFB88`](https://keybase.io/crewjam)).
Package jwx contains tools that deal with the various JWx (JOSE) technologies such as JWT, JWS, JWE, etc in Go. Examples are stored in a separate Go module (to avoid adding dependencies to this module), and thus do not appear in the online documentation for this module. You can find the examples on GitHub at https://github.com/lestrrat-go/jwx/examples. You can find more high-level documentation on GitHub (https://github.com/lestrrat-go/jwx). FAQ-style documentation can be found in the repository (https://github.com/lestrrat-go/jwx/tree/develop/v2/docs).
Package dbus implements bindings to the D-Bus message bus system. To use the message bus API, you first need to connect to a bus (usually the session or system bus). The acquired connection then can be used to call methods on remote objects and emit or receive signals. Using the Export method, you can arrange D-Bus method calls to be directly translated to method calls on a Go value. For outgoing messages, Go types are automatically converted to the corresponding D-Bus types. The following types are directly encoded as their respective D-Bus equivalents: Slices and arrays encode as ARRAYs of their element type. Maps encode as DICTs, provided that their key type can be used as a key for a DICT. Structs other than Variant and Signature encode as a STRUCT containing their exported fields. Fields whose tags contain `dbus:"-"` and unexported fields will be skipped. Pointers encode as the value they point to. Types convertible to one of the base types above will be mapped as the base type. Trying to encode any other type or a slice, map or struct containing an unsupported type will result in an InvalidTypeError. For incoming messages, the inverse of these rules is used, with the exception of STRUCTs. Incoming STRUCTs are represented as a slice of empty interfaces containing the struct fields in the correct order. The Store function can be used to convert such values to Go structs. Handling Unix file descriptors deserves special mention. To use them, you should first check that they are supported on a connection by calling SupportsUnixFDs. If it returns true, all methods of Connection will translate messages containing UnixFDs to messages that are accompanied by the given file descriptors, with the UnixFD values being substituted by the correct indices. Similarly, the indices of incoming messages are automatically resolved. It shouldn't be necessary to use UnixFDIndex.
Package jaeger contains an OpenTelemetry tracing exporter for Jaeger.
RBAC TODO: mention the required RBAC rules. TODO: example config. The processor supports running both in agent and collector mode. When running as an agent, the processor detects IP addresses of pods sending spans, metrics or logs to the agent and uses this information to extract metadata from pods. When running as an agent, it is important to apply a discovery filter so that the processor only discovers pods from the same host that it is running on. Not using such a filter can result in unnecessary resource usage, especially on very large clusters. Once the filter is applied, each processor will only query the k8s API for pods running on its own node. The node filter can be applied by setting the `filter.node` config option to the name of a k8s node. While this works as expected, it cannot be used to automatically filter pods by the same node that the processor is running on in most cases, as it is not known beforehand which node a pod will be scheduled on. Luckily, Kubernetes has a solution for this called the downward API. To automatically filter pods by the node the processor is running on, you'll need to complete the following steps: 1. Use the downward API to inject the node name as an environment variable. Add the following snippet under the pod env section of the OpenTelemetry container. This will inject a new environment variable to the OpenTelemetry container with the value as the name of the node the pod was scheduled to run on. 2. Set "filter.node_from_env_var" to the name of the environment variable holding the node name. This will restrict each OpenTelemetry agent to query only pods running on the same node, dramatically reducing resource requirements for very large clusters. The processor can be deployed as either an agent or a collector. When running as a collector, the processor cannot correctly detect the IP address of the pods generating the telemetry data without any of the well-known IP attributes, when it receives them from an agent instead of receiving them directly from the pods. To work around this issue, agents deployed with the k8s_tagger processor can be configured to detect the IP addresses and forward them along with the telemetry data resources. The collector can then match this IP address with k8s pods and enrich the records with the metadata. In order to set this up, you'll need to complete the following steps: 1. Set up agents in passthrough mode. Configure the agents' k8s_tagger processors to run in passthrough mode. This will ensure that the agents detect the IP address and add it as an attribute to all telemetry resources. Agents will not make any k8s API calls, do any discovery of pods or extract any metadata. 2. Configure the collector as usual. No special configuration changes need to be made on the collector. It'll automatically detect the IP address of spans, logs and metrics sent by the agents as well as directly by other services/pods. There are some edge-cases and scenarios where k8s_tagger will not work properly. The processor cannot correctly identify pods running in host network mode, and enriching telemetry data generated by such pods is not supported at the moment, unless the attributes contain information about the source IP. The processor does not support detecting containers from the same pods when running as a sidecar. While this can be done, we think it is simpler to just use the Kubernetes downward API to inject environment variables into the pods and directly use their values as tags.
Package trace encapsulates a module which contains the entirety of the trace-agent's processing pipeline. The code may be reused to process traces in the same way that the Datadog Agent does, but outside of it. Please note that the API is subject to major changes and should not be relied upon as being stable.
Package elasticsearchservice provides the API client, operations, and parameter types for Amazon Elasticsearch Service. Use the Amazon Elasticsearch Configuration API to create, configure, and manage Elasticsearch domains. For sample code that uses the Configuration API, see the Amazon Elasticsearch Service Developer Guide. The guide also contains sample code for sending signed HTTP requests to the Elasticsearch APIs. The endpoint for configuration service requests is region-specific: es.region.amazonaws.com. For example, es.us-east-1.amazonaws.com. For a current list of supported regions and endpoints, see Regions and Endpoints.
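As a brief sketch of how a client for this package is typically constructed with the AWS SDK for Go v2, assuming the standard NewFromConfig constructor and the ListDomainNames operation (the operation you actually call will depend on your use case):

    package main

    import (
        "context"
        "fmt"
        "log"

        "github.com/aws/aws-sdk-go-v2/config"
        "github.com/aws/aws-sdk-go-v2/service/elasticsearchservice"
    )

    func main() {
        // Load credentials and region from the default sources.
        cfg, err := config.LoadDefaultConfig(context.TODO())
        if err != nil {
            log.Fatalf("unable to load SDK config: %v", err)
        }

        // Create the Amazon Elasticsearch Service configuration client.
        client := elasticsearchservice.NewFromConfig(cfg)

        // List the Elasticsearch domains in the configured region.
        out, err := client.ListDomainNames(context.TODO(), &elasticsearchservice.ListDomainNamesInput{})
        if err != nil {
            log.Fatalf("ListDomainNames failed: %v", err)
        }
        for _, d := range out.DomainNames {
            fmt.Println(*d.DomainName)
        }
    }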
Package turn contains the public API for pion/turn, a toolkit for building TURN clients and servers.
Package goavro is a library that encodes and decodes Avro data. Goavro provides methods to encode native Go data into both binary and textual JSON Avro data, and methods to decode both binary and textual JSON Avro data to native Go data. Goavro also provides methods to read and write Object Container File (OCF) formatted files, and the library contains example programs to read and write OCF files. Usage Example:
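The original usage example is not reproduced here; the following is a minimal sketch assuming the goavro/v2 API (NewCodec, BinaryFromNative, NativeFromBinary, TextualFromNative) and a made-up two-field record schema:

    package main

    import (
        "fmt"
        "log"

        "github.com/linkedin/goavro/v2"
    )

    func main() {
        // Build a codec from a (hypothetical) Avro record schema.
        codec, err := goavro.NewCodec(`{
            "type": "record",
            "name": "User",
            "fields": [
                {"name": "name",  "type": "string"},
                {"name": "email", "type": "string"}
            ]
        }`)
        if err != nil {
            log.Fatal(err)
        }

        // Native Go data to encode.
        native := map[string]interface{}{"name": "Ada", "email": "ada@example.com"}

        // Encode native Go data into binary Avro.
        binary, err := codec.BinaryFromNative(nil, native)
        if err != nil {
            log.Fatal(err)
        }

        // Decode binary Avro back into native Go data.
        decoded, _, err := codec.NativeFromBinary(binary)
        if err != nil {
            log.Fatal(err)
        }

        // Encode the native Go data as textual (JSON) Avro.
        textual, err := codec.TextualFromNative(nil, decoded)
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println(string(textual))
    }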
Package jsonapi provides a serializer and deserializer for jsonapi.org spec payloads. You can keep your model structs as is and use struct field tags to indicate to jsonapi how you want your response built or your request deserialized. What about my relationships? jsonapi supports relationships out of the box and will even side load them in your response into an "included" array that contains the associated objects. jsonapi uses StructField tags to annotate the struct fields that you already have and use in your app, and then reads and writes jsonapi.org output based on the instructions you give the library in your jsonapi tags. Example structs using a Blog > Post > Comment structure are sketched after this entry.

jsonapi Tag Reference

Value, primary: "primary,<type field output>". This indicates that this is the primary key field for this struct type. Tag value arguments are comma separated. The first argument must be "primary", and the second must be the name that should appear in the "type" field for all data objects that represent this type of model.

Value, attr: "attr,<key name in attributes hash>[,<extra arguments>]". These fields' values end up in the "attributes" hash for a record. The first argument must be "attr", and the second should be the name of the key to display in the "attributes" hash for that record. The following extra arguments are also supported: "omitempty" excludes the field's value from the "attributes" hash; "iso8601" uses the ISO 8601 timestamp format when serializing or deserializing the time.Time value.

Value, relation: "relation,<key name in relationships hash>". Relations are struct fields that represent a one-to-one or one-to-many relationship to other structs. jsonapi will traverse the graph of relationships and marshal or unmarshal records. The first argument must be "relation", and the second should be the name of the relationship, used as the key in the "relationships" hash for the record.

Use the methods below to Marshal and Unmarshal jsonapi.org JSON payloads. Visit the readme at https://github.com/google/jsonapi
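A sketch of the Blog > Post > Comment structs referenced above; the field names and values are assumptions for illustration, while the tag forms follow the reference above and MarshalPayload is one of the package's marshaling entry points:

    package main

    import (
        "os"
        "time"

        "github.com/google/jsonapi"
    )

    // Blog > Post > Comment example structs (field names are assumptions).
    type Blog struct {
        ID        int       `jsonapi:"primary,blogs"`
        Title     string    `jsonapi:"attr,title"`
        CreatedAt time.Time `jsonapi:"attr,created_at,iso8601"`
        Posts     []*Post   `jsonapi:"relation,posts"`
    }

    type Post struct {
        ID       int        `jsonapi:"primary,posts"`
        Title    string     `jsonapi:"attr,title"`
        Body     string     `jsonapi:"attr,body,omitempty"`
        Comments []*Comment `jsonapi:"relation,comments"`
    }

    type Comment struct {
        ID   int    `jsonapi:"primary,comments"`
        Body string `jsonapi:"attr,body"`
    }

    func main() {
        blog := &Blog{
            ID:        1,
            Title:     "Hello, JSON:API",
            CreatedAt: time.Now(),
            Posts: []*Post{
                {ID: 1, Title: "First post", Comments: []*Comment{{ID: 1, Body: "Nice!"}}},
            },
        }

        // Marshal the graph, side-loading posts and comments into "included".
        if err := jsonapi.MarshalPayload(os.Stdout, blog); err != nil {
            panic(err)
        }
    }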
Package dockertest contains helper functions for setting up and tearing down docker containers to aid in testing. dockertest supports spinning up MySQL, PostgreSQL and MongoDB out of the box. Dockertest provides two environment variables
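As a hedged sketch only: the snippet below assumes the ory/dockertest v3 style API (NewPool, Run, Retry, Purge, GetPort), which may differ from the exact dockertest version described here; the image tag, credentials, and database driver are made up for illustration.

    package main

    import (
        "database/sql"
        "fmt"
        "log"

        _ "github.com/go-sql-driver/mysql"
        "github.com/ory/dockertest/v3"
    )

    func main() {
        // Connect to the local Docker daemon.
        pool, err := dockertest.NewPool("")
        if err != nil {
            log.Fatalf("could not connect to docker: %v", err)
        }

        // Spin up a throwaway MySQL container for the test run.
        resource, err := pool.Run("mysql", "8.0", []string{"MYSQL_ROOT_PASSWORD=secret"})
        if err != nil {
            log.Fatalf("could not start container: %v", err)
        }
        defer pool.Purge(resource) // tear the container down when done

        // Retry until the database inside the container is ready.
        var db *sql.DB
        if err := pool.Retry(func() error {
            var err error
            db, err = sql.Open("mysql",
                fmt.Sprintf("root:secret@(localhost:%s)/mysql", resource.GetPort("3306/tcp")))
            if err != nil {
                return err
            }
            return db.Ping()
        }); err != nil {
            log.Fatalf("could not connect to database: %v", err)
        }
    }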
Package null contains SQL types that consider zero input and null input as separate values, with convenient support for JSON and text marshaling. Types in this package will always encode to their null value if null. Use the zero subpackage if you want zero values and null to be treated the same.
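A brief sketch of the null-versus-zero distinction in JSON output, assuming guregu/null style constructors (StringFrom, NewString) and import path; adjust to the actual module path in use:

    package main

    import (
        "encoding/json"
        "fmt"

        "gopkg.in/guregu/null.v4"
    )

    func main() {
        // A valid (non-null) string: marshals to "Ada".
        name := null.StringFrom("Ada")

        // An invalid (null) string: marshals to JSON null even though the
        // underlying value is the zero string.
        nick := null.NewString("", false)

        out, _ := json.Marshal(map[string]interface{}{
            "name": name,
            "nick": nick,
        })
        fmt.Println(string(out)) // {"name":"Ada","nick":null}
    }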
Package sshchat is an implementation of an ssh server which serves a chat room instead of a shell. sshd subdirectory contains the ssh-related pieces which know nothing about chat. chat subdirectory contains the chat-related pieces which know nothing about ssh. The Host type is the glue between the sshd and chat pieces.
Package semver provides the ability to work with Semantic Versions (http://semver.org) in Go. Specifically, it provides the ability to parse, sort, and compare semantic versions and to check them against constraints.

There are two functions that can parse semantic versions. The `StrictNewVersion` function only parses valid version 2 semantic versions as outlined in the specification. The `NewVersion` function attempts to coerce a version into a semantic version and parse it. For example, if there is a leading v or a version listed without all 3 parts (e.g. 1.2) it will attempt to coerce it into a valid semantic version (e.g., 1.2.0). In both cases a `Version` object is returned that can be sorted, compared, and used in constraints. When parsing a version, an error is returned if there is an issue parsing it. The version object has methods to get the parts of the version, compare it to other versions, convert the version back into a string, and get the original string. For more details please see the documentation at https://godoc.org/github.com/Masterminds/semver. A set of versions can be sorted using the `sort` package from the standard library (see the sketch at the end of this entry).

There are two methods for comparing versions. One uses comparison methods on `Version` instances and the other uses Constraints. There are some important differences to note between these two methods of comparison: the comparison methods on `Version` follow the specification, while comparison ranges are not part of the specification. Different packages and tools have taken it upon themselves to come up with range rules, and this has resulted in differences. For example, npm/js and Cargo/Rust follow similar patterns, while PHP has a different pattern for ^. The comparison features in this package follow the npm/js and Cargo/Rust lead because applications using it have followed similar patterns with their versions.

Checking a version against version constraints is one of the most featureful parts of the package. There are two elements to the comparisons. First, a comparison string is a list of comma or space separated AND comparisons. These are then separated by || (OR) comparisons. For example, `">= 1.2 < 3.0.0 || >= 4.2.3"` is looking for a comparison that's greater than or equal to 1.2 and less than 3.0.0, or is greater than or equal to 4.2.3. This can also be written as `">= 1.2, < 3.0.0 || >= 4.2.3"`. The basic comparisons are =, !=, >, <, >=, and <=.

There are multiple methods to handle ranges, the first of which is hyphen ranges. These look like `2.3 - 4.5`, which is equivalent to `>= 2.3, <= 4.5`. The `x`, `X`, and `*` characters can be used as wildcard characters. This works for all comparison operators. When used on the `=` operator it falls back to the tilde operation; for example, `1.2.x` is equivalent to `>= 1.2.0, < 1.3.0`.

Tilde Range Comparisons (Patch): the tilde (`~`) comparison operator is for patch level ranges when a minor version is specified and major level changes when the minor number is missing. For example, `~1.2.3` is equivalent to `>= 1.2.3, < 1.3.0`.

Caret Range Comparisons (Major): the caret (`^`) comparison operator is for major level changes once a stable (1.0.0) release has occurred. Prior to a 1.0.0 release the minor version acts as the API stability level. This is useful when comparing API versions, as a major change is API breaking. For example, `^1.2.3` is equivalent to `>= 1.2.3, < 2.0.0`.

In addition to testing a version against a constraint, a version can be validated against a constraint. When validation fails, a slice of errors is returned containing the reasons the version didn't meet the constraint. For example, see the sketch below.
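A minimal sketch of parsing, sorting, and constraint checking, assuming the Masterminds/semver v3 import path and its NewVersion, MustParse, NewConstraint, Check, Validate, and Collection helpers:

    package main

    import (
        "fmt"
        "log"
        "sort"

        "github.com/Masterminds/semver/v3"
    )

    func main() {
        // NewVersion coerces "v1.2" into the semantic version 1.2.0.
        v, err := semver.NewVersion("v1.2")
        if err != nil {
            log.Fatal(err)
        }

        // Check a version against a constraint string.
        c, err := semver.NewConstraint(">= 1.2, < 3.0.0 || >= 4.2.3")
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println(c.Check(v)) // true

        // Validate reports every reason a version fails the constraint.
        if ok, errs := c.Validate(semver.MustParse("3.1.0")); !ok {
            for _, e := range errs {
                fmt.Println(e)
            }
        }

        // A slice of *Version can be sorted with the standard sort package.
        vs := []*semver.Version{
            semver.MustParse("1.2.3"),
            semver.MustParse("1.0.0"),
            semver.MustParse("1.3.0"),
        }
        sort.Sort(semver.Collection(vs))
        fmt.Println(vs) // [1.0.0 1.2.3 1.3.0]
    }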