Package clientgo is the official Go client for the Kubernetes API. It provides a standard set of clients and tools for building applications, controllers, and operators that communicate with a Kubernetes cluster.

The most commonly used packages are:

kubernetes: Contains the typed Clientset for interacting with built-in, versioned API objects (e.g., Pods, Deployments). This is the most common starting point.

dynamic: Provides a dynamic client that can perform operations on any Kubernetes object, including Custom Resources (CRs). It is essential for building controllers that work with CRDs.

discovery: Used to discover the API groups, versions, and resources supported by a Kubernetes cluster.

tools/cache: The foundation of the controller pattern. This package provides efficient caching and synchronization mechanisms (Informers and Listers) for building controllers.

tools/clientcmd: Provides methods for loading client configuration from kubeconfig files. This is essential for out-of-cluster applications and CLI tools.

rest: Provides a lower-level RESTClient that manages the details of communicating with the Kubernetes API server. It is useful for advanced use cases that require fine-grained control over requests, such as working with non-standard REST verbs.

There are two primary ways to configure a client to connect to the API server. In-Cluster Configuration: for applications running inside a Kubernetes pod, the `rest.InClusterConfig()` function provides a straightforward way to configure the client. It automatically uses the pod's service account for authentication and is the recommended approach for controllers and operators. Out-of-Cluster Configuration: for local development or command-line tools, the `clientcmd` package is used to load configuration from a kubeconfig file.

The `rest.Config` object allows for fine-grained control over client-side performance and reliability. Key settings include QPS, Burst, and Timeout, which bound client-side request rates and per-request deadlines.

Once configured, a client can be used to interact with objects in the cluster. The Typed Clientset (`kubernetes` package) provides a strongly typed interface for working with built-in Kubernetes objects. The Dynamic Client (`dynamic` package) can work with any object, including Custom Resources, using `unstructured.Unstructured` types. For Custom Resources (CRDs), the `k8s.io/code-generator` repository contains the tools to generate typed clients, informers, and listers. The `sample-controller` is the canonical example of this pattern.

Server-Side Apply is a patching strategy that allows multiple actors to share management of an object by tracking field ownership. This prevents actors from inadvertently overwriting each other's changes and provides a mechanism for resolving conflicts. The `applyconfigurations` package provides the necessary tools for this declarative approach.

Robust error handling is essential when interacting with the API. The `k8s.io/apimachinery/pkg/api/errors` package provides functions to inspect errors and check for common conditions, such as whether a resource was not found or already exists. This allows controllers to implement robust, idempotent reconciliation logic.

The controller pattern is central to Kubernetes. A controller observes the state of the cluster and works to bring it to the desired state. The `tools/cache` package provides the building blocks for this pattern: Informers watch the API server and maintain a local cache, Listers provide read-only access to the cache, and Workqueues decouple event detection from processing. The sketch below ties these pieces together.
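A minimal sketch of configuration, the typed Clientset, and the informer/lister/workqueue building blocks; the kubeconfig fallback, namespace, and resync interval are illustrative choices, not requirements:

```go
package main

import (
	"fmt"
	"time"

	"k8s.io/apimachinery/pkg/labels"
	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/cache"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/util/workqueue"
)

func main() {
	// In-cluster configuration, falling back to the default kubeconfig for local development.
	cfg, err := rest.InClusterConfig()
	if err != nil {
		cfg, err = clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
	}
	clientset := kubernetes.NewForConfigOrDie(cfg)

	// A shared informer factory watches the API server and maintains a local cache.
	factory := informers.NewSharedInformerFactory(clientset, 30*time.Second) // illustrative resync
	podInformer := factory.Core().V1().Pods()

	// A workqueue decouples event detection from processing; a real controller
	// would drain it in worker goroutines.
	queue := workqueue.NewRateLimitingQueue(workqueue.DefaultControllerRateLimiter())
	podInformer.Informer().AddEventHandler(cache.ResourceEventHandlerFuncs{
		AddFunc: func(obj interface{}) {
			if key, err := cache.MetaNamespaceKeyFunc(obj); err == nil {
				queue.Add(key)
			}
		},
	})

	stop := make(chan struct{})
	defer close(stop)
	factory.Start(stop)
	factory.WaitForCacheSync(stop)

	// Listers serve reads from the informer's cache rather than the API server.
	pods, err := podInformer.Lister().Pods("default").List(labels.Everything())
	if err != nil {
		panic(err)
	}
	fmt.Printf("cached pods in default namespace: %d\n", len(pods))
}
```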
In a high-availability deployment where multiple instances of a controller are running, leader election (`tools/leaderelection`) is used to ensure that only one instance is active at a time. Client-side feature gates allow for enabling or disabling experimental features in `client-go`. They can be configured via the `rest.Config` object.
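Returning to leader election, here is a minimal sketch using tools/leaderelection with a Lease lock; the lock name, namespace, identity source, and timings are illustrative:

```go
package main

import (
	"context"
	"log"
	"os"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/leaderelection"
	"k8s.io/client-go/tools/leaderelection/resourcelock"
)

func main() {
	cfg, err := rest.InClusterConfig()
	if err != nil {
		log.Fatal(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// A Lease object in the controller's namespace acts as the lock.
	lock := &resourcelock.LeaseLock{
		LeaseMeta: metav1.ObjectMeta{Name: "my-controller", Namespace: "default"}, // illustrative
		Client:    client.CoordinationV1(),
		LockConfig: resourcelock.ResourceLockConfig{
			Identity: os.Getenv("POD_NAME"), // must be unique per instance
		},
	}

	leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
		Lock:          lock,
		LeaseDuration: 15 * time.Second,
		RenewDeadline: 10 * time.Second,
		RetryPeriod:   2 * time.Second,
		Callbacks: leaderelection.LeaderCallbacks{
			OnStartedLeading: func(ctx context.Context) {
				// Start controllers/informers here; ctx is canceled when leadership is lost.
			},
			OnStoppedLeading: func() {
				log.Println("lost leadership, shutting down")
			},
		},
	})
}
```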
Package sdk is the official AWS SDK for the Go programming language. The AWS SDK for Go provides APIs and utilities that developers can use to build Go applications that use AWS services, such as Amazon Elastic Compute Cloud (Amazon EC2) and Amazon Simple Storage Service (Amazon S3).

The SDK removes the complexity of coding directly against a web service interface. It hides a lot of the lower-level plumbing, such as authentication, request retries, and error handling. The SDK also includes helpful utilities on top of the AWS APIs that add additional capabilities and functionality. For example, the Amazon S3 Download and Upload Manager will automatically split up large objects into multiple parts and transfer them concurrently. See the s3manager package documentation for more information. https://docs.aws.amazon.com/sdk-for-go/api/service/s3/s3manager/

Check out the Getting Started Guide and API Reference Docs, which detail the SDK's components and each AWS client the SDK supports. The Getting Started Guide provides examples and a detailed description of how to get set up with the SDK. https://docs.aws.amazon.com/sdk-for-go/v1/developer-guide/welcome.html The API Reference Docs include a detailed breakdown of the SDK's components such as utilities and AWS clients. Use this as a reference of the Go types included with the SDK, such as AWS clients, API operations, and API parameters. https://docs.aws.amazon.com/sdk-for-go/api/

The SDK is composed of two main components, SDK core, and service clients. The SDK core packages are all available under the aws package at the root of the SDK. Each client for a supported AWS service is available within its own package under the service folder at the root of the SDK.

aws - SDK core, provides common shared types such as Config, Logger, and utilities to make working with API parameters easier.

awserr - Provides the error interface that the SDK will use for all errors that occur in the SDK's processing. This includes service API response errors as well. The Error type is made up of a code and message. Cast the SDK's returned error type to awserr.Error and call the Code method to compare the returned error to specific error codes. See the package's documentation for additional values that can be extracted such as RequestId.

credentials - Provides the types and built-in credentials providers the SDK will use to retrieve AWS credentials to make API requests with. Nested under this folder are also additional credentials providers such as stscreds for assuming IAM roles, and ec2rolecreds for EC2 Instance roles.

endpoints - Provides the AWS Regions and Endpoints metadata for the SDK. Use this to look up AWS service endpoint information such as which services are in a region, and what regions a service is in. Constants are also provided for all region identifiers, e.g. UsWest2RegionID for "us-west-2".

session - Provides initial default configuration, and loads configuration from external sources such as the environment and shared credentials file.

request - Provides the API request sending and retry logic for the SDK. This package also includes utilities for defining your own request retryer, and configuring how the SDK processes the request.

service - Clients for AWS services. All services supported by the SDK are available under this folder.

The SDK includes the Go types and utilities you can use to make requests to AWS service APIs. Within the service folder at the root of the SDK you'll find a package for each AWS service the SDK supports.
All service clients follow a common pattern of creation and usage. When creating a client for an AWS service you'll first need to have a Session value constructed. The Session provides configuration that can be shared between your service clients. When service clients are created you can pass in additional configuration via the aws.Config type to override configuration provided in the Session and create service client instances with custom configuration. Once the service's client is created you can use it to make API requests to the AWS service. These clients are safe to use concurrently.

In the AWS SDK for Go, you can configure settings for service clients, such as the log level and maximum number of retries. Most settings are optional; however, for each service client, you must specify a region and your credentials. The SDK uses these values to send requests to the correct AWS region and sign requests with the correct credentials. You can specify these values as part of a session or as environment variables. See the SDK's configuration guide for more information. https://docs.aws.amazon.com/sdk-for-go/v1/developer-guide/configuring-sdk.html See the session package documentation for more information on how to use Session with the SDK. https://docs.aws.amazon.com/sdk-for-go/api/aws/session/ See the Config type in the aws package for more information on configuration options. https://docs.aws.amazon.com/sdk-for-go/api/aws/#Config

When using the SDK you'll generally need your AWS credentials to authenticate with AWS services. The SDK supports multiple methods of providing these credentials. By default the SDK will source credentials automatically from its default credential chain. See the session package for more information on this chain, and how to configure it. The common items in the credential chain are the following:

Environment Credentials - Set of environment variables that are useful when sub processes are created for specific roles.

Shared Credentials file (~/.aws/credentials) - This file stores your credentials based on a profile name and is useful for local development.

EC2 Instance Role Credentials - Use an EC2 Instance Role to assign credentials to applications running on an EC2 instance. This removes the need to manage credential files in production.

Credentials can be configured in code as well by setting the Config's Credentials value to a custom provider or using one of the providers included with the SDK to bypass the default credential chain and use a custom one. This is helpful when you want to instruct the SDK to only use a specific set of credentials or providers. The sketch following this section creates a credential provider for assuming an IAM role, "myRoleARN", and configures the S3 service client to use that role for API requests. See the credentials package documentation for more information on credential providers included with the SDK, and how to customize the SDK's usage of credentials. https://docs.aws.amazon.com/sdk-for-go/api/aws/credentials

The SDK has support for the shared configuration file (~/.aws/config). This support can be enabled by setting the environment variable "AWS_SDK_LOAD_CONFIG=1", or by enabling the feature in code when creating a Session via the Option's SharedConfigState parameter.

In addition to the credentials you'll need to specify the region the SDK will use to make AWS API requests to. In the SDK you can specify the region either with an environment variable, or directly in code when a Session or service client is created.
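Here is that sketch: a Session with a region, an assume-role credentials provider from the stscreds package, and an S3 client using those credentials. The role ARN and region are illustrative:

```go
package main

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/credentials/stscreds"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/s3"
)

func main() {
	// A Session holds shared configuration and is safe to reuse across clients.
	sess := session.Must(session.NewSession(&aws.Config{
		Region: aws.String("us-west-2"), // illustrative region
	}))

	// Credentials provider that assumes an IAM role, bypassing the default chain.
	creds := stscreds.NewCredentials(sess, "myRoleARN")

	// Service client with per-client overrides layered on top of the Session.
	svc := s3.New(sess, &aws.Config{Credentials: creds})

	out, err := svc.ListBuckets(&s3.ListBucketsInput{})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("buckets:", len(out.Buckets))
}
```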
The last value specified in code wins if the region is specified multiple ways. To set the region via the environment variable, set "AWS_REGION" to the region you want the SDK to use. Using this method to set the region will allow you to run your application in multiple regions without needing additional code in the application to select the region. The endpoints package includes constants for all regions the SDK knows. The values are all suffixed with RegionID. These values are helpful, because they reduce the need to type the region string manually.

To set the region on a Session, set the aws package's Config struct parameter Region to the AWS region you want the service clients created from the session to use. This is helpful when you want to create multiple service clients, and all of the clients make API requests to the same region. See the endpoints package for the AWS Regions and Endpoints metadata. https://docs.aws.amazon.com/sdk-for-go/api/aws/endpoints/

In addition to setting the region when creating a Session you can also set the region on a per service client basis. This overrides the region of a Session. This is helpful when you want to create service clients in specific regions different from the Session's region. See the Config type in the aws package for more information and additional options such as setting the Endpoint, and other service client configuration options. https://docs.aws.amazon.com/sdk-for-go/api/aws/#Config

Once the client is created you can make an API request to the service. Each API method takes an input parameter, and returns the service response and an error. The SDK provides methods for making the API call in multiple ways. In this list we'll use the S3 ListObjects API as an example for the different ways of making API requests.

ListObjects - Base API operation that will make the API request to the service.

ListObjectsRequest - API methods suffixed with Request will construct the API request, but not send it. This is also helpful when you want to get a presigned URL for a request, and share the presigned URL instead of your application making the request directly.

ListObjectsPages - Same as the base API operation, but uses a callback to automatically handle pagination of the API's response.

ListObjectsWithContext - Same as the base API operation, but adds support for the Context pattern. This is helpful for controlling the canceling of in-flight requests. See the Go standard library context package for more information. This method also takes the request package's Option functional options as the variadic argument for modifying how the request will be made, or extracting information from the raw HTTP response.

ListObjectsPagesWithContext - Same as ListObjectsPages, but adds support for the Context pattern. Similar to ListObjectsWithContext this method also takes the request package's Option function option types as the variadic argument.

In addition to the API operations the SDK also includes several higher level methods that abstract checking for and waiting for an AWS resource to be in a desired state. In this list we'll use WaitUntilBucketExists to demonstrate the different forms of waiters.

WaitUntilBucketExists - Makes API requests to query an AWS service for a resource's state. Will return successfully when that state is accomplished.

WaitUntilBucketExistsWithContext - Same as WaitUntilBucketExists, but adds support for the Context pattern.
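A brief sketch combining a context-aware waiter with paginated listing, in the styles listed above; the bucket name, region, and timeout are illustrative:

```go
package main

import (
	"context"
	"fmt"
	"log"
	"time"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/s3"
)

func main() {
	sess := session.Must(session.NewSession(&aws.Config{Region: aws.String("us-west-2")}))
	svc := s3.New(sess)

	ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
	defer cancel()

	// Wait until the bucket exists, bounded by the context deadline.
	if err := svc.WaitUntilBucketExistsWithContext(ctx, &s3.HeadBucketInput{
		Bucket: aws.String("my-bucket"), // illustrative bucket
	}); err != nil {
		log.Fatal(err)
	}

	// Paginated listing; the callback is invoked once per page of results.
	err := svc.ListObjectsPagesWithContext(ctx, &s3.ListObjectsInput{
		Bucket: aws.String("my-bucket"),
	}, func(page *s3.ListObjectsOutput, lastPage bool) bool {
		fmt.Println("objects in page:", len(page.Contents))
		return true // keep paging
	})
	if err != nil {
		log.Fatal(err)
	}
}
```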
In addition, these waiter methods take the request package's WaiterOptions to configure the waiter and how the underlying request will be made by the SDK.

The API method will document which error codes the service might return for the operation. These errors will also be available as const strings prefixed with "ErrCode" in the service client's package. If there are no errors listed in the API's SDK documentation you'll need to consult the AWS service's API documentation for the errors that could be returned.

Pagination helper methods are suffixed with "Pages", and provide the functionality needed to round trip API page requests. Pagination methods take a callback function that will be called for each page of the API's response.

Waiter helper methods provide the functionality to wait for an AWS resource state. These methods abstract the logic needed to check the state of an AWS resource, and wait until that resource is in a desired state. The waiter will block until the resource is in the state that is desired, an error occurs, or the waiter times out. If the waiter times out, the error code returned will be request.WaiterResourceNotReadyErrorCode.

This example shows a complete working Go file which will upload a file to S3 and use the Context pattern to implement timeout logic that will cancel the request if it takes too long. This example highlights how to use sessions, create a service client, make a request, handle the error, and process the response.

Deprecated: aws-sdk-go is deprecated. Use aws-sdk-go-v2. See https://aws.amazon.com/blogs/developer/announcing-end-of-support-for-aws-sdk-for-go-v1-on-july-31-2025/.
Package sarama is a pure Go client library for dealing with Apache Kafka (versions 0.8 and later). It includes a high-level API for easily producing and consuming messages, and a low-level API for controlling bytes on the wire when the high-level API is insufficient. Usage examples for the high-level APIs are provided inline with their full documentation. To produce messages, use either the AsyncProducer or the SyncProducer. The AsyncProducer accepts messages on a channel and produces them asynchronously in the background as efficiently as possible; it is preferred in most cases. The SyncProducer provides a method which will block until Kafka acknowledges the message as produced. This can be useful but comes with two caveats: it will generally be less efficient, and the actual durability guarantees depend on the configured value of `Producer.RequiredAcks`. There are configurations where a message acknowledged by the SyncProducer can still sometimes be lost. To consume messages, use Consumer or Consumer-Group API. For lower-level needs, the Broker and Request/Response objects permit precise control over each connection and message sent on the wire; the Client provides higher-level metadata management that is shared between the producers and the consumer. The Request/Response objects and properties are mostly undocumented, as they line up exactly with the protocol fields documented by Kafka at https://cwiki.apache.org/confluence/display/KAFKA/A+Guide+To+The+Kafka+Protocol Metrics are exposed through https://github.com/rcrowley/go-metrics library in a local registry. Broker related metrics: Note that we do not gather specific metrics for seed brokers but they are part of the "all brokers" metrics. Producer related metrics: Consumer related metrics:
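A minimal sketch of producing a message with the SyncProducer; the broker address and topic are illustrative, and the import path shown is the current IBM fork (older releases live at github.com/Shopify/sarama):

```go
package main

import (
	"log"

	"github.com/IBM/sarama"
)

func main() {
	config := sarama.NewConfig()
	config.Producer.RequiredAcks = sarama.WaitForAll // durability depends on this setting
	config.Producer.Return.Successes = true          // required by the SyncProducer

	producer, err := sarama.NewSyncProducer([]string{"localhost:9092"}, config)
	if err != nil {
		log.Fatal(err)
	}
	defer producer.Close()

	partition, offset, err := producer.SendMessage(&sarama.ProducerMessage{
		Topic: "my-topic", // illustrative topic
		Value: sarama.StringEncoder("hello"),
	})
	if err != nil {
		log.Fatal(err)
	}
	log.Printf("delivered to partition %d at offset %d", partition, offset)
}
```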
Package cloud is the root of the packages used to access Google Cloud Services. See https://pkg.go.dev/cloud.google.com/go#section-directories for a full list of sub-modules.

All clients in sub-packages are configurable via client options. These options are described here: https://pkg.go.dev/google.golang.org/api/option.

Endpoint configuration is used to specify the URL to which requests are sent. It is used for services that support or require regional endpoints, as well as for other use cases such as testing against fake servers. For example, the Vertex AI service recommends that you configure the endpoint to the location with the features you want that is closest to your physical location or the location of your users. There is no global endpoint for Vertex AI. See Vertex AI - Locations for more details. The following example demonstrates configuring a Vertex AI client with a regional endpoint:

All of the clients support authentication via Google Application Default Credentials, or by providing a JSON key file for a Service Account. See examples below. Google Application Default Credentials (ADC) is the recommended way to authorize and authenticate clients. For information on how to create and obtain Application Default Credentials, see https://cloud.google.com/docs/authentication/production. If you have your environment configured correctly you will not need to pass any extra information to the client libraries. Here is an example of a client using ADC to authenticate:

You can use a file with credentials to authenticate and authorize, such as a JSON key file associated with a Google service account. Service Account keys can be created and downloaded from https://console.cloud.google.com/iam-admin/serviceaccounts. This example uses the Secret Manager client, but the same steps apply to all other client libraries in this package as well. Example:

In some cases (for instance, you don't want to store secrets on disk), you can create credentials from in-memory JSON and use the WithCredentials option. This example uses the Secret Manager client, but the same steps apply to all other client libraries as well. Note that scopes can be found at https://developers.google.com/identity/protocols/oauth2/scopes, and are also provided in all auto-generated libraries: for example, cloud.google.com/go/secretmanager/apiv1 provides DefaultAuthScopes. Example:

By default, non-streaming methods, like Create or Get, will have a default deadline applied to the context provided at call time, unless a context deadline is already set. Streaming methods have no default deadline and will run indefinitely. To set timeouts or arrange for cancellation, use context. Transient errors will be retried when correctness allows. Here is an example of setting a timeout for an RPC using context.WithTimeout: Here is an example of setting a timeout for an RPC using github.com/googleapis/gax-go/v2.WithTimeout: Here is an example of using context.WithCancel to arrange for an RPC to be canceled: Do not attempt to control the initial connection (dialing) of a service by setting a timeout on the context passed to NewClient. Dialing is non-blocking, so timeouts would be ineffective and would only interfere with credential refreshing, which uses the same context.

Regardless of which transport is used, request headers can be set in the same way using callctx.SetHeaders. There are some header keys that Google reserves for internal use that must not be overwritten; a generic example of setting a custom header follows.
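A sketch of setting a custom header with callctx.SetHeaders, using the Secret Manager client as the example; the header key/value and resource name are illustrative:

```go
package main

import (
	"context"
	"log"

	secretmanager "cloud.google.com/go/secretmanager/apiv1"
	"cloud.google.com/go/secretmanager/apiv1/secretmanagerpb"
	"github.com/googleapis/gax-go/v2/callctx"
)

func main() {
	ctx := context.Background()
	client, err := secretmanager.NewClient(ctx) // uses Application Default Credentials
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// Attach a custom header to this call's context; reserved x-goog-* keys
	// must not be set this way.
	ctx = callctx.SetHeaders(ctx, "x-custom-header", "value") // illustrative header

	_, err = client.GetSecret(ctx, &secretmanagerpb.GetSecretRequest{
		Name: "projects/my-project/secrets/my-secret", // illustrative resource name
	})
	if err != nil {
		log.Fatal(err)
	}
}
```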
The following header keys are broadly considered reserved and should not be conveyed by client library users unless instructed to do so:

* `x-goog-api-client`
* `x-goog-request-params`

Be sure to check the individual package documentation for other service-specific reserved headers. For example, Storage supports a specific auditing header that is mentioned in that module's documentation.

Google Cloud services respect system parameters that can be used to augment request and/or response behavior. For the most part, they are not needed when using one of the enclosed client libraries. However, those that may be necessary are made available via the callctx package. If not present there, consider opening an issue on that repo to request a new constant.

Connection pooling differs in clients based on their transport. Cloud clients either rely on HTTP or gRPC transports to communicate with Google Cloud. Cloud clients that use HTTP rely on the underlying HTTP transport to cache connections for later re-use. These are cached to the http.MaxIdleConns and http.MaxIdleConnsPerHost settings in http.DefaultTransport by default. For gRPC clients, connection pooling is configurable. Users of Cloud Client Libraries may specify google.golang.org/api/option.WithGRPCConnectionPool as a client option to NewClient calls. This configures the underlying gRPC connections to be pooled and accessed in a round robin fashion.

Minimal container images like Alpine lack CA certificates. This causes RPCs to appear to hang, because gRPC retries indefinitely. See https://github.com/googleapis/google-cloud-go/issues/928 for more information.

For tips on how to debug code that calls into our libraries, check out our Debugging Guide. For tips on how to write tests against code that calls into our libraries, check out our Testing Guide.

Most of the errors returned by the generated clients are wrapped in a github.com/googleapis/gax-go/v2/apierror.APIError and can be further unwrapped into a google.golang.org/grpc/status.Status or google.golang.org/api/googleapi.Error depending on the transport used to make the call (gRPC or REST). Converting your errors to these types can be a useful way to get more information about what went wrong while debugging. APIError gives access to specific details in the error. The transport-specific errors can still be unwrapped using the APIError.

Semver is used to communicate stability of the sub-modules of this package. Note, some stable sub-modules do contain packages, and sometimes features, that are considered unstable. If something is unstable it will be explicitly labeled as such in its package documentation. Clients that contain alpha and beta in their import path may change or go away without notice. Clients marked stable will maintain compatibility with future versions for as long as we can reasonably sustain. Incompatible changes might be made in some situations, including:
Package api is the root of the packages used to access Google Cloud Services. See https://godoc.org/google.golang.org/api for a full list of sub-packages. Within api there exist numerous clients which connect to Google APIs, and various utility packages.

All clients in sub-packages are configurable via client options. These options are described here: https://godoc.org/google.golang.org/api/option.

All the clients in sub-packages support authentication via Google Application Default Credentials (see https://cloud.google.com/docs/authentication/production), or by providing a JSON key file for a Service Account. See the authentication examples in https://godoc.org/google.golang.org/api/transport for more details.

Due to the auto-generated nature of this collection of libraries, complete APIs or specific versions can appear or go away without notice. As a result, you should always locally vendor any API(s) that your code relies upon. Google APIs follow semver as specified by https://cloud.google.com/apis/design/versioning. The code generator and the code it produces - the libraries in the google.golang.org/api/... subpackages - are beta. Note that versioning and stability is strictly not communicated through Go modules. Go modules are used only for dependency management.

Many parameters are specified using ints. However, underlying APIs might operate on a finer granularity, expecting int64, int32, uint64, or uint32, all of which have different maximum values. Consequently, specifying an int parameter in one of these clients may result in an error from the API because the value is too large. To see the exact type of int that the API expects, you can inspect the API's discovery doc. A global catalogue pointing to the discovery doc of APIs can be found at https://www.googleapis.com/discovery/v1/apis.

The ForceSendFields field can be found on all Request/Response structs in the generated clients. All of these types have the JSON `omitempty` field tag present on their fields. This means if a type is set to its default value it will not be marshalled. Sometimes you may actually want to send a default value, for instance sending an int of `0`. In this case you can override the `omitempty` feature by adding the field name to the `ForceSendFields` slice. See docs on any struct for more details. This may be used to include empty fields in Patch requests.

The NullFields field can be found on all Request/Response structs in the generated clients. It can be used to send JSON null values for the listed fields. By default, fields with empty values are omitted from API requests because of the presence of the `omitempty` field tag on all fields. However, any field with an empty value appearing in NullFields will be sent to the server as null. It is an error if a field in this list has a non-empty value. This may be used to include null fields in Patch requests.

An error returned by a client's Do method may be cast to a *googleapi.Error or unwrapped to an *apierror.APIError. The https://pkg.go.dev/google.golang.org/api/googleapi#Error type is useful for getting the HTTP status code: The https://pkg.go.dev/github.com/googleapis/gax-go/v2/apierror#APIError type is useful for inspecting structured details of the underlying API response, such as the reason for the error and the error domain, which is typically the registered service name of the tool or product that generated the error:

If an API call returns an Operation, that means it could take some time to complete the work initiated by the API call.
Applications that are interested in the end result of the operation they initiated should wait until the Operation.Done field indicates it is finished. To do this, use the service's Operation client, and a loop, like so:
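A rough sketch of such a loop; svc, the initiating call, and the Operations.Get accessor are hypothetical stand-ins for whichever generated client is in use, and the imports (log, time) are assumed to be present:

```go
// Hypothetical sketch: "svc" is a generated client whose calls return *Operation
// values with Done and Name fields, and which exposes an Operations.Get method.
op, err := svc.StartLongRunningWork(parent, req).Do() // hypothetical call
if err != nil {
	log.Fatal(err)
}
for !op.Done {
	time.Sleep(5 * time.Second) // simple fixed polling interval for illustration
	op, err = svc.Operations.Get(op.Name).Do()
	if err != nil {
		log.Fatal(err)
	}
}
```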
Package storage provides an easy way to work with Google Cloud Storage. Google Cloud Storage stores data in named objects, which are grouped into buckets. More information about Google Cloud Storage is available at https://cloud.google.com/storage/docs. See https://pkg.go.dev/cloud.google.com/go for authentication, timeouts, connection pooling and similar aspects of this package.

To start working with this package, create a Client: The client will use your default application credentials. Clients should be reused instead of created as needed. The methods of Client are safe for concurrent use by multiple goroutines. You may configure the client by passing in options from the google.golang.org/api/option package. You may also use options defined in this package, such as WithJSONReads. If you only wish to access public data, you can create an unauthenticated client with

To use an emulator with this library, you can set the STORAGE_EMULATOR_HOST environment variable to the address at which your emulator is running. This will send requests to that address instead of to Cloud Storage. You can then create and use a client as usual: Please note that there is no official emulator for Cloud Storage.

A Google Cloud Storage bucket is a collection of objects. To work with a bucket, make a bucket handle: A handle is a reference to a bucket. You can have a handle even if the bucket doesn't exist yet. To create a bucket in Google Cloud Storage, call BucketHandle.Create: Note that although buckets are associated with projects, bucket names are global across all projects. Each bucket has associated metadata, represented in this package by BucketAttrs. The third argument to BucketHandle.Create allows you to set the initial BucketAttrs of a bucket. To retrieve a bucket's attributes, use BucketHandle.Attrs:

An object holds arbitrary data as a sequence of bytes, like a file. You refer to objects using a handle, just as with buckets, but unlike buckets you don't explicitly create an object. Instead, the first time you write to an object it will be created. You can use the standard Go io.Reader and io.Writer interfaces to read and write object data: Objects also have attributes, which you can fetch with ObjectHandle.Attrs: Listing objects in a bucket is done with the BucketHandle.Objects method: Objects are listed lexicographically by name. To filter objects lexicographically, Query.StartOffset and/or Query.EndOffset can be used: If only a subset of object attributes is needed when listing, specifying this subset using Query.SetAttrSelection may speed up the listing process:

Both objects and buckets have ACLs (Access Control Lists). An ACL is a list of ACLRules, each of which specifies the role of a user, group or project. ACLs are suitable for fine-grained control, but you may prefer using IAM to control access at the project level (see the Cloud Storage IAM docs). To list the ACLs of a bucket or object, obtain an ACLHandle and call ACLHandle.List: You can also set and delete ACLs.

Every object has a generation and a metageneration. The generation changes whenever the content changes, and the metageneration changes whenever the metadata changes. Conditions let you check these values before an operation; the operation only executes if the conditions match. You can use conditions to prevent race conditions in read-modify-write operations. For example, say you've read an object's metadata into objAttrs. Now you want to write to that object, but only if its contents haven't changed since you read it.
Here is how to express that:

You can obtain a URL that lets anyone read or write an object for a limited time. Signing a URL requires credentials authorized to sign a URL. To use the same authentication that was used when instantiating the Storage client, use BucketHandle.SignedURL. You can also sign a URL without creating a client. See the documentation of SignedURL for details.

A POST policy V4 signed request is a type of signed request that allows uploads through HTML forms directly to Cloud Storage with temporary permission. Conditions can be applied to restrict how the HTML form is used and exercised by a user. For more information, please see the XML POST Object docs as well as the documentation of BucketHandle.GenerateSignedPostPolicyV4.

If the GoogleAccessID and PrivateKey option fields are not provided, they will be automatically detected by BucketHandle.SignedURL and BucketHandle.GenerateSignedPostPolicyV4 if any of the following are true: Detecting GoogleAccessID may not be possible if you are authenticated using a token source or using option.WithHTTPClient. In this case, you can provide a service account email for GoogleAccessID and the client will attempt to sign the URL or Post Policy using that service account. To generate the signature, you must have:

Errors returned by this client are often of the type github.com/googleapis/gax-go/v2/apierror. The apierror.APIError type can wrap a google.golang.org/grpc/status.Status if gRPC was used, or a google.golang.org/api/googleapi.Error if HTTP/REST was used. You might also encounter googleapi.Error directly from HTTP operations. These types of errors can be inspected for more information by using errors.As to access the specific underlying error types and retrieve detailed information, including HTTP or gRPC status codes. For example: This library may also return other errors that are not wrapped as apierror.APIError. For example, errors with authentication may return cloud.google.com/go/auth.Error.

Methods in this package may retry calls that fail with transient errors. Retrying continues indefinitely unless the controlling context is canceled, the client is closed, or a non-transient error is received. To stop retries from continuing, use context timeouts or cancellation. The retry strategy in this library follows best practices for Cloud Storage. By default, operations are retried only if they are idempotent, and exponential backoff with jitter is employed. In addition, errors are only retried if they are defined as transient by the service. See the Cloud Storage retry docs for more information. Users can configure non-default retry behavior for a single library call (using BucketHandle.Retryer and ObjectHandle.Retryer) or for all calls made by a client (using Client.SetRetry). For example:

You can add custom headers to any API call made by this package by using callctx.SetHeaders on the context which is passed to the method. For example, to add a custom audit logging header:

This package includes support for the Cloud Storage gRPC API. This implementation uses gRPC rather than the default JSON & XML APIs to make requests to Cloud Storage. All methods on the Client support the gRPC API, with the exception of the Client.ServiceAccount, Notification, and HMACKey methods. The Cloud Storage gRPC API is generally available. To create a client which will use gRPC, use the alternate constructor: One major advantage of the gRPC API is that it can use Direct Connectivity, enabling requests to skip some proxy steps and reducing response latency.
Requirements to use Direct Connectivity include: Additional requirements for Direct Connectivity are documented in the Cloud Storage gRPC docs. If all requirements are met, the client will use Direct Connectivity by default without requiring any client options or environment variables. To disable Direct Connectivity, you can set the environment variable GOOGLE_CLOUD_DISABLE_DIRECT_PATH=true.

Dependencies for the gRPC API may slightly increase the size of binaries for applications depending on this package. If you are not using gRPC, you can use the build tag `disable_grpc_modules` to opt out of these dependencies and reduce the binary size.

The gRPC client is instrumented with Open Telemetry metrics which export to Cloud Monitoring by default. More information is available in the gRPC client-side metrics documentation, including information about roles which must be enabled in order to do the export successfully. To disable this export, you can use the WithDisabledClientMetrics client option.

The client automatically computes and sends CRC32C checksums for uploads using Writer, providing an additional layer of data integrity validation with a slight CPU overhead. Note: With a chunk size of 0 (no buffering) in JSON uploads, an auto-calculated checksum mismatch returns an error but may leave corrupt data on the server, requiring manual cleanup. This risk does not apply to single-shot uploads when a user-provided checksum is provided. Automatic checksumming can be disabled using Writer.DisableAutoChecksum.

The parallel upload feature splits a large object into multiple parts and uploads them in parallel. It is supported exclusively for gRPC clients. If used with a JSON client, the configuration is ignored and a standard upload is performed. Parallel uploads can yield higher throughput when uploading large objects. However, there are several things which must be kept in mind when choosing to use this strategy: Note: This feature is currently experimental and its API surface may change in future releases. It is not yet recommended for production use.

Certain control plane and long-running operations for Cloud Storage (including Folder and Managed Folder operations) are supported via the autogenerated Storage Control client, which is available as a subpackage in this module. See package docs at cloud.google.com/go/storage/control/apiv2 or reference the Storage Control API docs.
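A minimal sketch of the bucket/object handle pattern described earlier, writing an object and reading it back; the client uses Application Default Credentials, and the bucket and object names are illustrative:

```go
package main

import (
	"context"
	"fmt"
	"io"
	"log"

	"cloud.google.com/go/storage"
)

func main() {
	ctx := context.Background()
	client, err := storage.NewClient(ctx) // uses Application Default Credentials
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	obj := client.Bucket("my-bucket").Object("notes.txt") // illustrative names

	// Write: the object is created on the first write.
	w := obj.NewWriter(ctx)
	if _, err := w.Write([]byte("hello world")); err != nil {
		log.Fatal(err)
	}
	if err := w.Close(); err != nil { // Close reports any final upload error
		log.Fatal(err)
	}

	// Read it back.
	r, err := obj.NewReader(ctx)
	if err != nil {
		log.Fatal(err)
	}
	defer r.Close()
	data, err := io.ReadAll(r)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(string(data))
}
```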
Package pgx is a PostgreSQL database driver. pgx provides lower level access to PostgreSQL than the standard database/sql. It remains as similar to the database/sql interface as possible while providing better speed and access to PostgreSQL specific features. Import github.com/jackc/pgx/stdlib to use pgx as a database/sql compatible driver.

pgx implements Query and Scan in the familiar database/sql style. pgx also implements QueryRow in the same style as database/sql. Use Exec to execute a query that does not return a result set.

Connection pool usage is explicit and configurable. In pgx, a connection can be created and managed directly, or a connection pool with a configurable maximum connections can be used. The connection pool offers an after connect hook that allows every connection to be automatically set up before being made available in the connection pool. It delegates methods such as QueryRow to an automatically checked out and released connection so you can avoid manually acquiring and releasing connections when you do not need that level of control.

pgx maps between all common base types directly between Go and PostgreSQL. In particular: pgx can map nulls in two ways. The first is the pgtype package, which provides types that have a data field and a status field. They work in a similar fashion to database/sql. The second is to use a pointer to a pointer.

pgx maps between int16, int32, int64, float32, float64, and string Go slices and the equivalent PostgreSQL array type. Go slices of native types do not support nulls, so if a PostgreSQL array that contains a null is read into a native Go slice an error will occur. The pgtype package includes many more array types for PostgreSQL types that do not directly map to native Go types.

pgx includes built-in support to marshal and unmarshal between Go types and the PostgreSQL JSON and JSONB types.

pgx encodes from net.IPNet to and from inet and cidr PostgreSQL types. In addition, as a convenience pgx will encode from a net.IP; it will assume a /32 netmask for IPv4 and a /128 for IPv6.

pgx includes support for the common data types like integers, floats, strings, dates, and times that have direct mappings between Go and SQL. In addition, pgx uses the github.com/jackc/pgx/pgtype library to support more types. See the documentation for that library for instructions on how to implement custom types. See example_custom_type_test.go for an example of a custom type for the PostgreSQL point type. pgx also includes support for custom types implementing the database/sql.Scanner and database/sql/driver.Valuer interfaces.

If pgx cannot natively encode a type and that type is a renamed type (e.g. type MyTime time.Time) pgx will attempt to encode the underlying type. While this is usually desired behavior it can produce surprising behavior if one of the underlying type and the renamed type implements database/sql interfaces and the other implements pgx interfaces. It is recommended that this situation be avoided by implementing pgx interfaces on the renamed type.

[]byte passed as arguments to Query, QueryRow, and Exec are passed unmodified to PostgreSQL.

Transactions are started by calling Begin or BeginEx. The BeginEx variant can create a transaction with a specified isolation level.

Use CopyFrom to efficiently insert multiple rows at a time using the PostgreSQL copy protocol. CopyFrom accepts a CopyFromSource interface. If the data is already in a [][]interface{} use CopyFromRows to wrap it in a CopyFromSource interface.
Or implement CopyFromSource to avoid buffering the entire data set in memory. CopyFrom can be faster than an insert with as few as 5 rows. pgx can listen to the PostgreSQL notification system with the WaitForNotification function. It takes a maximum time to wait for a notification. The pgx ConnConfig struct has a TLSConfig field. If this field is nil, then TLS will be disabled. If it is present, then it will be used to configure the TLS connection. This allows total configuration of the TLS connection. pgx has never explicitly supported Postgres < 9.6's `ssl_renegotiation` option. As of v3.3.0, it doesn't send `ssl_renegotiation: 0` either to support Redshift (https://github.com/jackc/pgx/pull/476). If you need TLS Renegotiation, consider supplying `ConnConfig.TLSConfig` with a non-zero `Renegotiation` value and if it's not the default on your server, set `ssl_renegotiation` via `ConnConfig.RuntimeParams`. pgx defines a simple logger interface. Connections optionally accept a logger that satisfies this interface. Set LogLevel to control logging verbosity. Adapters for github.com/inconshreveable/log15, github.com/sirupsen/logrus, and the testing log are provided in the log directory.
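A minimal sketch of connecting and querying in the style this package documents; the connection parameters and table names are illustrative, and later major versions of pgx use context-aware method variants instead:

```go
package main

import (
	"fmt"
	"log"

	"github.com/jackc/pgx"
)

func main() {
	conn, err := pgx.Connect(pgx.ConnConfig{
		Host:     "localhost", // illustrative connection parameters
		Database: "mydb",
		User:     "postgres",
	})
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	// QueryRow in the familiar database/sql style.
	var widgets int
	if err := conn.QueryRow("select count(*) from widgets").Scan(&widgets); err != nil {
		log.Fatal(err)
	}
	fmt.Println("widgets:", widgets)

	// Exec for statements that do not return rows.
	if _, err := conn.Exec("insert into widgets(name) values($1)", "sprocket"); err != nil {
		log.Fatal(err)
	}
}
```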
Package controllerruntime provides tools to construct Kubernetes-style controllers that manipulate both Kubernetes CRDs and aggregated/built-in Kubernetes APIs. It defines easy helpers for the common use cases when building CRDs, built on top of customizable layers of abstraction. Common cases should be easy, and uncommon cases should be possible. In general, controller-runtime tries to guide users towards Kubernetes controller best-practices.

The main entrypoint for controller-runtime is this root package, which contains all of the common types needed to get started building controllers: The examples in this package walk through a basic controller setup. The kubebuilder book (https://book.kubebuilder.io) has some more in-depth walkthroughs. controller-runtime favors structs with sane defaults over constructors, so it's fairly common to see structs being used directly in controller-runtime.

A brief-ish walkthrough of the layout of this library can be found below. Each package contains more information about how to use it. Frequently asked questions about using controller-runtime and designing controllers can be found at https://github.com/kubernetes-sigs/controller-runtime/blob/main/FAQ.md.

Every controller and webhook is ultimately run by a Manager (pkg/manager). A manager is responsible for running controllers and webhooks, and setting up common dependencies, like shared caches and clients, as well as managing leader election (pkg/leaderelection). Managers are generally configured to gracefully shut down controllers on pod termination by wiring up a signal handler (pkg/manager/signals).

Controllers (pkg/controller) use events (pkg/event) to eventually trigger reconcile requests. They may be constructed manually, but are often constructed with a Builder (pkg/builder), which eases the wiring of event sources (pkg/source), like Kubernetes API object changes, to event handlers (pkg/handler), like "enqueue a reconcile request for the object owner". Predicates (pkg/predicate) can be used to filter which events actually trigger reconciles. There are pre-written utilities for the common cases, and interfaces and helpers for advanced cases.

Controller logic is implemented in terms of Reconcilers (pkg/reconcile). A Reconciler implements a function which takes a reconcile Request containing the name and namespace of the object to reconcile, reconciles the object, and returns a Result or an error indicating whether to requeue for a second round of processing.

Reconcilers use Clients (pkg/client) to access API objects. The default client provided by the manager reads from a local shared cache (pkg/cache) and writes directly to the API server, but clients can be constructed that only talk to the API server, without a cache. The Cache will auto-populate with watched objects, as well as when other structured objects are requested. The default split client does not promise to invalidate the cache during writes (nor does it promise sequential create/get coherence), and code should not assume a get immediately following a create/update will return the updated resource. Caches may also have indexes, which can be created via a FieldIndexer (pkg/client) obtained from the manager. Indexes can be used to quickly and easily look up all objects with certain fields set. Reconcilers may retrieve event recorders (pkg/recorder) to emit events using the manager.
Clients, Caches, and many other things in Kubernetes use Schemes (pkg/scheme) to associate Go types to Kubernetes API Kinds (Group-Version-Kinds, to be specific).

Similarly, webhooks (pkg/webhook/admission) may be implemented directly, but are often constructed using a builder (pkg/webhook/admission/builder). They are run via a server (pkg/webhook) which is managed by a Manager.

Logging (pkg/log) in controller-runtime is done via structured logs, using a set of logging interfaces called logr (https://pkg.go.dev/github.com/go-logr/logr). While controller-runtime provides easy setup for using Zap (https://go.uber.org/zap, pkg/log/zap), you can provide any implementation of logr as the base logger for controller-runtime.

Metrics (pkg/metrics) provided by controller-runtime are registered into a controller-runtime-specific Prometheus metrics registry. The manager can serve these via an HTTP endpoint, and additional metrics may be registered to this Registry as normal.

You can easily build integration and unit tests for your controllers and webhooks using the test Environment (pkg/envtest). This will automatically stand up a copy of etcd and kube-apiserver, and provide the correct options to connect to the API server. It's designed to work well with the Ginkgo testing framework, but should work with any testing setup.

This example creates a simple application Controller that is configured for ReplicaSets and Pods.

* Create a new application for ReplicaSets that manages Pods owned by the ReplicaSet and calls into ReplicaSetReconciler.

* Start the application.

This example creates a simple application Controller that is configured for the ExampleCRDWithConfigMapRef CRD. Any change in the configMap referenced in this Custom Resource will cause the re-reconcile of the parent ExampleCRDWithConfigMapRef due to the implementation of the .Watches method of "sigs.k8s.io/controller-runtime/pkg/builder".Builder.

This example creates a simple application Controller that is configured for ReplicaSets and Pods. This application controller will be running leader election with the provided configuration in the manager options. If leader election configuration is not provided, the controller runs leader election with default values. Default values taken from: https://github.com/kubernetes/component-base/blob/master/config/v1alpha1/defaults.go

* defaultLeaseDuration = 15 * time.Second
* defaultRenewDeadline = 10 * time.Second
* defaultRetryPeriod = 2 * time.Second

* Create a new application for ReplicaSets that manages Pods owned by the ReplicaSet and calls into ReplicaSetReconciler.

* Start the application.
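A compact sketch of the Manager/Builder/Reconciler wiring for a ReplicaSet controller along the lines of the examples described above; error handling is abbreviated:

```go
package main

import (
	"context"
	"os"

	appsv1 "k8s.io/api/apps/v1"
	ctrl "sigs.k8s.io/controller-runtime"
	"sigs.k8s.io/controller-runtime/pkg/client"
	"sigs.k8s.io/controller-runtime/pkg/log"
)

// ReplicaSetReconciler reconciles ReplicaSet objects.
type ReplicaSetReconciler struct {
	client.Client
}

func (r *ReplicaSetReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
	var rs appsv1.ReplicaSet
	if err := r.Get(ctx, req.NamespacedName, &rs); err != nil {
		return ctrl.Result{}, client.IgnoreNotFound(err)
	}
	log.FromContext(ctx).Info("reconciling", "replicas", rs.Spec.Replicas)
	return ctrl.Result{}, nil
}

func main() {
	mgr, err := ctrl.NewManager(ctrl.GetConfigOrDie(), ctrl.Options{})
	if err != nil {
		os.Exit(1)
	}
	if err := ctrl.NewControllerManagedBy(mgr).
		For(&appsv1.ReplicaSet{}).
		Complete(&ReplicaSetReconciler{Client: mgr.GetClient()}); err != nil {
		os.Exit(1)
	}
	// Blocks until the signal handler's context is canceled.
	if err := mgr.Start(ctrl.SetupSignalHandler()); err != nil {
		os.Exit(1)
	}
}
```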
Package sarama is a pure Go client library for dealing with Apache Kafka (versions 0.8 and later). It includes a high-level API for easily producing and consuming messages, and a low-level API for controlling bytes on the wire when the high-level API is insufficient. Usage examples for the high-level APIs are provided inline with their full documentation. To produce messages, use either the AsyncProducer or the SyncProducer. The AsyncProducer accepts messages on a channel and produces them asynchronously in the background as efficiently as possible; it is preferred in most cases. The SyncProducer provides a method which will block until Kafka acknowledges the message as produced. This can be useful but comes with two caveats: it will generally be less efficient, and the actual durability guarantees depend on the configured value of `Producer.RequiredAcks`. There are configurations where a message acknowledged by the SyncProducer can still sometimes be lost. To consume messages, use the Consumer. Note that Sarama's Consumer implementation does not currently support automatic consumer-group rebalancing and offset tracking. For Zookeeper-based tracking (Kafka 0.8.2 and earlier), the https://github.com/wvanbergen/kafka library builds on Sarama to add this support. For Kafka-based tracking (Kafka 0.9 and later), the https://github.com/bsm/sarama-cluster library builds on Sarama to add this support. For lower-level needs, the Broker and Request/Response objects permit precise control over each connection and message sent on the wire; the Client provides higher-level metadata management that is shared between the producers and the consumer. The Request/Response objects and properties are mostly undocumented, as they line up exactly with the protocol fields documented by Kafka at https://cwiki.apache.org/confluence/display/KAFKA/A+Guide+To+The+Kafka+Protocol Metrics are exposed through https://github.com/rcrowley/go-metrics library in a local registry. Broker related metrics: Note that we do not gather specific metrics for seed brokers but they are part of the "all brokers" metrics. Producer related metrics:
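A minimal sketch of consuming a single partition with the Consumer described above; the broker address, topic, and partition are illustrative, and offset tracking is left to the caller:

```go
package main

import (
	"log"

	"github.com/Shopify/sarama"
)

func main() {
	consumer, err := sarama.NewConsumer([]string{"localhost:9092"}, sarama.NewConfig())
	if err != nil {
		log.Fatal(err)
	}
	defer consumer.Close()

	// Consume one partition starting at the newest offset.
	pc, err := consumer.ConsumePartition("my-topic", 0, sarama.OffsetNewest)
	if err != nil {
		log.Fatal(err)
	}
	defer pc.Close()

	for msg := range pc.Messages() {
		log.Printf("offset %d: %s", msg.Offset, string(msg.Value))
	}
}
```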
Package gocui allows creating console user interfaces.

Create a new GUI: Set GUI managers: Managers are in charge of the GUI's layout and can be used to build widgets. On each iteration of the GUI's main loop, the Layout function of each configured manager is executed. Managers are used to set up and update the application's main views, and it is possible to freely change them during execution. Also, it is important to mention that a main loop iteration is executed on each reported event (key-press, mouse event, window resize, etc).

GUIs are composed of Views; you can think of them as buffers. Views implement the io.ReadWriter interface, so you can just write to them if you want to modify their content. The same is valid for reading. Create and initialize a view with absolute coordinates: Views can also be created using relative coordinates:

Configure keybindings: gocui implements full mouse support that can be enabled with: Mouse events are handled like any other keybinding:

IMPORTANT: Views can only be created, destroyed or updated in three ways: from the Layout function within managers, from keybinding callbacks or via *Gui.Update(). The reason for this is that it allows gocui to be concurrent-safe. So, if you want to update your GUI from a goroutine, you must use *Gui.Update(). For example:

By default, gocui provides a basic editing mode. This mode can be extended and customized by creating a new Editor and assigning it to *View.Editor: DefaultEditor can be taken as an example to create your own custom Editor:

Colored text: Views allow adding colored text using ANSI colors. For example: For more information, see the examples in folder "_examples/".
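A minimal sketch tying these pieces together, written against the jroimartin/gocui API; the view name, coordinates, and keybinding are illustrative:

```go
package main

import (
	"fmt"
	"log"

	"github.com/jroimartin/gocui"
)

// layout is a manager function; it runs on every main loop iteration.
func layout(g *gocui.Gui) error {
	maxX, maxY := g.Size()
	// SetView returns ErrUnknownView the first time the view is created.
	if v, err := g.SetView("hello", maxX/2-7, maxY/2, maxX/2+7, maxY/2+2); err != nil {
		if err != gocui.ErrUnknownView {
			return err
		}
		fmt.Fprintln(v, "Hello world!")
	}
	return nil
}

func quit(g *gocui.Gui, v *gocui.View) error {
	return gocui.ErrQuit
}

func main() {
	g, err := gocui.NewGui(gocui.OutputNormal)
	if err != nil {
		log.Fatal(err)
	}
	defer g.Close()

	g.SetManagerFunc(layout)

	if err := g.SetKeybinding("", gocui.KeyCtrlC, gocui.ModNone, quit); err != nil {
		log.Fatal(err)
	}
	if err := g.MainLoop(); err != nil && err != gocui.ErrQuit {
		log.Fatal(err)
	}
}
```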
Package dynamodb provides the API client, operations, and parameter types for Amazon DynamoDB. Amazon DynamoDB is a fully managed NoSQL database service that provides fast and predictable performance with seamless scalability. DynamoDB lets you offload the administrative burdens of operating and scaling a distributed database, so that you don't have to worry about hardware provisioning, setup and configuration, replication, software patching, or cluster scaling. With DynamoDB, you can create database tables that can store and retrieve any amount of data, and serve any level of request traffic. You can scale up or scale down your tables' throughput capacity without downtime or performance degradation, and use the Amazon Web Services Management Console to monitor resource utilization and performance metrics. DynamoDB automatically spreads the data and traffic for your tables over a sufficient number of servers to handle your throughput and storage requirements, while maintaining consistent and fast performance. All of your data is stored on solid state disks (SSDs) and automatically replicated across multiple Availability Zones in an Amazon Web Services Region, providing built-in high availability and data durability.
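A minimal sketch of creating the client with the AWS SDK for Go v2 and calling a simple operation; credentials and region come from the default configuration sources:

```go
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go-v2/config"
	"github.com/aws/aws-sdk-go-v2/service/dynamodb"
)

func main() {
	ctx := context.Background()

	// Load credentials and region from the environment, shared config, etc.
	cfg, err := config.LoadDefaultConfig(ctx)
	if err != nil {
		log.Fatal(err)
	}

	client := dynamodb.NewFromConfig(cfg)

	out, err := client.ListTables(ctx, &dynamodb.ListTablesInput{})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("tables:", out.TableNames)
}
```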
Package ecs provides the API client, operations, and parameter types for Amazon EC2 Container Service. Amazon Elastic Container Service (Amazon ECS) is a highly scalable, fast, container management service. It makes it easy to run, stop, and manage Docker containers. You can host your cluster on a serverless infrastructure that's managed by Amazon ECS by launching your services or tasks on Fargate. For more control, you can host your tasks on a cluster of Amazon Elastic Compute Cloud (Amazon EC2) or External (on-premises) instances that you manage. Amazon ECS makes it easy to launch and stop container-based applications with simple API calls. This makes it easy to get the state of your cluster from a centralized service, and gives you access to many familiar Amazon EC2 features. You can use Amazon ECS to schedule the placement of containers across your cluster based on your resource needs, isolation policies, and availability requirements. With Amazon ECS, you don't need to operate your own cluster management and configuration management systems. You also don't need to worry about scaling your management infrastructure.
Package cloudformation provides the API client, operations, and parameter types for AWS CloudFormation. CloudFormation allows you to create and manage Amazon Web Services infrastructure deployments predictably and repeatedly. You can use CloudFormation to leverage Amazon Web Services products, such as Amazon Elastic Compute Cloud, Amazon Elastic Block Store, Amazon Simple Notification Service, ELB, and Amazon EC2 Auto Scaling to build highly reliable, highly scalable, cost-effective applications without creating or configuring the underlying Amazon Web Services infrastructure. With CloudFormation, you declare all your resources and dependencies in a template file. The template defines a collection of resources as a single unit called a stack. CloudFormation creates and deletes all member resources of the stack together and manages all dependencies between the resources for you. For more information about CloudFormation, see the CloudFormation product page. CloudFormation makes use of other Amazon Web Services products. If you need additional technical information about a specific Amazon Web Services product, you can find the product's technical documentation at docs.aws.amazon.com.
Package elasticsearchservice provides the API client, operations, and parameter types for Amazon Elasticsearch Service. Use the Amazon Elasticsearch Configuration API to create, configure, and manage Elasticsearch domains. For sample code that uses the Configuration API, see the Amazon Elasticsearch Service Developer Guide. The guide also contains sample code for sending signed HTTP requests to the Elasticsearch APIs. The endpoint for configuration service requests is region-specific: es.region.amazonaws.com. For example, es.us-east-1.amazonaws.com. For a current list of supported regions and endpoints, see Regions and Endpoints.
Package securityhub provides the API client, operations, and parameter types for AWS SecurityHub.

Security Hub CSPM provides you with a comprehensive view of your security state in Amazon Web Services and helps you assess your Amazon Web Services environment against security industry standards and best practices. Security Hub CSPM collects security data across Amazon Web Services accounts, Amazon Web Services services, and supported third-party products and helps you analyze your security trends and identify the highest priority security issues.

To help you manage the security state of your organization, Security Hub CSPM supports multiple security standards. These include the Amazon Web Services Foundational Security Best Practices (FSBP) standard developed by Amazon Web Services, and external compliance frameworks such as the Center for Internet Security (CIS), the Payment Card Industry Data Security Standard (PCI DSS), and the National Institute of Standards and Technology (NIST). Each standard includes several security controls, each of which represents a security best practice. Security Hub CSPM runs checks against security controls and generates control findings to help you assess your compliance against security best practices.

In addition to generating control findings, Security Hub CSPM also receives findings from other Amazon Web Services services, such as Amazon GuardDuty and Amazon Inspector, and supported third-party products. This gives you a single pane of glass into a variety of security-related issues. You can also send Security Hub CSPM findings to other Amazon Web Services services and supported third-party products.

Security Hub CSPM offers automation features that help you triage and remediate security issues. For example, you can use automation rules to automatically update critical findings when a security check fails. You can also leverage the integration with Amazon EventBridge to trigger automatic responses to specific findings.

This guide, the Security Hub CSPM API Reference, provides information about the Security Hub CSPM API. This includes supported resources, HTTP methods, parameters, and schemas. If you're new to Security Hub CSPM, you might find it helpful to also review the Security Hub CSPM User Guide. The user guide explains key concepts and provides procedures that demonstrate how to use Security Hub CSPM features. It also provides information about topics such as integrating Security Hub CSPM with other Amazon Web Services services.

In addition to interacting with Security Hub CSPM by making calls to the Security Hub CSPM API, you can use a current version of an Amazon Web Services command line tool or SDK. Amazon Web Services provides tools and SDKs that consist of libraries and sample code for various languages and platforms, such as PowerShell, Java, Go, Python, C++, and .NET. These tools and SDKs provide convenient, programmatic access to Security Hub CSPM and other Amazon Web Services services. They also handle tasks such as signing requests, managing errors, and retrying requests automatically. For information about installing and using the Amazon Web Services tools and SDKs, see Tools to Build on Amazon Web Services.

With the exception of operations that are related to central configuration, Security Hub CSPM API requests are executed only in the Amazon Web Services Region that is currently active or in the specific Amazon Web Services Region that you specify in your request.
Any configuration or settings change that results from the operation is applied only to that Region. To make the same change in other Regions, call the same API operation in each Region in which you want to apply the change. When you use central configuration, API requests for enabling Security Hub CSPM, standards, and controls are executed in the home Region and all linked Regions. For a list of central configuration operations, see the Central configuration terms and concepts section of the Security Hub CSPM User Guide. The following throttling limits apply to Security Hub CSPM API operations:

BatchEnableStandards - RateLimit of 1 request per second; BurstLimit of 1 request per second.
GetFindings - RateLimit of 3 requests per second; BurstLimit of 6 requests per second.
BatchImportFindings - RateLimit of 10 requests per second; BurstLimit of 30 requests per second.
BatchUpdateFindings - RateLimit of 10 requests per second; BurstLimit of 30 requests per second.
UpdateStandardsControl - RateLimit of 1 request per second; BurstLimit of 5 requests per second.
All other operations - RateLimit of 10 requests per second; BurstLimit of 30 requests per second.
Package cognitoidentityprovider provides the API client, operations, and parameter types for Amazon Cognito Identity Provider. With the Amazon Cognito user pools API, you can configure user pools and authenticate users. To authenticate users from third-party identity providers (IdPs) in this API, you can link IdP users to native user profiles. Learn more about the authentication and authorization of federated users at Adding user pool sign-in through a third party and in the User pool federation endpoints and hosted UI reference. This API reference provides detailed information about API operations and object types in Amazon Cognito. Along with resource management operations, the Amazon Cognito user pools API includes classes of operations and authorization models for client-side and server-side authentication of users. You can interact with operations in the Amazon Cognito user pools API as any of the following subjects. An administrator who wants to configure user pools, app clients, users, groups, or other user pool functions. A server-side app, like a web application, that wants to use its Amazon Web Services privileges to manage, authenticate, or authorize a user. A client-side app, like a mobile app, that wants to make unauthenticated requests to manage, authenticate, or authorize a user. For more information, see Using the Amazon Cognito user pools API and user pool endpoints in the Amazon Cognito Developer Guide. With your Amazon Web Services SDK, you can build the logic to support operational flows in every use case for this API. You can also make direct REST API requests to Amazon Cognito user pools service endpoints. The following links can get you started with the CognitoIdentityProvider client in other supported Amazon Web Services SDKs:

Amazon Web Services Command Line Interface
Amazon Web Services SDK for .NET
Amazon Web Services SDK for C++
Amazon Web Services SDK for Go
Amazon Web Services SDK for Java V2
Amazon Web Services SDK for JavaScript
Amazon Web Services SDK for PHP V3
Amazon Web Services SDK for Python
Amazon Web Services SDK for Ruby V3

To get started with an Amazon Web Services SDK, see Tools to Build on Amazon Web Services. For example actions and scenarios, see Code examples for Amazon Cognito Identity Provider using Amazon Web Services SDKs.
Package organizations provides the API client, operations, and parameter types for AWS Organizations. Organizations is a web service that enables you to consolidate your multiple Amazon Web Services accounts into an organization and centrally manage your accounts and their resources. This guide provides descriptions of the Organizations operations. For more information about using this service, see the Organizations User Guide. We welcome your feedback. Send your comments to feedback-awsorganizations@amazon.com or post your feedback and questions in the Organizations support forum. For more information about the Amazon Web Services support forums, see Forums Help. For the current release of Organizations, specify the us-east-1 region for all Amazon Web Services API and CLI calls made from the commercial Amazon Web Services Regions outside of China. If calling from one of the Amazon Web Services Regions in China, then specify cn-northwest-1. You can do this in the CLI by using these parameters and commands:

--endpoint-url https://organizations.us-east-1.amazonaws.com (from commercial Amazon Web Services Regions outside of China) or --endpoint-url https://organizations.cn-northwest-1.amazonaws.com.cn (from Amazon Web Services Regions in China)
aws configure set default.region us-east-1 (from commercial Amazon Web Services Regions outside of China) or aws configure set default.region cn-northwest-1 (from Amazon Web Services Regions in China)
--region us-east-1 (from commercial Amazon Web Services Regions outside of China) or --region cn-northwest-1 (from Amazon Web Services Regions in China)

Organizations supports CloudTrail, a service that records Amazon Web Services API calls for your Amazon Web Services account and delivers log files to an Amazon S3 bucket. By using information collected by CloudTrail, you can determine which requests the Organizations service received, who made the request and when, and so on. For more about Organizations and its support for CloudTrail, see Logging Organizations API calls with CloudTrail in the Organizations User Guide. To learn more about CloudTrail, including how to turn it on and find your log files, see the CloudTrail User Guide.
Package peer provides a common base for creating and managing Decred network peers. This package builds upon the wire package, which provides the fundamental primitives necessary to speak the Decred wire protocol, in order to simplify the process of creating fully functional peers. In essence, it provides a common base for creating concurrent safe fully validating nodes, Simplified Payment Verification (SPV) nodes, proxies, etc. A quick overview of the major features peer provides is as follows: All peer configuration is handled with the Config struct. This allows the caller to specify things such as the user agent name and version, the decred network to use, which services it supports, and callbacks to invoke when decred messages are received. See the documentation for each field of the Config struct for more details. A peer can either be inbound or outbound. The caller is responsible for establishing the connection to remote peers and listening for incoming peers. This provides high flexibility for things such as connecting via proxies, acting as a proxy, creating bridge peers, choosing whether to listen for inbound peers, etc. The NewOutboundPeer and NewInboundPeer functions must be followed by calling Connect with a net.Conn instance to the peer. This will start all async I/O goroutines and initiate the protocol negotiation process. Once finished with the peer, call Disconnect to disconnect from the peer and clean up all resources. WaitForDisconnect can be used to block until peer disconnection and resource cleanup has completed. In order to do anything useful with a peer, it is necessary to react to decred messages. This is accomplished by creating an instance of the MessageListeners struct with the callbacks to be invoked specified and setting the Listeners field of the Config struct specified when creating a peer to it. For convenience, a callback hook for all of the currently supported decred messages is exposed which receives the peer instance and the concrete message type. In addition, a hook for OnRead is provided so even custom message types for which this package does not directly provide a hook, as long as they implement the wire.Message interface, can be used. Finally, the OnWrite hook is provided, which, in conjunction with OnRead, can be used to track server-wide byte counts. It is often useful to use closures which encapsulate state when specifying the callback handlers. This provides a clean method for accessing that state when callbacks are invoked. The QueueMessage function provides the fundamental means to send messages to the remote peer. As the name implies, this employs a non-blocking queue. A done channel which will be notified when the message is actually sent can optionally be specified. There are certain message types which are better sent using other functions which provide additional functionality. Of special interest are inventory messages. Rather than manually sending MsgInv messages via QueueMessage, the inventory vectors should be queued using the QueueInventory function. It employs batching and trickling along with intelligent known remote peer inventory detection and avoidance through the use of a most-recently used algorithm. In addition to the bare QueueMessage function previously described, the PushAddrMsg, PushGetBlocksMsg, and PushGetHeadersMsg functions are provided as a convenience.
While it is of course possible to create and send these messages manually via QueueMessage, these helper functions provide additional useful functionality that is typically desired. For example, the PushAddrMsg function automatically limits the addresses to the maximum number allowed by the message and randomizes the chosen addresses when there are too many. This allows the caller to simply provide a slice of known addresses, such as that returned by the addrmgr package, without having to worry about the details. Finally, the PushGetBlocksMsg and PushGetHeadersMsg functions will construct proper messages using a block locator and ignore back-to-back duplicate requests. A snapshot of the current peer statistics can be obtained with the StatsSnapshot function. This includes statistics such as the total number of bytes read and written, the remote address, user agent, and negotiated protocol version. This package provides extensive logging capabilities through the UseLogger function, which allows a slog.Logger to be specified. For example, logging at the debug level provides summaries of every message sent and received, and logging at the trace level provides full dumps of parsed messages as well as the raw message bytes using a format similar to hexdump -C. This package supports all improvement proposals supported by the wire package. This example demonstrates the basic process for initializing and creating an outbound peer. Peers negotiate by exchanging version and verack messages. For demonstration, a simple handler for the version message is attached to the peer.
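A hedged sketch of that outbound-peer flow follows. The overall sequence (Config, NewOutboundPeer, Connect with a net.Conn, Disconnect, WaitForDisconnect, a version listener) comes from the description above, but the import paths, the exact Config field names, and the OnVersion callback signature are assumptions rather than a verified API listing; consult the package documentation for the precise types.

	package main

	import (
		"fmt"
		"net"

		"github.com/decred/dcrd/peer"
		"github.com/decred/dcrd/wire"
	)

	func main() {
		// Configure the peer, including a listener for the remote version
		// message received during protocol negotiation.
		cfg := &peer.Config{
			UserAgentName:    "example",
			UserAgentVersion: "1.0.0",
			Net:              wire.MainNet,
			Listeners: peer.MessageListeners{
				OnVersion: func(p *peer.Peer, msg *wire.MsgVersion) {
					fmt.Println("received version:", msg.UserAgent)
				},
			},
		}

		p, err := peer.NewOutboundPeer(cfg, "127.0.0.1:9108")
		if err != nil {
			fmt.Println("NewOutboundPeer:", err)
			return
		}

		// The caller establishes the connection and hands it to the peer,
		// which starts the async I/O goroutines and negotiation.
		conn, err := net.Dial("tcp", "127.0.0.1:9108")
		if err != nil {
			fmt.Println("dial:", err)
			return
		}
		p.Connect(conn)

		// When finished, disconnect and wait for cleanup to complete.
		p.Disconnect()
		p.WaitForDisconnect()
	}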
Package cogito provides LLM-powered reasoning chains for Go. cogito implements a Thought-Note architecture for building autonomous systems that reason and adapt. The package is built around two core concepts: Use New or NewWithTrace to create thoughts: Cogito provides a comprehensive set of reasoning primitives: Decision & Analysis: Control Flow: Reflection: Session Management: Synthesis: Cogito wraps pipz connectors for Thought processing: LLM access uses a resolution hierarchy: Use SetProvider to configure the global default: Cogito emits capitan signals throughout execution. See [signals.go] for the complete list of events including ThoughtCreated, StepStarted, StepCompleted, StepFailed, NoteAdded, and NotesPublished.
Package zkit provides default-safe assembly helpers for building net/http services with an operator-friendly admin surface. The main entry points are: You can start using zkit by only calling NewDefaultService and then writing your business handlers as usual. When you need more control, zkit exposes lower-level building blocks in subpackages (listed below). This example mounts admin under "/-/" on the same server: With the default admin kit, the always-on read endpoints include: zkit's admin endpoints are designed to be default-safe: Reads are explicit and guarded: AdminSpec.ReadGuard is required and protects all read endpoints. A nil ReadGuard is treated as an assembly error and will panic (fail-fast). Writes are off by default: AdminSpec.WriteGuard == nil disables all write endpoints. If WriteGuard is non-nil, enable write groups explicitly: EnableLogLevelSet for /log/level/set; TuningWritesEnabled / TaskWritesEnabled for tuning and task writes. Allowlist (empty = deny-all) applies for tuning/task writes. /provided (custom diagnostic snapshot) is disabled by default because it is typically more sensitive; enable it explicitly by setting AdminSpec.ProvidedItems to a non-nil map. Tip: it is common to use separate guards for reads and writes (e.g. a read token and a stronger write token). NewDefaultService supports two admin modes: Mount: route requests to admin first when URL path matches the reserved prefix. Prefix defaults to "/-/". If a request hits the base path without the trailing slash (e.g. "/-"), it is redirected to the normalized prefix (e.g. "/-/") with HTTP 307. Mount requires ServiceSpec.Primary (otherwise it panics). Standalone: run admin on its own managed server (usually a dedicated port). This is required for "worker-only" processes with no primary server. Service provides Start/Wait/Shutdown/Run: Signal handling in Run: HTTP server defaults (only when you use Addr+Handler and let zkit build *http.Server): If you provide HTTPServerSpec.Server, zkit does not override your timeouts/BaseContext/ErrorLog/etc. NewDefaultService can host these optional components and (optionally) expose them to admin: Tasks: TasksManager!=nil = enabled and started; TasksExposeToAdmin = wire into admin when admin enabled. Tuning: Tuning != nil = enabled; TuningExposeToAdmin = wire into admin when admin enabled. Log: LogLevelVar!=nil = enabled; LogExposeToAdmin = wire /log/level (read) and optionally /log/level/set (write). Note: if admin config (especially write groups) requires a component (e.g. task writes require a task manager), NewDefaultService may create the component to satisfy assembly, but write capabilities still require explicit enabling via AdminSpec.WriteGuard and the enable flags (EnableLogLevelSet, TuningWritesEnabled, TaskWritesEnabled) plus allowlists. ServiceSpec (NewDefaultService): SignalsDisable, Signals, ShutdownTimeout, Primary, Extra, Admin (*AdminSpec), AdminMountPrefix, AdminStandaloneServer, TasksManager, TasksExposeToAdmin, Tuning, TuningExposeToAdmin, LogLevelVar, LogExposeToAdmin, OnStart, OnShutdown, OnServeError. AdminSpec (Admin field / NewDefaultAdmin): ReadGuard (required), TrustedProxies, TrustedHeaders, ReadyChecks, LogLevelVar, Tuning, TaskManager, TuningReadAllowPrefixes/Keys/Func, TaskReadAllowPrefixes/Names/Func, ProvidedItems, ProvidedMaxBytes, WriteGuard, EnableLogLevelSet, TuningWritesEnabled, TuningWriteAllowPrefixes/Keys/Func, TaskWritesEnabled, TaskWriteAllowPrefixes/Names/Func. 
HTTPServerSpec (Primary, Extra, AdminStandaloneServer): Name, Critical, Server (or Addr+Handler). Package zkit re-exports Guard types and constructors from the admin subpackage so callers can configure admin without importing admin.
Pact Go enables consumer driven contract testing, providing a mock service and DSL for the consumer project, and interaction playback and verification for the service provider project. Consumer side Pact testing is an isolated test that ensures a given component is able to collaborate with another (remote) component. Pact will automatically start a Mock server in the background that will act as the collaborators' test double. This implies that any interactions expected on the Mock server will be validated, meaning a test will fail if all interactions were not completed, or if unexpected interactions were found: A typical consumer-side test would look something like this: If this test completed successfully, a Pact file should have been written to ./pacts/my_consumer-my_provider.json containing all of the interactions expected to occur between the Consumer and Provider. In addition to verbatim value matching, you have 3 useful matching functions in the `dsl` package that can increase expressiveness and reduce brittle test cases. Here is a complex example that shows how all 3 terms can be used together: This example will result in a response body from the mock server that looks like: See the examples in the dsl package and the matcher tests (https://github.com/pact-foundation/pact-go/blob/master/dsl/matcher_test.go) for more matching examples. NOTE: You will need to use valid Ruby regular expressions (http://ruby-doc.org/core-2.1.5/Regexp.html) and double escape backslashes. Read more about flexible matching (https://github.com/pact-foundation/pact-ruby/wiki/Regular-expressions-and-type-matching-with-Pact). Provider side Pact testing involves verifying that the contract - the Pact file - can be satisfied by the Provider. A typical Provider side test would look something like: The `VerifyProvider` will handle all verifications, treating them as subtests and giving you granular test reporting. If you don't like this behaviour, you may call `VerifyProviderRaw` directly and handle the errors manually. Note that `PactURLs` may be a list of local pact files or remote URLs (possibly from a Pact Broker - http://docs.pact.io/documentation/sharings_pacts.html). Pact reads the specified pact files (from remote or local sources) and replays the interactions against a running Provider. If all of the interactions are met we can say that both sides of the contract are satisfied and the test passes. When validating a Provider, you have 3 options to provide the Pact files: 1. Use "PactURLs" to specify the exact set of pacts to be replayed: Options 2 and 3 are particularly useful when you want to validate that your Provider is able to meet the contracts of what's in Production and also the latest in development. See this [article](http://rea.tech/enter-the-pact-matrix-or-how-to-decouple-the-release-cycles-of-your-microservices/) for more on this strategy. Each interaction in a pact should be verified in isolation, with no context maintained from the previous interactions. So how do you test a request that requires data to exist on the provider? Provider states are how you achieve this using Pact. Provider states also allow the consumer to make the same request with different expected responses (e.g. different response codes, or the same resource with a different subset of data). States are configured on the consumer side when you issue a dsl.Given() clause with a corresponding request/response pair.
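As a hedged illustration of the consumer-side flow and the dsl.Given() clause described above, the sketch below assumes the v1.x `dsl` package; the interaction names, user resource, and exact matcher fields are placeholders, and field shapes may differ between releases.

	package consumer_test

	import (
		"fmt"
		"net/http"
		"testing"

		"github.com/pact-foundation/pact-go/dsl"
	)

	func TestUserConsumer(t *testing.T) {
		pact := dsl.Pact{
			Consumer: "my_consumer",
			Provider: "my_provider",
		}
		defer pact.Teardown()

		// Declare the expected interaction, including a provider state.
		pact.
			AddInteraction().
			Given("user 1 exists").
			UponReceiving("a request for user 1").
			WithRequest(dsl.Request{
				Method: "GET",
				Path:   dsl.String("/users/1"),
			}).
			WillRespondWith(dsl.Response{
				Status:  200,
				Headers: dsl.MapMatcher{"Content-Type": dsl.String("application/json")},
				Body:    dsl.Like(map[string]interface{}{"id": 1, "name": "Mary"}),
			})

		// Verify runs the test function against the mock server and fails if
		// any declared interaction was not exercised.
		err := pact.Verify(func() error {
			url := fmt.Sprintf("http://localhost:%d/users/1", pact.Server.Port)
			_, err := http.Get(url)
			return err
		})
		if err != nil {
			t.Fatal(err)
		}
	}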
Configuring the provider is a little more involved, and (currently) requires running an API endpoint to configure any [provider states](http://docs.pact.io/documentation/provider_states.html) during the verification process. The option you must provide to the dsl.VerifyRequest is: An example route using the standard Go http package might look like this: See the examples or read more at http://docs.pact.io/documentation/provider_states.html. See the Pact Broker (http://docs.pact.io/documentation/sharings_pacts.html) documentation for more details on the Broker and this article (http://rea.tech/enter-the-pact-matrix-or-how-to-decouple-the-release-cycles-of-your-microservices/) on how to make it work for you. Publishing using Go code: Publishing from the CLI: Use a cURL request like the following to PUT the pact to the right location, specifying your consumer name, provider name and consumer version. The following flags are required to use basic authentication when publishing or retrieving Pact files to/from a Pact Broker: Pact Go uses a simple log utility (logutils - https://github.com/hashicorp/logutils) to filter log messages. The CLI already contains flags to manage this, should you want to control log level in your tests, you can set it like so:
Package xdb provides functionality for creating and managing database connections using the GORM library. It supports multiple database types including MySQL, PostgreSQL, SQLite, SQL Server, and ClickHouse. It uses the functional options pattern for flexible configuration of database connections with customizable connection pool settings.
Package forge provides a simple, opinionated framework for building B2B micro-SaaS applications in Go. Forge is designed around the principle of "no magic" — it uses explicit, readable code with no reflection or service containers. The framework provides a thin orchestration layer while keeping business logic in plain Go handlers. Create a new application with New(), configure it with options, and pass it to Run(): The Context interface embeds context.Context, so it can be passed directly to any function that expects a standard library context: Context provides convenience methods for checking the current user. These are shortcuts over the session system and return safe defaults when no session is configured: Enable server-side session management with WithSession. Sessions auto-create on first access via SessionGet or SessionSet: Use SessionGet and SessionSet for type-safe access: Configure permissions with WithRoles. The role extractor is called lazily on the first [Context.Can] call and cached for the request: Check permissions in handlers: Generic helper functions provide type-safe access to URL and query parameters. They use strconv for conversion and return zero values on parse failure: Supported types: ~string, ~int, ~int64, ~float64, ~bool. Handlers implement the Handler interface to declare routes: Middleware wraps handlers to add cross-cutting concerns: Add middleware globally with WithMiddleware: The middlewares package provides common functionality: Enable background job processing with River integration: Define tasks using structural typing: Enqueue jobs from handlers: Enable S3-compatible file storage: Upload and download files from handlers: Stream events to clients using the channel-based SSE API. The framework handles headers, keepalive, and flushing: For applications that need host-based routing, compose multiple Apps with Run(): Return HTTPError from handlers to set the status code and error details: Customize error response handling with WithErrorHandler: The application handles SIGINT/SIGTERM for graceful shutdown. Register cleanup functions with WithShutdownHook: Load configuration from environment variables with LoadConfig: For testing, use httptest.NewServer with the app:
Package workers provides background job management with periodic execution, scheduling, retry logic, and graceful shutdown. Workers is designed for running background tasks like data processing, cleanup jobs, API synchronization, and scheduled reports. Create a WorkerManager, register workers with their jobs, and start: Jobs are functions that receive a context: Configure workers using functional options: Run jobs at specific times instead of intervals: Automatic retries with configurable backoff strategies: Available strategies: ConstantBackoff, LinearBackoff, ExponentialBackoff. Custom strategies can be defined as: func(time.Duration, int) time.Duration The library handles SIGINT and SIGTERM signals, allowing running jobs to complete before shutdown.
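The documentation above gives the custom strategy shape func(time.Duration, int) time.Duration. The sketch below shows one such strategy; the interpretation of the two parameters as base delay and attempt number is an assumption, as is the hypothetical name cappedExponential.

	package main

	import (
		"fmt"
		"time"
	)

	// cappedExponential doubles the base delay on each retry attempt and caps
	// the result at one minute. It matches the documented strategy shape
	// func(time.Duration, int) time.Duration.
	func cappedExponential(base time.Duration, attempt int) time.Duration {
		d := base << uint(attempt)
		if d <= 0 || d > time.Minute { // also guards against shift overflow
			return time.Minute
		}
		return d
	}

	func main() {
		for attempt := 0; attempt < 5; attempt++ {
			fmt.Println(cappedExponential(2*time.Second, attempt))
		}
	}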
Package client provides a high-performance HTTP/1.1 client with full operation support. Its source files cover: Gzip compression with pooled writers for efficiency; highly configurable HTTP client settings; connection management with HTTP/1.1 pipelining support; efficient TCP connection dialing with socket tuning; error handling with detailed logging; efficient HTTP/1.1 response parsing; efficient connection pooling with direct access; full HTTP/HTTPS proxy support with the CONNECT method; HTTP request handling with memory-efficient buffers; HTTP response handling with pre-allocated buffers; deep TLS support with session caching and connection reuse; version information for Phantom Client; and low-level HTTP request encoding without allocations.
Package sourcecraft provides a Go client library for the SourceCraft API. SourceCraft is a comprehensive platform for software development lifecycle management, providing code repository management, CI/CD pipelines, error tracking, and more. The Client type provides methods to interact with the SourceCraft REST API. Create a client using NewClient with your API endpoint and authentication token: The package includes comprehensive webhook parsing and verification capabilities. Create a webhook parser to handle incoming webhook events: Parse incoming webhook requests and handle events: The webhook parser automatically verifies HMAC signatures using SHA-256 when a secret is configured, ensuring webhook authenticity. All Client methods are safe for concurrent use. The Client type uses internal synchronization to protect shared state.
Package connect provides the client and types for making API requests to Amazon Connect Service. The Amazon Connect API Reference provides descriptions, syntax, and usage examples for each of the Amazon Connect actions, data types, parameters, and errors. Amazon Connect is a cloud-based contact center solution that makes it easy to set up and manage a customer contact center and provide reliable customer engagement at any scale. See https://docs.aws.amazon.com/goto/WebAPI/connect-2017-08-08 for more information on this service. See connect package documentation for more information. https://docs.aws.amazon.com/sdk-for-go/api/service/connect/ To contact Amazon Connect Service with the SDK, use the New function to create a new service client. With that client you can make API requests to the service. These clients are safe to use concurrently. See the SDK's documentation for more information on how to use the SDK. https://docs.aws.amazon.com/sdk-for-go/api/ See aws.Config documentation for more information on configuring SDK clients. https://docs.aws.amazon.com/sdk-for-go/api/aws/#Config See the Amazon Connect Service client Connect for more information on creating a client for this service. https://docs.aws.amazon.com/sdk-for-go/api/service/connect/#New
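A brief sketch of creating the service client with the New function mentioned above, following the usual AWS SDK for Go (v1) session pattern; the region value is a placeholder and any subsequent API calls are omitted.

	package main

	import (
		"fmt"

		"github.com/aws/aws-sdk-go/aws"
		"github.com/aws/aws-sdk-go/aws/session"
		"github.com/aws/aws-sdk-go/service/connect"
	)

	func main() {
		// Create a session; credentials and region come from the environment
		// or shared config unless overridden here.
		sess := session.Must(session.NewSession(&aws.Config{
			Region: aws.String("us-east-1"),
		}))

		// New creates the Amazon Connect service client, which is safe for
		// concurrent use.
		svc := connect.New(sess)
		fmt.Println("client ready:", svc != nil)
	}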
Package claude provides a Go SDK for automating Claude Code CLI with concurrent session management, real-time streaming, custom tools, and hooks. This is the Go equivalent of the Python Claude Agent SDK. This library enables Go applications to interact with Claude Code CLI programmatically, supporting both single queries and multi-turn conversations with proper session management, file operations, and concurrent execution. For one-shot queries without session management: Define custom tools that run directly in your Go application: Intercept and control tool execution with hooks: Claude Code's --output-format stream-json produces JSONL where assistant and user messages wrap content in a nested "message" key: ParseMessage handles both this nested format and flat content automatically. The SDK provides typed messages for parsing CLI JSON output: The AgentOptions struct provides extensive configuration matching the Python SDK's ClaudeAgentOptions: The SDK provides typed errors for precise error handling: Error types include: The SDK automatically finds the Claude CLI in this order: The library is designed to be thread-safe: The library uses Claude CLI's existing authentication. Ensure Claude CLI is properly authenticated before using this library: Or set environment variables:
Package polarion provides a Go client for the Polarion REST API. It supports work item CRUD operations, querying with pagination, sparse field selection, automatic batching, and retry logic. The client supports comprehensive work item management, project operations, user and group management, enumerations, and more. It features automatic retry logic with exponential backoff, sparse field selection, pagination, and batch operations. Create a client and start working with Polarion: The client provides the following features: Work with work items using the WorkItemService: Select only the fields you need to reduce response size: Query results are automatically paginated: Configure automatic retry behavior: The client provides typed errors for better error handling: For more examples, see the examples directory: For detailed API documentation, visit https://pkg.go.dev/github.com/almnorth/go-polarion This project is licensed under the Apache License 2.0. See the LICENSE file for details. Example demonstrates basic usage of the Polarion client. Example_createWorkItem demonstrates creating a work item. Example_enumerations demonstrates working with enumerations. Example_errorHandling demonstrates error handling. Example_queryWorkItems demonstrates querying work items. Example_updateWorkItem demonstrates updating a work item. Example_userGroups demonstrates working with user groups. Example_users demonstrates working with users. Example_workItemComments demonstrates working with work item comments. Example_workItemLinks demonstrates working with work item links. Example_workItemTypes demonstrates working with work item type definitions.
Package gocui allows creating console user interfaces. Create a new GUI: Set GUI managers: Managers are in charge of the GUI's layout and can be used to build widgets. On each iteration of the GUI's main loop, the Layout function of each configured manager is executed. Managers are used to set up and update the application's main views, and they can be freely changed during execution. Also, it is important to mention that a main loop iteration is executed on each reported event (key-press, mouse event, window resize, etc). GUIs are composed of Views; you can think of them as buffers. Views implement the io.ReadWriter interface, so you can just write to them if you want to modify their content. The same is valid for reading. Create and initialize a view with absolute coordinates: Views can also be created using relative coordinates: Configure keybindings: gocui implements full mouse support that can be enabled with: Mouse events are handled like any other keybinding: IMPORTANT: Views can only be created, destroyed or updated in three ways: from the Layout function within managers, from keybinding callbacks or via *Gui.Update(). The reason for this is that it allows gocui to be concurrent-safe. So, if you want to update your GUI from a goroutine, you must use *Gui.Update(). For example: By default, gocui provides a basic edition mode. This mode can be extended and customized by creating a new Editor and assigning it to *View.Editor: DefaultEditor can be taken as an example to create your own custom Editor: Colored text: Views allow adding colored text using ANSI colors. For example: For more information, see the examples in folder "_examples/".
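As a minimal sketch of the workflow above: create a Gui, set a manager function that lays out a view, bind Ctrl+C to quit, and run the main loop. This follows the common jroimartin/gocui API; forks may differ slightly (for example in the NewGui signature).

	package main

	import (
		"fmt"
		"log"

		"github.com/jroimartin/gocui"
	)

	func layout(g *gocui.Gui) error {
		maxX, maxY := g.Size()
		// Create (or update) a view using absolute coordinates.
		if v, err := g.SetView("hello", maxX/2-10, maxY/2, maxX/2+10, maxY/2+2); err != nil {
			if err != gocui.ErrUnknownView {
				return err
			}
			fmt.Fprintln(v, "Hello world!")
		}
		return nil
	}

	func quit(g *gocui.Gui, v *gocui.View) error {
		return gocui.ErrQuit
	}

	func main() {
		g, err := gocui.NewGui(gocui.OutputNormal)
		if err != nil {
			log.Fatal(err)
		}
		defer g.Close()

		g.SetManagerFunc(layout)

		if err := g.SetKeybinding("", gocui.KeyCtrlC, gocui.ModNone, quit); err != nil {
			log.Fatal(err)
		}

		if err := g.MainLoop(); err != nil && err != gocui.ErrQuit {
			log.Fatal(err)
		}
	}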
Package artifact is an SDK for managing Flux artifacts. This package provides comprehensive functionality for packaging artifacts, storing, serving, and processing in Flux source-controller. It includes: Configuration Management (config pkg): Artifact File Server (server pkg): Digest Computation (digest pkg): Storage Management (storage pkg): The SDK standardizes artifact handling in Flux source-controller and provides a foundation for building 3rd party controllers that manage ExternalArtifacts.
Package lifecycle provides a robust control plane for modern Go applications. It unifies Death Management (Signals, Shutdown) with Life Management (Supervision, Events). The library is built around three pillars:

Signal Context (The Foundation): Manages the application's lifecycle, handling OS signals (SIGINT/SIGTERM) and propagating cancellation via context.Context.

	ctx := lifecycle.NewSignalContext(context.Background())
	<-ctx.Done()

Supervisor (The Bridge): Manages a tree of Workers (Processes, Containers, Goroutines), ensuring they adhere to the parent's lifecycle. It supports restart strategies (OneForOne, OneForAll), persistent identity (Handover), and state introspection via Workers().

	sup := lifecycle.NewSupervisor("root", lifecycle.SupervisorStrategyOneForOne)
	sup.Add(spec)
	states := sup.Workers()

Control Plane (The Orchestrator): Decouples "Events" (Triggers) from "Handlers" (Reactions) via a Router. This allows the application to react to external stimuli (Input, Webhooks) dynamically.

	router := lifecycle.NewRouter()
	router.Handle("signal/interrupt", handler)

For convenience, this package exposes the most commonly used types and constructors from the internal `pkg/` structure, grouped by functionality: For CLI tools, use the Interactive Router preset to wire up signals and input automatically: Configuration is done via Functional Options (SignalOption) and Struct Specs (SupervisorSpec). We adopt the "Stdlib Pattern": providing a `DefaultRouter` for convenience ("Managed Global State") while allowing explicit `NewRouter` for strict isolation.
Package sarama is a pure Go client library for dealing with Apache Kafka (versions 0.8 and later). It includes a high-level API for easily producing and consuming messages, and a low-level API for controlling bytes on the wire when the high-level API is insufficient. Usage examples for the high-level APIs are provided inline with their full documentation. To produce messages, use either the AsyncProducer or the SyncProducer. The AsyncProducer accepts messages on a channel and produces them asynchronously in the background as efficiently as possible; it is preferred in most cases. The SyncProducer provides a method which will block until Kafka acknowledges the message as produced. This can be useful but comes with two caveats: it will generally be less efficient, and the actual durability guarantees depend on the configured value of `Producer.RequiredAcks`. There are configurations where a message acknowledged by the SyncProducer can still sometimes be lost. To consume messages, use the Consumer. Note that Sarama's Consumer implementation does not currently support automatic consumer-group rebalancing and offset tracking. For Zookeeper-based tracking (Kafka 0.8.2 and earlier), the https://github.com/wvanbergen/kafka library builds on Sarama to add this support. For Kafka-based tracking (Kafka 0.9 and later), the https://github.com/bsm/sarama-cluster library builds on Sarama to add this support. For lower-level needs, the Broker and Request/Response objects permit precise control over each connection and message sent on the wire; the Client provides higher-level metadata management that is shared between the producers and the consumer. The Request/Response objects and properties are mostly undocumented, as they line up exactly with the protocol fields documented by Kafka at https://cwiki.apache.org/confluence/display/KAFKA/A+Guide+To+The+Kafka+Protocol
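A short sketch of producing a single message with the SyncProducer described above; the broker address and topic are placeholders, and the classic github.com/Shopify/sarama import path is assumed.

	package main

	import (
		"log"

		"github.com/Shopify/sarama"
	)

	func main() {
		config := sarama.NewConfig()
		// The SyncProducer requires successes to be returned on the channel.
		config.Producer.Return.Successes = true
		config.Producer.RequiredAcks = sarama.WaitForAll

		producer, err := sarama.NewSyncProducer([]string{"localhost:9092"}, config)
		if err != nil {
			log.Fatal(err)
		}
		defer producer.Close()

		// SendMessage blocks until Kafka acknowledges the message.
		partition, offset, err := producer.SendMessage(&sarama.ProducerMessage{
			Topic: "my_topic",
			Value: sarama.StringEncoder("hello world"),
		})
		if err != nil {
			log.Fatal(err)
		}
		log.Printf("stored at partition %d, offset %d", partition, offset)
	}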
Package websnake provides HTTP handlers for dynamically managing application configuration settings via HTTP requests. It integrates with Viper for configuration management and supports features like type checking, validation hooks, and custom key parsing. Basic usage: With options:
Package qwr provides serialised writes and concurrent reads for SQLite databases. qwr (Query Write Reader) uses a worker pool pattern with a single writer to sequentially queue writes to SQLite while allowing concurrent read operations through a configurable connection pool. qwr provides several write modes: Example writes: Reads use the connection pool and can be executed concurrently: qwr supports SQLite's ATTACH DATABASE for working with multiple database files through a single manager. Attached databases share the main connection pool and write serialiser, enabling cross-database queries and atomic transactions across databases. Attach databases at construction time via the builder: Or at runtime via the manager: Queries reference attached databases using the schema-qualified syntax: Always use schema-qualified table names for attached databases (e.g. analytics.events, not just events). Unqualified names resolve to the main database. Bare :memory: paths are rejected because each pooled connection would get its own isolated in-memory database. Use file::memory:?cache=shared for a shared in-memory attached database. Do not use [Query.Prepared] for schema-qualified queries before the schema is attached - the preparation will fail on every call until the schema exists. For parallel writes to independent databases, use separate Manager instances rather than ATTACH. Attached databases share a single serialised writer, which is correct for cross-database transactions but does not offer write parallelism. Attach is not supported with NewSQL because qwr cannot control connection creation for user-provided database handles. qwr emits structured events for all significant operations. Subscribe to receive events for logging, metrics, tracing, or alerting: Use filters to receive only specific event types: Database profiles configure connection pools and SQLite PRAGMA settings. Pre-configured profiles are available in the profile subpackage: Async operations capture errors in an error queue with automatic retry support for transient failures. Errors are classified by type to determine appropriate retry strategies. qwr uses modernc.org/sqlite by default. Use NewSQL to provide your own database connections with a different driver. Note that NewSQL does not support Attach - manage ATTACH statements on your own connections.
Package boost (https://github.com/xiang-tai-duo/go-boost) is organised as one utility per source file:

boost.go: Global instance declarations for the go-boost library.
config.go: Config provides functionality for saving and loading configuration.
debugger.go: DEBUGGER is a utility for detecting debuggers attached to the process.
directory.go: Directory handles operations related to the current directory.
file.go: File provides utility methods for file operations.
filepath.go: FilePath provides utility methods for file path operations.
http.go: HTTP provides utility methods for HTTP/HTTPS/WS/WSS operations.
json.go: JSON provides utility methods for JSON operations.
logger.go: Logger is a utility for recording and managing application performance metrics and logs.
mqtt.go: MQTT client implementation.
network.go: NETWORK provides functions to get network IP addresses with the smallest metric.
os.go: OS provides utility methods for operating system operations.
process.go: Process handles operations related to the current process.
serve.go: SERVE provides HTTP and WebSocket server functionality with automatic TLS support.
soap.go: SOAP provides functionality for SOAP web service calls with support for attachments and headers.
sqlite.go: SQLite is a wrapper for SQLite instance operations, providing a set of methods for instance management and query execution.
string.go: STRING is a wrapper for string operations, providing a set of methods for string manipulation.
websocket.go: WebSocket client functionality for Go applications.
xml.go: XML provides utility methods for XML operations, including marshal, unmarshal, validation, formatting, minification, and file operations.
Package llama provides Go bindings for llama.cpp, enabling efficient LLM inference with GPU acceleration and advanced features like prefix caching and speculative decoding. This package wraps llama.cpp's C++ API whilst maintaining Go idioms and safety. Heavy computation stays in optimised C++ code, whilst the Go API provides clean concurrency primitives and resource management. Load a GGUF model and generate text: GPU offloading is enabled by default, automatically using CUDA, ROCm, or Metal depending on your build configuration. The library falls back to CPU if GPU resources are unavailable: The library automatically uses each model's native maximum context length from GGUF metadata, giving you full model capabilities without artificial limits: Models are thread-safe and support concurrent generation requests through an internal context pool: The pool automatically scales between minimum and maximum contexts based on demand, reusing contexts when possible and cleaning up idle ones. Stream tokens as they're generated using a callback: Return false from the callback to stop generation early. The library automatically reuses KV cache entries for matching prompt prefixes, significantly improving performance for conversation-style usage: Prefix caching is enabled by default and includes a last-token refresh optimisation to maintain deterministic generation with minimal overhead (~0.1-0.5ms per call). Accelerate generation using a smaller draft model: The draft model generates candidate tokens that the target model verifies in parallel, reducing overall latency whilst maintaining quality. Fine-tune generation behaviour with sampling parameters: All public methods are thread-safe. The Model type uses an internal RWMutex to protect shared state and coordinates access to the context pool. Multiple goroutines can safely call Generate() concurrently. Always call Close() when finished with a model to free GPU memory and other resources: Close() is safe to call multiple times and will block until all active generation requests complete. This package requires CGO and a C++ compiler. Pre-built llama.cpp libraries are included in the repository for convenience. See the project README for detailed build instructions and GPU acceleration setup.
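A purely illustrative sketch of the load-and-generate flow described above. Only Generate and Close are named by the documentation; LoadModel, the import path, and the Generate signature below are hypothetical placeholders, so consult the package's godoc for the real API.

	package main

	import (
		"fmt"
		"log"

		llama "example.com/llama" // placeholder import path, not the real module
	)

	func main() {
		// Load a GGUF model; per the docs, GPU offloading is the default and
		// the native context length is read from GGUF metadata.
		model, err := llama.LoadModel("models/example-8b.gguf") // hypothetical constructor
		if err != nil {
			log.Fatal(err)
		}
		// Close frees GPU memory and blocks until active generations finish.
		defer model.Close()

		// Generate text from a prompt (signature assumed).
		out, err := model.Generate("Write a haiku about Go.")
		if err != nil {
			log.Fatal(err)
		}
		fmt.Println(out)
	}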
Package wait provides a waitlist for pooling reusable resources. A List manages a pool of items with two key properties: waiters are served in FIFO order, and item creation is lazy up to a configurable limit. This makes it suitable for expensive resources like database connections where fairness matters and you want to avoid creating more than necessary. Unlike sync.Pool, which is designed for reducing allocation overhead of temporary objects, List bounds resource creation and guarantees FIFO fairness. List trades some throughput for predictable latency. If you don't need fairness or creation limits, a buffered channel is simpler. Checked-out items return to the pool with List.Put or permanently leave it with List.Retire.
Package configservice provides the API client, operations, and parameter types for AWS Config. Config provides a way to keep track of the configurations of all the Amazon Web Services resources associated with your Amazon Web Services account. You can use Config to get the current and historical configurations of each Amazon Web Services resource and also to get information about the relationship between the resources. An Amazon Web Services resource can be an Amazon Elastic Compute Cloud (Amazon EC2) instance, an Elastic Block Store (EBS) volume, an elastic network interface (ENI), or a security group. For a complete list of resources currently supported by Config, see Supported Amazon Web Services resources. You can access and manage Config through the Amazon Web Services Management Console, the Amazon Web Services Command Line Interface (Amazon Web Services CLI), the Config API, or the Amazon Web Services SDKs for Config. This reference guide contains documentation for the Config API and the Amazon Web Services CLI commands that you can use to manage Config. The Config API uses the Signature Version 4 protocol for signing requests. For more information about how to sign a request with this protocol, see Signature Version 4 Signing Process. For detailed information about Config features and their associated actions or commands, as well as how to work with Amazon Web Services Management Console, see What Is Config in the Config Developer Guide.
Package appconfig provides the API client, operations, and parameter types for Amazon AppConfig. AppConfig feature flags and dynamic configurations help software builders quickly and securely adjust application behavior in production environments without full code deployments. AppConfig speeds up software release frequency, improves application resiliency, and helps you address emergent issues more quickly. With feature flags, you can gradually release new capabilities to users and measure the impact of those changes before fully deploying the new capabilities to all users. With operational flags and dynamic configurations, you can update block lists, allow lists, throttling limits, logging verbosity, and perform other operational tuning to quickly respond to issues in production environments. AppConfig is a tool in Amazon Web Services Systems Manager. Despite the fact that application configuration content can vary greatly from application to application, AppConfig supports the following use cases, which cover a broad spectrum of customer needs: Feature flags and toggles - Safely release new capabilities to your customers in a controlled environment. Instantly roll back changes if you experience a problem. Application tuning - Carefully introduce application changes while testing the impact of those changes with users in production environments. Allow list or block list - Control access to premium features or instantly block specific users without deploying new code. Centralized configuration storage - Keep your configuration data organized and consistent across all of your workloads. You can use AppConfig to deploy configuration data stored in the AppConfig hosted configuration store, Secrets Manager, Systems Manager, Parameter Store, or Amazon S3. This section provides a high-level description of how AppConfig works and how you get started. 1. Identify configuration values in code you want to manage in the cloud Before you start creating AppConfig artifacts, we recommend you identify configuration data in your code that you want to dynamically manage using AppConfig. Good examples include feature flags or toggles, allow and block lists, logging verbosity, service limits, and throttling rules, to name a few. If your configuration data already exists in the cloud, you can take advantage of AppConfig validation, deployment, and extension features to further streamline configuration data management. 2. Create an application namespace To create a namespace, you create an AppConfig artifact called an application. An application is simply an organizational construct like a folder. 3. Create environments For each AppConfig application, you define one or more environments. An environment is a logical grouping of targets, such as applications in a Beta or Production environment, Lambda functions, or containers. You can also define environments for application subcomponents, such as the Web, Mobile, and Back-end. You can configure Amazon CloudWatch alarms for each environment. The system monitors alarms during a configuration deployment. If an alarm is triggered, the system rolls back the configuration. 4. Create a configuration profile A configuration profile includes, among other things, a URI that enables AppConfig to locate your configuration data in its stored location and a profile type. AppConfig supports two configuration profile types: feature flags and freeform configurations.
Feature flag configuration profiles store their data in the AppConfig hosted configuration store and the URI is simply hosted. For freeform configuration profiles, you can store your data in the AppConfig hosted configuration store or any Amazon Web Services service that integrates with AppConfig, as described in Creating a free form configuration profile in the AppConfig User Guide. A configuration profile can also include optional validators to ensure your configuration data is syntactically and semantically correct. AppConfig performs a check using the validators when you start a deployment. If any errors are detected, the deployment rolls back to the previous configuration data. 5. Deploy configuration data When you create a new deployment, you specify the following:

An application ID
A configuration profile ID
A configuration version
An environment ID where you want to deploy the configuration data
A deployment strategy ID that defines how fast you want the changes to take effect

When you call the StartDeployment API action, AppConfig performs the following tasks: Retrieves the configuration data from the underlying data store by using the location URI in the configuration profile. Verifies the configuration data is syntactically and semantically correct by using the validators you specified when you created your configuration profile. Caches a copy of the data so it is ready to be retrieved by your application. This cached copy is called the deployed data. 6. Retrieve the configuration You can configure AppConfig Agent as a local host and have the agent poll AppConfig for configuration updates. The agent calls the StartConfigurationSession and GetLatestConfiguration API actions and caches your configuration data locally. To retrieve the data, your application makes an HTTP call to the localhost server. AppConfig Agent supports several use cases, as described in Simplified retrieval methods in the AppConfig User Guide. If AppConfig Agent isn't supported for your use case, you can configure your application to poll AppConfig for configuration updates by directly calling the StartConfigurationSession and GetLatestConfiguration API actions. This reference is intended to be used with the AppConfig User Guide.
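A hedged sketch of step 6 without the agent, assuming the separate aws-sdk-go-v2 appconfigdata package (not this appconfig package) exposes the StartConfigurationSession and GetLatestConfiguration actions named above; the application, environment, and profile identifiers are placeholders.

	package main

	import (
		"context"
		"fmt"
		"log"

		"github.com/aws/aws-sdk-go-v2/aws"
		"github.com/aws/aws-sdk-go-v2/config"
		"github.com/aws/aws-sdk-go-v2/service/appconfigdata"
	)

	func main() {
		ctx := context.Background()
		cfg, err := config.LoadDefaultConfig(ctx)
		if err != nil {
			log.Fatal(err)
		}
		client := appconfigdata.NewFromConfig(cfg)

		// Start a configuration session for an application/environment/profile.
		sess, err := client.StartConfigurationSession(ctx, &appconfigdata.StartConfigurationSessionInput{
			ApplicationIdentifier:          aws.String("my-app"),
			EnvironmentIdentifier:          aws.String("Production"),
			ConfigurationProfileIdentifier: aws.String("my-profile"),
		})
		if err != nil {
			log.Fatal(err)
		}

		// Poll for the latest deployed configuration using the session token.
		out, err := client.GetLatestConfiguration(ctx, &appconfigdata.GetLatestConfigurationInput{
			ConfigurationToken: sess.InitialConfigurationToken,
		})
		if err != nil {
			log.Fatal(err)
		}
		fmt.Printf("retrieved %d bytes of configuration\n", len(out.Configuration))
	}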
Package fabricsdk enables Go developers to build solutions that interact with Hyperledger Fabric. pkg/fabsdk: The main package of the Fabric SDK. This package enables creation of contexts based on configuration. These contexts are used by the client packages listed below. Reference: https://godoc.org/github.com/hyperledger/fabric-sdk-go/pkg/fabsdk pkg/client/channel: Provides channel transaction capabilities. Reference: https://godoc.org/github.com/hyperledger/fabric-sdk-go/pkg/client/channel pkg/client/event: Provides channel event capabilities. Reference: https://godoc.org/github.com/hyperledger/fabric-sdk-go/pkg/client/event pkg/client/ledger: Enables queries to a channel's underlying ledger. Reference: https://godoc.org/github.com/hyperledger/fabric-sdk-go/pkg/client/ledger pkg/client/resmgmt: Provides resource management capabilities such as installing chaincode. Reference: https://godoc.org/github.com/hyperledger/fabric-sdk-go/pkg/client/resmgmt pkg/client/msp: Enables identity management capability. Reference: https://godoc.org/github.com/hyperledger/fabric-sdk-go/pkg/client/msp Basic workflow: create a fabsdk instance from configuration, derive contexts from it, and pass those contexts to the client packages above (a minimal sketch follows). In order to support the 'Gateway' programming model, the following package is provided: pkg/gateway: Enables Go developers to build client applications using the Hyperledger Fabric programming model as described in the 'Developing Applications' chapter of the Fabric documentation. Reference: https://godoc.org/github.com/hyperledger/fabric-sdk-go/pkg/gateway
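As a hedged illustration of that workflow, the following sketch creates an SDK instance from a connection profile, derives a channel context, and issues a chaincode query through the channel client; the config path, channel, user, org, and chaincode names are hypothetical placeholders:

	package main

	import (
		"fmt"
		"log"

		"github.com/hyperledger/fabric-sdk-go/pkg/client/channel"
		"github.com/hyperledger/fabric-sdk-go/pkg/core/config"
		"github.com/hyperledger/fabric-sdk-go/pkg/fabsdk"
	)

	func main() {
		// Create the SDK from a connection profile (hypothetical path).
		sdk, err := fabsdk.New(config.FromFile("config.yaml"))
		if err != nil {
			log.Fatal(err)
		}
		defer sdk.Close()

		// Derive a context scoped to a channel, user, and organization.
		channelCtx := sdk.ChannelContext("mychannel", fabsdk.WithUser("User1"), fabsdk.WithOrg("Org1"))

		// Create a channel client and query chaincode on the channel.
		client, err := channel.New(channelCtx)
		if err != nil {
			log.Fatal(err)
		}
		resp, err := client.Query(channel.Request{
			ChaincodeID: "mycc",
			Fcn:         "query",
			Args:        [][]byte{[]byte("a")},
		})
		if err != nil {
			log.Fatal(err)
		}
		fmt.Printf("query response: %s\n", resp.Payload)
	}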
Package amqp091 is an AMQP 0.9.1 client with RabbitMQ extensions. Understand the AMQP 0.9.1 messaging model by reviewing these links first. Much of the terminology in this library directly relates to AMQP concepts. Most other broker clients publish to queues, but in AMQP, clients publish to Exchanges instead. AMQP is programmable, meaning that both the producers and consumers agree on the configuration of the broker, instead of requiring an operator or system configuration that declares the logical topology in the broker. The routing between producers and consumer queues is via Bindings. These bindings form the logical topology of the broker. In this library, a message sent from a publisher is called a "Publishing" and a message received by a consumer is called a "Delivery". The fields of Publishings and Deliveries are close but not exact mappings to the underlying wire format to maintain stronger types. Many other libraries will combine message properties with message headers. In this library, the message well known properties are strongly typed fields on the Publishings and Deliveries, whereas the user defined headers are in the Headers field. The method naming closely matches the protocol's method name with positional parameters mapping to named protocol message fields. The motivation here is to present a comprehensive view over all possible interactions with the server. Generally, methods that map to protocol methods of the "basic" class will be elided in this interface, and "select" methods of various channel mode selectors will be elided, for example Channel.Confirm and Channel.Tx. The library is intentionally designed to be synchronous, where responses for each protocol message are required to be received in an RPC manner. Some methods have a noWait parameter like Channel.QueueDeclare, and some methods are asynchronous like Channel.Publish. The error values should still be checked for these methods as they will indicate IO failures like when the underlying connection closes. Clients of this library may be interested in receiving some of the protocol messages other than Deliveries like basic.ack methods while a channel is in confirm mode. The Notify* methods with Connection and Channel receivers model the pattern of asynchronous events like closes due to exceptions, or messages that are sent out of band from an RPC call like basic.ack or basic.flow. Any asynchronous events, including Deliveries and Publishings, must always have a receiver until the corresponding chans are closed. Without asynchronous receivers, the synchronous methods will block. It's important as a client to an AMQP topology to ensure the state of the broker matches your expectations. For both publish and consume use cases, make sure you declare the queues, exchanges and bindings you expect to exist prior to calling Channel.PublishWithContext or Channel.Consume. When Dial encounters an amqps:// scheme, it will use the zero value of a tls.Config. This will only perform server certificate and host verification. Use DialTLS when you wish to provide a client certificate (recommended), include a private certificate authority's certificate in the cert chain for server validity, or run insecure by not verifying the server certificate. DialTLS will use the provided tls.Config when it encounters an amqps:// scheme and will dial a plain connection when it encounters an amqp:// scheme.
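A minimal sketch of the declare-then-publish pattern described above might look like the following; the broker URL, exchange, queue, and routing key are hypothetical placeholders:

	package main

	import (
		"context"
		"log"
		"time"

		amqp "github.com/rabbitmq/amqp091-go"
	)

	func main() {
		conn, err := amqp.Dial("amqp://guest:guest@localhost:5672/")
		if err != nil {
			log.Fatal(err)
		}
		defer conn.Close()

		ch, err := conn.Channel()
		if err != nil {
			log.Fatal(err)
		}
		defer ch.Close()

		// Declare the exchange, queue, and binding so the broker state matches expectations.
		if err := ch.ExchangeDeclare("events", "topic", true, false, false, false, nil); err != nil {
			log.Fatal(err)
		}
		q, err := ch.QueueDeclare("events.audit", true, false, false, false, nil)
		if err != nil {
			log.Fatal(err)
		}
		if err := ch.QueueBind(q.Name, "user.#", "events", false, nil); err != nil {
			log.Fatal(err)
		}

		// Publish to the exchange; routing to queues is decided by the bindings.
		ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
		defer cancel()
		err = ch.PublishWithContext(ctx, "events", "user.created", false, false, amqp.Publishing{
			ContentType: "application/json",
			Body:        []byte(`{"id": 42}`),
		})
		if err != nil {
			log.Fatal(err)
		}
	}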
SSL/TLS in RabbitMQ is documented here: http://www.rabbitmq.com/ssl.html In order to be notified when a connection or channel gets closed, both structures offer the possibility to register channels using the Channel.NotifyClose and Connection.NotifyClose functions: No errors will be sent in case of a graceful connection close. In case of a non-graceful closure caused by, for example, a network issue or a forced connection closure from the Management UI, the error will be notified synchronously by the library. The library sends to notification channels just once. After sending a notification to all channels, the library closes all registered notification channels. After receiving a notification, the application should create and register a new channel. To avoid deadlocks in the library, it is necessary to consume from the channels. This could be done inside a different goroutine with a select listening on the two channels inside a for loop like: It is strongly recommended to use buffered channels to avoid deadlocks inside the library. Using Channel.NotifyPublish allows the caller of the library to be notified, through a Go channel, when a message has been received and confirmed by the broker. It's advisable to wait for all Confirmations to arrive before calling Channel.Close or Connection.Close. It is also necessary to consume from this channel until it gets closed. The library sends synchronously to the registered channel. It is advisable to use a buffered channel, with capacity set to the maximum acceptable number of unconfirmed messages. It is important to consume from the confirmation channel at all times, in order to avoid deadlocks in the library. One of the package examples exports a Client object that wraps this library. It automatically reconnects when the connection fails, and blocks all pushes until the connection succeeds. It also confirms every outgoing message, so none are lost. It doesn't automatically ack each message, but leaves that to the parent process, since it is usage-dependent. Try running this in one terminal, and rabbitmq-server in another. Stop & restart RabbitMQ to see how the queue reacts.
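As a sketch of that notification pattern, the following registers buffered close and confirm channels and drains them in a separate goroutine; it assumes conn and ch were obtained from amqp.Dial and conn.Channel as in the previous sketch:

	package main

	import (
		"log"

		amqp "github.com/rabbitmq/amqp091-go"
	)

	func watchNotifications(conn *amqp.Connection, ch *amqp.Channel) error {
		// Buffered channels keep the library from blocking while notifications are consumed.
		closes := conn.NotifyClose(make(chan *amqp.Error, 1))
		if err := ch.Confirm(false); err != nil { // put the channel into confirm mode
			return err
		}
		confirms := ch.NotifyPublish(make(chan amqp.Confirmation, 16))

		go func() {
			for {
				select {
				case err, ok := <-closes:
					if !ok {
						return // the library closes registered channels after notifying
					}
					log.Printf("connection closed: %v", err)
				case c, ok := <-confirms:
					if !ok {
						return
					}
					log.Printf("delivery tag %d acked=%t", c.DeliveryTag, c.Ack)
				}
			}
		}()
		return nil
	}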
Package applicationautoscaling provides the API client, operations, and parameter types for Application Auto Scaling. With Application Auto Scaling, you can configure automatic scaling for the following resources: Amazon AppStream 2.0 fleets Amazon Aurora Replicas Amazon Comprehend document classification and entity recognizer endpoints Amazon DynamoDB tables and global secondary indexes throughput capacity Amazon ECS services Amazon ElastiCache replication groups (Redis OSS and Valkey) and Memcached clusters Amazon EMR clusters Amazon Keyspaces (for Apache Cassandra) tables Lambda function provisioned concurrency Amazon Managed Streaming for Apache Kafka broker storage Amazon Neptune clusters Amazon SageMaker endpoint variants Amazon SageMaker inference components Amazon SageMaker serverless endpoint provisioned concurrency Spot Fleets (Amazon EC2) Pool of WorkSpaces Custom resources provided by your own applications or services To learn more about Application Auto Scaling, see the Application Auto Scaling User Guide. The Application Auto Scaling service API includes three key sets of actions: Register and manage scalable targets - Register Amazon Web Services or custom resources as scalable targets (a resource that Application Auto Scaling can scale), set minimum and maximum capacity limits, and retrieve information on existing scalable targets. Configure and manage automatic scaling - Define scaling policies to dynamically scale your resources in response to CloudWatch alarms, schedule one-time or recurring scaling actions, and retrieve your recent scaling activity history. Suspend and resume scaling - Temporarily suspend and later resume automatic scaling by calling the RegisterScalableTarget API action for any Application Auto Scaling scalable target. You can suspend and resume (individually or in combination) scale-out activities that are triggered by a scaling policy, scale-in activities that are triggered by a scaling policy, and scheduled scaling.
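As an illustration only, registering an ECS service as a scalable target with the AWS SDK for Go v2 might look like the following sketch; the cluster/service resource ID and capacity limits are hypothetical placeholders:

	package main

	import (
		"context"
		"log"

		"github.com/aws/aws-sdk-go-v2/aws"
		"github.com/aws/aws-sdk-go-v2/config"
		"github.com/aws/aws-sdk-go-v2/service/applicationautoscaling"
		"github.com/aws/aws-sdk-go-v2/service/applicationautoscaling/types"
	)

	func main() {
		ctx := context.Background()
		cfg, err := config.LoadDefaultConfig(ctx)
		if err != nil {
			log.Fatal(err)
		}
		client := applicationautoscaling.NewFromConfig(cfg)

		// Register the desired-count dimension of an ECS service with min/max capacity.
		// The enum values are written as string conversions here; the types package also
		// defines named constants for them.
		_, err = client.RegisterScalableTarget(ctx, &applicationautoscaling.RegisterScalableTargetInput{
			ServiceNamespace:  types.ServiceNamespace("ecs"),
			ResourceId:        aws.String("service/default/web"), // hypothetical cluster/service
			ScalableDimension: types.ScalableDimension("ecs:service:DesiredCount"),
			MinCapacity:       aws.Int32(1),
			MaxCapacity:       aws.Int32(10),
		})
		if err != nil {
			log.Fatal(err)
		}
		log.Println("scalable target registered")
	}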
Package gousb provides a low-level interface to attached USB devices. A Context manages all resources necessary for communicating with USB devices. Through the Context users can iterate over available USB devices. The USB standard defines a mechanism of discovering USB device functionality through descriptors. After the device is attached and initialized by the host stack, it's possible to retrieve its descriptor (the device descriptor). It contains elements such as product and vendor IDs, bus number and device number (address) on the bus. In gousb, the Device struct represents a USB device. The Device struct’s Desc field contains all known information about the device. Among other information in the device descriptor is a list of configuration descriptors, accessible through Device.Desc.Configs. The USB standard allows one physical USB device to switch between different sets of behaviors, or working modes, by selecting one of the offered configs (each device has at least one). This allows the same device to sometimes present itself as e.g. a 3G modem, and sometimes as a flash drive with the drivers for that 3G modem. Configs are mutually exclusive, each device can have only one active config at a time. Switching the active config performs a light-weight device reset. Each config in the device descriptor has a unique identification number. In gousb a device config needs to be selected through Device.Config(num). It returns a Config struct that represents the device in this particular configuration. The configuration descriptor is accessible through Config.Desc. A config descriptor determines the list of available USB interfaces on the device. Each interface is a virtual device within the physical USB device and its active config. There can be many interfaces active concurrently. Interfaces are enumerated sequentially starting from zero. Additionally, each interface comes with a number of alternate settings for the interface, which are somewhat similar to device configs, but on the interface level. Each interface can have only a single alternate setting active at any time. Alternate settings are enumerated sequentially starting from zero. In gousb an interface and its alternate setting can be selected through Config.Interface(num, altNum). The Interface struct is the representation of the claimed interface with a particular alternate setting. The descriptor of the interface is available through Interface.Setting. An interface with a particular alternate setting defines up to 30 data endpoints, each identified by a unique address. The endpoint address is a combination of endpoint number (1..15) and endpoint directionality (IN/OUT). IN endpoints have addresses 0x81..0x8f, while OUT endpoints 0x01..0x0f. An endpoint can be considered similar to a UDP/IP port, except the data transfers are unidirectional. Endpoints are represented by the Endpoint struct, and all defined endpoints can be obtained through the Endpoints field of the Interface.Setting. Each endpoint descriptor (EndpointDesc) defined in the interface's endpoint map includes information about the type of the endpoint: - endpoint address - endpoint number - direction: IN (device-to-host) or OUT (host-to-device) - transfer type: USB standard defines a few distinct data transfer types: --- bulk - high throughput, but no guaranteed bandwidth and no latency guarantees, --- isochronous - medium throughput, guaranteed bandwidth, some latency guarantees, --- interrupt - low throughput, high latency guarantees.
The endpoint descriptor determines the type of the transfer that will be used. - maximum packet size: maximum number of bytes that can be sent or received by the device in a single USB transaction - and a few other less frequently used pieces of endpoint information. An IN Endpoint can be opened for reading through Interface.InEndpoint(epNum), while an OUT Endpoint can be opened for writing through Interface.OutEndpoint(epNum). An InEndpoint implements the io.Reader interface, and an OutEndpoint implements the io.Writer interface. Both Reads and Writes will accept larger slices of data than the endpoint's maximum packet size; the transfer will be split into smaller USB transactions as needed. But using a Read/Write size equal to an integer multiple of maximum packet size helps with improving the transfer performance. Apart from 15 possible data endpoints, each USB device also has a control endpoint. The control endpoint is present regardless of the current device config, claimed interfaces and their alternate settings. It makes a lot of sense, as the control endpoint is actually used, among others, to issue commands to switch the active config or select an alternate setting for an interface. Control commands are also often used to control the behavior of the device. There is no single standard for control commands though, and many devices implement their custom control command schema. Control commands can be issued through Device.Control(). For more information about USB protocol and handling USB devices, see the excellent "USB in a nutshell" guide: http://www.beyondlogic.org/usbnutshell/ This example demonstrates the full API for accessing endpoints. It opens a device with a known VID/PID, switches the device to configuration #2, in that configuration it opens (claims) interface #3 with alternate setting #0. Within that interface setting it opens an IN endpoint number 6 and an OUT endpoint number 5, then starts copying data between them. A second example demonstrates the use of a few convenience functions that can be used in simple situations and with simple devices. It opens a device with a given VID/PID, claims the default interface (use the same config as currently active, interface 0, alternate setting 0) and tries to write 5 bytes of data to endpoint number 7.
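A sketch of the first example's flow, using hypothetical VID/PID, config, interface, and endpoint numbers, might look like this:

	package main

	import (
		"log"

		"github.com/google/gousb"
	)

	func main() {
		ctx := gousb.NewContext()
		defer ctx.Close()

		// Open a device with a known (hypothetical) vendor/product ID.
		dev, err := ctx.OpenDeviceWithVIDPID(0x1234, 0x5678)
		if err != nil {
			log.Fatal(err)
		}
		if dev == nil {
			log.Fatal("device not found")
		}
		defer dev.Close()

		// Switch the device to configuration #2.
		cfg, err := dev.Config(2)
		if err != nil {
			log.Fatal(err)
		}
		defer cfg.Close()

		// Claim interface #3 with alternate setting #0.
		intf, err := cfg.Interface(3, 0)
		if err != nil {
			log.Fatal(err)
		}
		defer intf.Close()

		// Open IN endpoint 6 for reading and OUT endpoint 5 for writing.
		in, err := intf.InEndpoint(6)
		if err != nil {
			log.Fatal(err)
		}
		out, err := intf.OutEndpoint(5)
		if err != nil {
			log.Fatal(err)
		}

		// Copy one packet's worth of data from the IN endpoint to the OUT endpoint.
		buf := make([]byte, in.Desc.MaxPacketSize)
		n, err := in.Read(buf)
		if err != nil {
			log.Fatal(err)
		}
		if _, err := out.Write(buf[:n]); err != nil {
			log.Fatal(err)
		}
	}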
Package datapipeline provides the API client, operations, and parameter types for AWS Data Pipeline. AWS Data Pipeline configures and manages a data-driven workflow called a pipeline. AWS Data Pipeline handles the details of scheduling and ensuring that data dependencies are met so that your application can focus on processing the data. AWS Data Pipeline provides a JAR implementation of a task runner called AWS Data Pipeline Task Runner. AWS Data Pipeline Task Runner provides logic for common data management scenarios, such as performing database queries and running data analysis using Amazon Elastic MapReduce (Amazon EMR). You can use AWS Data Pipeline Task Runner as your task runner, or you can write your own task runner to provide custom data management. AWS Data Pipeline implements two main sets of functionality. Use the first set to create a pipeline and define data sources, schedules, dependencies, and the transforms to be performed on the data. Use the second set in your task runner application to receive the next task ready for processing. The logic for performing the task, such as querying the data, running data analysis, or converting the data from one format to another, is contained within the task runner. The task runner performs the task assigned to it by the web service, reporting progress to the web service as it does so. When the task is done, the task runner reports the final success or failure of the task to the web service.
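As a hedged sketch of the task runner side using the AWS SDK for Go v2, a minimal iteration could poll for a task, report progress, and set the final status; the worker group name is a hypothetical placeholder and the real task logic is omitted:

	package main

	import (
		"context"
		"log"

		"github.com/aws/aws-sdk-go-v2/aws"
		"github.com/aws/aws-sdk-go-v2/config"
		"github.com/aws/aws-sdk-go-v2/service/datapipeline"
		"github.com/aws/aws-sdk-go-v2/service/datapipeline/types"
	)

	func main() {
		ctx := context.Background()
		cfg, err := config.LoadDefaultConfig(ctx)
		if err != nil {
			log.Fatal(err)
		}
		client := datapipeline.NewFromConfig(cfg)

		// Ask the web service for the next task ready for processing.
		task, err := client.PollForTask(ctx, &datapipeline.PollForTaskInput{
			WorkerGroup: aws.String("my-worker-group"), // hypothetical worker group
		})
		if err != nil {
			log.Fatal(err)
		}
		if task.TaskObject == nil {
			log.Println("no task available")
			return
		}
		taskID := task.TaskObject.TaskId

		// Report progress while working on the task...
		if _, err := client.ReportTaskProgress(ctx, &datapipeline.ReportTaskProgressInput{TaskId: taskID}); err != nil {
			log.Fatal(err)
		}

		// ...and report the final result when done. The enum is written as a string
		// conversion here; named constants also exist in the types package.
		_, err = client.SetTaskStatus(ctx, &datapipeline.SetTaskStatusInput{
			TaskId:     taskID,
			TaskStatus: types.TaskStatus("FINISHED"),
		})
		if err != nil {
			log.Fatal(err)
		}
	}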
Package iot provides the API client, operations, and parameter types for AWS IoT. IoT provides secure, bi-directional communication between Internet-connected devices (such as sensors, actuators, embedded devices, or smart appliances) and the Amazon Web Services cloud. You can discover your custom IoT-Data endpoint to communicate with, configure rules for data processing and integration with other services, organize resources associated with each device (Registry), configure logging, and create and manage policies and credentials to authenticate devices. The service endpoints that expose this API are listed in Amazon Web Services IoT Core Endpoints and Quotas. You must use the endpoint for the region that has the resources you want to access. The service name used by Amazon Web Services Signature Version 4 to sign the request is: execute-api. For more information about how IoT works, see the Developer Guide. For information about how to use the credentials provider for IoT, see Authorizing Direct Calls to Amazon Web Services Services.
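As an illustration only, discovering the account's IoT-Data endpoint with the AWS SDK for Go v2 might look like this sketch:

	package main

	import (
		"context"
		"log"

		"github.com/aws/aws-sdk-go-v2/aws"
		"github.com/aws/aws-sdk-go-v2/config"
		"github.com/aws/aws-sdk-go-v2/service/iot"
	)

	func main() {
		ctx := context.Background()
		cfg, err := config.LoadDefaultConfig(ctx)
		if err != nil {
			log.Fatal(err)
		}
		client := iot.NewFromConfig(cfg)

		// Look up the data endpoint that devices should connect to.
		out, err := client.DescribeEndpoint(ctx, &iot.DescribeEndpointInput{
			EndpointType: aws.String("iot:Data-ATS"),
		})
		if err != nil {
			log.Fatal(err)
		}
		log.Println("IoT data endpoint:", aws.ToString(out.EndpointAddress))
	}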
Glide is a command line utility that manages Go project dependencies. Configuration of where to start is managed via a glide.yaml in the root of a project. A glide.yaml file looks like: Glide puts dependencies in a vendor directory. Go utilities require this to be in your GOPATH. Glide makes this easy. For more information use the `glide help` command or see https://glide.sh
Pact Go enables consumer driven contract testing, providing a mock service and DSL for the consumer project, and interaction playback and verification for the service provider project. Consumer side Pact testing is an isolated test that ensures a given component is able to collaborate with another (remote) component. Pact will automatically start a Mock server in the background that will act as the collaborators' test double. This implies that any interactions expected on the Mock server will be validated, meaning a test will fail if all interactions were not completed, or if unexpected interactions were found: A typical consumer-side test would look something like this: If this test completed successfully, a Pact file should have been written to ./pacts/my_consumer-my_provider.json containing all of the interactions expected to occur between the Consumer and Provider. In addition to verbatim value matching, you have 3 useful matching functions in the `dsl` package that can increase expressiveness and reduce brittle test cases. Here is a complex example that shows how all 3 terms can be used together: This example will result in a response body from the mock server that looks like: See the examples in the dsl package and the matcher tests (https://github.com/pact-foundation/pact-go/blob/master/dsl/matcher_test.go) for more matching examples. NOTE: You will need to use valid Ruby regular expressions (http://ruby-doc.org/core-2.1.5/Regexp.html) and double escape backslashes. Read more about flexible matching (https://github.com/pact-foundation/pact-ruby/wiki/Regular-expressions-and-type-matching-with-Pact). Provider side Pact testing involves verifying that the contract - the Pact file - can be satisfied by the Provider. A typical Provider side test would look something like: The `VerifyProvider` will handle all verifications, treating them as subtests and giving you granular test reporting. If you don't like this behaviour, you may call `VerifyProviderRaw` directly and handle the errors manually. Note that `PactURLs` may be a list of local pact files or remote-based URLs (possibly from a Pact Broker - http://docs.pact.io/documentation/sharings_pacts.html). Pact reads the specified pact files (from remote or local sources) and replays the interactions against a running Provider. If all of the interactions are met we can say that both sides of the contract are satisfied and the test passes. When validating a Provider, you have 3 options to provide the Pact files: 1. Use "PactURLs" to specify the exact set of pacts to be replayed: Options 2 and 3 are particularly useful when you want to validate that your Provider is able to meet the contracts of what's in Production and also the latest in development. See this article (http://rea.tech/enter-the-pact-matrix-or-how-to-decouple-the-release-cycles-of-your-microservices/) for more on this strategy. Each interaction in a pact should be verified in isolation, with no context maintained from the previous interactions. So how do you test a request that requires data to exist on the provider? Provider states are how you achieve this using Pact. Provider states also allow the consumer to make the same request with different expected responses (e.g. different response codes, or the same resource with a different subset of data). States are configured on the consumer side when you issue a dsl.Given() clause with a corresponding request/response pair.
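A hedged sketch of such a consumer-side test, using hypothetical consumer, provider, state, and endpoint names (exact matcher usage may vary between Pact Go versions), might look like this:

	package client_test

	import (
		"fmt"
		"net/http"
		"testing"

		"github.com/pact-foundation/pact-go/dsl"
	)

	func TestGetUser(t *testing.T) {
		pact := &dsl.Pact{Consumer: "my_consumer", Provider: "my_provider"}
		defer pact.Teardown()

		// Declare the provider state and the expected request/response pair.
		pact.
			AddInteraction().
			Given("User 1 exists").
			UponReceiving("A request for user 1").
			WithRequest(dsl.Request{
				Method: "GET",
				Path:   dsl.String("/users/1"),
			}).
			WillRespondWith(dsl.Response{
				Status:  200,
				Headers: dsl.MapMatcher{"Content-Type": dsl.String("application/json")},
				Body:    dsl.Like(map[string]interface{}{"name": "Mary"}),
			})

		// Run the test body against the mock server; interactions are verified afterwards.
		err := pact.Verify(func() error {
			_, err := http.Get(fmt.Sprintf("http://localhost:%d/users/1", pact.Server.Port))
			return err
		})
		if err != nil {
			t.Fatal(err)
		}
	}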
Configuring the provider is a little more involved, and (currently) requires running an API endpoint to configure any provider states (http://docs.pact.io/documentation/provider_states.html) during the verification process. The option you must provide to the dsl.VerifyRequest is: An example route using the standard Go http package might look like this: See the examples or read more at http://docs.pact.io/documentation/provider_states.html. See the Pact Broker (http://docs.pact.io/documentation/sharings_pacts.html) documentation for more details on the Broker and this article (http://rea.tech/enter-the-pact-matrix-or-how-to-decouple-the-release-cycles-of-your-microservices/) on how to make it work for you. Publishing using Go code: Publishing from the CLI: Use a cURL request like the following to PUT the pact to the right location, specifying your consumer name, provider name and consumer version. The following flags are required to use basic authentication when publishing or retrieving Pact files to/from a Pact Broker: Pact Go uses a simple log utility (logutils - https://github.com/hashicorp/logutils) to filter log messages. The CLI already contains flags to manage this; should you want to control the log level in your tests, you can set it like so:
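As a minimal, hedged sketch of provider-side verification against a locally running provider (the pact file path and base URL are hypothetical placeholders):

	package provider_test

	import (
		"testing"

		"github.com/pact-foundation/pact-go/dsl"
		"github.com/pact-foundation/pact-go/types"
	)

	func TestProvider(t *testing.T) {
		pact := &dsl.Pact{Provider: "my_provider"}

		// Replays each interaction from the pact file as a subtest against the provider,
		// which is assumed to already be running at the base URL below.
		_, err := pact.VerifyProvider(t, types.VerifyRequest{
			ProviderBaseURL: "http://localhost:8000",
			PactURLs:        []string{"./pacts/my_consumer-my_provider.json"},
		})
		if err != nil {
			t.Fatal(err)
		}
	}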
Package batch provides the API client, operations, and parameter types for AWS Batch. Using Batch, you can run batch computing workloads on the Amazon Web Services Cloud. Batch computing is a common means for developers, scientists, and engineers to access large amounts of compute resources. Batch uses the advantages of batch computing to remove the undifferentiated heavy lifting of configuring and managing required infrastructure. At the same time, it also adopts a familiar batch computing software approach. You can use Batch to efficiently provision resources, and work toward eliminating capacity constraints, reducing your overall compute costs, and delivering results more quickly. As a fully managed service, Batch can run batch computing workloads of any scale. Batch automatically provisions compute resources and optimizes workload distribution based on the quantity and scale of your specific workloads. With Batch, there's no need to install or manage batch computing software. This means that you can focus on analyzing results and solving your specific problems instead.
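As an illustration only, submitting a job with the AWS SDK for Go v2 might look like this sketch; the job name, queue, and job definition are hypothetical placeholders:

	package main

	import (
		"context"
		"log"

		"github.com/aws/aws-sdk-go-v2/aws"
		"github.com/aws/aws-sdk-go-v2/config"
		"github.com/aws/aws-sdk-go-v2/service/batch"
	)

	func main() {
		ctx := context.Background()
		cfg, err := config.LoadDefaultConfig(ctx)
		if err != nil {
			log.Fatal(err)
		}
		client := batch.NewFromConfig(cfg)

		// Submit a job to a queue using a previously registered job definition.
		out, err := client.SubmitJob(ctx, &batch.SubmitJobInput{
			JobName:       aws.String("nightly-report"),
			JobQueue:      aws.String("default-queue"),
			JobDefinition: aws.String("report-generator:1"),
		})
		if err != nil {
			log.Fatal(err)
		}
		log.Println("submitted job:", aws.ToString(out.JobId))
	}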