Package ws implements a client and server for the WebSocket protocol as specified in RFC 6455. The main purpose of this package is to provide a simple, low-level API for efficient work with the protocol. Overview. Upgrading to WebSocket (the WebSocket handshake) can be done in two ways. The first way is to use the `net/http` server: The second and much more efficient way is the so-called "zero-copy upgrade". It avoids redundant allocations and copying of unused headers or other request data; the user decides which data should be copied. For customization details see the `ws.Upgrader` documentation. After the WebSocket handshake you can work with the connection in multiple ways. That is, `ws` does not force a single way of working with WebSocket: As you can see, it is stream friendly: Or: For more info see the documentation.
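A minimal sketch of the first (net/http) upgrade path, assuming the gobwas/ws module layout with the ws and ws/wsutil packages; the handler wiring and echo loop are illustrative rather than the package's canonical example:

	package main

	import (
		"net/http"

		"github.com/gobwas/ws"
		"github.com/gobwas/ws/wsutil"
	)

	func main() {
		// Upgrade via the standard net/http server, then echo messages back.
		http.ListenAndServe(":8080", http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
			conn, _, _, err := ws.UpgradeHTTP(r, w)
			if err != nil {
				return
			}
			go func() {
				defer conn.Close()
				for {
					msg, op, err := wsutil.ReadClientData(conn)
					if err != nil {
						return
					}
					if err := wsutil.WriteServerMessage(conn, op, msg); err != nil {
						return
					}
				}
			}()
		}))
	}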
Package imds provides the API client for interacting with the Amazon EC2 Instance Metadata Service. All Client operation calls have a default timeout. If the operation is not completed before this timeout expires, the operation will be canceled. This timeout can be overridden through the following: See the EC2 IMDS user guide for more information on using the API. https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-instance-metadata.html
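A hedged sketch of overriding the default operation timeout with a context deadline; the config loading, the GetMetadata call, and the "instance-id" path are assumptions based on typical aws-sdk-go-v2 usage, not a verbatim example from this guide:

	package main

	import (
		"context"
		"io"
		"log"
		"time"

		"github.com/aws/aws-sdk-go-v2/config"
		"github.com/aws/aws-sdk-go-v2/feature/ec2/imds"
	)

	func main() {
		cfg, err := config.LoadDefaultConfig(context.TODO())
		if err != nil {
			log.Fatal(err)
		}
		client := imds.NewFromConfig(cfg)

		// A context deadline overrides the default operation timeout.
		ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
		defer cancel()

		out, err := client.GetMetadata(ctx, &imds.GetMetadataInput{Path: "instance-id"})
		if err != nil {
			log.Fatal(err)
		}
		defer out.Content.Close()

		id, _ := io.ReadAll(out.Content)
		log.Printf("instance id: %s", id)
	}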
Package presignedurl provides the customizations for API clients to fill in presigned URLs into input parameters.
Package ssh wraps the crypto/ssh package with a higher-level API for building SSH servers. The goal of the API was to make it as simple as using net/http, so the API is very similar. You should be able to build any SSH server using only this package, which wraps relevant types and some functions from crypto/ssh. However, you still need to use crypto/ssh for building SSH clients. ListenAndServe starts an SSH server with a given address, handler, and options. The handler is usually nil, which means to use DefaultHandler. Handle sets DefaultHandler: If you don't specify a host key, it will generate one every time. This is convenient except you'll have to deal with clients being confused that the host key is different. It's a better idea to generate or point to an existing key on your system: Although all options have functional option helpers, another way to control the server's behavior is by creating a custom Server: This package automatically handles basic SSH requests like setting environment variables, requesting PTY, and changing window size. These requests are processed, responded to, and any relevant state is updated. This state is then exposed to you via the Session interface. The one big feature missing from the Session abstraction is signals. This was started, but not completed. Pull Requests welcome!
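A rough sketch of ListenAndServe with a handler and a host-key option; the greeting, port, and key path are illustrative assumptions:

	package main

	import (
		"io"
		"log"

		"github.com/gliderlabs/ssh"
	)

	func main() {
		// Every session gets this handler; DefaultHandler would be used if nil were passed.
		handler := func(s ssh.Session) {
			io.WriteString(s, "Hello "+s.User()+"\n")
		}

		// HostKeyFile points the server at an existing key instead of generating one per run.
		log.Fatal(ssh.ListenAndServe(":2222", handler,
			ssh.HostKeyFile("/path/to/host_key"), // assumed path
		))
	}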
Package sarama is a pure Go client library for dealing with Apache Kafka (versions 0.8 and later). It includes a high-level API for easily producing and consuming messages, and a low-level API for controlling bytes on the wire when the high-level API is insufficient. Usage examples for the high-level APIs are provided inline with their full documentation. To produce messages, use either the AsyncProducer or the SyncProducer. The AsyncProducer accepts messages on a channel and produces them asynchronously in the background as efficiently as possible; it is preferred in most cases. The SyncProducer provides a method which will block until Kafka acknowledges the message as produced. This can be useful but comes with two caveats: it will generally be less efficient, and the actual durability guarantees depend on the configured value of `Producer.RequiredAcks`. There are configurations where a message acknowledged by the SyncProducer can still sometimes be lost. To consume messages, use the Consumer. Note that Sarama's Consumer implementation does not currently support automatic consumer-group rebalancing and offset tracking. For Zookeeper-based tracking (Kafka 0.8.2 and earlier), the https://github.com/wvanbergen/kafka library builds on Sarama to add this support. For Kafka-based tracking (Kafka 0.9 and later), the https://github.com/bsm/sarama-cluster library builds on Sarama to add this support. For lower-level needs, the Broker and Request/Response objects permit precise control over each connection and message sent on the wire; the Client provides higher-level metadata management that is shared between the producers and the consumer. The Request/Response objects and properties are mostly undocumented, as they line up exactly with the protocol fields documented by Kafka at https://cwiki.apache.org/confluence/display/KAFKA/A+Guide+To+The+Kafka+Protocol Metrics are exposed through https://github.com/rcrowley/go-metrics library in a local registry. Broker related metrics: Note that we do not gather specific metrics for seed brokers but they are part of the "all brokers" metrics. Producer related metrics:
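A hedged SyncProducer sketch; the import path (Shopify/sarama in older releases, IBM/sarama in newer ones), broker address, and topic name are assumptions:

	package main

	import (
		"log"

		"github.com/Shopify/sarama" // newer releases live at github.com/IBM/sarama
	)

	func main() {
		cfg := sarama.NewConfig()
		// The SyncProducer requires successes to be returned so SendMessage can block on the ack.
		cfg.Producer.Return.Successes = true
		cfg.Producer.RequiredAcks = sarama.WaitForAll

		producer, err := sarama.NewSyncProducer([]string{"localhost:9092"}, cfg)
		if err != nil {
			log.Fatal(err)
		}
		defer producer.Close()

		partition, offset, err := producer.SendMessage(&sarama.ProducerMessage{
			Topic: "example-topic",
			Value: sarama.StringEncoder("hello"),
		})
		if err != nil {
			log.Fatal(err)
		}
		log.Printf("stored at partition %d, offset %d", partition, offset)
	}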
Package translate is the v2 client for the Google Translation API. PLEASE NOTE: We recommend using the new v3 client for new projects: https://cloud.google.com/go/translate/apiv3. See https://cloud.google.com/translation for details.
Package elasticsearch provides a Go client for Elasticsearch. Create the client with the NewDefaultClient function: The ELASTICSEARCH_URL environment variable is used instead of the default URL, when set. Use a comma to separate multiple URLs. To configure the client, pass a Config object to the NewClient function: When using the Elastic Service (https://elastic.co/cloud), you can use CloudID instead of Addresses. When either Addresses or CloudID is set, the ELASTICSEARCH_URL environment variable is ignored. See the elasticsearch_integration_test.go file and the _examples folder for more information. Call the Elasticsearch APIs by invoking the corresponding methods on the client: See the github.com/elastic/go-elasticsearch/esapi package for more information about using the API. See the github.com/elastic/go-elasticsearch/estransport package for more information about configuring the transport.
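A minimal sketch of creating the default client and calling an API method; the /v7 module path is an assumption about the client version in use:

	package main

	import (
		"log"

		"github.com/elastic/go-elasticsearch/v7"
	)

	func main() {
		// NewDefaultClient honors ELASTICSEARCH_URL when it is set.
		es, err := elasticsearch.NewDefaultClient()
		if err != nil {
			log.Fatalf("error creating the client: %s", err)
		}

		res, err := es.Info()
		if err != nil {
			log.Fatalf("error getting the cluster info: %s", err)
		}
		defer res.Body.Close()
		log.Println(res)
	}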
Package sdk provides Go packages for managing and using Azure services. GitHub repo: https://github.com/Azure/azure-sdk-for-go Official documentation: https://docs.microsoft.com/azure/go API reference: https://godoc.org/github.com/Azure/azure-sdk-for-go Samples: https://github.com/Azure-Samples/azure-sdk-for-go-samples
Package acceptencoding provides customizations associated with the Accept-Encoding header. The Go HTTP client automatically supports accept-encoding and content-encoding gzip by default. This default behavior is not desired by the SDK because it prevents validating the response body's checksum. To prevent this, the SDK must manually control the use of content-encoding gzip. To do so, the SDK always sets the `Accept-Encoding` header to a value, which stops the HTTP client from enabling gzip automatically. When gzip is enabled on the API client, the SDK's customization controls decompressing the gzip data so that checksum validation is not broken. When gzip is disabled, the API client disables gzip, overriding the HTTP client's default behavior. An `EnableAcceptEncodingGzip` option may or may not be present depending on the client using the middleware below. If present, the option can be used to enable automatic gzip decompression by the SDK.
Package ssooidc provides the API client, operations, and parameter types for AWS SSO OIDC. IAM Identity Center OpenID Connect (OIDC) is a web service that enables a client (such as the CLI or a native application) to register with IAM Identity Center. The service also enables the client to fetch the user’s access token upon successful authentication and authorization with IAM Identity Center. IAM Identity Center uses the sso and identitystore API namespaces. Before you begin using this guide, we recommend that you first review the following important information about how the IAM Identity Center OIDC service works. The IAM Identity Center OIDC service currently implements only the portions of the OAuth 2.0 Device Authorization Grant standard (https://tools.ietf.org/html/rfc8628) that are necessary to enable single sign-on authentication with the CLI. With older versions of the CLI, the service only emits OIDC access tokens, so to obtain a new token, users must explicitly re-authenticate. To access the OIDC flow that supports token refresh and doesn’t require re-authentication, update to the latest CLI version (1.27.10 for CLI V1 and 2.9.0 for CLI V2) with support for OIDC token refresh and configurable IAM Identity Center session durations. For more information, see Configure Amazon Web Services access portal session duration. The access tokens provided by this service grant access to all Amazon Web Services account entitlements assigned to an IAM Identity Center user, not just a particular application. The documentation in this guide does not describe the mechanism to convert the access token into Amazon Web Services Auth (“sigv4”) credentials for use with IAM-protected Amazon Web Services service endpoints. For more information, see GetRoleCredentials in the IAM Identity Center Portal API Reference Guide. For general information about IAM Identity Center, see What is IAM Identity Center? in the IAM Identity Center User Guide.
Package s3 provides the API client, operations, and parameter types for Amazon Simple Storage Service.
Package sessions provides cookie and filesystem sessions and infrastructure for custom session backends. The key features are: Let's start with an example that shows the sessions API in a nutshell: First we initialize a session store calling NewCookieStore() and passing a secret key used to authenticate the session. Inside the handler, we call store.Get() to retrieve an existing session or a new one. Then we set some session values in session.Values, which is a map[interface{}]interface{}. And finally we call session.Save() to save the session in the response. Note that in production code, we should check for errors when calling session.Save(r, w), and either display an error message or otherwise handle it. Save must be called before writing to the response, otherwise the session cookie will not be sent to the client. That's all you need to know for the basic usage. Let's take a look at other options, starting with flash messages. Flash messages are session values that last until read. The term appeared with Ruby on Rails a few years back. When we request a flash message, it is removed from the session. To add a flash, call session.AddFlash(), and to get all flashes, call session.Flashes(). Here is an example: Flash messages are useful to set information to be read after a redirection, like after form submissions. There may also be cases where you want to store a complex datatype within a session, such as a struct. Sessions are serialised using the encoding/gob package, so it is easy to register new datatypes for storage in sessions: As it's not possible to pass a raw type as a parameter to a function, gob.Register() relies on us passing it a value of the desired type. In the example above we've passed it a pointer to a struct and a pointer to a custom type representing a map[string]interface{}. (We could have passed non-pointer values if we wished.) This will then allow us to serialise/deserialise values of those types to and from our sessions. Note that because session values are stored in a map[interface{}]interface{}, there's a need to type-assert data when retrieving it. We'll use the Person struct we registered above: By default, session cookies last for a month. This is probably too long for some cases, but it is easy to change this and other attributes during runtime. Sessions can be configured individually or the store can be configured and then all sessions saved using it will use that configuration. We access session.Options or store.Options to set a new configuration. The fields are basically a subset of http.Cookie fields. Let's change the maximum age of a session to one week: Sometimes we may want to change authentication and/or encryption keys without breaking existing sessions. The CookieStore supports key rotation, and to use it you just need to set multiple authentication and encryption keys, in pairs, to be tested in order: New sessions will be saved using the first pair. Old sessions can still be read because the first pair will fail, and the second will be tested. This makes it easy to "rotate" secret keys and still be able to validate existing sessions. Note: for all pairs the encryption key is optional; set it to nil or omit it and encryption won't be used. Multiple sessions can be used in the same request, even with different session backends. When this happens, calling Save() on each session individually would be cumbersome, so we have a way to save all sessions at once: it's sessions.Save().
Here's an example: This is possible because when we call Get() from a session store, it adds the session to a common registry. Save() uses it to save all registered sessions.
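A hedged sketch of the basic flow described above (initialize a store, Get, set Values, Save); the cookie key and session name are placeholders:

	package main

	import (
		"net/http"

		"github.com/gorilla/sessions"
	)

	// The key should come from a secure source in real code; this literal is illustrative only.
	var store = sessions.NewCookieStore([]byte("super-secret-key"))

	func MyHandler(w http.ResponseWriter, r *http.Request) {
		// Get returns the existing session or a new one.
		session, err := store.Get(r, "session-name")
		if err != nil {
			http.Error(w, err.Error(), http.StatusInternalServerError)
			return
		}
		// Set some values and save before writing the response body.
		session.Values["foo"] = "bar"
		session.Values[42] = 43
		if err := session.Save(r, w); err != nil {
			http.Error(w, err.Error(), http.StatusInternalServerError)
			return
		}
		w.Write([]byte("session saved"))
	}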
Package azcore implements an HTTP request/response middleware pipeline used by Azure SDK clients. The middleware consists of three components. A Policy can be implemented in two ways; as a first-class function for a stateless Policy, or as a method on a type for a stateful Policy. Note that HTTP requests made via the same pipeline share the same Policy instances, so if a Policy mutates its state it MUST be properly synchronized to avoid race conditions. A Policy's Do method is called when an HTTP request wants to be sent over the network. The Do method can perform any operation(s) it desires. For example, it can log the outgoing request, mutate the URL, headers, and/or query parameters, inject a failure, etc. Once the Policy has successfully completed its request work, it must call the Next() method on the *policy.Request instance in order to pass the request to the next Policy in the chain. When an HTTP response comes back, the Policy then gets a chance to process the response/error. The Policy instance can log the response, retry the operation if it failed due to a transient error or timeout, unmarshal the response body, etc. Once the Policy has successfully completed its response work, it must return the *http.Response and error instances to its caller. Template for implementing a stateless Policy: Template for implementing a stateful Policy: The Transporter interface is responsible for sending the HTTP request and returning the corresponding HTTP response or error. The Transporter is invoked by the last Policy in the chain. The default Transporter implementation uses a shared http.Client from the standard library. The same stateful/stateless rules for Policy implementations apply to Transporter implementations. To use the Policy and Transporter instances, an application passes them to the runtime.NewPipeline function. The specified Policy instances form a chain and are invoked in the order provided to NewPipeline followed by the Transporter. Once the Pipeline has been created, create a runtime.Request instance and pass it to Pipeline's Do method. The Pipeline.Do method sends the specified Request through the chain of Policy and Transporter instances. The response/error is then sent through the same chain of Policy instances in reverse order. For example, assuming there are Policy types PolicyA, PolicyB, and PolicyC along with TransportA. The flow of Request and Response looks like the following: The Request instance passed to Pipeline's Do method is a wrapper around an *http.Request. It also contains some internal state and provides various convenience methods. You create a Request instance by calling the runtime.NewRequest function: If the Request should contain a body, call the SetBody method. A seekable stream is required so that upon retry, the retry Policy instance can seek the stream back to the beginning before retrying the network request and re-uploading the body. Operations like JSON-MERGE-PATCH send a JSON null to indicate a value should be deleted. This requirement conflicts with the SDK's default marshalling that specifies "omitempty" as a means to resolve the ambiguity between a field to be excluded and its zero-value. In the above example, Name and Count are defined as pointer-to-type to disambiguate between a missing value (nil) and a zero-value (0) which might have semantic differences. In a PATCH operation, any fields left as nil are to have their values preserved. When updating a Widget's count, one simply specifies the new value for Count, leaving Name nil. 
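A hedged sketch of a stateful Policy implemented as a method on a type; the type name and logging behavior are illustrative, not the SDK's template:

	package policyexample

	import (
		"log"
		"net/http"

		"github.com/Azure/azure-sdk-for-go/sdk/azcore/policy"
	)

	// requestLogPolicy is a hypothetical stateful Policy that logs traffic.
	// Shared mutable state would need synchronization (sync/atomic or a mutex).
	type requestLogPolicy struct {
		prefix string
	}

	func (p *requestLogPolicy) Do(req *policy.Request) (*http.Response, error) {
		// Work on the outgoing request, then hand it to the next Policy in the chain.
		log.Printf("%s: %s %s", p.prefix, req.Raw().Method, req.Raw().URL)
		resp, err := req.Next()
		// Work on the response/error on the way back out.
		if err == nil {
			log.Printf("%s: status %d", p.prefix, resp.StatusCode)
		}
		return resp, err
	}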
To fulfill the requirement for sending a JSON null, the NullValue() function can be used. This sends an explicit "null" for Count, indicating that any current value for Count should be deleted. When the HTTP response is received, the *http.Response is returned directly. Each Policy instance can inspect/mutate the *http.Response. To enable logging, set environment variable AZURE_SDK_GO_LOGGING to "all" before executing your program. By default the logger writes to stderr. This can be customized by calling log.SetListener, providing a callback that writes to the desired location. Any custom logging implementation MUST provide its own synchronization to handle concurrent invocations. See the docs for the log package for further details. Pageable operations return potentially large data sets spread over multiple GET requests. The result of each GET is a "page" of data consisting of a slice of items. Pageable operations can be identified by their New*Pager naming convention and return type of *runtime.Pager[T]. The call to WidgetClient.NewListWidgetsPager() returns an instance of *runtime.Pager[T] for fetching pages and determining if there are more pages to fetch. No IO calls are made until the NextPage() method is invoked. Long-running operations (LROs) are operations consisting of an initial request to start the operation followed by polling to determine when the operation has reached a terminal state. An LRO's terminal state is one of the following values. LROs can be identified by their Begin* prefix and their return type of *runtime.Poller[T]. When a call to WidgetClient.BeginCreateOrUpdate() returns a nil error, it means that the LRO has started. It does _not_ mean that the widget has been created or updated (or failed to be created/updated). The *runtime.Poller[T] provides APIs for determining the state of the LRO. To wait for the LRO to complete, call the PollUntilDone() method. The call to PollUntilDone() will block the current goroutine until the LRO has reached a terminal state or the context is canceled/timed out. Note that LROs can take anywhere from several seconds to several minutes. The duration is operation-dependent. Due to this variant behavior, pollers do _not_ have a preconfigured time-out. Use a context with the appropriate cancellation mechanism as required. Pollers provide the ability to serialize their state into a "resume token" which can be used by another process to recreate the poller. This is achieved via the runtime.Poller[T].ResumeToken() method. Note that a token can only be obtained for a poller that's in a non-terminal state. Also note that any subsequent calls to poller.Poll() might change the poller's state. In this case, a new token should be created. After the token has been obtained, it can be used to recreate an instance of the originating poller. When resuming a poller, no IO is performed, and zero-value arguments can be used for everything but the Options.ResumeToken. Resume tokens are unique per service client and operation. Attempting to resume a poller for LRO BeginB() with a token from LRO BeginA() will result in an error. The fake package contains types used for constructing in-memory fake servers used in unit tests. This allows writing tests to cover various success/error conditions without the need for connecting to a live service. Please see https://github.com/Azure/azure-sdk-for-go/tree/main/sdk/samples/fakes for details and examples on how to use fakes.
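A hedged sketch of draining a pager; WidgetClient, NewListWidgetsPager, and the Widgets field are the documentation's hypothetical names, so the page type below is a stand-in:

	package widgetexample

	import (
		"context"
		"fmt"

		"github.com/Azure/azure-sdk-for-go/sdk/azcore/runtime"
	)

	// WidgetListResponse stands in for the page type returned by the doc's hypothetical
	// WidgetClient.NewListWidgetsPager; the Widgets field is an assumption.
	type WidgetListResponse struct {
		Widgets []string
	}

	// printAllWidgets drains a pager page by page; no IO happens until NextPage is called.
	func printAllWidgets(ctx context.Context, pager *runtime.Pager[WidgetListResponse]) error {
		for pager.More() {
			page, err := pager.NextPage(ctx)
			if err != nil {
				return err
			}
			for _, w := range page.Widgets {
				fmt.Println(w)
			}
		}
		return nil
	}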
Package buffalo is a Go web development eco-system, designed to make your life easier. Buffalo helps you to generate a web project that has everything from front-end (JavaScript, SCSS, etc.) to back-end (database, routing, etc.) already hooked up and ready to run. From there it provides easy APIs to build your web application quickly in Go. Buffalo **isn't just a framework**, it's a holistic web development environment and project structure that **lets developers get straight to the business** of, well, building their business.
Package tcell provides a lower-level, portable API for building programs that interact with terminals or consoles. It works with both common (and many uncommon!) terminals or terminal emulators, and Windows console implementations. It provides support for up to 256 colors, text attributes, and box drawing elements. A database of terminals built from a real terminfo database is provided, along with code to generate new database entries. Tcell offers very rich support for mice, dependent upon the terminal of course. (Windows, XTerm, and iTerm 2 are known to work very well.) If the environment is not Unicode by default, such as an ISO8859 based locale or GB18030, Tcell can convert input and output, so that your terminal can operate in whatever locale is most convenient, while the application program can just assume "everything is UTF-8". Reasonable defaults are used for updating characters to something suitable for display. Unicode box drawing characters will be converted to use the alternate character set of your terminal, if native conversions are not available. If no ACS is available, then some ASCII fallbacks will be used. Note that support for non-UTF-8 locales (other than C) must be enabled by the application using RegisterEncoding() -- we don't have them all enabled by default to avoid bloating the application unnecessarily. (These days UTF-8 is good enough for almost everyone, and nobody should be using legacy locales anymore.) Also, actual glyphs for various code points will only be displayed if your terminal or emulator (or the font the emulator is using) supports them. A rich set of key codes is supported, with support for up to 65 function keys, and various other special keys.
This example shows how to cache service principal authentication data persistently to make it accessible to multiple processes. The example uses ClientCertificateCredential; however, the pattern is the same for all service principal credential types having a Cache field in their options. The key steps are: Credentials that authenticate users such as InteractiveBrowserCredential have a different pattern; see the persistent user authentication example. This example shows how to cache authentication data persistently so a user doesn't need to authenticate interactively every time the application runs. The example uses InteractiveBrowserCredential; however, DeviceCodeCredential has the same API. The key steps are: This example applies to credentials that authenticate users. For credentials that authenticate service principals, see the persistent service principal authentication example. This example demonstrates how to use azidentity to authenticate a go-redis client connecting to Azure Cache for Redis. See the Azure Cache for Redis documentation for information on configuring a cache to use Entra ID authentication.
Package certmagic automates the obtaining and renewal of TLS certificates, including TLS & HTTPS best practices such as robust OCSP stapling, caching, HTTP->HTTPS redirects, and more. Its high-level API serves your HTTP handlers over HTTPS if you simply give the domain name(s) and the http.Handler; CertMagic will create and run the HTTPS server for you, fully managing certificates during the lifetime of the server. Similarly, it can be used to start TLS listeners or return a ready-to-use tls.Config -- whatever layer you need TLS for, CertMagic makes it easy. See the HTTPS, Listen, and TLS functions for that. If you need more control, create a Cache using NewCache() and then make a Config using New(). You can then call Manage() on the config. But if you use this lower-level API, you'll have to be sure to solve the HTTP and TLS-ALPN challenges yourself (unless you disabled them or use the DNS challenge) by using the provided Config.GetCertificate function in your tls.Config and/or Config.HTTPChallengeHandler in your HTTP handler. See the package's README for more instruction.
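A minimal sketch of the high-level API; the domain name and handler are placeholders:

	package main

	import (
		"log"
		"net/http"

		"github.com/caddyserver/certmagic"
	)

	func main() {
		mux := http.NewServeMux()
		mux.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
			w.Write([]byte("hello, HTTPS"))
		})

		// certmagic.HTTPS obtains and renews certificates and serves the handler over HTTPS,
		// redirecting HTTP to HTTPS. The domain name is a placeholder.
		log.Fatal(certmagic.HTTPS([]string{"example.com"}, mux))
	}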
Package ecr provides the API client, operations, and parameter types for Amazon Elastic Container Registry. Amazon Elastic Container Registry (Amazon ECR) is a managed container image registry service. Customers can use the familiar Docker CLI, or their preferred client, to push, pull, and manage images. Amazon ECR provides a secure, scalable, and reliable registry for your Docker or Open Container Initiative (OCI) images. Amazon ECR supports private repositories with resource-based permissions using IAM so that specific users or Amazon EC2 instances can access repositories and images. Amazon ECR has service endpoints in each supported Region. For more information, see Amazon ECR endpoints in the Amazon Web Services General Reference.
Package elasticsearch provides a Go client for Elasticsearch. Create the client with the NewDefaultClient function: The ELASTICSEARCH_URL environment variable is used instead of the default URL, when set. Use a comma to separate multiple URLs. To configure the client, pass a Config object to the NewClient function: When using the Elastic Service (https://elastic.co/cloud), you can use CloudID instead of Addresses. When either Addresses or CloudID is set, the ELASTICSEARCH_URL environment variable is ignored. See the elasticsearch_integration_test.go file and the _examples folder for more information. Call the Elasticsearch APIs by invoking the corresponding methods on the client: See the github.com/elastic/go-elasticsearch/esapi package for more information about using the API. See the github.com/elastic/elastic-transport-go package for more information about configuring the transport.
Package opentracing implements a bridge that forwards OpenTracing API calls to the OpenTelemetry SDK. To use the bridge, first create an OpenTelemetry tracer of choice. Then use the NewTracerPair() function to create two tracers - one implementing the OpenTracing API (BridgeTracer) and one that implements the OpenTelemetry API (WrapperTracer) and mostly forwards the calls to the OpenTelemetry tracer of choice, but performs some extra steps to make the interaction between the two APIs work. If the OpenTelemetry tracer of choice already knows how to cooperate with the OpenTracing API through the OpenTracing bridge (explained in detail below), then it is fine to skip the WrapperTracer by calling the NewBridgeTracer() function to get the bridge tracer and then passing the chosen OpenTelemetry tracer to the SetOpenTelemetryTracer() function of the bridge tracer. To use an OpenTelemetry span as the parent of an OpenTracing span, create a context using the ContextWithBridgeSpan() function of the bridge tracer, and then use the StartSpanFromContext function of the OpenTracing API. The bridge tracer also allows the user to install a warning handler through the SetWarningHandler() function. The warning handler will be called when there is some misbehavior of the OpenTelemetry tracer with regard to the cooperation with the OpenTracing API. For an OpenTelemetry tracer to cooperate with the OpenTracing API through the BridgeTracer, the OpenTelemetry tracer needs to (reasoning is below the list): 1. Return the same context it received in the Start() function if migration.SkipContextSetup() returns true. 2. Implement the migration.DeferredContextSetupTracerExtension interface. The implementation should set up the context as it would normally do in the Start() function if the migration.SkipContextSetup() function returned false. Calling ContextWithBridgeSpan() is not necessary. 3. Have access to the BridgeTracer instance. 4. If the migration.SkipContextSetup() function returned false, the tracer should use the ContextWithBridgeSpan() function to install the created span as an active OpenTracing span. There are some differences between the OpenTracing and OpenTelemetry APIs, especially with regard to Go context handling. When a span is created with the OpenTracing API (through the StartSpan() function), the Go context is not available. BridgeTracer has access to the OpenTelemetry tracer of choice, so in the StartSpan() function BridgeTracer translates the parameters to the OpenTelemetry version and uses the OpenTelemetry tracer's Start() function to actually create a span. The OpenTelemetry Start() function takes the Go context as a parameter, so BridgeTracer at this point passes a temporary context to Start(). All the changes to the temporary context will be lost at the end of the StartSpan() function, so the OpenTelemetry tracer of choice should not do anything with the context. If the returned context is different, BridgeTracer will warn about it. The OpenTelemetry tracer of choice can learn about this situation by using the migration.SkipContextSetup() function. The tracer will receive an opportunity to set up the context at a later stage. Usually after StartSpan() is finished, users of the OpenTracing API call (either directly or through the opentracing.StartSpanFromContext() helper function) the opentracing.ContextWithSpan() function to insert the created OpenTracing span into the context.
At that time, the OpenTelemetry tracer of choice has a chance of setting up the context through a hook invoked inside the opentracing.ContextWithSpan() function. For that to happen, the tracer should implement the migration.DeferredContextSetupTracerExtension interface. This so far explains the need for points 1. and 2. When the span is created with the OpenTelemetry API (with the Start() function) then migration.SkipContextSetup() will return false. This means that the tracer can do the usual setup of the context, but it also should set up the active OpenTracing span in the context. This is because the OpenTracing API is not used at all in the creation of the span, but the OpenTracing API may be used during the time when the created OpenTelemetry span is current. For this case to work, we need to also set up the active OpenTracing span in the context. This can be done with the ContextWithBridgeSpan() function. This means that the OpenTelemetry tracer of choice needs to have access to the BridgeTracer instance. This should explain the need for points 3. and 4. Another difference related to the Go context handling is in logging - the OpenTracing API does not take a context parameter in the LogFields() function, so when the call to the function gets translated to the OpenTelemetry AddEvent() function, an empty context is passed.
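A hedged sketch of wiring the bridge into existing OpenTracing instrumentation; how the second return value (the wrapper for the OpenTelemetry side) is registered depends on the bridge version, so it is discarded here:

	package main

	import (
		ot "github.com/opentracing/opentracing-go"
		otelbridge "go.opentelemetry.io/otel/bridge/opentracing"
		sdktrace "go.opentelemetry.io/otel/sdk/trace"
	)

	func main() {
		// An OpenTelemetry tracer of choice; here one from the SDK's TracerProvider.
		tp := sdktrace.NewTracerProvider()
		otelTracer := tp.Tracer("example")

		// NewTracerPair returns the OpenTracing-facing BridgeTracer and a wrapper for the
		// OpenTelemetry side; registration of the wrapper varies by bridge version, so it
		// is discarded in this sketch.
		bridgeTracer, _ := otelbridge.NewTracerPair(otelTracer)

		// Existing OpenTracing instrumentation now forwards to the OpenTelemetry SDK.
		ot.SetGlobalTracer(bridgeTracer)
	}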
Package color is an ANSI color package to output colorized or SGR defined output to the standard output. The API can be used in several ways; pick one that suits you. Use simple and default helper functions with predefined foreground colors: However, there are times when custom color mixes are required. Below are some examples to create custom color objects and use the print functions of each separate color object. You can create PrintXxx functions to simplify even more: You can also use FprintXxx functions to pass your own io.Writer: Or create SprintXxx functions to mix strings with other non-colorized strings: Windows support is enabled by default. All Print functions work as intended. However, for the color.SprintXXX functions only, you should use fmt.FprintXXX and set the output to color.Output: Using it with existing code is possible. Just use the Set() method to set the standard output to the given parameters. That way a rewrite of existing code is not required. There might be a case where you want to disable color output (for example to pipe the standard output of your app to somewhere else). `Color` has support to disable colors both globally and for single color definitions. For example, suppose you have a CLI app and a `--no-color` bool flag. You can easily disable the color output with: It also has support for single color definitions (local). You can disable/enable color output on the fly:
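A hedged sketch combining the default helpers, a custom color object, and a generated PrintfFunc (messages are illustrative):

	package main

	import "github.com/fatih/color"

	func main() {
		// Default helpers with predefined foreground colors.
		color.Red("error: %s", "something went wrong")
		color.Green("all good")

		// A custom color object mixing attributes.
		warn := color.New(color.FgYellow, color.Bold)
		warn.Println("careful now")

		// A reusable PrintXxx-style function.
		notice := color.New(color.FgCyan).PrintfFunc()
		notice("processed %d items\n", 42)
	}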
Package ec2 provides the API client, operations, and parameter types for Amazon Elastic Compute Cloud. You can access the features of Amazon Elastic Compute Cloud (Amazon EC2) programmatically. For more information, see the Amazon EC2 Developer Guide.
SQL Schema migration tool for Go. Key features: To install the library and command line program, use the following: The main command is called sql-migrate. Each command requires a configuration file (which defaults to dbconfig.yml, but can be specified with the -config flag). This config file should specify one or more environments: The `table` setting is optional and will default to `gorp_migrations`. The environment that will be used can be specified with the -env flag (defaults to development). Use the --help flag in combination with any of the commands to get an overview of its usage: The up command applies all available migrations. By contrast, down will only apply one migration by default. This behavior can be changed for both by using the -limit parameter. The redo command will unapply the last migration and reapply it. This is useful during development, when you're writing migrations. Use the status command to see the state of the applied migrations: If you are using MySQL, you must append ?parseTime=true to the datasource configuration. For example: See https://github.com/go-sql-driver/mysql#parsetime for more information. Import sql-migrate into your application: Set up a source of migrations; this can be from memory, from a set of files, or from bindata (more on that later): Then use the Exec function to upgrade your database: Note that n can be greater than 0 even if there is an error: any migration that succeeded will remain applied even if a later one fails. The full set of capabilities can be found in the API docs below. Migrations are defined in SQL files, which contain a set of SQL statements. Special comments are used to distinguish up and down migrations. You can put multiple statements in each block, as long as you end them with a semicolon (;). If you have complex statements which contain semicolons, use StatementBegin and StatementEnd to indicate boundaries: The order in which migrations are applied is defined through the filename: sql-migrate will sort migrations based on their name. It's recommended to use an increasing version number or a timestamp as the first part of the filename. Normally each migration is run within a transaction in order to guarantee that it is fully atomic. However, some SQL commands (for example creating an index concurrently in PostgreSQL) cannot be executed inside a transaction. In order to execute such a command in a migration, the migration can be run using the notransaction option: If you like your Go applications self-contained (that is, a single binary), use packr (https://github.com/gobuffalo/packr) to embed the migration files. Just write your migration files as usual, as a set of SQL files in a folder. Use the PackrMigrationSource in your application to find the migrations: If you already have a box and would like to use a subdirectory: As an alternative, but slightly less maintained, you can use bindata (https://github.com/shuLhan/go-bindata) to embed the migration files. Just write your migration files as usual, as a set of SQL files in a folder. Then use bindata to generate a .go file with the migrations embedded: The resulting bindata.go file will contain your migrations. Remember to regenerate your bindata.go file whenever you add/modify a migration (go generate will help here). Use the AssetMigrationSource in your application to find the migrations: Both Asset and AssetDir are functions provided by bindata. Then proceed as usual. Adding a new migration source means implementing MigrationSource.
The resulting slice of migrations will be executed in the given order, so it should usually be sorted by the Id field.
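A hedged sketch of applying file-based migrations with Exec; the driver, DSN, and migrations directory are placeholders:

	package main

	import (
		"database/sql"
		"log"

		_ "github.com/lib/pq"
		migrate "github.com/rubenv/sql-migrate"
	)

	func main() {
		// A file-based migration source; directory and DSN are placeholders.
		migrations := &migrate.FileMigrationSource{Dir: "db/migrations"}

		db, err := sql.Open("postgres", "dbname=mydb sslmode=disable")
		if err != nil {
			log.Fatal(err)
		}

		// Apply all pending up migrations.
		n, err := migrate.Exec(db, "postgres", migrations, migrate.Up)
		if err != nil {
			log.Fatal(err)
		}
		log.Printf("applied %d migrations", n)
	}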
Package websocket implements the RFC 6455 WebSocket protocol. Deprecated: coder now maintains this library at https://github.com/coder/websocket. https://tools.ietf.org/html/rfc6455 Use Dial to dial a WebSocket server. Use Accept to accept a WebSocket client. Conn represents the resulting WebSocket connection. The examples are the best way to understand how to correctly use the library. The wsjson subpackage contains helpers for JSON and protobuf messages. More documentation at https://nhooyr.io/websocket. The client side supports compiling to Wasm. It wraps the WebSocket browser API. See https://developer.mozilla.org/en-US/docs/Web/API/WebSocket Some important caveats to be aware of: This example demonstrates an echo server. This example demonstrates a full-stack chat app with an automated test.
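A hedged dial-and-write sketch in the spirit of the examples; the nhooyr.io/websocket import path and server URL are assumptions (the maintained fork lives at github.com/coder/websocket):

	package main

	import (
		"context"
		"log"
		"time"

		"nhooyr.io/websocket"
		"nhooyr.io/websocket/wsjson"
	)

	func main() {
		ctx, cancel := context.WithTimeout(context.Background(), time.Minute)
		defer cancel()

		// The URL is a placeholder for a server accepting WebSocket connections.
		c, _, err := websocket.Dial(ctx, "ws://localhost:8080", nil)
		if err != nil {
			log.Fatal(err)
		}
		defer c.Close(websocket.StatusInternalError, "the sky is falling")

		// wsjson encodes the value as a JSON text message.
		if err := wsjson.Write(ctx, c, map[string]int{"i": 1}); err != nil {
			log.Fatal(err)
		}

		c.Close(websocket.StatusNormalClosure, "")
	}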
Gomega is the Ginkgo BDD-style testing framework's preferred matcher library. The godoc documentation describes Gomega's API. More comprehensive documentation (with examples!) is available at http://onsi.github.io/gomega/ Gomega on Github: http://github.com/onsi/gomega Learn more about Ginkgo online: http://onsi.github.io/ginkgo Ginkgo on Github: http://github.com/onsi/ginkgo Gomega is MIT-Licensed
Package route53 provides the API client, operations, and parameter types for Amazon Route 53. Amazon Route 53 is a highly available and scalable Domain Name System (DNS) web service. You can use Route 53 to register domain names (for more information, see How domain registration works), route internet traffic to the resources for your domain (see How internet traffic is routed to your website or web application), and check the health of your resources (see How Route 53 checks the health of your resources).
Package ecrpublic provides the API client, operations, and parameter types for Amazon Elastic Container Registry Public. Amazon Elastic Container Registry Public (Amazon ECR Public) is a managed container image registry service. Amazon ECR provides both public and private registries to host your container images. You can use the Docker CLI or your preferred client to push, pull, and manage images. Amazon ECR provides a secure, scalable, and reliable registry for your Docker or Open Container Initiative (OCI) images. Amazon ECR supports public repositories with this API. For information about the Amazon ECR API for private repositories, see Amazon Elastic Container Registry API Reference.
Package kms provides the API client, operations, and parameter types for AWS Key Management Service. Key Management Service (KMS) is an encryption and key management web service. This guide describes the KMS operations that you can call programmatically. For general information about KMS, see the Key Management Service Developer Guide. KMS has replaced the term customer master key (CMK) with KMS key. The concept has not changed. To prevent breaking changes, KMS is keeping some variations of this term. Amazon Web Services provides SDKs that consist of libraries and sample code for various programming languages and platforms (Java, Ruby, .Net, macOS, Android, etc.). The SDKs provide a convenient way to create programmatic access to KMS and other Amazon Web Services services. For example, the SDKs take care of tasks such as signing requests (see below), managing errors, and retrying requests automatically. For more information about the Amazon Web Services SDKs, including how to download and install them, see Tools for Amazon Web Services. We recommend that you use the Amazon Web Services SDKs to make programmatic API calls to KMS. If you need to use FIPS 140-2 validated cryptographic modules when communicating with Amazon Web Services, use the FIPS endpoint in your preferred Amazon Web Services Region. For more information about the available FIPS endpoints, see Service endpoints in the Key Management Service topic of the Amazon Web Services General Reference. All KMS API calls must be signed and be transmitted using Transport Layer Security (TLS). KMS recommends you always use the latest supported TLS version. Clients must also support cipher suites with Perfect Forward Secrecy (PFS) such as Ephemeral Diffie-Hellman (DHE) or Elliptic Curve Ephemeral Diffie-Hellman (ECDHE). Most modern systems such as Java 7 and later support these modes. Requests must be signed using an access key ID and a secret access key. We strongly recommend that you do not use your Amazon Web Services account root access key ID and secret access key for everyday work. You can use the access key ID and secret access key for an IAM user or you can use the Security Token Service (STS) to generate temporary security credentials and use those to sign requests. All KMS requests must be signed with Signature Version 4. KMS supports CloudTrail, a service that logs Amazon Web Services API calls and related events for your Amazon Web Services account and delivers them to an Amazon S3 bucket that you specify. By using the information collected by CloudTrail, you can determine what requests were made to KMS, who made the request, when it was made, and so on. To learn more about CloudTrail, including how to turn it on and find your log files, see the CloudTrail User Guide. For more information about credentials and request signing, see the following: Amazon Web Services Security Credentials, Temporary Security Credentials, and Signature Version 4 Signing Process. Of the API operations discussed in this guide, the following will prove the most useful for most applications. You will likely perform operations other than these, such as creating keys and assigning policies, by using the console.
Package gg provides a simple API for rendering 2D graphics in pure Go.
Package cloudwatch provides the API client, operations, and parameter types for Amazon CloudWatch. Amazon CloudWatch monitors your Amazon Web Services resources and the applications you run on Amazon Web Services in real time. You can use CloudWatch to collect and track metrics, which are the variables you want to measure for your resources and applications. CloudWatch alarms send notifications or automatically change the resources you are monitoring based on rules that you define. For example, you can monitor the CPU usage and disk reads and writes of your Amazon EC2 instances. Then, use this data to determine whether you should launch additional instances to handle increased load. You can also use this data to stop under-used instances to save money. In addition to monitoring the built-in metrics that come with Amazon Web Services, you can monitor your own custom metrics. With CloudWatch, you gain system-wide visibility into resource utilization, application performance, and operational health.
Package dynamodb provides the API client, operations, and parameter types for Amazon DynamoDB. Amazon DynamoDB is a fully managed NoSQL database service that provides fast and predictable performance with seamless scalability. DynamoDB lets you offload the administrative burdens of operating and scaling a distributed database, so that you don't have to worry about hardware provisioning, setup and configuration, replication, software patching, or cluster scaling. With DynamoDB, you can create database tables that can store and retrieve any amount of data, and serve any level of request traffic. You can scale up or scale down your tables' throughput capacity without downtime or performance degradation, and use the Amazon Web Services Management Console to monitor resource utilization and performance metrics. DynamoDB automatically spreads the data and traffic for your tables over a sufficient number of servers to handle your throughput and storage requirements, while maintaining consistent and fast performance. All of your data is stored on solid state disks (SSDs) and automatically replicated across multiple Availability Zones in an Amazon Web Services Region, providing built-in high availability and data durability.
Package ebiten provides a graphics and input API to develop 2D games. You can start the game by calling the function RunGame. In the API document, 'the main thread' means the goroutine in init(), main(), and their callees without a 'go' statement. It is assured that 'the main thread' runs on the OS main thread. There are some Ebiten functions that must be called on the main thread under some conditions (typically, before ebiten.RunGame is called). The `EBITEN_SCREENSHOT_KEY` environment variable specifies the key to take a screenshot. For example, if you run your game with `EBITEN_SCREENSHOT_KEY=q`, you can take a screenshot of the game screen by pressing the Q key. This works only on desktops. The `EBITEN_INTERNAL_IMAGES_KEY` environment variable specifies the key to dump all the internal images. This is valid only when the build tag 'ebitendebug' is specified. This works only on desktops. The `ebitendebug` build tag outputs a log of graphics commands. This is useful to know what happens in Ebiten. In general, the number of graphics commands affects the performance of your game. The `ebitengl` build tag forces the use of OpenGL in any environment.
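A hedged minimal game using RunGame; the /v2 module path, window title, and logical screen size are assumptions:

	package main

	import (
		"log"

		"github.com/hajimehoshi/ebiten/v2"
		"github.com/hajimehoshi/ebiten/v2/ebitenutil"
	)

	// Game implements ebiten.Game; RunGame calls these three methods.
	type Game struct{}

	func (g *Game) Update() error {
		// Update the game state here (called every tick).
		return nil
	}

	func (g *Game) Draw(screen *ebiten.Image) {
		ebitenutil.DebugPrint(screen, "Hello, world!")
	}

	func (g *Game) Layout(outsideWidth, outsideHeight int) (int, int) {
		// Logical screen size; the values are arbitrary for this sketch.
		return 320, 240
	}

	func main() {
		ebiten.SetWindowTitle("Hello")
		// RunGame must be called from the main goroutine ("the main thread").
		if err := ebiten.RunGame(&Game{}); err != nil {
			log.Fatal(err)
		}
	}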
Package sqs provides the API client, operations, and parameter types for Amazon Simple Queue Service. Welcome to the Amazon SQS API Reference. Amazon SQS is a reliable, highly-scalable hosted queue for storing messages as they travel between applications or microservices. Amazon SQS moves data between distributed application components and helps you decouple these components. For information on the permissions you need to use this API, see Identity and access management in the Amazon SQS Developer Guide. You can use Amazon Web Services SDKs to access Amazon SQS using your favorite programming language. The SDKs automatically perform tasks such as cryptographically signing your service requests, retrying requests, and handling error responses. Additional resources: Amazon SQS Product Page, Making API Requests, Amazon SQS Message Attributes, Amazon SQS Dead-Letter Queues, Amazon SQS in the Command Line Interface, and Regions and Endpoints.
Package endpointdiscovery provides a feature implemented in the AWS SDK for Go V2 that allows a client to fetch a valid endpoint to serve an API request. Discovered endpoints are stored in an internal thread-safe cache to reduce the number of calls made to fetch the endpoint. Endpoint discovery stores endpoints by associating them with a generated cache key. The cache key is built using the service-modeled sdkId and any service-defined input identifiers provided by the customer. Endpoint cache keys follow the grammar: The endpoint discovery cache implementation is internal. Clients resolve the cache size to 10 entries. Each entry may contain multiple host addresses as returned by the service. Each discovered endpoint has a TTL associated with it, and entries are evicted from the cache lazily, i.e. when the client tries to retrieve an endpoint but finds an expired entry instead. The endpoint discovery feature can be turned on by setting the `AWS_ENABLE_ENDPOINT_DISCOVERY` environment variable to TRUE. By default, the feature is set to AUTO, indicating that operations that require endpoint discovery always use it. To completely turn off the feature, set the value to FALSE. Similar configuration rules apply for the shared config file, where the key is `endpoint_discovery_enabled`.
Package dynamodbstreams provides the API client, operations, and parameter types for Amazon DynamoDB Streams. Amazon DynamoDB Streams provides API actions for accessing streams and processing stream records. To learn more about application development with Streams, see Capturing Table Activity with DynamoDB Streams in the Amazon DynamoDB Developer Guide.
Package ssm provides the API client, operations, and parameter types for Amazon Simple Systems Manager (SSM). Amazon Web Services Systems Manager is the operations hub for your Amazon Web Services applications and resources and a secure end-to-end management solution for hybrid cloud environments that enables safe and secure operations at scale. This reference is intended to be used with the Amazon Web Services Systems Manager User Guide. To get started, see Setting up Amazon Web Services Systems Manager. Related resources: For information about each of the capabilities that comprise Systems Manager, see Systems Manager capabilities in the Amazon Web Services Systems Manager User Guide. For details about predefined runbooks for Automation, a capability of Amazon Web Services Systems Manager, see the Systems Manager Automation runbook reference. For information about AppConfig, a capability of Systems Manager, see the AppConfig User Guide and the AppConfig API Reference. For information about Incident Manager, a capability of Systems Manager, see the Systems Manager Incident Manager User Guide and the Systems Manager Incident Manager API Reference.
Package tcell provides a lower-level, portable API for building programs that interact with terminals or consoles. It works with both common (and many uncommon!) terminals or terminal emulators, and Windows console implementations. It provides support for up to 256 colors, text attributes, and box drawing elements. A database of terminals built from a real terminfo database is provided, along with code to generate new database entries. Tcell offers very rich support for mice, dependent upon the terminal of course. (Windows, XTerm, and iTerm 2 are known to work very well.) If the environment is not Unicode by default, such as an ISO8859 based locale or GB18030, Tcell can convert input and output, so that your terminal can operate in whatever locale is most convenient, while the application program can just assume "everything is UTF-8". Reasonable defaults are used for updating characters to something suitable for display. Unicode box drawing characters will be converted to use the alternate character set of your terminal, if native conversions are not available. If no ACS is available, then some ASCII fallbacks will be used. Note that support for non-UTF-8 locales (other than C) must be enabled by the application using RegisterEncoding() -- we don't have them all enabled by default to avoid bloating the application unnecessarily. (These days UTF-8 is good enough for almost everyone, and nobody should be using legacy locales anymore.) Also, actual glyphs for various code points will only be displayed if your terminal or emulator (or the font the emulator is using) supports them. A rich set of key codes is supported, with support for up to 65 function keys, and various other special keys.
Package secretsmanager provides the API client, operations, and parameter types for AWS Secrets Manager. Amazon Web Services Secrets Manager provides a service to enable you to store, manage, and retrieve secrets. This guide provides descriptions of the Secrets Manager API. For more information about using this service, see the Amazon Web Services Secrets Manager User Guide. This version of the Secrets Manager API Reference documents the Secrets Manager API version 2017-10-17. For a list of endpoints, see Amazon Web Services Secrets Manager endpoints. We welcome your feedback. Send your comments to awssecretsmanager-feedback@amazon.com, or post your feedback and questions in the Amazon Web Services Secrets Manager Discussion Forum. For more information about the Amazon Web Services Discussion Forums, see Forums Help. Amazon Web Services Secrets Manager supports Amazon Web Services CloudTrail, a service that records Amazon Web Services API calls for your Amazon Web Services account and delivers log files to an Amazon S3 bucket. By using information that's collected by Amazon Web Services CloudTrail, you can determine the requests successfully made to Secrets Manager, who made the request, when it was made, and so on. For more about Amazon Web Services Secrets Manager and support for Amazon Web Services CloudTrail, see Logging Amazon Web Services Secrets Manager Events with Amazon Web Services CloudTrail in the Amazon Web Services Secrets Manager User Guide. To learn more about CloudTrail, including enabling it and finding your log files, see the Amazon Web Services CloudTrail User Guide.
Package attributevalue provides marshaling and unmarshaling utilities to convert between Go types and Amazon DynamoDB AttributeValues. These utilities allow you to marshal slices, maps, structs, and scalar values to and from the AttributeValue type. They make it easier to convert between AttributeValue and Go types when working with DynamoDB resources. This package only converts between Go types and DynamoDB AttributeValue. See the feature/dynamodbstreams/attributevalue package for converting to DynamoDBStreams AttributeValue types. The FromDynamoStreamsDBMap, FromDynamoStreamsDBList, and FromDynamoDBStreams functions provide the conversion utilities to convert a DynamoDBStreams AttributeValue type to a DynamoDB AttributeValue type. Use these utilities when you need to convert the AttributeValue type between the two APIs. To marshal a Go type to an AttributeValue you can use the Marshal, MarshalList, and MarshalMap functions. The List and Map functions are specialized versions of Marshal for serializing slices and maps of AttributeValues. The following example uses MarshalMap to convert a Go struct, Record, to an AttributeValue. The AttributeValue value is then used as input to the PutItem operation call. To unmarshal an AttributeValue to a Go type you can use the Unmarshal, UnmarshalList, UnmarshalMap, and UnmarshalListOfMaps functions. The List and Map functions are specialized versions of the Unmarshal function for unmarshaling slices and maps of AttributeValues. The following example will unmarshal the Items result from the DynamoDB Scan API operation. The Items returned will be unmarshaled into the slice of Records structs. The AttributeValue Marshal and Unmarshal functions support the `dynamodbav` struct tag by default. Additional tags can be enabled with the EncoderOptions and DecoderOptions TagKey option. See the Marshal and Unmarshal functions for information on how struct tags and fields are marshaled and unmarshaled.
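A hedged sketch of MarshalMap feeding PutItem and UnmarshalListOfMaps consuming Scan results; the Record type, table name, and field tags are assumptions:

	package main

	import (
		"context"
		"log"

		"github.com/aws/aws-sdk-go-v2/aws"
		"github.com/aws/aws-sdk-go-v2/config"
		"github.com/aws/aws-sdk-go-v2/feature/dynamodb/attributevalue"
		"github.com/aws/aws-sdk-go-v2/service/dynamodb"
	)

	// Record is an illustrative type; the table name and fields are assumptions.
	type Record struct {
		ID    string `dynamodbav:"id"`
		Count int    `dynamodbav:"count"`
	}

	func main() {
		cfg, err := config.LoadDefaultConfig(context.TODO())
		if err != nil {
			log.Fatal(err)
		}
		client := dynamodb.NewFromConfig(cfg)

		// Marshal a Go struct into a map of AttributeValues for PutItem.
		item, err := attributevalue.MarshalMap(Record{ID: "abc", Count: 1})
		if err != nil {
			log.Fatal(err)
		}
		if _, err := client.PutItem(context.TODO(), &dynamodb.PutItemInput{
			TableName: aws.String("records"),
			Item:      item,
		}); err != nil {
			log.Fatal(err)
		}

		// Unmarshal Scan results back into a slice of Records.
		out, err := client.Scan(context.TODO(), &dynamodb.ScanInput{TableName: aws.String("records")})
		if err != nil {
			log.Fatal(err)
		}
		var records []Record
		if err := attributevalue.UnmarshalListOfMaps(out.Items, &records); err != nil {
			log.Fatal(err)
		}
		log.Println(records)
	}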
Package trillian contains the generated protobuf code for the Trillian API.
Package cloudflare implements the Cloudflare v4 API. This file contains helper methods for accepting variants (pointers, values, slices, etc.) of a particular type and returning them in another. A common use is converting values to pointers and back. _Most_ follow the convention of (where <type> is a Golang type such as Bool): <type>Ptr: Accepts a value and returns a pointer. <type>: Accepts a pointer and returns a value. <type>PtrSlice: Accepts a slice of values and returns a slice of pointers. <type>Slice: Accepts a slice of pointers and returns a slice of values. <type>PtrMap: Accepts a string map of values and returns a string map of pointers. <type>Map: Accepts a string map of pointers and returns a string map of values. Not all Golang types are covered here, only those that are commonly used.
Package kinesis provides the API client, operations, and parameter types for Amazon Kinesis. Amazon Kinesis Data Streams is a managed service that scales elastically for real-time processing of streaming big data.
Package sigv4authextension implements the `auth.Client` interface. This extension provides the Sigv4 process of adding authentication information to AWS API requests sent by HTTP. As such, the extension can be used for HTTP based exporters that export to AWS services.