Package apis contains Kubernetes API groups.
Package client provides bindings for the etcd APIs. Create a Config and exchange it for a Client: Clients are safe for concurrent use by multiple goroutines. Create a KeysAPI using the Client, then use it to interact with etcd: Use a custom context to set timeouts on your operations:
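A minimal sketch of that flow, assuming the v2 keys API at import path go.etcd.io/etcd/client (older code may use github.com/coreos/etcd/client) and an etcd endpoint on localhost:

    package main

    import (
        "context"
        "fmt"
        "log"
        "time"

        "go.etcd.io/etcd/client"
    )

    func main() {
        cfg := client.Config{
            Endpoints:               []string{"http://127.0.0.1:2379"},
            Transport:               client.DefaultTransport,
            HeaderTimeoutPerRequest: time.Second,
        }
        c, err := client.New(cfg)
        if err != nil {
            log.Fatal(err)
        }
        kapi := client.NewKeysAPI(c)

        // Use a custom context to bound the operations.
        ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
        defer cancel()

        if _, err := kapi.Set(ctx, "/foo", "bar", nil); err != nil {
            log.Fatal(err)
        }
        resp, err := kapi.Get(ctx, "/foo", nil)
        if err != nil {
            log.Fatal(err)
        }
        fmt.Printf("%q -> %q\n", resp.Node.Key, resp.Node.Value)
    }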
Package logrus is a structured logger for Go, completely API compatible with the standard library logger. The simplest way to use Logrus is the package-level exported logger. For a full guide, visit https://github.com/sirupsen/logrus. The package documentation also includes an example of how to use a hook.
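A minimal sketch of the package-level logger with structured fields:

    package main

    import log "github.com/sirupsen/logrus"

    func main() {
        // Structured fields are attached to the entry before emitting it.
        log.WithFields(log.Fields{
            "animal": "walrus",
            "size":   10,
        }).Info("A group of walrus emerges from the ocean")
    }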
Package typeparams contains common utilities for writing tools that interact with generic Go code, as introduced with Go 1.18. Many of the types and functions in this package are proxies for the new APIs introduced in the standard library with Go 1.18. For example, the typeparams.Union type is an alias for go/types.Union, and the ForTypeSpec function returns the value of the go/ast.TypeSpec.TypeParams field. At Go versions older than 1.18 these helpers are implemented as stubs, allowing users of this package to write code that handles generic constructs inline, even if the Go version being used to compile does not support generics. Additionally, this package contains common utilities for working with the new generic constructs, to supplement the standard library APIs. Notably, the NormalTerms API computes a minimal representation of the structural restrictions on a type parameter. In the future, these supplemental APIs may be available in the standard library.
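A minimal sketch of the proxy behaviour described above. Note that this is an internal package, so the import below is only usable from within golang.org/x/tools; per the description, ForTypeSpec mirrors ast.TypeSpec.TypeParams:

    import (
        "go/ast"

        "golang.org/x/tools/internal/typeparams"
    )

    // typeParamsOf returns the type parameter list of a declaration, or nil
    // when the toolchain predates Go 1.18 (where the helper is a stub).
    func typeParamsOf(spec *ast.TypeSpec) *ast.FieldList {
        return typeparams.ForTypeSpec(spec)
    }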
A dummy main to help with releasing the kustomize API module.
Package fasthttp provides a fast HTTP server and client API. Fasthttp provides the following features: Optimized for speed. Easily handles more than 100K qps and more than 1M concurrent keep-alive connections on modern hardware. Optimized for low memory usage. Easy 'Connection: Upgrade' support via RequestCtx.Hijack. Server provides the following anti-DoS limits: - The number of concurrent connections. - The number of concurrent connections per client IP. - The number of requests per connection. - Request read timeout. - Response write timeout. - Maximum request header size. - Maximum request body size. - Maximum request execution time. - Maximum keep-alive connection lifetime. - Early filtering out non-GET requests. A lot of additional useful info is exposed to the request handler: - Server and client address. - Per-request logger. - Unique request id. - Request start time. - Connection start time. - Request sequence number for the current connection. Client supports automatic retry on idempotent requests' failure. The Fasthttp API is designed with the ability to extend existing client and server implementations or to write custom client and server implementations from scratch.
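A minimal server sketch using the package's RequestCtx-based handler and ListenAndServe:

    package main

    import (
        "fmt"
        "log"

        "github.com/valyala/fasthttp"
    )

    func main() {
        // RequestCtx carries both the incoming request and the response.
        handler := func(ctx *fasthttp.RequestCtx) {
            fmt.Fprintf(ctx, "Hello, %q from %s", ctx.Path(), ctx.RemoteAddr())
        }
        if err := fasthttp.ListenAndServe(":8080", handler); err != nil {
            log.Fatal(err)
        }
    }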
Package zap provides fast, structured, leveled logging. For applications that log in the hot path, reflection-based serialization and string formatting are prohibitively expensive - they're CPU-intensive and make many small allocations. Put differently, using json.Marshal and fmt.Fprintf to log tons of interface{} makes your application slow. Zap takes a different approach. It includes a reflection-free, zero-allocation JSON encoder, and the base Logger strives to avoid serialization overhead and allocations wherever possible. By building the high-level SugaredLogger on that foundation, zap lets users choose when they need to count every allocation and when they'd prefer a more familiar, loosely typed API. In contexts where performance is nice, but not critical, use the SugaredLogger. It's 4-10x faster than other structured logging packages and supports both structured and printf-style logging. Like log15 and go-kit, the SugaredLogger's structured logging APIs are loosely typed and accept a variadic number of key-value pairs. (For more advanced use cases, they also accept strongly typed fields - see the SugaredLogger.With documentation for details.) By default, loggers are unbuffered. However, since zap's low-level APIs allow buffering, calling Sync before letting your process exit is a good habit. In the rare contexts where every microsecond and every allocation matter, use the Logger. It's even faster than the SugaredLogger and allocates far less, but it only supports strongly-typed, structured logging. Choosing between the Logger and SugaredLogger doesn't need to be an application-wide decision: converting between the two is simple and inexpensive. The simplest way to build a Logger is to use zap's opinionated presets: NewExample, NewProduction, and NewDevelopment. These presets build a logger with a single function call: Presets are fine for small projects, but larger projects and organizations naturally require a bit more customization. For most users, zap's Config struct strikes the right balance between flexibility and convenience. See the package-level BasicConfiguration example for sample code. More unusual configurations (splitting output between files, sending logs to a message queue, etc.) are possible, but require direct use of go.uber.org/zap/zapcore. See the package-level AdvancedConfiguration example for sample code. The zap package itself is a relatively thin wrapper around the interfaces in go.uber.org/zap/zapcore. Extending zap to support a new encoding (e.g., BSON), a new log sink (e.g., Kafka), or something more exotic (perhaps an exception aggregation service, like Sentry or Rollbar) typically requires implementing the zapcore.Encoder, zapcore.WriteSyncer, or zapcore.Core interfaces. See the zapcore documentation for details. Similarly, package authors can use the high-performance Encoder and Core implementations in the zapcore package to build their own loggers. An FAQ covering everything from installation errors to design decisions is available at https://github.com/uber-go/zap/blob/master/FAQ.md.
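A minimal sketch showing a preset and both the Logger and SugaredLogger APIs described above:

    package main

    import "go.uber.org/zap"

    func main() {
        logger, err := zap.NewProduction()
        if err != nil {
            panic(err)
        }
        defer logger.Sync() // flush any buffered entries before exit

        // Strongly typed, allocation-conscious API.
        logger.Info("failed to fetch URL",
            zap.String("url", "https://example.com"),
            zap.Int("attempt", 3),
        )

        // Loosely typed, printf-friendly API.
        sugar := logger.Sugar()
        sugar.Infow("failed to fetch URL", "url", "https://example.com", "attempt", 3)
    }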
Package sdk is the official AWS SDK for the Go programming language. The AWS SDK for Go provides APIs and utilities that developers can use to build Go applications that use AWS services, such as Amazon Elastic Compute Cloud (Amazon EC2) and Amazon Simple Storage Service (Amazon S3). The SDK removes the complexity of coding directly against a web service interface. It hides a lot of the lower-level plumbing, such as authentication, request retries, and error handling. The SDK also includes helpful utilities on top of the AWS APIs that add additional capabilities and functionality. For example, the Amazon S3 Download and Upload Manager will automatically split up large objects into multiple parts and transfer them concurrently. See the s3manager package documentation for more information. https://docs.aws.amazon.com/sdk-for-go/api/service/s3/s3manager/ Check out the Getting Started Guide and API Reference Docs for details on the SDK's components and on each AWS client the SDK supports. The Getting Started Guide provides examples and a detailed description of how to get set up with the SDK. https://docs.aws.amazon.com/sdk-for-go/v1/developer-guide/welcome.html The API Reference Docs include a detailed breakdown of the SDK's components such as utilities and AWS clients. Use this as a reference for the Go types included with the SDK, such as AWS clients, API operations, and API parameters. https://docs.aws.amazon.com/sdk-for-go/api/ The SDK is composed of two main components: the SDK core and service clients. The SDK core packages are all available under the aws package at the root of the SDK. Each client for a supported AWS service is available within its own package under the service folder at the root of the SDK. aws - SDK core, provides common shared types such as Config, Logger, and utilities to make working with API parameters easier. awserr - Provides the error interface that the SDK will use for all errors that occur in the SDK's processing. This includes service API response errors as well. The Error type is made up of a code and message. Cast the SDK's returned error type to awserr.Error and call the Code method to compare the returned error to specific error codes. See the package's documentation for additional values that can be extracted, such as RequestId. credentials - Provides the types and built-in credential providers the SDK will use to retrieve AWS credentials to make API requests with. Nested under this folder are also additional credential providers such as stscreds for assuming IAM roles, and ec2rolecreds for EC2 Instance roles. endpoints - Provides the AWS Regions and Endpoints metadata for the SDK. Use this to look up AWS service endpoint information such as which services are in a region, and what regions a service is in. Constants are also provided for all region identifiers, e.g. UsWest2RegionID for "us-west-2". session - Provides initial default configuration, and loads configuration from external sources such as the environment and the shared credentials file. request - Provides the API request sending and retry logic for the SDK. This package also includes utilities for defining your own request retryer, and configuring how the SDK processes the request. service - Clients for AWS services. All services supported by the SDK are available under this folder. The SDK includes the Go types and utilities you can use to make requests to AWS service APIs. Within the service folder at the root of the SDK you'll find a package for each AWS service the SDK supports.
All service clients follow a common pattern of creation and usage. When creating a client for an AWS service you'll first need to have a Session value constructed. The Session provides configuration that can be shared between your service clients. When service clients are created you can pass in additional configuration via the aws.Config type to override configuration provided by the Session and create service client instances with custom configuration. Once the service's client is created you can use it to make API requests to the AWS service. These clients are safe to use concurrently. In the AWS SDK for Go, you can configure settings for service clients, such as the log level and maximum number of retries. Most settings are optional; however, for each service client, you must specify a region and your credentials. The SDK uses these values to send requests to the correct AWS region and sign requests with the correct credentials. You can specify these values as part of a session or as environment variables. See the SDK's configuration guide for more information. https://docs.aws.amazon.com/sdk-for-go/v1/developer-guide/configuring-sdk.html See the session package documentation for more information on how to use Session with the SDK. https://docs.aws.amazon.com/sdk-for-go/api/aws/session/ See the Config type in the aws package for more information on configuration options. https://docs.aws.amazon.com/sdk-for-go/api/aws/#Config When using the SDK you'll generally need your AWS credentials to authenticate with AWS services. The SDK supports multiple methods of providing these credentials. By default the SDK will source credentials automatically from its default credential chain. See the session package for more information on this chain, and how to configure it. The common items in the credential chain are the following: Environment Credentials - Set of environment variables that are useful when sub processes are created for specific roles. Shared Credentials file (~/.aws/credentials) - This file stores your credentials based on a profile name and is useful for local development. EC2 Instance Role Credentials - Use an EC2 Instance Role to assign credentials to applications running on an EC2 instance. This removes the need to manage credential files in production. Credentials can also be configured in code by setting the Config's Credentials value to a custom provider or using one of the providers included with the SDK to bypass the default credential chain and use a custom one. This is helpful when you want to instruct the SDK to only use a specific set of credentials or providers. This example creates a credential provider for assuming an IAM role, "myRoleARN", and configures the S3 service client to use that role for API requests (see the sketch following this overview). See the credentials package documentation for more information on credential providers included with the SDK, and how to customize the SDK's usage of credentials. https://docs.aws.amazon.com/sdk-for-go/api/aws/credentials The SDK has support for the shared configuration file (~/.aws/config). This support can be enabled by setting the environment variable "AWS_SDK_LOAD_CONFIG=1", or by enabling the feature in code when creating a Session via the Option's SharedConfigState parameter. In addition to the credentials you'll need to specify the region the SDK will use to make AWS API requests. In the SDK you can specify the region either with an environment variable, or directly in code when a Session or service client is created.
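A minimal sketch of that assume-role configuration; the region and role ARN below are placeholders:

    package main

    import (
        "fmt"
        "log"

        "github.com/aws/aws-sdk-go/aws"
        "github.com/aws/aws-sdk-go/aws/credentials/stscreds"
        "github.com/aws/aws-sdk-go/aws/session"
        "github.com/aws/aws-sdk-go/service/s3"
    )

    func main() {
        // The Session sources configuration from the default credential
        // chain, the environment, and (optionally) the shared config file.
        sess := session.Must(session.NewSession(&aws.Config{
            Region: aws.String("us-west-2"),
        }))

        // Credentials provider that assumes the IAM role "myRoleARN".
        creds := stscreds.NewCredentials(sess, "myRoleARN")

        // S3 client that signs requests with the assumed role's credentials.
        svc := s3.New(sess, &aws.Config{Credentials: creds})

        out, err := svc.ListBuckets(&s3.ListBucketsInput{})
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println("buckets:", len(out.Buckets))
    }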
The last value specified in code wins if the region is specified multiple ways. To set the region via the environment variable, set "AWS_REGION" to the region you want the SDK to use. Using this method to set the region will allow you to run your application in multiple regions without needing additional code in the application to select the region. The endpoints package includes constants for all regions the SDK knows. The values are all suffixed with RegionID. These values are helpful, because they reduce the need to type the region string manually. To set the region on a Session, set the aws package's Config struct Region field to the AWS region you want the service clients created from the session to use. This is helpful when you want to create multiple service clients, and all of the clients make API requests to the same region. See the endpoints package for the AWS Regions and Endpoints metadata. https://docs.aws.amazon.com/sdk-for-go/api/aws/endpoints/ In addition to setting the region when creating a Session you can also set the region on a per-service-client basis. This overrides the region of a Session. This is helpful when you want to create service clients in specific regions different from the Session's region. See the Config type in the aws package for more information and additional options such as setting the Endpoint, and other service client configuration options. https://docs.aws.amazon.com/sdk-for-go/api/aws/#Config Once the client is created you can make an API request to the service. Each API method takes an input parameter, and returns the service response and an error. The SDK provides methods for making the API call in multiple ways. In this list we'll use the S3 ListObjects API as an example for the different ways of making API requests. ListObjects - Base API operation that will make the API request to the service. ListObjectsRequest - API methods suffixed with Request will construct the API request, but not send it. This is also helpful when you want to get a presigned URL for a request, and share the presigned URL instead of your application making the request directly. ListObjectsPages - Same as the base API operation, but uses a callback to automatically handle pagination of the API's response. ListObjectsWithContext - Same as the base API operation, but adds support for the Context pattern. This is helpful for controlling the canceling of in-flight requests. See the Go standard library context package for more information. This method also takes the request package's Option functional options as the variadic argument for modifying how the request will be made, or extracting information from the raw HTTP response. ListObjectsPagesWithContext - Same as ListObjectsPages, but adds support for the Context pattern. Similar to ListObjectsWithContext, this method also takes the request package's Option functional option types as the variadic argument. In addition to the API operations the SDK also includes several higher-level methods that abstract checking for and waiting for an AWS resource to be in a desired state. In this list we'll use WaitUntilBucketExists to demonstrate the different forms of waiters. WaitUntilBucketExists - Makes an API request to query an AWS service for a resource's state, and returns successfully when that state is reached. WaitUntilBucketExistsWithContext - Same as WaitUntilBucketExists, but adds support for the Context pattern.
In addition, these methods take the request package's WaiterOptions to configure the waiter and how the underlying request will be made by the SDK. The API method will document which error codes the service might return for the operation. These errors will also be available as const strings prefixed with "ErrCode" in the service client's package. If there are no errors listed in the API's SDK documentation you'll need to consult the AWS service's API documentation for the errors that could be returned. Pagination helper methods are suffixed with "Pages", and provide the functionality needed to round trip API page requests. Pagination methods take a callback function that will be called for each page of the API's response. Waiter helper methods provide the functionality to wait for an AWS resource state. These methods abstract the logic needed to check the state of an AWS resource, and wait until that resource is in a desired state. The waiter will block until the resource is in the state that is desired, an error occurs, or the waiter times out. If a resource times out the error code returned will be request.WaiterResourceNotReadyErrorCode. The example sketched below shows a complete working Go file which will upload a file to S3 and uses the Context pattern to implement timeout logic that will cancel the request if it takes too long. It highlights how to use sessions, create a service client, make a request, handle the error, and process the response.
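A condensed sketch along those lines, using placeholder bucket, key, and file names:

    package main

    import (
        "context"
        "log"
        "os"
        "time"

        "github.com/aws/aws-sdk-go/aws"
        "github.com/aws/aws-sdk-go/aws/session"
        "github.com/aws/aws-sdk-go/service/s3"
    )

    func main() {
        sess := session.Must(session.NewSession())
        svc := s3.New(sess)

        f, err := os.Open("file.txt")
        if err != nil {
            log.Fatal(err)
        }
        defer f.Close()

        // Cancel the request if it takes longer than 30 seconds.
        ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
        defer cancel()

        _, err = svc.PutObjectWithContext(ctx, &s3.PutObjectInput{
            Bucket: aws.String("my-bucket"),
            Key:    aws.String("my-key"),
            Body:   f,
        })
        if err != nil {
            log.Fatal(err)
        }
    }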
Package iris implements the highest realistic performance, easy to learn Go web framework. Iris provides a beautifully expressive and easy to use foundation for your next website, API, or distributed app. It offers low-level handlers compatible with `net/http`, a high-level MVC implementation, and handler dependency injection. It is easy to learn for new gophers and has advanced features for experienced ones; it goes as far as you dive into it! Source code and other details for the project are available on GitHub. The current version is 12.2.11. The only requirement is the Go Programming Language, at least version 1.22. The wiki, examples, middleware collection, and home page are linked from the repository.
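A minimal hello-world sketch, assuming the github.com/kataras/iris/v12 import path:

    package main

    import "github.com/kataras/iris/v12"

    func main() {
        app := iris.New()
        app.Get("/", func(ctx iris.Context) {
            ctx.HTML("<h1>Hello, world!</h1>")
        })
        app.Listen(":8080")
    }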
Package sarama is a pure Go client library for dealing with Apache Kafka (versions 0.8 and later). It includes a high-level API for easily producing and consuming messages, and a low-level API for controlling bytes on the wire when the high-level API is insufficient. Usage examples for the high-level APIs are provided inline with their full documentation. To produce messages, use either the AsyncProducer or the SyncProducer. The AsyncProducer accepts messages on a channel and produces them asynchronously in the background as efficiently as possible; it is preferred in most cases. The SyncProducer provides a method which will block until Kafka acknowledges the message as produced. This can be useful but comes with two caveats: it will generally be less efficient, and the actual durability guarantees depend on the configured value of `Producer.RequiredAcks`. There are configurations where a message acknowledged by the SyncProducer can still sometimes be lost. To consume messages, use Consumer or Consumer-Group API. For lower-level needs, the Broker and Request/Response objects permit precise control over each connection and message sent on the wire; the Client provides higher-level metadata management that is shared between the producers and the consumer. The Request/Response objects and properties are mostly undocumented, as they line up exactly with the protocol fields documented by Kafka at https://cwiki.apache.org/confluence/display/KAFKA/A+Guide+To+The+Kafka+Protocol Metrics are exposed through https://github.com/rcrowley/go-metrics library in a local registry. Broker related metrics: Note that we do not gather specific metrics for seed brokers but they are part of the "all brokers" metrics. Producer related metrics: Consumer related metrics:
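A minimal SyncProducer sketch, assuming the github.com/IBM/sarama import path (older releases lived at github.com/Shopify/sarama) and a local broker:

    package main

    import (
        "log"

        "github.com/IBM/sarama"
    )

    func main() {
        config := sarama.NewConfig()
        // The SyncProducer blocks until the broker acknowledges the message,
        // so successes must be returned.
        config.Producer.Return.Successes = true

        producer, err := sarama.NewSyncProducer([]string{"localhost:9092"}, config)
        if err != nil {
            log.Fatal(err)
        }
        defer producer.Close()

        partition, offset, err := producer.SendMessage(&sarama.ProducerMessage{
            Topic: "my-topic",
            Value: sarama.StringEncoder("hello world"),
        })
        if err != nil {
            log.Fatal(err)
        }
        log.Printf("stored at partition %d, offset %d", partition, offset)
    }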
Package dns implements a full featured interface to the Domain Name System. Both server- and client-side programming is supported. The package allows complete control over what is sent out to the DNS. The API follows the less-is-more principle, by presenting a small, clean interface. It supports (asynchronous) querying/replying, incoming/outgoing zone transfers, TSIG, EDNS0, dynamic updates, notifies and DNSSEC validation/signing. Note that domain names MUST be fully qualified before sending them, unqualified names in a message will result in a packing failure. Resource records are native types. They are not stored in wire format. Basic usage pattern for creating a new resource record: Or directly from a string: Or when the default origin (.) and TTL (3600) and class (IN) suit you: Or even: In the DNS, messages are exchanged; these messages contain resource records (sets). Use pattern for creating a message: Or when not certain if the domain name is fully qualified: The message m is now a message with the question section set to ask the MX records for the miek.nl. zone. The following is slightly more verbose, but more flexible: After creating a message it can be sent. Basic use pattern for synchronously querying the DNS at a server configured on 127.0.0.1 and port 53: Suppressing multiple outstanding queries (with the same question, type and class) is as easy as setting: More advanced options are available using a net.Dialer and the corresponding API. For example it is possible to set a timeout, or to specify a source IP address and port to use for the connection: If these "advanced" features are not needed, a simple UDP query can be sent, with: When this function returns you will get a DNS message. A DNS message consists of four sections. The question section: in.Question, the answer section: in.Answer, the authority section: in.Ns and the additional section: in.Extra. Each of these sections (except the Question section) contains a []RR. Basic use pattern for accessing the rdata of a TXT RR as the first RR in the Answer section: Both domain names and TXT character strings are converted to presentation form both when unpacked and when converted to strings. For TXT character strings, tabs, carriage returns and line feeds will be converted to \t, \r and \n respectively. Backslashes and quotation marks will be escaped. Bytes below 32 and above 127 will be converted to \DDD form. For domain names, in addition to the above rules brackets, periods, spaces, semicolons and the at symbol are escaped. DNSSEC (DNS Security Extension) adds a layer of security to the DNS. It uses public key cryptography to sign resource records. The public keys are stored in DNSKEY records and the signatures in RRSIG records. Requesting DNSSEC information for a zone is done by adding the DO (DNSSEC OK) bit to a request. Signature generation, signature verification and key generation are all supported. Dynamic updates reuse the DNS message format, but rename three of the sections. Question is Zone, Answer is Prerequisite, Authority is Update; only the Additional section is not renamed. See RFC 2136 for the gory details. You can set a rather complex set of rules for the existence or absence of certain resource records or names in a zone to specify if resource records should be added or removed. The table from RFC 2136 supplemented with the Go DNS functions shows which functions exist to specify the prerequisites. The prerequisite section can also be left empty.
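A minimal sketch of the MX query pattern described above, against a resolver on 127.0.0.1:

    package main

    import (
        "fmt"
        "log"

        "github.com/miekg/dns"
    )

    func main() {
        // Ask the resolver on 127.0.0.1:53 for the MX records of miek.nl.
        m := new(dns.Msg)
        m.SetQuestion(dns.Fqdn("miek.nl"), dns.TypeMX)

        c := new(dns.Client)
        in, _, err := c.Exchange(m, "127.0.0.1:53")
        if err != nil {
            log.Fatal(err)
        }
        for _, rr := range in.Answer {
            if mx, ok := rr.(*dns.MX); ok {
                fmt.Println(mx.Preference, mx.Mx)
            }
        }
    }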
If you have decided on the prerequisites you can tell what RRs should be added or deleted. The next table shows the options you have and what functions to call. A TSIG or transaction signature adds an HMAC TSIG record to each message sent. The supported algorithms include: HmacSHA1, HmacSHA256 and HmacSHA512. Basic use pattern when querying with a TSIG name "axfr." (note that these key names must be fully qualified - as they are domain names) and the base64 secret "so6ZGir4GPAqINNh9U5c3A==": If an incoming message contains a TSIG record it MUST be the last record in the additional section (RFC 2845 3.2). This means that you should make the call to SetTsig last, right before executing the query. If you make any changes to the RRset after calling SetTsig() the signature will be incorrect. When requesting a zone transfer with TSIG (almost all TSIG usage is when requesting zone transfers), this is the basic use pattern. In this example we request an AXFR for miek.nl. with the TSIG key named "axfr." and secret "so6ZGir4GPAqINNh9U5c3A==", using the server 176.58.119.54: You can now read the records from the transfer as they come in. Each envelope is checked with TSIG. If something is not correct an error is returned. A custom TSIG implementation can be used. This requires additional code to perform any session establishment and signature generation/verification. The client must be configured with an implementation of the TsigProvider interface: Basic use pattern for validating and replying to a message that has TSIG set. RFC 6895 sets aside a range of type codes for private use. This range is 65,280 - 65,534 (0xFF00 - 0xFFFE). When experimenting with new Resource Records these can be used, before requesting an official type code from IANA. See https://miek.nl/2014/september/21/idn-and-private-rr-in-go-dns/ for more information. EDNS0 is an extension mechanism for the DNS defined in RFC 2671 and updated by RFC 6891. It defines a new RR type, the OPT RR, which is then completely abused. Basic use pattern for creating an (empty) OPT RR: The rdata of an OPT RR consists of a slice of EDNS0 (RFC 6891) interfaces. Currently only a few have been standardized: EDNS0_NSID (RFC 5001) and EDNS0_SUBNET (RFC 7871). Note that these options may be combined in an OPT RR. Basic use pattern for a server to check if (and which) options are set: SIG(0), from RFC 2931, works like TSIG, except that SIG(0) uses public key cryptography instead of the shared secret approach in TSIG. Supported algorithms: ECDSAP256SHA256, ECDSAP384SHA384, RSASHA1, RSASHA256 and RSASHA512. Signing subsequent messages in multi-message sessions is not implemented.
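A minimal sketch of a TSIG-signed query, using the example key name and secret from the text:

    package main

    import (
        "log"
        "time"

        "github.com/miekg/dns"
    )

    func main() {
        c := new(dns.Client)
        c.TsigSecret = map[string]string{"axfr.": "so6ZGir4GPAqINNh9U5c3A=="}

        m := new(dns.Msg)
        m.SetQuestion("miek.nl.", dns.TypeMX)
        // SetTsig must be the last change made to the message before sending.
        m.SetTsig("axfr.", dns.HmacSHA256, 300, time.Now().Unix())

        if _, _, err := c.Exchange(m, "176.58.119.54:53"); err != nil {
            log.Fatal(err)
        }
    }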
Package goes provides an API to access Elasticsearch.
Package goquery implements features similar to jQuery, including the chainable syntax, to manipulate and query an HTML document. It brings a syntax and a set of features similar to jQuery to the Go language. It is based on Go's net/html package and the CSS Selector library cascadia. Since the net/html parser returns nodes, and not a full-featured DOM tree, jQuery's stateful manipulation functions (like height(), css(), detach()) have been left off. Also, because the net/html parser requires UTF-8 encoding, so does goquery: it is the caller's responsibility to ensure that the source document provides UTF-8 encoded HTML. See the repository's wiki for various options on how to do this. Syntax-wise, it is as close as possible to jQuery, with the same method names when possible, and that warm and fuzzy chainable interface. jQuery being the ultra-popular library that it is, it seemed better for a similar HTML-manipulating library to follow its API than to start anew (in the same spirit as Go's fmt package), even though some of its methods are less than intuitive (looking at you, index()...). It is hosted on GitHub, along with additional documentation in the README.md file: https://github.com/puerkitobio/goquery Please note that because of the net/html dependency, goquery requires Go 1.1+. The various methods are split into files based on the category of behavior. The three dots (...) indicate that various "overloads" are available. * array.go : array-like positional manipulation of the selection. * expand.go : methods that expand or augment the selection's set. * filter.go : filtering methods, that reduce the selection's set. * iteration.go : methods to loop over the selection's nodes. * manipulation.go : methods for modifying the document. * property.go : methods that inspect and get the node's properties values. * query.go : methods that query, or reflect, a node's identity. * traversal.go : methods to traverse the HTML document tree. * type.go : definition of the types exposed by goquery. * utilities.go : definition of helper functions (and not methods on a *Selection) that are not part of jQuery, but are useful to goquery. This example scrapes the reviews shown on the home page of metalsucks.net.
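A minimal sketch of that scrape, using the selectors from the project README (the site's real markup may have changed):

    package main

    import (
        "fmt"
        "log"
        "net/http"

        "github.com/PuerkitoBio/goquery"
    )

    func main() {
        res, err := http.Get("http://metalsucks.net")
        if err != nil {
            log.Fatal(err)
        }
        defer res.Body.Close()

        doc, err := goquery.NewDocumentFromReader(res.Body)
        if err != nil {
            log.Fatal(err)
        }

        doc.Find(".sidebar-reviews article .content-block").Each(func(i int, s *goquery.Selection) {
            band := s.Find("a").Text()
            title := s.Find("i").Text()
            fmt.Printf("Review %d: %s - %s\n", i, band, title)
        })
    }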
Package cloud is the root of the packages used to access Google Cloud Services. See https://pkg.go.dev/cloud.google.com/go for a full list of sub-modules. All clients in sub-packages are configurable via client options. These options are described here: https://pkg.go.dev/google.golang.org/api/option. Endpoint configuration is used to specify the URL to which requests are sent. It is used for services that support or require regional endpoints, as well as for other use cases such as testing against fake servers. For example, the Vertex AI service recommends that you configure the endpoint to the location with the features you want that is closest to your physical location or the location of your users. There is no global endpoint for Vertex AI. See Vertex AI - Locations for more details. The following example demonstrates configuring a Vertex AI client with a regional endpoint: All of the clients support authentication via Google Application Default Credentials, or by providing a JSON key file for a Service Account. See examples below. Google Application Default Credentials (ADC) is the recommended way to authorize and authenticate clients. For information on how to create and obtain Application Default Credentials, see https://cloud.google.com/docs/authentication/production. If you have your environment configured correctly you will not need to pass any extra information to the client libraries. Here is an example of a client using ADC to authenticate: You can use a file with credentials to authenticate and authorize, such as a JSON key file associated with a Google service account. Service Account keys can be created and downloaded from https://console.cloud.google.com/iam-admin/serviceaccounts. This example uses the Secret Manager client, but the same steps apply to all other client libraries in this package as well. Example: In some cases (for instance, you don't want to store secrets on disk), you can create credentials from in-memory JSON and use the WithCredentials option. This example uses the Secret Manager client, but the same steps apply to all other client libraries as well. Note that scopes can be found at https://developers.google.com/identity/protocols/oauth2/scopes, and are also provided in all auto-generated libraries: for example, cloud.google.com/go/secretmanager/apiv1 provides DefaultAuthScopes. Example: By default, non-streaming methods, like Create or Get, will have a default deadline applied to the context provided at call time, unless a context deadline is already set. Streaming methods have no default deadline and will run indefinitely. To set timeouts or arrange for cancellation, use context. Transient errors will be retried when correctness allows. Here is an example of setting a timeout for an RPC using context.WithTimeout: Here is an example of setting a timeout for an RPC using github.com/googleapis/gax-go/v2.WithTimeout: Here is an example of how to arrange for an RPC to be canceled using context.WithCancel: Do not attempt to control the initial connection (dialing) of a service by setting a timeout on the context passed to NewClient. Dialing is non-blocking, so timeouts would be ineffective and would only interfere with credential refreshing, which uses the same context. Regardless of which transport is used, request headers can be set in the same way using callctx.SetHeaders. Here is a generic example: ## Google-reserved headers There are some header keys that Google reserves for internal use that must not be overwritten.
The following header keys are broadly considered reserved and should not be conveyed by client library users unless instructed to do so: * `x-goog-api-client` * `x-goog-request-params` Be sure to check the individual package documentation for other service-specific reserved headers. For example, Storage supports a specific auditing header that is mentioned in that module's documentation. ## Google Cloud system parameters Google Cloud services respect system parameters that can be used to augment request and/or response behavior. For the most part, they are not needed when using one of the enclosed client libraries. However, those that may be necessary are made available via the callctx package. If not present there, consider opening an issue on that repo to request a new constant. Connection pooling differs in clients based on their transport. Cloud clients either rely on HTTP or gRPC transports to communicate with Google Cloud. Cloud clients that use HTTP rely on the underlying HTTP transport to cache connections for later re-use. These are cached to the http.MaxIdleConns and http.MaxIdleConnsPerHost settings in http.DefaultTransport by default. For gRPC clients, connection pooling is configurable. Users of Cloud Client Libraries may specify option.WithGRPCConnectionPool(n) as a client option to NewClient calls. This configures the underlying gRPC connections to be pooled and accessed in a round robin fashion. Minimal container images like Alpine lack CA certificates. This causes RPCs to appear to hang, because gRPC retries indefinitely. See https://github.com/googleapis/google-cloud-go/issues/928 for more information. For tips on debugging code that calls into our libraries, check out our Debugging Guide. For tips on how to write tests against code that calls into our libraries, check out our Testing Guide. Most of the errors returned by the generated clients are wrapped in a github.com/googleapis/gax-go/v2/apierror.APIError and can be further unwrapped into a google.golang.org/grpc/status.Status or google.golang.org/api/googleapi.Error depending on the transport used to make the call (gRPC or REST). Converting your errors to these types can be a useful way to get more information about what went wrong while debugging. APIError gives access to specific details in the error. The transport-specific errors can still be unwrapped using the APIError. If the gRPC transport was used, the google.golang.org/grpc/status.Status can still be parsed using the google.golang.org/grpc/status.FromError function. Semver is used to communicate stability of the sub-modules of this package. Note, some stable sub-modules do contain packages, and sometimes features, that are considered unstable. If something is unstable it will be explicitly labeled as such. Example of package docs in an unstable package: Clients that contain alpha and beta in their import path may change or go away without notice. Clients marked stable will maintain compatibility with future versions for as long as we can reasonably sustain. Incompatible changes might be made in some situations, including:
Package api is the root of the packages used to access Google Cloud Services. See https://godoc.org/google.golang.org/api for a full list of sub-packages. Within api there exist numerous clients which connect to Google APIs, and various utility packages. All clients in sub-packages are configurable via client options. These options are described here: https://godoc.org/google.golang.org/api/option. All the clients in sub-packages support authentication via Google Application Default Credentials (see https://cloud.google.com/docs/authentication/production), or by providing a JSON key file for a Service Account. See the authentication examples in https://godoc.org/google.golang.org/api/transport for more details. Due to the auto-generated nature of this collection of libraries, complete APIs or specific versions can appear or go away without notice. As a result, you should always locally vendor any API(s) that your code relies upon. Google APIs follow semver as specified by https://cloud.google.com/apis/design/versioning. The code generator and the code it produces - the libraries in the google.golang.org/api/... subpackages - are beta. Note that versioning and stability are not communicated through Go modules. Go modules are used only for dependency management. Many parameters are specified using ints. However, underlying APIs might operate on a finer granularity, expecting int64, int32, uint64, or uint32, all of which have different maximum values. Consequently, specifying an int parameter in one of these clients may result in an error from the API because the value is too large. To see the exact type of int that the API expects, you can inspect the API's discovery doc. A global catalogue pointing to the discovery doc of APIs can be found at https://www.googleapis.com/discovery/v1/apis. The ForceSendFields field can be found on all Request/Response structs in the generated clients. All of these types have the JSON `omitempty` field tag present on their fields. This means if a type is set to its default value it will not be marshalled. Sometimes you may actually want to send a default value, for instance sending an int of `0`. In this case you can override the `omitempty` feature by adding the field name to the `ForceSendFields` slice. See docs on any struct for more details. This may be used to include empty fields in Patch requests. The NullFields field can also be found on all Request/Response structs in the generated clients. It can be used to send JSON null values for the listed fields. By default, fields with empty values are omitted from API requests because of the presence of the `omitempty` field tag on all fields. However, any field with an empty value appearing in NullFields will be sent to the server as null. It is an error if a field in this list has a non-empty value. This may be used to include null fields in Patch requests. An error returned by a client's Do method may be cast to a *googleapi.Error or unwrapped to an *apierror.APIError. The https://pkg.go.dev/google.golang.org/api/googleapi#Error type is useful for getting the HTTP status code: The https://pkg.go.dev/github.com/googleapis/gax-go/v2/apierror#APIError type is useful for inspecting structured details of the underlying API response, such as the reason for the error and the error domain, which is typically the registered service name of the tool or product that generated the error: If an API call returns an Operation, that means it could take some time to complete the work initiated by the API call.
Applications that are interested in the end result of the operation they initiated should wait until the Operation.Done field indicates it is finished. To do this, use the service's Operation client, and a loop, like so:
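A schematic sketch of that loop. The service, method, and field names below are hypothetical placeholders; the real names come from the generated package for the API you are calling:

    // svc is a previously constructed generated client; SomeLongRunningCall
    // and svc.Operations.Get stand in for the equivalents in your API's
    // generated package.
    op, err := svc.Projects.SomeLongRunningCall(name).Do()
    if err != nil {
        // Errors from Do can be inspected via *googleapi.Error, as noted
        // above, to obtain the HTTP status code.
        var gerr *googleapi.Error
        if errors.As(err, &gerr) {
            log.Fatalf("call failed with HTTP status %d: %v", gerr.Code, err)
        }
        log.Fatal(err)
    }
    for !op.Done {
        time.Sleep(5 * time.Second)
        op, err = svc.Operations.Get(op.Name).Do()
        if err != nil {
            log.Fatal(err)
        }
    }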
Package storage provides an easy way to work with Google Cloud Storage. Google Cloud Storage stores data in named objects, which are grouped into buckets. More information about Google Cloud Storage is available at https://cloud.google.com/storage/docs. See https://pkg.go.dev/cloud.google.com/go for authentication, timeouts, connection pooling and similar aspects of this package. To start working with this package, create a Client: The client will use your default application credentials. Clients should be reused instead of created as needed. The methods of Client are safe for concurrent use by multiple goroutines. You may configure the client by passing in options from the google.golang.org/api/option package. You may also use options defined in this package, such as WithJSONReads. If you only wish to access public data, you can create an unauthenticated client with To use an emulator with this library, you can set the STORAGE_EMULATOR_HOST environment variable to the address at which your emulator is running. This will send requests to that address instead of to Cloud Storage. You can then create and use a client as usual: Please note that there is no official emulator for Cloud Storage. A Google Cloud Storage bucket is a collection of objects. To work with a bucket, make a bucket handle: A handle is a reference to a bucket. You can have a handle even if the bucket doesn't exist yet. To create a bucket in Google Cloud Storage, call BucketHandle.Create: Note that although buckets are associated with projects, bucket names are global across all projects. Each bucket has associated metadata, represented in this package by BucketAttrs. The third argument to BucketHandle.Create allows you to set the initial BucketAttrs of a bucket. To retrieve a bucket's attributes, use BucketHandle.Attrs: An object holds arbitrary data as a sequence of bytes, like a file. You refer to objects using a handle, just as with buckets, but unlike buckets you don't explicitly create an object. Instead, the first time you write to an object it will be created. You can use the standard Go io.Reader and io.Writer interfaces to read and write object data: Objects also have attributes, which you can fetch with ObjectHandle.Attrs: Listing objects in a bucket is done with the BucketHandle.Objects method: Objects are listed lexicographically by name. To filter objects lexicographically, Query.StartOffset and/or Query.EndOffset can be used: If only a subset of object attributes is needed when listing, specifying this subset using Query.SetAttrSelection may speed up the listing process: Both objects and buckets have ACLs (Access Control Lists). An ACL is a list of ACLRules, each of which specifies the role of a user, group or project. ACLs are suitable for fine-grained control, but you may prefer using IAM to control access at the project level (see the Cloud Storage IAM docs). To list the ACLs of a bucket or object, obtain an ACLHandle and call ACLHandle.List: You can also set and delete ACLs. Every object has a generation and a metageneration. The generation changes whenever the content changes, and the metageneration changes whenever the metadata changes. Conditions let you check these values before an operation; the operation only executes if the conditions match. You can use conditions to prevent race conditions in read-modify-write operations. For example, say you've read an object's metadata into objAttrs. Now you want to write to that object, but only if its contents haven't changed since you read it.
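Here is a minimal sketch of how to express that precondition (assuming obj is the ObjectHandle and objAttrs holds the attributes read earlier):

    import (
        "context"

        "cloud.google.com/go/storage"
    )

    // overwriteIfUnchanged writes data to obj only if the object's generation
    // (its content version) is still the one recorded in objAttrs.
    func overwriteIfUnchanged(ctx context.Context, obj *storage.ObjectHandle, objAttrs *storage.ObjectAttrs, data []byte) error {
        w := obj.If(storage.Conditions{GenerationMatch: objAttrs.Generation}).NewWriter(ctx)
        if _, err := w.Write(data); err != nil {
            return err
        }
        // Close reports a 412 Precondition Failed error if the object changed
        // since its attributes were read.
        return w.Close()
    }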
You can obtain a URL that lets anyone read or write an object for a limited time. Signing a URL requires credentials authorized to sign a URL. To use the same authentication that was used when instantiating the Storage client, use BucketHandle.SignedURL. You can also sign a URL without creating a client. See the documentation of SignedURL for details. A signed post policy is a type of signed request that allows uploads through HTML forms directly to Cloud Storage with temporary permission. Conditions can be applied to restrict how the HTML form is used and exercised by a user. For more information, please see the XML POST Object docs as well as the documentation of BucketHandle.GenerateSignedPostPolicyV4. If the GoogleAccessID and PrivateKey option fields are not provided, they will be automatically detected by BucketHandle.SignedURL and BucketHandle.GenerateSignedPostPolicyV4 if any of the following are true: Detecting GoogleAccessID may not be possible if you are authenticated using a token source or using option.WithHTTPClient. In this case, you can provide a service account email for GoogleAccessID and the client will attempt to sign the URL or Post Policy using that service account. To generate the signature, you must have: Errors returned by this client are often of the type googleapi.Error. These errors can be introspected for more information by using errors.As with the richer googleapi.Error type. For example: Methods in this package may retry calls that fail with transient errors. Retrying continues indefinitely unless the controlling context is canceled, the client is closed, or a non-transient error is received. To stop retries from continuing, use context timeouts or cancellation. The retry strategy in this library follows best practices for Cloud Storage. By default, operations are retried only if they are idempotent, and exponential backoff with jitter is employed. In addition, errors are only retried if they are defined as transient by the service. See the Cloud Storage retry docs for more information. Users can configure non-default retry behavior for a single library call (using BucketHandle.Retryer and ObjectHandle.Retryer) or for all calls made by a client (using Client.SetRetry). For example: You can add custom headers to any API call made by this package by using callctx.SetHeaders on the context which is passed to the method. For example, to add a custom audit logging header: This package includes support for the Cloud Storage gRPC API. The implementation uses gRPC rather than the default JSON & XML APIs to make requests to Cloud Storage. The Go Storage gRPC client is generally available. The Notifications, Service Account HMAC and GetServiceAccount RPCs are not supported through the gRPC client. To create a client which will use gRPC, use the alternate constructor: Using the gRPC API inside GCP with a bucket in the same region can allow for Direct Connectivity (enabling requests to skip some proxy steps and reducing response latency). A warning is emitted if gRPC is not used within GCP to warn that Direct Connectivity could not be initialized. Direct Connectivity is not required to access the gRPC API. Dependencies for the gRPC API may slightly increase the size of binaries for applications depending on this package. If you are not using gRPC, you can use the build tag `disable_grpc_modules` to opt out of these dependencies and reduce the binary size.
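A minimal sketch combining the per-call retry configuration and the custom header injection described above; the bucket, object, and header names are placeholders:

    import (
        "context"

        "cloud.google.com/go/storage"
        "github.com/googleapis/gax-go/v2/callctx"
    )

    func withCustomRetriesAndHeaders(ctx context.Context, client *storage.Client) error {
        // Attach a custom audit header to calls made with this context.
        ctx = callctx.SetHeaders(ctx, "x-custom-audit-key", "value")

        // Retry all operations on this object, even non-idempotent ones.
        obj := client.Bucket("my-bucket").Object("my-object").Retryer(
            storage.WithPolicy(storage.RetryAlways),
        )
        _, err := obj.Attrs(ctx)
        return err
    }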
The gRPC client emits metrics by default and will export the gRPC telemetry discussed in gRFC/66 and gRFC/78 to Google Cloud Monitoring. The metrics are accessible through the Cloud Monitoring API and you incur no additional cost for publishing them. Google Cloud Support can use this information to more quickly diagnose problems related to GCS and gRPC. Sending this data does not incur any billing charges, and requires minimal CPU (a single RPC every minute) and memory (a few KiB to batch the telemetry). To access the metrics you can view them through the Cloud Monitoring metric explorer with the prefix `storage.googleapis.com/client`. Metrics are emitted every minute. You can disable metrics when creating a new gRPC client by using WithDisabledClientMetrics, as in the example below. The metrics exporter uses the Cloud Monitoring API, which determines the project ID and credentials as follows: * Project ID is determined using the OTel Resource Detector for the environment; otherwise it falls back to the project provided by google.FindCredentials. * Credentials are determined using Application Default Credentials. The principal must have the `roles/monitoring.metricWriter` role granted. If not, a warning will be logged; subsequent warnings are silenced to prevent noisy logs. Certain control plane and long-running operations for Cloud Storage (including Folder and Managed Folder operations) are supported via the autogenerated Storage Control client, which is available as a subpackage in this module. See package docs at cloud.google.com/go/storage/control/apiv2 or reference the Storage Control API docs.
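A minimal sketch of creating the gRPC-based client with client metrics disabled, as mentioned above:

    import (
        "context"

        "cloud.google.com/go/storage"
    )

    func newGRPCClient(ctx context.Context) (*storage.Client, error) {
        // gRPC-based client with the default client metrics export disabled.
        return storage.NewGRPCClient(ctx, storage.WithDisabledClientMetrics())
    }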
Package metadata provides access to Google Compute Engine (GCE) metadata and API service accounts. This package is a wrapper around the GCE metadata service, as documented at https://cloud.google.com/compute/docs/metadata/overview.
Package otel provides global access to the OpenTelemetry API. The subpackages of the otel package provide an implementation of the OpenTelemetry API. The provided API is used to instrument code and measure data about that code's performance and operation. The measured data, by default, is not processed or transmitted anywhere. An implementation of the OpenTelemetry SDK, like the default SDK implementation (go.opentelemetry.io/otel/sdk), and associated exporters are used to process and transport this data. To read the getting started guide, see https://opentelemetry.io/docs/languages/go/getting-started/. To read more about tracing, see go.opentelemetry.io/otel/trace. To read more about metrics, see go.opentelemetry.io/otel/metric. To read more about logs, see go.opentelemetry.io/otel/log. To read more about propagation, see go.opentelemetry.io/otel/propagation and go.opentelemetry.io/otel/baggage.
Package trace provides an implementation of the tracing part of the OpenTelemetry API. To participate in distributed traces a Span needs to be created for the operation being performed as part of a traced workflow. In its simplest form: A Tracer is unique to the instrumentation and is used to create Spans. Instrumentation should be designed to accept a TracerProvider from which it can create its own unique Tracer. Alternatively, the registered global TracerProvider from the go.opentelemetry.io/otel package can be used as a default. This package does not conform to the standard Go versioning policy; all of its interfaces may have methods added to them without a package major version bump. This non-standard API evolution could surprise an uninformed implementation author. They could unknowingly build their implementation in a way that would result in a runtime panic for their users that update to the new API. The API is designed to help inform an instrumentation author about this non-standard API evolution. It requires them to choose a default behavior for unimplemented interface methods. There are three behavior choices they can make: All interfaces in this API embed a corresponding interface from go.opentelemetry.io/otel/trace/embedded. If an author wants the default behavior of their implementations to be a compilation failure, signaling to their users they need to update to the latest version of that implementation, they need to embed the corresponding interface from go.opentelemetry.io/otel/trace/embedded in their implementation. For example, if an author wants the default behavior of their implementations to panic, they can embed the API interface directly. This option is not recommended. It will lead to publishing packages that contain runtime panics when users update to newer versions of go.opentelemetry.io/otel/trace, which may be done with a transitive dependency. Finally, an author can embed another implementation in theirs. The embedded implementation will be used for methods not defined by the author. For example, an author who wants to default to silently dropping the call can use go.opentelemetry.io/otel/trace/noop: It is strongly recommended that authors only embed go.opentelemetry.io/otel/trace/noop if they choose this default behavior. That implementation is the only one OpenTelemetry authors can guarantee will fully implement all the API interfaces when a user updates their API.
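A minimal sketch of creating a Span in its simplest form, using the global TracerProvider registered with the go.opentelemetry.io/otel package:

    import (
        "context"

        "go.opentelemetry.io/otel"
    )

    func doWork(ctx context.Context) {
        // Obtain a Tracer from the globally registered TracerProvider and
        // create a Span for this operation.
        tracer := otel.Tracer("example.com/mylib")
        ctx, span := tracer.Start(ctx, "doWork")
        defer span.End()

        _ = ctx // pass ctx on to downstream calls so they join the trace
    }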
Package datastore provides a client for Google Cloud Datastore. See https://godoc.org/cloud.google.com/go for authentication, timeouts, connection pooling and similar aspects of this package. Entities are the unit of storage and are associated with a key. A key consists of an optional parent key, a string application ID, a string kind (also known as an entity type), and either a StringID or an IntID. A StringID is also known as an entity name or key name. It is valid to create a key with a zero StringID and a zero IntID; this is called an incomplete key, and does not refer to any saved entity. Putting an entity into the datastore under an incomplete key will cause a unique key to be generated for that entity, with a non-zero IntID. An entity's contents are a mapping from case-sensitive field names to values. Valid value types are: Slices of structs are valid, as are structs that contain slices. The Get and Put functions load and save an entity's contents. An entity's contents are typically represented by a struct pointer. Example code: GetMulti, PutMulti and DeleteMulti are batch versions of the Get, Put and Delete functions. They take a []*Key instead of a *Key, and may return a datastore.MultiError when encountering partial failure. Mutate generalizes PutMulti and DeleteMulti to a sequence of any Datastore mutations. It takes a series of mutations created with NewInsert, NewUpdate, NewUpsert and NewDelete and applies them atomically. An entity's contents can be represented by a variety of types. These are typically struct pointers, but can also be any type that implements the PropertyLoadSaver interface. If using a struct pointer, you do not have to explicitly implement the PropertyLoadSaver interface; the datastore will automatically convert via reflection. If a struct pointer does implement PropertyLoadSaver then those methods will be used in preference to the default behavior for struct pointers. Struct pointers are more strongly typed and are easier to use; PropertyLoadSavers are more flexible. The actual types passed do not have to match between Get and Put calls or even across different calls to datastore. It is valid to put a *PropertyList and get that same entity as a *myStruct, or put a *myStruct0 and get a *myStruct1. Conceptually, any entity is saved as a sequence of properties, and is loaded into the destination value on a property-by-property basis. When loading into a struct pointer, an entity that cannot be completely represented (such as a missing field) will result in an ErrFieldMismatch error but it is up to the caller whether this error is fatal, recoverable or ignorable. By default, for struct pointers, all properties are potentially indexed, and the property name is the same as the field name (and hence must start with an upper case letter). Fields may have a `datastore:"name,options"` tag. The tag name is the property name, which must be one or more valid Go identifiers joined by ".", but may start with a lower case letter. An empty tag name means to just use the field name. A "-" tag name means that the datastore will ignore that field. The only valid options are "omitempty", "noindex" and "flatten". If the options include "omitempty" and the value of the field is a zero value, then the field will be omitted on Save. Zero values are best defined in the golang spec (https://golang.org/ref/spec#The_zero_value). Struct field values will never be empty, except for nil pointers. If options include "noindex" then the field will not be indexed. 
All fields are indexed by default. Strings or byte slices longer than 1500 bytes cannot be indexed; fields used to store long strings and byte slices must be tagged with "noindex" or they will cause Put operations to fail. For a nested struct field, the options may also include "flatten". This indicates that the immediate fields and any nested substruct fields of the nested struct should be flattened. See below for examples. To use multiple options together, separate them by a comma. The order does not matter. If the options is "" then the comma may be omitted. Example code: A field of slice type corresponds to a Datastore array property, except for []byte, which corresponds to a Datastore blob. Zero-length slice fields are not saved. Slice fields of length 1 or greater are saved as Datastore arrays. When a zero-length Datastore array is loaded into a slice field, the slice field remains unchanged. If a non-array value is loaded into a slice field, the result will be a slice with one element, containing the value. Loading a Datastore Null into a basic type (int, float, etc.) results in a zero value. Loading a Null into a slice of basic type results in a slice of size 1 containing the zero value. Loading a Null into a pointer field results in nil. Loading a Null into a field of struct type is an error. A struct field can be a pointer to a signed integer, floating-point number, string or bool. Putting a non-nil pointer will store its dereferenced value. Putting a nil pointer will store a Datastore Null property, unless the field is marked omitempty, in which case no property will be stored. Loading a Null into a pointer field sets the pointer to nil. Loading any other value allocates new storage with the value, and sets the field to point to it. If the struct contains a *datastore.Key field tagged with the name "__key__", its value will be ignored on Put. When reading the Entity back into the Go struct, the field will be populated with the *datastore.Key value used to query for the Entity. Example code: If the struct pointed to contains other structs, then the nested or embedded structs are themselves saved as Entity values. For example, given these definitions: then an Outer would have one property, Inner, encoded as an Entity value. If an outer struct is tagged "noindex" then all of its implicit flattened fields are effectively "noindex". If the Inner struct contains a *Key field with the name "__key__", like so: then the value of K will be used as the Key for Inner, represented as an Entity value in datastore. If any nested struct fields should be flattened, instead of encoded as Entity values, the nested struct field should be tagged with the "flatten" option. For example, given the following: an Outer's properties would be equivalent to those of: Note that the "flatten" option cannot be used for Entity value fields. The server will reject any dotted field names for an Entity value. An entity's contents can also be represented by any type that implements the PropertyLoadSaver interface. This type may be a struct pointer, but it does not have to be. The datastore package will call Load when getting the entity's contents, and Save when putting the entity's contents. Possible uses include deriving non-stored fields, verifying fields, or indexing a field only if its value is positive. Example code: The *PropertyList type implements PropertyLoadSaver, and can therefore hold an arbitrary entity's contents. 
If a type implements the PropertyLoadSaver interface, it may also want to implement the KeyLoader interface. The KeyLoader interface exists to allow implementations of PropertyLoadSaver to also load an Entity's Key into the Go type. This type may be a struct pointer, but it does not have to be. The datastore package will call LoadKey when getting the entity's contents, after calling Load. Example code: To load a Key into a struct which does not implement the PropertyLoadSaver interface, see the "Key Field" section above. Queries retrieve entities based on their properties or key's ancestry. Running a query yields an iterator of results: either keys or (key, entity) pairs. Queries are re-usable and it is safe to call Query.Run from concurrent goroutines. Iterators are not safe for concurrent use. Queries are immutable, and are either created by calling NewQuery, or derived from an existing query by calling a method like Filter or Order that returns a new query value. A query is typically constructed by calling NewQuery followed by a chain of zero or more such methods. These methods are: Example code: Client.RunInTransaction runs a function in a transaction. Example code: Pass the ReadOnly option to RunInTransaction if your transaction is used only for Get, GetMulti or queries. Read-only transactions are more efficient. This package supports the Cloud Datastore emulator, which is useful for testing and development. Environment variables are used to indicate that datastore traffic should be directed to the emulator instead of the production Datastore service. To install and set up the emulator and its environment variables, see the documentation at https://cloud.google.com/datastore/docs/tools/datastore-emulator.
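A hedged sketch of the query and read-only transaction patterns described above, reusing the hypothetical Task type from the earlier sketch and assuming an existing client and key:

	package example

	import (
		"context"
		"fmt"

		"cloud.google.com/go/datastore"
		"google.golang.org/api/iterator"
	)

	// Task mirrors the hypothetical type from the earlier sketch.
	type Task struct {
		Done        bool
		Priority    int
		Description string `datastore:",noindex"`
	}

	func listOpenTasks(ctx context.Context, client *datastore.Client, taskKey *datastore.Key) error {
		// Queries are immutable; each chained method returns a new query value.
		q := datastore.NewQuery("Task").
			Filter("Done =", false).
			Order("-Priority").
			Limit(10)

		it := client.Run(ctx, q)
		for {
			var t Task
			key, err := it.Next(&t)
			if err == iterator.Done {
				break
			}
			if err != nil {
				return err
			}
			fmt.Println(key, t.Description)
		}

		// A transaction that only reads can be marked ReadOnly, which is more
		// efficient than a read-write transaction.
		_, err := client.RunInTransaction(ctx, func(tx *datastore.Transaction) error {
			var t Task
			return tx.Get(taskKey, &t)
		}, datastore.ReadOnly)
		return err
	}
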
Package bigquery provides a client for the BigQuery service. Note: This package is in beta. Some backwards-incompatible changes may occur. The following assumes a basic familiarity with BigQuery concepts. See https://cloud.google.com/bigquery/docs. See https://godoc.org/cloud.google.com/go for authentication, timeouts, connection pooling and similar aspects of this package. To start working with this package, create a client: To query existing tables, create a Query and call its Read method: Then iterate through the resulting rows. You can store a row using anything that implements the ValueLoader interface, or with a slice or map of bigquery.Value. A slice is simplest: You can also use a struct whose exported fields match the query: You can also start the query running and get the results later. Create the query as above, but call Run instead of Read. This returns a Job, which represents an asynchronous operation. Get the job's ID, a printable string. You can save this string to retrieve the results at a later time, even in another process. To retrieve the job's results from the ID, first look up the Job: Use the Job.Read method to obtain an iterator, and loop over the rows. Query.Read is just a convenience method that combines Query.Run and Job.Read. You can refer to datasets in the client's project with the Dataset method, and in other projects with the DatasetInProject method: These methods create references to datasets, not the datasets themselves. You can have a dataset reference even if the dataset doesn't exist yet. Use Dataset.Create to create a dataset from a reference: You can refer to tables with Dataset.Table. Like bigquery.Dataset, bigquery.Table is a reference to an object in BigQuery that may or may not exist. You can create, delete and update the metadata of tables with methods on Table. For instance, you could create a temporary table with: We'll see how to create a table with a schema in the next section. There are two ways to construct schemas with this package. You can build a schema by hand, like so: Or you can infer the schema from a struct: Struct inference supports tags like those of the encoding/json package, so you can change names, ignore fields, or mark a field as nullable (non-required). Fields declared as one of the Null types (NullInt64, NullFloat64, NullString, NullBool, NullTimestamp, NullDate, NullTime and NullDateTime) are automatically inferred as nullable, so the "nullable" tag is only needed for []byte, *big.Rat and pointer-to-struct fields. Having constructed a schema, you can create a table with it like so: You can copy one or more tables to another table. Begin by constructing a Copier describing the copy. Then set any desired copy options, and finally call Run to get a Job: You can chain the call to Run if you don't want to set options: You can wait for your job to complete: Job.Wait polls with exponential backoff. You can also poll yourself, if you wish: There are two ways to populate a table with this package: load the data from a Google Cloud Storage object, or upload rows directly from your program. For loading, first create a GCSReference, configuring it if desired. Then make a Loader, optionally configure it as well, and call its Run method. To upload, first define a type that implements the ValueSaver interface, which has a single method named Save. Then create an Uploader, and call its Put method with a slice of values. You can also upload a struct that doesn't implement ValueSaver. 
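A hedged sketch tying together the query and upload paths described above; the project, dataset, table and Item row type are placeholders, and the Uploader is the type described in this beta API:

	package main

	import (
		"context"
		"fmt"
		"log"

		"cloud.google.com/go/bigquery"
		"google.golang.org/api/iterator"
	)

	// Item is a placeholder row type; a schema is inferred from its exported fields.
	type Item struct {
		Name string
		Num  int
	}

	func main() {
		ctx := context.Background()
		client, err := bigquery.NewClient(ctx, "my-project") // placeholder project ID
		if err != nil {
			log.Fatal(err)
		}
		defer client.Close()

		// Query an existing table (the table and columns are placeholders)
		// and iterate over the rows. Read combines Run and Job.Read.
		q := client.Query("SELECT name, num FROM `my-project.mydataset.mytable` LIMIT 10")
		it, err := q.Read(ctx)
		if err != nil {
			log.Fatal(err)
		}
		for {
			var row []bigquery.Value // a slice is the simplest row destination
			err := it.Next(&row)
			if err == iterator.Done {
				break
			}
			if err != nil {
				log.Fatal(err)
			}
			fmt.Println(row)
		}

		// Upload rows directly from the program. Plain structs are accepted;
		// the schema is inferred from Item's fields.
		u := client.Dataset("mydataset").Table("mytable").Uploader()
		items := []*Item{{Name: "a", Num: 1}, {Name: "b", Num: 2}}
		if err := u.Put(ctx, items); err != nil {
			log.Fatal(err)
		}
	}
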
Use the StructSaver type to specify the schema and insert ID by hand, or just supply the struct or struct pointer directly and the schema will be inferred: If you've been following so far, extracting data from a BigQuery table into a Google Cloud Storage object will feel familiar. First create an Extractor, then optionally configure it, and lastly call its Run method. Errors returned by this client are often of the type googleapi.Error (https://godoc.org/google.golang.org/api/googleapi#Error). These errors can be introspected for more information by type asserting to the richer *googleapi.Error type. For example:
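A hedged sketch of inspecting such an error; the isAlreadyExists helper and the 409 status check are illustrative, not part of the package:

	package example

	import (
		"errors"

		"google.golang.org/api/googleapi"
	)

	// isAlreadyExists is a hypothetical helper: it reports whether err wraps a
	// *googleapi.Error carrying HTTP status 409 (Conflict), as returned when,
	// for example, creating a dataset or table that already exists.
	func isAlreadyExists(err error) bool {
		var e *googleapi.Error
		if errors.As(err, &e) {
			return e.Code == 409
		}
		return false
	}
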
Package metric provides the OpenTelemetry API used to measure metrics about source code operation. This API is separate from its implementation so the instrumentation built from it is reusable. See go.opentelemetry.io/otel/sdk/metric for the official OpenTelemetry implementation of this API. All measurements made with this package are made via instruments. These instruments are created by a Meter which itself is created by a MeterProvider. Applications need to accept a MeterProvider implementation as a starting point when instrumenting. This can be done directly, or by using the OpenTelemetry global MeterProvider via GetMeterProvider. Using an appropriately named Meter from the accepted MeterProvider, instrumentation can then be built from the Meter's instruments. Each instrument is designed to make measurements of a particular type. Broadly, all instruments fall into two overlapping logical categories: asynchronous or synchronous, and int64 or float64. All synchronous instruments (Int64Counter, Int64UpDownCounter, Int64Histogram, Float64Counter, Float64UpDownCounter, and Float64Histogram) are used to measure the operation and performance of source code during the source code execution. These instruments only make measurements when the source code they instrument is run. All asynchronous instruments (Int64ObservableCounter, Int64ObservableUpDownCounter, Int64ObservableGauge, Float64ObservableCounter, Float64ObservableUpDownCounter, and Float64ObservableGauge) are used to measure metrics outside of the execution of source code. They are said to make "observations" via a callback function called once every measurement collection cycle. Each instrument is also grouped by the value type it measures, either int64 or float64. The value being measured will dictate which instrument in these categories to use. Outside of these two broad categories, instruments are described by the function they are designed to serve. All Counters (Int64Counter, Float64Counter, Int64ObservableCounter, and Float64ObservableCounter) are designed to measure values that never decrease in value, but instead only incrementally increase in value. UpDownCounters (Int64UpDownCounter, Float64UpDownCounter, Int64ObservableUpDownCounter, and Float64ObservableUpDownCounter), on the other hand, are designed to measure values that can increase and decrease. When more information needs to be conveyed about all the synchronous measurements made during a collection cycle, a Histogram (Int64Histogram and Float64Histogram) should be used. Finally, when just the most recent measurement needs to be conveyed about an asynchronous measurement, a Gauge (Int64ObservableGauge and Float64ObservableGauge) should be used. See the OpenTelemetry documentation for more information about instruments and their intended use. OpenTelemetry defines an instrument name syntax that restricts what instrument names are allowed. Instrument names should ... To ensure compatibility with observability platforms, all instruments created need to conform to this syntax. Not all implementations of the API will validate these names; it is the caller's responsibility to ensure compliance. Measurements are made by recording values and information about the values with an instrument. How these measurements are recorded depends on the instrument. Measurements for synchronous instruments (Int64Counter, Int64UpDownCounter, Int64Histogram, Float64Counter, Float64UpDownCounter, and Float64Histogram) are recorded using the instrument methods directly.
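A hedged sketch of obtaining a Meter and recording with a synchronous instrument; the meter name, instrument name, unit and description are illustrative:

	package main

	import (
		"context"
		"log"

		"go.opentelemetry.io/otel"
		"go.opentelemetry.io/otel/metric"
	)

	func main() {
		// The global MeterProvider is used here for brevity; instrumentation
		// should normally accept a MeterProvider from the caller.
		meter := otel.GetMeterProvider().Meter("example.com/hypothetical/instrumentation")

		// A synchronous instrument: measurements are recorded directly with Add.
		counter, err := meter.Int64Counter(
			"requests.handled",
			metric.WithDescription("Number of requests handled."),
			metric.WithUnit("{request}"),
		)
		if err != nil {
			log.Fatal(err)
		}
		counter.Add(context.Background(), 1)
	}
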
All counter instruments have an Add method that is used to measure an increment value, and all histogram instruments have a Record method to measure a data point. Asynchronous instruments (Int64ObservableCounter, Int64ObservableUpDownCounter, Int64ObservableGauge, Float64ObservableCounter, Float64ObservableUpDownCounter, and Float64ObservableGauge) record measurements within a callback function. The callback is registered with the Meter, which ensures the callback is called once per collection cycle. A callback can be registered in two ways: during the instrument's creation using an option, or later using the RegisterCallback method of the Meter that created the instrument. If the following criteria are met, an option (WithInt64Callback or WithFloat64Callback) can be used during the asynchronous instrument's creation to register a callback (Int64Callback or Float64Callback, respectively): If the criteria are not met, use the RegisterCallback method of the Meter that created the instrument to register a Callback. This package does not conform to the standard Go versioning policy; all of its interfaces may have methods added to them without a package major version bump. This non-standard API evolution could surprise an uninformed implementation author. They could unknowingly build their implementation in a way that would result in a runtime panic for their users who update to the new API. The API is designed to help inform an instrumentation author about this non-standard API evolution. It requires them to choose a default behavior for unimplemented interface methods. There are three behavior choices they can make: All interfaces in this API embed a corresponding interface from go.opentelemetry.io/otel/metric/embedded. If an author wants the default behavior of their implementations to be a compilation failure, signaling to their users they need to update to the latest version of that implementation, they need to embed the corresponding interface from go.opentelemetry.io/otel/metric/embedded in their implementation. If an author wants the default behavior of their implementations to be a panic, they need to embed the API interface directly. This is not a recommended behavior as it could lead to publishing packages that contain runtime panics when users update other packages that use newer versions of go.opentelemetry.io/otel/metric. Finally, an author can embed another implementation in theirs. The embedded implementation will be used for methods not defined by the author. For example, an author who wants to default to silently dropping the call can use go.opentelemetry.io/otel/metric/noop (see the sketch after this passage). It is strongly recommended that authors only embed go.opentelemetry.io/otel/metric/noop if they choose this default behavior. That implementation is the only one OpenTelemetry authors can guarantee will fully implement all the API interfaces when a user updates their API.
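A hedged sketch of both patterns just described: registering a callback at instrument creation, and embedding the noop implementation so unimplemented methods silently drop calls; all names here are illustrative:

	package main

	import (
		"context"
		"log"
		"runtime"

		"go.opentelemetry.io/otel"
		"go.opentelemetry.io/otel/metric"
		"go.opentelemetry.io/otel/metric/noop"
	)

	// partialMeterProvider embeds the noop implementation, so any API methods
	// it does not define itself silently drop calls instead of panicking when
	// the metric API gains new methods.
	type partialMeterProvider struct {
		noop.MeterProvider
	}

	var _ metric.MeterProvider = partialMeterProvider{}

	func main() {
		meter := otel.GetMeterProvider().Meter("example.com/hypothetical/instrumentation")

		// An asynchronous instrument with its callback registered at creation;
		// the callback is invoked once per collection cycle.
		_, err := meter.Int64ObservableGauge(
			"process.goroutines",
			metric.WithInt64Callback(func(_ context.Context, o metric.Int64Observer) error {
				o.Observe(int64(runtime.NumGoroutine()))
				return nil
			}),
		)
		if err != nil {
			log.Fatal(err)
		}
	}
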