Package oauth2 provides support for making OAuth2 authorized and authenticated HTTP requests, as specified in RFC 6749. It can additionally grant authorization with Bearer JWT.
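As a hedged illustration of the flow this package supports, here is a minimal sketch of the three-legged authorization code flow using golang.org/x/oauth2; the client ID, secret, endpoint URLs, and scopes are placeholders you would replace with your provider's values.

	package main

	import (
		"context"
		"fmt"
		"log"

		"golang.org/x/oauth2"
	)

	func main() {
		ctx := context.Background()

		// Placeholder client registration values; substitute your provider's endpoints.
		conf := &oauth2.Config{
			ClientID:     "YOUR_CLIENT_ID",
			ClientSecret: "YOUR_CLIENT_SECRET",
			Scopes:       []string{"scope1", "scope2"},
			Endpoint: oauth2.Endpoint{
				AuthURL:  "https://provider.example.com/o/oauth2/auth",
				TokenURL: "https://provider.example.com/o/oauth2/token",
			},
		}

		// Redirect the user to the consent page to obtain an authorization code.
		url := conf.AuthCodeURL("state", oauth2.AccessTypeOffline)
		fmt.Printf("Visit the URL for the auth dialog: %v\n", url)

		// Exchange the code for a token, then build an authorized HTTP client.
		var code string
		if _, err := fmt.Scan(&code); err != nil {
			log.Fatal(err)
		}
		tok, err := conf.Exchange(ctx, code)
		if err != nil {
			log.Fatal(err)
		}
		client := conf.Client(ctx, tok)
		client.Get("https://provider.example.com/api/resource")
	}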
Package mux implements a request router and dispatcher. The name mux stands for "HTTP request multiplexer". Like the standard http.ServeMux, mux.Router matches incoming requests against a list of registered routes and calls a handler for the route that matches the URL or other conditions. The main features are: Let's start registering a couple of URL paths and handlers: Here we register three routes mapping URL paths to handlers. This is equivalent to how http.HandleFunc() works: if an incoming request URL matches one of the paths, the corresponding handler is called passing (http.ResponseWriter, *http.Request) as parameters. Paths can have variables. They are defined using the format {name} or {name:pattern}. If a regular expression pattern is not defined, the matched variable will be anything until the next slash. For example: Groups can be used inside patterns, as long as they are non-capturing (?:re). For example: The names are used to create a map of route variables which can be retrieved by calling mux.Vars(): Note that if any capturing groups are present, mux will panic() during parsing. To prevent this, convert any capturing groups to non-capturing, e.g. change "/{sort:(asc|desc)}" to "/{sort:(?:asc|desc)}". This is a change from prior versions which behaved unpredictably when capturing groups were present. And this is all you need to know about the basic usage. More advanced options are explained below. Routes can also be restricted to a domain or subdomain. Just define a host pattern to be matched. They can also have variables: There are several other matchers that can be added. To match path prefixes: ...or HTTP methods: ...or URL schemes: ...or header values: ...or query values: ...or to use a custom matcher function: ...and finally, it is possible to combine several matchers in a single route: Setting the same matching conditions again and again can be boring, so we have a way to group several routes that share the same requirements. We call it "subrouting". For example, let's say we have several URLs that should only match when the host is "www.example.com". Create a route for that host and get a "subrouter" from it: Then register routes in the subrouter: The three URL paths we registered above will only be tested if the domain is "www.example.com", because the subrouter is tested first. This is not only convenient, but also optimizes request matching. You can create subrouters combining any attribute matchers accepted by a route. Subrouters can be used to create domain or path "namespaces": you define subrouters in a central place and then parts of the app can register their paths relative to a given subrouter. There's one more thing about subroutes. When a subrouter has a path prefix, the inner routes use it as a base for their paths: Note that the path provided to PathPrefix() represents a "wildcard": calling PathPrefix("/static/").Handler(...) means that the handler will be passed any request that matches "/static/*". This makes it easy to serve static files with mux: Now let's see how to build registered URLs. Routes can be named. All routes that define a name can have their URLs built, or "reversed". We define a name by calling Name() on a route. For example: To build a URL, get the route and call the URL() method, passing a sequence of key/value pairs for the route variables.
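Here is a hedged sketch of the basic usage described above: plain routes, path variables retrieved with mux.Vars, and a few of the extra matchers. The paths and handler names are illustrative only.

	package main

	import (
		"fmt"
		"net/http"

		"github.com/gorilla/mux"
	)

	func main() {
		r := mux.NewRouter()

		// Plain paths, as with http.HandleFunc.
		r.HandleFunc("/", homeHandler)
		r.HandleFunc("/products", productsHandler)

		// Path variables: {name} or {name:pattern}.
		r.HandleFunc("/articles/{category}/{id:[0-9]+}", func(w http.ResponseWriter, req *http.Request) {
			vars := mux.Vars(req) // map of route variables
			fmt.Fprintf(w, "category=%s id=%s\n", vars["category"], vars["id"])
		})

		// Additional matchers: host, methods, schemes, and a path prefix subrouter.
		r.HandleFunc("/secure", secureHandler).Host("www.example.com").Methods("GET").Schemes("https")
		s := r.PathPrefix("/api").Subrouter()
		s.HandleFunc("/users", usersHandler)

		http.ListenAndServe(":8080", r)
	}

	func homeHandler(w http.ResponseWriter, r *http.Request)     { fmt.Fprintln(w, "home") }
	func productsHandler(w http.ResponseWriter, r *http.Request) { fmt.Fprintln(w, "products") }
	func secureHandler(w http.ResponseWriter, r *http.Request)   { fmt.Fprintln(w, "secure") }
	func usersHandler(w http.ResponseWriter, r *http.Request)    { fmt.Fprintln(w, "users") }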
For the previous route, we would do: ...and the result will be a url.URL with the following path: This also works for host and query value variables: All variables defined in the route are required, and their values must conform to the corresponding patterns. These requirements guarantee that a generated URL will always match a registered route -- the only exception is for explicitly defined "build-only" routes which never match. Regex support also exists for matching Headers within a route. For example, we could do: ...and the route will match requests with a Content-Type of either `application/json` or `application/text`. There's also a way to build only the URL host or path for a route: use the methods URLHost() or URLPath() instead. For the previous route, we would do: And if you use subrouters, host and path defined separately can be built as well: Mux supports the addition of middlewares to a Router, which are executed in the order they are added if a match is found, including for its subrouters. Middlewares are (typically) small pieces of code which take one request, do something with it, and pass it down to another middleware or the final handler. Some common use cases for middleware are request logging, header manipulation, or ResponseWriter hijacking. Typically, the returned handler is a closure which does something with the http.ResponseWriter and http.Request passed to it, and then calls the handler passed as parameter to the MiddlewareFunc (closures can access variables from the context where they are created). A very basic middleware which logs the URI of the request being handled could be written as: Middlewares can be added to a router using `Router.Use()`: A more complex authentication middleware, which maps session tokens to users, could be written as: Note: The handler chain will be stopped if your middleware doesn't call `next.ServeHTTP()` with the corresponding parameters. This can be used to abort a request if the middleware writer wants to.
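Below is a hedged sketch of a named route being reversed with URL() and of a basic logging middleware registered with Router.Use(); the route and handler names are illustrative.

	package main

	import (
		"log"
		"net/http"

		"github.com/gorilla/mux"
	)

	// loggingMiddleware logs the request URI and passes control to the next handler.
	func loggingMiddleware(next http.Handler) http.Handler {
		return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
			log.Println(r.RequestURI)
			next.ServeHTTP(w, r) // omitting this call stops the chain
		})
	}

	func main() {
		r := mux.NewRouter()
		r.HandleFunc("/articles/{category}/{id:[0-9]+}", func(w http.ResponseWriter, r *http.Request) {}).
			Name("article")
		r.Use(loggingMiddleware)

		// Build ("reverse") the URL for the named route.
		url, err := r.Get("article").URL("category", "technology", "id", "42")
		if err != nil {
			log.Fatal(err)
		}
		log.Println(url.Path) // "/articles/technology/42"

		http.ListenAndServe(":8080", r)
	}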
Package sdk is the official AWS SDK for the Go programming language. The AWS SDK for Go provides APIs and utilities that developers can use to build Go applications that use AWS services, such as Amazon Elastic Compute Cloud (Amazon EC2) and Amazon Simple Storage Service (Amazon S3). The SDK removes the complexity of coding directly against a web service interface. It hides a lot of the lower-level plumbing, such as authentication, request retries, and error handling. The SDK also includes helpful utilities on top of the AWS APIs that add additional capabilities and functionality. For example, the Amazon S3 Download and Upload Manager will automatically split up large objects into multiple parts and transfer them concurrently. See the s3manager package documentation for more information. https://docs.aws.amazon.com/sdk-for-go/api/service/s3/s3manager/ Check out the Getting Started Guide and API Reference Docs for details on the SDK's components and on each AWS client the SDK supports. The Getting Started Guide provides examples and a detailed description of how to get set up with the SDK. https://docs.aws.amazon.com/sdk-for-go/v1/developer-guide/welcome.html The API Reference Docs include a detailed breakdown of the SDK's components such as utilities and AWS clients. Use this as a reference of the Go types included with the SDK, such as AWS clients, API operations, and API parameters. https://docs.aws.amazon.com/sdk-for-go/api/ The SDK is composed of two main components: the SDK core and service clients. The SDK core packages are all available under the aws package at the root of the SDK. Each client for a supported AWS service is available within its own package under the service folder at the root of the SDK. aws - SDK core, provides common shared types such as Config, Logger, and utilities to make working with API parameters easier. awserr - Provides the error interface that the SDK will use for all errors that occur in the SDK's processing. This includes service API response errors as well. The Error type is made up of a code and message. Cast the SDK's returned error type to awserr.Error and call the Code method to compare the returned error to specific error codes. See the package's documentation for additional values that can be extracted such as RequestId. credentials - Provides the types and built-in credential providers the SDK will use to retrieve AWS credentials to make API requests with. Nested under this folder are also additional credential providers such as stscreds for assuming IAM roles, and ec2rolecreds for EC2 Instance roles. endpoints - Provides the AWS Regions and Endpoints metadata for the SDK. Use this to look up AWS service endpoint information such as which services are in a region, and what regions a service is in. Constants are also provided for all region identifiers, e.g. UsWest2RegionID for "us-west-2". session - Provides initial default configuration, and loads configuration from external sources such as the environment and the shared credentials file. request - Provides the API request sending and retry logic for the SDK. This package also includes utilities for defining your own request retryer, and configuring how the SDK processes the request. service - Clients for AWS services. All services supported by the SDK are available under this folder. The SDK includes the Go types and utilities you can use to make requests to AWS service APIs. Within the service folder at the root of the SDK you'll find a package for each AWS service the SDK supports.
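The following is a hedged sketch of the awserr.Error pattern described above, comparing the returned error code against the S3 package's ErrCode constants; the region, bucket, and key names are placeholders.

	package main

	import (
		"fmt"

		"github.com/aws/aws-sdk-go/aws"
		"github.com/aws/aws-sdk-go/aws/awserr"
		"github.com/aws/aws-sdk-go/aws/session"
		"github.com/aws/aws-sdk-go/service/s3"
	)

	func main() {
		sess := session.Must(session.NewSession(&aws.Config{Region: aws.String("us-west-2")}))
		svc := s3.New(sess)

		_, err := svc.GetObject(&s3.GetObjectInput{
			Bucket: aws.String("amzn-s3-demo-bucket"), // placeholder bucket
			Key:    aws.String("missing-key"),         // placeholder key
		})
		if err != nil {
			// Cast to awserr.Error to compare against service-specific error codes.
			if aerr, ok := err.(awserr.Error); ok {
				switch aerr.Code() {
				case s3.ErrCodeNoSuchKey:
					fmt.Println("object does not exist:", aerr.Message())
				case s3.ErrCodeNoSuchBucket:
					fmt.Println("bucket does not exist:", aerr.Message())
				default:
					fmt.Println(aerr.Code(), aerr.Message())
				}
			} else {
				fmt.Println(err)
			}
		}
	}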
All service clients follow a common pattern of creation and usage. When creating a client for an AWS service you'll first need to have a Session value constructed. The Session provides configuration that can be shared between your service clients. When service clients are created you can pass in additional configuration via the aws.Config type to override configuration provided by the Session and create service client instances with custom configuration. Once the service's client is created you can use it to make API requests to the AWS service. These clients are safe to use concurrently. In the AWS SDK for Go, you can configure settings for service clients, such as the log level and maximum number of retries. Most settings are optional; however, for each service client, you must specify a region and your credentials. The SDK uses these values to send requests to the correct AWS region and sign requests with the correct credentials. You can specify these values as part of a session or as environment variables. See the SDK's configuration guide for more information. https://docs.aws.amazon.com/sdk-for-go/v1/developer-guide/configuring-sdk.html See the session package documentation for more information on how to use Session with the SDK. https://docs.aws.amazon.com/sdk-for-go/api/aws/session/ See the Config type in the aws package for more information on configuration options. https://docs.aws.amazon.com/sdk-for-go/api/aws/#Config When using the SDK you'll generally need your AWS credentials to authenticate with AWS services. The SDK supports multiple methods of supplying these credentials. By default the SDK will source credentials automatically from its default credential chain. See the session package for more information on this chain, and how to configure it. The common items in the credential chain are the following: Environment Credentials - Set of environment variables that are useful when subprocesses are created for specific roles. Shared Credentials file (~/.aws/credentials) - This file stores your credentials based on a profile name and is useful for local development. EC2 Instance Role Credentials - Use EC2 Instance Role to assign credentials to an application running on an EC2 instance. This removes the need to manage credential files in production. Credentials can be configured in code as well by setting the Config's Credentials value to a custom provider or using one of the providers included with the SDK to bypass the default credential chain and use a custom one. This is helpful when you want to instruct the SDK to only use a specific set of credentials or providers. This example creates a credential provider for assuming an IAM role, "myRoleARN", and configures the S3 service client to use that role for API requests. See the credentials package documentation for more information on credential providers included with the SDK, and how to customize the SDK's usage of credentials. https://docs.aws.amazon.com/sdk-for-go/api/aws/credentials The SDK has support for the shared configuration file (~/.aws/config). This support can be enabled by setting the environment variable "AWS_SDK_LOAD_CONFIG=1", or enabling the feature in code when creating a Session via the Option's SharedConfigState parameter. In addition to the credentials you'll need to specify the region the SDK will use to make AWS API requests to. In the SDK you can specify the region either with an environment variable, or directly in code when a Session or service client is created.
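Here is a hedged sketch of the creation pattern described above: a shared Session, a client that uses it as-is, and a client with per-client overrides that assumes an IAM role via stscreds. The region and role ARN are placeholders.

	package main

	import (
		"github.com/aws/aws-sdk-go/aws"
		"github.com/aws/aws-sdk-go/aws/credentials/stscreds"
		"github.com/aws/aws-sdk-go/aws/session"
		"github.com/aws/aws-sdk-go/service/s3"
	)

	func main() {
		// Shared configuration for all clients created from this Session.
		sess := session.Must(session.NewSession(&aws.Config{
			Region: aws.String("us-west-2"),
		}))

		// A client that uses the Session's configuration as-is.
		_ = s3.New(sess)

		// A client with per-client overrides: a different region and credentials
		// obtained by assuming an IAM role (the ARN here is a placeholder).
		creds := stscreds.NewCredentials(sess, "arn:aws:iam::123456789012:role/myRoleARN")
		_ = s3.New(sess, &aws.Config{
			Region:      aws.String("us-east-1"),
			Credentials: creds,
		})
	}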
The last value specified in code wins if the region is specified multiple ways. To set the region via the environment variable, set "AWS_REGION" to the region you want the SDK to use. Using this method to set the region will allow you to run your application in multiple regions without needing additional code in the application to select the region. The endpoints package includes constants for all regions the SDK knows. The values are all suffixed with RegionID. These values are helpful, because they reduce the need to type the region string manually. To set the region on a Session, set the aws package's Config struct parameter Region to the AWS region you want the service clients created from the session to use. This is helpful when you want to create multiple service clients, and all of the clients make API requests to the same region. See the endpoints package for the AWS Regions and Endpoints metadata. https://docs.aws.amazon.com/sdk-for-go/api/aws/endpoints/ In addition to setting the region when creating a Session you can also set the region on a per-service-client basis. This overrides the region of a Session. This is helpful when you want to create service clients in specific regions different from the Session's region. See the Config type in the aws package for more information and additional options such as setting the Endpoint, and other service client configuration options. https://docs.aws.amazon.com/sdk-for-go/api/aws/#Config Once the client is created you can make an API request to the service. Each API method takes an input parameter, and returns the service response and an error. The SDK provides methods for making the API call in multiple ways. In this list we'll use the S3 ListObjects API as an example for the different ways of making API requests. ListObjects - Base API operation that will make the API request to the service. ListObjectsRequest - API methods suffixed with Request will construct the API request, but not send it. This is also helpful when you want to get a presigned URL for a request, and share the presigned URL instead of your application making the request directly. ListObjectsPages - Same as the base API operation, but uses a callback to automatically handle pagination of the API's response. ListObjectsWithContext - Same as base API operation, but adds support for the Context pattern. This is helpful for controlling the canceling of in-flight requests. See the Go standard library context package for more information. This method also takes the request package's Option functional options as the variadic argument for modifying how the request will be made, or extracting information from the raw HTTP response. ListObjectsPagesWithContext - Same as ListObjectsPages, but adds support for the Context pattern. Similar to ListObjectsWithContext, this method also takes the request package's Option function option types as the variadic argument. In addition to the API operations the SDK also includes several higher level methods that abstract checking for and waiting for an AWS resource to be in a desired state. In this list we'll use WaitUntilBucketExists to demonstrate the different forms of waiters. WaitUntilBucketExists - Method that makes an API request to query an AWS service for a resource's state, returning successfully when that state is reached. WaitUntilBucketExistsWithContext - Same as WaitUntilBucketExists, but adds support for the Context pattern.
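The following hedged sketch shows the base ListObjects operation and the Pages variant from the list above; the region and bucket name are placeholders.

	package main

	import (
		"fmt"
		"log"

		"github.com/aws/aws-sdk-go/aws"
		"github.com/aws/aws-sdk-go/aws/session"
		"github.com/aws/aws-sdk-go/service/s3"
	)

	func main() {
		sess := session.Must(session.NewSession(&aws.Config{Region: aws.String("us-west-2")}))
		svc := s3.New(sess)

		// Base API operation: one request, one page of results.
		out, err := svc.ListObjects(&s3.ListObjectsInput{
			Bucket: aws.String("amzn-s3-demo-bucket"), // placeholder bucket
		})
		if err != nil {
			log.Fatal(err)
		}
		fmt.Println("first page:", len(out.Contents), "objects")

		// Pages variant: the callback is invoked for each page; return false to stop.
		err = svc.ListObjectsPages(&s3.ListObjectsInput{
			Bucket: aws.String("amzn-s3-demo-bucket"),
		}, func(page *s3.ListObjectsOutput, lastPage bool) bool {
			for _, obj := range page.Contents {
				fmt.Println(aws.StringValue(obj.Key))
			}
			return true
		})
		if err != nil {
			log.Fatal(err)
		}
	}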
In addition, these methods take the request package's WaiterOptions to configure the waiter and how the underlying request will be made by the SDK. The API method will document which error codes the service might return for the operation. These errors will also be available as const strings prefixed with "ErrCode" in the service client's package. If there are no errors listed in the API's SDK documentation you'll need to consult the AWS service's API documentation for the errors that could be returned. Pagination helper methods are suffixed with "Pages", and provide the functionality needed to round-trip API page requests. Pagination methods take a callback function that will be called for each page of the API's response. Waiter helper methods provide the functionality to wait for an AWS resource state. These methods abstract the logic needed to check the state of an AWS resource, and wait until that resource is in a desired state. The waiter will block until the resource is in the state that is desired, an error occurs, or the waiter times out. If the waiter times out, the error code returned will be request.WaiterResourceNotReadyErrorCode. This example shows a complete working Go file which will upload a file to S3 and use the Context pattern to implement timeout logic that will cancel the request if it takes too long. This example highlights how to use sessions, create a service client, make a request, handle the error, and process the response.
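Below is a hedged, simplified sketch in the spirit of the complete example described above: it waits for a bucket with WaitUntilBucketExists and uploads an object with PutObjectWithContext under a context timeout. The region, bucket, key, and timeout values are placeholders.

	package main

	import (
		"context"
		"log"
		"strings"
		"time"

		"github.com/aws/aws-sdk-go/aws"
		"github.com/aws/aws-sdk-go/aws/awserr"
		"github.com/aws/aws-sdk-go/aws/request"
		"github.com/aws/aws-sdk-go/aws/session"
		"github.com/aws/aws-sdk-go/service/s3"
	)

	func main() {
		sess := session.Must(session.NewSession(&aws.Config{Region: aws.String("us-west-2")}))
		svc := s3.New(sess)

		// Block until the bucket exists, an error occurs, or the waiter times out.
		if err := svc.WaitUntilBucketExists(&s3.HeadBucketInput{
			Bucket: aws.String("amzn-s3-demo-bucket"),
		}); err != nil {
			log.Fatal(err)
		}

		// Cancel the upload if it takes longer than 30 seconds.
		ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
		defer cancel()

		_, err := svc.PutObjectWithContext(ctx, &s3.PutObjectInput{
			Bucket: aws.String("amzn-s3-demo-bucket"),
			Key:    aws.String("example-key"),
			Body:   strings.NewReader("hello"),
		})
		if err != nil {
			if aerr, ok := err.(awserr.Error); ok && aerr.Code() == request.CanceledErrorCode {
				log.Fatal("upload canceled due to timeout: ", err)
			}
			log.Fatal(err)
		}
	}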
Package pq is a pure Go Postgres driver for the database/sql package. In most cases clients will use the database/sql package instead of using this package directly. For example: You can also connect to a database using a URL. For example: Similarly to libpq, when establishing a connection using pq you are expected to supply a connection string containing zero or more parameters. A subset of the connection parameters supported by libpq are also supported by pq. Additionally, pq also lets you specify run-time parameters (such as search_path or work_mem) directly in the connection string. This is different from libpq, which does not allow run-time parameters in the connection string, instead requiring you to supply them in the options parameter. For compatibility with libpq, the following special connection parameters are supported: Valid values for sslmode are: See http://www.postgresql.org/docs/current/static/libpq-connect.html#LIBPQ-CONNSTRING for more information about connection string parameters. Use single quotes for values that contain whitespace: A backslash will escape the next character in values: Note that the connection parameter client_encoding (which sets the text encoding for the connection) may be set but must be "UTF8", matching with the same rules as Postgres. It is an error to provide any other value. In addition to the parameters listed above, any run-time parameter that can be set at backend start time can be set in the connection string. For more information, see http://www.postgresql.org/docs/current/static/runtime-config.html. Most environment variables as specified at http://www.postgresql.org/docs/current/static/libpq-envars.html supported by libpq are also supported by pq. If any of the environment variables not supported by pq are set, pq will panic during connection establishment. Environment variables have a lower precedence than explicitly provided connection parameters. The pgpass mechanism as described in http://www.postgresql.org/docs/current/static/libpq-pgpass.html is supported, but on Windows PGPASSFILE must be specified explicitly. database/sql does not dictate any specific format for parameter markers in query strings, and pq uses the Postgres-native ordinal markers, as shown above. The same marker can be reused for the same parameter: pq does not support the LastInsertId() method of the Result type in database/sql. To return the identifier of an INSERT (or UPDATE or DELETE), use the Postgres RETURNING clause with a standard Query or QueryRow call: For more details on RETURNING, see the Postgres documentation: For additional instructions on querying see the documentation for the database/sql package. Parameters pass through driver.DefaultParameterConverter before they are handled by this package. When the binary_parameters connection option is enabled, []byte values are sent directly to the backend as data in binary format. This package returns the following types for values from the PostgreSQL backend: All other types are returned directly from the backend as []byte values in text format. pq may return errors of type *pq.Error which can be interrogated for error details: See the pq.Error type for details. You can perform bulk imports by preparing a statement returned by pq.CopyIn (or pq.CopyInSchema) in an explicit transaction (sql.Tx). The returned statement handle can then be repeatedly "executed" to copy data into the target table. After all data has been processed you should call Exec() once with no arguments to flush all buffered data. 
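As a hedged sketch of the connection and RETURNING patterns described above; the connection string, table, and column names are placeholders.

	package main

	import (
		"database/sql"
		"fmt"
		"log"

		_ "github.com/lib/pq"
	)

	func main() {
		// Key/value form; a URL such as "postgres://user:pass@localhost/mydb?sslmode=disable"
		// works as well. All values here are placeholders.
		db, err := sql.Open("postgres", "user=pqgotest dbname=pqgotest sslmode=verify-full")
		if err != nil {
			log.Fatal(err)
		}
		defer db.Close()

		// pq uses Postgres-native ordinal markers ($1, $2, ...).
		age := 21
		rows, err := db.Query("SELECT name FROM users WHERE age = $1", age)
		if err != nil {
			log.Fatal(err)
		}
		defer rows.Close()

		// LastInsertId is not supported; use RETURNING instead.
		var id int
		err = db.QueryRow(
			"INSERT INTO users(name, age) VALUES($1, $2) RETURNING id",
			"alice", 30,
		).Scan(&id)
		if err != nil {
			log.Fatal(err)
		}
		fmt.Println("new id:", id)
	}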
Any call to Exec() might return an error which should be handled appropriately, but because of the internal buffering an error returned by Exec() might not be related to the data passed in the call that failed. CopyIn uses COPY FROM internally. It is not possible to COPY outside of an explicit transaction in pq. Usage example: PostgreSQL supports a simple publish/subscribe model over database connections. See http://www.postgresql.org/docs/current/static/sql-notify.html for more information about the general mechanism. To start listening for notifications, you first have to open a new connection to the database by calling NewListener. This connection can not be used for anything other than LISTEN / NOTIFY. Calling Listen will open a "notification channel"; once a notification channel is open, a notification generated on that channel will effect a send on the Listener.Notify channel. A notification channel will remain open until Unlisten is called, though connection loss might result in some notifications being lost. To solve this problem, Listener sends a nil pointer over the Notify channel any time the connection is re-established following a connection loss. The application can get information about the state of the underlying connection by setting an event callback in the call to NewListener. A single Listener can safely be used from concurrent goroutines, which means that there is often no need to create more than one Listener in your application. However, a Listener is always connected to a single database, so you will need to create a new Listener instance for every database you want to receive notifications in. The channel name in both Listen and Unlisten is case sensitive, and can contain any characters legal in an identifier (see http://www.postgresql.org/docs/current/static/sql-syntax-lexical.html#SQL-SYNTAX-IDENTIFIERS for more information). Note that the channel name will be truncated to 63 bytes by the PostgreSQL server. You can find a complete, working example of Listener usage at https://godoc.org/github.com/lib/pq/example/listen. If you need support for Kerberos authentication, add the following to your main package: This package is in a separate module so that users who don't need Kerberos don't have to download unnecessary dependencies. When imported, additional connection string parameters are supported:
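Here is a hedged sketch of the COPY workflow described above, using pq.CopyIn inside an explicit transaction; the connection string, table, and column names are placeholders.

	package main

	import (
		"database/sql"
		"log"

		"github.com/lib/pq"
	)

	func main() {
		db, err := sql.Open("postgres", "dbname=pqgotest sslmode=disable")
		if err != nil {
			log.Fatal(err)
		}
		defer db.Close()

		txn, err := db.Begin()
		if err != nil {
			log.Fatal(err)
		}

		// Prepare the COPY statement inside the transaction.
		stmt, err := txn.Prepare(pq.CopyIn("users", "name", "age"))
		if err != nil {
			log.Fatal(err)
		}

		// "Execute" once per row to buffer the data.
		rows := []struct {
			name string
			age  int
		}{{"alice", 30}, {"bob", 25}}
		for _, u := range rows {
			if _, err := stmt.Exec(u.name, u.age); err != nil {
				log.Fatal(err)
			}
		}

		// Exec with no arguments flushes the buffered data.
		if _, err := stmt.Exec(); err != nil {
			log.Fatal(err)
		}
		if err := stmt.Close(); err != nil {
			log.Fatal(err)
		}
		if err := txn.Commit(); err != nil {
			log.Fatal(err)
		}
	}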
Package cloud is the root of the packages used to access Google Cloud Services. See https://pkg.go.dev/cloud.google.com/go for a full list of sub-modules. All clients in sub-packages are configurable via client options. These options are described here: https://pkg.go.dev/google.golang.org/api/option. Endpoint configuration is used to specify the URL to which requests are sent. It is used for services that support or require regional endpoints, as well as for other use cases such as testing against fake servers. For example, the Vertex AI service recommends that you configure the endpoint to the location with the features you want that is closest to your physical location or the location of your users. There is no global endpoint for Vertex AI. See Vertex AI - Locations for more details. The following example demonstrates configuring a Vertex AI client with a regional endpoint: All of the clients support authentication via Google Application Default Credentials, or by providing a JSON key file for a Service Account. See examples below. Google Application Default Credentials (ADC) is the recommended way to authorize and authenticate clients. For information on how to create and obtain Application Default Credentials, see https://cloud.google.com/docs/authentication/production. If you have your environment configured correctly you will not need to pass any extra information to the client libraries. Here is an example of a client using ADC to authenticate: You can use a file with credentials to authenticate and authorize, such as a JSON key file associated with a Google service account. Service Account keys can be created and downloaded from https://console.cloud.google.com/iam-admin/serviceaccounts. This example uses the Secret Manager client, but the same steps apply to all other client libraries in this package as well. Example: In some cases (for instance, you don't want to store secrets on disk), you can create credentials from in-memory JSON and use the WithCredentials option. This example uses the Secret Manager client, but the same steps apply to all other client libraries as well. Note that scopes can be found at https://developers.google.com/identity/protocols/oauth2/scopes, and are also provided in all auto-generated libraries: for example, cloud.google.com/go/secretmanager/apiv1 provides DefaultAuthScopes. Example: By default, non-streaming methods, like Create or Get, will have a default deadline applied to the context provided at call time, unless a context deadline is already set. Streaming methods have no default deadline and will run indefinitely. To set timeouts or arrange for cancellation, use context. Transient errors will be retried when correctness allows. Here is an example of setting a timeout for an RPC using context.WithTimeout: Here is an example of setting a timeout for an RPC using github.com/googleapis/gax-go/v2.WithTimeout: To arrange for an RPC to be canceled, use context.WithCancel: Do not attempt to control the initial connection (dialing) of a service by setting a timeout on the context passed to NewClient. Dialing is non-blocking, so timeouts would be ineffective and would only interfere with credential refreshing, which uses the same context. Regardless of which transport is used, request headers can be set in the same way using callctx.SetHeaders. Here is a generic example: There are some header keys that Google reserves for internal use that must not be overwritten.
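The following hedged sketch uses the Secret Manager client mentioned above to show client creation (Application Default Credentials by default, or an optional credentials file) and a per-call timeout via context.WithTimeout. The key file path and resource name are placeholders, and the secretmanager import paths assume a recent version of that module.

	package main

	import (
		"context"
		"fmt"
		"log"
		"time"

		secretmanager "cloud.google.com/go/secretmanager/apiv1"
		"cloud.google.com/go/secretmanager/apiv1/secretmanagerpb"
		"google.golang.org/api/option"
	)

	func main() {
		ctx := context.Background()

		// With Application Default Credentials configured, no options are needed.
		// To use a service account key file instead, pass option.WithCredentialsFile.
		client, err := secretmanager.NewClient(ctx,
			option.WithCredentialsFile("/path/to/service-account-key.json"), // placeholder path
		)
		if err != nil {
			log.Fatal(err)
		}
		defer client.Close()

		// Apply a per-call timeout with context.WithTimeout.
		callCtx, cancel := context.WithTimeout(ctx, 10*time.Second)
		defer cancel()

		resp, err := client.AccessSecretVersion(callCtx, &secretmanagerpb.AccessSecretVersionRequest{
			Name: "projects/my-project/secrets/my-secret/versions/latest", // placeholder resource name
		})
		if err != nil {
			log.Fatal(err)
		}
		fmt.Println(len(resp.Payload.Data), "bytes of secret data")
	}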
The following header keys are broadly considered reserved and should not be conveyed by client library users unless instructed to do so: * `x-goog-api-client` * `x-goog-request-params` Be sure to check the individual package documentation for other service-specific reserved headers. For example, Storage supports a specific auditing header that is mentioned in that module's documentation. Google Cloud services respect system parameters that can be used to augment request and/or response behavior. For the most part, they are not needed when using one of the enclosed client libraries. However, those that may be necessary are made available via the `callctx` package. If not present there, consider opening an issue on that repo to request a new constant. Connection pooling differs in clients based on their transport. Cloud clients either rely on HTTP or gRPC transports to communicate with Google Cloud. Cloud clients that use HTTP rely on the underlying HTTP transport to cache connections for later re-use. These are cached to the http.MaxIdleConns and http.MaxIdleConnsPerHost settings in http.DefaultTransport by default. For gRPC clients, connection pooling is configurable. Users of Cloud Client Libraries may specify option.WithGRPCConnectionPool(n) as a client option to NewClient calls. This configures the underlying gRPC connections to be pooled and accessed in a round robin fashion. Minimal container images like Alpine lack CA certificates. This causes RPCs to appear to hang, because gRPC retries indefinitely. See https://github.com/googleapis/google-cloud-go/issues/928 for more information. For tips on debugging code that calls into our libraries, check out our Debugging Guide. For tips on how to write tests against code that calls into our libraries, check out our Testing Guide. Most of the errors returned by the generated clients are wrapped in a github.com/googleapis/gax-go/v2/apierror.APIError and can be further unwrapped into a google.golang.org/grpc/status.Status or google.golang.org/api/googleapi.Error depending on the transport used to make the call (gRPC or REST). Converting your errors to these types can be a useful way to get more information about what went wrong while debugging. APIError gives access to specific details in the error. The transport-specific errors can still be unwrapped using the APIError. If the gRPC transport was used, the google.golang.org/grpc/status.Status can still be parsed using the google.golang.org/grpc/status.FromError function. Semver is used to communicate stability of the sub-modules of this package. Note, some stable sub-modules do contain packages, and sometimes features, that are considered unstable. If something is unstable it will be explicitly labeled as such. Example of the package docs in an unstable package: Clients that contain alpha and beta in their import path may change or go away without notice. Clients marked stable will maintain compatibility with future versions for as long as we can reasonably sustain. Incompatible changes might be made in some situations, including:
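Below is a hedged sketch of unwrapping a client error into an apierror.APIError and, for gRPC transports, a status.Status; the Secret Manager call and resource name are illustrative only.

	package main

	import (
		"context"
		"errors"
		"fmt"
		"log"

		secretmanager "cloud.google.com/go/secretmanager/apiv1"
		"cloud.google.com/go/secretmanager/apiv1/secretmanagerpb"
		"github.com/googleapis/gax-go/v2/apierror"
		"google.golang.org/grpc/status"
	)

	func main() {
		ctx := context.Background()
		client, err := secretmanager.NewClient(ctx)
		if err != nil {
			log.Fatal(err)
		}
		defer client.Close()

		_, err = client.AccessSecretVersion(ctx, &secretmanagerpb.AccessSecretVersionRequest{
			Name: "projects/my-project/secrets/does-not-exist/versions/latest", // placeholder
		})
		if err != nil {
			// Structured details such as the error reason and domain.
			var ae *apierror.APIError
			if errors.As(err, &ae) {
				fmt.Println("reason:", ae.Reason())
				fmt.Println("domain:", ae.Domain())
			}
			// For gRPC transports, the underlying status can also be inspected.
			if s, ok := status.FromError(err); ok {
				fmt.Println("code:", s.Code(), "message:", s.Message())
			}
		}
	}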
Package api is the root of the packages used to access Google Cloud Services. See https://godoc.org/google.golang.org/api for a full list of sub-packages. Within api there exist numerous clients which connect to Google APIs, and various utility packages. All clients in sub-packages are configurable via client options. These options are described here: https://godoc.org/google.golang.org/api/option. All the clients in sub-packages support authentication via Google Application Default Credentials (see https://cloud.google.com/docs/authentication/production), or by providing a JSON key file for a Service Account. See the authentication examples in https://godoc.org/google.golang.org/api/transport for more details. Due to the auto-generated nature of this collection of libraries, complete APIs or specific versions can appear or go away without notice. As a result, you should always locally vendor any API(s) that your code relies upon. Google APIs follow semver as specified by https://cloud.google.com/apis/design/versioning. The code generator and the code it produces - the libraries in the google.golang.org/api/... subpackages - are beta. Note that versioning and stability are strictly not communicated through Go modules. Go modules are used only for dependency management. Many parameters are specified using ints. However, underlying APIs might operate on a finer granularity, expecting int64, int32, uint64, or uint32, all of which have different maximum values. Consequently, specifying an int parameter in one of these clients may result in an error from the API because the value is too large. To see the exact type of int that the API expects, you can inspect the API's discovery doc. A global catalogue pointing to the discovery doc of APIs can be found at https://www.googleapis.com/discovery/v1/apis. The ForceSendFields field can be found on all Request/Response structs in the generated clients. All of these types have the JSON `omitempty` field tag present on their fields. This means if a type is set to its default value it will not be marshalled. Sometimes you may actually want to send a default value, for instance sending an int of `0`. In this case you can override the `omitempty` feature by adding the field name to the `ForceSendFields` slice. See docs on any struct for more details. This may be used to include empty fields in Patch requests. The NullFields field can be found on all Request/Response structs in the generated clients. It can be used to send JSON null values for the listed fields. By default, fields with empty values are omitted from API requests because of the presence of the `omitempty` field tag on all fields. However, any field with an empty value appearing in NullFields will be sent to the server as null. It is an error if a field in this list has a non-empty value. This may be used to include null fields in Patch requests. An error returned by a client's Do method may be cast to a *googleapi.Error or unwrapped to an *apierror.APIError. The https://pkg.go.dev/google.golang.org/api/googleapi#Error type is useful for getting the HTTP status code: The https://pkg.go.dev/github.com/googleapis/gax-go/v2/apierror#APIError type is useful for inspecting structured details of the underlying API response, such as the reason for the error and the error domain, which is typically the registered service name of the tool or product that generated the error: If an API call returns an Operation, that means it could take some time to complete the work initiated by the API call.
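As a hedged sketch of the error handling described above; the inspect helper and the constructed error are illustrative stand-ins for the error returned by a generated client's Do method.

	package main

	import (
		"errors"
		"fmt"

		"github.com/googleapis/gax-go/v2/apierror"
		"google.golang.org/api/googleapi"
	)

	// inspect reports details for an error returned by any generated client's Do method.
	func inspect(err error) {
		// HTTP status code and message.
		var gerr *googleapi.Error
		if errors.As(err, &gerr) {
			fmt.Println("HTTP status:", gerr.Code, "message:", gerr.Message)
		}

		// Structured details such as the error reason and domain.
		var ae *apierror.APIError
		if errors.As(err, &ae) {
			fmt.Println("reason:", ae.Reason(), "domain:", ae.Domain())
		}
	}

	func main() {
		// Placeholder: in real code, pass the error returned by a call like
		// svc.Resource.Get(...).Do().
		inspect(&googleapi.Error{Code: 404, Message: "not found"})
	}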
Applications that are interested in the end result of the operation they initiated should wait until the Operation.Done field indicates it is finished. To do this, use the service's Operation client, and a loop, like so:
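The following is only an illustrative sketch of the polling loop; the operation struct and getOp callback are hypothetical stand-ins for your service's generated Operation type and its Operations client's Get(...).Do() call.

	package main

	import (
		"fmt"
		"time"
	)

	// operation mirrors the Done/Error shape of the generated Operation structs.
	// getOp stands in for a call like svc.Operations.Get(name).Do() on your
	// service's Operations client; both names here are illustrative.
	type operation struct {
		Done  bool
		Error error
	}

	func waitForOperation(getOp func() (*operation, error)) error {
		for {
			op, err := getOp()
			if err != nil {
				return err
			}
			if op.Done {
				return op.Error
			}
			time.Sleep(5 * time.Second) // poll interval; tune for your workload
		}
	}

	func main() {
		// Toy stand-in that completes immediately; replace with a real Operations.Get call.
		err := waitForOperation(func() (*operation, error) {
			return &operation{Done: true}, nil
		})
		fmt.Println("operation finished, err =", err)
	}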
Package storage provides an easy way to work with Google Cloud Storage. Google Cloud Storage stores data in named objects, which are grouped into buckets. More information about Google Cloud Storage is available at https://cloud.google.com/storage/docs. See https://pkg.go.dev/cloud.google.com/go for authentication, timeouts, connection pooling and similar aspects of this package. To start working with this package, create a Client: The client will use your default application credentials. Clients should be reused instead of created as needed. The methods of Client are safe for concurrent use by multiple goroutines. You may configure the client by passing in options from the google.golang.org/api/option package. You may also use options defined in this package, such as WithJSONReads. If you only wish to access public data, you can create an unauthenticated client with To use an emulator with this library, you can set the STORAGE_EMULATOR_HOST environment variable to the address at which your emulator is running. This will send requests to that address instead of to Cloud Storage. You can then create and use a client as usual: Please note that there is no official emulator for Cloud Storage. A Google Cloud Storage bucket is a collection of objects. To work with a bucket, make a bucket handle: A handle is a reference to a bucket. You can have a handle even if the bucket doesn't exist yet. To create a bucket in Google Cloud Storage, call BucketHandle.Create: Note that although buckets are associated with projects, bucket names are global across all projects. Each bucket has associated metadata, represented in this package by BucketAttrs. The third argument to BucketHandle.Create allows you to set the initial BucketAttrs of a bucket. To retrieve a bucket's attributes, use BucketHandle.Attrs: An object holds arbitrary data as a sequence of bytes, like a file. You refer to objects using a handle, just as with buckets, but unlike buckets you don't explicitly create an object. Instead, the first time you write to an object it will be created. You can use the standard Go io.Reader and io.Writer interfaces to read and write object data: Objects also have attributes, which you can fetch with ObjectHandle.Attrs: Listing objects in a bucket is done with the BucketHandle.Objects method: Objects are listed lexicographically by name. To filter objects lexicographically, Query.StartOffset and/or Query.EndOffset can be used: If only a subset of object attributes is needed when listing, specifying this subset using Query.SetAttrSelection may speed up the listing process: Both objects and buckets have ACLs (Access Control Lists). An ACL is a list of ACLRules, each of which specifies the role of a user, group or project. ACLs are suitable for fine-grained control, but you may prefer using IAM to control access at the project level (see the Cloud Storage IAM docs). To list the ACLs of a bucket or object, obtain an ACLHandle and call ACLHandle.List: You can also set and delete ACLs. Every object has a generation and a metageneration. The generation changes whenever the content changes, and the metageneration changes whenever the metadata changes. Conditions let you check these values before an operation; the operation only executes if the conditions match. You can use conditions to prevent race conditions in read-modify-write operations. For example, say you've read an object's metadata into objAttrs. Now you want to write to that object, but only if its contents haven't changed since you read it.
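Here is a hedged sketch of creating a client and reading and writing object data as described above; the bucket and object names are placeholders.

	package main

	import (
		"context"
		"fmt"
		"io"
		"log"
		"os"

		"cloud.google.com/go/storage"
	)

	func main() {
		ctx := context.Background()

		// Uses Application Default Credentials; reuse the client across goroutines.
		client, err := storage.NewClient(ctx)
		if err != nil {
			log.Fatal(err)
		}
		defer client.Close()

		bkt := client.Bucket("my-bucket") // placeholder bucket name
		obj := bkt.Object("notes.txt")    // placeholder object name

		// Write: the object is created (or overwritten) when the Writer is closed.
		w := obj.NewWriter(ctx)
		if _, err := fmt.Fprintln(w, "hello, world"); err != nil {
			log.Fatal(err)
		}
		if err := w.Close(); err != nil {
			log.Fatal(err)
		}

		// Read the object back.
		r, err := obj.NewReader(ctx)
		if err != nil {
			log.Fatal(err)
		}
		defer r.Close()
		if _, err := io.Copy(os.Stdout, r); err != nil {
			log.Fatal(err)
		}
	}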
Here is how to express that (a sketch follows at the end of this section): You can obtain a URL that lets anyone read or write an object for a limited time. Signing a URL requires credentials authorized to sign a URL. To use the same authentication that was used when instantiating the Storage client, use BucketHandle.SignedURL. You can also sign a URL without creating a client. See the documentation of SignedURL for details. A signed post policy (V4) is a type of signed request that allows uploads through HTML forms directly to Cloud Storage with temporary permission. Conditions can be applied to restrict how the HTML form is used and exercised by a user. For more information, please see the XML POST Object docs as well as the documentation of BucketHandle.GenerateSignedPostPolicyV4. If the GoogleAccessID and PrivateKey option fields are not provided, they will be automatically detected by BucketHandle.SignedURL and BucketHandle.GenerateSignedPostPolicyV4 if any of the following are true: Detecting GoogleAccessID may not be possible if you are authenticated using a token source or using option.WithHTTPClient. In this case, you can provide a service account email for GoogleAccessID and the client will attempt to sign the URL or Post Policy using that service account. To generate the signature, you must have: Errors returned by this client are often of the type googleapi.Error. These errors can be introspected for more information by using errors.As with the richer googleapi.Error type. For example: Methods in this package may retry calls that fail with transient errors. Retrying continues indefinitely unless the controlling context is canceled, the client is closed, or a non-transient error is received. To stop retries from continuing, use context timeouts or cancellation. The retry strategy in this library follows best practices for Cloud Storage. By default, operations are retried only if they are idempotent, and exponential backoff with jitter is employed. In addition, errors are only retried if they are defined as transient by the service. See the Cloud Storage retry docs for more information. Users can configure non-default retry behavior for a single library call (using BucketHandle.Retryer and ObjectHandle.Retryer) or for all calls made by a client (using Client.SetRetry). For example: You can add custom headers to any API call made by this package by using callctx.SetHeaders on the context which is passed to the method. For example, to add a custom audit logging header: This package includes support for the Cloud Storage gRPC API. The implementation uses gRPC rather than the default JSON & XML APIs to make requests to Cloud Storage. The Go Storage gRPC client is generally available. The Notifications, Service Account HMAC, and GetServiceAccount RPCs are not supported through the gRPC client. To create a client which will use gRPC, use the alternate constructor: Using the gRPC API inside GCP with a bucket in the same region can allow for Direct Connectivity (enabling requests to skip some proxy steps and reducing response latency). A warning is emitted if gRPC is not used within GCP to warn that Direct Connectivity could not be initialized. Direct Connectivity is not required to access the gRPC API. Dependencies for the gRPC API may slightly increase the size of binaries for applications depending on this package. If you are not using gRPC, you can use the build tag `disable_grpc_modules` to opt out of these dependencies and reduce the binary size.
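As a hedged sketch of the read-modify-write precondition described at the start of this section, using a GenerationMatch condition; the bucket and object names are placeholders.

	package main

	import (
		"context"
		"log"

		"cloud.google.com/go/storage"
	)

	func main() {
		ctx := context.Background()
		client, err := storage.NewClient(ctx)
		if err != nil {
			log.Fatal(err)
		}
		defer client.Close()

		obj := client.Bucket("my-bucket").Object("notes.txt") // placeholder names

		// Read the current attributes.
		objAttrs, err := obj.Attrs(ctx)
		if err != nil {
			log.Fatal(err)
		}

		// Only write if the object's content has not changed since we read it.
		w := obj.If(storage.Conditions{GenerationMatch: objAttrs.Generation}).NewWriter(ctx)
		if _, err := w.Write([]byte("updated contents")); err != nil {
			log.Fatal(err)
		}
		if err := w.Close(); err != nil {
			// A failed precondition here means the object changed underneath us.
			log.Fatal(err)
		}
	}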
The gRPC client emits metrics by default and will export the gRPC telemetry discussed in gRFC/66 and gRFC/78 to Google Cloud Monitoring. The metrics are accessible through the Cloud Monitoring API, and you incur no additional cost for publishing the metrics. Google Cloud Support can use this information to more quickly diagnose problems related to GCS and gRPC. Sending this data does not incur any billing charges, and requires minimal CPU (a single RPC every minute) or memory (a few KiB to batch the telemetry). To access the metrics you can view them through the Cloud Monitoring metric explorer with the prefix `storage.googleapis.com/client`. Metrics are emitted every minute. You can disable metrics when creating a new gRPC client by using WithDisabledClientMetrics (see the sketch at the end of this section). The metrics exporter uses the Cloud Monitoring API, which determines the project ID and credentials as follows: * The project ID is determined using the OTel Resource Detector for the environment; otherwise it falls back to the project provided by google.FindCredentials. * Credentials are determined using Application Default Credentials. The principal must have the `roles/monitoring.metricWriter` role granted. If not, a warning will be logged; subsequent warnings are silenced to prevent noisy logs. Certain control plane and long-running operations for Cloud Storage (including Folder and Managed Folder operations) are supported via the autogenerated Storage Control client, which is available as a subpackage in this module. See package docs at cloud.google.com/go/storage/control/apiv2 or reference the Storage Control API docs.
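A hedged sketch of creating a gRPC-based client with client metrics disabled via WithDisabledClientMetrics, as mentioned above; the bucket name is a placeholder and the option assumes a recent version of this package.

	package main

	import (
		"context"
		"log"

		"cloud.google.com/go/storage"
	)

	func main() {
		ctx := context.Background()

		// gRPC-based client with the default client metrics disabled.
		client, err := storage.NewGRPCClient(ctx, storage.WithDisabledClientMetrics())
		if err != nil {
			log.Fatal(err)
		}
		defer client.Close()

		_ = client.Bucket("my-bucket") // placeholder; use the handle as usual
	}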
Package pubsub provides an easy way to publish and receive Google Cloud Pub/Sub messages, hiding the details of the underlying server RPCs. Pub/Sub is a many-to-many, asynchronous messaging system that decouples senders and receivers. More information about Pub/Sub is available at https://cloud.google.com/pubsub/docs See https://godoc.org/cloud.google.com/go for authentication, timeouts, connection pooling and similar aspects of this package. Pub/Sub messages are published to topics. A Topic may be created using Client.CreateTopic like so: Messages may then be published to a Topic: Topic.Publish queues the message for publishing and returns immediately. When enough messages have accumulated, or enough time has elapsed, the batch of messages is sent to the Pub/Sub service. Topic.Publish returns a PublishResult, which behaves like a future: its Get method blocks until the message has been sent to the service. The first time you call Topic.Publish on a Topic, goroutines are started in the background. To clean up these goroutines, call Topic.Stop: To receive messages published to a Topic, clients create a Subscription for the topic. There may be more than one subscription per topic; each message that is published to the topic will be delivered to all associated subscriptions. A Subscription may be created like so: Messages are then consumed from a Subscription via callback. The callback is invoked concurrently by multiple goroutines, maximizing throughput. To terminate a call to Subscription.Receive, cancel its context. Once client code has processed the Message, it must call Message.Ack or Message.Nack; otherwise the Message will eventually be redelivered. Ack/Nack MUST be called within the Subscription.Receive handler function, and not from a goroutine. Otherwise, flow control (e.g. ReceiveSettings.MaxOutstandingMessages) will not be respected, and messages can get orphaned when cancelling Receive. If the client cannot or doesn't want to process the message, it can call Message.Nack to speed redelivery. For more information and configuration options, see Ack Deadlines below. Note: It is possible for a Message to be redelivered even if Message.Ack has been called. Client code must be robust to multiple deliveries of messages. Note: This uses pubsub's streaming pull feature. This feature has properties that may be surprising. Please take a look at https://cloud.google.com/pubsub/docs/pull#streamingpull for more details on how streaming pull behaves compared to the synchronous pull method. The number of StreamingPull connections can be configured by setting NumGoroutines in ReceiveSettings. The default value of 10 means the client library will maintain 10 StreamingPull connections. This is more than sufficient for most use cases, as StreamingPull connections can handle up to 10 MB/s (https://cloud.google.com/pubsub/quotas#resource_limits). In some cases, using too many streams can lead to the client library behaving poorly as the application becomes I/O bound. By default, the number of connections in the gRPC conn pool is min(4,GOMAXPROCS). Each connection supports up to 100 streams. Thus, if you have 4 or more CPU cores, the default setting allows a maximum of 400 streams which is already excessive for most use cases. If you want to change the limits on the number of streams, you can change the number of connections in the gRPC connection pool as shown below: The default pubsub deadlines are suitable for most use cases, but may be overridden.
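Below is a hedged sketch of the publish and receive flow described above; the project, topic, and subscription IDs are placeholders.

	package main

	import (
		"context"
		"fmt"
		"log"

		"cloud.google.com/go/pubsub"
	)

	func main() {
		ctx := context.Background()
		client, err := pubsub.NewClient(ctx, "my-project") // placeholder project ID
		if err != nil {
			log.Fatal(err)
		}
		defer client.Close()

		// Create a topic and publish a message; Get blocks until the send completes.
		topic, err := client.CreateTopic(ctx, "my-topic")
		if err != nil {
			log.Fatal(err)
		}
		defer topic.Stop()
		res := topic.Publish(ctx, &pubsub.Message{Data: []byte("hello world")})
		id, err := res.Get(ctx)
		if err != nil {
			log.Fatal(err)
		}
		fmt.Println("published message", id)

		// Create a subscription and receive messages via callback.
		// Receive blocks until ctx is canceled or a non-retryable error occurs.
		sub, err := client.CreateSubscription(ctx, "my-sub", pubsub.SubscriptionConfig{Topic: topic})
		if err != nil {
			log.Fatal(err)
		}
		err = sub.Receive(ctx, func(ctx context.Context, m *pubsub.Message) {
			fmt.Printf("got message: %s\n", m.Data)
			m.Ack() // Ack/Nack must be called inside the handler
		})
		if err != nil {
			log.Fatal(err)
		}
	}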
This section describes the tradeoffs that should be considered when overriding the defaults. Behind the scenes, each message returned by the Pub/Sub server has an associated lease, known as an "ack deadline". Unless a message is acknowledged within the ack deadline, or the client requests that the ack deadline be extended, the message will become eligible for redelivery. As a convenience, the pubsub client will automatically extend deadlines until either: Ack deadlines are extended periodically by the client. The period between extensions, as well as the length of the extension, automatically adjusts based on the time it takes the subscriber application to ack messages (based on the 99th percentile of ack latency). By default, this extension period is capped at 10m, but this limit can be configured by the "MaxExtensionPeriod" setting. This has the effect that subscribers that process messages quickly have their message ack deadlines extended for a short amount, whereas subscribers that process messages slowly have their message ack deadlines extended for a large amount. The net effect is fewer RPCs sent from the client library. For example, consider a subscriber that takes 3 minutes to process each message. Since the library has already recorded several 3-minute "ack latencies" in a percentile distribution, future message extensions are sent with a value of 3 minutes, every 3 minutes. Suppose the application crashes 5 seconds after the library sends such an extension: the Pub/Sub server would wait the remaining 2m55s before re-sending the messages out to other subscribers. Please note that by default, the client library does not use the subscription's AckDeadline for the MaxExtension value. For use cases where message processing exceeds 30 minutes, we recommend using the base client in a pull model, since long-lived streams are periodically killed by firewalls. See the example at https://godoc.org/cloud.google.com/go/pubsub/apiv1#example-SubscriberClient-Pull-LengthyClientProcessing To use an emulator with this library, you can set the PUBSUB_EMULATOR_HOST environment variable to the address at which your emulator is running. This will send requests to that address instead of to Pub/Sub. You can then create and use a client as usual:
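Here is a hedged sketch of pointing the client at an emulator and tuning ReceiveSettings; the emulator address, project ID, subscription ID, and setting values are placeholders.

	package main

	import (
		"context"
		"log"
		"os"
		"time"

		"cloud.google.com/go/pubsub"
	)

	func main() {
		// Point the client at a local emulator instead of the Pub/Sub service.
		// Normally this variable is set in the environment, not in code.
		os.Setenv("PUBSUB_EMULATOR_HOST", "localhost:8085") // placeholder address

		ctx := context.Background()
		client, err := pubsub.NewClient(ctx, "my-project") // placeholder project ID
		if err != nil {
			log.Fatal(err)
		}
		defer client.Close()

		// Tune subscriber behavior, e.g. cap how long ack deadlines are extended
		// and how many StreamingPull connections are opened.
		sub := client.Subscription("my-sub")
		sub.ReceiveSettings.MaxExtensionPeriod = 5 * time.Minute
		sub.ReceiveSettings.NumGoroutines = 4

		_ = sub // call sub.Receive(...) as shown earlier
	}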
Package bigquery provides a client for the BigQuery service. The following assumes a basic familiarity with BigQuery concepts. See https://cloud.google.com/bigquery/docs. See https://godoc.org/cloud.google.com/go for authentication, timeouts, connection pooling and similar aspects of this package. To start working with this package, create a client with NewClient: To query existing tables, create a Client.Query and call its Query.Read method, which starts the query and waits for it to complete: Then iterate through the resulting rows. You can store a row using anything that implements the ValueLoader interface, or with a slice or map of Value. A slice is simplest: You can also use a struct whose exported fields match the query: You can also start the query running and get the results later. Create the query as above, but call Query.Run instead of Query.Read. This returns a Job, which represents an asynchronous operation. Get the job's ID, a printable string. You can save this string to retrieve the results at a later time, even in another process. To retrieve the job's results from the ID, first look up the Job with the Client.JobFromID method: Use the Job.Read method to obtain an iterator, and loop over the rows. Calling Query.Read is preferred for queries with a relatively small result set, as it will call the BigQuery jobs.query API for an optimized query path. If the query doesn't meet that criteria, the method will just combine Query.Run and Job.Read. You can refer to datasets in the client's project with the Client.Dataset method, and in other projects with the Client.DatasetInProject method: These methods create references to datasets, not the datasets themselves. You can have a dataset reference even if the dataset doesn't exist yet. Use Dataset.Create to create a dataset from a reference: You can refer to tables with Dataset.Table. Like Dataset, Table is a reference to an object in BigQuery that may or may not exist. You can create, delete and update the metadata of tables with methods on Table. For instance, you could create a temporary table with: We'll see how to create a table with a schema in the next section. There are two ways to construct schemas with this package. You can build a schema by hand with the Schema struct, like so: Or you can infer the schema from a struct with the InferSchema method: Struct inference supports tags like those of the encoding/json package, so you can change names, ignore fields, or mark a field as nullable (non-required). Fields declared as one of the Null types (NullInt64, NullFloat64, NullString, NullBool, NullTimestamp, NullDate, NullTime, NullDateTime, NullGeography, and NullJSON) are automatically inferred as nullable, so the "nullable" tag is only needed for []byte, *big.Rat and pointer-to-struct fields. Having constructed a schema, you can create a table with it using the Table.Create method like so: You can copy one or more tables to another table. Begin by constructing a Copier describing the copy using the Table.CopierFrom. Then set any desired copy options, and finally call Copier.Run to get a Job: You can chain the call to Copier.Run if you don't want to set options: You can wait for your job to complete with the Job.Wait method: Job.Wait polls with exponential backoff. You can also poll yourself, if you wish: There are two ways to populate a table with this package: load the data from a Google Cloud Storage object, or upload rows directly from your program.
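The following is a hedged sketch of running a query with Query.Read and iterating over the rows with a []bigquery.Value; the project ID and SQL are placeholders.

	package main

	import (
		"context"
		"fmt"
		"log"

		"cloud.google.com/go/bigquery"
		"google.golang.org/api/iterator"
	)

	func main() {
		ctx := context.Background()
		client, err := bigquery.NewClient(ctx, "my-project-id") // placeholder project ID
		if err != nil {
			log.Fatal(err)
		}
		defer client.Close()

		// Placeholder dataset and table names.
		q := client.Query("SELECT name, age FROM mydataset.users LIMIT 10")
		it, err := q.Read(ctx) // starts the query and waits for completion
		if err != nil {
			log.Fatal(err)
		}

		// Iterate over the rows; a []bigquery.Value is the simplest row holder.
		for {
			var row []bigquery.Value
			err := it.Next(&row)
			if err == iterator.Done {
				break
			}
			if err != nil {
				log.Fatal(err)
			}
			fmt.Println(row)
		}
	}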
For loading, first create a GCSReference with the NewGCSReference method, configuring it if desired. Then make a Loader from a table with the Table.LoaderFrom method with the reference, optionally configure it as well, and call its Loader.Run method. To upload, first define a type that implements the ValueSaver interface, which has a single method named Save. Then create an Inserter, and call its Inserter.Put method with a slice of values. You can also upload a struct that doesn't implement ValueSaver. Use the StructSaver type to specify the schema and insert ID by hand: Last but not least, you can just supply the struct or struct pointer directly and the schema will be inferred: BigQuery allows for higher throughput when omitting insertion IDs. To enable this, specify the sentinel NoDedupeID value for the insertion ID when implementing a ValueSaver. If you've been following so far, extracting data from a BigQuery table into a Google Cloud Storage object will feel familiar. First create an Extractor, then optionally configure it, and lastly call its Extractor.Run method. Errors returned by this client are often of the type googleapi.Error. These errors can be introspected for more information by using errors.As with the richer googleapi.Error type. For example: In some cases, your client may receive unstructured googleapi.Error responses. In such cases, it is likely that you have exceeded BigQuery request limits, documented at: https://cloud.google.com/bigquery/quotas
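As a hedged sketch of uploading rows directly from structs with an Inserter, letting the schema be inferred; the project, dataset, and table names are placeholders.

	package main

	import (
		"context"
		"log"

		"cloud.google.com/go/bigquery"
	)

	// Item's exported fields are used to infer the schema and supply row values.
	type Item struct {
		Name  string
		Count int
	}

	func main() {
		ctx := context.Background()
		client, err := bigquery.NewClient(ctx, "my-project-id") // placeholder project ID
		if err != nil {
			log.Fatal(err)
		}
		defer client.Close()

		table := client.Dataset("mydataset").Table("items") // placeholder dataset/table

		// Upload rows directly from structs; the schema is inferred.
		ins := table.Inserter()
		items := []*Item{
			{Name: "widget", Count: 7},
			{Name: "gadget", Count: 3},
		}
		if err := ins.Put(ctx, items); err != nil {
			log.Fatal(err)
		}
	}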
Package restful, a lean package for creating REST-style WebServices without magic. A WebService has a collection of Route objects that dispatch incoming HTTP requests to function calls. Typically, a WebService has a root path (e.g. /users) and defines common MIME types for its routes. WebServices must be added to a container (see below) in order to handle HTTP requests from a server. A Route is defined by an HTTP method, a URL path and (optionally) the MIME types it consumes (Content-Type) and produces (Accept). This package has the logic to find the best matching Route and, if found, call its Function. The (*Request, *Response) arguments provide functions for reading information from the request and writing information back to the response. See the example https://github.com/emicklei/go-restful/blob/master/examples/restful-user-resource.go with a full implementation. A Route parameter can be specified using the format "uri/{var[:regexp]}" or the special version "uri/{var:*}" for matching the tail of the path. For example, /persons/{name:[A-Z][A-Z]} can be used to restrict values for the parameter "name" to only contain capital alphabetic characters. Regular expressions must use the standard Go syntax as described in the regexp package. (https://code.google.com/p/re2/wiki/Syntax) This feature requires the use of a CurlyRouter. A Container holds a collection of WebServices, Filters and an http.ServeMux for multiplexing HTTP requests. Using the statements "restful.Add(...) and restful.Filter(...)" will register WebServices and Filters to the Default Container. The Default container of go-restful uses the http.DefaultServeMux. You can create your own Container and create a new http.Server for that particular container. A filter dynamically intercepts requests and responses to transform or use the information contained in the requests or responses. You can use filters to perform generic logging, measurement, authentication, redirect, set response headers etc. In the restful package there are three hooks into the request/response flow where filters can be added. Each filter must define a FilterFunction: Use the following statement to pass the request/response pair to the next filter or RouteFunction. These are processed before any registered WebService. These are processed before any Route of a WebService. These are processed before calling the function associated with the Route. See the example https://github.com/emicklei/go-restful/blob/master/examples/restful-filters.go with full implementations. Two encodings are supported: gzip and deflate. To enable this for all responses: If an HTTP request includes the Accept-Encoding header then the response content will be compressed using the specified encoding. Alternatively, you can create a Filter that performs the encoding and install it per WebService or Route. See the example https://github.com/emicklei/go-restful/blob/master/examples/restful-encoding-filter.go By installing a pre-defined container filter, your WebService(s) can respond to the OPTIONS HTTP request. By installing the filter of a CrossOriginResourceSharing (CORS), your WebService(s) can handle CORS requests. Unexpected things happen. If a request cannot be processed because of a failure, your service needs to tell via the response what happened and why. For this reason HTTP status codes exist and it is important to use the correct code in every exceptional situation. If path or query parameters are not valid (content or type) then use http.StatusBadRequest.
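Below is a hedged sketch of a WebService with one Route registered on the default container; the path, parameter name, and handler are illustrative.

	package main

	import (
		"log"
		"net/http"

		restful "github.com/emicklei/go-restful"
	)

	func findUser(req *restful.Request, resp *restful.Response) {
		id := req.PathParameter("user-id")
		resp.WriteEntity(map[string]string{"id": id})
	}

	func main() {
		ws := new(restful.WebService)
		ws.Path("/users").
			Consumes(restful.MIME_JSON).
			Produces(restful.MIME_JSON)

		// GET /users/{user-id} dispatches to findUser.
		ws.Route(ws.GET("/{user-id}").To(findUser))

		// Register the WebService with the default container,
		// which serves via http.DefaultServeMux.
		restful.Add(ws)
		log.Fatal(http.ListenAndServe(":8080", nil))
	}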
Despite a valid URI, the resource requested may not be available; in that case use http.StatusNotFound. If the application logic could not process the request (or write the response) then use http.StatusInternalServerError. If the request has a valid URL but the method (GET, PUT, POST, ...) is not allowed, use http.StatusMethodNotAllowed. If the request does not have, or has an unknown, Accept header set for this operation, use http.StatusNotAcceptable. If the request does not have, or has an unknown, Content-Type header set for this operation, use http.StatusUnsupportedMediaType. In addition to setting the correct (error) HTTP status code, you can choose to write a ServiceError message on the response. This package has several options that affect the performance of your service. It is important to understand them and how you can change them. DoNotRecover controls whether panics will be caught to return HTTP 500. If set to false, the container will recover from panics. Default value is true. If content encoding is enabled then the default strategy for getting new gzip/zlib writers and readers is to use a sync.Pool. Because writers are expensive structures, performance is improved even more when using a preloaded cache. You can also inject your own implementation. This package has the means to produce detailed logging of the complete HTTP request matching process and filter invocation. Enabling this feature requires you to set a restful.StdLogger implementation (e.g. a log.Logger instance), such as: The restful.SetLogger() method allows you to override the logger used by the package. By default restful uses the standard library `log` package and logs to stdout. Different logging packages are supported as long as they conform to the `StdLogger` interface defined in the `log` sub-package; writing an adapter for your preferred package is simple. (c) 2012-2015, http://ernestmicklei.com. MIT License
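To make the registration flow above concrete, here is a minimal sketch assuming the github.com/emicklei/go-restful module (for the v3 module the import path gains a /v3 suffix); the /users path, the findUser handler and the logging filter are illustrative placeholders:

    package main

    import (
        "log"
        "net/http"

        restful "github.com/emicklei/go-restful"
    )

    func main() {
        ws := new(restful.WebService)
        ws.Path("/users").
            Consumes(restful.MIME_JSON).
            Produces(restful.MIME_JSON)
        ws.Route(ws.GET("/{user-id}").To(findUser))

        restful.Filter(globalLog) // container filter, runs before any WebService
        restful.Add(ws)           // register with the default container (http.DefaultServeMux)

        log.Fatal(http.ListenAndServe(":8080", nil))
    }

    // findUser reads the path parameter and writes a JSON entity.
    func findUser(req *restful.Request, resp *restful.Response) {
        id := req.PathParameter("user-id")
        resp.WriteEntity(map[string]string{"id": id})
    }

    // globalLog is a FilterFunction; ProcessFilter passes control to the next filter or RouteFunction.
    func globalLog(req *restful.Request, resp *restful.Response, chain *restful.FilterChain) {
        log.Printf("%s %s", req.Request.Method, req.Request.URL)
        chain.ProcessFilter(req, resp)
    }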
Package restful, a lean package for creating REST-style WebServices without magic. A WebService has a collection of Route objects that dispatch incoming HTTP requests to function calls. Typically, a WebService has a root path (e.g. /users) and defines common MIME types for its routes. WebServices must be added to a container (see below) in order to handle HTTP requests from a server. A Route is defined by an HTTP method, a URL path and (optionally) the MIME types it consumes (Content-Type) and produces (Accept). This package has the logic to find the best matching Route and, if found, call its Function. The (*Request, *Response) arguments provide functions for reading information from the request and writing information back to the response. See the example https://github.com/emicklei/go-restful/blob/v3/examples/user-resource/restful-user-resource.go with a full implementation. A Route parameter can be specified using the format "uri/{var[:regexp]}" or the special version "uri/{var:*}" for matching the tail of the path. For example, /persons/{name:[A-Z][A-Z]} can be used to restrict values for the parameter "name" to only contain capital alphabetic characters. Regular expressions must use the standard Go syntax as described in the regexp package. (https://code.google.com/p/re2/wiki/Syntax) This feature requires the use of a CurlyRouter. A Container holds a collection of WebServices, Filters and an http.ServeMux for multiplexing HTTP requests. Using the statements "restful.Add(...)" and "restful.Filter(...)" will register WebServices and Filters with the Default Container. The Default Container of go-restful uses the http.DefaultServeMux. You can create your own Container and create a new http.Server for that particular container. A Filter dynamically intercepts requests and responses to transform or use the information contained in the requests or responses. You can use filters to perform generic logging, measurement, authentication, redirects, setting response headers, etc. In the restful package there are three hooks into the request/response flow where filters can be added. Each filter must define a FilterFunction; use the following statement to pass the request/response pair to the next filter or RouteFunction. Container filters are processed before any registered WebService. WebService filters are processed before any Route of a WebService. Route filters are processed before calling the function associated with the Route. See the example https://github.com/emicklei/go-restful/blob/v3/examples/filters/restful-filters.go with full implementations. Two encodings are supported: gzip and deflate. To enable this for all responses: If an HTTP request includes the Accept-Encoding header then the response content will be compressed using the specified encoding. Alternatively, you can create a Filter that performs the encoding and install it per WebService or Route. See the example https://github.com/emicklei/go-restful/blob/v3/examples/encoding/restful-encoding-filter.go By installing a pre-defined container filter, your WebService(s) can respond to the OPTIONS HTTP request. By installing the filter of a CrossOriginResourceSharing (CORS), your WebService(s) can handle CORS requests. Unexpected things happen. If a request cannot be processed because of a failure, your service needs to tell the client via the response what happened and why. For this reason HTTP status codes exist, and it is important to use the correct code in every exceptional situation. If path or query parameters are not valid (content or type) then use http.StatusBadRequest.
Despite a valid URI, the resource requested may not be available; in that case use http.StatusNotFound. If the application logic could not process the request (or write the response) then use http.StatusInternalServerError. If the request has a valid URL but the method (GET, PUT, POST, ...) is not allowed, use http.StatusMethodNotAllowed. If the request does not have, or has an unknown, Accept header set for this operation, use http.StatusNotAcceptable. If the request does not have, or has an unknown, Content-Type header set for this operation, use http.StatusUnsupportedMediaType. In addition to setting the correct (error) HTTP status code, you can choose to write a ServiceError message on the response. This package has several options that affect the performance of your service. It is important to understand them and how you can change them. DoNotRecover controls whether panics will be caught to return HTTP 500. If set to false, the container will recover from panics. Default value is true. If content encoding is enabled then the default strategy for getting new gzip/zlib writers and readers is to use a sync.Pool. Because writers are expensive structures, performance is improved even more when using a preloaded cache. You can also inject your own implementation. This package has the means to produce detailed logging of the complete HTTP request matching process and filter invocation. Enabling this feature requires you to set a restful.StdLogger implementation (e.g. a log.Logger instance), such as: The restful.SetLogger() method allows you to override the logger used by the package. By default restful uses the standard library `log` package and logs to stdout. Different logging packages are supported as long as they conform to the `StdLogger` interface defined in the `log` sub-package; writing an adapter for your preferred package is simple. (c) 2012-2015, http://ernestmicklei.com. MIT License
Package bigtable is an API to Google Cloud Bigtable. See https://cloud.google.com/bigtable/docs/ for general product documentation. See https://godoc.org/cloud.google.com/go for authentication, timeouts, connection pooling and similar aspects of this package. The principal way to read from a Bigtable is to use the ReadRows method on *Table. A RowRange specifies a contiguous portion of a table. A Filter may be provided through RowFilter to limit or transform the data that is returned. To read a single row, use the ReadRow helper method: This API exposes two distinct forms of writing to a Bigtable: a Mutation and a ReadModifyWrite. The former expresses idempotent operations. The latter expresses non-idempotent operations and returns the new values of updated cells. These operations are performed by creating a Mutation or ReadModifyWrite (with NewMutation or NewReadModifyWrite), building up one or more operations on that, and then using the Apply or ApplyReadModifyWrite methods on a Table. For instance, to set a couple of cells in a table: To increment an encoded value in one cell: If a read or write operation encounters a transient error it will be retried until a successful response, an unretryable error or the context deadline is reached. Non-idempotent writes (where the timestamp is set to ServerTime) will not be retried. In the case of ReadRows, retried calls will not re-scan rows that have already been processed.
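A compact sketch of the read and write paths described above (the project, instance, table, column family and row key names are placeholders):

    package main

    import (
        "context"
        "fmt"
        "log"

        "cloud.google.com/go/bigtable"
    )

    func main() {
        ctx := context.Background()
        client, err := bigtable.NewClient(ctx, "my-project", "my-instance") // placeholder IDs
        if err != nil {
            log.Fatal(err)
        }
        defer client.Close()
        tbl := client.Open("mytable")

        // Idempotent write: build a Mutation and Apply it to a row.
        mut := bigtable.NewMutation()
        mut.Set("links", "example.com", bigtable.Now(), []byte("1"))
        if err := tbl.Apply(ctx, "com.example.www", mut); err != nil {
            log.Fatal(err)
        }

        // Read a single row back with the ReadRow helper.
        row, err := tbl.ReadRow(ctx, "com.example.www")
        if err != nil {
            log.Fatal(err)
        }
        for family, items := range row {
            for _, item := range items {
                fmt.Println(family, item.Column, string(item.Value))
            }
        }
    }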
Package spanner provides a client for reading and writing to Cloud Spanner databases. See the packages under admin for clients that operate on databases and instances. See https://cloud.google.com/spanner/docs/getting-started/go/ for an introduction to Cloud Spanner and additional help on using this API. See https://godoc.org/cloud.google.com/go for authentication, timeouts, connection pooling and similar aspects of this package. To start working with this package, create a client that refers to the database of interest: Remember to close the client after use to free up the sessions in the session pool. To use an emulator with this library, you can set the SPANNER_EMULATOR_HOST environment variable to the address at which your emulator is running. This will send requests to that address instead of to Cloud Spanner. You can then create and use a client as usual: Two Client methods, Apply and Single, work well for simple reads and writes. As a quick introduction, here we write a new row to the database and read it back: All the methods used above are discussed in more detail below. Every Cloud Spanner row has a unique key, composed of one or more columns. Construct keys with a literal of type Key: The keys of a Cloud Spanner table are ordered. You can specify ranges of keys using the KeyRange type: By default, a KeyRange includes its start key but not its end key. Use the Kind field to specify other boundary conditions: A KeySet represents a set of keys. A single Key or KeyRange can act as a KeySet. Use the KeySets function to build the union of several KeySets: AllKeys returns a KeySet that refers to all the keys in a table: All Cloud Spanner reads and writes occur inside transactions. There are two types of transactions, read-only and read-write. Read-only transactions cannot change the database, do not acquire locks, and may access either the current database state or states in the past. Read-write transactions can read the database before writing to it, and always apply to the most recent database state. The simplest and fastest transaction is a ReadOnlyTransaction that supports a single read operation. Use Client.Single to create such a transaction. You can chain the call to Single with a call to a Read method. When you only want one row whose key you know, use ReadRow. Provide the table name, key, and the columns you want to read: Read multiple rows with the Read method. It takes a table name, KeySet, and list of columns: Read returns a RowIterator. You can call the Do method on the iterator and pass a callback: RowIterator also follows the standard pattern for the Google Cloud Client Libraries: Always call Stop when you finish using an iterator this way, whether or not you iterate to the end. (Failing to call Stop could lead you to exhaust the database's session quota.) To read rows with an index, use ReadUsingIndex. The most general form of reading uses SQL statements. Construct a Statement with NewStatement, setting any parameters using the Statement's Params map: You can also construct a Statement directly with a struct literal, providing your own map of parameters. Use the Query method to run the statement and obtain an iterator: Once you have a Row, via an iterator or a call to ReadRow, you can extract column values in several ways. Pass in a pointer to a Go variable of the appropriate type when you extract a value. 
You can extract by column position or name: You can extract all the columns at once: Or you can define a Go struct that corresponds to your columns, and extract into that: For Cloud Spanner columns that may contain NULL, use one of the NullXXX types, like NullString: To perform more than one read in a transaction, use ReadOnlyTransaction: You must call Close when you are done with the transaction. Cloud Spanner read-only transactions conceptually perform all their reads at a single moment in time, called the transaction's read timestamp. Once a read has started, you can call ReadOnlyTransaction's Timestamp method to obtain the read timestamp. By default, a transaction will pick the most recent time (a time where all previously committed transactions are visible) for its reads. This provides the freshest data, but may involve some delay. You can often get a quicker response if you are willing to tolerate "stale" data. You can control the read timestamp selected by a transaction by calling the WithTimestampBound method on the transaction before using it. For example, to perform a query on data that is at most one minute stale, use See the documentation of TimestampBound for more details. To write values to a Cloud Spanner database, construct a Mutation. The spanner package has functions for inserting, updating and deleting rows. Except for the Delete methods, which take a Key or KeyRange, each mutation-building function comes in three varieties. One takes lists of columns and values along with the table name: One takes a map from column names to values: And the third accepts a struct value, and determines the columns from the struct field names: To apply a list of mutations to the database, use Apply: If you need to read before writing in a single transaction, use a ReadWriteTransaction. ReadWriteTransactions may be aborted automatically by the backend and need to be retried. You pass in a function to ReadWriteTransaction, and the client will handle the retries automatically. Use the transaction's BufferWrite method to buffer mutations, which will all be executed at the end of the transaction: Cloud Spanner STRUCT values (https://cloud.google.com/spanner/docs/data-types#struct-type) can be represented by a Go struct value. A proto StructType is built from the field types and field tag information of the Go struct. If a field in the struct type definition has a "spanner:<field_name>" tag, then the value of the "spanner" key in the tag is used as the name for that field in the built StructType, otherwise the field name in the struct definition is used. To specify a field with an empty field name in a Cloud Spanner STRUCT type, use the `spanner:""` tag annotation against the corresponding field in the Go struct's type definition. A STRUCT value can contain STRUCT-typed and Array-of-STRUCT typed fields and these can be specified using named struct-typed and []struct-typed fields inside a Go struct. However, embedded struct fields are not allowed. Unexported struct fields are ignored. NULL STRUCT values in Cloud Spanner are typed. A nil pointer to a Go struct value can be used to specify a NULL STRUCT value of the corresponding StructType. Nil and empty slices of a Go STRUCT type can be used to specify NULL and empty array values respectively of the corresponding StructType. A slice of pointers to a Go struct type can be used to specify an array of NULL-able STRUCT values. Spanner supports DML statements like INSERT, UPDATE and DELETE. 
Use ReadWriteTransaction.Update to run DML statements. It returns the number of rows affected. (You can also use ReadWriteTransaction.Query with a DML statement. The first call to Next on the resulting RowIterator will return iterator.Done, and the RowCount field of the iterator will hold the number of affected rows.) For large databases, it may be more efficient to partition the DML statement. Use Client.PartitionedUpdate to run a DML statement in this way. Not all DML statements can be partitioned. This client has been instrumented to use OpenCensus tracing (http://opencensus.io). To enable tracing, see "Enabling Tracing for a Program" at https://godoc.org/go.opencensus.io/trace. OpenCensus tracing requires Go 1.8 or higher.
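Pulling the pieces above together, a minimal sketch of writing a row with a mutation, reading it back with a single-use read-only transaction, and running a DML statement (the database path, table and columns are placeholders):

    package main

    import (
        "context"
        "log"

        "cloud.google.com/go/spanner"
    )

    func main() {
        ctx := context.Background()
        // Placeholder database path: projects/P/instances/I/databases/D.
        client, err := spanner.NewClient(ctx, "projects/P/instances/I/databases/D")
        if err != nil {
            log.Fatal(err)
        }
        defer client.Close()

        // Write a row by applying a mutation.
        _, err = client.Apply(ctx, []*spanner.Mutation{
            spanner.Insert("Users", []string{"ID", "Name"}, []interface{}{int64(1), "Alice"}),
        })
        if err != nil {
            log.Fatal(err)
        }

        // Read it back with Client.Single and ReadRow.
        row, err := client.Single().ReadRow(ctx, "Users", spanner.Key{int64(1)}, []string{"Name"})
        if err != nil {
            log.Fatal(err)
        }
        var name string
        if err := row.Column(0, &name); err != nil {
            log.Fatal(err)
        }

        // Run a DML statement inside a read-write transaction.
        _, err = client.ReadWriteTransaction(ctx, func(ctx context.Context, txn *spanner.ReadWriteTransaction) error {
            stmt := spanner.Statement{
                SQL:    "UPDATE Users SET Name = @name WHERE ID = @id",
                Params: map[string]interface{}{"name": "Bob", "id": int64(1)},
            }
            count, err := txn.Update(ctx, stmt)
            if err != nil {
                return err
            }
            log.Printf("%d rows updated", count)
            return nil
        })
        if err != nil {
            log.Fatal(err)
        }
    }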
Package datastore provides a client for Google Cloud Datastore. See https://godoc.org/cloud.google.com/go for authentication, timeouts, connection pooling and similar aspects of this package. Entities are the unit of storage and are associated with a key. A key consists of an optional parent key, a string application ID, a string kind (also known as an entity type), and either a StringID or an IntID. A StringID is also known as an entity name or key name. It is valid to create a key with a zero StringID and a zero IntID; this is called an incomplete key, and does not refer to any saved entity. Putting an entity into the datastore under an incomplete key will cause a unique key to be generated for that entity, with a non-zero IntID. An entity's contents are a mapping from case-sensitive field names to values. Valid value types are: Slices of structs are valid, as are structs that contain slices. The Get and Put functions load and save an entity's contents. An entity's contents are typically represented by a struct pointer. Example code: GetMulti, PutMulti and DeleteMulti are batch versions of the Get, Put and Delete functions. They take a []*Key instead of a *Key, and may return a datastore.MultiError when encountering partial failure. Mutate generalizes PutMulti and DeleteMulti to a sequence of any Datastore mutations. It takes a series of mutations created with NewInsert, NewUpdate, NewUpsert and NewDelete and applies them. Datastore.Mutate uses non-transactional mode; if atomicity is required, use Transaction.Mutate instead. An entity's contents can be represented by a variety of types. These are typically struct pointers, but can also be any type that implements the PropertyLoadSaver interface. If using a struct pointer, you do not have to explicitly implement the PropertyLoadSaver interface; the datastore will automatically convert via reflection. If a struct pointer does implement PropertyLoadSaver then those methods will be used in preference to the default behavior for struct pointers. Struct pointers are more strongly typed and are easier to use; PropertyLoadSavers are more flexible. The actual types passed do not have to match between Get and Put calls or even across different calls to datastore. It is valid to put a *PropertyList and get that same entity as a *myStruct, or put a *myStruct0 and get a *myStruct1. Conceptually, any entity is saved as a sequence of properties, and is loaded into the destination value on a property-by-property basis. When loading into a struct pointer, an entity that cannot be completely represented (such as a missing field) will result in an ErrFieldMismatch error but it is up to the caller whether this error is fatal, recoverable or ignorable. By default, for struct pointers, all properties are potentially indexed, and the property name is the same as the field name (and hence must start with an upper case letter). Fields may have a `datastore:"name,options"` tag. The tag name is the property name, which must be one or more valid Go identifiers joined by ".", but may start with a lower case letter. An empty tag name means to just use the field name. A "-" tag name means that the datastore will ignore that field. The only valid options are "omitempty", "noindex" and "flatten". If the options include "omitempty" and the value of the field is an empty value, then the field will be omitted on Save. Empty values are defined as false, 0, a nil pointer, a nil interface value, the zero time.Time, and any empty slice or string. 
(Empty slices are never saved, even without "omitempty".) Other structs, including GeoPoint, are never considered empty. If options include "noindex" then the field will not be indexed. All fields are indexed by default. Strings or byte slices longer than 1500 bytes cannot be indexed; fields used to store long strings and byte slices must be tagged with "noindex" or they will cause Put operations to fail. For a nested struct field, the options may also include "flatten". This indicates that the immediate fields and any nested substruct fields of the nested struct should be flattened. See below for examples. To use multiple options together, separate them by a comma. The order does not matter. If the options string is "" then the comma may be omitted. Example code: A field of slice type corresponds to a Datastore array property, except for []byte, which corresponds to a Datastore blob. Zero-length slice fields are not saved. Slice fields of length 1 or greater are saved as Datastore arrays. When a zero-length Datastore array is loaded into a slice field, the slice field remains unchanged. If a non-array value is loaded into a slice field, the result will be a slice with one element, containing the value. Loading a Datastore Null into a basic type (int, float, etc.) results in a zero value. Loading a Null into a slice of basic type results in a slice of size 1 containing the zero value. Loading a Null into a pointer field results in nil. Loading a Null into a field of struct type is an error. A struct field can be a pointer to a signed integer, floating-point number, string or bool. Putting a non-nil pointer will store its dereferenced value. Putting a nil pointer will store a Datastore Null property, unless the field is marked omitempty, in which case no property will be stored. Loading a Null into a pointer field sets the pointer to nil. Loading any other value allocates new storage with the value, and sets the field to point to it. If the struct contains a *datastore.Key field tagged with the name "__key__", its value will be ignored on Put. When reading the Entity back into the Go struct, the field will be populated with the *datastore.Key value used to query for the Entity. Example code: If the struct pointed to contains other structs, then the nested or embedded structs are themselves saved as Entity values. For example, given these definitions: then an Outer would have one property, Inner, encoded as an Entity value. Note: embedded struct fields must be named to be encoded as an Entity. For example, in the case of a type Outer with an embedded field Inner, all the Inner struct fields will be treated as fields of Outer itself. If an outer struct is tagged "noindex" then all of its implicit flattened fields are effectively "noindex". If the Inner struct contains a *Key field with the name "__key__", like so: then the value of K will be used as the Key for Inner, represented as an Entity value in datastore. If any nested struct fields should be flattened, instead of encoded as Entity values, the nested struct field should be tagged with the "flatten" option. For example, given the following: an Outer's properties would be equivalent to those of: Note that the "flatten" option cannot be used for Entity value fields or PropertyLoadSaver implementers. The server will reject any dotted field names for an Entity value. An entity's contents can also be represented by any type that implements the PropertyLoadSaver interface. This type may be a struct pointer, but it does not have to be. 
The datastore package will call Load when getting the entity's contents, and Save when putting the entity's contents. Possible uses include deriving non-stored fields, verifying fields, or indexing a field only if its value is positive. Example code: The *PropertyList type implements PropertyLoadSaver, and can therefore hold an arbitrary entity's contents. If a type implements the PropertyLoadSaver interface, it may also want to implement the KeyLoader interface. The KeyLoader interface exists to allow implementations of PropertyLoadSaver to also load an Entity's Key into the Go type. This type may be a struct pointer, but it does not have to be. The datastore package will call LoadKey when getting the entity's contents, after calling Load. Example code: To load a Key into a struct which does not implement the PropertyLoadSaver interface, see the "Key Field" section above. Queries retrieve entities based on their properties or key's ancestry. Running a query yields an iterator of results: either keys or (key, entity) pairs. Queries are re-usable and it is safe to call Query.Run from concurrent goroutines. Iterators are not safe for concurrent use. Queries are immutable, and are either created by calling NewQuery, or derived from an existing query by calling a method like Filter or Order that returns a new query value. A query is typically constructed by calling NewQuery followed by a chain of zero or more such methods. These methods are: Example code: Client.RunInTransaction runs a function in a transaction. Example code: Pass the ReadOnly option to RunInTransaction if your transaction is used only for Get, GetMulti or queries. Read-only transactions are more efficient. This package supports the Cloud Datastore emulator, which is useful for testing and development. Environment variables are used to indicate that datastore traffic should be directed to the emulator instead of the production Datastore service. To install and set up the emulator and its environment variables, see the documentation at https://cloud.google.com/datastore/docs/tools/datastore-emulator. To use the emulator with this library, you can set the DATASTORE_EMULATOR_HOST environment variable to the address at which your emulator is running. This will send requests to that address instead of to Cloud Datastore. You can then create and use a client as usual:
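A minimal sketch of the Put/Get and query flow described above (the project ID, the Task kind and its fields are placeholders):

    package main

    import (
        "context"
        "log"

        "cloud.google.com/go/datastore"
    )

    // Task is a hypothetical entity type; the Notes field is stored but not indexed.
    type Task struct {
        Title string
        Done  bool
        Notes string `datastore:",noindex"`
    }

    func main() {
        ctx := context.Background()
        client, err := datastore.NewClient(ctx, "my-project") // placeholder project ID
        if err != nil {
            log.Fatal(err)
        }
        defer client.Close()

        // Put with a named (complete) key, then Get it back.
        key := datastore.NameKey("Task", "sample-task", nil)
        if _, err := client.Put(ctx, key, &Task{Title: "write docs"}); err != nil {
            log.Fatal(err)
        }
        var t Task
        if err := client.Get(ctx, key, &t); err != nil {
            log.Fatal(err)
        }

        // Run a query and load all matching entities into a slice.
        var open []Task
        q := datastore.NewQuery("Task").Filter("Done =", false)
        if _, err := client.GetAll(ctx, q, &open); err != nil {
            log.Fatal(err)
        }
    }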
Package uuid generates and inspects UUIDs. UUIDs are based on RFC 4122 and DCE 1.1: Authentication and Security Services. A UUID is a 16 byte (128 bit) array. UUIDs may be used as keys to maps or compared directly.
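A small sketch of typical use, assuming the github.com/google/uuid module (whose UUID type is a comparable 16-byte array):

    package main

    import (
        "fmt"
        "log"

        "github.com/google/uuid"
    )

    func main() {
        id := uuid.New()     // random (version 4) UUID
        s := id.String()     // canonical textual form
        parsed, err := uuid.Parse(s)
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println(parsed == id) // arrays compare directly: prints true
    }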
Package firestore provides a client for reading and writing to a Cloud Firestore database. See https://cloud.google.com/firestore/docs for an introduction to Cloud Firestore and additional help on using the Firestore API. See https://godoc.org/cloud.google.com/go for authentication, timeouts, connection pooling and similar aspects of this package. Note: you can't use both Cloud Firestore and Cloud Datastore in the same project. To start working with this package, create a client with a project ID: In Firestore, documents are sets of key-value pairs, and collections are groups of documents. A Firestore database consists of a hierarchy of alternating collections and documents, referred to by slash-separated paths like "States/California/Cities/SanFrancisco". This client is built around references to collections and documents. CollectionRefs and DocumentRefs are lightweight values that refer to the corresponding database entities. Creating a ref does not involve any network traffic. Use DocumentRef.Get to read a document. The result is a DocumentSnapshot. Call its Data method to obtain the entire document contents as a map. You can also obtain a single field with DataAt, or extract the data into a struct with DataTo. With the type definition we can extract the document's data into a value of type State: Note that this client supports struct tags beginning with "firestore:" that work like the tags of the encoding/json package, letting you rename fields, ignore them, or omit their values when empty. To retrieve multiple documents from their references in a single call, use Client.GetAll. For writing individual documents, use the methods on DocumentReference. Create creates a new document. The first return value is a WriteResult, which contains the time at which the document was updated. Create fails if the document exists. Another method, Set, either replaces an existing document or creates a new one. To update some fields of an existing document, use Update. It takes a list of paths to update and their corresponding values. Use DocumentRef.Delete to delete a document. You can condition Deletes or Updates on when a document was last changed. Specify these preconditions as an option to a Delete or Update method. The check and the write happen atomically with a single RPC. Here we update a doc only if it hasn't changed since we read it. You could also do this with a transaction. To perform multiple writes at once, use a WriteBatch. Its methods chain for convenience. WriteBatch.Commit sends the collected writes to the server, where they happen atomically. You can use queries to select documents from a collection. Begin with the collection, and build up a query using Select, Where and other methods of Query. Supported operators include '<', '<=', '>', '>=', '==', 'in', 'array-contains', and 'array-contains-any'. Call the Query's Documents method to get an iterator, and use it like the other Google Cloud Client iterators. To get all the documents in a collection, you can use the collection itself as a query. Firestore supports similarity search over embedding vectors. See Query.FindNearest for details. You can partition the documents of a Collection Group, allowing for smaller subqueries. You can also Serialize/Deserialize queries, making it possible to run or stream them elsewhere; in another process or on another machine, for instance. Use a transaction to execute reads and writes atomically. All reads must happen before any writes. 
Transaction creation, commit, rollback and retry are handled for you by the Client.RunTransaction method; just provide a function and use the read and write methods of the Transaction passed to it. This package supports the Cloud Firestore emulator, which is useful for testing and development. Environment variables are used to indicate that Firestore traffic should be directed to the emulator instead of the production Firestore service. To install and run the emulator and set its environment variables, see the documentation at https://cloud.google.com/sdk/gcloud/reference/beta/emulators/firestore/. Once the emulator is running, set FIRESTORE_EMULATOR_HOST to the API endpoint.
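A brief sketch of the reference, write and query flow described above (the project ID, collection, document and field names are placeholders):

    package main

    import (
        "context"
        "fmt"
        "log"

        "cloud.google.com/go/firestore"
        "google.golang.org/api/iterator"
    )

    func main() {
        ctx := context.Background()
        client, err := firestore.NewClient(ctx, "my-project") // placeholder project ID
        if err != nil {
            log.Fatal(err)
        }
        defer client.Close()

        // Write (or replace) a document, then read it back.
        ca := client.Collection("States").Doc("California")
        if _, err := ca.Set(ctx, map[string]interface{}{"Capital": "Sacramento", "Population": 39500000}); err != nil {
            log.Fatal(err)
        }
        snap, err := ca.Get(ctx)
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println(snap.Data())

        // Query the collection and iterate over matching documents.
        iter := client.Collection("States").Where("Population", ">", 10000000).Documents(ctx)
        defer iter.Stop()
        for {
            doc, err := iter.Next()
            if err == iterator.Done {
                break
            }
            if err != nil {
                log.Fatal(err)
            }
            fmt.Println(doc.Ref.ID, doc.Data())
        }
    }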
Package gocql implements a fast and robust Cassandra driver for the Go programming language. Pass a list of initial node IP addresses to NewCluster to create a new cluster configuration: Port can be specified as part of the address; the above is equivalent to: It is recommended to use the value set in the Cassandra config for broadcast_address or listen_address, an IP address, not a domain name. This is because events from Cassandra will use the configured IP address, which is used to index connected hosts. If the domain name specified resolves to more than one IP address then the driver may connect multiple times to the same host, and will not mark the node as down or up based on events. Then you can customize more options (see ClusterConfig): The driver tries to automatically detect the protocol version to use if not set, but you might want to set the protocol version explicitly, as it's not defined which version will be used in certain situations (for example during an upgrade of the cluster when some of the nodes support a different set of protocol versions than others). The driver advertises the module name and version in the STARTUP message, so servers are able to detect the version. If you use a replace directive in go.mod, the driver will send information about the replacement module instead. When ready, create a session from the configuration. Don't forget to Close the session once you are done with it: The CQL protocol uses a SASL-based authentication mechanism and so consists of an exchange of server challenges and client response pairs. The details of the exchanged messages depend on the authenticator used. To use authentication, set ClusterConfig.Authenticator or ClusterConfig.AuthProvider. PasswordAuthenticator is provided to use for username/password authentication: It is possible to secure traffic between the client and server with TLS. To use TLS, set the ClusterConfig.SslOpts field. SslOptions embeds *tls.Config so you can set that directly. There are also helpers to load keys/certificates from files. Warning: Due to historical reasons, SslOptions is insecure by default, so you need to set EnableHostVerification to true if no Config is set. Most users should set SslOptions.Config to a *tls.Config. SslOptions and Config.InsecureSkipVerify interact as follows: For example: To route queries to the local DC first, use DCAwareRoundRobinPolicy. For example, if the datacenter you want to primarily connect to is called dc1 (as configured in the database): The driver can route queries to nodes that hold data replicas based on partition key (preferring the local DC). Note that TokenAwareHostPolicy can take options such as gocql.ShuffleReplicas and gocql.NonLocalReplicasFallback. We recommend running with a token-aware host policy in production for maximum performance. The driver can only use token-aware routing for queries where all partition key columns are query parameters. For example, instead of embedding the partition key directly in the query string, bind it as a query parameter. The DCAwareRoundRobinPolicy can be replaced with RackAwareRoundRobinPolicy, which takes two parameters, datacenter and rack. Instead of dividing hosts into two tiers (local datacenter and remote datacenters) it divides hosts into three (the local rack, the rest of the local datacenter, and everything else). RackAwareRoundRobinPolicy can be combined with TokenAwareHostPolicy in the same way as DCAwareRoundRobinPolicy. Create queries with Session.Query. Query values must not be reused between different executions and must not be modified after starting execution of the query. 
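A minimal sketch of the cluster configuration and session creation described above (the contact points, keyspace, credentials and datacenter name are placeholders):

    package main

    import (
        "log"

        "github.com/gocql/gocql"
    )

    func main() {
        cluster := gocql.NewCluster("192.168.1.1", "192.168.1.2") // initial contact points
        cluster.Keyspace = "example"
        cluster.Consistency = gocql.Quorum
        cluster.Authenticator = gocql.PasswordAuthenticator{Username: "user", Password: "password"}
        // Token-aware routing on top of DC-aware round robin, preferring dc1.
        cluster.PoolConfig.HostSelectionPolicy = gocql.TokenAwareHostPolicy(gocql.DCAwareRoundRobinPolicy("dc1"))

        session, err := cluster.CreateSession()
        if err != nil {
            log.Fatal(err)
        }
        defer session.Close()
    }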
To execute a query without reading results, use Query.Exec: A single row can be read by calling Query.Scan: Multiple rows can be read using Iter.Scanner: See Example for a complete example. The driver automatically prepares DML queries (SELECT/INSERT/UPDATE/DELETE/BATCH statements) and maintains a cache of prepared statements. The CQL protocol does not support preparing other query types. When using CQL protocol >= 4, it is possible to use gocql.UnsetValue as the bound value of a column. This will cause the database to ignore writing the column. The main advantage is the ability to keep the same prepared statement even when you don't want to update some fields, where before you needed to make another prepared statement. Session is safe to use from multiple goroutines, so to execute multiple concurrent queries, just execute them from several worker goroutines. Gocql provides a synchronous-looking API (as recommended for Go APIs) while the queries are executed asynchronously at the protocol level. Null values are unmarshalled as the zero value of the type. If you need to distinguish, for example, between a text column being null and an empty string, you can unmarshal into a *string variable instead of a string. See Example_nulls for a full example. The driver reuses backing memory of slices when unmarshalling. This is an optimization so that a buffer does not need to be allocated for every processed row. However, you need to be careful when storing the slices in other memory structures. When you want to save the data for later use, pass a new slice every time. A common pattern is to declare the slice variable within the scanner loop: The driver supports paging of results with automatic prefetch; see ClusterConfig.PageSize, Session.SetPrefetch, Query.PageSize, and Query.Prefetch. It is also possible to control the paging manually with Query.PageState (this disables automatic prefetch). Manual paging is useful if you want to store the page state externally, for example in a URL to allow users to browse pages of a result. You might want to sign/encrypt the paging state when exposing it externally since it contains data from primary keys. Paging state is specific to the CQL protocol version and the exact query used. It is meant as opaque state that should not be modified. If you send paging state from a different query or protocol version, then the behaviour is not defined (you might get unexpected results or an error from the server). For example, do not send paging state returned by a node using protocol version 3 to a node using protocol version 4. Also, when using protocol version 4, paging state between Cassandra 2.2 and 3.0 is incompatible (https://issues.apache.org/jira/browse/CASSANDRA-10880). The driver does not check whether the paging state is from the same protocol version/statement. You might want to validate this yourself, as it could be a problem if you store paging state externally. For example, if you store paging state in a URL, the URLs might become broken when you upgrade your cluster. Call Query.PageState(nil) to fetch just the first page of the query results. Pass the page state returned by Iter.PageState to Query.PageState of a subsequent query to get the next page. If the length of the slice returned by Iter.PageState is zero, there are no more pages available (or an error occurred). Using PageSize values that are too low will negatively affect performance; a value below 100 is probably too low. 
While Cassandra currently returns exactly PageSize items (except for the last page) in a page, the protocol authors explicitly reserved the right to return a smaller or larger number of items in a page for performance reasons, so don't rely on the page having the exact count of items. See Example_paging for an example of manual paging. There are certain situations when you don't know the list of columns in advance, mainly when the query is supplied by the user. Iter.Columns, Iter.RowData, Iter.MapScan and Iter.SliceMap can be used to handle this case. See Example_dynamicColumns. The CQL protocol supports sending batches of DML statements (INSERT/UPDATE/DELETE) and so does gocql. Use Session.NewBatch to create a new batch and then fill in the details of the individual queries. Then execute the batch with Session.ExecuteBatch. Logged batches ensure atomicity: either all or none of the operations in the batch will succeed, but they have overhead to ensure this property. Unlogged batches don't have the overhead of logged batches, but don't guarantee atomicity. Updates of counters are handled specially by Cassandra, so batches of counter updates have to use the CounterBatch type. A counter batch can only contain statements to update counters. For unlogged batches it is recommended to send only single-partition batches (i.e. all statements in the batch should involve only a single partition). A multi-partition batch needs to be split by the coordinator node and re-sent to the correct nodes. With single-partition batches you can send the batch directly to the node for the partition without incurring the additional network hop. It is also possible to pass an entire BEGIN BATCH .. APPLY BATCH statement to Query.Exec. There are differences in how these are executed. A BEGIN BATCH statement passed to Query.Exec is prepared as a whole in a single statement. Session.ExecuteBatch prepares the individual statements in the batch. If you have variable-length batches using the same statement, using Session.ExecuteBatch is more efficient. See Example_batch for an example. Query.ScanCAS or Query.MapScanCAS can be used to execute a single-statement lightweight transaction (an INSERT/UPDATE .. IF statement) and read its result. See the example for Query.MapScanCAS. Multiple-statement lightweight transactions can be executed as a logged batch that contains at least one conditional statement. All the conditions must return true for the batch to be applied. You can use Session.ExecuteBatchCAS and Session.MapExecuteBatchCAS when executing the batch to learn about the result of the LWT. See the example for Session.MapExecuteBatchCAS. Queries can be marked as idempotent. Marking the query as idempotent tells the driver that the query can be executed multiple times without affecting its result. Non-idempotent queries are not eligible for retries or speculative execution. Idempotent queries are retried in case of errors based on the configured RetryPolicy. Queries can be retried even before they fail by setting a SpeculativeExecutionPolicy. The policy can cause the driver to retry on a different node if the query is taking longer than a specified delay, even before the driver receives an error or timeout from the server. When a query is speculatively executed, the original execution is still executing. The two parallel executions of the query race to return a result; the first result received will be returned. 
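A rough sketch of the batching API described above; it assumes an already-open session and a hypothetical events table:

    package example

    import "github.com/gocql/gocql"

    // insertEvents writes two rows to a single partition atomically using a
    // logged batch. It assumes an open session and a hypothetical table
    // events(day text, id timeuuid, value text, PRIMARY KEY (day, id)).
    func insertEvents(session *gocql.Session, day string) error {
        b := session.NewBatch(gocql.LoggedBatch) // atomic, at the cost of extra overhead
        b.Query(`INSERT INTO events (day, id, value) VALUES (?, ?, ?)`, day, gocql.TimeUUID(), "a")
        b.Query(`INSERT INTO events (day, id, value) VALUES (?, ?, ?)`, day, gocql.TimeUUID(), "b")
        return session.ExecuteBatch(b)
    }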
UDTs can be marshaled to and unmarshaled from a map[string]interface{} or a Go struct (or a type implementing the UDTUnmarshaler, UDTMarshaler, Unmarshaler or Marshaler interfaces). For structs, the cql tag can be used to specify the CQL field name to be mapped to a struct field: See Example_userDefinedTypesMap, Example_userDefinedTypesStruct, ExampleUDTMarshaler, ExampleUDTUnmarshaler. It is possible to provide observer implementations that could be used to gather metrics: The CQL protocol also supports tracing of queries. When enabled, the database will write information about internal events that happened during execution of the query. You can use Query.Trace to request tracing and receive the session ID that the database used to store the trace information in the system_traces.sessions and system_traces.events tables. NewTraceWriter returns an implementation of Tracer that writes the events to a writer. Gathering trace information might be essential for debugging and optimizing queries, but writing traces has overhead, so this feature should not be used on production systems with very high load unless you know what you are doing. Example_batch demonstrates how to execute a batch of statements. Example_dynamicColumns demonstrates how to handle a dynamic column list. Example_marshalerUnmarshaler demonstrates how to implement a Marshaler and Unmarshaler. Example_nulls demonstrates how to distinguish between null and zero value when needed. Null values are unmarshalled as the zero value of the type. If you need to distinguish, for example, between a text column being null and an empty string, you can unmarshal into a *string field. Example_paging demonstrates how to manually fetch pages and use page state. See also the package documentation about paging. Example_set demonstrates how to use sets. Example_userDefinedTypesMap demonstrates how to work with user-defined types as maps. See also Example_userDefinedTypesStruct and the examples for UDTMarshaler and UDTUnmarshaler if you want to map to structs. Example_userDefinedTypesStruct demonstrates how to work with user-defined types as structs. See also the examples for UDTMarshaler and UDTUnmarshaler if you need more control/better performance.
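A brief sketch of mapping a UDT to a Go struct with cql tags; the coords type and places table are hypothetical:

    package example

    import "github.com/gocql/gocql"

    // Coords maps to a hypothetical CQL user-defined type:
    //   CREATE TYPE coords (lat double, long double);
    // The cql tags name the UDT fields each struct field binds to.
    type Coords struct {
        Lat  float64 `cql:"lat"`
        Long float64 `cql:"long"`
    }

    // savePlace binds the struct wherever the UDT appears in a statement,
    // assuming a hypothetical table places(id uuid PRIMARY KEY, location coords).
    func savePlace(session *gocql.Session, id gocql.UUID, loc Coords) error {
        return session.Query(`INSERT INTO places (id, location) VALUES (?, ?)`, id, loc).Exec()
    }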
Package sops manages JSON, YAML and BINARY documents to be encrypted or decrypted. This package should not be used directly. Instead, Sops users should install the command line client via `go get -u go.mozilla.org/sops/v3/cmd/sops`, or use the decryption helper provided at `go.mozilla.org/sops/v3/decrypt`. We do not guarantee API stability for any package other than `go.mozilla.org/sops/v3/decrypt`. A Sops document is a Tree composed of a data branch with arbitrary key/value pairs and a metadata branch with encryption and integrity information. In JSON and YAML formats, the structure of the cleartext tree is preserved: keys are stored in cleartext and only values are encrypted. Keeping the keys in cleartext provides better readability when storing Sops documents in version control, and allows for merging competing changes on documents. This is a major difference between Sops and other encryption tools that store documents as encrypted blobs. In BINARY format, the cleartext data is treated as a single blob and the encrypted document is in JSON format with a single `data` key and a single encrypted value. Sops allows operators to encrypt their documents with multiple master keys. Each of the master keys defined in the document is able to decrypt it, allowing users to share documents amongst themselves without sharing keys, or to use a PGP key as a backup for KMS. In practice, this is achieved by generating a data key for each document that is used to encrypt all values, and encrypting the data key with each master key defined. Being able to decrypt the data key gives access to the document. The integrity of each document is guaranteed by calculating a Message Authentication Code (MAC) that is stored encrypted by the data key. When decrypting a document, the MAC should be recalculated and compared with the MAC stored in the document to verify that no fraudulent changes have been applied. The MAC covers keys and values as well as their ordering.
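For the supported use case of decrypting a document from Go, a minimal sketch using the decrypt helper mentioned above (the file path is a placeholder):

    package main

    import (
        "fmt"
        "log"

        "go.mozilla.org/sops/v3/decrypt"
    )

    func main() {
        // Decrypt a sops-encrypted file; the format argument ("yaml", "json",
        // "binary", ...) should match the document's format.
        cleartext, err := decrypt.File("secrets.enc.yaml", "yaml")
        if err != nil {
            log.Fatal(err)
        }
        fmt.Printf("%s", cleartext)
    }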
Package cloud is the root of the packages used to access Google Cloud Services. See https://godoc.org/cloud.google.com/go for a full list of sub-packages. All clients in sub-packages are configurable via client options. These options are described here: https://godoc.org/google.golang.org/api/option. All the clients in sub-packages support authentication via Google Application Default Credentials (see https://cloud.google.com/docs/authentication/production), or by providing a JSON key file for a Service Account. See the authentication examples in this package for details. By default, all requests in sub-packages will run indefinitely, retrying on transient errors when correctness allows. To set timeouts or arrange for cancellation, use contexts. See the examples for details. Do not attempt to control the initial connection (dialing) of a service by setting a timeout on the context passed to NewClient. Dialing is non-blocking, so timeouts would be ineffective and would only interfere with credential refreshing, which uses the same context. Connection pooling differs in clients based on their transport. Cloud clients either rely on HTTP or gRPC transports to communicate with Google Cloud. Cloud clients that use HTTP (bigquery, compute, storage, and translate) rely on the underlying HTTP transport to cache connections for later re-use. These are cached to the default http.MaxIdleConns and http.MaxIdleConnsPerHost settings in http.DefaultTransport. For gRPC clients (all others in this repo), connection pooling is configurable. Users of cloud client libraries may specify option.WithGRPCConnectionPool(n) as a client option to NewClient calls. This configures the underlying gRPC connections to be pooled and addressed in a round robin fashion. Minimal docker images like Alpine lack CA certificates. This causes RPCs to appear to hang, because gRPC retries indefinitely. See https://github.com/GoogleCloudPlatform/google-cloud-go/issues/928 for more information. To see gRPC logs, set the environment variable GRPC_GO_LOG_SEVERITY_LEVEL. See https://godoc.org/google.golang.org/grpc/grpclog for more information. For HTTP logging, set the GODEBUG environment variable to "http2debug=1" or "http2debug=2". Google Application Default Credentials is the recommended way to authorize and authenticate clients. For information on how to create and obtain Application Default Credentials, see https://developers.google.com/identity/protocols/application-default-credentials. To arrange for an RPC to be canceled, use context.WithCancel. You can use a file with credentials to authenticate and authorize, such as a JSON key file associated with a Google service account. Service Account keys can be created and downloaded from https://console.developers.google.com/permissions/serviceaccounts. This example uses the Datastore client, but the same steps apply to the other client libraries underneath this package. In some cases (for instance, you don't want to store secrets on disk), you can create credentials from in-memory JSON and use the WithCredentials option. The google package in this example is at golang.org/x/oauth2/google. This example uses the PubSub client, but the same steps apply to the other client libraries underneath this package. To set a timeout for an RPC, use context.WithTimeout.
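As a sketch of the credential, timeout and connection-pool guidance above (the Datastore client is just one example; the project ID and key file path are placeholders):

    package main

    import (
        "context"
        "log"
        "time"

        "cloud.google.com/go/datastore"
        "google.golang.org/api/option"
    )

    func main() {
        // Create the client with a background context; don't put a dialing
        // timeout here. Credentials come from a service account key file, and
        // the gRPC connection pool size is set explicitly.
        client, err := datastore.NewClient(context.Background(), "my-project",
            option.WithCredentialsFile("/path/to/keyfile.json"),
            option.WithGRPCConnectionPool(4),
        )
        if err != nil {
            log.Fatal(err)
        }
        defer client.Close()

        // Set timeouts (or arrange cancellation) per RPC via the context.
        ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
        defer cancel()
        var dst struct{ Name string }
        _ = client.Get(ctx, datastore.NameKey("Kind", "name", nil), &dst)
    }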
Package ssooidc provides the API client, operations, and parameter types for AWS SSO OIDC. IAM Identity Center OpenID Connect (OIDC) is a web service that enables a client (such as the CLI or a native application) to register with IAM Identity Center. The service also enables the client to fetch the user’s access token upon successful authentication and authorization with IAM Identity Center. IAM Identity Center uses the sso and identitystore API namespaces. Before you begin using this guide, we recommend that you first review the following important information about how the IAM Identity Center OIDC service works. The IAM Identity Center OIDC service currently implements only the portions of the OAuth 2.0 Device Authorization Grant standard (https://tools.ietf.org/html/rfc8628) that are necessary to enable single sign-on authentication with the CLI. With older versions of the CLI, the service only emits OIDC access tokens, so to obtain a new token, users must explicitly re-authenticate. To access the OIDC flow that supports token refresh and doesn’t require re-authentication, update to the latest CLI version (1.27.10 for CLI V1 and 2.9.0 for CLI V2) with support for OIDC token refresh and configurable IAM Identity Center session durations. For more information, see Configure Amazon Web Services access portal session duration. The access tokens provided by this service grant access to all Amazon Web Services account entitlements assigned to an IAM Identity Center user, not just a particular application. The documentation in this guide does not describe the mechanism to convert the access token into Amazon Web Services Auth (“sigv4”) credentials for use with IAM-protected Amazon Web Services service endpoints. For more information, see GetRoleCredentials in the IAM Identity Center Portal API Reference Guide. For general information about IAM Identity Center, see What is IAM Identity Center? in the IAM Identity Center User Guide.
Package sessions provides cookie and filesystem sessions and infrastructure for custom session backends. The key features are: Let's start with an example that shows the sessions API in a nutshell: First we initialize a session store calling NewCookieStore() and passing a secret key used to authenticate the session. Inside the handler, we call store.Get() to retrieve an existing session or a new one. Then we set some session values in session.Values, which is a map[interface{}]interface{}. And finally we call session.Save() to save the session in the response. Note that in production code, we should check for errors when calling session.Save(r, w), and either display an error message or otherwise handle it. Save must be called before writing to the response, otherwise the session cookie will not be sent to the client. That's all you need to know for the basic usage. Let's take a look at other options, starting with flash messages. Flash messages are session values that last until read. The term appeared with Ruby on Rails a few years back. When we request a flash message, it is removed from the session. To add a flash, call session.AddFlash(), and to get all flashes, call session.Flashes(). Here is an example: Flash messages are useful to set information to be read after a redirection, like after form submissions. There may also be cases where you want to store a complex datatype within a session, such as a struct. Sessions are serialised using the encoding/gob package, so it is easy to register new datatypes for storage in sessions: As it's not possible to pass a raw type as a parameter to a function, gob.Register() relies on us passing it a value of the desired type. In the example above we've passed it a pointer to a struct and a pointer to a custom type representing a map[string]interface{}. (We could have passed non-pointer values if we wished.) This will then allow us to serialise/deserialise values of those types to and from our sessions. Note that because session values are stored in a map[interface{}]interface{}, there's a need to type-assert data when retrieving it. We'll use the Person struct we registered above: By default, session cookies last for a month. This is probably too long for some cases, but it is easy to change this and other attributes during runtime. Sessions can be configured individually or the store can be configured and then all sessions saved using it will use that configuration. We access session.Options or store.Options to set a new configuration. The fields are basically a subset of http.Cookie fields. Let's change the maximum age of a session to one week: Sometimes we may want to change authentication and/or encryption keys without breaking existing sessions. The CookieStore supports key rotation, and to use it you just need to set multiple authentication and encryption keys, in pairs, to be tested in order: New sessions will be saved using the first pair. Old sessions can still be read because the first pair will fail, and the second will be tested. This makes it easy to "rotate" secret keys and still be able to validate existing sessions. Note: for all pairs the encryption key is optional; set it to nil or omit it and encryption won't be used. Multiple sessions can be used in the same request, even with different session backends. When this happens, calling Save() on each session individually would be cumbersome, so we have a way to save all sessions at once: it's sessions.Save(). 
Here's an example: This is possible because when we call Get() from a session store, it adds the session to a common registry. Save() uses it to save all registered sessions.
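A brief sketch pulling the pieces above together (the store key and session name are placeholders; in production the key should come from a secure source):

    package main

    import (
        "net/http"

        "github.com/gorilla/sessions"
    )

    // The key is a placeholder; use a securely generated, secret value in practice.
    var store = sessions.NewCookieStore([]byte("something-very-secret"))

    func myHandler(w http.ResponseWriter, r *http.Request) {
        // Get returns the existing session or a new one.
        session, err := store.Get(r, "session-name")
        if err != nil {
            http.Error(w, err.Error(), http.StatusInternalServerError)
            return
        }
        session.Values["foo"] = "bar"
        session.AddFlash("Saved!")

        // Save all registered sessions before writing to the response body.
        if err := sessions.Save(r, w); err != nil {
            http.Error(w, err.Error(), http.StatusInternalServerError)
            return
        }
        w.Write([]byte("ok"))
    }

    func main() {
        http.HandleFunc("/", myHandler)
        http.ListenAndServe(":8080", nil)
    }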
Package goth provides a simple, clean, and idiomatic way to write authentication packages for Go web applications. This package was inspired by https://github.com/intridea/omniauth. See the examples folder for a working application that lets users authenticate through Twitter or Facebook.
This example shows how to cache service principal authentication data persistently to make it accessible to multiple processes. The example uses ClientCertificateCredential; however, the pattern is the same for all service principal credential types that have a Cache field in their options. The key steps are: Credentials that authenticate users, such as InteractiveBrowserCredential, have a different pattern; see the persistent user authentication example. This example shows how to cache authentication data persistently so a user doesn't need to authenticate interactively every time the application runs. The example uses InteractiveBrowserCredential; however, DeviceCodeCredential has the same API. The key steps are: This example applies to credentials that authenticate users. For credentials that authenticate service principals, see the persistent service principal authentication example. This example demonstrates how to use azidentity to authenticate a go-redis client connecting to Azure Cache for Redis. See the Azure Cache for Redis documentation for information on configuring a cache to use Entra ID authentication.
Package oauth2clientauthextension implements `cauth.Client`. This extension provides an OAuth2 Client Credentials flow authenticator for HTTP- and gRPC-based exporters. The extension fetches the token and refreshes it after expiry. For further details about the OAuth2 Client Credentials flow, refer to https://datatracker.ietf.org/doc/html/rfc6749#section-4.4
Package ldap provides basic LDAP v3 functionality. https://tools.ietf.org/html/rfc4511 Example User Authentication shows how a typical application can verify a login attempt
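A hedged sketch of the login-verification pattern mentioned above, using the common search-then-bind approach and assuming the github.com/go-ldap/ldap/v3 module (host, DNs, filter and passwords are placeholders):

    package main

    import (
        "fmt"
        "log"

        "github.com/go-ldap/ldap/v3"
    )

    func main() {
        l, err := ldap.DialURL("ldaps://ldap.example.com:636")
        if err != nil {
            log.Fatal(err)
        }
        defer l.Close()

        // Bind with a read-only service account, then search for the user's DN.
        if err := l.Bind("cn=readonly,dc=example,dc=com", "service-password"); err != nil {
            log.Fatal(err)
        }
        req := ldap.NewSearchRequest(
            "dc=example,dc=com", ldap.ScopeWholeSubtree, ldap.NeverDerefAliases, 0, 0, false,
            "(uid=jdoe)", []string{"dn"}, nil,
        )
        res, err := l.Search(req)
        if err != nil || len(res.Entries) != 1 {
            log.Fatal("user not found or not unique")
        }

        // Bind as the user to verify the supplied password.
        if err := l.Bind(res.Entries[0].DN, "user-password"); err != nil {
            log.Fatal("invalid credentials")
        }
        fmt.Println("authenticated")
    }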
Package sigv4authextension implements the `auth.Client` interface. This extension provides the Sigv4 process of adding authentication information to AWS API requests sent by HTTP. As such, the extension can be used for HTTP based exporters that export to AWS services.
Package basicauthextension implements an extension offering basic authentication over HTTP.
Package ldap provides basic LDAP v3 functionality. Example User Authentication shows how a typical application can verify a login attempt
Package gophercloud provides a multi-vendor interface to OpenStack-compatible clouds. The library has a three-level hierarchy: providers, services, and resources. Provider structs represent the cloud providers that offer and manage a collection of services. You will generally want to create one Provider client per OpenStack cloud. Use your OpenStack credentials to create a Provider client. The IdentityEndpoint is typically referred to as "auth_url" or "OS_AUTH_URL" in information provided by the cloud operator. Additionally, the cloud may refer to TenantID or TenantName as project_id and project_name. Credentials are specified like so: You can authenticate with a token by doing: You may also use the openstack.AuthOptionsFromEnv() helper function. This function reads in standard environment variables frequently found in an OpenStack `openrc` file. Again note that Gophercloud currently uses "tenant" instead of "project". Service structs are specific to a provider and handle all of the logic and operations for a particular OpenStack service. Examples of services include: Compute, Object Storage, Block Storage. In order to define one, you need to pass in the parent provider, like so: Resource structs are the domain models that services make use of in order to work with and represent the state of API resources: Intermediate Result structs are returned for API operations, which allow generic access to the HTTP headers, response body, and any errors associated with the network transaction. To turn a result into a usable resource struct, you must call the Extract method which is chained to the response, or an Extract function from an applicable extension: All requests that enumerate a collection return a Pager struct that is used to iterate through the results one page at a time. Use the EachPage method on that Pager to handle each successive Page in a closure, then use the appropriate extraction method from that request's package to interpret that Page as a slice of results: If you want to obtain the entire collection of pages without doing any intermediary processing on each page, you can use the AllPages method: This top-level package contains utility functions and data types that are used throughout the provider and service packages. Of particular note for end users are the AuthOptions and EndpointOpts structs. An example retry backoff function, which respects the 429 HTTP response code and a "Retry-After" header:
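A rough sketch of the provider/service/pager flow described above, assuming a pre-v2 Gophercloud release (later major versions add context parameters to these calls); the endpoint, credentials and region are placeholders:

    package main

    import (
        "fmt"
        "log"

        "github.com/gophercloud/gophercloud"
        "github.com/gophercloud/gophercloud/openstack"
        "github.com/gophercloud/gophercloud/openstack/compute/v2/servers"
        "github.com/gophercloud/gophercloud/pagination"
    )

    func main() {
        // Provider client: authenticate against the Identity endpoint.
        opts := gophercloud.AuthOptions{
            IdentityEndpoint: "https://openstack.example.com:5000/v3", // auth_url / OS_AUTH_URL
            Username:         "demo",
            Password:         "secret",
            DomainName:       "Default",
            TenantName:       "demo", // i.e. the project name
        }
        provider, err := openstack.AuthenticatedClient(opts)
        if err != nil {
            log.Fatal(err)
        }

        // Service client for Compute.
        compute, err := openstack.NewComputeV2(provider, gophercloud.EndpointOpts{Region: "RegionOne"})
        if err != nil {
            log.Fatal(err)
        }

        // Page through the server list, extracting resource structs per page.
        err = servers.List(compute, servers.ListOpts{}).EachPage(func(page pagination.Page) (bool, error) {
            ss, err := servers.ExtractServers(page)
            if err != nil {
                return false, err
            }
            for _, s := range ss {
                fmt.Println(s.ID, s.Name)
            }
            return true, nil
        })
        if err != nil {
            log.Fatal(err)
        }
    }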
Package lambda provides the API client, operations, and parameter types for AWS Lambda. Lambda is a compute service that lets you run code without provisioning or managing servers. Lambda runs your code on a high-availability compute infrastructure and performs all of the administration of the compute resources, including server and operating system maintenance, capacity provisioning and automatic scaling, code monitoring and logging. With Lambda, you can run code for virtually any type of application or backend service. For more information about the Lambda service, see What is Lambda in the Lambda Developer Guide. The Lambda API Reference provides information about each of the API methods, including details about the parameters in each API request and response. You can use Software Development Kits (SDKs), Integrated Development Environment (IDE) Toolkits, and command line tools to access the API. For installation instructions, see Tools for Amazon Web Services. For a list of Region-specific endpoints that Lambda supports, see Lambda endpoints and quotas in the Amazon Web Services General Reference. When making the API calls, you will need to authenticate your request by providing a signature. Lambda supports signature version 4. For more information, see Signature Version 4 signing process in the Amazon Web Services General Reference. Because Amazon Web Services SDKs use the CA certificates from your computer, changes to the certificates on the Amazon Web Services servers can cause connection failures when you attempt to use an SDK. You can prevent these failures by keeping your computer's CA certificates and operating system up-to-date. If you encounter this issue in a corporate environment and do not manage your own computer, you might need to ask an administrator to assist with the update process. The following list shows minimum operating system and Java versions: Microsoft Windows versions that have updates from January 2005 or later installed contain at least one of the required CAs in their trust list. Mac OS X 10.4 with Java for Mac OS X 10.4 Release 5 (February 2007), Mac OS X 10.5 (October 2007), and later versions contain at least one of the required CAs in their trust list. Red Hat Enterprise Linux 5 (March 2007), 6, and 7 and CentOS 5, 6, and 7 all contain at least one of the required CAs in their default trusted CA list. Java 1.4.2_12 (May 2006), 5 Update 2 (March 2005), and all later versions, including Java 6 (December 2006), 7, and 8, contain at least one of the required CAs in their default trusted CA list. When accessing the Lambda management console or Lambda API endpoints, whether through browsers or programmatically, you will need to ensure your client machines support any of the following CAs: Amazon Root CA 1, Starfield Services Root Certificate Authority - G2, and Starfield Class 2 Certification Authority. Root certificates from the first two authorities are available from Amazon Trust Services, but keeping your computer up-to-date is the more straightforward solution. To learn more about ACM-provided certificates, see Amazon Web Services Certificate Manager FAQs.
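Since the package is used through the AWS SDK for Go v2, a minimal, illustrative sketch of constructing the client and invoking a function might look like the following; the region, function name, and payload are placeholders:

```golang
package main

import (
	"context"
	"log"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/config"
	"github.com/aws/aws-sdk-go-v2/service/lambda"
)

func main() {
	ctx := context.Background()

	// Load credentials and region from the environment/shared config;
	// the SDK signs requests with Signature Version 4 automatically.
	cfg, err := config.LoadDefaultConfig(ctx, config.WithRegion("us-east-1"))
	if err != nil {
		log.Fatal(err)
	}

	client := lambda.NewFromConfig(cfg)

	// Invoke a (hypothetical) function synchronously.
	out, err := client.Invoke(ctx, &lambda.InvokeInput{
		FunctionName: aws.String("my-function"),
		Payload:      []byte(`{"hello":"world"}`),
	})
	if err != nil {
		log.Fatal(err)
	}
	log.Printf("status: %d, payload: %s", out.StatusCode, out.Payload)
}
```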
Package pubsublite provides an easy way to publish and receive messages using the Pub/Sub Lite service. Google Pub/Sub services are designed to provide reliable, many-to-many, asynchronous messaging between applications. Publisher applications can send messages to a topic and other applications can subscribe to that topic to receive the messages. By decoupling senders and receivers, Google Pub/Sub allows developers to communicate between independently written applications. Compared to Cloud Pub/Sub, Pub/Sub Lite provides partitioned data storage with predefined throughput and storage capacity. Guidance on how to choose between Cloud Pub/Sub and Pub/Sub Lite is available at https://cloud.google.com/pubsub/docs/choosing-pubsub-or-lite. More information about Pub/Sub Lite is available at https://cloud.google.com/pubsub/lite. See https://pkg.go.dev/cloud.google.com/go for authentication, timeouts, connection pooling and similar aspects of this package. Examples can be found at https://pkg.go.dev/cloud.google.com/go/pubsublite#pkg-examples and https://pkg.go.dev/cloud.google.com/go/pubsublite/pscompat#pkg-examples. Complete sample programs can be found at https://github.com/GoogleCloudPlatform/golang-samples/tree/master/pubsublite. The cloud.google.com/go/pubsublite/pscompat subpackage contains clients for publishing and receiving messages, which have similar interfaces to their pubsub.Topic and pubsub.Subscription counterparts in cloud.google.com/go/pubsub. The following examples demonstrate how to declare common interfaces: https://pkg.go.dev/cloud.google.com/go/pubsublite/pscompat#example-NewPublisherClient-Interface and https://pkg.go.dev/cloud.google.com/go/pubsublite/pscompat#example-NewSubscriberClient-Interface. The following imports are required for code snippets below: Messages are published to topics. Pub/Sub Lite topics may be created like so: Close must be called to release resources when an AdminClient is no longer required. See https://cloud.google.com/pubsub/lite/docs/topics for more information about how Pub/Sub Lite topics are configured. See https://cloud.google.com/pubsub/lite/docs/locations for the list of locations where Pub/Sub Lite is available. Pub/Sub Lite uses gRPC streams extensively for high throughput. For more differences from Cloud Pub/Sub, see https://pkg.go.dev/cloud.google.com/go/pubsublite/pscompat. To publish messages to a topic, first create a PublisherClient: Then call Publish: Publish queues the message for publishing and returns immediately. When enough messages have accumulated, or enough time has elapsed, the batch of messages is sent to the Pub/Sub Lite service. Thresholds for batching can be configured in PublishSettings. Publish returns a PublishResult, which behaves like a future; its Get method blocks until the message has been sent (or has failed to be sent) to the service: Once you've finished publishing all messages, call Stop to flush all messages to the service and close gRPC streams. The PublisherClient can no longer be used after it has been stopped or has terminated due to a permanent error. PublisherClients are expected to be long-lived and used for the duration of the application, rather than for publishing small batches of messages. Stop must be called to release resources when a PublisherClient is no longer required. See https://cloud.google.com/pubsub/lite/docs/publishing for more information about publishing. To receive messages published to a topic, create a subscription to the topic.
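A condensed sketch of the admin and publish steps described above, assuming the pubsublite AdminClient and pscompat APIs; the project, zone, topic name, and capacity values are illustrative only:

```golang
package main

import (
	"context"
	"log"

	"cloud.google.com/go/pubsub"
	"cloud.google.com/go/pubsublite"
	"cloud.google.com/go/pubsublite/pscompat"
)

func main() {
	ctx := context.Background()
	const topic = "projects/my-project/locations/us-central1-a/topics/my-topic"

	// Create the topic with an AdminClient; Close releases its resources.
	admin, err := pubsublite.NewAdminClient(ctx, "us-central1")
	if err != nil {
		log.Fatal(err)
	}
	defer admin.Close()
	if _, err := admin.CreateTopic(ctx, pubsublite.TopicConfig{
		Name:                       topic,
		PartitionCount:             1,
		PublishCapacityMiBPerSec:   4,
		SubscribeCapacityMiBPerSec: 4,
		PerPartitionBytes:          30 * 1024 * 1024 * 1024,
		RetentionDuration:          pubsublite.InfiniteRetention,
	}); err != nil {
		log.Fatal(err)
	}

	// Publish a message and block on its result; Stop flushes and closes streams.
	publisher, err := pscompat.NewPublisherClient(ctx, topic)
	if err != nil {
		log.Fatal(err)
	}
	defer publisher.Stop()

	result := publisher.Publish(ctx, &pubsub.Message{Data: []byte("hello")})
	id, err := result.Get(ctx)
	if err != nil {
		log.Fatal(err)
	}
	log.Println("published", id)
}
```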
There may be more than one subscription per topic; each message that is published to the topic will be delivered to all of its subscriptions. Pub/Sub Lite subscriptions may be created like so: See https://cloud.google.com/pubsub/lite/docs/subscriptions for more information about how subscriptions are configured. To receive messages for a subscription, first create a SubscriberClient: Messages are then consumed from a subscription via callback. The callback may be invoked concurrently by multiple goroutines (one per partition that the subscriber client is connected to). Receive blocks until either the context is canceled or a permanent error occurs. To terminate a call to Receive, cancel its context: Clients must call pubsub.Message.Ack() or pubsub.Message.Nack() for every message received. Pub/Sub Lite does not have ACK deadlines, and it has no real concept of NACK; by default, calling Nack terminates the SubscriberClient, because in Pub/Sub Lite only a single subscriber for a given subscription is connected to any partition at a time, so there is no other client that could handle the message. See https://cloud.google.com/pubsub/lite/docs/subscribing for more information about receiving messages. Pub/Sub Lite utilizes gRPC streams extensively. gRPC allows a maximum of 100 streams per connection. Internally, the library uses a default connection pool size of 8, which supports up to 800 topic partitions. To alter the connection pool size, pass a ClientOption to pscompat.NewPublisherClient and pscompat.NewSubscriberClient:
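A corresponding sketch of the subscribe side, assuming pscompat.NewSubscriberClient and option.WithGRPCConnectionPool; the subscription path and pool size are placeholders:

```golang
package main

import (
	"context"
	"log"
	"time"

	"cloud.google.com/go/pubsub"
	"cloud.google.com/go/pubsublite/pscompat"
	"google.golang.org/api/option"
)

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), time.Minute)
	defer cancel()

	const subscription = "projects/my-project/locations/us-central1-a/subscriptions/my-subscription"

	// A larger gRPC connection pool supports more partitions per client.
	subscriber, err := pscompat.NewSubscriberClient(ctx, subscription, option.WithGRPCConnectionPool(16))
	if err != nil {
		log.Fatal(err)
	}

	// Receive blocks until the context is canceled or a permanent error occurs.
	// The callback may run concurrently, one goroutine per connected partition.
	err = subscriber.Receive(ctx, func(ctx context.Context, m *pubsub.Message) {
		log.Printf("got message: %s", m.Data)
		m.Ack() // every message must be acked
	})
	if err != nil {
		log.Fatal(err)
	}
}
```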
Package update provides functionality to implement secure, self-updating Go programs (or other single-file targets). For complete updating solutions please see Equinox (https://equinox.io) and go-tuf (https://github.com/flynn/go-tuf). This example shows how to update a program remotely from a URL. Go binaries can often be large. It can be advantageous to only ship a binary patch to a client instead of the complete program text of a new version. This example shows how to update a program with a bsdiff binary patch. Other patch formats may be applied by implementing the Patcher interface. Updating executable code on a computer can be a dangerous operation unless you take the appropriate steps to guarantee the authenticity of the new code. While checksum verification is important, it should always be combined with signature verification (next section) to guarantee that the code came from a trusted party. go-update validates SHA256 checksums by default, but this is pluggable via the Hash property on the Options struct. This example shows how to guarantee that the newly-updated binary is verified to have an appropriate checksum (that was otherwise retrieved via a secure channel) specified as a hex string. Cryptographic verification of new code from an update is an extremely important way to guarantee the security and integrity of your updates. Verification is performed by validating the signature of a hash of the new file. This means nothing changes if you apply your update with a patch. This example shows how to add signature verification to your updates. To make all of this work an application distributor must first create a public/private key pair and embed the public key into their application. When they issue a new release, the issuer must sign the new executable file with the private key and distribute the signature along with the update. In order to update a Go application with go-update, you must distribute it as a single executable. This is often easy, but some applications require static assets (like HTML and CSS asset files or TLS certificates). In order to update applications like these, you'll want to make sure to embed those asset files into the distributed binary with a tool like go-bindata (my favorite): https://github.com/jteeuwen/go-bindata Mechanisms and protocols for determining whether an update should be applied and, if so, which one are out of scope for this package. Please consult go-tuf (https://github.com/flynn/go-tuf) or Equinox (https://equinox.io) for more complete solutions. go-update only works for self-updating applications that are distributed as a single binary, i.e. applications that do not have additional assets or dependency files. Updating applications that are distributed as multiple on-disk files is out of scope, although this may change in future versions of this library.
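A minimal sketch of the URL-based update with checksum verification described above, assuming go-update's Apply and Options; the download URL and checksum value are placeholders:

```golang
package main

import (
	"encoding/hex"
	"log"
	"net/http"

	"github.com/inconshreveable/go-update"
)

// doUpdate downloads a new binary from url and applies it over the currently
// running executable, verifying a SHA256 checksum supplied as a hex string
// (the checksum itself should be retrieved over a secure channel).
func doUpdate(url, hexChecksum string) error {
	checksum, err := hex.DecodeString(hexChecksum)
	if err != nil {
		return err
	}
	resp, err := http.Get(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()

	// SHA256 is the default hash; the update is only applied if it matches.
	return update.Apply(resp.Body, update.Options{Checksum: checksum})
}

func main() {
	// Placeholder URL and checksum for illustration only.
	const hexChecksum = "2cf24dba5fb0a30e26e83b2ac5b9e29e1b161e5c1fa7425e73043362938b9824"
	if err := doUpdate("https://example.com/myapp/latest", hexChecksum); err != nil {
		log.Fatal(err)
	}
}
```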
Package otp implements both HOTP and TOTP based one time passcodes in a Google Authenticator compatible manner. When adding a TOTP for a user, you must store the "secret" value persistently. It is recommended to store the secret in an encrypted field in your datastore. Due to how TOTP works, it is not possible to store a hash for the secret value like you would a password. To enroll a user, you must first generate an OTP for them. Google Authenticator supports using a QR code as an enrollment method: Validating a TOTP passcode is very easy: just prompt the user for a passcode and retrieve the associated user's previously stored secret.
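A small sketch of enrollment and validation, assuming the totp subpackage's Generate, Validate, Key.URL and Key.Secret; the issuer and account name are placeholders:

```golang
package main

import (
	"bufio"
	"fmt"
	"log"
	"os"
	"strings"

	"github.com/pquerna/otp/totp"
)

func main() {
	// Enrollment: generate a key for the user. Persist key.Secret() (encrypted)
	// in your datastore; key.URL() or key.Image() can feed a QR code.
	key, err := totp.Generate(totp.GenerateOpts{
		Issuer:      "Example.com",
		AccountName: "alice@example.com",
	})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("provisioning URI:", key.URL())

	// Validation: prompt for a passcode and check it against the stored secret.
	fmt.Print("enter passcode: ")
	passcode, _ := bufio.NewReader(os.Stdin).ReadString('\n')
	if totp.Validate(strings.TrimSpace(passcode), key.Secret()) {
		fmt.Println("valid passcode")
	} else {
		fmt.Println("invalid passcode")
	}
}
```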
Package awsproxy defines an extension that accepts requests without any AWS signature authentication applied and forwards them to the AWS API, applying authentication and signing on the way out.
Package awsprometheusremotewriteexporter provides a Prometheus Remote Write Exporter with AWS SigV4 authentication.
Package aztables can access an Azure Storage or CosmosDB account. The aztables package is capable of: The Azure Data Tables library allows you to interact with two types of resources: * the tables in your account * the entities within those tables. Interaction with these resources starts with an instance of a client. To create a client object, you will need the account's table service endpoint URL and a credential that allows you to access the account. The clients support different forms of authentication. The aztables library supports any of the `azcore.TokenCredential` interfaces, authorization via a Connection String, or authorization with a Shared Access Signature token. To use an account shared key (aka account key or access key), provide the key as a string. This can be found in your storage account in the Azure Portal under the "Access Keys" section. Use the key as the credential parameter to authenticate the client: Using a Connection String Depending on your use case and authorization method, you may prefer to initialize a client instance with a connection string instead of providing the account URL and credential separately. To do this, pass the connection string to the `NewServiceClientFromConnectionString` constructor. The connection string can be found in your storage account in the Azure Portal under the "Access Keys" section or with the following Azure CLI command: Using a Shared Access Signature To use a shared access signature (SAS) token, provide the token at the end of your service URL. You can generate a SAS token from the Azure Portal under Shared Access Signature or use the ServiceClient.GetAccountSASToken or Client.GetTableSASToken() functions. Common uses of the Table service include: * Storing TBs of structured data capable of serving web scale applications * Storing datasets that do not require complex joins, foreign keys, or stored procedures and can be de-normalized for fast access * Quickly querying data using a clustered index * Accessing data using the OData protocol and LINQ filter expressions The following components make up the Azure Data Tables Service: * The account * A table within the account, which contains a set of entities * An entity within a table, as a dictionary The Azure Data Tables client library for Go allows you to interact with each of these components through the use of a dedicated client object. Two different clients are provided to interact with the various components of the Table Service: 1. **`ServiceClient`** - operates at the account level: creating, deleting, and listing the tables in the account. 2. **`Client`** - scoped to a single table: creating, querying, and managing the entities within it. Entities are similar to rows. An entity has a PartitionKey, a RowKey, and a set of properties. A property is a name value pair, similar to a column. Not every entity in a table needs to have the same properties. Entities are returned as JSON, allowing developers to use JSON marshalling and unmarshalling techniques. Additionally, you can use the aztables.EDMEntity to ensure proper round-trip serialization of all properties. The following sections provide several code snippets covering some of the most common Table tasks, including: * Creating a table * Creating entities * Querying entities Create a table in your account and get a `Client` to perform operations on the newly created table: Creating Entities Querying entities
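A brief sketch of shared-key authentication, table creation, and entity creation, assuming the client surface described above (NewSharedKeyCredential, NewServiceClientWithSharedKey, CreateTable, NewClient, AddEntity); the account name, key, and table name are placeholders:

```golang
package main

import (
	"context"
	"encoding/json"
	"log"

	"github.com/Azure/azure-sdk-for-go/sdk/data/aztables"
)

func main() {
	ctx := context.Background()

	// Shared key authentication (account name/key are placeholders).
	cred, err := aztables.NewSharedKeyCredential("myaccount", "<account-key>")
	if err != nil {
		log.Fatal(err)
	}
	service, err := aztables.NewServiceClientWithSharedKey("https://myaccount.table.core.windows.net/", cred, nil)
	if err != nil {
		log.Fatal(err)
	}

	// Create a table, then get a Client scoped to it.
	if _, err := service.CreateTable(ctx, "Customers", nil); err != nil {
		log.Fatal(err)
	}
	table := service.NewClient("Customers")

	// Create an entity; EDMEntity ensures round-trip serialization of properties.
	entity := aztables.EDMEntity{
		Entity: aztables.Entity{PartitionKey: "Smith", RowKey: "001"},
		Properties: map[string]any{
			"Email": "smith@example.com",
			"Age":   int32(30),
		},
	}
	payload, err := json.Marshal(entity)
	if err != nil {
		log.Fatal(err)
	}
	if _, err := table.AddEntity(ctx, payload, nil); err != nil {
		log.Fatal(err)
	}
}
```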
Package saml contains a partial implementation of the SAML standard in golang. SAML is a standard for identity federation, i.e. either allowing a third party to authenticate your users or allowing third parties to rely on us to authenticate their users. In SAML parlance an Identity Provider (IDP) is a service that knows how to authenticate users. A Service Provider (SP) is a service that delegates authentication to an IDP. If you are building a service where users log in with someone else's credentials, then you are a Service Provider. This package supports implementing both service providers and identity providers. The core package contains the implementation of SAML. The package samlsp provides helper middleware suitable for use in Service Provider applications. The package samlidp provides a rudimentary IDP service that is useful for testing or as a starting point for other integrations. Version 0.4.0 introduces a few breaking changes to the _samlsp_ package in order to make the package more extensible, and to clean up the interfaces a bit. The default behavior remains the same, but you can now provide interface implementations of _RequestTracker_ (which tracks pending requests), _Session_ (which handles maintaining a session) and _OnError_ (which handles reporting errors). Public fields of _samlsp.Middleware_ have changed, so some usages may require adjustment. See [issue 231](https://github.com/crewjam/saml/issues/231) for details. The option to provide an IDP metadata URL has been deprecated. Instead, we recommend that you use the `FetchMetadata()` function, or fetch the metadata yourself and use the new `ParseMetadata()` function, and pass the metadata in _samlsp.Options.IDPMetadata_. Similarly, the _HTTPClient_ field is now deprecated because it was only used for fetching metadata, which is no longer directly implemented. The fields that manage how cookies are set are deprecated as well. To customize how cookies are managed, provide custom implementation of _RequestTracker_ and/or _Session_, perhaps by extending the default implementations. The deprecated fields have not been removed from the Options structure, but will be in future. In particular we have deprecated the following fields in _samlsp.Options_:

- `Logger` - This was used to emit errors while validating, which is an anti-pattern.
- `IDPMetadataURL` - Instead use `FetchMetadata()`
- `HTTPClient` - Instead pass httpClient to FetchMetadata
- `CookieMaxAge` - Instead assign a custom CookieRequestTracker or CookieSessionProvider
- `CookieName` - Instead assign a custom CookieRequestTracker or CookieSessionProvider
- `CookieDomain` - Instead assign a custom CookieRequestTracker or CookieSessionProvider

Let us assume we have a simple web application to protect. We'll modify this application so it uses SAML to authenticate users.

```golang
package main

import (
	"fmt"
	"net/http"
)

func hello(w http.ResponseWriter, r *http.Request) {
	fmt.Fprintf(w, "Hello, World!")
}

func main() {
	http.HandleFunc("/hello", hello)
	http.ListenAndServe(":8000", nil)
}
```

Each service provider must have a self-signed X.509 key pair established. You can generate your own with something like this: We will use `samlsp.Middleware` to wrap the endpoint we want to protect. Middleware provides both an `http.Handler` to serve the SAML specific URLs and a set of wrappers to require the user to be logged in. We also provide the URL where the service provider can fetch the metadata from the IDP at startup. In our case, we'll use [samltest.id](https://samltest.id/), an identity provider designed for testing.
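A sketch of what the wrapped application might look like, assuming samlsp.New, samlsp.FetchMetadata, and RequireAccount as of v0.4+, the key pair generated above, and samltest.id's metadata URL (the exact metadata URL and the displayName attribute are assumptions):

```golang
package main

import (
	"context"
	"crypto/rsa"
	"crypto/tls"
	"crypto/x509"
	"fmt"
	"net/http"
	"net/url"

	"github.com/crewjam/saml/samlsp"
)

func hello(w http.ResponseWriter, r *http.Request) {
	fmt.Fprintf(w, "Hello, %s!", samlsp.AttributeFromContext(r.Context(), "displayName"))
}

func main() {
	// Load the service provider's key pair generated earlier.
	keyPair, err := tls.LoadX509KeyPair("myservice.cert", "myservice.key")
	if err != nil {
		panic(err)
	}
	keyPair.Leaf, err = x509.ParseCertificate(keyPair.Certificate[0])
	if err != nil {
		panic(err)
	}

	// Fetch the IDP metadata from samltest.id at startup.
	idpMetadataURL, _ := url.Parse("https://samltest.id/saml/idp")
	idpMetadata, err := samlsp.FetchMetadata(context.Background(), http.DefaultClient, *idpMetadataURL)
	if err != nil {
		panic(err)
	}

	rootURL, _ := url.Parse("http://localhost:8000")
	samlSP, err := samlsp.New(samlsp.Options{
		URL:         *rootURL,
		Key:         keyPair.PrivateKey.(*rsa.PrivateKey),
		Certificate: keyPair.Leaf,
		IDPMetadata: idpMetadata,
	})
	if err != nil {
		panic(err)
	}

	// RequireAccount redirects unauthenticated users through the SAML flow;
	// the middleware itself serves the /saml/* endpoints (metadata, ACS).
	http.Handle("/hello", samlSP.RequireAccount(http.HandlerFunc(hello)))
	http.Handle("/saml/", samlSP)
	http.ListenAndServe(":8000", nil)
}
```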
Next we'll have to register our service provider with the identity provider to establish trust from the service provider to the IDP. For [samltest.id](https://samltest.id/), you can do something like: Navigate to https://samltest.id/upload.php and upload the file you fetched. Now you should be able to authenticate. The flow should look like this:

1. You browse to `localhost:8000/hello`
2. The middleware redirects you to `https://samltest.id/idp/profile/SAML2/Redirect/SSO`
3. samltest.id prompts you for a username and password.
4. samltest.id returns you an HTML document which contains an HTML form set up to POST to `localhost:8000/saml/acs`. The form is automatically submitted if you have JavaScript enabled.
5. The local service validates the response, issues a session cookie, and redirects you to the original URL, `localhost:8000/hello`.
6. This time when `localhost:8000/hello` is requested there is a valid session and so the main content is served.

Please see `example/idp/` for a substantially complete example of how to use the library and helpers to be an identity provider. The SAML standard is huge and complex with many dark corners and strange, unused features. This package implements the most commonly used subset of these features required to provide a single sign on experience. The package supports at least the subset of SAML known as [interoperable SAML](http://saml2int.org). This package supports the Web SSO profile. Message flows from the service provider to the IDP are supported using the HTTP Redirect binding and the HTTP POST binding. Message flows from the IDP to the service provider are supported via the HTTP POST binding. The package can produce signed SAML assertions, and can validate both signed and encrypted SAML assertions. It does not support signed or encrypted requests. The _RelayState_ parameter allows you to pass user state information across the authentication flow. The most common use for this is to allow a user to request a deep link into your site, be redirected through the SAML login flow, and upon successful completion, be directed to the originally requested link, rather than the root. Unfortunately, _RelayState_ is less useful than it could be. Firstly, it is not authenticated, so anything you supply must be signed to avoid XSS or CSRF. Secondly, it is limited to 80 bytes in length, which precludes signing. (See section 3.6.3.1 of SAMLProfiles.) The SAML specification is a collection of PDFs (sadly):

- [SAMLCore](http://docs.oasis-open.org/security/saml/v2.0/saml-core-2.0-os.pdf) defines data types.
- [SAMLBindings](http://docs.oasis-open.org/security/saml/v2.0/saml-bindings-2.0-os.pdf) defines the details of the HTTP requests in play.
- [SAMLProfiles](http://docs.oasis-open.org/security/saml/v2.0/saml-profiles-2.0-os.pdf) describes data flows.
- [SAMLConformance](http://docs.oasis-open.org/security/saml/v2.0/saml-conformance-2.0-os.pdf) includes a support matrix for various parts of the protocol.

[SAMLtest](https://samltest.id/) is a testing ground for SAML service and identity providers. Please do not report security issues in the issue tracker. Rather, please contact me directly at ross@kndr.org ([PGP Key `78B6038B3B9DFB88`](https://keybase.io/crewjam)).
Package gokrb5 provides a Kerberos 5 implementation for Go. This is a pure Go implementation and does not have dependencies on native libraries. Features include: an HTTP handler wrapper that implements SPNEGO Kerberos authentication; an HTTP handler wrapper that decodes Microsoft AD PAC authorization data; a client that can authenticate to an SPNEGO Kerberos authenticated web service; the ability to change a client's password; Kerberos libraries for custom integration; parsing of keytab files; and parsing of krb5.conf files.
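A rough sketch of a client authenticating to an SPNEGO-protected web service, assuming the v8 module layout and the client, config, and spnego packages; the realm, credentials, and service URL are placeholders:

```golang
package main

import (
	"log"
	"net/http"

	"github.com/jcmturner/gokrb5/v8/client"
	"github.com/jcmturner/gokrb5/v8/config"
	"github.com/jcmturner/gokrb5/v8/spnego"
)

func main() {
	// Load the realm configuration from krb5.conf.
	cfg, err := config.Load("/etc/krb5.conf")
	if err != nil {
		log.Fatal(err)
	}

	// Create a client with a username/password and obtain a TGT.
	cl := client.NewWithPassword("alice", "EXAMPLE.COM", "password", cfg)
	if err := cl.Login(); err != nil {
		log.Fatal(err)
	}

	// Call an SPNEGO-protected web service using the Kerberos client.
	req, _ := http.NewRequest("GET", "http://service.example.com/protected", nil)
	spnegoClient := spnego.NewClient(cl, nil, "")
	resp, err := spnegoClient.Do(req)
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()
	log.Println("status:", resp.Status)
}
```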
Package cognitoidentityprovider provides the API client, operations, and parameter types for Amazon Cognito Identity Provider. With the Amazon Cognito user pools API, you can configure user pools and authenticate users. To authenticate users from third-party identity providers (IdPs) in this API, you can link IdP users to native user profiles. Learn more about the authentication and authorization of federated users at Adding user pool sign-in through a third party and in the User pool federation endpoints and hosted UI reference. This API reference provides detailed information about API operations and object types in Amazon Cognito. Along with resource management operations, the Amazon Cognito user pools API includes classes of operations and authorization models for client-side and server-side authentication of users. You can interact with operations in the Amazon Cognito user pools API as any of the following subjects. An administrator who wants to configure user pools, app clients, users, groups, or other user pool functions. A server-side app, like a web application, that wants to use its Amazon Web Services privileges to manage, authenticate, or authorize a user. A client-side app, like a mobile app, that wants to make unauthenticated requests to manage, authenticate, or authorize a user. For more information, see Using the Amazon Cognito user pools API and user pool endpoints in the Amazon Cognito Developer Guide. With your Amazon Web Services SDK, you can build the logic to support operational flows in every use case for this API. You can also make direct REST API requests to Amazon Cognito user pools service endpoints. The following links can get you started with the CognitoIdentityProvider client in other supported Amazon Web Services SDKs. Amazon Web Services Command Line Interface Amazon Web Services SDK for .NET Amazon Web Services SDK for C++ Amazon Web Services SDK for Go Amazon Web Services SDK for Java V2 Amazon Web Services SDK for JavaScript Amazon Web Services SDK for PHP V3 Amazon Web Services SDK for Python Amazon Web Services SDK for Ruby V3 To get started with an Amazon Web Services SDK, see Tools to Build on Amazon Web Services. For example actions and scenarios, see Code examples for Amazon Cognito Identity Provider using Amazon Web Services SDKs.
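As a small illustration of client-side authentication with the AWS SDK for Go v2, the following sketch uses the USER_PASSWORD_AUTH flow; the app client ID and user credentials are placeholders:

```golang
package main

import (
	"context"
	"log"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/config"
	"github.com/aws/aws-sdk-go-v2/service/cognitoidentityprovider"
	"github.com/aws/aws-sdk-go-v2/service/cognitoidentityprovider/types"
)

func main() {
	ctx := context.Background()
	cfg, err := config.LoadDefaultConfig(ctx)
	if err != nil {
		log.Fatal(err)
	}
	client := cognitoidentityprovider.NewFromConfig(cfg)

	// Client-side sign-in with the USER_PASSWORD_AUTH flow; the app client ID
	// and the credentials below are placeholders.
	out, err := client.InitiateAuth(ctx, &cognitoidentityprovider.InitiateAuthInput{
		AuthFlow: types.AuthFlowTypeUserPasswordAuth,
		ClientId: aws.String("my-app-client-id"),
		AuthParameters: map[string]string{
			"USERNAME": "alice",
			"PASSWORD": "correct-horse-battery-staple",
		},
	})
	if err != nil {
		log.Fatal(err)
	}
	if out.AuthenticationResult != nil {
		log.Println("access token:", aws.ToString(out.AuthenticationResult.AccessToken))
	}
}
```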
Package auth implements the configuration settings to ensure authentication on incoming requests, and allows exporters to add authentication on outgoing requests.
Package configauth implements the configuration settings to ensure authentication on incoming requests, and allows exporters to add authentication on outgoing requests.
Package securecookie encodes and decodes authenticated and optionally encrypted cookie values. Secure cookies can't be forged, because their values are validated using HMAC. When encrypted, the content is also inaccessible to malicious eyes. To use it, first create a new SecureCookie instance: The hashKey is required, used to authenticate the cookie value using HMAC. It is recommended to use a key with 32 or 64 bytes. The blockKey is optional, used to encrypt the cookie value -- set it to nil to not use encryption. If set, the length must correspond to the block size of the encryption algorithm. For AES, used by default, valid lengths are 16, 24, or 32 bytes to select AES-128, AES-192, or AES-256. Strong keys can be created using the convenience function GenerateRandomKey(). Once a SecureCookie instance is set, use it to encode a cookie value: Later, use the same SecureCookie instance to decode and validate a cookie value: We stored a map[string]string, but secure cookies can hold any value that can be encoded using encoding/gob. To store custom types, they must be registered first using gob.Register(). For basic types this is not needed; it works out of the box.
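A compact sketch of the encode/decode round trip described above, using hypothetical /set and /read handlers; in a real application the keys must be persisted rather than regenerated on startup:

```golang
package main

import (
	"fmt"
	"net/http"

	"github.com/gorilla/securecookie"
)

// In production these keys must be stored securely and persist across restarts.
var (
	hashKey  = securecookie.GenerateRandomKey(64)
	blockKey = securecookie.GenerateRandomKey(32)
	s        = securecookie.New(hashKey, blockKey)
)

func SetCookieHandler(w http.ResponseWriter, r *http.Request) {
	value := map[string]string{"foo": "bar"}
	if encoded, err := s.Encode("cookie-name", value); err == nil {
		http.SetCookie(w, &http.Cookie{
			Name:     "cookie-name",
			Value:    encoded,
			Path:     "/",
			Secure:   true,
			HttpOnly: true,
		})
	}
}

func ReadCookieHandler(w http.ResponseWriter, r *http.Request) {
	cookie, err := r.Cookie("cookie-name")
	if err != nil {
		http.Error(w, "no cookie", http.StatusBadRequest)
		return
	}
	value := make(map[string]string)
	// Decode validates the HMAC (and decrypts, since a blockKey is set).
	if err := s.Decode("cookie-name", cookie.Value, &value); err != nil {
		http.Error(w, "invalid cookie", http.StatusBadRequest)
		return
	}
	fmt.Fprintf(w, "foo=%s", value["foo"])
}

func main() {
	http.HandleFunc("/set", SetCookieHandler)
	http.HandleFunc("/read", ReadCookieHandler)
	http.ListenAndServe(":8080", nil)
}
```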
Package azcosmos implements the client to interact with the Azure Cosmos DB SQL API. The azcosmos package is capable of: Types of Credentials The clients support different forms of authentication. The azcosmos library supports authorization via Azure Active Directory or an account key. Using Azure Active Directory To create a client, you can use any of the TokenCredential implementations provided by `azidentity`. Using account keys To create a client, you will need the account's endpoint URL and a key credential. Using connection string To create a client, you will need the account's connection string. The following are relevant concepts for the usage of the client: The following sections provide several code snippets covering some of the most common Cosmos DB tasks, including: Create a database and obtain a `DatabaseClient` to perform operations on your newly created database. Create a container on an existing database and obtain a `ContainerClient` to perform operations on your newly created container. Creating, reading, and deleting items Querying items Querying items with parametrized queries Using Transactional batch
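A short sketch of key-based authentication and item creation, assuming the client surface described above (NewKeyCredential, NewClientWithKey, NewContainer, CreateItem); the endpoint, key, database, and container names are placeholders:

```golang
package main

import (
	"context"
	"encoding/json"
	"log"

	"github.com/Azure/azure-sdk-for-go/sdk/data/azcosmos"
)

func main() {
	ctx := context.Background()

	// Account key authentication (endpoint and key are placeholders).
	cred, err := azcosmos.NewKeyCredential("<account-key>")
	if err != nil {
		log.Fatal(err)
	}
	client, err := azcosmos.NewClientWithKey("https://myaccount.documents.azure.com:443/", cred, nil)
	if err != nil {
		log.Fatal(err)
	}

	// Obtain a container client for an existing database and container.
	container, err := client.NewContainer("mydb", "mycontainer")
	if err != nil {
		log.Fatal(err)
	}

	// Create an item in the container under its partition key.
	item := map[string]string{"id": "1", "category": "personal", "name": "groceries"}
	payload, err := json.Marshal(item)
	if err != nil {
		log.Fatal(err)
	}
	pk := azcosmos.NewPartitionKeyString("personal")
	if _, err := container.CreateItem(ctx, pk, payload, nil); err != nil {
		log.Fatal(err)
	}
}
```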
Package csrf (gorilla/csrf) provides Cross Site Request Forgery (CSRF) prevention middleware for Go web applications & services. It includes: * A `csrf.Protect` middleware/handler that provides CSRF protection on routes attached to a router or a sub-router. * A `csrf.Token` function that provides the token to pass into your response, whether that be an HTML form or a JSON response body. * ... and a `csrf.TemplateField` helper that you can pass into your `html/template` templates to replace a `{{ .csrfField }}` template tag with a hidden input field. gorilla/csrf is easy to use: add the middleware to your handlers as shown below: ... and then collect the token with `csrf.Token(r)` before passing it to the template, JSON body or HTTP header (you pick!). gorilla/csrf inspects the form body (first) and HTTP headers (second) on subsequent POST/PUT/PATCH/DELETE/etc. requests for the token. Note that the authentication key passed to `csrf.Protect([]byte(key))` should be 32 bytes long and persist across application restarts. Generating a random key won't allow you to authenticate existing cookies and will break your CSRF validation. Here's the common use-case: HTML forms you want to provide CSRF protection for, in order to protect against malicious POST requests: Note that the CSRF middleware will (by necessity) consume the request body if the token is passed via POST form values. If you need to consume this in your handler, insert your own middleware earlier in the chain to capture the request body. You can also send the CSRF token in the response header. This approach is useful if you're using a front-end JavaScript framework like Ember or Angular, or are providing a JSON API: If you're writing a client that's supposed to mimic browser behavior, make sure to send back the CSRF cookie (the default name is _gorilla_csrf, but this can be changed with the CookieName Option) along with either the X-CSRF-Token header or the gorilla.csrf.Token form field. In addition: getting CSRF protection right is important, so here's some background: * This library generates unique-per-request (masked) tokens as a mitigation against the BREACH attack (http://breachattack.com/). * The 'base' (unmasked) token is stored in the session, which means that multiple browser tabs won't cause a user problems as their per-request token is compared with the base token. * Operates on a "whitelist only" approach where safe (non-mutating) HTTP methods (GET, HEAD, OPTIONS, TRACE) are the *only* methods where token validation is not enforced. * The design is based on the battle-tested Django (https://docs.djangoproject.com/en/1.8/ref/csrf/) and Ruby on Rails (http://api.rubyonrails.org/classes/ActionController/RequestForgeryProtection.html) approaches. * Cookies are authenticated and based on the securecookie (https://github.com/gorilla/securecookie) library. They're also Secure (issued over HTTPS only) and are HttpOnly by default, because sane defaults are important. * Go's `crypto/rand` library is used to generate the 32 byte (256 bit) tokens and the one-time-pad used for masking them. This library does not seek to be adventurous.
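A minimal end-to-end sketch, assuming gorilla/mux for routing and a hypothetical /signup form; the 32-byte key literal stands in for a real key that persists across restarts:

```golang
package main

import (
	"net/http"

	"github.com/gorilla/csrf"
	"github.com/gorilla/mux"
)

func main() {
	r := mux.NewRouter()

	// GET: render a form containing the hidden CSRF field, and expose the
	// token in a header for JavaScript clients.
	r.HandleFunc("/signup", func(w http.ResponseWriter, r *http.Request) {
		w.Header().Set("X-CSRF-Token", csrf.Token(r))
		w.Write([]byte(`<form method="POST" action="/signup">` +
			string(csrf.TemplateField(r)) +
			`<input type="submit" value="Sign up"></form>`))
	}).Methods("GET")

	// POST: by the time this handler runs, the middleware has already
	// validated the token from the form body or the X-CSRF-Token header.
	r.HandleFunc("/signup", func(w http.ResponseWriter, r *http.Request) {
		w.Write([]byte("signed up"))
	}).Methods("POST")

	// The 32-byte key must be persistent across restarts; this literal is a placeholder.
	CSRF := csrf.Protect([]byte("32-byte-long-auth-key-goes-here!"))
	http.ListenAndServe(":8000", CSRF(r))
}
```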