Package firestore provides a client for reading and writing to a Cloud Firestore database. See https://cloud.google.com/firestore/docs for an introduction to Cloud Firestore and additional help on using the Firestore API. See https://godoc.org/cloud.google.com/go for authentication, timeouts, connection pooling and similar aspects of this package. Note: you can't use both Cloud Firestore and Cloud Datastore in the same project. To start working with this package, create a client with a project ID: In Firestore, documents are sets of key-value pairs, and collections are groups of documents. A Firestore database consists of a hierarchy of alternating collections and documents, referred to by slash-separated paths like "States/California/Cities/SanFrancisco". This client is built around references to collections and documents. CollectionRefs and DocumentRefs are lightweight values that refer to the corresponding database entities. Creating a ref does not involve any network traffic. Use DocumentRef.Get to read a document. The result is a DocumentSnapshot. Call its Data method to obtain the entire document contents as a map. You can also obtain a single field with DataAt, or extract the data into a struct with DataTo. With the type definition, we can extract the document's data into a value of type State: Note that this client supports struct tags beginning with "firestore:" that work like the tags of the encoding/json package, letting you rename fields, ignore them, or omit their values when empty. To retrieve multiple documents from their references in a single call, use Client.GetAll. For writing individual documents, use the methods on DocumentReference. Create creates a new document. The first return value is a WriteResult, which contains the time at which the document was updated. Create fails if the document exists. Another method, Set, either replaces an existing document or creates a new one. To update some fields of an existing document, use Update. It takes a list of paths to update and their corresponding values. Use DocumentRef.Delete to delete a document. You can condition Deletes or Updates on when a document was last changed. Specify these preconditions as an option to a Delete or Update method. The check and the write happen atomically with a single RPC. Here we update a doc only if it hasn't changed since we read it. You could also do this with a transaction. To perform multiple writes at once, use a WriteBatch. Its methods chain for convenience. WriteBatch.Commit sends the collected writes to the server, where they happen atomically. You can use queries to select documents from a collection. Begin with the collection, and build up a query using Select, Where and other methods of Query. Supported operators include '<', '<=', '>', '>=', '==', 'in', 'array-contains', and 'array-contains-any'. Call the Query's Documents method to get an iterator, and use it like the other Google Cloud Client iterators. To get all the documents in a collection, you can use the collection itself as a query. Firestore supports similarity search over embedding vectors. See Query.FindNearest for details. You can partition the documents of a Collection Group, allowing for smaller subqueries. You can also serialize and deserialize queries, making it possible to run or stream them elsewhere, for example in another process or on another machine. Use a transaction to execute reads and writes atomically. All reads must happen before any writes.
Transaction creation, commit, rollback and retry are handled for you by the Client.RunTransaction method; just provide a function and use the read and write methods of the Transaction passed to it. This package supports the Cloud Firestore emulator, which is useful for testing and development. Environment variables are used to indicate that Firestore traffic should be directed to the emulator instead of the production Firestore service. To install and run the emulator, and for details of its environment variables, see the documentation at https://cloud.google.com/sdk/gcloud/reference/beta/emulators/firestore/. Once the emulator is running, set FIRESTORE_EMULATOR_HOST to the API endpoint.
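Putting the pieces above together, here is a minimal sketch of creating a client, reading a document into a struct, and performing a conditional update; the project ID "my-project", the "States" collection and the State type are illustrative assumptions:

    package main

    import (
        "context"
        "fmt"
        "log"

        "cloud.google.com/go/firestore"
    )

    // State mirrors the document layout used in the discussion above.
    type State struct {
        Capital    string `firestore:"capital"`
        Population int64  `firestore:"pop,omitempty"`
    }

    func main() {
        ctx := context.Background()
        client, err := firestore.NewClient(ctx, "my-project") // assumed project ID
        if err != nil {
            log.Fatal(err)
        }
        defer client.Close()

        ca := client.Doc("States/California")

        // Read the document and extract its data into a State value.
        snap, err := ca.Get(ctx)
        if err != nil {
            log.Fatal(err)
        }
        var s State
        if err := snap.DataTo(&s); err != nil {
            log.Fatal(err)
        }
        fmt.Println(s)

        // Update one field, but only if the document hasn't changed since we read it.
        _, err = ca.Update(ctx,
            []firestore.Update{{Path: "pop", Value: s.Population + 1}},
            firestore.LastUpdateTime(snap.UpdateTime))
        if err != nil {
            log.Fatal(err)
        }
    }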
Package gocql implements a fast and robust Cassandra driver for the Go programming language. Pass a list of initial node IP addresses to NewCluster to create a new cluster configuration: Port can be specified as part of the address; the above is equivalent to: It is recommended to use the value set in the Cassandra config for broadcast_address or listen_address, an IP address rather than a domain name. This is because events from Cassandra will use the configured IP address, which is used to index connected hosts. If the specified domain name resolves to more than one IP address, the driver may connect multiple times to the same host, and will not mark the node as down or up from events. Then you can customize more options (see ClusterConfig): The driver tries to automatically detect the protocol version to use if not set, but you might want to set the protocol version explicitly, as it's not defined which version will be used in certain situations (for example during upgrade of the cluster when some of the nodes support a different set of protocol versions than other nodes). The driver advertises the module name and version in the STARTUP message, so servers are able to detect the version. If you use a replace directive in go.mod, the driver will send information about the replacement module instead. When ready, create a session from the configuration. Don't forget to Close the session once you are done with it: The CQL protocol uses a SASL-based authentication mechanism, which consists of an exchange of server challenges and client responses. The details of the exchanged messages depend on the authenticator used. To use authentication, set ClusterConfig.Authenticator or ClusterConfig.AuthProvider. PasswordAuthenticator is provided to use for username/password authentication: It is possible to secure traffic between the client and server with TLS. To use TLS, set the ClusterConfig.SslOpts field. SslOptions embeds *tls.Config so you can set that directly. There are also helpers to load keys/certificates from files. Warning: Due to historical reasons, SslOptions is insecure by default, so you need to set EnableHostVerification to true if no Config is set. Most users should set SslOptions.Config to a *tls.Config. SslOptions and Config.InsecureSkipVerify interact as follows: For example: To route queries to local DC first, use DCAwareRoundRobinPolicy. For example, if the datacenter you primarily want to connect to is called dc1 (as configured in the database): The driver can route queries to nodes that hold data replicas based on partition key (preferring local DC). Note that TokenAwareHostPolicy can take options such as gocql.ShuffleReplicas and gocql.NonLocalReplicasFallback. We recommend running with a token-aware host policy in production for maximum performance. The driver can only use token-aware routing for queries where all partition key columns are query parameters. For example, instead of embedding the partition key value in the query string, pass it as a bind parameter. The DCAwareRoundRobinPolicy can be replaced with RackAwareRoundRobinPolicy, which takes two parameters, datacenter and rack. Instead of dividing hosts with two tiers (local datacenter and remote datacenters) it divides hosts into three (the local rack, the rest of the local datacenter, and everything else). RackAwareRoundRobinPolicy can be combined with TokenAwareHostPolicy in the same way as DCAwareRoundRobinPolicy. Create queries with Session.Query. Query values must not be reused between different executions and must not be modified after starting execution of the query.
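As a rough sketch of the cluster configuration, authentication and host-selection policy described above (the node addresses, keyspace, credentials and the dc1 datacenter name are placeholders):

    cluster := gocql.NewCluster("192.168.1.1", "192.168.1.2", "192.168.1.3")
    cluster.Keyspace = "example"
    cluster.Consistency = gocql.Quorum
    cluster.ProtoVersion = 4
    cluster.Authenticator = gocql.PasswordAuthenticator{
        Username: "user",
        Password: "password",
    }
    // Token-aware routing on top of a DC-aware fallback, preferring dc1.
    cluster.PoolConfig.HostSelectionPolicy = gocql.TokenAwareHostPolicy(
        gocql.DCAwareRoundRobinPolicy("dc1"),
    )

    session, err := cluster.CreateSession()
    if err != nil {
        log.Fatal(err)
    }
    defer session.Close()
    // session.Query(...) can now be used as described below.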
To execute a query without reading results, use Query.Exec: A single row can be read by calling Query.Scan: Multiple rows can be read using Iter.Scanner: See Example for a complete example. The driver automatically prepares DML queries (SELECT/INSERT/UPDATE/DELETE/BATCH statements) and maintains a cache of prepared statements. The CQL protocol does not support preparing other query types. When using CQL protocol >= 4, it is possible to use gocql.UnsetValue as the bound value of a column. This will cause the database to ignore writing the column. The main advantage is the ability to keep the same prepared statement even when you don't want to update some fields, where before you needed to make another prepared statement. Session is safe to use from multiple goroutines, so to execute multiple concurrent queries, just execute them from several worker goroutines. Gocql provides a synchronous-looking API (as recommended for Go APIs), while queries are executed asynchronously at the protocol level. Null values are unmarshalled as the zero value of the type. If you need to distinguish, for example, between a text column being null and being an empty string, you can unmarshal into a *string variable instead of a string. See Example_nulls for a full example. The driver reuses the backing memory of slices when unmarshalling. This is an optimization so that a buffer does not need to be allocated for every processed row. However, you need to be careful when storing the slices in other memory structures. When you want to save the data for later use, pass a new slice every time. A common pattern is to declare the slice variable within the scanner loop: The driver supports paging of results with automatic prefetch; see ClusterConfig.PageSize, Session.SetPrefetch, Query.PageSize, and Query.Prefetch. It is also possible to control the paging manually with Query.PageState (this disables automatic prefetch). Manual paging is useful if you want to store the page state externally, for example in a URL to allow users to browse pages of a result. You might want to sign/encrypt the paging state when exposing it externally since it contains data from primary keys. Paging state is specific to the CQL protocol version and the exact query used. It is meant as opaque state that should not be modified. If you send paging state from a different query or protocol version, the behaviour is not defined (you might get unexpected results or an error from the server). For example, do not send paging state returned by a node using protocol version 3 to a node using protocol version 4. Also, when using protocol version 4, paging state between Cassandra 2.2 and 3.0 is incompatible (https://issues.apache.org/jira/browse/CASSANDRA-10880). The driver does not check whether the paging state is from the same protocol version/statement. You might want to validate it yourself, as this could be a problem if you store paging state externally. For example, if you store paging state in a URL, the URLs might become broken when you upgrade your cluster. Call Query.PageState(nil) to fetch just the first page of the query results. Pass the page state returned by Iter.PageState to Query.PageState of a subsequent query to get the next page. If the length of the slice returned by Iter.PageState is zero, there are no more pages available (or an error occurred). Using too low a value of PageSize will negatively affect performance; a value below 100 is probably too low.
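A sketch of the basic query flows described above, assuming an illustrative tweets table (timeline text, id uuid, text text, partitioned by timeline) in the configured keyspace:

    if err := session.Query(`INSERT INTO tweets (timeline, id, text) VALUES (?, ?, ?)`,
        "me", gocql.TimeUUID(), "hello world").Exec(); err != nil {
        log.Fatal(err)
    }

    var id gocql.UUID
    var text string
    // Read a single row.
    if err := session.Query(`SELECT id, text FROM tweets WHERE timeline = ? LIMIT 1`,
        "me").Consistency(gocql.One).Scan(&id, &text); err != nil {
        log.Fatal(err)
    }

    // Read multiple rows with a Scanner.
    scanner := session.Query(`SELECT id, text FROM tweets WHERE timeline = ?`, "me").Iter().Scanner()
    for scanner.Next() {
        if err := scanner.Scan(&id, &text); err != nil {
            log.Fatal(err)
        }
        fmt.Println(id, text)
    }
    if err := scanner.Err(); err != nil {
        log.Fatal(err)
    }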
While Cassandra currently returns exactly PageSize items per page (except for the last page), the protocol authors explicitly reserved the right to return a smaller or larger number of items per page for performance reasons, so don't rely on a page having the exact count of items. See Example_paging for an example of manual paging. There are certain situations when you don't know the list of columns in advance, mainly when the query is supplied by the user. Iter.Columns, Iter.RowData, Iter.MapScan and Iter.SliceMap can be used to handle this case. See Example_dynamicColumns. The CQL protocol supports sending batches of DML statements (INSERT/UPDATE/DELETE) and so does gocql. Use Session.NewBatch to create a new batch and then fill in the details of individual queries. Then execute the batch with Session.ExecuteBatch. Logged batches ensure atomicity: either all or none of the operations in the batch will succeed, but they incur overhead to ensure this property. Unlogged batches don't have the overhead of logged batches, but don't guarantee atomicity. Updates of counters are handled specially by Cassandra, so batches of counter updates have to use the CounterBatch type. A counter batch can only contain statements to update counters. For unlogged batches it is recommended to send only single-partition batches (i.e. all statements in the batch should involve only a single partition). A multi-partition batch needs to be split by the coordinator node and re-sent to the correct nodes. With single-partition batches you can send the batch directly to the node for the partition without incurring the additional network hop. It is also possible to pass an entire BEGIN BATCH .. APPLY BATCH statement to Query.Exec. There are differences in how those are executed. A BEGIN BATCH statement passed to Query.Exec is prepared as a whole in a single statement. Session.ExecuteBatch prepares individual statements in the batch. If you have variable-length batches using the same statement, using Session.ExecuteBatch is more efficient. See Example_batch for an example. Query.ScanCAS or Query.MapScanCAS can be used to execute a single-statement lightweight transaction (an INSERT/UPDATE .. IF statement) and read its result. See example for Query.MapScanCAS. Multiple-statement lightweight transactions can be executed as a logged batch that contains at least one conditional statement. All the conditions must return true for the batch to be applied. You can use Session.ExecuteBatchCAS and Session.MapExecuteBatchCAS when executing the batch to learn about the result of the LWT. See example for Session.MapExecuteBatchCAS. Queries can be marked as idempotent. Marking the query as idempotent tells the driver that the query can be executed multiple times without affecting its result. Non-idempotent queries are not eligible for retrying or speculative execution. Idempotent queries are retried in case of errors based on the configured RetryPolicy. Queries can be retried even before they fail by setting a SpeculativeExecutionPolicy. The policy can cause the driver to retry on a different node if the query is taking longer than a specified delay even before the driver receives an error or timeout from the server. When a query is speculatively executed, the original execution is still executing. The two parallel executions of the query race to return a result; the first result received will be returned.
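Returning to batches and idempotency, a short sketch using the same illustrative tweets table as above:

    // A logged batch: both inserts succeed or neither does.
    b := session.NewBatch(gocql.LoggedBatch)
    b.Query(`INSERT INTO tweets (timeline, id, text) VALUES (?, ?, ?)`, "me", gocql.TimeUUID(), "first")
    b.Query(`INSERT INTO tweets (timeline, id, text) VALUES (?, ?, ?)`, "me", gocql.TimeUUID(), "second")
    if err := session.ExecuteBatch(b); err != nil {
        log.Fatal(err)
    }

    // Marking a query idempotent makes it eligible for retries and
    // speculative execution.
    q := session.Query(`SELECT text FROM tweets WHERE timeline = ?`, "me").Idempotent(true)
    if err := q.Exec(); err != nil {
        log.Fatal(err)
    }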
UDTs can be marshaled and unmarshaled from/to a map[string]interface{} or a Go struct (or a type implementing the UDTUnmarshaler, UDTMarshaler, Unmarshaler or Marshaler interfaces). For structs, the cql tag can be used to specify the CQL field name to be mapped to a struct field: See Example_userDefinedTypesMap, Example_userDefinedTypesStruct, ExampleUDTMarshaler, ExampleUDTUnmarshaler. It is possible to provide observer implementations that could be used to gather metrics: The CQL protocol also supports tracing of queries. When enabled, the database will write information about internal events that happened during execution of the query. You can use Query.Trace to request tracing and receive the session ID that the database used to store the trace information in the system_traces.sessions and system_traces.events tables. NewTraceWriter returns an implementation of Tracer that writes the events to a writer. Gathering trace information might be essential for debugging and optimizing queries, but writing traces has overhead, so this feature should not be used on production systems with very high load unless you know what you are doing. Example_batch demonstrates how to execute a batch of statements. Example_dynamicColumns demonstrates how to handle a dynamic column list. Example_marshalerUnmarshaler demonstrates how to implement a Marshaler and Unmarshaler. Example_nulls demonstrates how to distinguish between null and zero value when needed. Null values are unmarshalled as the zero value of the type. If you need to distinguish, for example, between a text column being null and being an empty string, you can unmarshal into a *string field. Example_paging demonstrates how to manually fetch pages and use page state. See also package documentation about paging. Example_set demonstrates how to use sets. Example_userDefinedTypesMap demonstrates how to work with user-defined types as maps. See also Example_userDefinedTypesStruct and examples for UDTMarshaler and UDTUnmarshaler if you want to map to structs. Example_userDefinedTypesStruct demonstrates how to work with user-defined types as structs. See also examples for UDTMarshaler and UDTUnmarshaler if you need more control/better performance.
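As a rough illustration of the struct mapping described above, assuming a UDT coords(lat double, lon double) used in an illustrative locations table:

    // Coords maps the fields of the UDT via cql tags.
    type Coords struct {
        Lat float64 `cql:"lat"`
        Lon float64 `cql:"lon"`
    }

    var c Coords
    if err := session.Query(`SELECT coords FROM locations WHERE id = ?`, id).Scan(&c); err != nil {
        log.Fatal(err)
    }
    fmt.Println(c.Lat, c.Lon)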
Package promptui is a library providing a simple interface to create command-line prompts for Go. It can be easily integrated into spf13/cobra, urfave/cli or any CLI Go application. promptui has two main input modes: Prompt provides a single line for user input. It supports optional live validation, confirmation and masking of the input. Select provides a list of options to choose from. It supports pagination, search, detailed view and custom templates. This is an example for the Prompt mode of promptui. In this example, a prompt is created with a validator function that validates the given value to make sure it's a number. If successful, it will output the chosen number in a formatted message. This is an example for the Select mode of promptui. In this example, a select is created with the days of the week as its items. When an item is selected, the selected day will be displayed in a formatted message.
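The two examples above look roughly like the following sketch (the labels and items are illustrative):

    // import "github.com/manifoldco/promptui"

    // Prompt mode: live-validate that the input is a number.
    validate := func(input string) error {
        if _, err := strconv.ParseFloat(input, 64); err != nil {
            return errors.New("invalid number")
        }
        return nil
    }
    prompt := promptui.Prompt{
        Label:    "Number",
        Validate: validate,
    }
    result, err := prompt.Run()
    if err != nil {
        log.Fatal(err)
    }
    fmt.Printf("You chose %q\n", result)

    // Select mode: pick a day of the week.
    sel := promptui.Select{
        Label: "Select Day",
        Items: []string{"Monday", "Tuesday", "Wednesday", "Thursday",
            "Friday", "Saturday", "Sunday"},
    }
    _, day, err := sel.Run()
    if err != nil {
        log.Fatal(err)
    }
    fmt.Printf("You chose %q\n", day)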
Package config provides utilities for loading configuration from multiple sources that can be used to configure the SDK's API clients. The config package will load configuration from environment variables, the AWS shared configuration file (~/.aws/config), and the AWS shared credentials file (~/.aws/credentials). Use LoadDefaultConfig to load configuration from all the SDK's supported sources, and resolve credentials using the SDK's default credential chain. LoadDefaultConfig allows for a variadic list of additional Config sources that can provide one or more configuration values which can be used to programmatically control the resolution of a specific value, or allow for a broader range of additional configuration sources not supported by the SDK. A Config source implements one or more provider interfaces defined in this package. Config sources passed in will take precedence over the default environment and shared config sources used by the SDK. If one or more Config sources implement the same provider interface, priority will be handled by the order in which the sources were passed in. A number of helpers (prefixed by "With") are provided in this package that implement their respective provider interface. These helpers should be used for overriding configuration programmatically at runtime.
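For example, a hedged sketch of loading the default configuration with a programmatic region override (the region value and the S3 client are illustrative):

    // import (
    //     "github.com/aws/aws-sdk-go-v2/config"
    //     "github.com/aws/aws-sdk-go-v2/service/s3"
    // )
    cfg, err := config.LoadDefaultConfig(context.TODO(),
        config.WithRegion("us-west-2"), // overrides env/shared-config values
    )
    if err != nil {
        log.Fatalf("unable to load SDK config: %v", err)
    }

    // The resolved aws.Config is then used to construct service clients.
    client := s3.NewFromConfig(cfg)
    _ = client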
Package cloud is the root of the packages used to access Google Cloud Services. See https://godoc.org/cloud.google.com/go for a full list of sub-packages. All clients in sub-packages are configurable via client options. These options are described here: https://godoc.org/google.golang.org/api/option. All the clients in sub-packages support authentication via Google Application Default Credentials (see https://cloud.google.com/docs/authentication/production), or by providing a JSON key file for a Service Account. See the authentication examples in this package for details. By default, all requests in sub-packages will run indefinitely, retrying on transient errors when correctness allows. To set timeouts or arrange for cancellation, use contexts. See the examples for details. Do not attempt to control the initial connection (dialing) of a service by setting a timeout on the context passed to NewClient. Dialing is non-blocking, so timeouts would be ineffective and would only interfere with credential refreshing, which uses the same context. Connection pooling differs in clients based on their transport. Cloud clients either rely on HTTP or gRPC transports to communicate with Google Cloud. Cloud clients that use HTTP (bigquery, compute, storage, and translate) rely on the underlying HTTP transport to cache connections for later re-use. These are cached to the default http.MaxIdleConns and http.MaxIdleConnsPerHost settings in http.DefaultTransport. For gRPC clients (all others in this repo), connection pooling is configurable. Users of cloud client libraries may specify option.WithGRPCConnectionPool(n) as a client option to NewClient calls. This configures the underlying gRPC connections to be pooled and addressed in a round robin fashion. Minimal docker images like Alpine lack CA certificates. This causes RPCs to appear to hang, because gRPC retries indefinitely. See https://github.com/GoogleCloudPlatform/google-cloud-go/issues/928 for more information. To see gRPC logs, set the environment variable GRPC_GO_LOG_SEVERITY_LEVEL. See https://godoc.org/google.golang.org/grpc/grpclog for more information. For HTTP logging, set the GODEBUG environment variable to "http2debug=1" or "http2debug=2". Google Application Default Credentials is the recommended way to authorize and authenticate clients. For information on how to create and obtain Application Default Credentials, see https://developers.google.com/identity/protocols/application-default-credentials. To arrange for an RPC to be canceled, use context.WithCancel. You can use a file with credentials to authenticate and authorize, such as a JSON key file associated with a Google service account. Service Account keys can be created and downloaded from https://console.developers.google.com/permissions/serviceaccounts. This example uses the Datastore client, but the same steps apply to the other client libraries underneath this package. In some cases (for instance, you don't want to store secrets on disk), you can create credentials from in-memory JSON and use the WithCredentials option. The google package in this example is at golang.org/x/oauth2/google. This example uses the PubSub client, but the same steps apply to the other client libraries underneath this package. To set a timeout for an RPC, use context.WithTimeout.
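For instance, a sketch of the timeout pattern with the Datastore client (the project ID, kind and key name are placeholders):

    // import "cloud.google.com/go/datastore"
    ctx := context.Background()
    client, err := datastore.NewClient(ctx, "my-project-id")
    if err != nil {
        log.Fatal(err)
    }
    defer client.Close()

    // Set the timeout on the RPC's context, not on NewClient: dialing is
    // non-blocking, so a dial timeout would only interfere with credential refreshing.
    ctx, cancel := context.WithTimeout(ctx, 10*time.Second)
    defer cancel()
    var task struct{ Description string }
    if err := client.Get(ctx, datastore.NameKey("Task", "sample", nil), &task); err != nil {
        log.Fatal(err)
    }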
The package defines a single interface and a few implementations of the interface. The externalization of the loop flow control makes it easy to test the internal functions of background goroutines by, for instance, only running the loop once while under test. The design is that any errors which need to be returned from the loop will be passed back on a channel whose implementation is left up to the individual Looper. Calling methods can wait on execution and for any resulting errors by calling the Wait() method on the Looper. In this example, we are going to run a FreeLooper with 5 iterations. In the course of running, an error is generated, which the parent function captures and outputs. As a result of the error only 3 of the 5 iterations are completed and the output reflects this.
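The package itself is not named in this excerpt, so the following is only a loose, hedged sketch of the 5-iteration flow described above; the NewFreeLooper constructor, the error channel, and doWork are assumptions based on the description, not a confirmed API:

    // Assumed constructor: a FreeLooper that runs 5 iterations and reports
    // errors on the supplied channel.
    looper := NewFreeLooper(5, make(chan error))

    go looper.Loop(func() error {
        // doWork is a hypothetical unit of work; returning an error ends the loop early.
        return doWork()
    })

    // Wait blocks until the loop finishes and returns any error it produced.
    if err := looper.Wait(); err != nil {
        fmt.Println("loop ended early:", err)
    }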
Program gops is a tool to list currently running Go processes.
Package flags provides an extensive command line option parser. The flags package is similar in functionality to the Go built-in flag package but provides more options and uses reflection to provide a convenient and succinct way of specifying command line options. The following features are supported in go-flags: Additional features specific to Windows: The flags package uses structs, reflection and struct field tags to allow users to specify command line options. This results in very simple and concise specification of your application options. For example: This specifies one option with a short name -v and a long name --verbose. When either -v or --verbose is found on the command line, a 'true' value will be appended to the Verbose field. For example, when specifying -vvv, the resulting value of Verbose will be {[true, true, true]}. Slice options work exactly the same as primitive type options, except that whenever the option is encountered, a value is appended to the slice. Map options from string to primitive type are also supported. On the command line, you specify the value for such an option as key:value. For example, the AuthorInfo map can then be filled with something like -a name:Jesse -a "surname:van den Kieboom". Finally, for full control over the conversion between command line argument values and options, user defined types can choose to implement the Marshaler and Unmarshaler interfaces. The following is a list of tags for struct fields supported by go-flags: Either the `short:` tag or the `long:` tag must be specified to make the field eligible as an option. Option groups are a simple way to semantically separate your options. All options in a particular group are shown together in the help under the name of the group. Namespaces can be used to specify option long names more precisely and emphasize the options' affiliation with their group. There are currently three ways to specify option groups. The flags package also has basic support for commands. Commands are often used in monolithic applications that support various commands or actions. Take git for example: add, commit, checkout, etc. are all called commands. Using commands you can easily separate multiple functions of your application. There are currently two ways to specify a command. The most common, idiomatic way to implement commands is to define a global parser instance and implement each command in a separate file. These command files should define a Go init function which calls AddCommand on the global parser. When parsing ends and there is an active command and that command implements the Commander interface, then its Execute method will be run with the remaining command line arguments. Command structs can have options which become valid to parse after the command has been specified on the command line, in addition to the options of all the parent commands. That is, considering a -v flag on the parser and an add command, the following are equivalent: However, if the -v flag is defined on the add command, then the first of the two examples above would fail since the -v flag is not defined before the add command. go-flags has built-in support to provide bash completion of flags, commands and argument values. To use completion, the binary which uses go-flags can be invoked in a special environment to list completion of the current command line argument. It should be noted that this `executes` your application, and it is up to the user to make sure there are no negative side effects (for example from init functions).
Setting the environment variable `GO_FLAGS_COMPLETION=1` enables completion by replacing the argument parsing routine with the completion routine which outputs completions for the passed arguments. The basic invocation to complete a set of arguments is therefore: where `completion-example` is the binary, `arg1` and `arg2` are the current arguments, and `arg3` (the last argument) is the argument to be completed. If GO_FLAGS_COMPLETION is set to "verbose", then descriptions of possible completion items will also be shown, if there is more than one completion item. To use this with bash completion, a simple file can be written which calls the binary which supports go-flags completion: Completion requires the parser option PassDoubleDash and is therefore enforced if the environment variable GO_FLAGS_COMPLETION is set. Customized completion for argument values is supported by implementing the flags.Completer interface for the argument value type. An example of a type which does so is the flags.Filename type, an alias of string allowing simple filename completion. A slice or array argument value whose element type implements flags.Completer will also be completed.
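A compact sketch of the struct-tag style described above (the option names mirror the -v/--verbose and -a examples; the remaining details are illustrative):

    // import "github.com/jessevdk/go-flags"
    var opts struct {
        // Repeatable boolean flag: -vvv appends three true values.
        Verbose []bool `short:"v" long:"verbose" description:"Show verbose debug information"`

        // Map option, filled with e.g. -a name:Jesse -a "surname:van den Kieboom".
        AuthorInfo map[string]string `short:"a" long:"author"`
    }

    args, err := flags.Parse(&opts)
    if err != nil {
        os.Exit(1)
    }
    fmt.Println("verbosity:", len(opts.Verbose), "remaining args:", args)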
Package opentracing implements a bridge that forwards OpenTracing API calls to the OpenTelemetry SDK. To use the bridge, first create an OpenTelemetry tracer of choice. Then use the NewTracerPair() function to create two tracers: one implementing the OpenTracing API (BridgeTracer) and one that implements the OpenTelemetry API (WrapperTracer) and mostly forwards the calls to the OpenTelemetry tracer of choice, but performs some extra steps to make the interaction between the two APIs work. If the OpenTelemetry tracer of choice already knows how to cooperate with the OpenTracing API through the OpenTracing bridge (explained in detail below), then it is fine to skip the WrapperTracer by calling the NewBridgeTracer() function to get the bridge tracer and then passing the chosen OpenTelemetry tracer to the SetOpenTelemetryTracer() function of the bridge tracer. To use an OpenTelemetry span as the parent of an OpenTracing span, create a context using the ContextWithBridgeSpan() function of the bridge tracer, and then use the StartSpanFromContext function of the OpenTracing API. The bridge tracer also allows the user to install a warning handler through the SetWarningHandler() function. The warning handler will be called when there is some misbehavior of the OpenTelemetry tracer with regard to the cooperation with the OpenTracing API. For an OpenTelemetry tracer to cooperate with the OpenTracing API through the BridgeTracer, the OpenTelemetry tracer needs to (reasoning is below the list): 1. Return the same context it received in the Start() function if migration.SkipContextSetup() returns true. 2. Implement the migration.DeferredContextSetupTracerExtension interface. The implementation should set up the context as it would normally do in the Start() function if the migration.SkipContextSetup() function returned false. Calling ContextWithBridgeSpan() is not necessary. 3. Have access to the BridgeTracer instance. 4. If the migration.SkipContextSetup() function returned false, the tracer should use the ContextWithBridgeSpan() function to install the created span as an active OpenTracing span. There are some differences between the OpenTracing and OpenTelemetry APIs, especially with regard to Go context handling. When a span is created with the OpenTracing API (through the StartSpan() function), the Go context is not available. BridgeTracer has access to the OpenTelemetry tracer of choice, so in the StartSpan() function BridgeTracer translates the parameters to the OpenTelemetry version and uses the OpenTelemetry tracer's Start() function to actually create a span. The OpenTelemetry Start() function takes the Go context as a parameter, so BridgeTracer at this point passes a temporary context to Start(). All the changes to the temporary context will be lost at the end of the StartSpan() function, so the OpenTelemetry tracer of choice should not do anything with the context. If the returned context is different, BridgeTracer will warn about it. The OpenTelemetry tracer of choice can learn about this situation by using the migration.SkipContextSetup() function. The tracer will receive an opportunity to set up the context at a later stage. Usually after StartSpan() is finished, users of the OpenTracing API call (either directly or through the opentracing.StartSpanFromContext() helper function) the opentracing.ContextWithSpan() function to insert the created OpenTracing span into the context.
At that time, the OpenTelemetry tracer of choice has a chance of setting up the context through a hook invoked inside the opentracing.ContextWithSpan() function. For that to happen, the tracer should implement the migration.DeferredContextSetupTracerExtension interface. This so far explains the need for points 1. and 2. When the span is created with the OpenTelemetry API (with the Start() function), migration.SkipContextSetup() will return false. This means that the tracer can do the usual setup of the context, but it also should set up the active OpenTracing span in the context. This is because the OpenTracing API is not used at all in the creation of the span, but the OpenTracing API may be used during the time when the created OpenTelemetry span is current. For this case to work, we also need to set up an active OpenTracing span in the context. This can be done with the ContextWithBridgeSpan() function. This means that the OpenTelemetry tracer of choice needs to have access to the BridgeTracer instance. This should explain the need for points 3. and 4. Another difference related to Go context handling is in logging: the OpenTracing API does not take a context parameter in the LogFields() function, so when the call to that function gets translated to the OpenTelemetry AddEvent() function, an empty context is passed.
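For orientation, a hedged sketch of wiring the pair, assuming otelTracer is an already-configured OpenTelemetry tracer; consult the bridge package for the exact, current API:

    // import (
    //     "go.opentelemetry.io/otel"
    //     otelbridge "go.opentelemetry.io/otel/bridge/opentracing"
    //     "github.com/opentracing/opentracing-go"
    // )
    bridgeTracer, wrapperProvider := otelbridge.NewTracerPair(otelTracer)

    // Existing OpenTracing instrumentation goes through the bridge...
    opentracing.SetGlobalTracer(bridgeTracer)
    // ...while OpenTelemetry instrumentation uses the wrapper provider.
    otel.SetTracerProvider(wrapperProvider)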
Package fx is a framework that makes it easy to build applications out of reusable, composable modules. Fx applications use dependency injection to eliminate globals without the tedium of manually wiring together function calls. Unlike other approaches to dependency injection, Fx works with plain Go functions: you don't need to use struct tags or embed special types, so Fx automatically works well with most Go packages. Basic usage is explained in the package-level example. If you're new to Fx, start there! Advanced features, including named instances, optional parameters, and value groups, are explained in this section further down. To test functions that use the Lifecycle type or to write end-to-end tests of your Fx application, use the helper functions and types provided by the go.uber.org/fx/fxtest package. Fx constructors declare their dependencies as function parameters. This can quickly become unreadable if the constructor has a lot of dependencies. To improve the readability of constructors like this, create a struct that lists all the dependencies as fields and change the function to accept that struct instead. The new struct is called a parameter struct. Fx has first-class support for parameter structs: any struct embedding fx.In gets treated as a parameter struct, so the individual fields in the struct are supplied via dependency injection. Using a parameter struct, we can make the constructor above much more readable: Though it's rarely necessary to mix the two, constructors can receive any combination of parameter structs and parameters. Result structs are the inverse of parameter structs. These structs represent multiple outputs from a single function as fields. Fx treats all structs embedding fx.Out as result structs, so other constructors can rely on the result struct's fields directly. Without result structs, we sometimes have function definitions like this: With result structs, we can make this both more readable and easier to modify in the future: Some use cases require the application container to hold multiple values of the same type. A constructor that produces a result struct can tag any field with `name:".."` to have the corresponding value added to the graph under the specified name. An application may contain at most one unnamed value of a given type, but may contain any number of named values of the same type. Similarly, a constructor that accepts a parameter struct can tag any field with `name:".."` to have the corresponding value injected by name. Note that both the name AND type of the fields on the parameter struct must match the corresponding result struct. Constructors often have optional dependencies on some types: if those types are missing, they can operate in a degraded state. Fx supports optional dependencies via the `optional:"true"` tag on fields of parameter structs. If an optional field isn't available in the container, the constructor receives the field's zero value. Constructors that declare optional dependencies MUST gracefully handle situations in which those dependencies are absent. The optional tag also allows adding new dependencies without breaking existing consumers of the constructor. The optional tag may be combined with the name tag to declare a named value dependency optional. To make it easier to produce and consume many values of the same type, Fx supports named, unordered collections called value groups. Constructors can send values into value groups by returning a result struct tagged with `group:".."`.
Any number of constructors may provide values to this named collection, but the ordering of the final collection is unspecified. Value groups require parameter and result structs to use fields with different types: if a group of constructors each returns type T, parameter structs consuming the group must use a field of type []T. Parameter structs can request a value group by using a field of type []T tagged with `group:".."`. This will execute all constructors that provide a value to that group in an unspecified order, then collect all the results into a single slice. Note that values in a value group are unordered. Fx makes no guarantees about the order in which these values will be produced. By default, when a constructor declares a dependency on a value group, all values provided to that value group are eagerly instantiated. That is undesirable for cases where an optional component wants to contribute to a value group, but only if it was actually used by the rest of the application. A soft value group can be thought of as a best-effort attempt at populating the group with values from constructors that have already run. In other words, if a constructor's output type is only consumed by a soft value group, it will not be run. Note that Fx randomizes the order of values in the value group, so the slice of values may not match the order in which constructors were run. To declare a soft relationship between a group and its constructors, use the `soft` option on the input group tag (`group:"[groupname],soft"`). This option is only valid for input parameters. With such a declaration, a constructor that provides a value to the 'server' value group will be called only if there's another instantiated component that consumes the results of that constructor. NewHandlerAndLogger will be called because the Logger is consumed by the application, but NewHandler will not be called because it's only consumed by the soft value group. By default, values of type T produced to a value group are consumed as []T. This means that if the producer produces []T, the consumer must consume [][]T. There are cases where it's desirable for the producer (the fx.Out) to produce multiple values ([]T), and for the consumer (the fx.In) to consume them as a single slice ([]T). Fx offers flattened value groups for this purpose. To provide multiple values for a group from a result struct, produce a slice and use the `,flatten` option on the group tag. This indicates that each element in the slice should be injected into the group individually. By default, a type that embeds fx.In may not have any unexported fields. The following will return an error if used with Fx. If you need unexported fields on such a type, you may opt into ignoring unexported fields by adding the ignore-unexported struct tag to the fx.In. For example,
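The following hedged sketch combines the ignore-unexported tag with the parameter-struct, result-struct and value-group tags described above; UserGateway, the "routes" group name and the handler itself are illustrative assumptions:

    // import (
    //     "log"
    //     "net/http"
    //
    //     "go.uber.org/fx"
    // )
    type UserGateway struct{} // illustrative dependency

    type HandlerParams struct {
        // The ignore-unexported tag permits the unexported field below.
        fx.In `ignore-unexported:"true"`

        Logger *log.Logger
        Users  *UserGateway `optional:"true"` // nil if no *UserGateway is provided

        debug bool // unexported; ignored by Fx
    }

    type HandlerResult struct {
        fx.Out

        Handler http.Handler `group:"routes"` // contributed to the "routes" value group
    }

    func NewHandler(p HandlerParams) HandlerResult {
        return HandlerResult{Handler: http.NotFoundHandler()}
    }

    type RouterParams struct {
        fx.In

        Routes []http.Handler `group:"routes"` // every "routes" value, order unspecified
    }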
Package secretsmanager provides the API client, operations, and parameter types for AWS Secrets Manager. Amazon Web Services Secrets Manager provides a service to enable you to store, manage, and retrieve secrets. This guide provides descriptions of the Secrets Manager API. For more information about using this service, see the Amazon Web Services Secrets Manager User Guide. This version of the Secrets Manager API Reference documents the Secrets Manager API version 2017-10-17. For a list of endpoints, see Amazon Web Services Secrets Manager endpoints. We welcome your feedback. Send your comments to awssecretsmanager-feedback@amazon.com, or post your feedback and questions in the Amazon Web Services Secrets Manager Discussion Forum. For more information about the Amazon Web Services Discussion Forums, see Forums Help. Amazon Web Services Secrets Manager supports Amazon Web Services CloudTrail, a service that records Amazon Web Services API calls for your Amazon Web Services account and delivers log files to an Amazon S3 bucket. By using information that's collected by Amazon Web Services CloudTrail, you can determine the requests successfully made to Secrets Manager, who made the request, when it was made, and so on. For more about Amazon Web Services Secrets Manager and support for Amazon Web Services CloudTrail, see Logging Amazon Web Services Secrets Manager Events with Amazon Web Services CloudTrail in the Amazon Web Services Secrets Manager User Guide. To learn more about CloudTrail, including how to enable it and find your log files, see the Amazon Web Services CloudTrail User Guide.
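A minimal sketch of retrieving a secret with this client (the secret name is a placeholder):

    // import (
    //     "github.com/aws/aws-sdk-go-v2/aws"
    //     "github.com/aws/aws-sdk-go-v2/config"
    //     "github.com/aws/aws-sdk-go-v2/service/secretsmanager"
    // )
    cfg, err := config.LoadDefaultConfig(context.TODO())
    if err != nil {
        log.Fatal(err)
    }
    client := secretsmanager.NewFromConfig(cfg)

    out, err := client.GetSecretValue(context.TODO(), &secretsmanager.GetSecretValueInput{
        SecretId: aws.String("prod/my-app/db-password"), // placeholder secret name
    })
    if err != nil {
        log.Fatal(err)
    }
    fmt.Println(aws.ToString(out.SecretString))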
Package attributevalue provides marshaling and unmarshaling utilities to convert between Go types and Amazon DynamoDB AttributeValues. These utilities allow you to marshal slices, maps, structs, and scalar values to and from the AttributeValue type. These utilities make it easier to convert between AttributeValue and Go types when working with DynamoDB resources. This package only converts between Go types and DynamoDB AttributeValue. See the feature/dynamodbstreams/attributevalue package for converting to DynamoDBStreams AttributeValue types. The FromDynamoDBStreamsMap, FromDynamoDBStreamsList, and FromDynamoDBStreams functions provide the conversion utilities to convert a DynamoDBStreams AttributeValue type to a DynamoDB AttributeValue type. Use these utilities when you need to convert the AttributeValue type between the two APIs. To marshal a Go type to an AttributeValue, you can use the Marshal, MarshalList, and MarshalMap functions. The List and Map functions are specialized versions of Marshal for serializing slices and maps of AttributeValues. The following example uses MarshalMap to convert a Go struct, Record, to an AttributeValue. The AttributeValue value is then used as input to the PutItem operation call. To unmarshal an AttributeValue to a Go type, you can use the Unmarshal, UnmarshalList, UnmarshalMap, and UnmarshalListOfMaps functions. The List and Map functions are specialized versions of the Unmarshal function for unmarshaling slices and maps of AttributeValues. The following example will unmarshal the Items result from DynamoDB's Scan API operation. The Items returned will be unmarshaled into a slice of Record structs. The AttributeValue Marshal and Unmarshal functions support the `dynamodbav` struct tag by default. Additional tags can be enabled with the EncoderOptions and DecoderOptions TagKey option. See the Marshal and Unmarshal functions for information on how struct tags and fields are marshaled and unmarshaled.
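A small sketch of the round trip described above; the Record type and its field names are illustrative:

    // import "github.com/aws/aws-sdk-go-v2/feature/dynamodb/attributevalue"
    type Record struct {
        ID   string `dynamodbav:"id"`
        Name string `dynamodbav:"name"`
    }

    // Marshal a struct into a map[string]types.AttributeValue, e.g. for PutItem input.
    item, err := attributevalue.MarshalMap(Record{ID: "123", Name: "example"})
    if err != nil {
        log.Fatal(err)
    }

    // Unmarshal an item (for example from GetItem or Scan output) back into a struct.
    var r Record
    if err := attributevalue.UnmarshalMap(item, &r); err != nil {
        log.Fatal(err)
    }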
Package sns provides the API client, operations, and parameter types for Amazon Simple Notification Service. Amazon Simple Notification Service (Amazon SNS) is a web service that enables you to build distributed web-enabled applications. Applications can use Amazon SNS to easily push real-time notification messages to interested subscribers over multiple delivery protocols. For more information about this product see the Amazon SNS product page. For detailed information about Amazon SNS features and their associated API calls, see the Amazon SNS Developer Guide. For information on the permissions you need to use this API, see Identity and access management in Amazon SNS in the Amazon SNS Developer Guide. We also provide SDKs that enable you to access Amazon SNS from your preferred programming language. The SDKs contain functionality that automatically takes care of tasks such as: cryptographically signing your service requests, retrying requests, and handling error responses. For a list of available SDKs, go to Tools for Amazon Web Services.
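With this Go client, publishing a message looks roughly like the following sketch (the topic ARN and message body are placeholders):

    // import (
    //     "github.com/aws/aws-sdk-go-v2/aws"
    //     "github.com/aws/aws-sdk-go-v2/config"
    //     "github.com/aws/aws-sdk-go-v2/service/sns"
    // )
    cfg, err := config.LoadDefaultConfig(context.TODO())
    if err != nil {
        log.Fatal(err)
    }
    client := sns.NewFromConfig(cfg)

    out, err := client.Publish(context.TODO(), &sns.PublishInput{
        TopicArn: aws.String("arn:aws:sns:us-east-1:123456789012:my-topic"), // placeholder ARN
        Message:  aws.String("hello from the SDK"),
    })
    if err != nil {
        log.Fatal(err)
    }
    fmt.Println("message id:", aws.ToString(out.MessageId))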
Package dig provides an opinionated way of resolving object dependencies. STABLE. No breaking changes will be made in this major version. Dig exposes type Container as an object capable of resolving a directed acyclic dependency graph. Use the New function to create one. Constructors for different types are added to the container by using the Provide method. A constructor can declare a dependency on another type by simply adding it as a function parameter. Dependencies for a type can be added to the graph both before and after the type was added. Multiple constructors can rely on the same type. The container creates a singleton for each retained type, instantiating it at most once when requested directly or as a dependency of another type. Constructors can declare any number of dependencies as parameters and, optionally, return errors. Constructors can also return multiple results to add multiple types to the container. Constructors that accept a variadic number of arguments are treated as if they don't have those arguments. That is, a constructor with trailing variadic arguments is treated the same as the equivalent constructor without them. The constructor will be called with all other dependencies and no variadic arguments. Types added to the container may be consumed by using the Invoke method. Invoke accepts any function that accepts one or more parameters and, optionally, returns an error. Dig calls the function with the requested type, instantiating only those types that were requested by the function. The call fails if any type or its dependencies (both direct and transitive) were not available in the container. Any error returned by the invoked function is propagated back to the caller. Constructors declare their dependencies as function parameters. This can very quickly become unreadable if the constructor has a lot of dependencies. A pattern employed to improve readability in a situation like this is to create a struct that lists all the parameters of the function as fields and to change the function to accept that struct instead. This is referred to as a parameter object. Dig has first-class support for parameter objects: any struct embedding dig.In gets treated as a parameter object. The following is equivalent to the constructor above. Handlers can receive any combination of parameter objects and parameters. Result objects are the flip side of parameter objects. These are structs that represent multiple outputs from a single function as fields in the struct. Structs embedding dig.Out get treated as result objects. The above is equivalent to, Constructors often don't have a hard dependency on some types and are able to operate in a degraded state when that dependency is missing. Dig supports declaring dependencies as optional by adding an `optional:"true"` tag to fields of a dig.In struct. Fields in a dig.In struct that have the `optional:"true"` tag are treated as optional by Dig. If an optional field is not available in the container, the constructor will receive a zero value for the field. Constructors that declare dependencies as optional MUST handle the case of those dependencies being absent. The optional tag also allows adding new dependencies without breaking existing consumers of the constructor. Some use cases call for multiple values of the same type. Dig allows adding multiple values of the same type to the container with the use of Named Values. Named Values can be produced by passing the dig.Name option when a constructor is provided. All values produced by that constructor will have the given name.
Given the following constructors, you can provide *sql.DB into a Container under different names by passing the dig.Name option. Alternatively, you can produce a dig.Out struct and tag its fields with `name:".."` to have the corresponding value added to the graph under the specified name. Regardless of how a Named Value was produced, it can be consumed by another constructor by accepting a dig.In struct which has exported fields with the same name AND type that you provided. The name tag may be combined with the optional tag to declare the dependency optional. Added in Dig 1.2. Dig provides value groups to allow producing and consuming many values of the same type. Value groups allow constructors to send values to a named, unordered collection in the container. Other constructors can request all values in this collection as a slice. Constructors can send values into value groups by returning a dig.Out struct tagged with `group:".."`. Any number of constructors may provide values to this named collection. Other constructors can request all values for this collection by requesting a slice tagged with `group:".."`. This will execute all constructors that provide a value to that group in an unspecified order. Note that values in a value group are unordered. Dig makes no guarantees about the order in which these values will be produced. Value groups can be used to provide multiple values for a group from a dig.Out using slices; however, since groups are retrieved by requesting a slice, this implies that the values must be retrieved as a slice of slices. As of dig v1.9.0, if you want to provide individual elements to the group instead of the slice itself, you can add the `flatten` modifier to the group from a dig.Out.
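A compact sketch of providing and invoking with a dig.In parameter object and an optional dependency; the Config type and its values are illustrative:

    // import "go.uber.org/dig"
    type Config struct{ Addr string }

    type ServerParams struct {
        dig.In

        Cfg    Config
        Logger *log.Logger `optional:"true"` // nil if no *log.Logger was provided
    }

    c := dig.New()
    if err := c.Provide(func() Config { return Config{Addr: ":8080"} }); err != nil {
        log.Fatal(err)
    }
    if err := c.Invoke(func(p ServerParams) {
        fmt.Println("addr:", p.Cfg.Addr, "have logger:", p.Logger != nil)
    }); err != nil {
        log.Fatal(err)
    }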
Package bluemonday provides a way of describing an allowlist of HTML elements and attributes as a policy, and for that policy to be applied to untrusted strings from users that may contain markup. All elements and attributes not on the allowlist will be stripped. The default bluemonday.UGCPolicy().Sanitize() turns this: Into the more harmless: And it turns this: Into this: Whilst still allowing this: To pass through mostly unaltered (it gained a rel="nofollow"): The primary purpose of bluemonday is to take potentially unsafe user generated content (from things like Markdown, HTML WYSIWYG tools, etc) and make it safe for you to put on your website. It protects sites against XSS (http://en.wikipedia.org/wiki/Cross-site_scripting) and other malicious content that a user interface may deliver. There are many vectors for an XSS attack (https://www.owasp.org/index.php/XSS_Filter_Evasion_Cheat_Sheet) and the safest thing to do is to sanitize user input against a known safe list of HTML elements and attributes. Note: You should always run bluemonday after any other processing. If you use blackfriday (https://github.com/russross/blackfriday) or Pandoc (http://johnmacfarlane.net/pandoc/) then bluemonday should be run after these steps. This ensures that no insecure HTML is introduced later in your process. bluemonday is heavily inspired by both the OWASP Java HTML Sanitizer (https://code.google.com/p/owasp-java-html-sanitizer/) and the HTML Purifier (http://htmlpurifier.org/). We ship two default policies. One is bluemonday.StrictPolicy(), which can be thought of as equivalent to stripping all HTML elements and their attributes, as it has nothing on its allowlist. The other is bluemonday.UGCPolicy(), which allows a broad selection of HTML elements and attributes that are safe for user generated content. Note that this policy does not allow iframes, object, embed, styles, script, etc. The essence of building a policy is to determine which HTML elements and attributes are considered safe for your scenario. OWASP provides an XSS prevention cheat sheet (https://www.google.com/search?q=xss+prevention+cheat+sheet) to help explain the risks, but essentially:
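Whatever policy you settle on, usage follows the pattern in this short sketch (the input string is illustrative):

    // import "github.com/microcosm-cc/bluemonday"
    p := bluemonday.UGCPolicy()
    html := p.Sanitize(
        `<a onblur="alert(secret)" href="http://www.google.com">Google</a>`,
    )
    // The event handler is stripped and rel="nofollow" is added, leaving
    // roughly: <a href="http://www.google.com" rel="nofollow">Google</a>
    fmt.Println(html)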
test.go is a "Go script" for running Vitess tests. It runs each test in its own Docker container for hermeticity and (potentially) parallelism. If a test fails, this script will save the output in _test/ and continue with other tests. Before using it, you should have Docker 1.5+ installed, and have your user in the group that lets you run the docker command without sudo. The first time you run against a given flavor, it may take some time for the corresponding bootstrap image (vitess/bootstrap:<flavor>) to be downloaded. It is meant to be run from the Vitess root, like so: For a list of options, run:
Package serial is a cross-platform serial library for the Go language. The canonical import for this library is go.bug.st/serial.v1 so the import line is the following: It is possible to get the list of available serial ports with the GetPortsList function: The serial port can be opened with the Open function: The Open function needs a "mode" parameter that specifies the configuration options for the serial port. If not specified, the default options are 9600_N81; in the example above only the speed is changed, so the port is opened using 115200_N81. The following snippet shows how to declare a configuration for 57600_E71: The configuration can be changed at any time with the SetMode function: The port object implements the io.ReadWriteCloser interface, so we can use the usual Read, Write and Close functions to send and receive data from the serial port: If a port is a virtual USB-CDC serial port (for example a USB-to-RS232 cable or a microcontroller development board), it is possible to retrieve the USB metadata, such as VID/PID or USB serial number, with the GetDetailedPortsList function in the enumerator package; for details on USB port enumeration see the documentation of that package. This library tries to avoid the use of the "C" package (and consequently the need for cgo) to simplify cross compiling. Unfortunately the USB enumeration package for darwin (macOS) requires cgo to access the IOKit framework. This means that if you need USB enumeration on darwin you're forced to use cgo. This example prints the list of serial ports, uses the first one to send the string "10,20,30", and prints the response on the screen.
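That example looks roughly like the following sketch; the port name depends on the platform and the attached hardware:

    // import "go.bug.st/serial.v1"
    ports, err := serial.GetPortsList()
    if err != nil {
        log.Fatal(err)
    }
    if len(ports) == 0 {
        log.Fatal("no serial ports found")
    }

    // Open the first port at 115200_N81 (only the speed differs from the defaults).
    mode := &serial.Mode{BaudRate: 115200}
    port, err := serial.Open(ports[0], mode)
    if err != nil {
        log.Fatal(err)
    }
    defer port.Close()

    if _, err := port.Write([]byte("10,20,30\n\r")); err != nil {
        log.Fatal(err)
    }
    buf := make([]byte, 128)
    n, err := port.Read(buf)
    if err != nil {
        log.Fatal(err)
    }
    fmt.Printf("%s", buf[:n])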
Package lambda provides the API client, operations, and parameter types for AWS Lambda. Lambda is a compute service that lets you run code without provisioning or managing servers. Lambda runs your code on a high-availability compute infrastructure and performs all of the administration of the compute resources, including server and operating system maintenance, capacity provisioning and automatic scaling, code monitoring and logging. With Lambda, you can run code for virtually any type of application or backend service. For more information about the Lambda service, see What is Lambda in the Lambda Developer Guide. The Lambda API Reference provides information about each of the API methods, including details about the parameters in each API request and response. You can use Software Development Kits (SDKs), Integrated Development Environment (IDE) Toolkits, and command line tools to access the API. For installation instructions, see Tools for Amazon Web Services. For a list of Region-specific endpoints that Lambda supports, see Lambda endpoints and quotas in the Amazon Web Services General Reference. When making the API calls, you will need to authenticate your request by providing a signature. Lambda supports signature version 4. For more information, see Signature Version 4 signing process in the Amazon Web Services General Reference. Because Amazon Web Services SDKs use the CA certificates from your computer, changes to the certificates on the Amazon Web Services servers can cause connection failures when you attempt to use an SDK. You can prevent these failures by keeping your computer's CA certificates and operating system up-to-date. If you encounter this issue in a corporate environment and do not manage your own computer, you might need to ask an administrator to assist with the update process. The following list shows minimum operating system and Java versions: Microsoft Windows versions that have updates from January 2005 or later installed contain at least one of the required CAs in their trust list. Mac OS X 10.4 with Java for Mac OS X 10.4 Release 5 (February 2007), Mac OS X 10.5 (October 2007), and later versions contain at least one of the required CAs in their trust list. Red Hat Enterprise Linux 5 (March 2007), 6, and 7 and CentOS 5, 6, and 7 all contain at least one of the required CAs in their default trusted CA list. Java 1.4.2_12 (May 2006), 5 Update 2 (March 2005), and all later versions, including Java 6 (December 2006), 7, and 8, contain at least one of the required CAs in their default trusted CA list. When accessing the Lambda management console or Lambda API endpoints, whether through browsers or programmatically, you will need to ensure your client machines support any of the following CAs: Amazon Root CA 1 Starfield Services Root Certificate Authority - G2 Starfield Class 2 Certification Authority Root certificates from the first two authorities are available from Amazon trust services, but keeping your computer up-to-date is the more straightforward solution. To learn more about ACM-provided certificates, see Amazon Web Services Certificate Manager FAQs.
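Certificate housekeeping aside, invoking a function with this client looks roughly like the following sketch (the function name and payload are placeholders):

    // import (
    //     "github.com/aws/aws-sdk-go-v2/aws"
    //     "github.com/aws/aws-sdk-go-v2/config"
    //     "github.com/aws/aws-sdk-go-v2/service/lambda"
    // )
    cfg, err := config.LoadDefaultConfig(context.TODO())
    if err != nil {
        log.Fatal(err)
    }
    client := lambda.NewFromConfig(cfg)

    out, err := client.Invoke(context.TODO(), &lambda.InvokeInput{
        FunctionName: aws.String("my-function"),  // placeholder function name
        Payload:      []byte(`{"name":"world"}`), // placeholder JSON payload
    })
    if err != nil {
        log.Fatal(err)
    }
    fmt.Println("status:", out.StatusCode, "response:", string(out.Payload))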
Package pubsublite provides an easy way to publish and receive messages using the Pub/Sub Lite service. Google Pub/Sub services are designed to provide reliable, many-to-many, asynchronous messaging between applications. Publisher applications can send messages to a topic and other applications can subscribe to that topic to receive the messages. By decoupling senders and receivers, Google Pub/Sub allows developers to communicate between independently written applications. Compared to Cloud Pub/Sub, Pub/Sub Lite provides partitioned data storage with predefined throughput and storage capacity. Guidance on how to choose between Cloud Pub/Sub and Pub/Sub Lite is available at https://cloud.google.com/pubsub/docs/choosing-pubsub-or-lite. More information about Pub/Sub Lite is available at https://cloud.google.com/pubsub/lite. See https://pkg.go.dev/cloud.google.com/go for authentication, timeouts, connection pooling and similar aspects of this package. Examples can be found at https://pkg.go.dev/cloud.google.com/go/pubsublite#pkg-examples and https://pkg.go.dev/cloud.google.com/go/pubsublite/pscompat#pkg-examples. Complete sample programs can be found at https://github.com/GoogleCloudPlatform/golang-samples/tree/master/pubsublite. The cloud.google.com/go/pubsublite/pscompat subpackage contains clients for publishing and receiving messages, which have similar interfaces to their pubsub.Topic and pubsub.Subscription counterparts in cloud.google.com/go/pubsub. The following examples demonstrate how to declare common interfaces: https://pkg.go.dev/cloud.google.com/go/pubsublite/pscompat#example-NewPublisherClient-Interface and https://pkg.go.dev/cloud.google.com/go/pubsublite/pscompat#example-NewSubscriberClient-Interface. The following imports are required for code snippets below: Messages are published to topics. Pub/Sub Lite topics may be created like so: Close must be called to release resources when an AdminClient is no longer required. See https://cloud.google.com/pubsub/lite/docs/topics for more information about how Pub/Sub Lite topics are configured. See https://cloud.google.com/pubsub/lite/docs/locations for the list of locations where Pub/Sub Lite is available. Pub/Sub Lite uses gRPC streams extensively for high throughput. For more differences, see https://pkg.go.dev/cloud.google.com/go/pubsublite/pscompat. To publish messages to a topic, first create a PublisherClient: Then call Publish: Publish queues the message for publishing and returns immediately. When enough messages have accumulated, or enough time has elapsed, the batch of messages is sent to the Pub/Sub Lite service. Thresholds for batching can be configured in PublishSettings. Publish returns a PublishResult, which behaves like a future; its Get method blocks until the message has been sent (or has failed to be sent) to the service: Once you've finished publishing all messages, call Stop to flush all messages to the service and close gRPC streams. The PublisherClient can no longer be used after it has been stopped or has terminated due to a permanent error. PublisherClients are expected to be long-lived and used for the duration of the application, rather than for publishing small batches of messages. Stop must be called to release resources when a PublisherClient is no longer required. See https://cloud.google.com/pubsub/lite/docs/publishing for more information about publishing. To receive messages published to a topic, create a subscription to the topic.
There may be more than one subscription per topic; each message that is published to the topic will be delivered to all of its subscriptions. Pub/Sub Lite subscriptions may be created like so: See https://cloud.google.com/pubsub/lite/docs/subscriptions for more information about how subscriptions are configured. To receive messages for a subscription, first create a SubscriberClient: Messages are then consumed from a subscription via callback. The callback may be invoked concurrently by multiple goroutines (one per partition that the subscriber client is connected to). Receive blocks until either the context is canceled or a permanent error occurs. To terminate a call to Receive, cancel its context: Clients must call pubsub.Message.Ack() or pubsub.Message.Nack() for every message received. Pub/Sub Lite does not have ACK deadlines. Pub/Sub Lite also does not actually have the concept of NACK. The default behavior terminates the SubscriberClient. In Pub/Sub Lite, only a single subscriber for a given subscription is connected to any partition at a time, and there is no other client that may be able to handle messages. See https://cloud.google.com/pubsub/lite/docs/subscribing for more information about receiving messages. Pub/Sub Lite utilizes gRPC streams extensively. gRPC allows a maximum of 100 streams per connection. Internally, the library uses a default connection pool size of 8, which supports up to 800 topic partitions. To alter the connection pool size, pass a ClientOption to pscompat.NewPublisherClient and pscompat.NewSubscriberClient:
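A minimal end-to-end sketch of the publish and receive flows described above, assuming a topic and subscription that already exist; the project, zone, topic, and subscription names are placeholders:

    package main

    import (
        "context"
        "fmt"
        "log"

        "cloud.google.com/go/pubsub"
        "cloud.google.com/go/pubsublite/pscompat"
    )

    func main() {
        ctx := context.Background()

        const topic = "projects/my-project/locations/us-central1-a/topics/my-topic"
        const subscription = "projects/my-project/locations/us-central1-a/subscriptions/my-subscription"

        // Publish a message. Publish queues the message and returns immediately;
        // Get blocks until the message has been sent to the service.
        publisher, err := pscompat.NewPublisherClient(ctx, topic)
        if err != nil {
            log.Fatal(err)
        }
        result := publisher.Publish(ctx, &pubsub.Message{Data: []byte("hello")})
        if _, err := result.Get(ctx); err != nil {
            log.Fatal(err)
        }
        // Stop flushes outstanding messages and closes gRPC streams.
        publisher.Stop()

        // Receive messages. The callback may be invoked concurrently, one
        // goroutine per partition the subscriber is connected to.
        subscriber, err := pscompat.NewSubscriberClient(ctx, subscription)
        if err != nil {
            log.Fatal(err)
        }
        cctx, cancel := context.WithCancel(ctx)
        err = subscriber.Receive(cctx, func(ctx context.Context, m *pubsub.Message) {
            fmt.Printf("received: %s\n", m.Data)
            m.Ack()  // every message must be acked
            cancel() // stop after the first message, for this sketch
        })
        if err != nil && err != context.Canceled {
            log.Fatal(err)
        }
    }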
Package rds provides the API client, operations, and parameter types for Amazon Relational Database Service. Amazon Relational Database Service (Amazon RDS) is a web service that makes it easier to set up, operate, and scale a relational database in the cloud. It provides cost-efficient, resizeable capacity for an industry-standard relational database and manages common database administration tasks, freeing up developers to focus on what makes their applications and businesses unique. Amazon RDS gives you access to the capabilities of a MySQL, MariaDB, PostgreSQL, Microsoft SQL Server, Oracle, Db2, or Amazon Aurora database server. These capabilities mean that the code, applications, and tools you already use today with your existing databases work with Amazon RDS without modification. Amazon RDS automatically backs up your database and maintains the database software that powers your DB instance. Amazon RDS is flexible: you can scale your DB instance's compute resources and storage capacity to meet your application's demand. As with all Amazon Web Services, there are no up-front investments, and you pay only for the resources you use. This interface reference for Amazon RDS contains documentation for a programming or command line interface you can use to manage Amazon RDS. Amazon RDS is asynchronous, which means that some interfaces might require techniques such as polling or callback functions to determine when a command has been applied. In this reference, the parameter descriptions indicate whether a command is applied immediately, on the next instance reboot, or during the maintenance window. The reference structure is as follows, followed by some related topics from the user guide. Amazon RDS API Reference: For the alphabetical list of API actions, see API Actions. For the alphabetical list of data types, see Data Types. For a list of common query parameters, see Common Parameters. For descriptions of the error codes, see Common Errors. Amazon RDS User Guide: For a summary of the Amazon RDS interfaces, see Available RDS Interfaces. For more information about how to use the Query API, see Using the Query API.
Package rod is a high-level driver directly based on DevTools Protocol. This example opens https://github.com/, searches for "git", and then gets the header element which gives the description for Git. Rod uses https://golang.org/pkg/context to handle cancellations for IO blocking operations; most of the time it's a timeout. Context will be recursively passed to all sub-methods. For example, methods like Page.Context(ctx) will return a clone of the page with the ctx; all the methods of the returned page will use the ctx if they have IO blocking operations. Page.Timeout or Page.WithCancel is just a shortcut for Page.Context. Of course, Browser or Element works the same way. Shows how we can further customize the browser with the launcher library. Usually you use the launcher lib to set the browser's command line flags (switches). Doc for flags: https://peter.sh/experiments/chromium-command-line-switches Shows how to change the retry/polling options that are used to query elements. This is useful when you want to customize the element query retry logic. When rod doesn't have a feature that you need, you can easily call the cdp to achieve it. List of cdp API: https://github.com/go-rod/rod/tree/main/lib/proto Shows how to disable headless mode and debug. Rod provides a lot of debug options; you can set them with setter methods or use environment variables. Doc for environment variables: https://pkg.go.dev/github.com/go-rod/rod/lib/defaults We use "Must" prefixed functions to write example code. But in production you may want to use the no-prefix version of them. About why we use "Must" as the prefix, it's similar to https://golang.org/pkg/regexp/#MustCompile Shows how to share a remote object reference between two Eval. Shows how to listen for events. Shows how to intercept requests and modify both the request and the response. The entire process of hijacking one request: The --req-> and --res-> are the parts that can be modified. Shows how to handle multiple results of an action, such as when you log in to a page and the result can be success or a wrong password. Example_search shows how to use Search to get an element inside nested iframes or shadow DOMs. It works the same as https://developers.google.com/web/tools/chrome-devtools/dom#search Shows how to update the state of the current page. In this example we enable the network domain. Rod uses the mouse cursor to simulate clicks, so if a button is moving because of animation, the click may not work as expected. We usually use WaitStable to make sure the target isn't changing anymore. When you want to wait for an ajax request to complete, this example will be useful.
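A minimal sketch of opening a page and reading an element with the "Must" helpers; the URL and selector here are only illustrative:

    package main

    import (
        "fmt"

        "github.com/go-rod/rod"
    )

    func main() {
        // Connect to a browser; the launcher library starts one automatically.
        browser := rod.New().MustConnect()
        defer browser.MustClose()

        // Open the page and wait for it to load.
        page := browser.MustPage("https://github.com").MustWaitLoad()

        // Read the text of the first h1 element on the page.
        fmt.Println(page.MustElement("h1").MustText())
    }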
Package ps provides an API for finding and listing processes in a platform-agnostic way. NOTE: If you're reading these docs online via GoDocs or some other system, you might only see the Unix docs. This project makes heavy use of platform-specific implementations. We recommend reading the source if you are interested.
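A minimal sketch of listing processes with this package; the import path is assumed to be github.com/mitchellh/go-ps:

    package main

    import (
        "fmt"
        "log"

        ps "github.com/mitchellh/go-ps"
    )

    func main() {
        // Processes returns every process visible on the current system.
        procs, err := ps.Processes()
        if err != nil {
            log.Fatal(err)
        }
        for _, p := range procs {
            fmt.Printf("%d\t%s\n", p.Pid(), p.Executable())
        }
    }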
Package seelog implements logging functionality with flexible dispatching, filtering, and formatting. To create a logger, use one of the following constructors: Example: The "defer" line is important because if you are using asynchronous logger behavior, without this line you may end up losing some messages when you close your application because they are processed in another non-blocking goroutine. To avoid that, you explicitly defer flushing all messages before closing. A logger created using one of the LoggerFrom* funcs can be used directly by calling one of the main log funcs. Example: Having loggers as variables is convenient if you are writing your own package with internal logging or if you have several loggers with different options. But for most standalone apps it is more convenient to use package level funcs and vars. There is a package level var 'Current' made for it. You can replace it with another logger using 'ReplaceLogger' and then use package level funcs: The last lines do the same as In this example the 'Current' logger was replaced using a 'ReplaceLogger' call and became equal to the 'logger' variable created from config. This way you are able to use package level funcs instead of passing the logger variable. The main point of seelog is to configure the logger via config files rather than code. The configuration is read by the LoggerFrom* funcs. These funcs read xml configuration from different sources and try to create a logger using it. All the configuration features are covered in detail in the official wiki: https://github.com/cihub/seelog/wiki. There are many sections covering different aspects of seelog, but the most important for understanding configs are: After you understand these concepts, check the 'Reference' section on the main wiki page to get the up-to-date list of dispatchers, receivers, formats, and logger types. Here is an example config with all these features: This config represents a logger with adaptive timeout between log messages (check logger types reference) which logs to console, all.log, and errors.log depending on the log level. Its output formats also depend on log level. This logger will only use log level 'debug' and higher (minlevel is set) for all files with names that don't start with 'test'. For files starting with 'test' this logger prohibits all levels below 'error'. Although configuration using code is not recommended, it is sometimes needed and it is possible to do with seelog. Basically, what you need to do to get started is to create constraints, exceptions and a dispatcher tree (same as with config). Most of the New* functions in this package are used to provide such capabilities. Here is an example of configuration in code that demonstrates an async loop logger that logs to a simple split dispatcher with a console receiver using a specified format and is filtered using top-level min-max constraints and one exception for the 'main.go' file. So, this is basically a demonstration of configuration of most of the features: To learn seelog features faster you should check the examples package: https://github.com/cihub/seelog-examples. It contains many example configs and use cases.
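A minimal sketch of creating a logger from an XML config file and installing it as the package-level 'Current' logger; the config file name is an assumption:

    package main

    import log "github.com/cihub/seelog"

    func main() {
        // Flush buffered messages before exit; important for asynchronous loggers.
        defer log.Flush()

        logger, err := log.LoggerFromConfigAsFile("seelog.xml")
        if err != nil {
            panic(err)
        }
        // Replace the package-level 'Current' logger so the package funcs use it.
        if err := log.ReplaceLogger(logger); err != nil {
            panic(err)
        }

        log.Info("service started")
        log.Debugf("answer is %d", 42)
    }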
Package gomail provides a simple interface to compose emails and to mail them efficiently. More info on Github: https://github.com/go-gomail/gomail A daemon that listens to a channel and sends all incoming messages. Efficiently send a customized newsletter to a list of recipients. Send an email using a local SMTP server. Send an email using an API or postfix.
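A minimal sketch of composing and sending a single message over SMTP; the addresses, server, and credentials are placeholders:

    package main

    import "gopkg.in/gomail.v2"

    func main() {
        m := gomail.NewMessage()
        m.SetHeader("From", "alex@example.com")
        m.SetHeader("To", "bob@example.com")
        m.SetHeader("Subject", "Hello!")
        m.SetBody("text/html", "Hello <b>Bob</b>!")

        // Dial the SMTP server and send the message.
        d := gomail.NewDialer("smtp.example.com", 587, "user", "password")
        if err := d.DialAndSend(m); err != nil {
            panic(err)
        }
    }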
Package kstatus contains libraries for computing status of kubernetes resources. Status: Get status and/or conditions for resources based on resources already read from a cluster, i.e. it will not fetch resources from a cluster. Wait: Get status and/or conditions for resources by fetching them from a cluster. This supports specifying a set of resources as an Inventory or as a list of manifests/unstructureds. This also supports polling the state of resources until they all reach a specific status. A common use case for this can be to wait for a set of resources to all finish reconciling after an apply.
Package cloudtrail provides the API client, operations, and parameter types for AWS CloudTrail. This is the CloudTrail API Reference. It provides descriptions of actions, data types, common parameters, and common errors for CloudTrail. CloudTrail is a web service that records Amazon Web Services API calls for your Amazon Web Services account and delivers log files to an Amazon S3 bucket. The recorded information includes the identity of the user, the start time of the Amazon Web Services API call, the source IP address, the request parameters, and the response elements returned by the service. As an alternative to the API, you can use one of the Amazon Web Services SDKs, which consist of libraries and sample code for various programming languages and platforms (Java, Ruby, .NET, iOS, Android, etc.). The SDKs provide programmatic access to CloudTrail. For example, the SDKs handle cryptographically signing requests, managing errors, and retrying requests automatically. For more information about the Amazon Web Services SDKs, including how to download and install them, see Tools to Build on Amazon Web Services. See the CloudTrail User Guide for information about the data that is included with each Amazon Web Services API call listed in the log files.
Package redis is a client for the Redis database. The Redigo FAQ (https://github.com/gomodule/redigo/wiki/FAQ) contains more documentation about this package. The Conn interface is the primary interface for working with Redis. Applications create connections by calling the Dial, DialWithTimeout or NewConn functions. In the future, functions will be added for creating sharded and other types of connections. The application must call the connection Close method when the application is done with the connection. The Conn interface has a generic method for executing Redis commands: The Redis command reference (http://redis.io/commands) lists the available commands. An example of using the Redis APPEND command is: The Do method converts command arguments to bulk strings for transmission to the server as follows: Redis command reply types are represented using the following Go types: Use type assertions or the reply helper functions to convert from interface{} to the specific Go type for the command result. Connections support pipelining using the Send, Flush and Receive methods. Send writes the command to the connection's output buffer. Flush flushes the connection's output buffer to the server. Receive reads a single reply from the server. The following example shows a simple pipeline. The Do method combines the functionality of the Send, Flush and Receive methods. The Do method starts by writing the command and flushing the output buffer. Next, the Do method receives all pending replies including the reply for the command just sent by Do. If any of the received replies is an error, then Do returns the error. If there are no errors, then Do returns the last reply. If the command argument to the Do method is "", then the Do method will flush the output buffer and receive pending replies without sending a command. Use the Send and Do methods to implement pipelined transactions. Connections support one concurrent caller to the Receive method and one concurrent caller to the Send and Flush methods. No other concurrency is supported, including concurrent calls to the Do and Close methods. For full concurrent access to Redis, use the thread-safe Pool to get, use and release a connection from within a goroutine. Connections returned from a Pool have the concurrency restrictions described in the previous paragraph. Use the Send, Flush and Receive methods to implement Pub/Sub subscribers. The PubSubConn type wraps a Conn with convenience methods for implementing subscribers. The Subscribe, PSubscribe, Unsubscribe and PUnsubscribe methods send and flush a subscription management command. The Receive method converts a pushed message to convenient types for use in a type switch. The Bool, Int, Bytes, String, Strings and Values functions convert a reply to a value of a specific type. To allow convenient wrapping of calls to the connection Do and Receive methods, the functions take a second argument of type error. If the error is non-nil, then the helper function returns the error. If the error is nil, the function converts the reply to the specified type: The Scan function converts elements of an array reply to Go types: Connection methods return error replies from the server as type redis.Error. Call the connection Err() method to determine if the connection encountered a non-recoverable error such as a network error or protocol parsing error. If Err() returns a non-nil value, then the connection is not usable and should be closed.
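A minimal sketch of Dial, Do, the reply helpers, and pipelining with Send/Flush/Receive; the server address and keys are placeholders:

    package main

    import (
        "fmt"
        "log"

        "github.com/gomodule/redigo/redis"
    )

    func main() {
        c, err := redis.Dial("tcp", "localhost:6379")
        if err != nil {
            log.Fatal(err)
        }
        defer c.Close()

        // Do executes a command and returns the reply; Int converts it.
        n, err := redis.Int(c.Do("APPEND", "key", "value"))
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println("length:", n)

        // Pipelining: Send buffers commands, Flush writes them, Receive reads replies.
        c.Send("SET", "foo", "bar")
        c.Send("GET", "foo")
        c.Flush()
        if _, err := c.Receive(); err != nil { // reply from SET
            log.Fatal(err)
        }
        v, err := redis.String(c.Receive()) // reply from GET
        fmt.Println(v, err)
    }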
This example implements ZPOP as described at http://redis.io/topics/transactions using WATCH/MULTI/EXEC and scripting.
Package elasticsearchservice provides the API client, operations, and parameter types for Amazon Elasticsearch Service. Use the Amazon Elasticsearch Configuration API to create, configure, and manage Elasticsearch domains. For sample code that uses the Configuration API, see the Amazon Elasticsearch Service Developer Guide. The guide also contains sample code for sending signed HTTP requests to the Elasticsearch APIs. The endpoint for configuration service requests is region-specific: es.region.amazonaws.com. For example, es.us-east-1.amazonaws.com. For a current list of supported regions and endpoints, see Regions and Endpoints.
Package validator implements value validations for structs and individual fields based on tags. It can also handle Cross Field and Cross Struct validation for nested structs. Validate A simple example usage: The error can be used like so Both StructErrors and FieldError implement the Error interface but its intended use is for development + debugging, not a production error message. Why not a better error message? Because this library intends for you to handle your own error messages. Why should I handle my own errors? Many reasons; for us, building an internationalized application, I needed to know the field and what validation failed so that I could provide an error in the user's specific language. The hierarchical error structure is hard to work with sometimes. Agreed, the Flatten function to the rescue! Flatten will return a map of FieldErrors, but the field name will be namespaced. Custom functions can be added. Cross Field Validation can be implemented, for example Start & End Date range validation. Multiple validators on a field will process in the order defined. Bad Validator definitions are not handled by the library. NOTE: Baked In Cross field validation only compares fields on the same struct; if cross field + cross struct validation is needed, your own custom validator should be implemented. NOTE2: comma is the default separator of validation tags; if you wish to have a comma included within the parameter, i.e. excludesall=, you will need to use the UTF-8 hex representation 0x2C, which is replaced in the code as a comma, so the above will become excludesall=0x2C Here is a list of the current built-in validators: Validator notes: This package panics when bad input is provided; this is by design, bad code like that should not make it to production.
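A sketch of the tag style described above. The struct, field names, and tag names here are illustrative only; the exact set of baked-in validators and the validation entry point differ between versions of this package:

    package main

    import (
        "fmt"
        "time"
    )

    // SignupRequest is a hypothetical request type using validation tags.
    type SignupRequest struct {
        Email    string    `validate:"required,email"`
        Password string    `validate:"required,min=8"`
        Age      int       `validate:"gte=18,lte=130"`
        Start    time.Time `validate:"required"`
        // Cross-field validation on the same struct: End must be after Start.
        End time.Time `validate:"gtfield=Start"`
    }

    func main() {
        // Validation itself is performed by the package's validate call,
        // whose signature depends on the library version in use.
        fmt.Printf("%+v\n", SignupRequest{})
    }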
Package securityhub provides the API client, operations, and parameter types for AWS SecurityHub. Security Hub provides you with a comprehensive view of your security state in Amazon Web Services and helps you assess your Amazon Web Services environment against security industry standards and best practices. Security Hub collects security data across Amazon Web Services accounts, Amazon Web Services services, and supported third-party products and helps you analyze your security trends and identify the highest priority security issues. To help you manage the security state of your organization, Security Hub supports multiple security standards. These include the Amazon Web Services Foundational Security Best Practices (FSBP) standard developed by Amazon Web Services, and external compliance frameworks such as the Center for Internet Security (CIS), the Payment Card Industry Data Security Standard (PCI DSS), and the National Institute of Standards and Technology (NIST). Each standard includes several security controls, each of which represents a security best practice. Security Hub runs checks against security controls and generates control findings to help you assess your compliance against security best practices. In addition to generating control findings, Security Hub also receives findings from other Amazon Web Services services, such as Amazon GuardDuty and Amazon Inspector, and supported third-party products. This gives you a single pane of glass into a variety of security-related issues. You can also send Security Hub findings to other Amazon Web Services services and supported third-party products. Security Hub offers automation features that help you triage and remediate security issues. For example, you can use automation rules to automatically update critical findings when a security check fails. You can also leverage the integration with Amazon EventBridge to trigger automatic responses to specific findings. This guide, the Security Hub API Reference, provides information about the Security Hub API. This includes supported resources, HTTP methods, parameters, and schemas. If you're new to Security Hub, you might find it helpful to also review the Security Hub User Guide. The user guide explains key concepts and provides procedures that demonstrate how to use Security Hub features. It also provides information about topics such as integrating Security Hub with other Amazon Web Services services. In addition to interacting with Security Hub by making calls to the Security Hub API, you can use a current version of an Amazon Web Services command line tool or SDK. Amazon Web Services provides tools and SDKs that consist of libraries and sample code for various languages and platforms, such as PowerShell, Java, Go, Python, C++, and .NET. These tools and SDKs provide convenient, programmatic access to Security Hub and other Amazon Web Services services. They also handle tasks such as signing requests, managing errors, and retrying requests automatically. For information about installing and using the Amazon Web Services tools and SDKs, see Tools to Build on Amazon Web Services. With the exception of operations that are related to central configuration, Security Hub API requests are executed only in the Amazon Web Services Region that is currently active or in the specific Amazon Web Services Region that you specify in your request. Any configuration or settings change that results from the operation is applied only to that Region.
To make the same change in other Regions, call the same API operation in each Region in which you want to apply the change. When you use central configuration, API requests for enabling Security Hub, standards, and controls are executed in the home Region and all linked Regions. For a list of central configuration operations, see the Central configuration terms and concepts section of the Security Hub User Guide. The following throttling limits apply to Security Hub API operations. BatchEnableStandards - RateLimit of 1 request per second. BurstLimit of 1 request per second. GetFindings - RateLimit of 3 requests per second. BurstLimit of 6 requests per second. BatchImportFindings - RateLimit of 10 requests per second. BurstLimit of 30 requests per second. BatchUpdateFindings - RateLimit of 10 requests per second. BurstLimit of 30 requests per second. UpdateStandardsControl - RateLimit of 1 request per second. BurstLimit of 5 requests per second. All other operations - RateLimit of 10 requests per second. BurstLimit of 30 requests per second.
Package semver provides the ability to work with Semantic Versions (http://semver.org) in Go. Specifically it provides the ability to: There are two functions that can parse semantic versions. The `StrictNewVersion` function only parses valid version 2 semantic versions as outlined in the specification. The `NewVersion` function attempts to coerce a version into a semantic version and parse it. For example, if there is a leading v or a version listed without all 3 parts (e.g. 1.2) it will attempt to coerce it into a valid semantic version (e.g., 1.2.0). In both cases a `Version` object is returned that can be sorted, compared, and used in constraints. When parsing a version an optional error can be returned if there is an issue parsing the version. For example, The version object has methods to get the parts of the version, compare it to other versions, convert the version back into a string, and get the original string. For more details please see the documentation at https://godoc.org/github.com/Masterminds/semver. A set of versions can be sorted using the `sort` package from the standard library. For example, There are two methods for comparing versions. One uses comparison methods on `Version` instances and the other is using Constraints. There are some important differences to note between these two methods of comparison. There are differences between the two methods of checking versions because the comparison methods on `Version` follow the specification while comparison ranges are not part of the specification. Different packages and tools have taken it upon themselves to come up with range rules. This has resulted in differences. For example, npm/js and Cargo/Rust follow similar patterns, while PHP has a different pattern for ^. The comparison features in this package follow the npm/js and Cargo/Rust lead because applications using it have followed similar patterns with their versions. Checking a version against version constraints is one of the most featureful parts of the package. There are two elements to the comparisons. First, a comparison string is a list of comma or space separated AND comparisons. These are then separated by || (OR) comparisons. For example, `">= 1.2 < 3.0.0 || >= 4.2.3"` is looking for a comparison that's greater than or equal to 1.2 and less than 3.0.0 or is greater than or equal to 4.2.3. This can also be written as `">= 1.2, < 3.0.0 || >= 4.2.3"` The basic comparisons are: There are multiple methods to handle ranges and the first is hyphen ranges. These look like: The `x`, `X`, and `*` characters can be used as a wildcard character. This works for all comparison operators. When used on the `=` operator it falls back to the tilde operation. For example, Tilde Range Comparisons (Patch) The tilde (`~`) comparison operator is for patch level ranges when a minor version is specified and major level changes when the minor number is missing. For example, Caret Range Comparisons (Major) The caret (`^`) comparison operator is for major level changes once a stable (1.0.0) release has occurred. Prior to a 1.0.0 release the minor version acts as the API stability level. This is useful when comparing API versions, as a major change is API breaking. For example, In addition to testing a version against a constraint, a version can be validated against a constraint. When validation fails a slice of errors containing why a version didn't meet the constraint is returned. For example,
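The following is a rough sketch of parsing, sorting, constraint checking, and Validate, assuming the github.com/Masterminds/semver/v3 module path; the constraint strings are illustrative:

    package main

    import (
        "fmt"
        "sort"

        "github.com/Masterminds/semver/v3"
    )

    func main() {
        // NewVersion coerces loose input such as "v1.2" into 1.2.0.
        v, err := semver.NewVersion("v1.2")
        if err != nil {
            panic(err)
        }

        // AND comparisons are comma or space separated; OR comparisons use ||.
        c, err := semver.NewConstraint(">= 1.2, < 3.0.0 || >= 4.2.3")
        if err != nil {
            panic(err)
        }
        fmt.Println(c.Check(v)) // true

        // Validate reports why a version fails a constraint.
        ok, errs := c.Validate(semver.MustParse("1.1.0"))
        fmt.Println(ok, errs)

        // A Collection of versions sorts with the standard sort package.
        vs := semver.Collection{
            semver.MustParse("1.2.3"),
            semver.MustParse("1.0.0"),
            semver.MustParse("1.3.0"),
        }
        sort.Sort(vs)
        fmt.Println(vs)
    }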
Package semver provides the ability to work with Semantic Versions (http://semver.org) in Go. Specifically it provides the ability to: To parse a semantic version use the `NewVersion` function. For example, If there is an error the version wasn't parseable. The version object has methods to get the parts of the version, compare it to other versions, convert the version back into a string, and get the original string. For more details please see the documentation at https://godoc.org/github.com/Masterminds/semver. A set of versions can be sorted using the `sort` package from the standard library. For example, Checking a version against version constraints is one of the most featureful parts of the package. There are two elements to the comparisons. First, a comparison string is a list of comma-separated AND comparisons. These are then separated by || (OR) comparisons. For example, `">= 1.2, < 3.0.0 || >= 4.2.3"` is looking for a comparison that's greater than or equal to 1.2 and less than 3.0.0 or is greater than or equal to 4.2.3. The basic comparisons are: There are multiple methods to handle ranges and the first is hyphen ranges. These look like: The `x`, `X`, and `*` characters can be used as a wildcard character. This works for all comparison operators. When used on the `=` operator it falls back to the patch level comparison (see tilde below). For example, Tilde Range Comparisons (Patch) The tilde (`~`) comparison operator is for patch level ranges when a minor version is specified and major level changes when the minor number is missing. For example, Caret Range Comparisons (Major) The caret (`^`) comparison operator is for major level changes. This is useful when comparing API versions, as a major change is API breaking. For example,
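A sketch of the Version comparison methods and a caret range, assuming the original (pre-v3) module path:

    package main

    import (
        "fmt"

        "github.com/Masterminds/semver"
    )

    func main() {
        a, _ := semver.NewVersion("1.2.3")
        b, _ := semver.NewVersion("1.3.0")

        // Comparison methods on Version follow the specification.
        fmt.Println(a.LessThan(b))    // true
        fmt.Println(a.GreaterThan(b)) // false
        fmt.Println(a.Equal(b))       // false

        // Caret ranges allow non-breaking changes: ^1.2.3 means >=1.2.3 <2.0.0.
        c, _ := semver.NewConstraint("^1.2.3")
        fmt.Println(c.Check(b)) // true
    }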
Package ses provides the API client, operations, and parameter types for Amazon Simple Email Service. This document contains reference information for the Amazon Simple Email Service (Amazon SES) API, version 2010-12-01. This document is best used in conjunction with the Amazon SES Developer Guide. For a list of Amazon SES endpoints to use in service requests, see Regions and Amazon SES in the Amazon SES Developer Guide. This documentation contains reference information related to the following: Amazon SES API Actions Amazon SES API Data Types Common Parameters Common Errors
Package log15 provides an opinionated, simple toolkit for best-practice logging that is both human and machine readable. It is modeled after the standard library's io and net/http packages. This package requires you to log only key/value pairs. Keys must be strings. Values may be any type that you like. The default output format is logfmt, but you may also choose to use JSON instead if that suits you. Here's how you log: This will output a line that looks like: To get started, you'll want to import the library: Now you're ready to start logging: Because recording a human-meaningful message is common and good practice, the first argument to every logging method is the value to the *implicit* key 'msg'. Additionally, the level you choose for a message will be automatically added with the key 'lvl', and so will the current timestamp with key 't'. You may supply any additional context as a set of key/value pairs to the logging function. log15 allows you to favor terseness, ordering, and speed over safety. This is a reasonable tradeoff for logging functions. You don't need to explicitly state keys/values, log15 understands that they alternate in the variadic argument list: If you really do favor your type-safety, you may choose to pass a log.Ctx instead: Frequently, you want to add context to a logger so that you can track actions associated with it. An http request is a good example. You can easily create new loggers that have context that is automatically included with each log line: This will output a log line that includes the path context that is attached to the logger: The Handler interface defines where log lines are printed to and how they are formatted. Handler is a single interface that is inspired by net/http's handler interface: Handlers can filter records, format them, or dispatch to multiple other Handlers. This package implements a number of Handlers for common logging patterns that are easily composed to create flexible, custom logging structures. Here's an example handler that prints logfmt output to Stdout: Here's an example handler that defers to two other handlers. One handler only prints records from the rpc package in logfmt to standard out. The other prints records at Error level or above in JSON formatted output to the file /var/log/service.json. This package implements three Handlers that add debugging information to the context: CallerFileHandler, CallerFuncHandler and CallerStackHandler. Here's an example that adds the source file and line number of each logging call to the context. This will output a line that looks like: Here's an example that logs the call stack rather than just the call site. This will output a line that looks like: The "%+v" format instructs the handler to include the path of the source file relative to the compile time GOPATH. The github.com/go-stack/stack package documents the full list of formatting verbs and modifiers available. The Handler interface is so simple that it's also trivial to write your own. Let's create an example handler which tries to write to one handler, but if that fails it falls back to writing to another handler and includes the error that it encountered when trying to write to the primary. This might be useful when trying to log over a network socket, but if that fails you want to log those records to a file on disk. This pattern is so useful that a generic version that handles an arbitrary number of Handlers is included as part of this library called FailoverHandler.
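A minimal sketch that ties the pieces above together: logging with the implicit msg key, a contextual logger, and a composed handler; the file path is a placeholder:

    package main

    import (
        "os"

        log "github.com/inconshreveable/log15"
    )

    func main() {
        // The first argument is the implicit 'msg'; the rest alternate key/value.
        log.Info("page accessed", "path", "/org", "user_id", 9001)

        // A contextual logger includes its key/value pairs with every record.
        srvlog := log.New("module", "app/server")
        srvlog.Warn("abnormal conn rate", "rate", 0.500, "low", 0.100, "high", 0.800)

        // Compose handlers: logfmt to stdout plus JSON for Error and above to a file.
        srvlog.SetHandler(log.MultiHandler(
            log.StreamHandler(os.Stdout, log.LogfmtFormat()),
            log.LvlFilterHandler(log.LvlError,
                log.Must.FileHandler("/var/log/service.json", log.JsonFormat())),
        ))
        srvlog.Error("write failed", "err", "disk full")
    }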
Sometimes, you want to log values that are extremely expensive to compute, but you don't want to pay the price of computing them if you haven't turned up your logging level to a high level of detail. This package provides a simple type to annotate a logging operation that you want to be evaluated lazily, just when it is about to be logged, so that it would not be evaluated if an upstream Handler filters it out. Just wrap any function which takes no arguments with the log.Lazy type. For example: If this message is not logged for any reason (like logging at the Error level), then factorRSAKey is never evaluated. The same log.Lazy mechanism can be used to attach context to a logger which you want to be evaluated when the message is logged, but not when the logger is created. For example, let's imagine a game where you have Player objects: You always want to log a player's name and whether they're alive or dead, so when you create the player object, you might do: Only now, even after a player has died, the logger will still report they are alive because the logging context is evaluated when the logger was created. By using the Lazy wrapper, we can defer the evaluation of whether the player is alive or not to each log message, so that the log records will reflect the player's current state no matter when the log message is written: If log15 detects that stdout is a terminal, it will configure the default handler for it (which is log.StdoutHandler) to use TerminalFormat. This format logs records nicely for your terminal, including color-coded output based on log level. Because log15 allows you to step around the type system, there are a few ways you can specify invalid arguments to the logging functions. You could, for example, wrap something that is not a zero-argument function with log.Lazy or pass a context key that is not a string. Since logging libraries are typically the mechanism by which errors are reported, it would be onerous for the logging functions to return errors. Instead, log15 handles errors by making these guarantees to you: - Any log record containing an error will still be printed with the error explained to you as part of the log record. - Any log record containing an error will include the context key LOG15_ERROR, enabling you to easily (and if you like, automatically) detect if any of your logging calls are passing bad values. Understanding this, you might wonder why the Handler interface can return an error value in its Log method. Handlers are encouraged to return errors only if they fail to write their log records out to an external source like if the syslog daemon is not responding. This allows the construction of useful handlers which cope with those failures like the FailoverHandler. log15 is intended to be useful for library authors as a way to provide configurable logging to users of their library. Best practice for use in a library is to always disable all output for your logger by default and to provide a public Logger instance that consumers of your library can configure. Like so: Users of your library may then enable it if they like: The ability to attach context to a logger is a powerful one. Where should you do it and why? I favor embedding a Logger directly into any persistent object in my application and adding unique, tracing context keys to it. For instance, imagine I am writing a web browser: When a new tab is created, I assign a logger to it with the url of the tab as context so it can easily be traced through the logs.
Now, whenever we perform any operation with the tab, we'll log with its embedded logger and it will include the tab title automatically: There's only one problem. What if the tab url changes? We could use log.Lazy to make sure the current url is always written, but that would mean that we couldn't trace a tab's full lifetime through our logs after the user navigates to a new URL. Instead, think about what values to attach to your loggers the same way you think about what to use as a key in a SQL database schema. If it's possible to use a natural key that is unique for the lifetime of the object, do so. But otherwise, log15's ext package has a handy RandId function to let you generate what you might call "surrogate keys". They're just random hex identifiers to use for tracing. Back to our Tab example, we would prefer to set up our Logger like so: Now we'll have a unique traceable identifier even across loading new urls, but we'll still be able to see the tab's current url in the log messages. For all Handler functions which can return an error, there is a version of that function which will return no error but panics on failure. They are all available on the Must object. For example: All of the following excellent projects inspired the design of this library: code.google.com/p/log4go github.com/op/go-logging github.com/technoweenie/grohl github.com/Sirupsen/logrus github.com/kr/logfmt github.com/spacemonkeygo/spacelog golang's stdlib, notably io and net/http https://xkcd.com/927/
Package notify implements access to filesystem events. Notify is a high-level abstraction over filesystem watchers like inotify, kqueue, FSEvents, FEN or ReadDirectoryChangesW. Watcher implementations are split into two groups: ones that natively support recursive notifications (FSEvents and ReadDirectoryChangesW) and ones that do not (inotify, kqueue, FEN). For more details see the watcher and recursiveWatcher interfaces in the watcher.go source file. On top of filesystem watchers, notify maintains a watchpoint tree, which provides a strategy for creating and closing filesystem watches and dispatching filesystem events to user channels. An event set is just an event list joined using the bitwise OR operator into a single event value. Both the platform-independent (see Constants) and specific events can be used. Refer to the event_*.go source files for information about the available events. A filesystem watch, or just a watch, is a platform-specific entity which represents a single path registered for notifications for a specific event set. Setting a watch means using platform-specific API calls for creating / initializing said watch. For each watcher the API call is: To rewatch means to either shrink or expand an event set that was previously registered during a watch operation for a particular filesystem watch. A watchpoint is a list of user channel and event set pairs for a particular path (watchpoint tree's node). A single watchpoint can contain multiple different user channels registered to listen for one or more events. A single user channel can be registered in one or more watchpoints, recursive and non-recursive ones as well.
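A minimal sketch of setting a recursive watchpoint and receiving a single event; the import path is assumed to be github.com/rjeczalik/notify:

    package main

    import (
        "log"

        "github.com/rjeczalik/notify"
    )

    func main() {
        // Use a buffered channel so the watcher does not block sending events.
        c := make(chan notify.EventInfo, 1)

        // "./..." sets a recursive watchpoint for create and write events.
        if err := notify.Watch("./...", c, notify.Create, notify.Write); err != nil {
            log.Fatal(err)
        }
        defer notify.Stop(c)

        // Block until a single filesystem event arrives.
        ei := <-c
        log.Println("event:", ei.Event(), "path:", ei.Path())
    }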
Package mongodocstore provides a docstore implementation for MongoDB and MongoDB-compatible services hosted on-premise or by cloud providers, including Amazon DocumentDB and Azure Cosmos DB. For docstore.OpenCollection, mongodocstore registers for the scheme "mongo". The default URL opener will dial a Mongo server using the environment variable "MONGO_SERVER_URL". To customize the URL opener, or for more details on the URL format, see URLOpener. See https://gocloud.dev/concepts/urls/ for background information. mongodocstore uses the unordered BulkWrite call of the underlying driver for writes, and uses Find with a list of document IDs for Get. (These implementation choices are subject to change.) It calls the BeforeDo function once before each call to the underlying driver. The as function passed to the BeforeDo function exposes the following types: mongodocstore exposes the following types for As: MongoDB represents times to millisecond precision, while Go's time.Time type has nanosecond precision. To save time.Times to MongoDB without loss of precision, save the result of calling UnixNano on the time. The official Go driver for MongoDB, go.mongodb.org/mongo-driver/mongo, lowercases struct field names; other docstore drivers do not. This means that you have to choose between interoperating with the MongoDB driver and interoperating with other docstore drivers. See Options.LowercaseFields for more information.
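A minimal sketch of opening a collection through the "mongo" URL scheme and writing a document; the database, collection, and id_field values are placeholders, and MONGO_SERVER_URL must point at a reachable server:

    package main

    import (
        "context"
        "log"

        "gocloud.dev/docstore"
        _ "gocloud.dev/docstore/mongodocstore" // registers the "mongo" scheme
    )

    func main() {
        ctx := context.Background()

        // Opens collection "orders" in database "my-db" on MONGO_SERVER_URL.
        coll, err := docstore.OpenCollection(ctx, "mongo://my-db/orders?id_field=ID")
        if err != nil {
            log.Fatal(err)
        }
        defer coll.Close()

        type Order struct {
            ID   string
            Item string
        }
        if err := coll.Put(ctx, &Order{ID: "order-1", Item: "pencil"}); err != nil {
            log.Fatal(err)
        }
    }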
Package neptune provides the API client, operations, and parameter types for Amazon Neptune. Amazon Neptune is a fast, reliable, fully-managed graph database service that makes it easy to build and run applications that work with highly connected datasets. The core of Amazon Neptune is a purpose-built, high-performance graph database engine optimized for storing billions of relationships and querying the graph with millisecond latency. Amazon Neptune supports the popular graph models Property Graph and W3C's RDF, and their respective query languages Apache TinkerPop Gremlin and SPARQL, allowing you to easily build queries that efficiently navigate highly connected datasets. Neptune powers graph use cases such as recommendation engines, fraud detection, knowledge graphs, drug discovery, and network security. This interface reference for Amazon Neptune contains documentation for a programming or command line interface you can use to manage Amazon Neptune. Note that Amazon Neptune is asynchronous, which means that some interfaces might require techniques such as polling or callback functions to determine when a command has been applied. In this reference, the parameter descriptions indicate whether a command is applied immediately, on the next instance reboot, or during the maintenance window. The reference structure is as follows, followed by some related topics from the user guide.
Package nutsdb implements a simple, fast, embeddable and persistent key/value store written in pure Go. It supports fully serializable transactions, and it also supports data structures such as list, set, and sorted set. NutsDB currently works on Mac OS, Linux and Windows. NutsDB has the following main types: DB, BPTree, Entry, DataFile and Tx. NutsDB also supports buckets; a bucket is a collection of unique keys that are associated with values. All operations happen inside a Tx. Tx represents a transaction, which can be read-only or read-write. Read-only transactions can read values for a given key, or iterate over a set of key-value pairs (prefix scanning or range scanning). Read-write transactions can also update and delete keys from the DB. See the examples for more usage details.
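A minimal sketch of a read-write and a read-only transaction; the data directory is a placeholder, and the import path reflects the original repository (newer releases live under a different module path):

    package main

    import (
        "log"

        "github.com/xujiajun/nutsdb"
    )

    func main() {
        // Open a DB with the default options and a data directory.
        opt := nutsdb.DefaultOptions
        opt.Dir = "/tmp/nutsdb"
        db, err := nutsdb.Open(opt)
        if err != nil {
            log.Fatal(err)
        }
        defer db.Close()

        bucket := "bucket1"

        // A read-write transaction can update and delete keys.
        if err := db.Update(func(tx *nutsdb.Tx) error {
            return tx.Put(bucket, []byte("key1"), []byte("val1"), 0) // 0 = no TTL
        }); err != nil {
            log.Fatal(err)
        }

        // A read-only transaction can read values for a given key.
        if err := db.View(func(tx *nutsdb.Tx) error {
            e, err := tx.Get(bucket, []byte("key1"))
            if err != nil {
                return err
            }
            log.Printf("key1 = %s", e.Value)
            return nil
        }); err != nil {
            log.Fatal(err)
        }
    }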
Package list implements a doubly linked list. To iterate over a list (where l is a *List):
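For example, a short sketch based on the standard library's own example:

    package main

    import (
        "container/list"
        "fmt"
    )

    func main() {
        // Create a new list and put some numbers in it.
        l := list.New()
        e4 := l.PushBack(4)
        e1 := l.PushFront(1)
        l.InsertBefore(3, e4)
        l.InsertAfter(2, e1)

        // Iterate through the list and print its contents: 1 2 3 4.
        for e := l.Front(); e != nil; e = e.Next() {
            fmt.Println(e.Value)
        }
    }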
Package validator implements value validations based on struct tags. In code it is often necessary to validate that a given value is valid before using it for something. A typical example might be something like this. This is a simple enough example, but it can get significantly more complex, especially when dealing with structs. You get the idea. Package validator allows one to define valid values as struct tags when defining a new struct type. Then validating a variable of type NewUserRequest becomes trivial. Here is the list of validator functions built into the package. Note that there are no tests to prevent conflicting validator parameters. For instance, these fields will never be valid. It is possible to define custom validation functions by using SetValidationFunc. First, one needs to create a validation function. Then one needs to add it to the list of validation funcs and give it a "tag" name. Then it is possible to use the notzz validation tag. This will print "Field A error: value cannot be ZZ" To use parameters, it is very similar. And then the code below should print "Field A error: value cannot be ABC". As well, it is possible to overwrite builtin validation functions. And you can delete a validation function by setting it to nil. Using a non-existing validation func in a field tag will always return false with the error validate.ErrUnknownTag. Finally, package validator also provides a helper function that can be used to validate simple variables/values. In case there is a reason why one would not wish to use tag 'validate' (maybe due to a conflict with a different package), it is possible to tell the package to use a different tag. Then. SetTag is permanent. The new tag name will be used until it is again changed with a new call to SetTag. A way to temporarily use a different tag exists. You may often need to have a different set of validation rules for different situations. In all the examples above, we only used the default validator but you could create a new one and set specific rules for it. For instance, you might use the same struct to decode incoming JSON for a REST API but your needs will change when you're using it to, say, create a new instance in storage vs. when you need to change something. Maybe when creating a new user, you need to make sure all values in the struct are filled, but then you use the same struct to handle incoming requests to, say, change the password, in which case you only need the Username and the Password and don't care for the others. You might use two different validators. It is also possible to do all of that using only the default validator as long as SetTag is always called before calling validator.Validate() or you chain the call with WithTag().
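A minimal sketch of struct-tag validation with this package; the struct and its rules are placeholders:

    package main

    import (
        "fmt"

        "gopkg.in/validator.v2"
    )

    // NewUserRequest defines valid values via 'validate' struct tags.
    type NewUserRequest struct {
        Username string `validate:"min=3,max=40,regexp=^[a-zA-Z]*$"`
        Name     string `validate:"nonzero"`
        Age      int    `validate:"min=21"`
        Password string `validate:"min=8"`
    }

    func main() {
        nur := NewUserRequest{Username: "something", Age: 20}
        // Validate returns nil on success, or a map of field names to errors.
        if errs := validator.Validate(nur); errs != nil {
            fmt.Println(errs)
        }
    }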
Package configservice provides the API client, operations, and parameter types for AWS Config. Config provides a way to keep track of the configurations of all the Amazon Web Services resources associated with your Amazon Web Services account. You can use Config to get the current and historical configurations of each Amazon Web Services resource and also to get information about the relationship between the resources. An Amazon Web Services resource can be an Amazon Elastic Compute Cloud (Amazon EC2) instance, an Elastic Block Store (EBS) volume, an elastic network interface (ENI), or a security group. For a complete list of resources currently supported by Config, see Supported Amazon Web Services resources. You can access and manage Config through the Amazon Web Services Management Console, the Amazon Web Services Command Line Interface (Amazon Web Services CLI), the Config API, or the Amazon Web Services SDKs for Config. This reference guide contains documentation for the Config API and the Amazon Web Services CLI commands that you can use to manage Config. The Config API uses the Signature Version 4 protocol for signing requests. For more information about how to sign a request with this protocol, see Signature Version 4 Signing Process. For detailed information about Config features and their associated actions or commands, as well as how to work with the Amazon Web Services Management Console, see What Is Config in the Config Developer Guide.