Package tview implements rich widgets for terminal-based user interfaces. The widgets provided with this package are useful for data exploration and data entry. The package implements the following widgets: The package also provides Application which is used to poll the event queue and draw widgets on screen. The following is a very basic example showing a box with the title "Hello, world!": First, we create a box primitive with a border and a title. Then we create an application, set the box as its root primitive, and run the event loop. The application exits when the application's Application.Stop function is called or when Ctrl-C is pressed. You will find more demos in the "demos" subdirectory. It also contains a presentation (written using tview) which gives an overview of the different widgets and how they can be used. Throughout this package, styles are specified using the tcell.Style type. Styles specify colors with the tcell.Color type. Functions such as tcell.GetColor, tcell.NewHexColor, and tcell.NewRGBColor can be used to create colors from W3C color names or RGB values. The tcell.Style type also allows you to specify text attributes such as "bold" or "underline" or a URL which some terminals use to display hyperlinks. Almost all strings which are displayed may contain style tags. A style tag's content is always wrapped in square brackets. In its simplest form, a style tag specifies the foreground color of the text. Colors in these tags are W3C color names or six hexadecimal digits following a hash sign ("#"). Examples: A style tag changes the style of the characters following that style tag. There is no style stack and no nesting of style tags. Style tags are used in almost everything from box titles, list text, form item labels, to table cells. In a TextView, this functionality has to be switched on explicitly. See the TextView documentation for more information. A style tag's full format looks like this: Each of the four fields can be left blank and trailing fields can be omitted. (Empty square brackets "[]", however, are not considered style tags.) Fields that are not specified will be left unchanged. A field with just a dash ("-") means "reset to default". You can specify the following flags to turn on certain attributes (some flags may not be supported by your terminal): Use uppercase letters to turn off the corresponding attribute, for example, "B" to turn off bold. Uppercase letters have no effect if the attribute was not previously set. Setting a URL allows you to turn a piece of text into a hyperlink in some terminals. Specify a dash ("-") to mark the end of the hyperlink. Hyperlinks must only contain single-byte characters (e.g. ASCII) and they may not contain bracket characters ("[" or "]"). Examples: In the rare event that you want to display a string such as "[red]" or "[#00ff1a]" without applying its effect, you need to put an opening square bracket before the closing square bracket. Note that the text inside the brackets will be matched less strictly than region or color tags. That is, any character that may be used in color or region tags will be recognized. Examples: You can use the Escape() function to insert brackets automatically where needed. When primitives are instantiated, they are initialized with colors taken from the global Styles variable. You may change this variable to adapt the look and feel of the primitives to your preferred style. Note that most terminals will not report information about their color theme.
This package therefore does not support using the terminal's color theme. The default style is a dark theme and you must change the Styles variable to switch to a light (or other) theme. This package supports all unicode characters supported by your terminal. If your terminal supports mouse events, you can enable mouse support for your application by calling Application.EnableMouse. Note that this may interfere with your terminal's default mouse behavior. Mouse support is disabled by default. Many functions in this package are not thread-safe. For many applications, this is not an issue: If your code makes changes in response to key events, the corresponding callback function will execute in the main goroutine and thus will not cause any race conditions. (Exceptions to this are documented.) If you access your primitives from other goroutines, however, you will need to synchronize execution. The easiest way to do this is to call Application.QueueUpdate or Application.QueueUpdateDraw (see the function documentation for details): One exception to this is the io.Writer interface implemented by TextView. You can safely write to a TextView from any goroutine. See the TextView documentation for details. You can also call Application.Draw from any goroutine without having to wrap it in Application.QueueUpdate. And, as mentioned above, key event callbacks are executed in the main goroutine and thus should not use Application.QueueUpdate as that may lead to deadlocks. It is also not necessary to call Application.Draw from such callbacks as it will be called automatically. All widgets listed above contain the Box type. All of Box's functions are therefore available for all widgets, too. Please note that if you are using the functions of Box on a subclass, they will return a *Box, not the subclass. This is a Golang limitation. So while tview supports method chaining in many places, these chains must be broken when using Box's functions. Example: You will need to call Box.SetBorder separately: All widgets also implement the Primitive interface. The tview package's rendering is based on version 2 of https://github.com/gdamore/tcell. It uses types and constants from that package (e.g. colors, styles, and keyboard values).
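As a minimal sketch of the basic "Hello, world!" pattern described at the beginning of this overview (a box with a border and a title, set as the application's root primitive):

	package main

	import "github.com/rivo/tview"

	func main() {
		// Create a box primitive with a border and a title.
		box := tview.NewBox().
			SetBorder(true).
			SetTitle("Hello, world!")

		// Create an application, set the box as its root primitive,
		// and run the event loop. Ctrl-C (or Application.Stop) exits.
		if err := tview.NewApplication().SetRoot(box, true).Run(); err != nil {
			panic(err)
		}
	}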
Package spanner provides a client for reading and writing to Cloud Spanner databases. See the packages under admin for clients that operate on databases and instances. See https://cloud.google.com/spanner/docs/getting-started/go/ for an introduction to Cloud Spanner and additional help on using this API. See https://godoc.org/cloud.google.com/go for authentication, timeouts, connection pooling and similar aspects of this package. To start working with this package, create a client that refers to the database of interest: Remember to close the client after use to free up the sessions in the session pool. To use an emulator with this library, you can set the SPANNER_EMULATOR_HOST environment variable to the address at which your emulator is running. This will send requests to that address instead of to Cloud Spanner. You can then create and use a client as usual: Two Client methods, Apply and Single, work well for simple reads and writes. As a quick introduction, here we write a new row to the database and read it back: All the methods used above are discussed in more detail below. Every Cloud Spanner row has a unique key, composed of one or more columns. Construct keys with a literal of type Key: The keys of a Cloud Spanner table are ordered. You can specify ranges of keys using the KeyRange type: By default, a KeyRange includes its start key but not its end key. Use the Kind field to specify other boundary conditions: A KeySet represents a set of keys. A single Key or KeyRange can act as a KeySet. Use the KeySets function to build the union of several KeySets: AllKeys returns a KeySet that refers to all the keys in a table: All Cloud Spanner reads and writes occur inside transactions. There are two types of transactions, read-only and read-write. Read-only transactions cannot change the database, do not acquire locks, and may access either the current database state or states in the past. Read-write transactions can read the database before writing to it, and always apply to the most recent database state. The simplest and fastest transaction is a ReadOnlyTransaction that supports a single read operation. Use Client.Single to create such a transaction. You can chain the call to Single with a call to a Read method. When you only want one row whose key you know, use ReadRow. Provide the table name, key, and the columns you want to read: Read multiple rows with the Read method. It takes a table name, KeySet, and list of columns: Read returns a RowIterator. You can call the Do method on the iterator and pass a callback: RowIterator also follows the standard pattern for the Google Cloud Client Libraries: Always call Stop when you finish using an iterator this way, whether or not you iterate to the end. (Failing to call Stop could lead you to exhaust the database's session quota.) To read rows with an index, use ReadUsingIndex. The most general form of reading uses SQL statements. Construct a Statement with NewStatement, setting any parameters using the Statement's Params map: You can also construct a Statement directly with a struct literal, providing your own map of parameters. Use the Query method to run the statement and obtain an iterator: Once you have a Row, via an iterator or a call to ReadRow, you can extract column values in several ways. Pass in a pointer to a Go variable of the appropriate type when you extract a value. 
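As a rough sketch of the read path described above (the database path, the "Accounts" table, its key, and its columns are illustrative placeholders), creating a client and extracting a value from a single row might look like this:

	ctx := context.Background()
	client, err := spanner.NewClient(ctx, "projects/P/instances/I/databases/D")
	if err != nil {
		// TODO: handle error
	}
	defer client.Close()

	// Single-use read-only transaction: read one row by key and
	// extract a column value into a Go variable via a pointer.
	row, err := client.Single().ReadRow(ctx, "Accounts", spanner.Key{"alice"}, []string{"balance"})
	if err != nil {
		// TODO: handle error
	}
	var balance int64
	if err := row.Column(0, &balance); err != nil {
		// TODO: handle error
	}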
You can extract by column position or name: You can extract all the columns at once: Or you can define a Go struct that corresponds to your columns, and extract into that: For Cloud Spanner columns that may contain NULL, use one of the NullXXX types, like NullString: To perform more than one read in a transaction, use ReadOnlyTransaction: You must call Close when you are done with the transaction. Cloud Spanner read-only transactions conceptually perform all their reads at a single moment in time, called the transaction's read timestamp. Once a read has started, you can call ReadOnlyTransaction's Timestamp method to obtain the read timestamp. By default, a transaction will pick the most recent time (a time where all previously committed transactions are visible) for its reads. This provides the freshest data, but may involve some delay. You can often get a quicker response if you are willing to tolerate "stale" data. You can control the read timestamp selected by a transaction by calling the WithTimestampBound method on the transaction before using it. For example, to perform a query on data that is at most one minute stale, use See the documentation of TimestampBound for more details. To write values to a Cloud Spanner database, construct a Mutation. The spanner package has functions for inserting, updating and deleting rows. Except for the Delete methods, which take a Key or KeyRange, each mutation-building function comes in three varieties. One takes lists of columns and values along with the table name: One takes a map from column names to values: And the third accepts a struct value, and determines the columns from the struct field names: To apply a list of mutations to the database, use Apply: If you need to read before writing in a single transaction, use a ReadWriteTransaction. ReadWriteTransactions may be aborted automatically by the backend and need to be retried. You pass in a function to ReadWriteTransaction, and the client will handle the retries automatically. Use the transaction's BufferWrite method to buffer mutations, which will all be executed at the end of the transaction: Cloud Spanner STRUCT (aka STRUCT) values (https://cloud.google.com/spanner/docs/data-types#struct-type) can be represented by a Go struct value. A proto StructType is built from the field types and field tag information of the Go struct. If a field in the struct type definition has a "spanner:<field_name>" tag, then the value of the "spanner" key in the tag is used as the name for that field in the built StructType, otherwise the field name in the struct definition is used. To specify a field with an empty field name in a Cloud Spanner STRUCT type, use the `spanner:""` tag annotation against the corresponding field in the Go struct's type definition. A STRUCT value can contain STRUCT-typed and Array-of-STRUCT typed fields and these can be specified using named struct-typed and []struct-typed fields inside a Go struct. However, embedded struct fields are not allowed. Unexported struct fields are ignored. NULL STRUCT values in Cloud Spanner are typed. A nil pointer to a Go struct value can be used to specify a NULL STRUCT value of the corresponding StructType. Nil and empty slices of a Go STRUCT type can be used to specify NULL and empty array values respectively of the corresponding StructType. A slice of pointers to a Go struct type can be used to specify an array of NULL-able STRUCT values. Spanner supports DML statements like INSERT, UPDATE and DELETE. 
Use ReadWriteTransaction.Update to run DML statements. It returns the number of rows affected. (You can also use ReadWriteTransaction.Query with a DML statement. The first call to Next on the resulting RowIterator will return iterator.Done, and the RowCount field of the iterator will hold the number of affected rows.) For large databases, it may be more efficient to partition the DML statement. Use Client.PartitionedUpdate to run a DML statement in this way. Not all DML statements can be partitioned. This client has been instrumented to use OpenCensus tracing (http://opencensus.io). To enable tracing, see "Enabling Tracing for a Program" at https://godoc.org/go.opencensus.io/trace. OpenCensus tracing requires Go 1.8 or higher.
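The following is a loose sketch, not a definitive recipe, of a read-write transaction combining a buffered mutation with a DML Update; the table and column names are invented for illustration, and client and ctx are reused from the earlier sketch:

	_, err := client.ReadWriteTransaction(ctx, func(ctx context.Context, txn *spanner.ReadWriteTransaction) error {
		// Buffer a mutation; it is applied when the transaction commits.
		if err := txn.BufferWrite([]*spanner.Mutation{
			spanner.Insert("Accounts",
				[]string{"user", "balance"},
				[]interface{}{"alice", int64(10)}),
		}); err != nil {
			return err
		}

		// Run a DML statement; Update reports the number of affected rows.
		stmt := spanner.Statement{SQL: `UPDATE Accounts SET balance = balance + 1 WHERE user = "bob"`}
		rowCount, err := txn.Update(ctx, stmt)
		if err != nil {
			return err
		}
		_ = rowCount
		return nil
	})
	if err != nil {
		// TODO: handle error
	}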
Package datastore provides a client for Google Cloud Datastore. See https://godoc.org/cloud.google.com/go for authentication, timeouts, connection pooling and similar aspects of this package. Entities are the unit of storage and are associated with a key. A key consists of an optional parent key, a string application ID, a string kind (also known as an entity type), and either a StringID or an IntID. A StringID is also known as an entity name or key name. It is valid to create a key with a zero StringID and a zero IntID; this is called an incomplete key, and does not refer to any saved entity. Putting an entity into the datastore under an incomplete key will cause a unique key to be generated for that entity, with a non-zero IntID. An entity's contents are a mapping from case-sensitive field names to values. Valid value types are: Slices of structs are valid, as are structs that contain slices. The Get and Put functions load and save an entity's contents. An entity's contents are typically represented by a struct pointer. Example code: GetMulti, PutMulti and DeleteMulti are batch versions of the Get, Put and Delete functions. They take a []*Key instead of a *Key, and may return a datastore.MultiError when encountering partial failure. Mutate generalizes PutMulti and DeleteMulti to a sequence of any Datastore mutations. It takes a series of mutations created with NewInsert, NewUpdate, NewUpsert and NewDelete and applies them. Datastore.Mutate uses non-transactional mode; if atomicity is required, use Transaction.Mutate instead. An entity's contents can be represented by a variety of types. These are typically struct pointers, but can also be any type that implements the PropertyLoadSaver interface. If using a struct pointer, you do not have to explicitly implement the PropertyLoadSaver interface; the datastore will automatically convert via reflection. If a struct pointer does implement PropertyLoadSaver then those methods will be used in preference to the default behavior for struct pointers. Struct pointers are more strongly typed and are easier to use; PropertyLoadSavers are more flexible. The actual types passed do not have to match between Get and Put calls or even across different calls to datastore. It is valid to put a *PropertyList and get that same entity as a *myStruct, or put a *myStruct0 and get a *myStruct1. Conceptually, any entity is saved as a sequence of properties, and is loaded into the destination value on a property-by-property basis. When loading into a struct pointer, an entity that cannot be completely represented (such as a missing field) will result in an ErrFieldMismatch error but it is up to the caller whether this error is fatal, recoverable or ignorable. By default, for struct pointers, all properties are potentially indexed, and the property name is the same as the field name (and hence must start with an upper case letter). Fields may have a `datastore:"name,options"` tag. The tag name is the property name, which must be one or more valid Go identifiers joined by ".", but may start with a lower case letter. An empty tag name means to just use the field name. A "-" tag name means that the datastore will ignore that field. The only valid options are "omitempty", "noindex" and "flatten". If the options include "omitempty" and the value of the field is an empty value, then the field will be omitted on Save. Empty values are defined as false, 0, a nil pointer, a nil interface value, the zero time.Time, and any empty slice or string. 
(Empty slices are never saved, even without "omitempty".) Other structs, including GeoPoint, are never considered empty. If options include "noindex" then the field will not be indexed. All fields are indexed by default. Strings or byte slices longer than 1500 bytes cannot be indexed; fields used to store long strings and byte slices must be tagged with "noindex" or they will cause Put operations to fail. For a nested struct field, the options may also include "flatten". This indicates that the immediate fields and any nested substruct fields of the nested struct should be flattened. See below for examples. To use multiple options together, separate them by a comma. The order does not matter. If the options is "" then the comma may be omitted. Example code: A field of slice type corresponds to a Datastore array property, except for []byte, which corresponds to a Datastore blob. Zero-length slice fields are not saved. Slice fields of length 1 or greater are saved as Datastore arrays. When a zero-length Datastore array is loaded into a slice field, the slice field remains unchanged. If a non-array value is loaded into a slice field, the result will be a slice with one element, containing the value. Loading a Datastore Null into a basic type (int, float, etc.) results in a zero value. Loading a Null into a slice of basic type results in a slice of size 1 containing the zero value. Loading a Null into a pointer field results in nil. Loading a Null into a field of struct type is an error. A struct field can be a pointer to a signed integer, floating-point number, string or bool. Putting a non-nil pointer will store its dereferenced value. Putting a nil pointer will store a Datastore Null property, unless the field is marked omitempty, in which case no property will be stored. Loading a Null into a pointer field sets the pointer to nil. Loading any other value allocates new storage with the value, and sets the field to point to it. If the struct contains a *datastore.Key field tagged with the name "__key__", its value will be ignored on Put. When reading the Entity back into the Go struct, the field will be populated with the *datastore.Key value used to query for the Entity. Example code: If the struct pointed to contains other structs, then the nested or embedded structs are themselves saved as Entity values. For example, given these definitions: then an Outer would have one property, Inner, encoded as an Entity value. Note: embedded struct fields must be named to be encoded as an Entity. For example, in case of a type Outer with an embedded field Inner: all the Inner struct fields will be treated as fields of Outer itself. If an outer struct is tagged "noindex" then all of its implicit flattened fields are effectively "noindex". If the Inner struct contains a *Key field with the name "__key__", like so: then the value of K will be used as the Key for Inner, represented as an Entity value in datastore. If any nested struct fields should be flattened, instead of encoded as Entity values, the nested struct field should be tagged with the "flatten" option. For example, given the following: an Outer's properties would be equivalent to those of: Note that the "flatten" option cannot be used for Entity value fields or PropertyLoadSaver implementers. The server will reject any dotted field names for an Entity value. An entity's contents can also be represented by any type that implements the PropertyLoadSaver interface. This type may be a struct pointer, but it does not have to be. 
The datastore package will call Load when getting the entity's contents, and Save when putting the entity's contents. Possible uses include deriving non-stored fields, verifying fields, or indexing a field only if its value is positive. Example code: The *PropertyList type implements PropertyLoadSaver, and can therefore hold an arbitrary entity's contents. If a type implements the PropertyLoadSaver interface, it may also want to implement the KeyLoader interface. The KeyLoader interface exists to allow implementations of PropertyLoadSaver to also load an Entity's Key into the Go type. This type may be a struct pointer, but it does not have to be. The datastore package will call LoadKey when getting the entity's contents, after calling Load. Example code: To load a Key into a struct which does not implement the PropertyLoadSaver interface, see the "Key Field" section above. Queries retrieve entities based on their properties or key's ancestry. Running a query yields an iterator of results: either keys or (key, entity) pairs. Queries are re-usable and it is safe to call Query.Run from concurrent goroutines. Iterators are not safe for concurrent use. Queries are immutable, and are either created by calling NewQuery, or derived from an existing query by calling a method like Filter or Order that returns a new query value. A query is typically constructed by calling NewQuery followed by a chain of zero or more such methods. These methods are: Example code: Client.RunInTransaction runs a function in a transaction. Example code: Pass the ReadOnly option to RunInTransaction if your transaction is used only for Get, GetMulti or queries. Read-only transactions are more efficient. This package supports the Cloud Datastore emulator, which is useful for testing and development. Environment variables are used to indicate that datastore traffic should be directed to the emulator instead of the production Datastore service. To install and set up the emulator and its environment variables, see the documentation at https://cloud.google.com/datastore/docs/tools/datastore-emulator. To use the emulator with this library, you can set the DATASTORE_EMULATOR_HOST environment variable to the address at which your emulator is running. This will send requests to that address instead of to Cloud Datastore. You can then create and use a client as usual:
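To make the basic flow concrete, here is a hedged sketch of saving, loading, and querying an entity; the "Task" kind, its fields, and the project ID are illustrative placeholders:

	type Task struct {
		Description string
		Done        bool
		Created     time.Time `datastore:",noindex"`
	}

	ctx := context.Background()
	client, err := datastore.NewClient(ctx, "my-project-id")
	if err != nil {
		// TODO: handle error
	}
	defer client.Close()

	// Put stores the entity under a key; Get loads it back.
	key := datastore.NameKey("Task", "sample-task", nil)
	if _, err := client.Put(ctx, key, &Task{Description: "write docs", Created: time.Now()}); err != nil {
		// TODO: handle error
	}
	var t Task
	if err := client.Get(ctx, key, &t); err != nil {
		// TODO: handle error
	}

	// Queries are built by chaining methods on an immutable Query value.
	q := datastore.NewQuery("Task").Filter("Done =", false).Order("-Created").Limit(10)
	var tasks []Task
	if _, err := client.GetAll(ctx, q, &tasks); err != nil {
		// TODO: handle error
	}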
Package swagger (2.0) provides a powerful interface to your API. It contains an implementation of Swagger 2.0 and knows how to serialize, deserialize and validate swagger specifications. Swagger is a simple yet powerful representation of your RESTful API. With the largest ecosystem of API tooling on the planet, thousands of developers are supporting Swagger in almost every modern programming language and deployment environment. With a Swagger-enabled API, you get interactive documentation, client SDK generation and discoverability. We created Swagger to help fulfill the promise of APIs. Swagger helps companies like Apigee, Getty Images, Intuit, LivingSocial, McKesson, Microsoft, Morningstar, and PayPal build the best possible services with RESTful APIs. Now in version 2.0, Swagger is more enabling than ever. And it's 100% open source software. More detailed documentation is available at https://goswagger.io. Install: The implementation also provides a number of command line tools to help with working with swagger. Currently there is a spec validator tool: To generate a server for a swagger spec document: To generate a client for a swagger spec document: To generate a swagger spec document for a go application: There are several other sub commands available for the generate command. You're free to add files to the directories the generated code lands in, but the files generated by the generator itself will be regenerated on subsequent generation runs, so any changes to those files will be lost. However, extra files you create won't be lost, so they are safe to use for customizing the application to your needs.
Package cap provides all the Linux Capabilities userspace library API bindings in native Go. Capabilities are a feature of the Linux kernel that allow fine-grained permissions to perform privileged operations. Privileged operations are required to do irregular system-level operations from code. You can read more about how Capabilities are intended to work here: This package supports native Go bindings for all the features described in that paper as well as supporting subsequent changes to the kernel for other styles of inheritable Capability. Some simple things you can do with this package are: The "cap" package operates with POSIX semantics for security state. That is, all OS threads are kept in sync at all times. The package "kernel.org/pub/linux/libs/security/libcap/psx" is used to implement POSIX semantics system calls that manipulate thread state uniformly over the whole Go (and any CGo linked) process runtime. Note, if the Go runtime syscall interface contains the Linux variant syscall.AllThreadsSyscall() API (it debuted in go1.16; see https://github.com/golang/go/issues/1435 for its history) then the "libcap/psx" package will use that to invoke Capability setting system calls in pure Go binaries. With such an enhanced Go runtime, to force this behavior, use the CGO_ENABLED=0 environment variable. POSIX semantics are more secure than trying to manage privilege at a thread level when those threads share a common memory image as they do under Linux: it is trivial to exploit a vulnerability in one thread of a process to cause execution on any other thread. So, any imbalance in security state in such cases will readily create an opportunity for a privilege escalation vulnerability. POSIX semantics also work well with Go, which deliberately tries to insulate the user from worrying about the number of OS threads that are actually running in their program. Indeed, Go can efficiently launch and manage tens of thousands of concurrent goroutines without bogging the program or wider system down. It does this by aggressively migrating idle threads to make progress on unblocked goroutines. So, inconsistent security state across OS threads can also lead to program misbehavior. The only exception to this process-wide common security state is the cap.Launcher-related functionality. This briefly locks an OS thread to a goroutine in order to launch another executable - the robust implementation of this kind of support is quite subtle, so please read its documentation carefully if you find that you need it. See https://sites.google.com/site/fullycapable/ for recent updates, some more complete walk-through examples of ways of using 'cap.Set's etc., and information on how to file bugs. Copyright (c) 2019-21 Andrew G. Morgan <morgan@kernel.org> The cap and psx packages are licensed with a (you choose) BSD 3-clause or GPL2. See LICENSE file for details.
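As one small, hedged example of the kind of thing the package supports (checking whether the current process has an effective capability), the following sketch uses cap.GetProc and Set.GetFlag:

	package main

	import (
		"fmt"

		"kernel.org/pub/linux/libs/security/libcap/cap"
	)

	func main() {
		// Snapshot the capabilities of the current process.
		c := cap.GetProc()
		fmt.Println("process capabilities:", c)

		// Check whether CAP_SETUID is present in the Effective set.
		on, err := c.GetFlag(cap.Effective, cap.SETUID)
		if err != nil {
			panic(err)
		}
		fmt.Println("effective CAP_SETUID:", on)
	}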
Package hcl contains the main modelling types and general utility functions for HCL. For a simple entry point into HCL, see the package in the subdirectory "hclsimple", which has an opinionated function Decode that can decode HCL configurations in either native HCL syntax or JSON syntax into a Go struct type: If your application needs more control over the evaluation of the configuration, you can use the functions in the subdirectories hclparse, gohcl, hcldec, etc. Splitting the handling of configuration into multiple phases allows for advanced patterns such as allowing expressions in one part of the configuration to refer to data defined in another part.
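For illustration, a minimal hclsimple-based decode might look like the following sketch; the configuration schema (the "io_mode" attribute and "service" block) is invented for the example:

	type ServiceConfig struct {
		Protocol string `hcl:"protocol,label"`
		Addr     string `hcl:"addr"`
	}

	type Config struct {
		IOMode   string          `hcl:"io_mode"`
		Services []ServiceConfig `hcl:"service,block"`
	}

	var config Config
	// DecodeFile selects native HCL or JSON syntax based on the file extension.
	if err := hclsimple.DecodeFile("config.hcl", nil, &config); err != nil {
		// TODO: handle error
	}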
Package chromedp is a high level Chrome DevTools Protocol client that simplifies driving browsers for scraping, unit testing, or profiling web pages using the CDP. chromedp requires no third-party dependencies, implementing the async Chrome DevTools Protocol entirely in Go. This package includes a number of simple examples. Additionally, chromedp/examples contains more complex examples.
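A hedged, minimal sketch of driving a browser with this package (the target URL and selector are placeholders):

	package main

	import (
		"context"
		"log"

		"github.com/chromedp/chromedp"
	)

	func main() {
		// Create a chromedp context; this also sets up the browser allocator.
		ctx, cancel := chromedp.NewContext(context.Background())
		defer cancel()

		// Navigate to a page and extract the text of an element.
		var title string
		err := chromedp.Run(ctx,
			chromedp.Navigate("https://example.com"),
			chromedp.Text("h1", &title, chromedp.ByQuery),
		)
		if err != nil {
			log.Fatal(err)
		}
		log.Printf("page heading: %s", title)
	}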
Package controllerruntime provides tools to construct Kubernetes-style controllers that manipulate both Kubernetes CRDs and aggregated/built-in Kubernetes APIs. It defines easy helpers for the common use cases when building CRDs, built on top of customizable layers of abstraction. Common cases should be easy, and uncommon cases should be possible. In general, controller-runtime tries to guide users towards Kubernetes controller best practices. The main entrypoint for controller-runtime is this root package, which contains all of the common types needed to get started building controllers: The examples in this package walk through a basic controller setup. The kubebuilder book (https://book.kubebuilder.io) has some more in-depth walkthroughs. controller-runtime favors structs with sane defaults over constructors, so it's fairly common to see structs being used directly in controller-runtime. A brief-ish walkthrough of the layout of this library can be found below. Each package contains more information about how to use it. Frequently asked questions about using controller-runtime and designing controllers can be found at https://github.com/kubernetes-sigs/controller-runtime/blob/main/FAQ.md. Every controller and webhook is ultimately run by a Manager (pkg/manager). A manager is responsible for running controllers and webhooks, and setting up common dependencies, like shared caches and clients, as well as managing leader election (pkg/leaderelection). Managers are generally configured to gracefully shut down controllers on pod termination by wiring up a signal handler (pkg/manager/signals). Controllers (pkg/controller) use events (pkg/event) to eventually trigger reconcile requests. They may be constructed manually, but are often constructed with a Builder (pkg/builder), which eases the wiring of event sources (pkg/source), like Kubernetes API object changes, to event handlers (pkg/handler), like "enqueue a reconcile request for the object owner". Predicates (pkg/predicate) can be used to filter which events actually trigger reconciles. There are pre-written utilities for the common cases, and interfaces and helpers for advanced cases. Controller logic is implemented in terms of Reconcilers (pkg/reconcile). A Reconciler implements a function which takes a reconcile Request containing the name and namespace of the object to reconcile, reconciles the object, and returns a Result or an error indicating whether to requeue for a second round of processing. Reconcilers use Clients (pkg/client) to access API objects. The default client provided by the manager reads from a local shared cache (pkg/cache) and writes directly to the API server, but clients can be constructed that only talk to the API server, without a cache. The Cache will auto-populate with watched objects, as well as when other structured objects are requested. The default split client does not promise to invalidate the cache during writes (nor does it promise sequential create/get coherence), and code should not assume a get immediately following a create/update will return the updated resource. Caches may also have indexes, which can be created via a FieldIndexer (pkg/client) obtained from the manager. Indexes can be used to quickly and easily look up all objects with certain fields set. Reconcilers may retrieve event recorders (pkg/recorder) to emit events using the manager.
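To make the Reconciler contract concrete, here is a rough sketch (the ReplicaSetReconciler type and its behavior are illustrative only) of a Reconciler that reads an object with the manager's client:

	import (
		"context"

		appsv1 "k8s.io/api/apps/v1"
		ctrl "sigs.k8s.io/controller-runtime"
		"sigs.k8s.io/controller-runtime/pkg/client"
	)

	// ReplicaSetReconciler is an illustrative Reconciler; it embeds the
	// manager's client so it can read and write API objects.
	type ReplicaSetReconciler struct {
		client.Client
	}

	func (r *ReplicaSetReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
		// Fetch the ReplicaSet named in the request from the (usually cached) client.
		var rs appsv1.ReplicaSet
		if err := r.Get(ctx, req.NamespacedName, &rs); err != nil {
			// Ignore not-found errors: the object may have been deleted already.
			return ctrl.Result{}, client.IgnoreNotFound(err)
		}

		// ... reconcile the object here ...

		return ctrl.Result{}, nil
	}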
Clients, Caches, and many other things in Kubernetes use Schemes (pkg/scheme) to associate Go types to Kubernetes API Kinds (Group-Version-Kinds, to be specific). Similarly, webhooks (pkg/webhook/admission) may be implemented directly, but are often constructed using a builder (pkg/webhook/admission/builder). They are run via a server (pkg/webhook) which is managed by a Manager. Logging (pkg/log) in controller-runtime is done via structured logs, using a set of interfaces called logr (https://pkg.go.dev/github.com/go-logr/logr). While controller-runtime provides easy setup for using Zap (https://go.uber.org/zap, pkg/log/zap), you can provide any implementation of logr as the base logger for controller-runtime. Metrics (pkg/metrics) provided by controller-runtime are registered into a controller-runtime-specific Prometheus metrics registry. The manager can serve these via an HTTP endpoint, and additional metrics may be registered to this Registry as normal. You can easily build integration and unit tests for your controllers and webhooks using the test Environment (pkg/envtest). This will automatically stand up a copy of etcd and kube-apiserver, and provide the correct options to connect to the API server. It's designed to work well with the Ginkgo testing framework, but should work with any testing setup. This example creates a simple application Controller that is configured for ReplicaSets and Pods. * Create a new application for ReplicaSets that manages Pods owned by the ReplicaSet and calls into ReplicaSetReconciler. * Start the application. This example creates a simple application Controller that is configured for ExampleCRDWithConfigMapRef CRD. Any change in the configMap referenced in this Custom Resource will cause the parent ExampleCRDWithConfigMapRef to be re-reconciled due to the implementation of the .Watches method of "sigs.k8s.io/controller-runtime/pkg/builder".Builder. This example creates a simple application Controller that is configured for ReplicaSets and Pods. This application controller will be running leader election with the provided configuration in the manager options. If leader election configuration is not provided, the controller runs leader election with default values. Default values taken from: https://github.com/kubernetes/component-base/blob/master/config/v1alpha1/defaults.go * defaultLeaseDuration = 15 * time.Second * defaultRenewDeadline = 10 * time.Second * defaultRetryPeriod = 2 * time.Second * Create a new application for ReplicaSets that manages Pods owned by the ReplicaSet and calls into ReplicaSetReconciler. * Start the application.
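The ReplicaSets-and-Pods example described above follows roughly this shape; treat it as a sketch rather than the package's verbatim example (ReplicaSetReconciler is the illustrative type from the sketch earlier in this documentation):

	import (
		"log"

		appsv1 "k8s.io/api/apps/v1"
		corev1 "k8s.io/api/core/v1"
		ctrl "sigs.k8s.io/controller-runtime"
	)

	func main() {
		// Create a manager using the in-cluster or kubeconfig-derived config.
		mgr, err := ctrl.NewManager(ctrl.GetConfigOrDie(), ctrl.Options{})
		if err != nil {
			log.Fatal(err)
		}

		// Create a controller for ReplicaSets that also watches the Pods they own,
		// and wire it to the ReplicaSetReconciler.
		err = ctrl.NewControllerManagedBy(mgr).
			For(&appsv1.ReplicaSet{}).
			Owns(&corev1.Pod{}).
			Complete(&ReplicaSetReconciler{Client: mgr.GetClient()})
		if err != nil {
			log.Fatal(err)
		}

		// Start the manager; it blocks until the signal handler's context is done.
		if err := mgr.Start(ctrl.SetupSignalHandler()); err != nil {
			log.Fatal(err)
		}
	}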
Package easyjson contains marshaler/unmarshaler interfaces and helper functions.
This package is the root package of the govmomi library. The library is structured as follows: The minimal usable functionality is available through the vim25 package. It contains subpackages that contain generated types, managed objects, and all available methods. The vim25 package is entirely independent of the other packages in the govmomi tree -- it has no dependencies on its peers. The vim25 package itself contains a client structure that is passed around throughout the entire library. It abstracts a session and its immutable state. See the vim25 package for more information. The session package contains an abstraction for the session manager that allows a user to login and logout. It also provides access to the current session (i.e. to determine if the user is in fact logged in) The object package contains wrappers for a selection of managed objects. The constructors of these objects all take a *vim25.Client, which they pass along to derived objects, if applicable. The govc package contains the govc CLI. The code in this tree is not intended to be used as a library. Any functionality that govc contains that _could_ be used as a library function but isn't, _should_ live in a root level package. Other packages, such as "event", "guest", or "license", provide wrappers for the respective subsystems. They are typically not needed in normal workflows so are kept outside the object package.
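For illustration only (the vCenter URL and credentials are placeholders), a client at the root of this layering can be created along these lines; this is a rough sketch rather than a complete workflow:

	ctx := context.Background()

	// Credentials embedded in the URL are illustrative only.
	u, err := url.Parse("https://user:pass@vcenter.example.com/sdk")
	if err != nil {
		// TODO: handle error
	}

	// govmomi.Client wraps a vim25.Client and logs in via the session manager.
	c, err := govmomi.NewClient(ctx, u, true /* insecure */)
	if err != nil {
		// TODO: handle error
	}
	defer c.Logout(ctx)

	// The wrapped *vim25.Client is what most sub-packages (object, find, ...) expect.
	fmt.Println("API version:", c.ServiceContent.About.ApiVersion)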
Package stdouttrace contains an OpenTelemetry exporter for tracing telemetry to be written to an output destination as JSON.
Package zipkin contains an OpenTelemetry tracing exporter for Zipkin.
Package logging contains a Cloud Logging client suitable for writing logs. For reading logs, and working with sinks, metrics and monitored resources, see package cloud.google.com/go/logging/logadmin. This client uses Logging API v2. See https://cloud.google.com/logging/docs/api/v2/ for an introduction to the API. Use a Client to interact with the Cloud Logging API. For most use cases, you'll want to add log entries to a buffer to be periodically flushed (automatically and asynchronously) to the Cloud Logging service. You should call Client.Close before your program exits to flush any buffered log entries to the Cloud Logging service. For critical errors, you may want to send your log entries immediately. LogSync is slow and will block until the log entry has been sent, so it is not recommended for normal use. For cases when the runtime environment supports out-of-process log ingestion, like a logging agent, you can opt in to writing log entries to an io.Writer instead of ingesting them to the Cloud Logging service. Usually, you will use os.Stdout or os.Stderr as writers because Google Cloud logging agents are configured to capture logs from standard output. The entries will be serialized as JSON and written as one-line strings, following the structured logging format. See https://cloud.google.com/logging/docs/structured-logging#special-payload-fields for the format description. To instruct a Logger to redirect log entries, add the RedirectAsJSON LoggerOption. An entry payload can be a string, as in the examples above. It can also be any value that can be marshaled to a JSON object, like a map[string]interface{} or a struct: If you have a []byte of JSON, wrap it in json.RawMessage: If you have a proto.Message and want to send it as a protobuf payload, marshal it to anypb.Any: You may want to use a standard log.Logger in your program. An Entry may have one of a number of severity levels associated with it. You can view Cloud logs for projects at https://console.cloud.google.com/logs/viewer. Use the dropdown at the top left. When running from a Google Cloud Platform VM, select "GCE VM Instance". Otherwise, select "Google Project" and then the project ID. Logs for organizations, folders and billing accounts can be viewed on the command line with the "gcloud logging read" command. To group all the log entries written during a single HTTP request, create two Loggers, a "parent" and a "child," with different log IDs. Both should be in the same project, and have the same MonitoredResource type and labels. - A child entry's timestamp must be within the time interval covered by the parent request. (i.e., before the parent.Timestamp and after the parent.Timestamp - parent.HTTPRequest.Latency. This assumes the parent.Timestamp marks the end of the request.) - The trace field must be populated in all of the entries and match exactly. You should observe the child log entries grouped under the parent on the console. The parent entry will not inherit the severity of its children; you must update the parent severity yourself. You can automatically populate the Trace, SpanID, and TraceSampled fields of an Entry object by providing an http.Request object within the Entry's HTTPRequest field: When an Entry with an http.Request is logged, its Trace, SpanID, and TraceSampled fields may be automatically populated as follows: Note that if Trace, SpanID, or TraceSampled are explicitly provided within an Entry object, then those values take precedence over the automatically extracted values.
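A rough sketch of the buffered-logging flow described above (the project ID and log ID are placeholders):

	ctx := context.Background()
	client, err := logging.NewClient(ctx, "my-project")
	if err != nil {
		// TODO: handle error
	}
	// Close flushes any buffered entries before the program exits.
	defer client.Close()

	lg := client.Logger("my-log")

	// Buffered, asynchronous logging.
	lg.Log(logging.Entry{Payload: "something happened", Severity: logging.Info})

	// Synchronous logging for critical entries (slower; blocks until sent).
	if err := lg.LogSync(ctx, logging.Entry{Payload: "something critical happened", Severity: logging.Critical}); err != nil {
		// TODO: handle error
	}

	// A standard *log.Logger bound to a severity.
	stdlg := lg.StandardLogger(logging.Info)
	stdlg.Println("hello world")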
Package consumer contains interfaces that receive and process data.
Package netlink provides a simple library for netlink. Netlink is the interface a user-space program in Linux uses to communicate with the kernel. It can be used to add and remove interfaces, set up IP addresses and routes, and configure IPsec. Netlink communication requires elevated privileges, so in most cases this code needs to be run as root. The low-level primitives for netlink are contained in the nl subpackage. This package attempts to provide a high-level interface that is loosely modeled on the iproute2 CLI.
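For example, a sketch along these lines (the interface name and address are placeholders, and sufficient privileges are assumed) adds an address to a link and brings it up:

	// Look up an existing interface by name.
	link, err := netlink.LinkByName("eth0")
	if err != nil {
		// TODO: handle error
	}

	// Parse and add an address, then bring the interface up.
	addr, err := netlink.ParseAddr("192.0.2.10/24")
	if err != nil {
		// TODO: handle error
	}
	if err := netlink.AddrAdd(link, addr); err != nil {
		// TODO: handle error
	}
	if err := netlink.LinkSetUp(link); err != nil {
		// TODO: handle error
	}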
Package btree implements in-memory B-Trees of arbitrary degree. btree implements an in-memory B-Tree for use as an ordered data structure. It is not meant for persistent storage solutions. It has a flatter structure than an equivalent red-black or other binary tree, which in some cases yields better memory usage and/or performance. See some discussion on the matter here: Note, though, that this project is in no way related to the C++ B-Tree implementation written about there. Within this tree, each node contains a slice of items and a (possibly nil) slice of children. For basic numeric values or raw structs, this can cause efficiency differences when compared to equivalent C++ template code that stores values in arrays within the node: These issues don't tend to matter, though, when working with strings or other heap-allocated structures, since C++-equivalent structures also must store pointers and also distribute their values across the heap. This implementation is designed to be a drop-in replacement for gollrb.LLRB trees (http://github.com/petar/gollrb), an excellent and probably the most widely used ordered tree implementation in the Go ecosystem currently. Its functions, therefore, exactly mirror those of llrb.LLRB where possible. Unlike gollrb, though, we currently don't support storing multiple equivalent values. There are two implementations; those suffixed with 'G' are generic, usable for any type, and require a passed-in "less" function to define their ordering. Those without this suffix are specific to the 'Item' interface, and use its 'Less' function for ordering.
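A brief sketch of the Item-based (non-generic) API, using the provided btree.Int type:

	// Create a B-Tree of degree 32 and insert a few ints.
	tr := btree.New(32)
	for i := 0; i < 10; i++ {
		tr.ReplaceOrInsert(btree.Int(i))
	}

	// Look items up and iterate in ascending order.
	if tr.Has(btree.Int(3)) {
		fmt.Println("tree contains 3")
	}
	tr.Ascend(func(i btree.Item) bool {
		fmt.Println(i)
		return true // continue iteration
	})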
Package firestore provides a client for reading and writing to a Cloud Firestore database. See https://cloud.google.com/firestore/docs for an introduction to Cloud Firestore and additional help on using the Firestore API. See https://godoc.org/cloud.google.com/go for authentication, timeouts, connection pooling and similar aspects of this package. Note: you can't use both Cloud Firestore and Cloud Datastore in the same project. To start working with this package, create a client with a project ID: In Firestore, documents are sets of key-value pairs, and collections are groups of documents. A Firestore database consists of a hierarchy of alternating collections and documents, referred to by slash-separated paths like "States/California/Cities/SanFrancisco". This client is built around references to collections and documents. CollectionRefs and DocumentRefs are lightweight values that refer to the corresponding database entities. Creating a ref does not involve any network traffic. Use DocumentRef.Get to read a document. The result is a DocumentSnapshot. Call its Data method to obtain the entire document contents as a map. You can also obtain a single field with DataAt, or extract the data into a struct with DataTo. With the type definition we can extract the document's data into a value of type State: Note that this client supports struct tags beginning with "firestore:" that work like the tags of the encoding/json package, letting you rename fields, ignore them, or omit their values when empty. To retrieve multiple documents from their references in a single call, use Client.GetAll. For writing individual documents, use the methods on DocumentReference. Create creates a new document. The first return value is a WriteResult, which contains the time at which the document was updated. Create fails if the document exists. Another method, Set, either replaces an existing document or creates a new one. To update some fields of an existing document, use Update. It takes a list of paths to update and their corresponding values. Use DocumentRef.Delete to delete a document. You can condition Deletes or Updates on when a document was last changed. Specify these preconditions as an option to a Delete or Update method. The check and the write happen atomically with a single RPC. Here we update a doc only if it hasn't changed since we read it. You could also do this with a transaction. To perform multiple writes at once, use a WriteBatch. Its methods chain for convenience. WriteBatch.Commit sends the collected writes to the server, where they happen atomically. You can use SQL to select documents from a collection. Begin with the collection, and build up a query using Select, Where and other methods of Query. Supported operators include '<', '<=', '>', '>=', '==', 'in', 'array-contains', and 'array-contains-any'. Call the Query's Documents method to get an iterator, and use it like the other Google Cloud Client iterators. To get all the documents in a collection, you can use the collection itself as a query. Firestore supports similarity search over embedding vectors. See Query.FindNearest for details. You can partition the documents of a Collection Group allowing for smaller subqueries. You can also Serialize/Deserialize queries making it possible to run/stream the queries elsewhere; another process or machine for instance. Use a transaction to execute reads and writes atomically. All reads must happen before any writes. 
Transaction creation, commit, rollback and retry are handled for you by the Client.RunTransaction method; just provide a function and use the read and write methods of the Transaction passed to it. This package supports the Cloud Firestore emulator, which is useful for testing and development. Environment variables are used to indicate that Firestore traffic should be directed to the emulator instead of the production Firestore service. To install and run the emulator and its environment variables, see the documentation at https://cloud.google.com/sdk/gcloud/reference/beta/emulators/firestore/. Once the emulator is running, set FIRESTORE_EMULATOR_HOST to the API endpoint.
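For orientation, a hedged sketch of the client, document, and transaction flow described above (the project ID, collection and document names, and the State struct are placeholders):

	type State struct {
		Capital    string `firestore:"capital"`
		Population int64  `firestore:"pop"`
	}

	ctx := context.Background()
	client, err := firestore.NewClient(ctx, "my-project-id")
	if err != nil {
		// TODO: handle error
	}
	defer client.Close()

	// Read one document and extract its data into a struct.
	doc := client.Collection("States").Doc("California")
	snap, err := doc.Get(ctx)
	if err != nil {
		// TODO: handle error
	}
	var ca State
	if err := snap.DataTo(&ca); err != nil {
		// TODO: handle error
	}

	// Read then write atomically inside a transaction.
	err = client.RunTransaction(ctx, func(ctx context.Context, tx *firestore.Transaction) error {
		snap, err := tx.Get(doc)
		if err != nil {
			return err
		}
		pop, err := snap.DataAt("pop")
		if err != nil {
			return err
		}
		return tx.Update(doc, []firestore.Update{{Path: "pop", Value: pop.(int64) + 1}})
	})
	if err != nil {
		// TODO: handle error
	}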
Package lint contains a linter for Go source code.
Package gocql implements a fast and robust Cassandra driver for the Go programming language. Pass a list of initial node IP addresses to NewCluster to create a new cluster configuration: Port can be specified as part of the address, the above is equivalent to: It is recommended to use the value set in the Cassandra config for broadcast_address or listen_address, an IP address not a domain name. This is because events from Cassandra will use the configured IP address, which is used to index connected hosts. If the domain name specified resolves to more than 1 IP address then the driver may connect multiple times to the same host, and will not mark the node being down or up from events. Then you can customize more options (see ClusterConfig): The driver tries to automatically detect the protocol version to use if not set, but you might want to set the protocol version explicitly, as it's not defined which version will be used in certain situations (for example during upgrade of the cluster when some of the nodes support different set of protocol versions than other nodes). The driver advertises the module name and version in the STARTUP message, so servers are able to detect the version. If you use replace directive in go.mod, the driver will send information about the replacement module instead. When ready, create a session from the configuration. Don't forget to Close the session once you are done with it: CQL protocol uses a SASL-based authentication mechanism and so consists of an exchange of server challenges and client response pairs. The details of the exchanged messages depend on the authenticator used. To use authentication, set ClusterConfig.Authenticator or ClusterConfig.AuthProvider. PasswordAuthenticator is provided to use for username/password authentication: It is possible to secure traffic between the client and server with TLS. To use TLS, set the ClusterConfig.SslOpts field. SslOptions embeds *tls.Config so you can set that directly. There are also helpers to load keys/certificates from files. Warning: Due to historical reasons, the SslOptions is insecure by default, so you need to set EnableHostVerification to true if no Config is set. Most users should set SslOptions.Config to a *tls.Config. SslOptions and Config.InsecureSkipVerify interact as follows: For example: To route queries to local DC first, use DCAwareRoundRobinPolicy. For example, if the datacenter you want to primarily connect is called dc1 (as configured in the database): The driver can route queries to nodes that hold data replicas based on partition key (preferring local DC). Note that TokenAwareHostPolicy can take options such as gocql.ShuffleReplicas and gocql.NonLocalReplicasFallback. We recommend running with a token aware host policy in production for maximum performance. The driver can only use token-aware routing for queries where all partition key columns are query parameters. For example, instead of use The DCAwareRoundRobinPolicy can be replaced with RackAwareRoundRobinPolicy, which takes two parameters, datacenter and rack. Instead of dividing hosts with two tiers (local datacenter and remote datacenters) it divides hosts into three (the local rack, the rest of the local datacenter, and everything else). RackAwareRoundRobinPolicy can be combined with TokenAwareHostPolicy in the same way as DCAwareRoundRobinPolicy. Create queries with Session.Query. Query values must not be reused between different executions and must not be modified after starting execution of the query. 
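For instance, a rough sketch of the cluster configuration and session creation steps described above (the node addresses, keyspace, and datacenter name are placeholders):

	// Pass initial node addresses; a port may be included in each address.
	cluster := gocql.NewCluster("192.0.2.1", "192.0.2.2", "192.0.2.3")
	cluster.Keyspace = "example"
	cluster.Consistency = gocql.Quorum

	// Optional: route queries to the local datacenter first, with token awareness.
	cluster.PoolConfig.HostSelectionPolicy = gocql.TokenAwareHostPolicy(gocql.DCAwareRoundRobinPolicy("dc1"))

	session, err := cluster.CreateSession()
	if err != nil {
		// TODO: handle error
	}
	defer session.Close()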
To execute a query without reading results, use Query.Exec: A single row can be read by calling Query.Scan: Multiple rows can be read using Iter.Scanner: See Example for a complete example. The driver automatically prepares DML queries (SELECT/INSERT/UPDATE/DELETE/BATCH statements) and maintains a cache of prepared statements. CQL protocol does not support preparing other query types. When using CQL protocol >= 4, it is possible to use gocql.UnsetValue as the bound value of a column. This will cause the database to ignore writing the column. The main advantage is the ability to keep the same prepared statement even when you don't want to update some fields, where before you needed to make another prepared statement. Session is safe to use from multiple goroutines, so to execute multiple concurrent queries, just execute them from several worker goroutines. Gocql provides a synchronous-looking API (as recommended for Go APIs) and the queries are executed asynchronously at the protocol level. Null values are unmarshalled as the zero value of the type. If you need to distinguish for example between a text column being null and an empty string, you can unmarshal into a *string variable instead of a string. See Example_nulls for a full example. The driver reuses backing memory of slices when unmarshalling. This is an optimization so that a buffer does not need to be allocated for every processed row. However, you need to be careful when storing the slices to other memory structures. When you want to save the data for later use, pass a new slice every time. A common pattern is to declare the slice variable within the scanner loop: The driver supports paging of results with automatic prefetch, see ClusterConfig.PageSize, Session.SetPrefetch, Query.PageSize, and Query.Prefetch. It is also possible to control the paging manually with Query.PageState (this disables automatic prefetch). Manual paging is useful if you want to store the page state externally, for example in a URL to allow users to browse pages in a result. You might want to sign/encrypt the paging state when exposing it externally since it contains data from primary keys. Paging state is specific to the CQL protocol version and the exact query used. It is meant as opaque state that should not be modified. If you send paging state from a different query or protocol version, then the behaviour is not defined (you might get unexpected results or an error from the server). For example, do not send paging state returned by a node using protocol version 3 to a node using protocol version 4. Also, when using protocol version 4, paging state between Cassandra 2.2 and 3.0 is incompatible (https://issues.apache.org/jira/browse/CASSANDRA-10880). The driver does not check whether the paging state is from the same protocol version/statement. You might want to validate it yourself as this could be a problem if you store paging state externally. For example, if you store paging state in a URL, the URLs might become broken when you upgrade your cluster. Call Query.PageState(nil) to fetch just the first page of the query results. Pass the page state returned by Iter.PageState to Query.PageState of a subsequent query to get the next page. If the length of the slice returned by Iter.PageState is zero, there are no more pages available (or an error occurred). Using too low values of PageSize will negatively affect performance; a value below 100 is probably too low.
While Cassandra returns exactly PageSize items (except for last page) in a page currently, the protocol authors explicitly reserved the right to return smaller or larger amount of items in a page for performance reasons, so don't rely on the page having the exact count of items. See Example_paging for an example of manual paging. There are certain situations when you don't know the list of columns in advance, mainly when the query is supplied by the user. Iter.Columns, Iter.RowData, Iter.MapScan and Iter.SliceMap can be used to handle this case. See Example_dynamicColumns. The CQL protocol supports sending batches of DML statements (INSERT/UPDATE/DELETE) and so does gocql. Use Session.NewBatch to create a new batch and then fill-in details of individual queries. Then execute the batch with Session.ExecuteBatch. Logged batches ensure atomicity, either all or none of the operations in the batch will succeed, but they have overhead to ensure this property. Unlogged batches don't have the overhead of logged batches, but don't guarantee atomicity. Updates of counters are handled specially by Cassandra so batches of counter updates have to use CounterBatch type. A counter batch can only contain statements to update counters. For unlogged batches it is recommended to send only single-partition batches (i.e. all statements in the batch should involve only a single partition). Multi-partition batch needs to be split by the coordinator node and re-sent to correct nodes. With single-partition batches you can send the batch directly to the node for the partition without incurring the additional network hop. It is also possible to pass entire BEGIN BATCH .. APPLY BATCH statement to Query.Exec. There are differences how those are executed. BEGIN BATCH statement passed to Query.Exec is prepared as a whole in a single statement. Session.ExecuteBatch prepares individual statements in the batch. If you have variable-length batches using the same statement, using Session.ExecuteBatch is more efficient. See Example_batch for an example. Query.ScanCAS or Query.MapScanCAS can be used to execute a single-statement lightweight transaction (an INSERT/UPDATE .. IF statement) and reading its result. See example for Query.MapScanCAS. Multiple-statement lightweight transactions can be executed as a logged batch that contains at least one conditional statement. All the conditions must return true for the batch to be applied. You can use Session.ExecuteBatchCAS and Session.MapExecuteBatchCAS when executing the batch to learn about the result of the LWT. See example for Session.MapExecuteBatchCAS. Queries can be marked as idempotent. Marking the query as idempotent tells the driver that the query can be executed multiple times without affecting its result. Non-idempotent queries are not eligible for retrying nor speculative execution. Idempotent queries are retried in case of errors based on the configured RetryPolicy. Queries can be retried even before they fail by setting a SpeculativeExecutionPolicy. The policy can cause the driver to retry on a different node if the query is taking longer than a specified delay even before the driver receives an error or timeout from the server. When a query is speculatively executed, the original execution is still executing. The two parallel executions of the query race to return a result, the first received result will be returned. 
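To illustrate the batch API described above, here is a loose sketch of building and executing a logged batch; the table and columns are invented for the example, and the session comes from the earlier sketch:

	// Build a logged batch; all statements succeed or none do.
	b := session.NewBatch(gocql.LoggedBatch)
	b.Query(`INSERT INTO events (id, payload) VALUES (?, ?)`, gocql.TimeUUID(), "first")
	b.Query(`INSERT INTO events (id, payload) VALUES (?, ?)`, gocql.TimeUUID(), "second")

	if err := session.ExecuteBatch(b); err != nil {
		// TODO: handle error
	}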
UDTs can be (un)marshaled from/to a map[string]interface{} or a Go struct (or a type implementing the UDTUnmarshaler, UDTMarshaler, Unmarshaler or Marshaler interfaces). For structs, the cql tag can be used to specify the CQL field name to be mapped to a struct field: See Example_userDefinedTypesMap, Example_userDefinedTypesStruct, ExampleUDTMarshaler, ExampleUDTUnmarshaler. It is possible to provide observer implementations that could be used to gather metrics: The CQL protocol also supports tracing of queries. When enabled, the database will write information about internal events that happened during execution of the query. You can use Query.Trace to request tracing and receive the session ID that the database used to store the trace information in the system_traces.sessions and system_traces.events tables. NewTraceWriter returns an implementation of Tracer that writes the events to a writer. Gathering trace information might be essential for debugging and optimizing queries, but writing traces has overhead, so this feature should not be used on production systems with very high load unless you know what you are doing. Example_batch demonstrates how to execute a batch of statements. Example_dynamicColumns demonstrates how to handle a dynamic column list. Example_marshalerUnmarshaler demonstrates how to implement a Marshaler and Unmarshaler. Example_nulls demonstrates how to distinguish between null and zero value when needed. Null values are unmarshalled as the zero value of the type. If you need to distinguish, for example, between a text column being null and an empty string, you can unmarshal into a *string field. Example_paging demonstrates how to manually fetch pages and use page state. See also the package documentation about paging. Example_set demonstrates how to use sets. Example_userDefinedTypesMap demonstrates how to work with user-defined types as maps. See also Example_userDefinedTypesStruct and the examples for UDTMarshaler and UDTUnmarshaler if you want to map to structs. Example_userDefinedTypesStruct demonstrates how to work with user-defined types as structs. See also the examples for UDTMarshaler and UDTUnmarshaler if you need more control/better performance.
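Returning to the UDT mapping described above, a hypothetical coordinates UDT might be mapped to a struct like this:

	// CREATE TYPE coordinates (lat double, long double);
	type Coordinates struct {
		Lat  float64 `cql:"lat"`
		Long float64 `cql:"long"`
	}

	// The struct can then be bound and scanned like any other column value:
	//   err := session.Query(`SELECT location FROM points WHERE id = ?`, id).Scan(&coords)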
Package sprig provides template functions for Go. This package contains a number of utility functions for working with data inside of Go `html/template` and `text/template` files. To add these functions, use the `template.Funcs()` method: Note that you should add the function map before you parse any template files. See http://masterminds.github.io/sprig/ for more detailed documentation on each of the available functions.
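A minimal sketch of wiring the function map into an HTML template (the file pattern is illustrative):

	tpl := template.Must(
		template.New("base").Funcs(sprig.FuncMap()).ParseGlob("templates/*.html"),
	)
	// For text/template, sprig also provides TxtFuncMap().
	_ = tpl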
Package otlpmetricgrpc provides an OTLP metrics exporter using gRPC. By default the telemetry is sent to https://localhost:4317. The exporter should be created using New and used with a metric.PeriodicReader (a brief sketch follows the configuration list below). The environment variables described below can be used for configuration. OTEL_EXPORTER_OTLP_ENDPOINT, OTEL_EXPORTER_OTLP_METRICS_ENDPOINT (default: "https://localhost:4317") - target to which the exporter sends telemetry. The target syntax is defined in https://github.com/grpc/grpc/blob/master/doc/naming.md. The value must contain a scheme ("http" or "https") and host. The value may additionally contain a port and a path. The value should not contain a query string or fragment. OTEL_EXPORTER_OTLP_METRICS_ENDPOINT takes precedence over OTEL_EXPORTER_OTLP_ENDPOINT. The configuration can be overridden by WithEndpoint, WithEndpointURL, WithInsecure, and WithGRPCConn options. OTEL_EXPORTER_OTLP_INSECURE, OTEL_EXPORTER_OTLP_METRICS_INSECURE (default: "false") - setting "true" disables client transport security for the exporter's gRPC connection. You can use this only when an endpoint is provided without the http or https scheme; a scheme defined via OTEL_EXPORTER_OTLP_ENDPOINT or OTEL_EXPORTER_OTLP_METRICS_ENDPOINT overrides this setting. OTEL_EXPORTER_OTLP_METRICS_INSECURE takes precedence over OTEL_EXPORTER_OTLP_INSECURE. The configuration can be overridden by WithInsecure, WithGRPCConn options. OTEL_EXPORTER_OTLP_HEADERS, OTEL_EXPORTER_OTLP_METRICS_HEADERS (default: none) - key-value pairs used as gRPC metadata associated with gRPC requests. The value is expected to be represented in a format matching the W3C Baggage HTTP Header Content Format, except that additional semi-colon delimited metadata is not supported. Example value: "key1=value1,key2=value2". OTEL_EXPORTER_OTLP_METRICS_HEADERS takes precedence over OTEL_EXPORTER_OTLP_HEADERS. The configuration can be overridden by WithHeaders option. OTEL_EXPORTER_OTLP_TIMEOUT, OTEL_EXPORTER_OTLP_METRICS_TIMEOUT (default: "10000") - maximum time in milliseconds the OTLP exporter waits for each batch export. OTEL_EXPORTER_OTLP_METRICS_TIMEOUT takes precedence over OTEL_EXPORTER_OTLP_TIMEOUT. The configuration can be overridden by WithTimeout option. OTEL_EXPORTER_OTLP_COMPRESSION, OTEL_EXPORTER_OTLP_METRICS_COMPRESSION (default: none) - the gRPC compressor the exporter uses. Supported value: "gzip". OTEL_EXPORTER_OTLP_METRICS_COMPRESSION takes precedence over OTEL_EXPORTER_OTLP_COMPRESSION. The configuration can be overridden by WithCompressor, WithGRPCConn options. OTEL_EXPORTER_OTLP_CERTIFICATE, OTEL_EXPORTER_OTLP_METRICS_CERTIFICATE (default: none) - the filepath to the trusted certificate to use when verifying a server's TLS credentials. OTEL_EXPORTER_OTLP_METRICS_CERTIFICATE takes precedence over OTEL_EXPORTER_OTLP_CERTIFICATE. The configuration can be overridden by WithTLSCredentials, WithGRPCConn options. OTEL_EXPORTER_OTLP_CLIENT_CERTIFICATE, OTEL_EXPORTER_OTLP_METRICS_CLIENT_CERTIFICATE (default: none) - the filepath to the client certificate/chain trust for the client's private key to use in mTLS communication in PEM format. OTEL_EXPORTER_OTLP_METRICS_CLIENT_CERTIFICATE takes precedence over OTEL_EXPORTER_OTLP_CLIENT_CERTIFICATE. The configuration can be overridden by WithTLSCredentials, WithGRPCConn options. OTEL_EXPORTER_OTLP_CLIENT_KEY, OTEL_EXPORTER_OTLP_METRICS_CLIENT_KEY (default: none) - the filepath to the client's private key to use in mTLS communication in PEM format.
OTEL_EXPORTER_OTLP_METRICS_CLIENT_KEY takes precedence over OTEL_EXPORTER_OTLP_CLIENT_KEY. The configuration can be overridden by WithTLSCredentials, WithGRPCConn options. OTEL_EXPORTER_OTLP_METRICS_TEMPORALITY_PREFERENCE (default: "cumulative") - aggregation temporality to use on the basis of instrument kind. Supported values: "cumulative", "delta", "lowmemory". The configuration can be overridden by WithTemporalitySelector option. OTEL_EXPORTER_OTLP_METRICS_DEFAULT_HISTOGRAM_AGGREGATION (default: "explicit_bucket_histogram") - default aggregation to use for histogram instruments. Supported values: "explicit_bucket_histogram", "base2_exponential_bucket_histogram". The configuration can be overridden by WithAggregationSelector option.
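A minimal sketch of programmatic setup, assuming go.opentelemetry.io/otel/sdk/metric is imported as metric:

	exp, err := otlpmetricgrpc.New(ctx)
	if err != nil {
		log.Fatal(err)
	}
	provider := metric.NewMeterProvider(
		metric.WithReader(metric.NewPeriodicReader(exp)),
	)
	defer func() { _ = provider.Shutdown(ctx) }()
	otel.SetMeterProvider(provider)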
Package negroni is an idiomatic approach to web middleware in Go. It is tiny, non-intrusive, and encourages use of net/http Handlers. If you like the idea of Martini, but you think it contains too much magic, then Negroni is a great fit. For a full guide visit http://github.com/urfave/negroni
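A minimal sketch of the typical setup, wrapping a standard net/http mux with the default middleware stack:

	mux := http.NewServeMux()
	mux.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		fmt.Fprint(w, "Welcome to the home page!")
	})

	n := negroni.Classic() // panic recovery, request logging, static files
	n.UseHandler(mux)
	log.Fatal(http.ListenAndServe(":3000", n))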
Sprig: Template functions for Go. This package contains a number of utility functions for working with data inside of Go `html/template` and `text/template` files. To add these functions, use the `template.Funcs()` method: Note that you should add the function map before you parse any template files. See http://masterminds.github.io/sprig/ for more detailed documentation on each of the available functions.
Package healthcheck contains a monitor which takes a set of liveness checks which it periodically checks. If a check fails after its configured number of allowed call attempts, the monitor will send a request to shut down using the function it is provided in its config. Checks are dispatched in their own goroutines so that they do not block each other.
Package lru provides three different LRU caches of varying sophistication. Cache is a simple LRU cache. It is based on the LRU implementation in groupcache: https://github.com/golang/groupcache/tree/master/lru TwoQueueCache tracks frequently used and recently used entries separately. This avoids a burst of accesses from taking out frequently used entries, at the cost of about 2x computational overhead and some extra bookkeeping. ARCCache is an adaptive replacement cache. It tracks recent evictions as well as recent usage in both the frequent and recent caches. Its computational overhead is comparable to TwoQueueCache, but the memory overhead is linear with the size of the cache. ARC has been patented by IBM, so do not use it if that is problematic for your program. For this reason, it is in a separate go module contained within this repository. All caches in this package take locks while operating, and are therefore thread-safe for consumers.
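A minimal sketch of the simple Cache (shown with the non-generic v1 API; the v2 module uses type parameters):

	cache, err := lru.New(128)
	if err != nil {
		log.Fatal(err)
	}
	cache.Add("answer", 42)
	if v, ok := cache.Get("answer"); ok {
		fmt.Println(v) // 42
	}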
Package archiver facilitates convenient, cross-platform, high-level archival and compression operations for a variety of formats and compression algorithms. This package and its dependencies are written in pure Go (not cgo) and have no external dependencies, so they should run on all major platforms. (It also comes with a command for CLI use in the cmd/arc folder.) Each supported format or algorithm has a unique type definition that implements the interfaces corresponding to the tasks they perform. For example, the Tar type implements Reader, Writer, Archiver, Unarchiver, Walker, and several other interfaces. The most common functions are implemented at the package level for convenience: Archive, Unarchive, Walk, Extract, CompressFile, and DecompressFile. With these, the format type is chosen implicitly, and a sane default configuration is used. To customize a format's configuration, create an instance of its struct with its fields set to the desired values. You can also use and customize the handy Default* (replace the wildcard with the format's type name) for a quick, one-off instance of the format's type. To obtain a new instance of a format's struct with the default config, use the provided New*() functions. This is not required, however. An empty struct of any type, for example &Zip{} is perfectly valid, so you may create the structs manually, too. The examples on this page show how either may be done. See the examples in this package for an idea of how to wield this package for common tasks. Most of the examples which are specific to a certain format type, for example Zip, can be applied to other types that implement the same interfaces. For example, using Zip is very similar to using Tar or TarGz (etc), and using Gz is very similar to using Sz or Xz (etc). When creating archives or compressing files using a specific instance of the format's type, the name of the output file MUST match that of the format, to prevent confusion later on. If you absolutely need a different file extension, you may rename the file afterward. Values in this package are NOT safe for concurrent use. There is no performance benefit of reusing them, and since they may contain important state (especially while walking, reading, or writing), it is NOT recommended to reuse values from this package or change their configuration after they are in use.
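For instance, the package-level convenience functions mentioned above might be used like this (paths are illustrative); the format is inferred from the output file name:

	// Create a .tar.gz from a list of files and directories.
	if err := archiver.Archive([]string{"testdata", "report.txt"}, "backup.tar.gz"); err != nil {
		log.Fatal(err)
	}

	// Extract it again into a directory.
	if err := archiver.Unarchive("backup.tar.gz", "restore/"); err != nil {
		log.Fatal(err)
	}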
Package libnetwork provides the basic functionality and extension points to create network namespaces and allocate interfaces for containers to use.
Package opencensus contains Go support for OpenCensus.
Package azcore implements an HTTP request/response middleware pipeline used by Azure SDK clients. The middleware consists of three components: one or more Policy instances, a Transporter instance, and a Pipeline that combines them. A Policy can be implemented in two ways: as a first-class function for a stateless Policy, or as a method on a type for a stateful Policy. Note that HTTP requests made via the same pipeline share the same Policy instances, so if a Policy mutates its state it MUST be properly synchronized to avoid race conditions. A Policy's Do method is called when an HTTP request wants to be sent over the network. The Do method can perform any operation(s) it desires. For example, it can log the outgoing request, mutate the URL, headers, and/or query parameters, inject a failure, etc. Once the Policy has successfully completed its request work, it must call the Next() method on the *policy.Request instance in order to pass the request to the next Policy in the chain. When an HTTP response comes back, the Policy then gets a chance to process the response/error. The Policy instance can log the response, retry the operation if it failed due to a transient error or timeout, unmarshal the response body, etc. Once the Policy has successfully completed its response work, it must return the *http.Response and error instances to its caller. Template for implementing a stateless Policy: Template for implementing a stateful Policy: The Transporter interface is responsible for sending the HTTP request and returning the corresponding HTTP response or error. The Transporter is invoked by the last Policy in the chain. The default Transporter implementation uses a shared http.Client from the standard library. The same stateful/stateless rules for Policy implementations apply to Transporter implementations. To use the Policy and Transporter instances, an application passes them to the runtime.NewPipeline function. The specified Policy instances form a chain and are invoked in the order provided to NewPipeline, followed by the Transporter. Once the Pipeline has been created, create a runtime.Request instance and pass it to Pipeline's Do method. The Pipeline.Do method sends the specified Request through the chain of Policy and Transporter instances. The response/error is then sent through the same chain of Policy instances in reverse order. For example, assuming there are Policy types PolicyA, PolicyB, and PolicyC along with TransportA, the flow of Request and Response looks like the following: The Request instance passed to Pipeline's Do method is a wrapper around an *http.Request. It also contains some internal state and provides various convenience methods. You create a Request instance by calling the runtime.NewRequest function: If the Request should contain a body, call the SetBody method. A seekable stream is required so that, upon retry, the retry Policy instance can seek the stream back to the beginning before retrying the network request and re-uploading the body. Operations like JSON-MERGE-PATCH send a JSON null to indicate a value should be deleted. This requirement conflicts with the SDK's default marshalling, which specifies "omitempty" as a means to resolve the ambiguity between a field to be excluded and its zero-value. In the above example, Name and Count are defined as pointer-to-type to disambiguate between a missing value (nil) and a zero-value (0) which might have semantic differences. In a PATCH operation, any fields left as nil are to have their values preserved. When updating a Widget's count, one simply specifies the new value for Count, leaving Name nil.
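A hedged sketch of the stateless-Policy template described above (names are illustrative):

	type policyFunc func(*policy.Request) (*http.Response, error)

	// Do implements the policy.Policy interface on policyFunc.
	func (pf policyFunc) Do(req *policy.Request) (*http.Response, error) {
		return pf(req)
	}

	func NewMyStatelessPolicy() policy.Policy {
		return policyFunc(func(req *policy.Request) (*http.Response, error) {
			// inspect/mutate the outgoing request here

			// forward the request to the next Policy (or the Transporter)
			resp, err := req.Next()

			// inspect/mutate the response or error here
			return resp, err
		})
	}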
To fulfill the requirement for sending a JSON null, the NullValue() function can be used. This sends an explicit "null" for Count, indicating that any current value for Count should be deleted. When the HTTP response is received, the *http.Response is returned directly. Each Policy instance can inspect/mutate the *http.Response. To enable logging, set environment variable AZURE_SDK_GO_LOGGING to "all" before executing your program. By default the logger writes to stderr. This can be customized by calling log.SetListener, providing a callback that writes to the desired location. Any custom logging implementation MUST provide its own synchronization to handle concurrent invocations. See the docs for the log package for further details. Pageable operations return potentially large data sets spread over multiple GET requests. The result of each GET is a "page" of data consisting of a slice of items. Pageable operations can be identified by their New*Pager naming convention and return type of *runtime.Pager[T]. The call to WidgetClient.NewListWidgetsPager() returns an instance of *runtime.Pager[T] for fetching pages and determining if there are more pages to fetch. No IO calls are made until the NextPage() method is invoked. Long-running operations (LROs) are operations consisting of an initial request to start the operation followed by polling to determine when the operation has reached a terminal state. An LRO's terminal state is one of the following values. LROs can be identified by their Begin* prefix and their return type of *runtime.Poller[T]. When a call to WidgetClient.BeginCreateOrUpdate() returns a nil error, it means that the LRO has started. It does _not_ mean that the widget has been created or updated (or failed to be created/updated). The *runtime.Poller[T] provides APIs for determining the state of the LRO. To wait for the LRO to complete, call the PollUntilDone() method. The call to PollUntilDone() will block the current goroutine until the LRO has reached a terminal state or the context is canceled/timed out. Note that LROs can take anywhere from several seconds to several minutes. The duration is operation-dependent. Due to this variability, pollers do _not_ have a preconfigured time-out. Use a context with the appropriate cancellation mechanism as required. Pollers provide the ability to serialize their state into a "resume token" which can be used by another process to recreate the poller. This is achieved via the runtime.Poller[T].ResumeToken() method. Note that a token can only be obtained for a poller that's in a non-terminal state. Also note that any subsequent calls to poller.Poll() might change the poller's state. In this case, a new token should be created. After the token has been obtained, it can be used to recreate an instance of the originating poller. When resuming a poller, no IO is performed, and zero-value arguments can be used for everything but the Options.ResumeToken. Resume tokens are unique per service client and operation. Attempting to resume a poller for LRO BeginB() with a token from LRO BeginA() will result in an error. The fake package contains types used for constructing in-memory fake servers used in unit tests. This allows writing tests to cover various success/error conditions without the need for connecting to a live service. Please see https://github.com/Azure/azure-sdk-for-go/tree/main/sdk/samples/fakes for details and examples on how to use fakes.
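Hedged sketches of consuming a pager and waiting on a poller, reusing the hypothetical WidgetClient named above (the Widgets field and the nil options are illustrative):

	// Paging: no IO happens until NextPage is called.
	pager := client.NewListWidgetsPager(nil)
	for pager.More() {
		page, err := pager.NextPage(ctx)
		if err != nil {
			log.Fatal(err)
		}
		for _, w := range page.Widgets {
			fmt.Println(*w.Name)
		}
	}

	// Long-running operation: block until a terminal state is reached.
	poller, err := client.BeginCreateOrUpdate(ctx, "widget1", nil)
	if err != nil {
		log.Fatal(err)
	}
	result, err := poller.PollUntilDone(ctx, nil)
	if err != nil {
		log.Fatal(err)
	}
	_ = result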
internal contains content for Azure SDK developers.
Package lint contains a linter for Go source code.
Package ecr provides the API client, operations, and parameter types for Amazon Elastic Container Registry. Amazon Elastic Container Registry (Amazon ECR) is a managed container image registry service. Customers can use the familiar Docker CLI, or their preferred client, to push, pull, and manage images. Amazon ECR provides a secure, scalable, and reliable registry for your Docker or Open Container Initiative (OCI) images. Amazon ECR supports private repositories with resource-based permissions using IAM so that specific users or Amazon EC2 instances can access repositories and images. Amazon ECR has service endpoints in each supported Region. For more information, see Amazon ECR endpoints in the Amazon Web Services General Reference.
Package jaeger contains an OpenCensus tracing exporter for Jaeger.
Package fx is a framework that makes it easy to build applications out of reusable, composable modules. Fx applications use dependency injection to eliminate globals without the tedium of manually wiring together function calls. Unlike other approaches to dependency injection, Fx works with plain Go functions: you don't need to use struct tags or embed special types, so Fx automatically works well with most Go packages. Basic usage is explained in the package-level example. If you're new to Fx, start there! Advanced features, including named instances, optional parameters, and value groups, are explained further down in this section. To test functions that use the Lifecycle type or to write end-to-end tests of your Fx application, use the helper functions and types provided by the go.uber.org/fx/fxtest package. Fx constructors declare their dependencies as function parameters. This can quickly become unreadable if the constructor has a lot of dependencies. To improve the readability of constructors like this, create a struct that lists all the dependencies as fields and change the function to accept that struct instead. The new struct is called a parameter struct. Fx has first-class support for parameter structs: any struct embedding fx.In gets treated as a parameter struct, so the individual fields in the struct are supplied via dependency injection. Using a parameter struct, we can make the constructor above much more readable: Though it's rarely necessary to mix the two, constructors can receive any combination of parameter structs and parameters. Result structs are the inverse of parameter structs. These structs represent multiple outputs from a single function as fields. Fx treats all structs embedding fx.Out as result structs, so other constructors can rely on the result struct's fields directly. Without result structs, we sometimes have function definitions like this: With result structs, we can make this both more readable and easier to modify in the future: Some use cases require the application container to hold multiple values of the same type. A constructor that produces a result struct can tag any field with `name:".."` to have the corresponding value added to the graph under the specified name. An application may contain at most one unnamed value of a given type, but may contain any number of named values of the same type. Similarly, a constructor that accepts a parameter struct can tag any field with `name:".."` to have the corresponding value injected by name. Note that both the name AND type of the fields on the parameter struct must match the corresponding result struct. Constructors often have optional dependencies on some types: if those types are missing, they can operate in a degraded state. Fx supports optional dependencies via the `optional:"true"` tag to fields on parameter structs. If an optional field isn't available in the container, the constructor receives the field's zero value. Constructors that declare optional dependencies MUST gracefully handle situations in which those dependencies are absent. The optional tag also allows adding new dependencies without breaking existing consumers of the constructor. The optional tag may be combined with the name tag to declare a named value dependency optional. To make it easier to produce and consume many values of the same type, Fx supports named, unordered collections called value groups. Constructors can send values into value groups by returning a result struct tagged with `group:".."`.
Any number of constructors may provide values to this named collection, but the ordering of the final collection is unspecified. Value groups require parameter and result structs to use fields with different types: if a group of constructors each returns type T, parameter structs consuming the group must use a field of type []T. Parameter structs can request a value group by using a field of type []T tagged with `group:".."`. This will execute all constructors that provide a value to that group in an unspecified order, then collect all the results into a single slice. Note that values in a value group are unordered. Fx makes no guarantees about the order in which these values will be produced. By default, when a constructor declares a dependency on a value group, all values provided to that value group are eagerly instantiated. That is undesirable for cases where an optional component wants to contribute to a value group, but only if it was actually used by the rest of the application. A soft value group can be thought of as a best-attempt at populating the group with values from constructors that have already run. In other words, if a constructor's output type is only consumed by a soft value group, it will not be run. Note that Fx randomizes the order of values in the value group, so the slice of values may not match the order in which constructors were run. To declare a soft relationship between a group and its constructors, use the `soft` option on the input group tag (`group:"[groupname],soft"`). This option is only valid for input parameters. With such a declaration, a constructor that provides a value to the 'server' value group will be called only if there's another instantiated component that consumes the results of that constructor. NewHandlerAndLogger will be called because the Logger is consumed by the application, but NewHandler will not be called because it's only consumed by the soft value group. By default, values of type T provided to a value group are consumed as []T. This means that if the producer produces []T, the consumer must consume [][]T. There are cases where it's desirable for the producer (the fx.Out) to produce multiple values ([]T), and for the consumer (the fx.In) to consume them as a single slice ([]T). Fx offers flattened value groups for this purpose. To provide multiple values for a group from a result struct, produce a slice and use the `,flatten` option on the group tag. This indicates that each element in the slice should be injected into the group individually. By default, a type that embeds fx.In may not have any unexported fields. The following will return an error if used with Fx. If you need unexported fields on such a type, you may opt into ignoring unexported fields by adding the ignore-unexported struct tag to the fx.In. For example,
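	// A hedged sketch of the ignore-unexported tag; the field names and types
	// below are illustrative, not taken from the original documentation.
	type Params struct {
		fx.In `ignore-unexported:"true"`

		Logger *log.Logger

		// unexported fields are ignored by Fx instead of causing an error
		mu sync.Mutex
	}

More generally, a parameter struct and a result struct with named and optional values might look like the following hedged sketch (Connection and Client are illustrative types):

	type Connection struct{}
	type Client struct{ rw, ro *Connection }

	// ConnectionsResult is a result struct: each field is added to the graph,
	// here under the names "rw" and "ro".
	type ConnectionsResult struct {
		fx.Out

		ReadWrite *Connection `name:"rw"`
		ReadOnly  *Connection `name:"ro"`
	}

	func NewConnections() ConnectionsResult {
		return ConnectionsResult{ReadWrite: &Connection{}, ReadOnly: &Connection{}}
	}

	// ClientParams is a parameter struct: Fx injects each field, matching names
	// and types; the read-only connection is optional.
	type ClientParams struct {
		fx.In

		ReadWrite *Connection `name:"rw"`
		ReadOnly  *Connection `name:"ro" optional:"true"`
	}

	func NewClient(p ClientParams) *Client {
		return &Client{rw: p.ReadWrite, ro: p.ReadOnly}
	}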
SQL Schema migration tool for Go. Key features: To install the library and command line program, use the following: The main command is called sql-migrate. Each command requires a configuration file (which defaults to dbconfig.yml, but can be specified with the -config flag). This config file should specify one or more environments: The `table` setting is optional and will default to `gorp_migrations`. The environment that will be used can be specified with the -env flag (defaults to development). Use the --help flag in combination with any of the commands to get an overview of its usage: The up command applies all available migrations. By contrast, down will only apply one migration by default. This behavior can be changed for both by using the -limit parameter. The redo command will unapply the last migration and reapply it. This is useful during development, when you're writing migrations. Use the status command to see the state of the applied migrations: If you are using MySQL, you must append ?parseTime=true to the datasource configuration. For example: See https://github.com/go-sql-driver/mysql#parsetime for more information. Import sql-migrate into your application: Set up a source of migrations; this can be from memory, from a set of files, or from bindata (more on that later): Then use the Exec function to upgrade your database: Note that n can be greater than 0 even if there is an error: any migration that succeeded will remain applied even if a later one fails. The full set of capabilities can be found in the API docs below. Migrations are defined in SQL files, which contain a set of SQL statements. Special comments are used to distinguish up and down migrations. You can put multiple statements in each block, as long as you end them with a semicolon (;). If you have complex statements which contain semicolons, use StatementBegin and StatementEnd to indicate boundaries: The order in which migrations are applied is defined through the filename: sql-migrate will sort migrations based on their name. It's recommended to use an increasing version number or a timestamp as the first part of the filename. Normally each migration is run within a transaction in order to guarantee that it is fully atomic. However, some SQL commands (for example creating an index concurrently in PostgreSQL) cannot be executed inside a transaction. In order to execute such a command in a migration, the migration can be run using the notransaction option: If you like your Go applications self-contained (that is, a single binary), use packr (https://github.com/gobuffalo/packr) to embed the migration files. Just write your migration files as usual, as a set of SQL files in a folder. Use the PackrMigrationSource in your application to find the migrations: If you already have a box and would like to use a subdirectory: As an alternative, but slightly less maintained, you can use bindata (https://github.com/shuLhan/go-bindata) to embed the migration files. Just write your migration files as usual, as a set of SQL files in a folder. Then use bindata to generate a .go file with the migrations embedded: The resulting bindata.go file will contain your migrations. Remember to regenerate your bindata.go file whenever you add/modify a migration (go generate will help here, once it arrives). Use the AssetMigrationSource in your application to find the migrations: Both Asset and AssetDir are functions provided by bindata. Then proceed as usual. Adding a new migration source means implementing MigrationSource.
The resulting slice of migrations will be executed in the given order, so it should usually be sorted by the Id field.
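A hedged sketch of applying file-based migrations from library code (driver, connection string, and directory are illustrative):

	migrations := &migrate.FileMigrationSource{
		Dir: "db/migrations",
	}

	db, err := sql.Open("postgres", "dbname=myapp sslmode=disable")
	if err != nil {
		log.Fatal(err)
	}

	n, err := migrate.Exec(db, "postgres", migrations, migrate.Up)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("Applied %d migrations\n", n)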
Package websocket implements the RFC 6455 WebSocket protocol. Deprecated: coder now maintains this library at https://github.com/coder/websocket. https://tools.ietf.org/html/rfc6455 Use Dial to dial a WebSocket server. Use Accept to accept a WebSocket client. Conn represents the resulting WebSocket connection. The examples are the best way to understand how to correctly use the library. The wsjson and wspb subpackages contain helpers for JSON and protobuf messages. More documentation at https://nhooyr.io/websocket. The client side supports compiling to Wasm. It wraps the WebSocket browser API. See https://developer.mozilla.org/en-US/docs/Web/API/WebSocket Some important caveats to be aware of: This example demonstrates an echo server. This example demonstrates a full-stack chat application with an automated test.
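A hedged client-side sketch using Dial and the wsjson helper (the URL is illustrative):

	ctx, cancel := context.WithTimeout(context.Background(), time.Minute)
	defer cancel()

	c, _, err := websocket.Dial(ctx, "ws://localhost:8080/echo", nil)
	if err != nil {
		log.Fatal(err)
	}
	defer c.Close(websocket.StatusInternalError, "unexpected exit")

	if err := wsjson.Write(ctx, c, map[string]int{"i": 1}); err != nil {
		log.Fatal(err)
	}

	c.Close(websocket.StatusNormalClosure, "")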
Package otlpmetrichttp provides an OTLP metrics exporter using HTTP with protobuf payloads. By default the telemetry is sent to https://localhost:4318/v1/metrics. Exporter should be created using New and used with a metric.PeriodicReader. The environment variables described below can be used for configuration. OTEL_EXPORTER_OTLP_ENDPOINT (default: "https://localhost:4318") - target base URL ("/v1/metrics" is appended) to which the exporter sends telemetry. The value must contain a scheme ("http" or "https") and host. The value may additionally contain a port and a path. The value should not contain a query string or fragment. The configuration can be overridden by OTEL_EXPORTER_OTLP_METRICS_ENDPOINT environment variable and by WithEndpoint, WithEndpointURL, and WithInsecure options. OTEL_EXPORTER_OTLP_METRICS_ENDPOINT (default: "https://localhost:4318/v1/metrics") - target URL to which the exporter sends telemetry. The value must contain a scheme ("http" or "https") and host. The value may additionally contain a port and a path. The value should not contain a query string or fragment. The configuration can be overridden by WithEndpoint, WithEndpointURL, WithInsecure, and WithURLPath options. OTEL_EXPORTER_OTLP_HEADERS, OTEL_EXPORTER_OTLP_METRICS_HEADERS (default: none) - key-value pairs used as headers associated with HTTP requests. The value is expected to be represented in a format matching the W3C Baggage HTTP Header Content Format, except that additional semi-colon delimited metadata is not supported. Example value: "key1=value1,key2=value2". OTEL_EXPORTER_OTLP_METRICS_HEADERS takes precedence over OTEL_EXPORTER_OTLP_HEADERS. The configuration can be overridden by WithHeaders option. OTEL_EXPORTER_OTLP_TIMEOUT, OTEL_EXPORTER_OTLP_METRICS_TIMEOUT (default: "10000") - maximum time in milliseconds the OTLP exporter waits for each batch export. OTEL_EXPORTER_OTLP_METRICS_TIMEOUT takes precedence over OTEL_EXPORTER_OTLP_TIMEOUT. The configuration can be overridden by WithTimeout option. OTEL_EXPORTER_OTLP_COMPRESSION, OTEL_EXPORTER_OTLP_METRICS_COMPRESSION (default: none) - compression strategy the exporter uses to compress the HTTP body. Supported values: "gzip". OTEL_EXPORTER_OTLP_METRICS_COMPRESSION takes precedence over OTEL_EXPORTER_OTLP_COMPRESSION. The configuration can be overridden by WithCompression option. OTEL_EXPORTER_OTLP_CERTIFICATE, OTEL_EXPORTER_OTLP_METRICS_CERTIFICATE (default: none) - filepath to the trusted certificate to use when verifying a server's TLS credentials. OTEL_EXPORTER_OTLP_METRICS_CERTIFICATE takes precedence over OTEL_EXPORTER_OTLP_CERTIFICATE. The configuration can be overridden by WithTLSClientConfig option. OTEL_EXPORTER_OTLP_CLIENT_CERTIFICATE, OTEL_EXPORTER_OTLP_METRICS_CLIENT_CERTIFICATE (default: none) - filepath to the client certificate/chain trust for client's private key to use in mTLS communication in PEM format. OTEL_EXPORTER_OTLP_METRICS_CLIENT_CERTIFICATE takes precedence over OTEL_EXPORTER_OTLP_CLIENT_CERTIFICATE. The configuration can be overridden by WithTLSClientConfig option. OTEL_EXPORTER_OTLP_CLIENT_KEY, OTEL_EXPORTER_OTLP_METRICS_CLIENT_KEY (default: none) - filepath to the client's private key to use in mTLS communication in PEM format. OTEL_EXPORTER_OTLP_METRICS_CLIENT_KEY takes precedence over OTEL_EXPORTER_OTLP_CLIENT_KEY. The configuration can be overridden by WithTLSClientConfig option. OTEL_EXPORTER_OTLP_METRICS_TEMPORALITY_PREFERENCE (default: "cumulative") - aggregation temporality to use on the basis of instrument kind. 
Supported values: "cumulative", "delta", "lowmemory". The configuration can be overridden by WithTemporalitySelector option. OTEL_EXPORTER_OTLP_METRICS_DEFAULT_HISTOGRAM_AGGREGATION (default: "explicit_bucket_histogram") - default aggregation to use for histogram instruments. Supported values: "explicit_bucket_histogram", "base2_exponential_bucket_histogram". The configuration can be overridden by WithAggregationSelector option.
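As with the gRPC exporter, a brief sketch of programmatic setup (the endpoint URL is illustrative; metric is go.opentelemetry.io/otel/sdk/metric):

	exp, err := otlpmetrichttp.New(ctx,
		otlpmetrichttp.WithEndpointURL("http://localhost:4318/v1/metrics"),
	)
	if err != nil {
		log.Fatal(err)
	}
	provider := metric.NewMeterProvider(metric.WithReader(metric.NewPeriodicReader(exp)))
	defer func() { _ = provider.Shutdown(ctx) }()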
Package testify is a set of packages that provide many tools for testifying that your code will behave as you intend. testify contains the following packages: The assert package provides a comprehensive set of assertion functions that tie in to the Go testing system. The http package contains tools to make it easier to test http activity using the Go testing system. The mock package provides a system by which it is possible to mock your objects and verify calls are happening as expected. The suite package provides a basic structure for using structs as testing suites, and methods on those structs as tests. It includes setup/teardown functionality in the way of interfaces.
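For example, a minimal assertion in a standard Go test might look like this (Sum is a hypothetical function under test):

	func TestSum(t *testing.T) {
		got := Sum(2, 2)
		assert.Equal(t, 4, got, "Sum(2, 2) should equal 4")
	}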
Package ecrpublic provides the API client, operations, and parameter types for Amazon Elastic Container Registry Public. Amazon Elastic Container Registry Public (Amazon ECR Public) is a managed container image registry service. Amazon ECR provides both public and private registries to host your container images. You can use the Docker CLI or your preferred client to push, pull, and manage images. Amazon ECR provides a secure, scalable, and reliable registry for your Docker or Open Container Initiative (OCI) images. Amazon ECR supports public repositories with this API. For information about the Amazon ECR API for private repositories, see Amazon Elastic Container Registry API Reference.
Package generated provides a function that parses a source file and reports whether it contains a "// Code generated … DO NOT EDIT." line comment. It implements the specification at https://go.dev/s/generatedcode. The first priority is correctness (no false negatives, no false positives). It must return accurate results even if the input source code is formatted unconventionally. The second priority is performance. The current version uses bufio.Reader and ReadBytes. Performance can be optimized further by using lower level I/O primitives and allocating less. That can be explored later.
Package statik contains a program that generates code to register a directory and its contents as zip data for the statik file system.
Package endpointdiscovery provides a feature implemented in the AWS SDK for Go V2 that allows a client to fetch a valid endpoint to serve an API request. Discovered endpoints are stored in an internal thread-safe cache to reduce the number of calls made to fetch the endpoint. Endpoint discovery stores endpoints by associating them with a generated cache key. Cache keys are built using the service-modeled sdkId and any service-defined input identifiers provided by the customer. Endpoint cache keys follow the grammar: The endpoint discovery cache implementation is internal. Clients resolve the cache size to 10 entries. Each entry may contain multiple host addresses as returned by the service. Each discovered endpoint has a TTL associated with it, and entries are evicted from the cache lazily, i.e. when the client tries to retrieve an endpoint but finds an expired entry instead. The endpoint discovery feature can be turned on by setting the `AWS_ENABLE_ENDPOINT_DISCOVERY` environment variable to TRUE. By default, the feature is set to AUTO, indicating that operations that require endpoint discovery always use it. To completely turn off the feature, set the value to FALSE. Similar configuration rules apply for the shared config file, where the key is `endpoint_discovery_enabled`.
Package attributesprocessor contains the logic to modify attributes of a span. It supports insert, update, upsert and delete as actions.
Package trillian contains the generated protobuf code for the Trillian API.