Package iris implements a high-performance, easy-to-learn Go web framework. Iris provides a beautifully expressive and easy-to-use foundation for your next website, API, or distributed app. It offers low-level handlers compatible with net/http as well as a high-level, fast MVC implementation with dependency injection for handlers. It is easy for new gophers to pick up, yet offers advanced features for experienced developers; it goes as far as you dive into it! Source code and other details for the project are available at GitHub. Current version: 12.2.11. The only requirement is the Go Programming Language, at least version 1.22.
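A minimal hello-world sketch of the framework described above (the handler body and port are illustrative; the iris/v12 import path is the project's current module path):

```go
package main

import "github.com/kataras/iris/v12"

func main() {
	app := iris.New()
	app.Get("/", func(ctx iris.Context) {
		ctx.JSON(iris.Map{"message": "Hello, Iris!"})
	})
	// Listen blocks, serving HTTP on :8080.
	app.Listen(":8080")
}
```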
Package azcore implements an HTTP request/response middleware pipeline used by Azure SDK clients. The middleware consists of three components.

A Policy can be implemented in two ways; as a first-class function for a stateless Policy, or as a method on a type for a stateful Policy. Note that HTTP requests made via the same pipeline share the same Policy instances, so if a Policy mutates its state it MUST be properly synchronized to avoid race conditions.

A Policy's Do method is called when an HTTP request wants to be sent over the network. The Do method can perform any operation(s) it desires. For example, it can log the outgoing request, mutate the URL, headers, and/or query parameters, inject a failure, etc. Once the Policy has successfully completed its request work, it must call the Next() method on the *policy.Request instance in order to pass the request to the next Policy in the chain.

When an HTTP response comes back, the Policy then gets a chance to process the response/error. The Policy instance can log the response, retry the operation if it failed due to a transient error or timeout, unmarshal the response body, etc. Once the Policy has successfully completed its response work, it must return the *http.Response and error instances to its caller.

Template for implementing a stateless Policy:

Template for implementing a stateful Policy:

The Transporter interface is responsible for sending the HTTP request and returning the corresponding HTTP response or error. The Transporter is invoked by the last Policy in the chain. The default Transporter implementation uses a shared http.Client from the standard library. The same stateful/stateless rules for Policy implementations apply to Transporter implementations.

To use the Policy and Transporter instances, an application passes them to the runtime.NewPipeline function. The specified Policy instances form a chain and are invoked in the order provided to NewPipeline followed by the Transporter. Once the Pipeline has been created, create a runtime.Request instance and pass it to Pipeline's Do method. The Pipeline.Do method sends the specified Request through the chain of Policy and Transporter instances. The response/error is then sent through the same chain of Policy instances in reverse order. For example, assuming there are Policy types PolicyA, PolicyB, and PolicyC along with TransportA, the flow of Request and Response looks like the following:

The Request instance passed to Pipeline's Do method is a wrapper around an *http.Request. It also contains some internal state and provides various convenience methods. You create a Request instance by calling the runtime.NewRequest function.

If the Request should contain a body, call the SetBody method. A seekable stream is required so that upon retry, the retry Policy instance can seek the stream back to the beginning before retrying the network request and re-uploading the body.

Operations like JSON-MERGE-PATCH send a JSON null to indicate a value should be deleted. This requirement conflicts with the SDK's default marshalling that specifies "omitempty" as a means to resolve the ambiguity between a field to be excluded and its zero-value. In the example below, Name and Count are defined as pointer-to-type to disambiguate between a missing value (nil) and a zero-value (0) which might have semantic differences. In a PATCH operation, any fields left as nil are to have their values preserved. When updating a Widget's count, one simply specifies the new value for Count, leaving Name nil.
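A sketch of the stateless Policy template referenced above, plus the Widget type from the JSON-MERGE-PATCH discussion. Widget is the doc's hypothetical payload type, not part of azcore; to.Ptr is the pointer helper from the azcore/to package:

```go
package sample

import (
	"net/http"

	"github.com/Azure/azure-sdk-for-go/sdk/azcore/policy"
	"github.com/Azure/azure-sdk-for-go/sdk/azcore/to"
)

// policyFunc adapts a plain function to the policy.Policy interface (stateless form).
type policyFunc func(*policy.Request) (*http.Response, error)

func (pf policyFunc) Do(req *policy.Request) (*http.Response, error) {
	return pf(req)
}

// NewMyStatelessPolicy processes the request, forwards it with req.Next(),
// then processes the response before returning it to the previous Policy.
func NewMyStatelessPolicy() policy.Policy {
	return policyFunc(func(req *policy.Request) (*http.Response, error) {
		// mutate/process the outgoing *policy.Request here
		resp, err := req.Next() // forward to the next Policy in the chain
		// mutate/process the incoming *http.Response/error here
		return resp, err
	})
}

// Widget is a hypothetical JSON-MERGE-PATCH payload: pointer-to-type fields
// distinguish "field not sent" (nil) from an explicit zero value.
type Widget struct {
	Name  *string `json:"name,omitempty"`
	Count *int    `json:"count,omitempty"`
}

// Update only Count; Name stays nil and is omitted, so the service preserves it.
var patch = Widget{Count: to.Ptr(25)}
```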
To fulfill the requirement for sending a JSON null, the NullValue() function can be used. This sends an explicit "null" for Count, indicating that any current value for Count should be deleted.

When the HTTP response is received, the *http.Response is returned directly. Each Policy instance can inspect/mutate the *http.Response.

To enable logging, set environment variable AZURE_SDK_GO_LOGGING to "all" before executing your program. By default the logger writes to stderr. This can be customized by calling log.SetListener, providing a callback that writes to the desired location. Any custom logging implementation MUST provide its own synchronization to handle concurrent invocations. See the docs for the log package for further details.

Pageable operations return potentially large data sets spread over multiple GET requests. The result of each GET is a "page" of data consisting of a slice of items. Pageable operations can be identified by their New*Pager naming convention and return type of *runtime.Pager[T]. The call to WidgetClient.NewListWidgetsPager() returns an instance of *runtime.Pager[T] for fetching pages and determining if there are more pages to fetch. No IO calls are made until the NextPage() method is invoked.

Long-running operations (LROs) are operations consisting of an initial request to start the operation followed by polling to determine when the operation has reached a terminal state. An LRO's terminal state is one of the following values: Succeeded, Failed, or Canceled. LROs can be identified by their Begin* prefix and their return type of *runtime.Poller[T]. When a call to WidgetClient.BeginCreateOrUpdate() returns a nil error, it means that the LRO has started. It does _not_ mean that the widget has been created or updated (or failed to be created/updated). The *runtime.Poller[T] provides APIs for determining the state of the LRO. To wait for the LRO to complete, call the PollUntilDone() method. The call to PollUntilDone() will block the current goroutine until the LRO has reached a terminal state or the context is canceled/timed out. Note that LROs can take anywhere from several seconds to several minutes. The duration is operation-dependent. Due to this variant behavior, pollers do _not_ have a preconfigured time-out. Use a context with the appropriate cancellation mechanism as required.

Pollers provide the ability to serialize their state into a "resume token" which can be used by another process to recreate the poller. This is achieved via the runtime.Poller[T].ResumeToken() method. Note that a token can only be obtained for a poller that's in a non-terminal state. Also note that any subsequent calls to poller.Poll() might change the poller's state. In this case, a new token should be created. After the token has been obtained, it can be used to recreate an instance of the originating poller. When resuming a poller, no IO is performed, and zero-value arguments can be used for everything but the Options.ResumeToken. Resume tokens are unique per service client and operation. Attempting to resume a poller for LRO BeginB() with a token from LRO BeginA() will result in an error.

The fake package contains types used for constructing in-memory fake servers used in unit tests. This allows writing tests to cover various success/error conditions without the need for connecting to a live service. Please see https://github.com/Azure/azure-sdk-for-go/tree/main/sdk/samples/fakes for details and examples on how to use fakes.
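A sketch of the pager and poller patterns described above. WidgetClient, Widget, and the page's Widgets field are the doc's hypothetical types, so this is illustrative rather than compilable against a real client:

```go
// Paging: no IO happens until NextPage is called.
pager := client.NewListWidgetsPager(nil)
for pager.More() {
	page, err := pager.NextPage(context.TODO())
	if err != nil {
		log.Fatal(err)
	}
	for _, widget := range page.Widgets {
		fmt.Println(*widget.Name)
	}
}

// LROs: Begin* only starts the operation; PollUntilDone blocks the current
// goroutine until a terminal state is reached or the context is canceled.
poller, err := client.BeginCreateOrUpdate(context.TODO(), widget, nil)
if err != nil {
	log.Fatal(err)
}
resp, err := poller.PollUntilDone(context.TODO(), nil)
if err != nil {
	log.Fatal(err)
}
_ = resp
```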
Package fx is a framework that makes it easy to build applications out of reusable, composable modules. Fx applications use dependency injection to eliminate globals without the tedium of manually wiring together function calls. Unlike other approaches to dependency injection, Fx works with plain Go functions: you don't need to use struct tags or embed special types, so Fx automatically works well with most Go packages.

Basic usage is explained in the package-level example. If you're new to Fx, start there! Advanced features, including named instances, optional parameters, and value groups, are explained further down in this section. To test functions that use the Lifecycle type or to write end-to-end tests of your Fx application, use the helper functions and types provided by the go.uber.org/fx/fxtest package.

Fx constructors declare their dependencies as function parameters. This can quickly become unreadable if the constructor has a lot of dependencies. To improve the readability of constructors like this, create a struct that lists all the dependencies as fields and change the function to accept that struct instead. The new struct is called a parameter struct. Fx has first class support for parameter structs: any struct embedding fx.In gets treated as a parameter struct, so the individual fields in the struct are supplied via dependency injection. Using a parameter struct, we can make such a constructor much more readable. Though it's rarely necessary to mix the two, constructors can receive any combination of parameter structs and parameters.

Result structs are the inverse of parameter structs. These structs represent multiple outputs from a single function as fields. Fx treats all structs embedding fx.Out as result structs, so other constructors can rely on the result struct's fields directly. Without result structs, we sometimes have function definitions that return a long list of values; with result structs, we can make this both more readable and easier to modify in the future.

Some use cases require the application container to hold multiple values of the same type. A constructor that produces a result struct can tag any field with `name:".."` to have the corresponding value added to the graph under the specified name. An application may contain at most one unnamed value of a given type, but may contain any number of named values of the same type. Similarly, a constructor that accepts a parameter struct can tag any field with `name:".."` to have the corresponding value injected by name. Note that both the name AND type of the fields on the parameter struct must match the corresponding result struct.

Constructors often have optional dependencies on some types: if those types are missing, they can operate in a degraded state. Fx supports optional dependencies via the `optional:"true"` tag on fields of parameter structs. If an optional field isn't available in the container, the constructor receives the field's zero value. Constructors that declare optional dependencies MUST gracefully handle situations in which those dependencies are absent. The optional tag also allows adding new dependencies without breaking existing consumers of the constructor. The optional tag may be combined with the name tag to declare a named value dependency optional. A sketch of parameter and result structs, named values, and optional dependencies follows.
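A minimal sketch of the features just described, before moving on to value groups. The Config/Server types and field names are illustrative, not part of Fx:

```go
package main

import (
	"log"

	"go.uber.org/fx"
)

type Config struct{ Addr string }
type Server struct{ cfg *Config }

// Params is a parameter struct: Fx supplies each field from the container.
type Params struct {
	fx.In

	Cfg    *Config
	Log    *log.Logger `optional:"true"`                // nil if absent from the container
	Backup *Config     `name:"backup" optional:"true"`  // named + optional dependency
}

// Results is a result struct: each field is added to the container.
type Results struct {
	fx.Out

	Server *Server
	Ready  chan struct{} `name:"ready"` // a named value
}

func NewServer(p Params) Results {
	if p.Log != nil {
		p.Log.Printf("serving on %s", p.Cfg.Addr)
	}
	return Results{Server: &Server{cfg: p.Cfg}, Ready: make(chan struct{})}
}

func main() {
	fx.New(
		fx.Provide(func() *Config { return &Config{Addr: ":8080"} }),
		fx.Provide(NewServer),
		fx.Invoke(func(s *Server) { /* forces construction of *Server */ }),
	).Run()
}
```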
To make it easier to produce and consume many values of the same type, Fx supports named, unordered collections called value groups. Constructors can send values into value groups by returning a result struct tagged with `group:".."`. Any number of constructors may provide values to this named collection, but the ordering of the final collection is unspecified. Value groups require parameter and result structs to use fields with different types: if a group of constructors each returns type T, parameter structs consuming the group must use a field of type []T. Parameter structs can request a value group by using a field of type []T tagged with `group:".."`. This will execute all constructors that provide a value to that group in an unspecified order, then collect all the results into a single slice. Note that values in a value group are unordered; Fx makes no guarantees about the order in which these values will be produced.

By default, when a constructor declares a dependency on a value group, all values provided to that value group are eagerly instantiated. That is undesirable for cases where an optional component wants to contribute to a value group, but only if it was actually used by the rest of the application. A soft value group can be thought of as a best-effort attempt at populating the group with values from constructors that have already run. In other words, if a constructor's output type is only consumed by a soft value group, that constructor will not be run. Note that Fx randomizes the order of values in the value group, so the slice of values may not match the order in which constructors were run. To declare a soft relationship between a group and its constructors, use the `soft` option on the input group tag (`group:"[groupname],soft"`); this option is only valid for input parameters. With such a declaration, a constructor that provides a value to the 'server' value group will be called only if there's another instantiated component that consumes the results of that constructor. NewHandlerAndLogger will be called because the Logger is consumed by the application, but NewHandler will not be called because it's only consumed by the soft value group.

By default, values of type T produced to a value group are consumed as []T. This means that if a producer produces []T, the consumer must consume [][]T. There are cases where it's desirable for the producer (the fx.Out) to produce multiple values ([]T), and for the consumer (the fx.In) to consume them as a single slice ([]T). Fx offers flattened value groups for this purpose. To provide multiple values for a group from a result struct, produce a slice and use the `,flatten` option on the group tag; this indicates that each element in the slice should be injected into the group individually.

By default, a type that embeds fx.In may not have any unexported fields; such a type will cause an error if used with Fx. If you need unexported fields on such a type, you may opt into ignoring them by adding the ignore-unexported struct tag to the fx.In. For example,
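a sketch of the group producer/consumer tags and the ignore-unexported opt-in (field names and the "routes" group name are illustrative; assumes the standard net/http, log, and time imports alongside go.uber.org/fx):

```go
// Producer side: each constructor contributes one http.Handler to the
// "routes" group. A []http.Handler field tagged `group:"routes,flatten"`
// would instead contribute its elements individually.
type RouteResult struct {
	fx.Out

	Route http.Handler `group:"routes"`
}

// Consumer side: the group arrives as a slice, in unspecified order.
// Tagging it `group:"routes,soft"` would collect only values whose
// constructors already ran for some other reason.
type ServerParams struct {
	fx.In

	Routes []http.Handler `group:"routes"`
}

// Opting into unexported fields on a parameter struct:
type LoggerParams struct {
	fx.In `ignore-unexported:"true"`

	Logger *log.Logger
	clock  func() time.Time // unexported: ignored by Fx instead of erroring
}
```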
Package inject provides utilities for mapping and injecting dependencies in various ways.
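A small sketch of the mapping-and-invoking style this injector provides; the values being mapped are illustrative:

```go
package main

import (
	"fmt"
	"io"
	"os"

	"github.com/codegangsta/inject"
)

func main() {
	inj := inject.New()
	inj.Map("hello")                        // map a concrete string by its type
	inj.MapTo(os.Stdout, (*io.Writer)(nil)) // map a value to an interface type

	// Invoke resolves the function's arguments from the mapped values.
	_, err := inj.Invoke(func(s string, w io.Writer) {
		fmt.Fprintln(w, s)
	})
	if err != nil {
		panic(err)
	}
}
```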
Package inject provides a reflect based injector. A large application built with dependency injection in mind will typically involve the boring work of setting up the object graph. This library attempts to take care of this boring work by creating and connecting the various objects. Its use involves seeding the object graph with some (possibly incomplete) objects, where the underlying types have been tagged for injection. Given this, the library will populate the objects, creating new ones as necessary. It uses singletons by default, and supports optional private instances as well as named instances. It works using Go's reflection package and, with respect to private fields, is inherently limited in what it can do compared to a code-gen system.

The usage pattern for the library involves struct tags. It uses the same tag format as the various standard libraries, such as json and xml. It involves tags in one of the three forms below: the first, no-value syntax is for the common case of a singleton dependency of the associated type; the second triggers creation of a private instance for the associated type; finally, the last form asks for a named dependency called "dev logger".
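A sketch of the three tag forms and of wiring the graph; the app/cache types are illustrative, and the Graph/Object API shown is the package's exported surface as I understand it:

```go
package main

import (
	"log"
	"os"

	"github.com/facebookgo/inject"
)

type cache struct{}

type app struct {
	Logger *log.Logger `inject:""`           // shared singleton of this type
	Cache  *cache      `inject:"private"`    // a private instance just for this field
	Debug  *log.Logger `inject:"dev logger"` // named dependency "dev logger"
}

func main() {
	var a app
	var g inject.Graph

	// Seed the graph with the target object and the named dependency.
	err := g.Provide(
		&inject.Object{Value: &a},
		&inject.Object{Value: log.New(os.Stderr, "dev ", log.LstdFlags), Name: "dev logger"},
	)
	if err != nil {
		panic(err)
	}
	// Populate creates the missing *log.Logger and *cache and wires everything.
	if err := g.Populate(); err != nil {
		panic(err)
	}
}
```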
Package gosnowflake is a pure Go Snowflake driver for the database/sql package. Clients can use the database/sql package directly; a connection example appears in the sketch after the parameter list below.

Use the Open() function to create a database handle with connection parameters. The Go Snowflake Driver supports the following connection syntaxes (or data source name (DSN) formats):

	username[:password]@<account_identifier>/dbname/schemaname[?param1=value&...&paramN=valueN]
	username[:password]@<account_identifier>/dbname[?param1=value&...&paramN=valueN]
	username[:password]@hostname:port/dbname/schemaname?account=<account_identifier>[&param1=value&...&paramN=valueN]

where all parameters must be escaped, or use Config and DSN to construct a DSN string. For information about account identifiers, see the Snowflake documentation (https://docs.snowflake.com/en/user-guide/admin-account-identifier.html).

The example below opens a database handle with the Snowflake account named "my_account" under the organization named "my_organization", where the username is "jsmith", password is "mypassword", database is "mydb", schema is "testschema", and warehouse is "mywh" (see the sketch after the parameter list).

The connection string (DSN) can contain both connection parameters (described below) and session parameters (https://docs.snowflake.com/en/sql-reference/parameters.html). The following connection parameters are supported:

account <string>: Specifies your Snowflake account, where "<string>" is the account identifier assigned to your account by Snowflake. For information about account identifiers, see the Snowflake documentation (https://docs.snowflake.com/en/user-guide/admin-account-identifier.html). If you are using a global URL, then append the connection group and ".global" (e.g. "<account_identifier>-<connection_group>.global"). The account identifier and the connection group are separated by a dash ("-"), as shown above. This parameter is optional if your account identifier is specified after the "@" character in the connection string.

region <string>: DEPRECATED. You may specify a region, such as "eu-central-1", with this parameter. However, since this parameter is deprecated, it is best to specify the region as part of the account parameter. For details, see the description of the account parameter.

database: Specifies the database to use by default in the client session (can be changed after login).

schema: Specifies the database schema to use by default in the client session (can be changed after login).

warehouse: Specifies the virtual warehouse to use by default for queries, loading, etc. in the client session (can be changed after login).

role: Specifies the role to use by default for accessing Snowflake objects in the client session (can be changed after login).

passcode: Specifies the passcode provided by Duo when using multi-factor authentication (MFA) for login.

passcodeInPassword: false by default. Set to true if the MFA passcode is embedded in the login password. Appends the MFA passcode to the end of the password.

loginTimeout: Specifies the timeout, in seconds, for login. The default is 60 seconds. The login request gives up once the timeout elapses.

requestTimeout: Specifies the timeout, in seconds, for a query to complete. 0 (zero) specifies that the driver should wait indefinitely. The default is 0 seconds. The query request gives up once the timeout elapses.

authenticator: Specifies the authenticator to use for authenticating user credentials:

- To use the internal Snowflake authenticator, specify snowflake (default).
- If you want to cache your MFA logins, use the AuthTypeUsernamePasswordMFA authenticator.
- To authenticate through Okta, specify https://<okta_account_name>.okta.com (URL prefix for Okta).
- To authenticate using your IDP via a browser, specify externalbrowser.
- To authenticate via OAuth, specify oauth and provide an OAuth Access Token (see the token parameter below).

application: Identifies your application to Snowflake Support.

disableOCSPChecks: false by default. Set to true to bypass the Online Certificate Status Protocol (OCSP) certificate revocation check. IMPORTANT: Change the default value for testing or emergency situations only.

insecureMode: deprecated. Use disableOCSPChecks instead.

token: a token that can be used to authenticate. Should be used in conjunction with the "oauth" authenticator.

client_session_keep_alive: Set to true to have a heartbeat in the background every hour to keep the connection alive, so that the connection session never expires. Use this option with care, as it keeps the access open for as long as the process is alive.

ocspFailOpen: true by default. Set to false to make the OCSP check fail closed.

validateDefaultParameters: true by default. Set to false to disable checks on the existence of, and privileges for, the Database, Schema, Warehouse and Role when setting up the connection.

tracing: Specifies the logging level to be used. Set to error by default. Valid values are trace, debug, info, print, warning, error, fatal, panic.

disableQueryContextCache: disables parsing of the query context returned from the server and resending it to the server. Default value is false.

clientConfigFile: specifies the location of the client configuration JSON file, in which you can configure the Easy Logging feature.

disableSamlURLCheck: disables the SAML URL check. Default value is false.

All other parameters are interpreted as session parameters (https://docs.snowflake.com/en/sql-reference/parameters.html). For example, the TIMESTAMP_OUTPUT_FORMAT session parameter can be set by adding it to the DSN's query string; a complete connection string appears in the sketch below. Session-level parameters can also be set by using the SQL command "ALTER SESSION" (https://docs.snowflake.com/en/sql-reference/sql/alter-session.html). Alternatively, use the OpenWithConfig() function to create a database handle with a specified Config.

# Connection Config

You can also connect to your warehouse using a connection config. The database/sql library recommends this approach when you want to take advantage of driver-specific connection features that aren't available in a connection string; each driver supports its own set of connection properties, often providing ways to customize the connection request specific to the DBMS. If you use this method, you don't need to pass a driver name to specify the driver type you want to connect with. Since the driver name is not needed, you can optionally bypass driver registration on startup. To do this, set `GOSNOWFLAKE_SKIP_REGISTERATION` in your environment. This is useful if you wish to register multiple versions of the driver. Note: GOSNOWFLAKE_SKIP_REGISTERATION should not be used if sql.Open() is used as the method to connect to the server, as sql.Open requires registration so it can map the driver name to the driver type, which in this case is "snowflake" and SnowflakeDriver{}.

You can load the connection configuration from a .toml file. With the two environment variables SNOWFLAKE_HOME (the connections.toml file directory) and SNOWFLAKE_DEFAULT_CONNECTION_NAME (the DSN name), the driver will locate the config file and load the connection. You can find an example of this connection method at ./cmd/tomlfileconnection or in the Snowflake docs: https://docs.snowflake.com/en/developer-guide/snowflake-cli-v2/connecting/specify-credentials
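A sketch of opening a handle, using the account/user values from the text above — first with a hand-written DSN, then by building the DSN from a Config:

```go
package main

import (
	"database/sql"
	"log"

	sf "github.com/snowflakedb/gosnowflake"
)

func main() {
	// Option 1: hand-written DSN.
	db, err := sql.Open("snowflake",
		"jsmith:mypassword@my_organization-my_account/mydb/testschema?warehouse=mywh")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	// Option 2: build the DSN from a Config.
	cfg := &sf.Config{
		Account:   "my_organization-my_account",
		User:      "jsmith",
		Password:  "mypassword",
		Database:  "mydb",
		Schema:    "testschema",
		Warehouse: "mywh",
	}
	dsn, err := sf.DSN(cfg)
	if err != nil {
		log.Fatal(err)
	}
	db2, err := sql.Open("snowflake", dsn)
	if err != nil {
		log.Fatal(err)
	}
	defer db2.Close()
}
```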
The Go Snowflake Driver honors the environment variables HTTP_PROXY, HTTPS_PROXY and NO_PROXY for the forward proxy setting. NO_PROXY specifies which hostname endings should be allowed to bypass the proxy server, e.g. no_proxy=.amazonaws.com means that Amazon S3 access does not need to go through the proxy. NO_PROXY does not support wildcards. Each value specified should be one of the following: the end of a hostname (or a complete hostname), for example ".amazonaws.com" or "xy12345.snowflakecomputing.com"; or an IP address, for example "192.196.1.15". If more than one value is specified, values should be separated by commas.

By default, the driver's built-in logger exposes logrus's FieldLogger and defaults to the INFO level. Users can use SetLogger in driver.go to set a customized logger for the gosnowflake package. To enable debug logging for the driver, users can call SetLogLevel("debug") on the SFLogger interface, as shown in the demo code at cmd/logger.go. To redirect the logs, use the SFLogger.SetOutput method. If you want to define S3 client logging, override the S3LoggingMode variable using configuration: https://pkg.go.dev/github.com/aws/aws-sdk-go-v2/aws#ClientLogMode

A custom query tag can be set in the context. Each query run with this context will include the custom query tag as metadata that will appear in the Query Tag column in the Query History log. A specific query request ID can be set in the context and will be passed through in place of the default randomized request ID. If you need the query ID for your query, you have to use a raw connection: the sketch below shows the approach for queries; execs work analogously through driver.ExecerContext. The result of your query can later be retrieved by setting the query ID in the WithFetchResultByID context.

From 0.5.0, signal handling responsibility has moved to the applications. If you want to cancel a query/command by Ctrl+C, add an os.Interrupt trap in the context used by methods that take a context parameter (e.g. QueryContext, ExecContext). See cmd/selectmany.go for the full example.
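A sketch of setting a query tag and retrieving the query ID through a raw connection. The tag value and SQL are illustrative; the SnowflakeRows assertion reflects the driver's exported interface as I understand it:

```go
import (
	"context"
	"database/sql"
	"database/sql/driver"
	"fmt"

	sf "github.com/snowflakedb/gosnowflake"
)

func queryWithTagAndID(db *sql.DB) error {
	// The tag shows up in the Query Tag column of the Query History log.
	ctx := sf.WithQueryTag(context.Background(), "nightly-report")

	conn, err := db.Conn(ctx)
	if err != nil {
		return err
	}
	defer conn.Close()

	// The query ID is only reachable through the raw driver connection.
	return conn.Raw(func(driverConn any) error {
		rows, err := driverConn.(driver.QueryerContext).QueryContext(ctx, "SELECT 1", nil)
		if err != nil {
			return err
		}
		defer rows.Close()
		fmt.Println("query ID:", rows.(sf.SnowflakeRows).GetQueryID())
		return nil
	})
}
```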
The Go Snowflake Driver supports the Arrow data format for data transfers between Snowflake and the Golang client. The Arrow data format avoids extra conversions between binary and textual representations of the data, and can improve performance and reduce memory consumption in clients. Snowflake continues to support the JSON data format. The data format is controlled by the session-level parameter GO_QUERY_RESULT_FORMAT; to use the JSON format, execute "ALTER SESSION SET GO_QUERY_RESULT_FORMAT = 'JSON'". The valid values for the parameter are ARROW and JSON. If the user attempts to set the parameter to an invalid value, an error is returned. The parameter name and the parameter value are case-insensitive. This parameter can be set only at the session level.

Usage notes: The Arrow data format reduces rounding errors in floating point numbers. You might see slightly different values for floating point numbers when using Arrow format than when using JSON format. In order to take advantage of the increased precision, you must pass in the context.Context object provided by the WithHigherPrecision function when querying. Traditionally, the rows.Scan() method returned a string when a variable of type interface{} was passed in. Turning on the flag ENABLE_HIGHER_PRECISION via WithHigherPrecision will return the natural, expected data type as well.

For some numeric data types, the driver can retrieve larger values when using the Arrow format than when using the JSON format. For example, using Arrow format allows the full range of SQL NUMERIC(38,0) values to be retrieved, while using JSON format allows only values in the range supported by the Golang int64 data type. Users should ensure that Golang variables are declared using the appropriate data type for the full range of values contained in the column.

When using the Arrow format, the driver supports more Golang data types and more ways to convert SQL values to those Golang data types. The table below lists the supported Snowflake SQL data types and the corresponding Golang data types. The columns are:

1. The SQL data type.
2. The default Golang data type that is returned when you use snowflakeRows.Scan() to read data from Arrow data format via an interface{}.
3. The possible Golang data types that can be returned when you use snowflakeRows.Scan() to read data from Arrow data format directly.
4. The default Golang data type that is returned when you use snowflakeRows.Scan() to read data from JSON data format via an interface{}. (All returned values are strings.)
5. The standard Golang data type that is returned when you use snowflakeRows.Scan() to read data from JSON data format directly.

Go Data Types for Scan()

	SQL Data Type          | ARROW: default / supported Go        | JSON: default for      | JSON: supported Go
	                       | data types for Scan()                | Scan() interface{}     | data types for Scan()
	-----------------------+--------------------------------------+------------------------+-----------------------
	BOOLEAN                | bool                                 | string                 | bool
	VARCHAR                | string                               | string                 |
	DOUBLE                 | float32, float64 [1][2]              | string                 | float32, float64
	INTEGER that fits      | int, int8, int16, int32,             | string                 | int, int8, int16,
	in int64               | int64 [1][2]                         |                        | int32, int64
	INTEGER that doesn't   | int, int8, int16, int32, int64,      | string                 | error
	fit in int64           | *big.Int [1][2][3][4]                |                        |
	NUMBER(P, S)           | float32, float64,                    | string                 | float32, float64
	where S > 0            | *big.Float [1][2][3][5]              |                        |
	DATE                   | time.Time                            | string                 | time.Time
	TIME                   | time.Time                            | string                 | time.Time
	TIMESTAMP_LTZ          | time.Time                            | string                 | time.Time
	TIMESTAMP_NTZ          | time.Time                            | string                 | time.Time
	TIMESTAMP_TZ           | time.Time                            | string                 | time.Time
	BINARY                 | []byte                               | string                 | []byte
	ARRAY [6]              | string / array                       | string / array         |
	OBJECT [6]             | string / struct                      | string / struct        |
	VARIANT                | string                               | string                 |
	MAP                    | map                                  | map                    |

[1] Converting from a higher precision data type to a lower precision data type via the snowflakeRows.Scan() method can lose low bits (lose precision), lose high bits (completely change the value), or result in error.

[2] Attempting to convert from a higher precision data type to a lower precision data type via interface{} causes an error.

[3] Higher precision data types like *big.Int and *big.Float can be accessed by querying with a context returned by WithHigherPrecision().

[4] You cannot directly Scan() into the alternative data types via snowflakeRows.Scan(), but you can convert to those data types by using the .Int64()/.String()/.Uint64() methods. For an example, see below.

[5] You cannot directly Scan() into the alternative data types via snowflakeRows.Scan(), but you can convert to those data types by using the .Float32()/.String()/.Float64() methods. For an example, see below.

[6] Arrays and objects can be either semistructured or structured; see the section below for more info.

Note: SQL NULL values are converted to Golang nil values, and vice-versa.

Snowflake supports two flavours of "structured data": semistructured and structured. Semistructured types are variants, objects and arrays without a schema (for example, a table with VARIANT, OBJECT and ARRAY columns). When data is fetched, it is represented as strings and the client is responsible for its interpretation. The data does not have any corresponding schema, so values in the table may vary. Semistructured variants, objects and arrays are always represented as strings for scanning. When inserting, a marker indicating the correct type must be used.

Structured types differ from semistructured types by having a specific schema (for example, a column typed OBJECT(s VARCHAR, i INTEGER)); in all rows of the table, values must conform to this schema. To retrieve structured objects, follow these steps:

1. Create a struct implementing the sql.Scanner interface (see the sketch below).
2. Use the WithStructuredTypesEnabled context while querying data.
3. Use it in a regular Scan() call.

The automatic scan goes through all fields in a struct and reads the corresponding object fields. Struct fields have to be public. Embedded structs have to be pointers. The matching name is built from the struct field name with the first letter lowercased. Additionally, an `sf` tag can be added: the first value is always the name of the field in the SQL object, and an `ignore` parameter can additionally be passed to omit the field. See StructuredObject for all available operations, including null support, embedding nested structs, etc.
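A sketch of the Scanner approach from the steps above, following the pattern in the driver's documentation; the table and field names (s, i) are illustrative:

```go
// simpleObject mirrors an SQL object with fields s (varchar) and i (number).
type simpleObject struct {
	s string
	i int32
}

// Scan implements sql.Scanner; gosnowflake passes a StructuredObject.
func (so *simpleObject) Scan(val any) error {
	st := val.(sf.StructuredObject)
	var err error
	if so.s, err = st.GetString("s"); err != nil {
		return err
	}
	if so.i, err = st.GetInt32("i"); err != nil {
		return err
	}
	return nil
}

func readObjects(db *sql.DB) error {
	// Structured types must be enabled on the query context.
	ctx := sf.WithStructuredTypesEnabled(context.Background())
	rows, err := db.QueryContext(ctx, "SELECT obj_col FROM structured_table")
	if err != nil {
		return err
	}
	defer rows.Close()
	for rows.Next() {
		var so simpleObject
		if err := rows.Scan(&so); err != nil {
			return err
		}
	}
	return rows.Err()
}
```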
Retrieving an array of simple types works exactly the same as for normal values: use the Scan function. You can use the WithMapValuesNullable and WithArrayValuesNullable contexts to handle null values in, respectively, maps and arrays of simple types in the database; in that case, sql null types will be used. If you want to scan an array of structs, you have to use the helper function ScanArrayOfScanners. Retrieving structured maps is very similar to retrieving arrays.

To bind structured objects: 1. Create a type which implements the StructuredObjectWriter interface. 2. Use an instance as a regular bind. 3. If you need to bind a nil value, use the special syntax provided for it. Binding structured arrays works like any other parameter; the only difference is that if you want to insert an empty array (not nil but empty), a dedicated marker must be used.

Very large values can be retrieved using the math/big package: retrieve a large INTEGER value to an interface and then extract a big.Int value from that interface; if the value fits into an int64, the code can also copy the value to a variable of type int64. Note that a context that enables higher precision must be passed in with the query. If the variable named "rows" is known to contain a big.Int, you can scan directly into a *big.Int instead of scanning into an interface and then converting; however, you cannot Scan() a big.Int into the alternative numeric types directly. Similar code and rules also apply to big.Float values. If you are not sure what data type will be returned, you can check the data type of the returned value before converting.

You can retrieve data in a columnar format similar to the format a server returns, without transposing them to rows. When working with the Arrow columnar format in the Go driver, ArrowBatch structs are used. These structs mostly correspond to data chunks received from the backend, and they allow access to specific arrow.Record structs. An ArrowBatch can exist in a state where the underlying data has not yet been loaded: the data is downloaded and translated only on demand. Translation options are retrieved from a context.Context interface, which is either passed from the query context or set by the user using the WithContext(ctx) method. In order to access the batches you must use the `WithArrowBatches` context; this returns []*ArrowBatch.

ArrowBatch functions:

GetRowCount(): Returns the number of rows in the ArrowBatch. Note that this returns 0 if the data has not yet been loaded, irrespective of its actual size.

WithContext(ctx context.Context): Sets the context of the ArrowBatch to the one provided. Note that the context will not retroactively apply to data that has already been downloaded: fetching records, then calling WithContext(ctx) and fetching again, produces the same result both times, irrespective of the newly provided ctx. Contexts worth noting are WithArrowBatchesTimestampOption, WithHigherPrecision and WithArrowBatchesUtf8Validation, described in more detail later.

Fetch(): Returns the underlying records as *[]arrow.Record. When this function is called, the ArrowBatch checks whether the underlying data has already been loaded, and downloads it if not.

Limitations:

How to handle timestamps in Arrow batches: Snowflake returns timestamps natively (from backend to driver) in multiple formats. The Arrow timestamp is an 8-byte data type, which is insufficient to handle the larger date and time ranges used by Snowflake.
Also, Snowflake supports 0-9 digits of fractional-second precision, while Arrow supports only 3 (millisecond), 6 (microsecond), and 9 (nanosecond) digits. Consequently, Snowflake uses a custom timestamp format in Arrow, which differs by timestamp type and precision. If you want to use timestamps in Arrow batches, you have two options; the choice is controlled via the WithArrowBatchesTimestampOption context mentioned above.

How to handle invalid UTF-8 characters in Arrow batches: Snowflake previously allowed users to upload data with invalid UTF-8 characters. Consequently, Arrow records containing string columns in Snowflake could include these invalid UTF-8 characters. However, according to the Arrow specifications (https://arrow.apache.org/docs/cpp/api/datatype.html and https://github.com/apache/arrow/blob/a03d957b5b8d0425f9d5b6c98b6ee1efa56a1248/go/arrow/datatype.go#L73-L74), Arrow string columns should only contain UTF-8 characters. To address this issue and prevent potential downstream disruptions, the context WithArrowBatchesUtf8Validation is introduced. When enabled, this feature iterates through all values in string columns, identifying and replacing any invalid characters with `�`. This ensures that Arrow records conform to the UTF-8 standards, preventing validation failures in downstream services like the Rust Arrow library that impose strict validation checks.

How to handle higher precision in Arrow batches: To preserve BigDecimal values within Arrow batches, use WithHigherPrecision. This offers two main benefits: it helps avoid precision loss and defers the conversion to upstream services. Alternatively, without this setting, all non-zero scale numbers will be converted to float64, potentially resulting in loss of precision. Zero-scale numbers (DECIMAL256, DECIMAL128) will be converted to int64, which could lead to overflow. When using NUMBERs with non-zero scale, the value is returned as an integer type and the scale is provided in the record metadata. For example, the value 123.45 coming from NUMBER(9, 4) is represented as 1234500 with a scale of 4; it is the client's responsibility to interpret it correctly. Also, see the limitations section above.

Binding allows a SQL statement to use a value that is stored in a Golang variable. Without binding, a SQL statement specifies values by specifying literals inside the statement; for example, a statement might use the literal value 42 directly, as in "UPDATE t1 SET c1 = 42". With binding, you can execute a SQL statement that uses a value from a variable: the "?" inside the VALUES clause specifies that the SQL statement uses the value from a variable. Binding data that involves time zones can require special handling; for details, see the section titled "Timestamps with Time Zones".

Version 1.6.23 (and later) of the driver takes advantage of sql.Null types, which enables the proper handling of null parameters inside function calls. Timestamp nullability had to be achieved by wrapping the sql.NullTime type, as Snowflake provides several date and time types which are all mapped to the single Go time.Time type.

Version 1.3.9 (and later) of the Go Snowflake Driver supports the ability to bind an array variable to a parameter in a SQL INSERT statement. You can use this technique to insert multiple rows in a single batch. As an example, the sketch below inserts rows into a table that contains integer, float, boolean, and string columns, binding arrays to the parameters in the INSERT statement.
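A sketch of the array binding just described; the table and column names are illustrative, and sf aliases the gosnowflake package:

```go
func insertBatch(db *sql.DB) error {
	intArr := []int{1, 2, 3}
	fltArr := []float64{0.1, 2.34, 5.678}
	boolArr := []bool{true, false, true}
	strArr := []string{"test1", "test2", "test3"}

	// sf.Array marks each slice as an array bind; one INSERT ingests all rows.
	_, err := db.Exec(
		"INSERT INTO t (c1, c2, c3, c4) VALUES (?, ?, ?, ?)",
		sf.Array(&intArr), sf.Array(&fltArr), sf.Array(&boolArr), sf.Array(&strArr),
	)
	return err
}
```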
If the array contains SQL NULL values, use the slice type []interface{}, which allows Golang nil values. This feature is available in version 1.6.12 (and later) of the driver. For slices []interface{} containing time.Time values, a binding parameter flag is required before the array variable in the Array() function. This feature is available in version 1.6.13 (and later) of the driver.

Note: For alternative ways to load data into the Snowflake database (including bulk loading using the COPY command), see Loading Data into Snowflake (https://docs.snowflake.com/en/user-guide-data-load.html).

When you use array binding to insert a large number of values, the driver can improve performance by streaming the data (without creating files on the local machine) to a temporary stage for ingestion. The driver automatically does this when the number of values exceeds a threshold (no changes are needed to user code). In order for the driver to send the data to a temporary stage, the user must have the CREATE STAGE privilege on the schema. If the user does not have this privilege, the driver falls back to sending the data with the query to the Snowflake database. In addition, the current database and schema for the session must be set; if these are not set, the CREATE TEMPORARY STAGE command executed by the driver can fail.

Go's database/sql package supports the ability to bind a parameter in a SQL statement to a time.Time variable. However, when the client binds data to send to the server, the driver cannot determine the correct Snowflake date/timestamp data type to associate with the binding parameter. To resolve this issue, a binding parameter flag is introduced that associates any subsequent time.Time type to the DATE, TIME, TIMESTAMP_LTZ, TIMESTAMP_NTZ or BINARY data type (see the sketch below).

The driver fetches TIMESTAMP_TZ (timestamp with time zone) data using the offset-based Location types, which represent a collection of time offsets in use in a geographical area, such as CET (Central European Time) or UTC (Coordinated Universal Time). The offset-based Location data is generated and cached when a Go Snowflake Driver application starts, and if the given offset is not in the cache, it is generated dynamically. Currently, Snowflake does not support the name-based Location types (e.g. "America/Los_Angeles"). For more information about Location types, see the Go documentation for https://golang.org/pkg/time/#Location.

Internally, this feature leverages the []byte data type. As a result, BINARY data cannot be bound without the binding parameter flag; in the sketch below, sf is an alias for the gosnowflake package.

The driver directly downloads a result set from cloud storage if its size is large; this shifts workloads from the Snowflake database to the clients for scale. The download takes place asynchronously via a goroutine named "Chunk Downloader", so that the driver can fetch the next result set while the application consumes the current one. The application may change the number of result set chunk downloaders if required. Note that this does not help reduce the memory footprint by itself; consider the custom JSON decoder described next.
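Returning to the binding parameter flag: a sketch of binding time.Time and []byte values with the driver's exported DataType* flags (table and column names are illustrative):

```go
func insertTimestampAndBinary(db *sql.DB) error {
	// The flag before the bind tells the driver which Snowflake type the
	// following time.Time value should map to (here TIMESTAMP_NTZ).
	_, err := db.Exec(
		"INSERT INTO t (ntz) VALUES (?)",
		sf.DataTypeTimestampNtz, time.Now(),
	)
	if err != nil {
		return err
	}

	// BINARY data cannot be bound without the flag:
	_, err = db.Exec(
		"INSERT INTO t (b) VALUES (?)",
		sf.DataTypeBinary, []byte{0x01, 0x02, 0x03},
	)
	return err
}
```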
Custom JSON Decoder for Parsing Result Set (Experimental): The application may have the driver use a custom JSON decoder that incrementally parses the result set. This option will reduce the memory footprint to half or even a quarter, but it can significantly degrade performance depending on the environment. The test cases running on a Travis Ubuntu box show five times less memory footprint while being four times slower. Be cautious when using this option.

The Go Snowflake Driver supports JWT (JSON Web Token) authentication. To enable this feature, construct the DSN with the fields "authenticator=SNOWFLAKE_JWT&privateKey=<your_private_key>", or use a Config structure with the JWT authenticator specified. The <your_private_key> should be a base64 URL encoded PKCS8 RSA private key string; one way to encode a byte slice to base64 URL format is through the base64.URLEncoding.EncodeToString() function. On the server side, you can alter the public key with the SQL command "ALTER USER <user> SET RSA_PUBLIC_KEY='<your_public_key>'". The <your_public_key> should be a base64 Standard encoded PKI public key string; one way to encode a byte slice to base64 Standard format is through the base64.StdEncoding.EncodeToString() function. To generate a valid key pair, you can use openssl in the shell. Note: As of February 2020, Golang's official library does not support passcode-encrypted PKCS8 private keys. For security purposes, Snowflake highly recommends that you store the passcode-encrypted private key on disk and decrypt the key in your application using a library you trust. JWT tokens are recreated on each retry and are valid (`exp` claim) for `jwtTimeout` seconds. Each retry timeout is configured by `jwtClientTimeout`. Retries are limited by the total time of `loginTimeout`.

The driver allows authentication via an external browser. When a connection is created, the driver opens a browser window and asks the user to sign in. To enable this feature, construct the DSN with the field "authenticator=EXTERNALBROWSER" or use a Config structure with the corresponding Authenticator specified. External browser authentication implements a timeout mechanism, which prevents the driver from hanging indefinitely when the browser window is closed or not responding. The timeout defaults to 120s and can be changed through the DSN field "externalBrowserTimeout=240" (time in seconds) or by setting ExternalBrowserTimeout in a Config structure. This feature is available in version 1.3.8 or later of the driver.

By default, Snowflake returns an error for queries issued with multiple statements. This restriction helps protect against SQL injection attacks (https://en.wikipedia.org/wiki/SQL_injection). The multi-statement feature allows users to skip this restriction and execute multiple SQL statements through a single Golang function call. However, this opens up the possibility of SQL injection, so it should be used carefully. The risk can be reduced by specifying the exact number of statements to be executed, which makes it more difficult to inject a statement by appending it. More details are below.

The Go Snowflake Driver provides two functions that can execute multiple SQL statements in a single call: db.QueryContext() and db.ExecContext(). To compose a multi-statement query, simply create a string that contains all the queries, separated by semicolons, in the order in which the statements should be executed. To protect against SQL injection attacks while using the multi-statement feature, pass a Context that specifies the number of statements in the string, as in the sketch below.
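A sketch of declaring the statement count with WithMultiStatement; the SQL is illustrative:

```go
// Declare exactly two statements; a mismatched count is rejected, which
// blunts injection-by-appending.
mctx, err := sf.WithMultiStatement(context.Background(), 2)
if err != nil {
	log.Fatal(err)
}
rows, err := db.QueryContext(mctx, "SELECT 1; SELECT 2;")
if err != nil {
	log.Fatal(err)
}
defer rows.Close()
// process the first result set, then call rows.NextResultSet() for the second
```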
When multiple queries are executed by a single call to QueryContext(), multiple result sets are returned. After you process the first result set, get the next result set (for the next SQL statement) by calling NextResultSet().

The function db.ExecContext() returns a single result, which is the sum of the number of rows changed by each individual statement. For example, if your multi-statement query executed two UPDATE statements, each of which updated 10 rows, then the result returned would be 20. Individual row counts for individual statements are not available. Note: Because a multi-statement ExecContext() returns a single value, you cannot detect offsetting errors. For example, suppose you expected the return value to be 20 because you expected each UPDATE statement to update 10 rows. If one UPDATE statement updated 15 rows and the other updated only 5 rows, the total would still be 20; you would see no indication that the UPDATE statements had not functioned as expected.

The ExecContext() function does not return an error if passed a query (e.g. a SELECT statement). However, it still returns only a single value, not a result set, so using it to execute queries (or a mix of queries and non-query statements) is impractical. The QueryContext() function does not return an error if passed non-query statements (e.g. DML). The function returns a result set for each statement, whether or not the statement is a query. For each non-query statement, the result set contains a single row with a single column; the value is the number of rows changed by the statement. If you want to execute a mix of query and non-query statements (e.g. a mix of SELECT and DML statements) in a multi-statement query, use QueryContext(). You can retrieve the result sets for the queries, and you can retrieve or ignore the row counts for the non-query statements.

Note: PUT statements are not supported for multi-statement queries.

If a SQL statement passed to ExecContext() or QueryContext() fails to compile or execute, that statement is aborted, and subsequent statements are not executed. Any statements prior to the aborted statement are unaffected. For example, if three statements are run as one multi-statement query and the query fails on the third statement, an error is returned; if you then query the contents of the table named "test", the values 1 and 2 inserted by the first two statements would be present. When using the QueryContext() and ExecContext() functions, Go code can check for errors in the usual way. Preparing statements and using bind variables are also not supported for multi-statement queries.

The Go Snowflake Driver supports asynchronous execution of SQL statements. Asynchronous execution allows you to start executing a statement and then retrieve the result later without being blocked while waiting. While waiting for the result of a SQL statement, you can perform other tasks, including executing other SQL statements. Most of the steps to execute an asynchronous query are the same as the steps to execute a synchronous query. However, there is an additional step: you must call the WithAsyncMode() function to update your Context object to specify that asynchronous mode is enabled. In the code below, the call to WithAsyncMode() is specific to asynchronous mode.
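A sketch of an asynchronous query; the SQL and table name are illustrative:

```go
// WithAsyncMode is the only async-specific call; everything after it is
// ordinary database/sql code.
ctx := sf.WithAsyncMode(context.Background())
rows, err := db.QueryContext(ctx, "SELECT COUNT(*) FROM big_table")
if err != nil {
	log.Fatal(err)
}
defer rows.Close()

// ... do other work here while the query runs in the background ...

var n int64
for rows.Next() { // Next() blocks until the result set is ready
	if err := rows.Scan(&n); err != nil {
		log.Fatal(err)
	}
}
```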
The rest of the code is compatible with both asynchronous mode and synchronous mode. The function db.QueryContext() returns an object of type snowflakeRows regardless of whether the query is synchronous or asynchronous. However, the call to the Next() function of snowflakeRows is always synchronous (i.e. blocking): if the query has not yet completed and the snowflakeRows object (named "rows" in the sketch above) has not been filled in yet, then rows.Next() waits until the result set has been filled in. More generally, calls to any Golang SQL API function implemented in snowflakeRows or snowflakeResult are blocking calls, and wait if results are not yet available. (Examples of other synchronous calls include: snowflakeRows.Err(), snowflakeRows.Columns(), snowflakeRows.columnTypes(), snowflakeRows.Scan(), and snowflakeResult.RowsAffected().)

If the code executes only one query and no other activity, there is no significant difference in behavior between asynchronous and synchronous mode. The differences become significant if, for example, you want to perform some other activity after the query starts and before it completes. Small SELECT statements do not retrieve enough data to require asynchronous handling; however, the technique works for larger data sets, and for situations where the programmer might want to do other work after starting the queries and before retrieving the results. For a more elaborate example, please see cmd/async/async.go.

The Go Snowflake Driver supports the PUT and GET commands. The PUT command copies a file from a local computer (the computer where the Golang client is running) to a stage on the cloud platform. The GET command copies data files from a stage on the cloud platform to a local computer. See the Snowflake documentation for information on the syntax and supported parameters.

Using PUT: you can run a PUT command by passing a string such as "PUT file://<local_file> @<stage_name>" to the db.Query() function. "<local_file>" should include the file path as well as the name. Snowflake recommends using an absolute path rather than a relative path. Different client platforms (e.g. Linux, Windows) have different path name conventions; ensure that you specify path names appropriately. This is particularly important on Windows, which uses the backslash character as both an escape character and as a separator in path names. To send information from a stream (rather than a file), pass the stream on the context. (On Windows, a ReplaceAll() call may be needed to handle backslashes in the path to the file.) Note: PUT statements are not supported for multi-statement queries.

Using GET: you can run a GET command by passing a string such as "GET @<stage_name> file://<local_file>" to the db.Query() function. "<local_file>" should include the file path as well as the name, and Snowflake recommends using an absolute path rather than a relative path. A file can also be downloaded into an in-memory stream rather than a file. Note: GET statements are not supported for multi-statement queries.

Specifying a temporary directory for encryption and compression: putting and getting requires compression and/or encryption, which is done in the OS temporary directory. If you cannot use the default temporary directory for your OS, or you want to specify a different one, you can use the "tmpDirPath" DSN parameter. Remember to encode slashes, as in the sketch below.
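A sketch of a DSN carrying the tmpDirPath parameter; the account/user values reuse the earlier examples, and /other/tmp is an illustrative path:

```go
// Slashes inside the DSN value must be URL-encoded: /other/tmp -> %2Fother%2Ftmp
dsn := "jsmith:mypassword@my_organization-my_account/mydb/testschema?tmpDirPath=%2Fother%2Ftmp"
db, err := sql.Open("snowflake", dsn)
```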
Using custom configuration for PUT/GET: if you want to override some of the default configuration options, you can use the `WithFileTransferOptions` context. There are multiple config parameters, including ones controlling progress bars and compression; a sketch follows.
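A sketch of passing transfer options on the context; the RaisePutGetError field is an assumption about the options struct, and the file path and stage name are illustrative:

```go
opts := sf.SnowflakeFileTransferOptions{
	RaisePutGetError: true, // surface PUT/GET failures as errors (assumed field)
}
ctx := sf.WithFileTransferOptions(context.Background(), &opts)
_, err = db.ExecContext(ctx, "PUT file:///tmp/data.csv @my_stage")
```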
Package controllerruntime aliases common functions and types to improve discoverability and reduce the number of imports for simple Controllers. The example below creates a simple application Controller that is configured for ReplicaSets and Pods. * Create a new application for ReplicaSets that manages Pods owned by the ReplicaSet and calls into ReplicaSetReconciler. * Start the application. TODO(pwittrock): Update this example when we have better dependency injection support
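A sketch of that ReplicaSet/Pod controller under the assumptions above; the Reconcile body is left as a stub:

```go
package main

import (
	"context"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	ctrl "sigs.k8s.io/controller-runtime"
	"sigs.k8s.io/controller-runtime/pkg/client"
)

// ReplicaSetReconciler reconciles ReplicaSets and the Pods they own.
type ReplicaSetReconciler struct {
	client.Client
}

func (r *ReplicaSetReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
	// Read the ReplicaSet, list the Pods it owns, update status, etc.
	return ctrl.Result{}, nil
}

func main() {
	mgr, err := ctrl.NewManager(ctrl.GetConfigOrDie(), ctrl.Options{})
	if err != nil {
		panic(err)
	}

	err = ctrl.NewControllerManagedBy(mgr).
		For(&appsv1.ReplicaSet{}). // watch ReplicaSets
		Owns(&corev1.Pod{}).       // and Pods owned by them
		Complete(&ReplicaSetReconciler{Client: mgr.GetClient()})
	if err != nil {
		panic(err)
	}

	// Start the application; blocks until a termination signal arrives.
	if err := mgr.Start(ctrl.SetupSignalHandler()); err != nil {
		panic(err)
	}
}
```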
Package bus is a minimalist event/message bus implementation for internal communication. The package requires a unique ID generator to assign IDs to events. You can write your own function to generate unique IDs, or use a package that provides unique ID generation functionality.

The `bus` package respects the software design choices of the packages/projects using it: it supports both singleton and dependency injection styles to initialize a `bus` instance. A sample initialization using the `monoton` ID generator is sketched below.

To emit events to topics, topic names should be registered first. To receive topic events you need to register handlers; a handler basically requires two values: a `Handle` function and a topic `Matcher` regex pattern.

When an event is emitted, the topic handlers receive the event synchronously. It is highly recommended to process events asynchronously; the package leaves the decision of which concurrency abstractions to use to the packages/projects, depending on their use-cases. Each handler receives the same event as a reference to the `bus.Event` struct.
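A sketch of the initialization, registration, and emit flow, based on the package README conventions; the module versions, topic name, and handler key are assumptions and exact signatures may differ across major versions:

```go
package main

import (
	"context"

	"github.com/mustafaturan/bus/v3"
	"github.com/mustafaturan/monoton/v3"
	"github.com/mustafaturan/monoton/v3/sequencer"
)

func newBus() (*bus.Bus, error) {
	// monoton provides the unique ID generator the bus requires.
	node := uint64(1)
	initialTime := uint64(1577865600000)
	m, err := monoton.New(sequencer.NewMillisecond(), node, initialTime)
	if err != nil {
		return nil, err
	}
	var idGenerator bus.Next = m.Next
	b, err := bus.NewBus(idGenerator)
	if err != nil {
		return nil, err
	}

	// Topics must be registered before events are emitted to them.
	b.RegisterTopics("order.created")

	// A handler: a Handle function plus a topic Matcher regex.
	b.RegisterHandler("audit", bus.Handler{
		Handle:  func(ctx context.Context, e bus.Event) { /* hand off to a worker */ },
		Matcher: "^order.created$",
	})
	return b, nil
}

// later: err := b.Emit(ctx, "order.created", orderPayload)
```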
Package container provides an IoC container for Go projects. It provides a simple, fluent and easy-to-use interface to make dependency injection in Go easier.
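A sketch of the bind-and-resolve flow; the Mailer types are illustrative, and the v3 module path is an assumption:

```go
package main

import "github.com/golobby/container/v3"

type Mailer interface{ Send(to, msg string) error }

type smtpMailer struct{}

func (s *smtpMailer) Send(to, msg string) error { return nil }

func main() {
	// Bind the abstraction to a lazily-constructed singleton.
	if err := container.Singleton(func() Mailer { return &smtpMailer{} }); err != nil {
		panic(err)
	}

	// Resolve it wherever it's needed.
	var m Mailer
	if err := container.Resolve(&m); err != nil {
		panic(err)
	}
	_ = m.Send("user@example.com", "hi")
}
```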
Package hiboot is a web/cli application framework. Hiboot is a cloud-native web and cli application framework written in Go. Hiboot integrates popular libraries but makes them simpler and easier to use. It borrows some Spring features such as dependency injection, aspect-oriented programming, and auto configuration. You can integrate any other libraries easily by auto configuration with dependency injection support. hiboot-data is a typical project that implements customized hiboot starters; see https://godoc.org/hidevops.io/hiboot-data Overview One of the most significant features of Hiboot is Dependency Injection. Hiboot implements the JSR-330 standard. Let's say that we have two implementations of AuthenticationService; the following explains how Hiboot works. In Hiboot, injection into fields is triggered by the `inject:""` struct tag. When the inject tag is present on a field, Hiboot tries to resolve the object to inject by the type of the field. If several implementations of the same service interface are available, you have to disambiguate which implementation you want injected. This can be done by naming the field after the specific implementation. Although field injection is pretty convenient, constructor injection is the first-class citizen, and we usually advise people to use constructor injection because it has the following advantages: it is testable and easy to unit test; it benefits from syntax validation on most IDEs, avoiding typos; and it needs no dedicated mechanism to ensure required properties are set.

    type userController struct {
        at.RestController

        basicAuthenticationService AuthenticationService
    }

    // Hiboot will inject the implementation of AuthenticationService
    func newUserController(basicAuthenticationService AuthenticationService) *userController {
        return &userController{
            basicAuthenticationService: basicAuthenticationService,
        }
    }

    func init() {
        app.Register(newUserController)
    }

Features This section will show you how to create and run the simplest possible hiboot application. Let's get started! Get the source code Source Code This is a simple hello world example
Package argmapper is a dependency-injection library for Go. go-argmapper supports named values, typed values, automatically chaining conversion functions to reach desired types, and more. go-argmapper is designed for runtime, reflection-based dependency injection. The primary usage of this library is via the Func struct. See Func for more documentation.
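A hedged sketch of the Func usage described above; the NewFunc/Call/Typed names reflect the go-argmapper API as loosely recalled here and should be treated as illustrative:

    import (
        "fmt"

        "github.com/hashicorp/go-argmapper"
    )

    func demo() error {
        // Wrap a plain function so its arguments can be injected by type.
        f, err := argmapper.NewFunc(func(n int) string {
            return fmt.Sprintf("got %d", n)
        })
        if err != nil {
            return err
        }
        // Typed supplies a value that is matched by its type.
        result := f.Call(argmapper.Typed(42))
        if result.Err() != nil {
            return result.Err()
        }
        fmt.Println(result.Out(0)) // "got 42"
        return nil
    }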
Package xstats is a generic client for service instrumentation. xstats is inspired by Go-kit's metrics (https://github.com/go-kit/kit/tree/master/metrics) package, but it takes a slightly different path. Instead of having to create an instance for each metric, xstats uses a single instance to log every metric you want. This reduces the boilerplate when you have a lot of metrics in your app. It's also easier in terms of dependency injection. Speaking of dependency injection, xstats comes with an xhandler.Handler integration so it can automatically inject the xstats client within the net/context of each request. Each request's xstats instance has its own tag storage; this lets you inject per-request contextual tags to be included with all observations sent within the lifespan of the request. xstats is pluggable and comes with integrations for StatsD and DogStatsD, the Datadog (http://datadoghq.com) augmented version of StatsD with support for tags. More integrations may come later (PRs welcome).
Package di provides an opinionated way to connect your application components. Container allows you to inject dependencies into constructors or structures without having to specify each argument manually.
Package fx is a framework that makes it easy to build applications out of reusable, composable modules. Fx applications use dependency injection to eliminate globals without the tedium of manually wiring together function calls. Unlike other approaches to dependency injection, Fx works with plain Go functions: you don't need to use struct tags or embed special types, so Fx automatically works well with most Go packages. Basic usage is explained in the package-level example below. If you're new to Fx, start there! Advanced features, including named instances, optional parameters, and value groups, are explained under the In and Out types. To test functions that use the Lifecycle type or to write end-to-end tests of your Fx application, use the helper functions and types provided by the go.uber.org/fx/fxtest package.
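A minimal sketch of the basic Provide/Invoke flow with plain Go constructors (the HTTP mux here is just an arbitrary example dependency):

    package main

    import (
        "log"
        "net/http"

        "go.uber.org/fx"
    )

    func NewMux() *http.ServeMux { return http.NewServeMux() }

    func main() {
        fx.New(
            // Constructors are invoked lazily and their results cached.
            fx.Provide(NewMux, log.Default),
            // Invoke forces instantiation and wires the graph together.
            fx.Invoke(func(mux *http.ServeMux, l *log.Logger) {
                l.Println("dependencies wired")
            }),
        ).Run() // Run blocks until the application receives a shutdown signal.
    }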
Package inject provides utilities for mapping and injecting dependencies in various ways.
Package ioc provides an IoC container for Go projects. It provides a simple, fluent, and easy-to-use interface to make dependency injection in GoLang easier.
Package fault provides standard http middleware for fault injection in Go. Use the fault package to inject faults into the http request path of your service. Faults work by modifying and/or delaying your service's http responses. Place the Fault middleware high enough in the chain that it can act quickly, but after any other middlewares that should complete before fault injection (auth, redirects, etc...). The type and severity of injected faults is controlled by options passed to NewFault(Injector, Options). NewFault must be passed an Injector, which is an interface that holds the actual fault injection code in Injector.Handler. The Fault wraps Injector.Handler in another Fault.Handler that applies generic Fault logic (such as what % of requests to run the Injector on) to the Injector. Make sure you use the NewFault() and NewTypeInjector() constructors to create valid Faults and Injectors. There are three main Injectors provided by the fault package: Use fault.RejectInjector to immediately return an empty response. For example, a curl for a rejected response will produce: Use fault.ErrorInjector to immediately return a valid http status code of your choosing along with the standard HTTP response body for that code. For example, you can return a 200, 301, 418, 500, or any other valid status code to test how your clients respond to different statuses. Pass the WithStatusText() option to customize the response text. Use fault.SlowInjector to wait a configured time.Duration before proceeding with the request. For example, you can use the SlowInjector to add a 10ms delay to your requests. Use fault.RandomInjector to randomly choose one of the above faults to inject. Pass a list of Injector to fault.NewRandomInjector, and when RandomInjector is evaluated it will randomly run one of the injectors that you passed. It is easy to combine any of the Injectors into a chained action. There are two ways you might want to combine Injectors. First, you can create separate Faults for each Injector that are sequential but independent of each other. For example, you can chain Faults such that 1% of requests will return a 500 error and another 1% of requests will be rejected. Second, you might want to combine Faults such that 1% of requests will be slowed for 10ms and then rejected. You want these Faults to depend on each other. For this, use the special ChainInjector, which consolidates any number of Injectors into a single Injector that runs each of the provided Injectors sequentially. When you add the ChainInjector to a Fault, the entire chain will always execute together. The NewFault() constructor has WithPathBlocklist() and WithPathAllowlist() options. Any path you include in the PathBlocklist will never have faults run against it. With PathAllowlist, if you provide a non-empty list then faults will not be run against any paths except those specified in the PathAllowlist. The PathBlocklist takes priority over the PathAllowlist; a path in both lists will never have a fault run against it. The paths that you include must exactly match the path in req.URL.Path, including leading and trailing slashes. Similarly, you may also use WithHeaderBlocklist() and WithHeaderAllowlist() to block or allow faults based on a map of header keys to values. These lists behave in the same way as the path allowlists and blocklists, except that they operate on headers. Header equality is determined using http.Header.Get(key), which automatically canonicalizes your keys and does not support multi-value headers.
Keep these limitations in mind when working with header allowlists and blocklists. Specifying very large lists of paths or headers may cause memory or performance issues. If you're running into these problems, you should instead consider using your http router to enable the middleware on only a subset of your routes. The fault package provides an Injector interface, and you can satisfy that interface to provide your own Injector. Use custom injectors to add additional logic to the package-provided injectors or to create your own completely new Injector that can still be managed by a Fault. The package provides a Reporter interface that can be added to Faults and Injectors using the WithReporter option. A Reporter will receive events when the state of the Injector changes. For example, Reporter.Report(InjectorName, StateStarted) is run at the beginning of all Injectors. The Reporter is meant to be provided by the consumer of the package and integrate with services like stats and logging. The default Reporter throws away all events. By default, all randomness is seeded with defaultRandSeed(1), the same default as math/rand. This helps you reproduce any errors you see when running an Injector. If you prefer, you can also customize the seed by passing WithRandSeed() to NewFault and NewRandomInjector. Some Injectors support customizing the functions they use to run their injections. You can take advantage of these options to add your own logic to an existing Injector instead of creating your own. For example, you can modify the SlowInjector function to slow for a random duration instead of a fixed one. Be careful when you use these options that your return values fall within the same range of values expected by the default functions, to avoid panics or other undesirable behavior. Customize the function a Fault uses to determine participation (default: rand.Float32) by passing WithRandFloat32Func() to NewFault(). Customize the function a RandomInjector uses to choose which injector to run (default: rand.Intn) by passing WithRandIntFunc() to NewRandomInjector(). Customize the function a SlowInjector uses to wait (default: time.Sleep) by passing WithSlowFunc() to NewSlowInjector(). Configuration for the fault package is done through options passed to NewFault and NewInjector. Once a Fault is created, its enabled state and participation percentage can be updated with SetEnabled() and SetParticipation(). There is no other way to manage configuration for the package. It is up to the user of the fault package to manage how the options are generated. Common options are feature flags, environment variables, or code changes in deploys.
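A hedged sketch of enabling a 1% ErrorInjector in a middleware chain, using the constructors and options named above; the addresses and percentages are illustrative:

    package main

    import (
        "log"
        "net/http"

        "github.com/github/go-fault"
    )

    func main() {
        mux := http.NewServeMux()

        ei, err := fault.NewErrorInjector(http.StatusInternalServerError)
        if err != nil {
            log.Fatal(err)
        }
        f, err := fault.NewFault(ei,
            fault.WithEnabled(true),
            fault.WithParticipation(0.01), // run the Injector on 1% of requests
        )
        if err != nil {
            log.Fatal(err)
        }

        // Place the Fault high in the chain, after auth/redirect middleware.
        log.Fatal(http.ListenAndServe(":8080", f.Handler(mux)))
    }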
Package linker provides Dependency Injection and Inversion of Control functionality. The core component is Injector, which allows registering Components. A Component is an object of any type that requires some initialization, or that can be used for initializing other components. Every component is registered in the Injector by a component name or anonymously (empty name). The same object can be registered under different names. This could be useful if the object implements different interfaces that can be used by different components. The package contains several interfaces: PostConstructor, Initializer and Shutdowner, which can be implemented by components so that the Injector calls them during the different initialization/de-initialization phases. The Init() function of Injector initializes the registered components. The initialization process assumes that components whose type is a 'pointer to struct', or an interface containing a 'pointer to struct', will be initialized. Initialization injects (assigns) values into struct fields using other registered components; the Injector matches them by name or by type. The Injector uses a fail-fast strategy, so any error is treated as a misconfiguration and causes a panic. When all components are initialized, components that implement the PostConstructor interface are notified via a PostConstruct() function call. The order of PostConstruct() calls is not defined. After the construction phase, the Injector builds a dependency graph to detect dependency loops and to establish the component initialization order. If a dependency loop is found, the Injector panics. Components that implement the Initializer interface are notified in a specific order by an Init(ctx) function call: less dependent components are initialized before the components that depend on them. The Injector is supposed to be called from one goroutine and doesn't support calls from multiple goroutines. The initialization process can take significant time, so a context is provided. If the context is cancelled or closed, this is detected either by an appropriate component or by the Injector, which causes de-initialization of the already initialized components using Shutdown() function calls (if provided) in reverse initialization order, followed by a panic.
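A hedged sketch of registering components and running initialization; the Component struct fields, the `inject` tag syntax, and the New/Register/Init/Shutdown names are assumptions based on the description above, not verified against the library:

    import (
        "context"

        "github.com/logrange/linker"
    )

    type db struct{ /* ... */ }

    type service struct {
        // Injected by name from the registered components (tag syntax assumed).
        DB *db `inject:"db"`
    }

    func main() {
        inj := linker.New()
        inj.Register(
            linker.Component{Name: "db", Value: &db{}},
            linker.Component{Name: "", Value: &service{}}, // anonymous registration
        )
        // Init injects fields and runs PostConstruct()/Init(ctx) in dependency
        // order; any misconfiguration panics (fail-fast).
        inj.Init(context.Background())
        defer inj.Shutdown()
    }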
Package store defines a simple interface for key-value store dependency injection. Not all implementations are supposed to provide every method. More importantly, the consumers should declare their subset of required methods. The Store interface is not meant to be used directly, but rather to document how the methods should be implemented. Every application will define a specific interface with its required methods only.
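For instance, a consumer might declare only the subset it needs; the method names and signatures here are illustrative, since the package intentionally leaves the exact interface to each application:

    import "context"

    // UserCache declares only the methods this consumer requires,
    // rather than depending on the full Store interface.
    type UserCache interface {
        Get(ctx context.Context, key string) ([]byte, error)
        Set(ctx context.Context, key string, value []byte) error
    }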
Package inject makes your dependency injection easy. Container allows you to inject dependencies into constructors or structures without having to specify each argument manually. First of all, when creating a new container, you need to describe how to create each instance of a dependency. To do this, use the container option inject.Provide(). Now the container knows how to create *pkg.Dependency and *pkg.AnotherDependency. For advanced providing, see the inject.Provide() and inject.ProvideOption documentation. After building a container, it is easy to get any previously provided type. To do this, use the container's Extract() method. The container collects the dependencies of *pkg.AnotherDependency, creates its instance, and places it in a target pointer. For advanced extraction, see the Extract() and inject.ExtractOption documentation.
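A hedged sketch of the Provide/Extract cycle described above; the Router and Server types and constructors are illustrative:

    import "github.com/defval/inject"

    type Router struct{}
    type Server struct{ r *Router }

    func NewRouter() *Router          { return &Router{} }
    func NewServer(r *Router) *Server { return &Server{r: r} }

    func wire() (*Server, error) {
        // Describe how to build each dependency.
        container, err := inject.New(
            inject.Provide(NewRouter),
            inject.Provide(NewServer),
        )
        if err != nil {
            return nil, err
        }
        // Extract resolves the dependency graph into the target pointer.
        var server *Server
        if err := container.Extract(&server); err != nil {
            return nil, err
        }
        return server, nil
    }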
Package nject is a general purpose dependency injection framework. It provides wrapping, pruning, and indirect variable passing. It is type safe and using it requires no type assertions. There are two main injection APIs: Run and Bind. Bind is designed to be used at program initialization and does as much work as possible then, rather than during main execution. The API for nject is a list of providers (injectors) that are run in order. The final function in the list must be called. The other functions are called if their value is consumed by a later function that must be called. Here is a simple example: In this example, context.Background and log.Default are not invoked because their outputs are not used by the final function (http.ListenAndServe). The basic idea of nject is to assemble a Collection of providers and then use that collection to supply inputs for functions that may use some or all of the provided types. One big win from dependency injection with nject is the ability to reshape various different functions into a single signature. For example, having a bunch of functions with different APIs all bound as http.HandlerFunc is easy. Providers produce or consume data. The data is distinguished by its type. If you want three different strings, then define three different types: Then you can have a function that does things with the three types: The above function would be a valid injector or final function in a provider Collection. For example: This creates a sequence and executes it. Run injects a myFirst value and the sequence of providers runs: genSecond() injects a mySecond, and myStringFunc() combines the myFirst and mySecond to create a myThird. Then the function given in run saves that final value. The expected output is Providers are grouped into linear sequences. When building an injection chain, the providers are grouped into several sets: LITERAL, STATIC, RUN. The LITERAL and STATIC sets run once per initialization. The RUN set runs once per invocation. Providers within a set are executed in the order that they were originally specified. Providers whose outputs are not consumed are omitted unless they are marked Required(). Collections are bound with Bind(&invocationFunction, &initializationFunction). The invocationFunction is expected to be used over and over, but the initializationFunction is expected to be used less frequently. The STATIC set is re-invoked each time the initialization function is run. The LITERAL set is just the literal values in the collection. The STATIC set is composed of the cacheable injectors. The RUN set is everything else. All injectors have the following type signature: None of the input or output parameters may be anonymously-typed functions. An anonymously-typed function is a function without a named type. Injectors whose output values are not used by a downstream handler are dropped from the handler chain. They are not invoked. Injectors that have no output values are a special case and they are always retained in the handler chain. An injector that is annotated as Cacheable() may be promoted to the STATIC set. An injector that is annotated as MustCache() must be promoted to the STATIC set: if it cannot be promoted then the collection is deemed invalid. An injector may not be promoted to the STATIC set if it takes as input data that comes from a provider that is not in the STATIC or LITERAL sets.
For example, arguments to the invocation function: if the invoke function takes an int as one of its inputs, then no injector that takes an int as an argument may be promoted to the STATIC set. Injectors in the STATIC set will be run exactly once per set of input values. If the inputs are consistent, then the output will be a singleton. This is true across injection chains. If the following provider is used in multiple chains, as long as the same integer is injected, all chains will share the same pointer. Injectors in the STATIC set are only run for initialization. For some things, like opening a database, that may still be too often. Injectors that are marked Memoized must be promoted to the STATIC set. Memoized injectors are only run once per combination of inputs. Their outputs are remembered. If called enough times with different arguments, memory will be exhausted. Memoized injectors may not have more than 90 inputs. Memoized injectors may not have any inputs that are go maps, slices, or functions. Arrays, structs, and interfaces are okay. This requirement is recursive, so a struct that has a slice in it is not okay. Fallible injectors are special injectors that change the behavior of the injection chain if they return error. Fallible injectors in the RUN set that return error will terminate execution of the injection chain. A non-wrapper function that returns nject.TerminalError is a fallible injector. The TerminalError does not have to be the last return value. The nject package converts TerminalError objects into error objects, so only the fallible injector should use TerminalError. Anything that consumes the TerminalError should do so by consuming error instead. Fallible injectors can be in both the STATIC set and the RUN set. Their behavior is a bit different. If a non-nil value is returned as the TerminalError from a fallible injector in the RUN set, none of the downstream providers will be called. The provider chain returns from that point with the TerminalError as a return value. Since all return values must be consumed by a middleware provider or the bound invoke function, fallible injectors must come downstream from a middleware handler that takes error as a returned value if the invoke function (the function that runs a bound injection chain) does not return error. If a fallible injector returns nil for the TerminalError, the other output values are made available for downstream handlers to consume. The other output values are not considered return values and are not available to be consumed by upstream middleware handlers. The error returned by a fallible injector is not available downstream. If a non-nil value is returned as the TerminalError from a fallible injector in the STATIC set, the rest of the STATIC set will be skipped. If there is an init function and it returns error, then the value returned by the fallible injector will be returned via the init function. Unlike fallible injectors in the RUN set, the error output by a fallible injector in the STATIC set is available downstream (but only in the RUN set -- nothing else in the STATIC set will execute). Some examples: A wrap function interrupts the linear sequence of providers. It may or may not invoke the remainder of the sequence that comes after it. The remainder of the sequence is provided to the wrap function as a function that it may call. The type signature of a wrap function is a function that receives a function as its first parameter.
That function must be of an anonymous type: For example: When this wrapper function runs, it is responsible for invoking the rest of the provider chain. It does this by calling inner(). The parameters to inner are available as inputs to downstream providers. The value(s) returned by inner come from the return values of other wrapper functions and from the return value(s) of the final function. Wrap functions can call inner() zero or more times. The values returned by wrap functions must be consumed by another upstream wrap function or by the init function (if using Bind()). Wrap functions have a small amount of runtime overhead compared to other kinds of functions: one call to reflect.MakeFunc(). Wrap functions serve the same role as middleware, but are usually easier to write. Wrap functions that invoke inner() multiple times in parallel are not well supported at this time, and such invocations must have the wrap function decorated with Parallel(). Final functions are simply the last provider in the chain. They look like regular Go functions. Their input parameters come from other providers. Their return values (if any) must be consumed by an upstream wrapper function or by the init function (if using Bind()). Wrap functions that return error should take error as a returned value so that they do not mask a downstream error. Wrap functions should not return TerminalError because they internally control whether the downstream chain is called. Literal values are values in the provider chain that are not functions. Provider chains can be invalid for many reasons: inputs of a type not provided earlier in the chain; annotations that cannot be honored (eg. MustCache & Memoize); return values that are not consumed; functions that take or return functions with an anonymous type other than wrapper functions; a chain that does not terminate with a function; etc. Bind() and Run() will return error when presented with an invalid provider chain. Bind() and Run() will return error rather than panic. After Bind()ing an init and invoke function, calling them will not panic unless a provider itself panics. A wrapper function can be used to catch panics and turn them into errors. When doing that, it is important to propagate any errors that are coming up the chain. If there is no guaranteed function that will return error, one can be added with Shun(). Bind() uses a complex and somewhat expensive O(n^2) set of rules to evaluate which providers should be included in a chain and which can be dropped. The goal is to keep the ones you want and remove the ones you don't want. Bind() tries to figure this out based on the dependencies and the annotations. MustConsume, not Desired: only include if at least one output is transitively consumed by a Required or Desired chain element and all outputs are consumed by some other provider. Not MustConsume, not Desired: only include if at least one output is transitively consumed by a Required or Desired provider. Not MustConsume, Desired: include if all inputs are available. MustConsume, Desired: only include if all outputs are transitively consumed by a Required or Desired chain element. When there are multiple providers of a type, Bind() tries to get it from the closest provider. Providers that have unmet dependencies will be eliminated from the chain unless they're Required. The remainder of this document consists of suggestions for how to use nject. Contributions to this section would be welcome, as would links to blogs or other discussions of using nject in practice.
The best practice for using nject inside a large project is to have a few common chains that everyone imports. Most of the time, these common chains will be early in the sequence of providers. Customization of the import chains happens in many places. This is true for services, libraries, and tests. For tests, a wrapper that includes the standard chain makes it easier to write tests. See github.com/memsql/ntest for helper functions and more examples. If nject cannot bind or run a chain, it will return error. The returned error is generally very good, but it does not contain the full debugging output. The full debugging output can be obtained with the DetailedError function. If the detailed error shows that nject has a bug, note that part of the debug output includes a regression test that can be turned into an nject issue. Remove the comments to hide the original type names. The Reorder() decorator allows injection chains to be fully or partially reordered. Reorder is currently limited to a single pass and does not know which injectors are ultimately going to be included in the final chain. It is likely that if you mark your entire chain with Reorder, you'll have unexpected results. On the other hand, Reorder provides a safe and easy way to solve some common problems. For example: providing optional options to an injected dependency. Because the default options are marked as Shun, they'll only be included if they have to be included. If a user of thingChain wants to override the options, they simply need to mark their override as Reorder. To make this extra friendly, a helper function to do the override can be provided and used. Recommended best practice is to have injectors shut down the things they themselves start. They should do their own cleanup. Inside tests, an injector can use t.Cleanup() for this. For services, something like t.Cleanup can easily be built: Alternatively, any wrapper function can do its own cleanup in a defer that it defines. Wrapper functions have a small runtime performance penalty, so if you have more than a couple of providers that need cleanup, it makes sense to include something like CleaningService. The normal direction of forced inclusion is that an upstream provider is required because a downstream provider uses a type produced by the upstream provider. There are times when the relationship needs to be reversed. For example, a type gets modified by a downstream injector. The simplest option is to combine the providers into one function. Another possibility is to mark the upstream provider with MustConsume and have it produce a type that is only consumed by the downstream provider. Lastly, the providers can be grouped with Cluster so that they'll be included or excluded as a group. Example shows what gets included and what does not for several injection chains. These examples are meant to show the subtlety of what gets included and why. This example explores injecting a database handle or transaction only when they're used.
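To make the ordering and pruning rules concrete, here is a hedged sketch of Run with a final function, echoing the context.Background/log.Default example mentioned earlier; the import path and server address are assumptions:

    package main

    import (
        "context"
        "log"
        "net/http"

        "github.com/muir/nject"
    )

    func main() {
        err := nject.Run("server",
            context.Background, // dropped: its output is not consumed below
            log.Default,        // dropped: its output is not consumed below
            func() error {
                // The final function in the list is always called.
                return http.ListenAndServe(":8080", nil)
            },
        )
        if err != nil {
            log.Fatal(err)
        }
    }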
Package goldi implements a lazy dependency injection framework for Go. Goldi is MIT-licensed.
Package disgord provides Go bindings for the documented Discord API, and allows for a stateful Client using the Session interface, with the option of a configurable caching system, or bypassing the built-in caching logic altogether. Create a Disgord session to get access to the REST API and socket functionality. In the following example, we listen for new messages and write a "hello" message when our handler function gets fired. Session interface: https://godoc.org/github.com/andersfylling/disgord/#Session Disgord also provides the option to listen for events using a channel. The setup is exactly the same as registering a function. Simply define your channel, add buffering if you need it, and register it as a handler in the `.On` method. Never close a channel without removing the handler from Disgord. You can't directly call Remove; instead you inject a controller to dictate the handler's lifetime. Since you are the owner of the channel, Disgord will never close it for you. Here is what it would look like to use the channel for handling events. Please run this in a goroutine unless you know what you are doing. Disgord handles sharding for you automatically: when starting the bot, when Discord demands that you scale up your shards (during runtime), etc. It also gives you control over the shard setup in case you want to run multiple instances of Disgord (in these cases you must handle scaling yourself, as Disgord cannot). Sharding is done behind the scenes, so you do not need to worry about any settings. Disgord will simply ask Discord for the recommended number of shards for your bot on startup. However, to set a specific number of shards you can use `disgord.ShardConfig` to specify a range of valid shard IDs (starting from 0). starting a bot with exactly 5 shards Running multiple instances each with 1 shard (note each instance must use unique shard ids) Handle scaling options yourself > Note: if you create a CacheConfig you don't have to set every field. > Note: Only LFU is supported. > Note: Lifetime options do not currently work/do anything (yet). A part of Disgord is the control you have; while this can be a good detail for advanced users, we recommend that beginners utilise the default configuration (by simply not editing the configuration). Example of configuring the Cache: If you just want to change a specific field, you can do so. The fields are always default values. > Note: Disabling caching for some types while activating it for others (eg. disabling Channels, but activating guild caching) can cause items extracted from the Cache to not reflect the true Discord state. Example, activated guild but disabled channel caching: the guild is stored in the Cache, but its Channels are discarded. Guild Channels are dismantled from the guild object and otherwise stored in the channel Cache to improve performance and reduce memory use. So when you extract the cached guild object, all of the channels will hold only their channel ID, and nothing more. To keep it safe and reliable, you cannot directly affect the contents of the Cache. Unlike discordgo, where everything is mutable, the caching in Disgord is immutable. This does reduce performance, as a copy must be made (only on new Cache entries), but as a performance freak, I can tell you right now that a simple struct copy is not that expensive. This also means that, as long as Discord sends their events properly, the caching will always reflect the true state of Discord.
If there is a bug in the Cache and you keep getting incorrect data, please file an issue at github.com/andersfylling/disgord so it can quickly be resolved(!) Whenever you call a REST method from the Session interface, the Cache is always checked first. Upon a Cache hit, no REST request is executed and you get the data from the Cache in return. However, if this is problematic for you, or there exists a bug which gives you bad/outdated data, you can bypass it by using Disgord flags. In addition to disgord.IgnoreCache, as shown above, you can pass in other flags such as: disgord.SortByID, disgord.OrderAscending, etc. You can find these flags in the flag.go file. `disgord_diagnosews` will store all the incoming and outgoing JSON data as files in the directory "diagnose-report/packets". The file format is as follows: unix_clientType_direction_shardID_operationCode_sequenceNumber[_eventName].json `json_std` switches out jsoniter with the json package from the standard library. `disgord_removeDiscordMutex` replaces mutexes in Discord structures with an empty mutex; it removes locking behaviour and any mutex code when compiled. `disgord_parallelism` activates built-in locking in Discord structure methods. E.g. Guild.AddChannel(*Channel) does not do locking by default, but if you find yourself using these Discord data structures in a parallel environment, you can activate the internal locking to reduce race conditions. Note that activating `disgord_parallelism` and `disgord_removeDiscordMutex` at the same time will leave you with no locking, as `disgord_removeDiscordMutex` affects the same mutexes. `disgord_legacy` adds wrapper methods with the original Discord naming. E.g. for REST requests you will notice Disgord uses a consistent update/create/get/delete/set naming, while Discord uses edit/update/modify/close/delete/remove/etc. So if you struggle to find a REST method, you can enable this build tag to gain access to the mentioned wrappers. `disgordperf` does some low-level tweaking that can help boost JSON unmarshalling and drops JSON validation of Discord responses/events. Other optimizations might take place as well. `disgord_websocket_gorilla` replaces the nhooyr/websocket dependency with gorilla/websocket for gateway communication. In addition to the typical REST endpoints for deleting data, you can also use Client/Session.DeleteFromDiscord(...) for basic deletions. If you need to delete a specific range of messages, or anything as complex as that, you can't use .DeleteFromDiscord(...). Not every struct has implemented the interface that allows you to call DeleteFromDiscord. Do not fret: if you try to pass a type that doesn't qualify, you get a compile error.
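A hedged sketch of the "reply hello" pattern described at the top of this package's docs; handler registration has moved between Disgord versions, so the Gateway()-based registration here reflects recent versions and should be treated as illustrative:

    package main

    import (
        "context"
        "os"

        "github.com/andersfylling/disgord"
    )

    func main() {
        client := disgord.New(disgord.Config{
            BotToken: os.Getenv("DISCORD_TOKEN"),
        })
        // Connect and block until interrupted.
        defer client.Gateway().StayConnectedUntilInterrupted()

        client.Gateway().MessageCreate(func(s disgord.Session, evt *disgord.MessageCreate) {
            if evt.Message.Author == nil || evt.Message.Author.Bot {
                return // ignore bots, including ourselves
            }
            _, _ = evt.Message.Reply(context.Background(), s, "hello")
        })
    }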
Package Mixer is a small but powerful package for writing modular HTTP handlers in Go. Mixer attempts to solve some of the problems that https://github.com/go-martini/martini tried to solve, but with a smaller, more idiomatic scope. If you liked the idea of dependency injection in Martini, but you think it contained too much magic, then Mixer is a great fit. For a full guide visit http://github.com/codegangsta/mixer
Package gocontainer is a simple dependency injection container. Take the following example: the first file, `main.go`, simply gets the repository from the container and prints it. We use the **MustInvoke** method to present the way we keep type safety. Our database implementation uses an `init()` function to register the db service. Our repository accesses the earlier registered db service and, following the same pattern, uses an `init()` function to register the repository service within the container. You can disable the global container instance by setting gocontainer.GlobalContainer to nil. This package allows you to create many containers.
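A hedged sketch of that init()-registration pattern; the Register/MustInvoke signatures and the service id are assumptions based on the description above, not verified against the library:

    package main

    import "github.com/vardius/gocontainer"

    type UserRepository struct{ /* ... */ }

    func init() {
        // Register the service when the package is initialized.
        gocontainer.Register("repository.user", &UserRepository{})
    }

    func main() {
        // MustInvoke keeps type safety: the callback declares the expected type.
        gocontainer.MustInvoke("repository.user", func(r *UserRepository) {
            // use r
        })
    }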