Package spanner provides a client for reading and writing to Cloud Spanner databases. See the packages under admin for clients that operate on databases and instances.

Note: This package is in beta. Some backwards-incompatible changes may occur.

See https://cloud.google.com/spanner/docs/getting-started/go/ for an introduction to Cloud Spanner and additional help on using this API. See https://godoc.org/cloud.google.com/go for authentication, timeouts, connection pooling and similar aspects of this package.

To start working with this package, create a client that refers to the database of interest: Remember to close the client after use to free up the sessions in the session pool. Two Client methods, Apply and Single, work well for simple reads and writes. As a quick introduction, here we write a new row to the database and read it back: All the methods used above are discussed in more detail below.

Every Cloud Spanner row has a unique key, composed of one or more columns. Construct keys with a literal of type Key: The keys of a Cloud Spanner table are ordered. You can specify ranges of keys using the KeyRange type: By default, a KeyRange includes its start key but not its end key. Use the Kind field to specify other boundary conditions: A KeySet represents a set of keys. A single Key or KeyRange can act as a KeySet. Use the KeySets function to build the union of several KeySets: AllKeys returns a KeySet that refers to all the keys in a table:

All Cloud Spanner reads and writes occur inside transactions. There are two types of transactions, read-only and read-write. Read-only transactions cannot change the database, do not acquire locks, and may access either the current database state or states in the past. Read-write transactions can read the database before writing to it, and always apply to the most recent database state.

The simplest and fastest transaction is a ReadOnlyTransaction that supports a single read operation. Use Client.Single to create such a transaction. You can chain the call to Single with a call to a Read method. When you only want one row whose key you know, use ReadRow. Provide the table name, key, and the columns you want to read: Read multiple rows with the Read method. It takes a table name, KeySet, and list of columns: Read returns a RowIterator. You can call the Do method on the iterator and pass a callback: RowIterator also follows the standard pattern for the Google Cloud Client Libraries: Always call Stop when you finish using an iterator this way, whether or not you iterate to the end. (Failing to call Stop could lead you to exhaust the database's session quota.) To read rows with an index, use ReadUsingIndex.

The most general form of reading uses SQL statements. Construct a Statement with NewStatement, setting any parameters using the Statement's Params map: You can also construct a Statement directly with a struct literal, providing your own map of parameters. Use the Query method to run the statement and obtain an iterator:

Once you have a Row, via an iterator or a call to ReadRow, you can extract column values in several ways. Pass in a pointer to a Go variable of the appropriate type when you extract a value.
You can extract by column position or name: You can extract all the columns at once: Or you can define a Go struct that corresponds to your columns, and extract into that: For Cloud Spanner columns that may contain NULL, use one of the NullXXX types, like NullString:

To perform more than one read in a transaction, use ReadOnlyTransaction: You must call Close when you are done with the transaction. Cloud Spanner read-only transactions conceptually perform all their reads at a single moment in time, called the transaction's read timestamp. Once a read has started, you can call ReadOnlyTransaction's Timestamp method to obtain the read timestamp. By default, a transaction will pick the most recent time (a time where all previously committed transactions are visible) for its reads. This provides the freshest data, but may involve some delay. You can often get a quicker response if you are willing to tolerate "stale" data. You can control the read timestamp selected by a transaction by calling the WithTimestampBound method on the transaction before using it. For example, to perform a query on data that is at most one minute stale, use a MaxStaleness bound. See the documentation of TimestampBound for more details.

To write values to a Cloud Spanner database, construct a Mutation. The spanner package has functions for inserting, updating and deleting rows. Except for the Delete methods, which take a Key or KeyRange, each mutation-building function comes in three varieties. One takes lists of columns and values along with the table name: One takes a map from column names to values: And the third accepts a struct value, and determines the columns from the struct field names: To apply a list of mutations to the database, use Apply:

If you need to read before writing in a single transaction, use a ReadWriteTransaction. ReadWriteTransactions may abort and need to be retried. You pass in a function to ReadWriteTransaction, and the client will handle the retries automatically. Use the transaction's BufferWrite method to buffer mutations, which will all be executed at the end of the transaction:

Spanner supports DML statements like INSERT, UPDATE and DELETE. Use ReadWriteTransaction.Update to run DML statements. It returns the number of rows affected. (You can also call ReadWriteTransaction.Query with a DML statement. The first call to Next on the resulting RowIterator will return iterator.Done, and the RowCount field of the iterator will hold the number of affected rows.) For large databases, it may be more efficient to partition the DML statement. Use client.PartitionedUpdate to run a DML statement in this way. Not all DML statements can be partitioned.

This client has been instrumented to use OpenCensus tracing (http://opencensus.io). To enable tracing, see "Enabling Tracing for a Program" at https://godoc.org/go.opencensus.io/trace. OpenCensus tracing requires Go 1.8 or higher.
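The stripped examples above can be condensed into one hedged sketch of the write-then-read flow; the database path, the Accounts table, and its columns are placeholders, not part of this package:

```
package main

import (
	"context"
	"log"

	"cloud.google.com/go/spanner"
)

func main() {
	ctx := context.Background()
	// Placeholder database path.
	client, err := spanner.NewClient(ctx, "projects/P/instances/I/databases/D")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close() // frees sessions in the session pool

	// Write a new row with Apply.
	_, err = client.Apply(ctx, []*spanner.Mutation{
		spanner.Insert("Accounts", []string{"ID", "Name"}, []interface{}{int64(1), "Alice"}),
	})
	if err != nil {
		log.Fatal(err)
	}

	// Read it back with Single and ReadRow, then extract a column by name.
	row, err := client.Single().ReadRow(ctx, "Accounts", spanner.Key{int64(1)}, []string{"Name"})
	if err != nil {
		log.Fatal(err)
	}
	var name string
	if err := row.ColumnByName("Name", &name); err != nil {
		log.Fatal(err)
	}
	log.Println(name)
}
```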
Package nzgo is a pure Go language driver for the database/sql package to work with IBM PDA (aka Netezza). In most cases clients will use the database/sql package instead of using this package directly. For example:

nzgo defines a simple logger interface. Set LogLevel to control logging verbosity and LogPath to specify the log file path. In order to enable logging for the driver, you need to add the code below to your application. Declaring the elog variable and calling the elog.Initialize() function is mandatory; otherwise the application fails with the error "runtime error: invalid memory address or nil pointer dereference". You can configure LogLevel and LogPath (i.e. the log file directory) as per your requirements. You may also skip initializing the LogLevel and LogPath values, in which case default values are used. The default value for LogLevel is DEBUG, while the default for LogPath is the same directory as your application. Other valid values for LogLevel are: "OFF", "DEBUG", "INFO" and "FATAL".

securityLevel controls the level of security (SSL/TLS) that the driver uses for the connection to the data store:

onlyUnSecured: The driver does not use SSL.
preferredUnSecured: If the server provides a choice, the driver does not use SSL.
preferredSecured: If the server provides a choice, the driver uses SSL.
onlySecured: The driver does not connect unless an SSL connection is available.

The Netezza server supports the same security levels. The following cases fail:

Client tries to connect with 'Only secured' or 'Preferred secured' mode while the server is in 'Only Unsecured' mode.
Client tries to connect with 'Only secured' or 'Preferred secured' mode while the server is in 'Preferred Unsecured' mode.
Client tries to connect with 'Only Unsecured' or 'Preferred Unsecured' mode while the server is in 'Only Secured' mode.
Client tries to connect with 'Only Unsecured' or 'Preferred Unsecured' mode while the server is in 'Preferred Secured' mode.

Below are the securityLevel values you can pass in the connection string:

Use Open to create a database handle with connection parameters: The Go Netezza Driver supports the following connection syntaxes (or data source name formats): The above example opens a database handle on the NPS server 'vmnps-dw10.svl.ibm.com'. The Go driver connects on port 5480 (the postgres port). The user is admin, the password is password, the database is db1, sslmode is require, and the location of the root certificate file is C:/Users/root31.crt, with securityLevel 'Only Secured session'.

When establishing a connection using nzgo, you are expected to supply a connection string containing zero or more parameters. Below is a subset of the connection parameters supported by nzgo. The following special connection parameters are supported: Valid values for sslmode are: Use single quotes for values that contain whitespace: A backslash will escape the next character in values: Note that the connection parameter client_encoding (which sets the text encoding for the connection) may be set but must be "UTF8", following the same rules as Postgres. It is an error to provide any other value.

database/sql does not dictate any specific format for parameter markers in query strings, but nzgo uses the Netezza-specific parameter marker '?', as shown below. The first parameter marker in the query is replaced by the first argument, the second marker by the second argument, and so on. nzgo supports the RowsAffected() method of the Result type in database/sql. For additional instructions on querying, see the documentation for the database/sql package.
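A minimal sketch of opening a connection and querying with '?' markers, assuming the import path github.com/IBM/nzgo registers the "nzgo" driver; the host, credentials, table and column are illustrative:

```
package main

import (
	"database/sql"
	"log"

	_ "github.com/IBM/nzgo" // import path assumed; registers the "nzgo" driver
)

func main() {
	// Connection parameters are illustrative.
	db, err := sql.Open("nzgo", "host=vmnps-dw10.svl.ibm.com port=5480 user=admin password=password dbname=db1 sslmode=require")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	// The first '?' marker is bound to the first argument, and so on.
	rows, err := db.Query("SELECT name FROM emp WHERE dept = ?", "sales")
	if err != nil {
		log.Fatal(err)
	}
	defer rows.Close()
	for rows.Next() {
		var name string
		if err := rows.Scan(&name); err != nil {
			log.Fatal(err)
		}
		log.Println(name)
	}
}
```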
nzgo also supports transaction queries as specified in the database/sql package (https://github.com/golang/go/wiki/SQLInterface). Transactions are started by calling Begin.

This package returns the following types for values from the Netezza backend:

You can unload data from an IBM Netezza database table on a Netezza host system to a remote client. This unload does not remove rows from the database but instead stores the unloaded data in a flat file (external table) that is suitable for loading back into a Netezza database. The query below creates a file 'et1.txt' on the remote system from the Netezza table t2, with data delimited by '|'.

See https://www.ibm.com/support/knowledgecenter/en/SSULQD_7.2.1/com.ibm.nz.load.doc/t_load_unloading_data_remote_client_sys.html for more information about external tables.
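A hedged sketch of the unload just described, assuming an existing *sql.DB handle; the exact external-table option syntax should be checked against the IBM documentation linked above:

```
// Unload table t2 to a remote flat file, '|'-delimited (approximate syntax).
_, err := db.Exec("CREATE EXTERNAL TABLE '/tmp/et1.txt' USING (REMOTESOURCE 'golang' DELIMITER '|') AS SELECT * FROM t2")
if err != nil {
	log.Fatal(err)
}
```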
Package audiodb provides a client for TheAudioDB v1 API. TheAudioDB is a community-driven music metadata database providing artist, album, and track information including artwork. Usage:
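A hedged usage sketch; the constructor and method names below are hypothetical, for illustration only:

```
// Hypothetical API shape.
c := audiodb.NewClient(http.DefaultClient)
artists, err := c.SearchArtist(context.Background(), "Coldplay")
```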
Package omdb provides a Go client for the OMDb (Open Movie Database) API. OMDb is a RESTful web service to obtain movie and TV series information. Authentication uses an API key passed as a query parameter. API reference: https://www.omdbapi.com/
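Since OMDb is a plain RESTful service keyed by a query parameter, the underlying request the package presumably wraps can be sketched with net/http alone; the "apikey" and "t" (title) parameters come from the OMDb API reference:

```
package main

import (
	"io"
	"log"
	"net/http"
	"net/url"
	"os"
)

func main() {
	q := url.Values{
		"apikey": {os.Getenv("OMDB_API_KEY")},
		"t":      {"Blade Runner"}, // look up by title
	}
	resp, err := http.Get("https://www.omdbapi.com/?" + q.Encode())
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	log.Println(string(body)) // JSON movie metadata
}
```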
Package anidb provides a Go client for the AniDB HTTP XML API. AniDB is a community-maintained anime database. The HTTP API returns XML-encoded data and requires a registered client name and version. API reference: https://wiki.anidb.net/HTTP_API_Definition All API consumers must respect AniDB rate limits: no more than one request every two seconds, and identical requests must be cached locally.
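One way to honor the one-request-per-two-seconds policy is golang.org/x/time/rate; the wrapper below is a sketch (the doRequest callback stands in for an actual API call, and caching of identical requests is still the caller's job):

```
package main

import (
	"context"
	"time"

	"golang.org/x/time/rate"
)

// One request every two seconds, no bursts, per the AniDB policy above.
var limiter = rate.NewLimiter(rate.Every(2*time.Second), 1)

// fetch blocks until a request slot is free, then runs the given request.
// Callers should consult a local cache before calling fetch at all.
func fetch(ctx context.Context, doRequest func(context.Context) error) error {
	if err := limiter.Wait(ctx); err != nil {
		return err
	}
	return doRequest(ctx)
}
```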
Package rawg provides a client for the RAWG Video Games Database API. RAWG provides information about video games, platforms, genres, publishers, developers, tags, and stores. Authentication is via an API key passed as a query parameter. Usage:
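A hedged usage sketch; the constructor and method names are hypothetical:

```
// Hypothetical API shape; the key is passed as a query parameter internally.
c := rawg.NewClient("YOUR_API_KEY")
games, err := c.SearchGames(context.Background(), "zelda")
```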
Package tvdb provides a client for TheTVDB API v4. TheTVDB is a community-maintained TV, movie, and people metadata database. This client covers series, movies, episodes, seasons, people, search, artwork, genres, languages, and update endpoints. Authentication uses a two-step flow: provide an API key (and optional subscriber PIN), and the client transparently obtains a JWT bearer token via the /login endpoint on the first request. Obtain an API key from https://thetvdb.com/dashboard/account/apikey.
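A hedged usage sketch of the two-step flow; the constructor and method names are hypothetical, and per the description the /login JWT exchange happens transparently on the first request:

```
// Hypothetical API shape; PIN is the optional subscriber PIN.
c := tvdb.NewClient("YOUR_API_KEY", "OPTIONAL_PIN")
results, err := c.SearchSeries(context.Background(), "Doctor Who")
```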
Package resp3 implements the Redis RESP3 protocol, which is used by Redis 6.0 and later. RESP (REdis Serialization Protocol) is the protocol used in the Redis database; however, it is designed to be usable by other projects. With version 3 of the protocol, currently a work-in-progress design, the protocol aims to become even more generally useful to other systems that want a protocol which is simple, efficient, and backed by a very large landscape of client library implementations. That means you can use this library to access other RESP3 projects as well.

This library contains three important components: Value, Reader and Writer. Value represents a Redis command or a Redis response; it is a common struct for all RESP3 types. Reader parses Redis responses from Redis servers, or commands from Redis clients. You can use it to implement Redis 6.0 clients without paying attention to the underlying parsing, and the new features of Redis 6.0 can be implemented on top of it. Writer writes RESP3 data; you can use it to send commands to Redis servers.

The RESP3 spec can be found at https://github.com/antirez/RESP3. A Redis client based on these components looks like the following:
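A sketch under assumptions: Value, Reader and Writer are the components named above, but the constructor and method names (NewWriter, NewReader, WriteCommand, ReadValue) are assumed here and may differ from the package's actual API:

```
// Constructor and method signatures are assumed for illustration.
conn, err := net.Dial("tcp", "127.0.0.1:6379")
if err != nil {
	log.Fatal(err)
}
w := resp3.NewWriter(conn)   // Writer: sends commands to the server
r := resp3.NewReader(conn)   // Reader: parses replies into Values
w.WriteCommand("HELLO", "3") // switch the connection to RESP3
v, _, err := r.ReadValue()   // v is a Value describing the server's reply
```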
Package sqlite provides a pure-Go SQLite client (via modernc.org/sqlite) with launcher lifecycle and health check integration. All tests can use Config{Path: ":memory:"} for fully isolated, self-contained databases. WAL mode, foreign key enforcement, and a busy timeout are enabled by default via the Pragmas config field.
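A hedged sketch of the in-memory test setup mentioned above; Config and its Path field come from this description, but the constructor name is assumed:

```
// Constructor name assumed; Config{Path: ":memory:"} is per the package docs.
// WAL mode, foreign keys, and a busy timeout are enabled by default via Pragmas.
db, err := sqlite.New(ctx, sqlite.Config{Path: ":memory:"})
if err != nil {
	log.Fatal(err)
}
defer db.Close()
```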
Package gorag provides a Retrieval-Augmented Generation (RAG) framework for Go.

GoRAG is a comprehensive framework for building RAG applications that combine large language models (LLMs) with vector databases for efficient information retrieval. Key features include:

- Circuit breaker pattern for service resilience
- Graceful degradation for unreliable services
- Lazy loading for efficient memory usage
- Observability with metrics, logging, and tracing
- Plugin system for extensibility
- Connection pooling for efficient resource management
- Support for multiple vector stores (Memory, Milvus, Pinecone, Qdrant, Weaviate)
- Support for multiple embedding providers (Cohere, Ollama, OpenAI, Voyage)
- Support for multiple LLM clients (Anthropic, Azure OpenAI, Ollama, OpenAI)
- Support for multiple document parsers (CSV, JSON, Markdown, PDF, etc.)

To get started, see the examples in the cmd/gorag directory or refer to the documentation in the docs directory.
Package again provides a minimal, context-aware retry framework for Go with pluggable backoff and jitter strategies and zero external dependencies. The library is designed for backend services, CLI tools, distributed systems, API clients, and database integrations that need reliable retry logic. The simplest way to use the retry framework is with the default configuration: Configure retry behavior with backoff strategies, jitter, and retry conditions: Built-in backoff strategies determine the delay between retry attempts: Jitter strategies randomize delays to prevent thundering herd problems: Control which errors trigger retries: Combine conditions with OnlyIf (AND), AnyOf (OR), and Not (negation). All retry operations respect context cancellation: When all retries are exhausted, a RetryError is returned: Use DoWithValue for operations that return values:
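A hedged sketch of the default-configuration path; DoWithValue and RetryError are named above, while Do is assumed here as the error-only counterpart and the exact signatures may differ:

```
// Do and RetryError signatures are assumptions for illustration.
attempts := 0
op := func(ctx context.Context) error {
	attempts++
	if attempts < 3 {
		return errors.New("transient failure") // retried with backoff and jitter
	}
	return nil
}
err := again.Do(ctx, op)
var rerr *again.RetryError
if errors.As(err, &rerr) {
	log.Println("all retry attempts exhausted:", rerr)
}
```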
Package costexplorer provides the client and types for making API requests to AWS Cost Explorer Service.

The Cost Explorer API allows you to programmatically query your cost and usage data. You can query for aggregated data such as total monthly costs or total daily usage. You can also query for granular data, such as the number of daily write operations for Amazon DynamoDB database tables in your production environment.

The Cost Explorer API provides the following endpoint: For information about costs associated with the Cost Explorer API, see AWS Cost Management Pricing (https://aws.amazon.com/aws-cost-management/pricing/).

See https://docs.aws.amazon.com/goto/WebAPI/ce-2017-10-25 for more information on this service. See the costexplorer package documentation for more information. https://docs.aws.amazon.com/sdk-for-go/api/service/costexplorer/

To use AWS Cost Explorer Service with the SDK, use the New function to create a new service client. With that client you can make API requests to the service. These clients are safe to use concurrently. See the SDK's documentation for more information on how to use the SDK. https://docs.aws.amazon.com/sdk-for-go/api/ See the aws.Config documentation for more information on configuring SDK clients. https://docs.aws.amazon.com/sdk-for-go/api/aws/#Config See the AWS Cost Explorer Service client CostExplorer for more information on creating a client for this service. https://docs.aws.amazon.com/sdk-for-go/api/service/costexplorer/#New
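A minimal sketch of creating a client with New and issuing one request; the dates and metric are illustrative values:

```
package main

import (
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/costexplorer"
)

func main() {
	sess := session.Must(session.NewSession())
	svc := costexplorer.New(sess) // safe for concurrent use

	// Query aggregated monthly cost for an illustrative time period.
	out, err := svc.GetCostAndUsage(&costexplorer.GetCostAndUsageInput{
		TimePeriod: &costexplorer.DateInterval{
			Start: aws.String("2024-01-01"),
			End:   aws.String("2024-02-01"),
		},
		Granularity: aws.String("MONTHLY"),
		Metrics:     []*string{aws.String("UnblendedCost")},
	})
	if err != nil {
		log.Fatal(err)
	}
	log.Println(out.ResultsByTime)
}
```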
Package cortexdb provides a lightweight, embeddable vector database for Go AI projects. cortexdb is a 100% pure Go library designed to be the storage kernel for RAG (Retrieval-Augmented Generation) systems. Built on SQLite using modernc.org/sqlite (NO CGO REQUIRED!), it provides vector storage, full-text search (FTS5), knowledge graphs, and chat memory management in a single database file. cortexdb provides high-level APIs for building RAG applications: Build agentic systems with higher-level persistence: Built-in conversation history management: Configure indexing and quantization for production: For deeper control (quantization, logger, text similarity), use core.Config with core.NewWithConfig. cortexdb supports structured logging via core.Config.Logger when using core.NewWithConfig. Run as an MCP stdio server to expose tools to LLM clients: Route user intent before expensive LLM calls: Long-term agent memory with TEMPR retrieval: For more detailed examples, see the examples/ directory.
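A hedged sketch of the deeper-control path mentioned above; core.Config, core.NewWithConfig, and the Logger field are named in these docs, but the remaining field names are assumptions:

```
// core.Config and core.NewWithConfig are per the docs; Path is an assumed field.
db, err := core.NewWithConfig(core.Config{
	Path:   "knowledge.db",   // single database file (assumed field name)
	Logger: slog.Default(),   // structured logging via core.Config.Logger
})
if err != nil {
	log.Fatal(err)
}
defer db.Close()
```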
Package gosnowflake is a pure Go Snowflake driver for the database/sql package. Clients can use the database/sql package directly. For example:

Use the Open() function to create a database handle with connection parameters: The Go Snowflake Driver supports the following connection syntaxes (or data source name (DSN) formats): where all parameters must be escaped, or use Config and DSN to construct a DSN string. For information about account identifiers, see the Snowflake documentation (https://docs.snowflake.com/en/user-guide/admin-account-identifier.html).

The following example opens a database handle with the Snowflake account named "my_account" under the organization named "my_organization", where the username is "jsmith", the password is "mypassword", the database is "mydb", the schema is "testschema", and the warehouse is "mywh":

The connection string (DSN) can contain both connection parameters (described below) and session parameters (https://docs.snowflake.com/en/sql-reference/parameters.html). The following connection parameters are supported:

account <string>: Specifies your Snowflake account, where "<string>" is the account identifier assigned to your account by Snowflake. For information about account identifiers, see the Snowflake documentation (https://docs.snowflake.com/en/user-guide/admin-account-identifier.html). If you are using a global URL, then append the connection group and ".global" (e.g. "<account_identifier>-<connection_group>.global"). The account identifier and the connection group are separated by a dash ("-"), as shown above. This parameter is optional if your account identifier is specified after the "@" character in the connection string.

region <string>: DEPRECATED. You may specify a region, such as "eu-central-1", with this parameter. However, since this parameter is deprecated, it is best to specify the region as part of the account parameter. For details, see the description of the account parameter.

--> Important note: for the database object and other objects (schema, role, etc.), please always adhere to the rules for Snowflake Object Identifiers, especially https://docs.snowflake.com/en/sql-reference/identifiers-syntax#double-quoted-identifiers. As mentioned in those docs, if you have e.g. a database with mIxEDcAsE naming, which you had to create by enclosing the name in double quotes, you will similarly need to reference it with double quotes when specifying it in the connection string / DSN. In practice, this means you need to escape the second pair of double quotes, which are part of the database name rather than the string notation.

database: Specifies the database to use by default in the client session (can be changed after login).

schema: Specifies the database schema to use by default in the client session (can be changed after login).

warehouse: Specifies the virtual warehouse to use by default for queries, loading, etc. in the client session (can be changed after login).

role: Specifies the role to use by default for accessing Snowflake objects in the client session (can be changed after login).

passcode: Specifies the passcode provided by Duo when using multi-factor authentication (MFA) for login.

passcodeInPassword: false by default. Set to true if the MFA passcode is embedded in the login password. Appends the MFA passcode to the end of the password.

loginTimeout: Specifies the timeout, in seconds, for login. The default is 60 seconds. The login request gives up after the timeout length if the HTTP response is success.
requestTimeout: Specifies the timeout, in seconds, for a query to complete. 0 (zero) specifies that the driver should wait indefinitely. The default is 0 seconds. The query request gives up after the timeout length if the HTTP response is success.

authenticator: Specifies the authenticator to use for authenticating user credentials. See the "Authenticator Values" section below for supported values.

singleAuthenticationPrompt: Specifies whether only one authentication should be performed at the same time for authentication methods that need human interaction (like MFA or an OAuth authorization code). The default is true.

application: Identifies your application to Snowflake Support.

disableOCSPChecks: false by default. Set to true to bypass the Online Certificate Status Protocol (OCSP) certificate revocation check. The OCSP module caches responses internally. If your application is long running, you can enable cache clearing by calling StartOCSPCacheClearer and disable it by calling StopOCSPCacheClearer. IMPORTANT: Change the default value for testing or emergency situations only.

token: A token that can be used to authenticate. Should be used in conjunction with the "oauth" authenticator.

client_session_keep_alive: Set to true to have a heartbeat in the background (every hour by default, or at the interval set by client_session_keep_alive_heartbeat_frequency) to keep the connection alive such that the connection session will never expire. Care should be taken in using this option, as it opens up access for as long as the process is alive.

client_session_keep_alive_heartbeat_frequency: Number of seconds between client attempts to update the token for the session. The default is 3600 seconds. The minimum value is 900 seconds; a smaller value will be reset to 900 seconds. The maximum value is 3600 seconds; a larger value will be reset to 3600 seconds. This parameter is only valid if client_session_keep_alive is set to true.

ocspFailOpen: true by default. Set to false to make the OCSP check run in fail-closed mode.

certRevocationCheckMode (enabled, advisory, disabled): Specifies the certificate revocation check mode. When enabled, the driver performs a certificate revocation check using CRLs. When advisory, the driver performs a certificate revocation check using CRLs, but fails the connection only if the certificate is revoked; if the status cannot be determined, the connection is established. When disabled, the driver does not perform a certificate revocation check. Keep in mind that certificate revocation checking with CRLs is a heavy task, both for memory and CPU. The default is disabled.

crlAllowCertificatesWithoutCrlURL: When true, if a certificate does not have a CRL URL, the driver will allow the connection to be established. The default is false.

SNOWFLAKE_CRL_CACHE_VALIDITY_TIME (environment variable): Specifies the validity time of the CRL cache in seconds.

crlInMemoryCacheDisabled: Set to true to disable in-memory caching of CRLs.

crlOnDiskCacheDisabled: Set to true to disable on-disk caching of CRLs (the on-disk cache may help with cold starts).

crlDownloadMaxSize: Maximum size (in bytes) of a CRL to download. The default is 200MB.

SNOWFLAKE_CRL_ON_DISK_CACHE_DIR (environment variable): Set to customize the directory for on-disk caching of CRLs.

SNOWFLAKE_CRL_ON_DISK_CACHE_REMOVAL_DELAY (environment variable): Sets the delay (in seconds) for removing the on-disk cache (for debuggability).

crlHTTPClientTimeout: Customizes the HTTP client timeout for downloading CRLs.

validateDefaultParameters: true by default.
Set to false to disable the existence and privilege checks for the Database, Schema, Warehouse and Role when setting up the connection. --> Important note: with the default value of true, the connection will fail (as the validation fails) if you specify a non-existent database/schema/etc. name. This is particularly important when you have a miXedCaSE-named object (e.g. a database) and you forgot to properly double quote it. This behaviour is still preferable, as it provides a very clear, fail-fast indication of the configuration error. If you would still like to forgo this validation, which ensures that the driver always connects with the proper database, schema, etc. and creates a proper context for it, you can set this parameter to false to allow connections with invalid object identifiers. In this case (with the default validation deliberately turned off) the driver cannot guarantee that the actual behaviour inside the session will match the one you'd expect, i.e. you may not actually be using the database you expect, and so on.

tracing: Specifies the logging level to be used. Set to error by default. Valid values are trace, debug, info, print, warning, error, fatal, panic.

logQueryText: When set to true, the full query text will be logged. Be aware that it may include sensitive information. The default value is false.

logQueryParameters: When set to true, the parameters will be logged. Requires logQueryText to be enabled first. Be aware that they may include sensitive information. The default value is false.

disableQueryContextCache: Disables parsing of the query context returned from the server and resending it to the server. The default value is false.

clientConfigFile: Specifies the location of the client configuration JSON file, in which you can configure the Easy Logging feature.

disableSamlURLCheck: Disables the SAML URL check. The default value is false.

All other parameters are interpreted as session parameters (https://docs.snowflake.com/en/sql-reference/parameters.html). For example, the TIMESTAMP_OUTPUT_FORMAT session parameter can be set by adding: A complete connection string looks similar to the following: Session-level parameters can also be set by using the SQL command "ALTER SESSION" (https://docs.snowflake.com/en/sql-reference/sql/alter-session.html). Alternatively, use the OpenWithConfig() function to create a database handle with the specified Config.

To use the internal Snowflake authenticator, specify snowflake (default). To use programmatic access tokens, specify programmatic_access_token. If you want to cache your MFA logins, specify username_password_mfa. You can pass the TOTP in a separate passcode parameter or append it to the password setting, in which case you need to set passcodeInPassword = true. To authenticate through Okta, specify https://<okta_account_name>.okta.com (the URL prefix for Okta). To authenticate using your IDP via a browser, specify externalbrowser. To authenticate via OAuth with a token, specify oauth and provide an OAuth Access Token (see the token parameter below). To authenticate via the full OAuth flow, specify oauth_authorization_code or oauth_client_credentials and fill in the relevant parameters (oauthClientId, oauthClientSecret, oauthAuthorizationUrl, oauthTokenRequestUrl, oauthRedirectUri, oauthScope). Specify the URLs if you want to use an external OAuth2 IdP; otherwise Snowflake is used as the default IdP. If oauthScope is not configured, the role is used (giving a session:role:<roleName> scope). For more information, please refer to the official Snowflake documentation.
To authenticate via workload identity, specify workload_identity. This option requires the workloadIdentityProvider option to be set (AWS, GCP, AZURE, OIDC). When workloadIdentityProvider=AZURE, workloadIdentityEntraResource can optionally be set to customize the Entra resource used to fetch the JWT token. When workloadIdentityProvider=GCP or AWS, workloadIdentityImpersonationPath can optionally be set to customize the impersonation path. This is a comma-separated list. For GCP, the last entry is the target service account and the rest are the chained delegation. For AWS, this is the list of role ARNs to assume. For more details, refer to the usage guide: https://docs.snowflake.com/en/user-guide/workload-identity-federation

You can also connect to your warehouse using a connection Config. The database/sql documentation recommends this approach when you want to take advantage of driver-specific connection features that aren't available in a connection string: each driver supports its own set of connection properties, often providing ways to customize the connection request specific to the DBMS. For example: If you are using this method, you don't need to pass a driver name to specify the driver type with which you want to connect. Since the driver name is not needed, you can optionally bypass driver registration on startup. To do this, set `GOSNOWFLAKE_SKIP_REGISTRATION` in your environment. This is useful if you wish to register multiple versions of the driver. Note: `GOSNOWFLAKE_SKIP_REGISTRATION` should not be used if sql.Open() is used as the method to connect to the server, as sql.Open requires registration so it can map the driver name to the driver type, which in this case is "snowflake" and SnowflakeDriver{}.

You can load the connection configuration from a .toml file. With two environment variables, `SNOWFLAKE_HOME` (the `connections.toml` file directory) and `SNOWFLAKE_DEFAULT_CONNECTION_NAME` (the DSN name), the driver will search the config file and load the connection. You can find out how to use this connection method at ./cmd/tomlfileconnection or in the Snowflake doc: https://docs.snowflake.com/en/developer-guide/snowflake-cli-v2/connecting/specify-credentials If the connections.toml file is readable by others, a warning will be logged. To disable it, set the environment variable `SF_SKIP_WARNING_FOR_READ_PERMISSIONS_ON_CONFIG_FILE` to true.

If you wish to specify a custom transporter (e.g. to provide a custom TLS config to be used with your custom truststore), pass it through `NewConnector`. Example: As an alternative, you can use the `RegisterTLSConfig` / `DeregisterTLSConfig` functions as seen in the unit tests: https://github.com/snowflakedb/gosnowflake/blob/v1.16.0/transport_test.go#L127

The Go Snowflake Driver honors the environment variables HTTP_PROXY, HTTPS_PROXY and NO_PROXY for the forward proxy setting. NO_PROXY specifies which hostname endings should be allowed to bypass the proxy server, e.g. no_proxy=.amazonaws.com means that Amazon S3 access does not need to go through the proxy. NO_PROXY does not support wildcards. Each value specified should be one of the following: the end of a hostname (or a complete hostname), for example ".amazonaws.com" or "xy12345.snowflakecomputing.com"; or an IP address, for example "192.196.1.15". If more than one value is specified, values should be separated by commas, for example: In addition to environment variables, the Go Snowflake Driver also supports configuring the proxy via connection parameters.
When these parameters are provided in the connection string or DSN, they take precedence and any environment proxy settings (HTTP_PROXY, HTTPS_PROXY, NO_PROXY) will be ignored.

| Parameter       | Description                                                           | Default |
|-----------------|-----------------------------------------------------------------------|---------|
| `proxyHost`     | Hostname or IP address of the proxy server.                           |         |
| `proxyPort`     | Port number of the proxy server.                                      |         |
| `proxyUser`     | Username for proxy authentication.                                    |         |
| `proxyPassword` | Password for proxy authentication.                                    |         |
| `proxyProtocol` | Protocol to use for the proxy connection. Valid values: `http`, `https`. | `http` |
| `noProxy`       | Comma-separated list of hosts that should bypass the proxy.           |         |

For more details, please refer to the example in ./cmd/proxyconnection.

By default, the driver uses a built-in slog-based logger at ERROR level. The driver automatically masks secrets in all log messages to prevent credential leakage. Users can customize logging in two ways:

1. Using a custom slog.Handler (if you want to use slog with custom formatting):

2. Using a complete custom logger implementation (if you want full control):

Important notes: If you want to define S3 client logging, override the S3LoggingMode variable using configuration: https://pkg.go.dev/github.com/aws/aws-sdk-go-v2/aws#ClientLogMode Example:

A custom query tag can be set in the context. Each query run with this context will include the custom query tag as metadata that will appear in the Query Tag column in the Query History log. For example:

A specific query request ID can be set in the context and will be passed through in place of the default randomized request ID. For example:

If you need the query ID for your query, you have to use a raw connection; this applies to both queries and execs. The result of your query can then be retrieved by setting the query ID in the WithFetchResultByID context.

As of 0.5.0, signal handling responsibility has moved to the application. If you want to cancel a query/command with Ctrl+C, add an os.Interrupt trap in the context passed to methods that accept a context parameter (e.g. QueryContext, ExecContext). See cmd/selectmany.go for the full example.

A context containing OpenTelemetry headers for distributed tracing can be created. Each query, both synchronous and asynchronous, run with this context will include the Trace ID and Span ID as metadata. If you are instrumenting your program with OpenTelemetry and exporting telemetry data to Snowflake, then queries run with this context will be properly nested under the appropriate parent span. This can be viewed in the Traces and Logs tab in Snowsight. For example:

The Go Snowflake Driver now supports the Arrow data format for data transfers between Snowflake and the Golang client. The Arrow data format avoids extra conversions between binary and textual representations of the data. The Arrow data format can improve performance and reduce memory consumption in clients. Snowflake continues to support the JSON data format. The data format is controlled by the session-level parameter GO_QUERY_RESULT_FORMAT. To use JSON format, execute: The valid values for the parameter are: If the user attempts to set the parameter to an invalid value, an error is returned. The parameter name and the parameter value are case-insensitive. This parameter can be set only at the session level. Usage notes: The Arrow data format reduces rounding errors in floating point numbers.
You might see slightly different values for floating point numbers when using Arrow format than when using JSON format. In order to take advantage of the increased precision, you must pass in the context.Context object provided by the WithHigherPrecision function when querying.

Traditionally, the rows.Scan() method returned a string when a variable of type interface{} was passed in. Turning on the flag ENABLE_HIGHER_PRECISION via WithHigherPrecision will return the natural, expected data type as well.

For some numeric data types, the driver can retrieve larger values when using the Arrow format than when using the JSON format. For example, using Arrow format allows the full range of SQL NUMERIC(38,0) values to be retrieved, while using JSON format allows only values in the range supported by the Golang int64 data type. Users should ensure that Golang variables are declared using the appropriate data type for the full range of values contained in the column. For an example, see below.

When using the Arrow format, the driver supports more Golang data types and more ways to convert SQL values to those Golang data types. The table below lists the supported Snowflake SQL data types and the corresponding Golang data types. The columns are:

1. The SQL data type.
2. The default Golang data type that is returned when you use snowflakeRows.Scan() to read data from Arrow data format via an interface{}.
3. The possible Golang data types that can be returned when you use snowflakeRows.Scan() to read data from Arrow data format directly.
4. The default Golang data type that is returned when you use snowflakeRows.Scan() to read data from JSON data format via an interface{}. (All returned values are strings.)
5. The standard Golang data type that is returned when you use snowflakeRows.Scan() to read data from JSON data format directly.
Go Data Types for Scan()
===================================================================================================================
                     |                     ARROW                      |                     JSON
===================================================================================================================
SQL Data Type        | Default Go Data Type     | Supported Go Data   | Default Go Data Type   | Supported Go Data
                     | for Scan() interface{}   | Types for Scan()    | for Scan() interface{} | Types for Scan()
===================================================================================================================
BOOLEAN              | bool                                           | string                 | bool
-------------------------------------------------------------------------------------------------------------------
VARCHAR              | string                                         | string
-------------------------------------------------------------------------------------------------------------------
DOUBLE               | float32, float64 [1] , [2]                     | string                 | float32, float64
-------------------------------------------------------------------------------------------------------------------
INTEGER that         | int, int8, int16, int32, int64                 | string                 | int, int8, int16,
fits in int64        | [1] , [2]                                      |                        | int32, int64
-------------------------------------------------------------------------------------------------------------------
INTEGER that doesn't | int, int8, int16, int32, int64, *big.Int       | string                 | error
fit in int64         | [1] , [2] , [3] , [4]                          |
-------------------------------------------------------------------------------------------------------------------
NUMBER(P, S)         | float32, float64, *big.Float                   | string                 | float32, float64
where S > 0          | [1] , [2] , [3] , [5]                          |
-------------------------------------------------------------------------------------------------------------------
DATE                 | time.Time                                      | string                 | time.Time
-------------------------------------------------------------------------------------------------------------------
TIME                 | time.Time                                      | string                 | time.Time
-------------------------------------------------------------------------------------------------------------------
TIMESTAMP_LTZ        | time.Time                                      | string                 | time.Time
-------------------------------------------------------------------------------------------------------------------
TIMESTAMP_NTZ        | time.Time                                      | string                 | time.Time
-------------------------------------------------------------------------------------------------------------------
TIMESTAMP_TZ         | time.Time                                      | string                 | time.Time
-------------------------------------------------------------------------------------------------------------------
BINARY               | []byte                                         | string                 | []byte
-------------------------------------------------------------------------------------------------------------------
ARRAY [6]            | string / array                                 | string / array
-------------------------------------------------------------------------------------------------------------------
OBJECT [6]           | string / struct                                | string / struct
-------------------------------------------------------------------------------------------------------------------
VARIANT              | string                                         | string
-------------------------------------------------------------------------------------------------------------------
MAP                  | map                                            | map

[1] Converting from a higher precision data type to a lower precision data type via the snowflakeRows.Scan() method can lose low bits (lose precision), lose high bits (completely change the value), or result in error.

[2] Attempting to convert from a higher precision data type to a lower precision data type via interface{} causes an error.
[3] Higher precision data types like *big.Int and *big.Float can be accessed by querying with a context returned by WithHigherPrecision().

[4] You cannot directly Scan() into the alternative data types via snowflakeRows.Scan(), but you can convert to those data types by using the .Int64()/.String()/.Uint64() methods. For an example, see below.

[5] You cannot directly Scan() into the alternative data types via snowflakeRows.Scan(), but you can convert to those data types by using the .Float32()/.String()/.Float64() methods. For an example, see below.

[6] Arrays and objects can be either semistructured or structured; see more info in the section below.

Note: SQL NULL values are converted to Golang nil values, and vice-versa.

Snowflake supports two flavours of "structured data": semistructured and structured.

Semistructured types are variants, objects and arrays without a schema. When data is fetched, it is represented as strings and the client is responsible for its interpretation. Example table definition: The data does not have any corresponding schema, so values in the table may vary in shape. Semistructured variants, objects and arrays are always represented as strings for scanning: When inserting, a marker indicating the correct type must be used, for example:

Structured types differ from semistructured types by having a specific schema. In all rows of the table, values must conform to this schema. Example table definition:

To retrieve structured objects, follow these steps:

1. Create a struct implementing the sql.Scanner interface, for example: a) b) The automatic scan goes through all fields in the struct and reads the corresponding object fields. Struct fields have to be public. Embedded structs have to be pointers. The matching name is built from the struct field name with the first letter lowercased. Additionally, an `sf` tag can be added:
- the first value is always the name of the field in the SQL object
- an `ignore` parameter can additionally be passed to omit this field
2. Use the WithStructuredTypesEnabled context while querying data.
3. Use it in a regular scan:

See StructuredObject for all available operations, including null support, embedding nested structs, etc.

Retrieving an array of simple types works exactly the same as for normal values: use the Scan function. You can use the WithEmbeddedValuesNullable context to handle null values in maps and arrays of simple types in the database. In that case, sql null types will be used: If you want to scan an array of structs, you have to use the helper function ScanArrayOfScanners: Retrieving structured maps is very similar to retrieving arrays:

To bind structured objects use:

1. Create a type which implements the StructuredObjectWriter interface, for example: a) b)
2. Use an instance as a regular bind.
3. If you need to bind a nil value, use this special syntax:

Binding structured arrays works like any other parameter. The only difference is: if you want to insert an empty array (not nil but empty), you have to use:

The following example shows how to retrieve very large values using the math/big package. This example retrieves a large INTEGER value to an interface and then extracts a big.Int value from that interface. If the value fits into an int64, then the code also copies the value to a variable of type int64. Note that a context that enables higher precision must be passed in with the query.
If the variable named "rows" is known to contain a big.Int, then you can use the following instead of scanning into an interface and then converting to a big.Int: If the variable named "rows" contains a big.Int, then each of the following fails: Similar code and rules also apply to big.Float values. If you are not sure what data type will be returned, you can use code similar to the following to check the data type of the returned value: By default, DECFLOAT values are returned as string values. If you want to retrieve them as numbers, you have to use the WithDecfloatMappingEnabled context. If higher precision is enabled, the driver will return them as *big.Float values. Otherwise, they will be returned as float64 values. Keep in mind that both float64 and *big.Float are not able to precisely represent some DECFLOAT values. If precision is important, you have to use string representation and use your own library to parse it. You can retrieve data in a columnar format similar to the format a server returns, without transposing them to rows. Arrow Batches mode is available through the separate `arrowbatches` sub-package (`github.com/snowflakedb/gosnowflake/v2/arrowbatches`). This sub-package provides access to Arrow columnar data using ArrowBatch structs, which correspond to data chunks received from the backend. They allow for access to specific arrow.Record structs. The arrow-compute dependency (which significantly increases binary size) is only pulled in when you import the arrowbatches sub-package. If you don't need Arrow batch support, simply don't import it. An ArrowBatch can exist in a state where the underlying data has not yet been loaded. The data is downloaded and translated only on demand. Translation options are retrieved from a context.Context interface, which is either passed from query context or set by the user using WithContext(ctx) method. In order to access them you must use `arrowbatches.WithArrowBatches` context, similar to the following: This returns []*arrowbatches.ArrowBatch. ArrowBatch functions: GetRowCount(): Returns the number of rows in the ArrowBatch. Note that this returns 0 if the data has not yet been loaded, irrespective of it’s actual size. WithContext(ctx context.Context): Sets the context of the ArrowBatch to the one provided. Note that the context will not retroactively apply to data that has already been downloaded. For example: will produce the same result in records1 and records2, irrespective of the newly provided ctx. Context worth noting are: -arrowbatches.WithTimestampOption -WithHigherPrecision -arrowbatches.WithUtf8Validation described in more detail later. Fetch(): Returns the underlying records as *[]arrow.Record. When this function is called, the ArrowBatch checks whether the underlying data has already been loaded, and downloads it if not. Limitations: How to handle timestamps in Arrow batches: Snowflake returns timestamps natively (from backend to driver) in multiple formats. The Arrow timestamp is an 8-byte data type, which is insufficient to handle the larger date and time ranges used by Snowflake. Also, Snowflake supports 0-9 (nanosecond) digit precision for seconds, while Arrow supports only 3 (millisecond), 6 (microsecond), an 9 (nanosecond) precision. Consequently, Snowflake uses a custom timestamp format in Arrow, which differs on timestamp type and precision. 
If you want to use timestamps in Arrow batches, you have two options:

How to handle invalid UTF-8 characters in Arrow batches: Snowflake previously allowed users to upload data with invalid UTF-8 characters. Consequently, Arrow records containing string columns in Snowflake could include these invalid UTF-8 characters. However, according to the Arrow specifications (https://arrow.apache.org/docs/cpp/api/datatype.html and https://github.com/apache/arrow/blob/a03d957b5b8d0425f9d5b6c98b6ee1efa56a1248/go/arrow/datatype.go#L73-L74), Arrow string columns should only contain UTF-8 characters. To address this issue and prevent potential downstream disruptions, the context arrowbatches.WithUtf8Validation is introduced. When enabled, this feature iterates through all values in string columns, identifying and replacing any invalid characters with `�`. This ensures that Arrow records conform to the UTF-8 standards, preventing validation failures in downstream services like the Rust Arrow library that impose strict validation checks.

How to handle higher precision in Arrow batches: To preserve BigDecimal values within Arrow batches, use WithHigherPrecision. This offers two main benefits: it helps avoid precision loss and defers the conversion to upstream services. Alternatively, without this setting, all non-zero scale numbers will be converted to float64, potentially resulting in loss of precision. Zero-scale numbers (DECIMAL256, DECIMAL128) will be converted to int64, which could lead to overflow. When using NUMBERs with non-zero scale, the value is returned as an integer type and the scale is provided in the record metadata. For example, the value 123.45 coming from a NUMBER(9, 4) column is represented as 1234500 with a scale of 4. It is the client's responsibility to interpret this correctly. Also see the limitations section above.

How to handle JSON responses in Arrow batches: Due to technical limitations, the Snowflake backend may return JSON even if the client expects Arrow. In that case Arrow batches are not available and an error with code ErrNonArrowResponseInArrowBatches is returned. The response is parsed into regular rows. You can read the rows in the way described in the transform_batches_to_rows.go example. This has a strong limitation, though: it is a very low-level API (the Go driver API), so no conversions are provided and all values are returned as strings. An alternative approach is to rerun the query without enabling Arrow batches and use the general Go SQL API instead of the driver API. This can be optimized by using `WithRequestID`, so the backend returns results from its cache.

Binding allows a SQL statement to use a value that is stored in a Golang variable. Without binding, a SQL statement specifies values by specifying literals inside the statement. For example, the following statement uses the literal value "42" in an UPDATE statement: With binding, you can execute a SQL statement that uses a value that is inside a variable. For example: The "?" inside the "VALUES" clause specifies that the SQL statement uses the value from a variable. Binding data that involves time zones can require special handling. For details, see the section titled "Timestamps with Time Zones".
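As a compact illustration of the placeholder binding just described, using the standard database/sql interface; the table and column names are made up:

```
// Bind the Go values 42 and "hello" to the "?" placeholders.
// Table and column names are illustrative.
_, err := db.Exec("INSERT INTO t1 (c1, c2) VALUES (?, ?)", 42, "hello")
if err != nil {
	log.Fatal(err)
}
```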
Version 1.6.23 (and later) of the driver takes advantage of sql.Null types, which enable the proper handling of null parameters inside function calls, i.e.: Timestamp nullability had to be achieved by wrapping the sql.NullTime type, as Snowflake provides several date and time types which all map to the single Go time.Time type:

Version 1.3.9 (and later) of the Go Snowflake Driver supports the ability to bind an array variable to a parameter in a SQL INSERT statement. You can use this technique to insert multiple rows in a single batch. As an example, the following code inserts rows into a table that contains integer, float, boolean, and string columns. The example binds arrays to the parameters in the INSERT statement.

If the array contains SQL NULL values, use the slice type []interface{}, which allows Golang nil values. This feature is available in version 1.6.12 (and later) of the driver. For example,

For slices []interface{} containing time.Time values, a binding parameter flag is required for the preceding array variable in the Array() function. This feature is available in version 1.6.13 (and later) of the driver. For example,

Note: For alternative ways to load data into the Snowflake database (including bulk loading using the COPY command), see Loading Data into Snowflake (https://docs.snowflake.com/en/user-guide-data-load.html).

When you use array binding to insert a large number of values, the driver can improve performance by streaming the data (without creating files on the local machine) to a temporary stage for ingestion. The driver automatically does this when the number of values exceeds a threshold (no changes are needed to user code). In order for the driver to send the data to a temporary stage, the user must have the following privilege on the schema: If the user does not have this privilege, the driver falls back to sending the data with the query to the Snowflake database. In addition, the current database and schema for the session must be set. If these are not set, the CREATE TEMPORARY STAGE command executed by the driver can fail with the following error:

For alternative ways to load data into the Snowflake database (including bulk loading using the COPY command), see Loading Data into Snowflake (https://docs.snowflake.com/en/user-guide-data-load.html).

Go's database/sql package supports the ability to bind a parameter in a SQL statement to a time.Time variable. However, when the client binds data to send to the server, the driver cannot determine the correct Snowflake date/timestamp data type to associate with the binding parameter. For example: To resolve this issue, a binding parameter flag is introduced that associates any subsequent time.Time type to the DATE, TIME, TIMESTAMP_LTZ, TIMESTAMP_NTZ or BINARY data type. The above example could be rewritten as follows:

The driver fetches TIMESTAMP_TZ (timestamp with time zone) data using the offset-based Location types, which represent a collection of time offsets in use in a geographical area, such as CET (Central European Time) or UTC (Coordinated Universal Time). The offset-based Location data is generated and cached when a Go Snowflake Driver application starts, and if the given offset is not in the cache, it is generated dynamically. Currently, Snowflake does not support the name-based Location types (e.g. "America/Los_Angeles"). For more information about Location types, see the Go documentation for https://golang.org/pkg/time/#Location.

Internally, this feature leverages the []byte data type.
As a result, BINARY data cannot be bound without the binding parameter flag. In the following example, sf is an alias for the gosnowflake package:

The Go Snowflake Driver supports JWT (JSON Web Token) authentication. To enable this feature, construct the DSN with the fields "authenticator=SNOWFLAKE_JWT&privateKey=<your_private_key>", or use a Config structure specifying: The <your_private_key> should be a base64 URL-encoded PKCS8 RSA private key string. One way to encode a byte slice to base64 URL format is through the base64.URLEncoding.EncodeToString() function. On the server side, you can alter the public key with the SQL command: The <your_public_key> should be a base64 Standard-encoded PKI public key string. One way to encode a byte slice to base64 Standard format is through the base64.StdEncoding.EncodeToString() function. To generate a valid key pair, you can execute the following commands in the shell:

Note: As of February 2020, Golang's official library does not support passcode-encrypted PKCS8 private keys. For security purposes, Snowflake highly recommends that you store the passcode-encrypted private key on disk and decrypt the key in your application using a library you trust.

JWT tokens are recreated on each retry and are valid (`exp` claim) for `jwtTimeout` seconds. Each retry timeout is configured by `jwtClientTimeout`. Retries are limited by the total time of `loginTimeout`.

The driver also supports authentication via an external browser. When a connection is created, the driver opens a browser window and asks the user to sign in. To enable this feature, construct the DSN with the field "authenticator=EXTERNALBROWSER" or use a Config structure with the following Authenticator specified: External browser authentication implements a timeout mechanism, which prevents the driver from hanging indefinitely when the browser window is closed or not responding. The timeout defaults to 120s and can be changed by setting the DSN field "externalBrowserTimeout=240" (time in seconds) or by using a Config structure with ExternalBrowserTimeout specified: This feature is available in version 1.3.8 or later of the driver.

By default, Snowflake returns an error for queries issued with multiple statements. This restriction helps protect against SQL Injection attacks (https://en.wikipedia.org/wiki/SQL_injection). The multi-statement feature allows users to skip this restriction and execute multiple SQL statements through a single Golang function call. However, this opens up the possibility for SQL injection, so it should be used carefully. The risk can be reduced by specifying the exact number of statements to be executed, which makes it more difficult to inject a statement by appending it. More details are below.

The Go Snowflake Driver provides two functions that can execute multiple SQL statements in a single call: To compose a multi-statement query, simply create a string that contains all the queries, separated by semicolons, in the order in which the statements should be executed. To protect against SQL Injection attacks while using the multi-statement feature, pass a Context that specifies the number of statements in the string. For example:

When multiple queries are executed by a single call to QueryContext(), multiple result sets are returned. After you process the first result set, get the next result set (for the next SQL statement) by calling NextResultSet().
The function db.ExecContext() returns a single result, which is the sum of the number of rows changed by each individual statement. For example, if your multi-statement query executed two UPDATE statements, each of which updated 10 rows, the result returned would be 20. Individual row counts for individual statements are not available. The following code shows how to retrieve the result of a multi-statement query executed through db.ExecContext(): Note: Because a multi-statement ExecContext() returns a single value, you cannot detect offsetting errors. For example, suppose you expected the return value to be 20 because you expected each UPDATE statement to update 10 rows. If one UPDATE statement updated 15 rows and the other updated only 5 rows, the total would still be 20, and you would see no indication that the UPDATEs had not behaved as expected. The ExecContext() function does not return an error if passed a query (e.g. a SELECT statement). However, it still returns only a single value, not a result set, so using it to execute queries (or a mix of queries and non-query statements) is impractical. The QueryContext() function does not return an error if passed non-query statements (e.g. DML). The function returns a result set for each statement, whether or not the statement is a query. For each non-query statement, the result set contains a single row with a single column; the value is the number of rows changed by the statement. If you want to execute a mix of query and non-query statements (e.g. a mix of SELECT and DML statements) in a multi-statement query, use QueryContext(). You can retrieve the result sets for the queries, and you can retrieve or ignore the row counts for the non-query statements. Note: PUT statements are not supported for multi-statement queries. If a SQL statement passed to ExecContext() or QueryContext() fails to compile or execute, that statement is aborted and subsequent statements are not executed. Any statements prior to the aborted statement are unaffected. For example, if the statements below are run as one multi-statement query, the multi-statement query fails on the third statement and an error is returned. If you then query the contents of the table named "test", the values 1 and 2 would be present. When using the QueryContext() and ExecContext() functions, Go code can check for errors in the usual way. For example: Preparing statements and using bind variables are also not supported for multi-statement queries. The Go Snowflake Driver supports asynchronous execution of SQL statements. Asynchronous execution allows you to start executing a statement and then retrieve the result later without being blocked while waiting. While waiting for the result of a SQL statement, you can perform other tasks, including executing other SQL statements. Most of the steps to execute an asynchronous query are the same as the steps to execute a synchronous query. However, there is one additional step: you must call the WithAsyncMode() function to update your Context object to specify that asynchronous mode is enabled. In the sketch below, the call to WithAsyncMode() is the only step specific to asynchronous mode; the rest of the code is compatible with both asynchronous and synchronous mode.
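A minimal sketch (the query is illustrative):

    import (
        "context"
        "database/sql"
        "fmt"

        sf "github.com/snowflakedb/gosnowflake"
    )

    // runAsync starts a query in asynchronous mode.
    func runAsync(db *sql.DB) error {
        // WithAsyncMode is the only asynchronous-specific step.
        ctx := sf.WithAsyncMode(context.Background())
        rows, err := db.QueryContext(ctx, "SELECT COUNT(*) FROM mytable")
        if err != nil {
            return err
        }
        defer rows.Close()

        // ... do other work here while the query runs server-side ...

        // rows.Next() blocks until the result set is actually available.
        for rows.Next() {
            var n int64
            if err := rows.Scan(&n); err != nil {
                return err
            }
            fmt.Println(n)
        }
        return rows.Err()
    }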
The function db.QueryContext() returns an object of type snowflakeRows regardless of whether the query is synchronous or asynchronous. However, the call to the Next() function of snowflakeRows is always synchronous (i.e. blocking). If the query has not yet completed and the snowflakeRows object (named "rows" in this example) has not been filled in yet, rows.Next() waits until the result set has been filled in. More generally, calls to any Golang SQL API function implemented in snowflakeRows or snowflakeResult are blocking calls, and wait if results are not yet available. (Examples of other synchronous calls include: snowflakeRows.Err(), snowflakeRows.Columns(), snowflakeRows.ColumnTypes(), snowflakeRows.Scan(), and snowflakeResult.RowsAffected().) Because the example code above executes only one query and no other activity, there is no significant difference in behavior between asynchronous and synchronous mode. The differences become significant if, for example, you want to perform some other activity after the query starts and before it completes. The example code below starts a query, which runs in the background, and then retrieves the results later. This example uses small SELECT statements that do not retrieve enough data to require asynchronous handling. However, the technique works for larger data sets, and for situations where the programmer might want to do other work after starting the queries and before retrieving the results. For a more elaborate example, please see cmd/async/async.go ==> Some considerations related to the ServerSessionKeepAlive configuration option in the context of asynchronous query execution When a Go SQL connection is closed, it performs the following actions:
* stops the scheduled heartbeats (CLIENT_SESSION_KEEP_ALIVE), if they were enabled
* cleans up all HTTP connections that are already idle - it does not touch connections that are currently in active use
* if Config.ServerSessionKeepAlive is false (default), actively logs out of the current Snowflake session.
!! Caveat: If any queries are still executing in the same Snowflake session (e.g. async queries sent with WithAsyncMode()), those queries are automatically cancelled from the client side a couple of minutes after the Close() call, because a Snowflake session that has been actively logged out cannot sustain any queries. You can govern this behaviour by setting Config.ServerSessionKeepAlive to true, in which case the corresponding Snowflake session is kept alive for a long time (determined by the Snowflake engine) even after an explicit Connection.Close() call, past the time when the last running query in the session finishes executing. The behaviour also depends on the ABORT_DETACHED_QUERY parameter; please see the detailed explanation in the parameter description at https://docs.snowflake.com/en/sql-reference/parameters#abort-detached-query. As a consequence, the best practice is to isolate all long-running async tasks (especially ones that are supposed to continue after the connection is closed) into a separate connection. The Go Snowflake Driver supports the PUT and GET commands. The PUT command copies a file from a local computer (the computer where the Golang client is running) to a stage on the cloud platform. The GET command copies data files from a stage on the cloud platform to a local computer. See the following for information on the syntax and supported parameters: Using PUT: The sketch below shows how to run a PUT command by passing a string to the db.Query() function; "<local_file>" should include the file path as well as the name.
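A minimal sketch (the file path and stage name are hypothetical; note the absolute path):

    import "database/sql"

    // uploadFile stages a local file with PUT.
    func uploadFile(db *sql.DB) error {
        rows, err := db.Query("PUT file:///tmp/data/mydata.csv @my_stage AUTO_COMPRESS=TRUE")
        if err != nil {
            return err
        }
        return rows.Close()
    }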
Snowflake recommends using an absolute path rather than a relative path, as in the sketch above. Different client platforms (e.g. Linux, Windows) have different path name conventions. Ensure that you specify path names appropriately. This is particularly important on Windows, which uses the backslash character as both an escape character and as a separator in path names. To send information from a stream (rather than a file), use code similar to the code below. (The ReplaceAll() function is needed on Windows to handle backslashes in the path to the file.) Note: PUT statements are not supported for multi-statement queries. Using GET: The following example shows how to run a GET command by passing a string to the db.Query() function: "<local_file>" should include the file path as well as the name. Snowflake recommends using an absolute path rather than a relative path. For example: To download a file into an in-memory stream (rather than a file), use code similar to the code below. Note: GET statements are not supported for multi-statement queries. Specifying a temporary directory for encryption and compression: Putting and getting requires compression and/or encryption, which is done in the OS temporary directory. If you cannot use the default temporary directory for your OS, or you want to specify a different one, use the "tmpDirPath" DSN parameter. Remember to encode slashes. Example: Using a custom configuration for PUT/GET: If you want to override some default configuration options, you can use the `WithFileTransferOptions` context. There are multiple config parameters, including progress bars and compression. The Go Snowflake Driver includes an embedded native library called "minicore" that verifies loading of native Rust extensions on various platforms. By default, minicore is enabled and loaded dynamically at runtime. ## Disabling Minicore There are two ways to disable minicore: 1. **At runtime using an environment variable:** 2. **At compile time using build tags:** ## Static Linking On Linux, if the binary is fully statically linked (e.g., built with -linkmode external -extldflags '-static'), the driver automatically detects this and skips minicore loading. Calling dlopen from a statically linked glibc binary would crash with SIGFPE, so the driver inspects the ELF header for a dynamic linker (PT_INTERP) and gracefully skips minicore if none is found. When minicore is disabled (either at runtime, at compile time, or automatically due to static linking), the driver continues to work normally, but without the additional functionality provided by the native library. If you force FIPS mode using the fips140 GODEBUG option, the driver switches OCSP requests from SHA-1 to SHA-256. Be aware that the Snowflake OCSP cache server does not support OCSP requests signed with SHA-256, so the driver may work more slowly; and if the OCSP cache server is unavailable, OCSP requests will fail, and if OCSP is enabled, connection attempts will fail as well. ==> Relevant configuration
- `ConnectionDiagnosticsEnabled` (default: false) - main flag to enable the diagnostics
- `ConnectionDiagnosticsAllowlistFile` - specify `/path/to/allowlist.json` to use a specific allowlist file which the driver should parse. If not specified, the driver tries to open `allowlist.json` from the current directory.
`ConnectionDiagnosticsAllowlistFile` is only taken into consideration when `ConnectionDiagnosticsEnabled=true`. ==> Flow of operation when `ConnectionDiagnosticsEnabled=true`
1. Upon initial startup, the driver opens and reads `allowlist.json` to determine which hosts it needs to connect to; then, for each entry in the allowlist:
2. performs a DNS resolution test to see if the hostname is resolvable
3. logs an Error when encountering a _public_ IP address for a host that looks like a _private_ link hostname
4. checks whether a proxy is used in the connection
5. sets up a connection, using the same transport that is driven by the driver's config (a custom transport; an OCSP-less transport when OCSP is disabled; or, by default, the OCSP-enabled transport)
6. for HTTP endpoints, issues an HTTP GET request and sees if it connects
7. for HTTPS endpoints, does the same, plus
8. the driver exits after performing diagnostics. If you want to use the driver 'normally' after performing connection diagnostics, set `ConnectionDiagnosticsEnabled=false` or remove it from the config.
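A sketch of enabling diagnostics from Go. The credential values are placeholders, and it is an assumption that the two options above are fields of the driver's Config with exactly these names:

    import sf "github.com/snowflakedb/gosnowflake"

    // diagnosticsDSN builds a DSN for a diagnostics-only run.
    func diagnosticsDSN() (string, error) {
        cfg := &sf.Config{
            Account:  "myaccount",
            User:     "jsmith",
            Password: "mypassword",
            // Run connectivity checks against the hosts in the allowlist,
            // then exit instead of serving normal traffic.
            ConnectionDiagnosticsEnabled:       true,
            ConnectionDiagnosticsAllowlistFile: "/path/to/allowlist.json",
        }
        return sf.DSN(cfg)
    }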
Package gosnowflake is a pure Go Snowflake driver for the database/sql package. Clients can use the database/sql package directly. For example: Use Open to create a database handle with connection parameters: The Go Snowflake Driver supports the following connection syntaxes (or data source name formats): where all parameters must be escaped; alternatively, use `Config` and `DSN` to construct a DSN string. The following example opens a database handle with the Snowflake account myaccount, where the username is jsmith, password is mypassword, database is mydb, schema is testschema, and warehouse is mywh: The following connection parameters are supported: account <string>: Specifies the name of your Snowflake account, where string is the name assigned to your account by Snowflake. In the URL you received from Snowflake, your account name is the first segment in the domain (e.g. abc123 in https://abc123.snowflakecomputing.com). This parameter is optional if your account is specified after the @ character. If your account is not in the us-west-2 region on AWS, append the region after the account name, e.g. “<account>.<region>”. If you are not on an AWS deployment, append not only the region but also the platform, e.g. “<account>.<region>.<platform>”. Account, region, and platform should be separated by periods (“.”), as shown above. If you are using a global URL, append the connection group and "global", e.g. "account-<connection_group>.global". Account and connection group are separated by a dash ("-"), as shown above. region <string>: DEPRECATED. You may specify a region, such as “eu-central-1”, with this parameter. However, since this parameter is deprecated, it is best to specify the region as part of the account parameter. For details, see the description of the account parameter. database: Specifies the database to use by default in the client session (can be changed after login). schema: Specifies the database schema to use by default in the client session (can be changed after login). warehouse: Specifies the virtual warehouse to use by default for queries, loading, etc. in the client session (can be changed after login). role: Specifies the role to use by default for accessing Snowflake objects in the client session (can be changed after login). passcode: Specifies the passcode provided by Duo when using MFA for login. passcodeInPassword: false by default. Set to true if the MFA passcode is embedded in the login password. Appends the MFA passcode to the end of the password. loginTimeout: Specifies the timeout, in seconds, for login. The default is 60 seconds. The login request gives up once the timeout elapses without a successful response. authenticator: Specifies the authenticator to use for authenticating user credentials: To use the internal Snowflake authenticator, specify snowflake (default). To authenticate through Okta, specify https://<okta_account_name>.okta.com (the URL prefix for Okta). To authenticate using your IDP via a browser, specify externalbrowser. To authenticate via OAuth, specify oauth and provide an OAuth Access Token (see the token parameter below). application: Identifies your application to Snowflake Support. insecureMode: false by default. Set to true to bypass the Online Certificate Status Protocol (OCSP) certificate revocation check. IMPORTANT: Change the default value only for testing or in emergency situations. token: a token that can be used to authenticate. Should be used in conjunction with the "oauth" authenticator.
client_session_keep_alive: Set to true to have a heartbeat in the background every hour to keep the connection alive, so that the connection session never expires. Use this option with care, as it keeps the access open for as long as the process is alive. ocspFailOpen: true by default. Set to false to make the OCSP check fail closed. validateDefaultParameters: true by default. Set to false to disable the existence and privilege checks for the Database, Schema, Warehouse and Role when setting up the connection. All other parameters are taken as session parameters. For example, the TIMESTAMP_OUTPUT_FORMAT session parameter can be set by adding: The Go Snowflake Driver honors the environment variables HTTP_PROXY, HTTPS_PROXY and NO_PROXY for the forward proxy setting. NO_PROXY specifies which hostname endings should be allowed to bypass the proxy server, e.g. `no_proxy=.amazonaws.com` means that AWS S3 access does not need to go through the proxy. NO_PROXY does not support wildcards. Each value specified should be one of the following: the end of a hostname (or a complete hostname), for example ".amazonaws.com" or "xy12345.snowflakecomputing.com"; or an IP address, for example "192.196.1.15". If more than one value is specified, values should be separated by commas, for example: By default, the driver's builtin logger is a NOP; no output is generated. This is intentional, so that applications that use the same set of logger parameters do not conflict with glog, which is incorporated in the driver's logging framework. In order to enable debug logging for the driver, add the build tag sfdebug to the go tool command line, for example: For tests, run the test command with the tag along with the glog parameters. For example, the following command generates all activity logs on standard error. Likewise, if you build your application with the tag, you may specify the same set of glog parameters. To get the logs for a specific module, use the -vmodule option. For example, to retrieve the driver.go and connection.go module logs: Note: If your request retrieves no logs, call db.Close() or glog.Flush() to flush the glog buffer. Note: The logger may be changed in the future for better logging. Currently, if an application uses the same parameters as glog, you cannot collect both application and driver logs at the same time. From 0.5.0, signal handling responsibility has moved to the applications. If you want to cancel a query/command by Ctrl+C, add an os.Interrupt trap in the context passed to methods that take a context parameter, e.g. QueryContext, ExecContext. See cmd/selectmany.go for the full example. Queries return SQL column type information in the ColumnType type. The DatabaseTypeName method returns the following strings representing Snowflake data types: Go's database/sql package limits Go's data types to the following for binding and fetching: Fetching data isn't an issue, since the database data type is provided along with the data, so the Go Snowflake Driver can translate Snowflake data types to Go native data types. When the client binds data to send to the server, however, the driver cannot determine the date/timestamp data types to associate with binding parameters. For example: To resolve this issue, a binding parameter flag is introduced that associates any subsequent time.Time type with the DATE, TIME, TIMESTAMP_LTZ, TIMESTAMP_NTZ or BINARY data type.
The above example could be rewritten as follows: The driver fetches TIMESTAMP_TZ (timestamp with time zone) data using the offset-based Location types, which represent a collection of time offsets in use in a geographical area, such as CET (Central European Time) or UTC (Coordinated Universal Time). The offset-based Location data is generated and cached when a Go Snowflake Driver application starts, and if a given offset is not in the cache, it is generated dynamically. Currently, Snowflake doesn't support name-based Location types, e.g. "America/Los_Angeles". For more information about Location types, see the Go documentation for https://golang.org/pkg/time/#Location. Internally, this feature leverages the []byte data type. As a result, BINARY data cannot be bound without the binding parameter flag. In the following example, sf is an alias for the gosnowflake package: The driver directly downloads a result set from cloud storage if the size is large. This shifts workloads from the Snowflake database to the clients, which is required for scale. The download is performed asynchronously by a goroutine named "Chunk Downloader", so that the driver can fetch the next result set while the application consumes the current one. The application may change the number of result set chunk downloaders if required. Note that this doesn't reduce the memory footprint by itself; consider the custom JSON decoder below. Experimental: Custom JSON Decoder for parsing the result set. The application may have the driver use a custom JSON decoder that incrementally parses the result set, as follows. This option can reduce the memory footprint to a half or even a quarter, but it can significantly degrade performance depending on the environment. Test cases running on a Travis Ubuntu box show a five times smaller memory footprint but four times slower execution. Be cautious when using this option. (Private Preview) JWT authentication ** Not recommended for production use until GA. JWT tokens are supported when compiling with Go 1.10 or higher. A binary compiled with a lower version of Go returns an error at runtime when users try to use the JWT authentication feature. To enable this feature, construct the DSN with the fields "authenticator=SNOWFLAKE_JWT&privateKey=<your_private_key>", or use a Config structure specifying: The <your_private_key> should be a base64 URL encoded PKCS8 rsa private key string. One way to encode a byte slice in base64 URL format is with the base64.URLEncoding.EncodeToString() function. On the server side, one can alter the public key with the SQL command: The <your_public_key> should be a base64 Standard encoded PKI public key string. One way to encode a byte slice in base64 Standard format is with the base64.StdEncoding.EncodeToString() function. To generate a valid key pair, one can run the following commands in a shell: GET and PUT operations are unsupported.
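Tying the connection discussion together, a minimal sketch of the `Config`/`DSN` route for opening a handle, reusing the example values from the connection-string discussion above (all credentials are placeholders):

    import (
        "database/sql"

        sf "github.com/snowflakedb/gosnowflake"
    )

    // openHandle builds a DSN from a Config instead of hand-escaping a string.
    func openHandle() (*sql.DB, error) {
        cfg := &sf.Config{
            Account:   "myaccount",
            User:      "jsmith",
            Password:  "mypassword",
            Database:  "mydb",
            Schema:    "testschema",
            Warehouse: "mywh",
        }
        dsn, err := sf.DSN(cfg)
        if err != nil {
            return nil, err
        }
        return sql.Open("snowflake", dsn)
    }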
Package postgres provides a PostgreSQL client built on top of GORM. The package exposes a small, database-agnostic interface (`Client`) plus a fluent query builder (`QueryBuilder`). The concrete implementation (`*Postgres`) wraps a `*gorm.DB` and adds: The active `*gorm.DB` connection pointer is stored in an `atomic.Pointer`. Calls that need a DB snapshot simply load the pointer and run the operation without holding any package-level locks; reconnection swaps the pointer atomically (see the sketch after this section). Basic usage: Transaction usage: The Postgres client supports optional observability through the unified `observability.Observer` interface (`github.com/aalemi-dev/stdlib-lab/observability`). If an observer is attached, it is notified after each operation completes (success or error) with an `observability.OperationContext`. Non-Fx usage (builder pattern): Fx usage (optional injection): The Postgres client emits (at least) the following operation names: Resource conventions: The package provides `FXModule`, which constructs `*Postgres` and also exposes it as the `Client` interface, plus lifecycle hooks for starting/stopping monitoring. Package postgres provides PostgreSQL database operations with an interface-first design. The interfaces defined here (Client, QueryBuilder) can be implemented by other database packages (like mariadb) to provide a consistent API across different databases.
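As a concrete illustration of the atomic connection-pointer pattern described above, a minimal sketch (illustrative only, not the package's actual types):

    import (
        "sync/atomic"

        "gorm.io/gorm"
    )

    // connHolder is a hypothetical type that only illustrates the lock-free
    // snapshot/swap pattern.
    type connHolder struct {
        db atomic.Pointer[gorm.DB]
    }

    // snapshot returns the current connection without taking any locks.
    func (h *connHolder) snapshot() *gorm.DB { return h.db.Load() }

    // reconnect atomically swaps in a fresh connection; in-flight calls keep
    // using whatever snapshot they already loaded.
    func (h *connHolder) reconnect(fresh *gorm.DB) { h.db.Store(fresh) }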
Package mariadb provides functionality for interacting with MariaDB and MySQL databases. The mariadb package offers a robust interface for working with MariaDB/MySQL databases, built on top of the GORM ORM. It includes connection management, query execution, transaction handling, and migration tools. Core Features: The active `*gorm.DB` connection pointer is stored in an `atomic.Pointer`. Calls that need a DB snapshot simply load the pointer and run the operation without holding any package-level locks; reconnection swaps the pointer atomically. Basic Usage: Transaction Example: Query Builder Example: Basic Operations: FX Module Integration: This package provides an fx module for easy integration. MariaDB supports optional observability through the Observer interface from the observability package. This allows external systems to track operations without coupling the MariaDB package to specific metrics/tracing implementations. Using WithObserver (non-FX usage): Using FX (automatic injection): The observer receives events for all database operations. Error Handling: All methods in this package return GORM errors directly. This design provides: Basic error handling with GORM errors is shown in the sketch after this section. For standardized error types, use TranslateError(). A recommended pattern is to create a helper function for common error checks. Differences from the PostgreSQL Package: This package provides nearly identical functionality to the postgres package, with the following notable differences: Performance Considerations: Thread Safety: All methods on the MariaDB interface are safe for concurrent use by multiple goroutines. The connection pointer is read via an atomic load and can be swapped atomically during reconnection. Package mariadb provides MariaDB/MySQL database operations with an interface-first design. The interfaces defined here (Client, QueryBuilder) provide a consistent API that can be implemented by different database packages.
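Since methods return GORM errors directly, a minimal sketch of the errors.Is pattern (the User model and the findUser helper are hypothetical):

    import (
        "errors"

        "gorm.io/gorm"
    )

    // User is a hypothetical model for the sketch.
    type User struct {
        ID   uint
        Name string
    }

    // findUser distinguishes "not found" from real failures.
    func findUser(db *gorm.DB, id uint) (*User, error) {
        var u User
        if err := db.First(&u, id).Error; err != nil {
            if errors.Is(err, gorm.ErrRecordNotFound) {
                return nil, nil // "not found" treated as an expected condition
            }
            return nil, err // real failure (connection, syntax, ...)
        }
        return &u, nil
    }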
Package metrics provides Prometheus-based monitoring and metrics collection functionality for Go applications. The metrics package is designed to provide a standardized observability approach, with dual HTTP endpoints for system-level and application-level metrics, full control over metric definitions, and integration with the Fx dependency injection framework for easy incorporation into Aleph Alpha services. This package follows the "accept interfaces, return structs" design pattern: The package provides two separate Prometheus endpoints: 1. System Metrics Endpoint (default: :9090) 2. Application Metrics Endpoint (default: :9091) This separation allows: For simple applications or tests, create metrics directly: For production applications using Uber's fx, use the FXModule, which provides both the concrete type and the interface: To simplify your code and make it metrics-agnostic, use type aliases: This eliminates the need for adapters and allows you to switch implementations by changing only the alias definition. The metrics servers can be configured via environment variables: Set an address to the empty string ("") to disable that endpoint: ## 1. Counter - Cumulative metrics that only increase Use counters for tracking totals (requests, errors, bytes processed): ## 2. Gauge - Values that can go up or down Use gauges for current state (active connections, queue depth, temperature): ## 3. Histogram - Distribution tracking with quantiles Use histograms for latency, request sizes, or any value distribution: Prometheus will automatically calculate: ## 4. Summary - Client-side streaming quantiles Use summaries when you need precise quantile calculations on the client side: Here's a complete example of HTTP request instrumentation: Track database operations with detailed labels: Track application-specific business metrics: 1. Label Cardinality: 2. Metric Updates: 3. Histogram vs Summary: All methods on the Metrics struct and all Prometheus collectors are safe for concurrent use by multiple goroutines. No additional synchronization is needed. Exposed metrics can be scraped by Prometheus and visualized in Grafana or any compatible monitoring system. Example Prometheus scrape config: For unit tests, you can create a metrics instance without starting the servers.
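The package's own constructors are not reproduced here; the sketch below shows the counter and histogram patterns described above using plain client_golang, which this package wraps (the metric names and the endpoint address are illustrative):

    import (
        "log"
        "net/http"

        "github.com/prometheus/client_golang/prometheus"
        "github.com/prometheus/client_golang/prometheus/promhttp"
    )

    var (
        requestsTotal = prometheus.NewCounterVec(
            prometheus.CounterOpts{
                Name: "app_requests_total",
                Help: "Total number of handled requests.",
            },
            []string{"method", "status"}, // keep label cardinality low
        )
        requestDuration = prometheus.NewHistogram(prometheus.HistogramOpts{
            Name:    "app_request_duration_seconds",
            Help:    "Request latency distribution.",
            Buckets: prometheus.DefBuckets,
        })
    )

    func main() {
        reg := prometheus.NewRegistry()
        reg.MustRegister(requestsTotal, requestDuration)

        requestsTotal.WithLabelValues("GET", "200").Inc() // counters only go up
        requestDuration.Observe(0.042)                    // one latency sample

        // Serve an application-metrics endpoint (the package default is :9091).
        http.Handle("/metrics", promhttp.HandlerFor(reg, promhttp.HandlerOpts{}))
        log.Fatal(http.ListenAndServe(":9091", nil))
    }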
Package postgres provides a PostgreSQL-backed implementation of the types.DB interface from github.com/slackmgr/types. It uses pgx v5 with connection pooling (pgxpool) and stores all entities as JSONB with indexed columns for efficient querying. Create a client using New with functional options, call Client.Connect to establish the connection pool, and then Client.Init to create the database schema: Four tables are created automatically by Client.Init (table names are configurable via the corresponding With* options): Each table includes a version column and an attrs JSONB column that holds the full serialised entity. The underlying pgxpool can be tuned with the pool-specific options: WithPoolMaxConnections, WithPoolMinConnections, WithPoolMinIdleConnections, WithPoolMaxConnectionLifetime, WithPoolMaxConnectionIdleTime, WithPoolHealthCheckPeriod, and WithPoolMaxConnectionLifetimeJitter. Closed issues and alerts are assigned an expiry timestamp on write. Default TTLs are 180 days for issues and 30 days for alerts; both are configurable via WithIssuesTimeToLive and WithAlertsTimeToLive. A background goroutine periodically deletes expired rows. Its interval defaults to 1 hour and can be changed with WithTTLCleanupInterval or disabled entirely with WithTTLCleanupDisabled. When Client.Init is called with skipSchemaValidation set to false, it queries information_schema.columns and verifies that every expected column exists with the correct data type and nullability. Pass true to skip this check in environments where the schema is managed externally. SSL behaviour is controlled by WithSSLMode using the SSLMode constants (SSLModeDisable, SSLModeAllow, SSLModePrefer, SSLModeRequire, SSLModeVerifyCA, SSLModeVerifyFull). The default is SSLModePrefer.
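A minimal sketch of the connect/init flow described above. The option names come from this documentation, but the exact signatures of New, Connect and Init, and the import path, are assumptions:

    import (
        "context"
        "time"

        "github.com/slackmgr/postgres" // import path is an assumption
    )

    func connect(ctx context.Context) error {
        client, err := postgres.New(
            postgres.WithSSLMode(postgres.SSLModeRequire),
            postgres.WithPoolMaxConnections(10),
            postgres.WithIssuesTimeToLive(90*24*time.Hour),
        )
        if err != nil {
            return err
        }
        if err := client.Connect(ctx); err != nil {
            return err
        }
        // false: validate the schema against information_schema.columns.
        return client.Init(ctx, false)
    }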
Copyright 2023 Google LLC

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0. Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.

Package webrisk implements a client for the Web Risk API v4. At a high level, the implementation does the following: Essentially, the query is presented to three major components: the database, the cache, and the API. Each of these may satisfy the query immediately, or may say that it does not know and that the query should be satisfied by the next component. The goal of the database and cache is to satisfy as many queries as possible, to avoid using the API. Starting with a user query, a hash of the query is computed to preserve the privacy of the exact nature of the query.
For example, if the query was for a URL, then this would be the SHA256 hash of the URL in question. Given a query hash, we first check the local database (which is periodically synced with the global Web Risk API servers). This database will either tell us that the query is definitely safe, or that it does not have enough information. If we are unsure about the query, we check the local cache, which can satisfy queries immediately if the same query has been made recently. The cache will tell us that the query is either safe, unsafe, or unknown (because it's not in the cache or the entry has expired). If we are still unsure about the query, we finally query the API server, which is guaranteed to return an authoritative answer, assuming no networking failures.
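A minimal sketch of the hashing step only. The real client canonicalizes the URL and hashes URL expressions derived from it; this simply shows how a raw query is reduced to a digest before any lookup happens:

    import (
        "crypto/sha256"
        "fmt"
    )

    // queryHash computes the digest that stands in for the raw query, so the
    // original URL never has to leave the process.
    func queryHash(url string) [sha256.Size]byte {
        return sha256.Sum256([]byte(url))
    }

    func main() {
        fmt.Printf("%x\n", queryHash("http://example.com/"))
    }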
Package toondb provides a Go client for ToonDB, a high-performance embedded document database with HNSW vector search.
Package shards is a client-side database sharding wrapper around database/sql. It works by internally managing the database/sql.DB connections for each shard (github.com/gustapinto/shards.DB) and by providing some higher-level APIs over them (github.com/gustapinto/shards.Querier). Shards also provides some utility functions and data structures, such as a thread-safe map (see github.com/gustapinto/shards.SafeMap).
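The package handles shard routing internally; purely as an illustration of the client-side sharding idea, a hash-modulo router over plain *sql.DB handles might look like this (not the package's API):

    import (
        "database/sql"
        "hash/fnv"
    )

    // shardFor routes a key to one of the shard handles by hashing the key
    // and taking it modulo the number of shards.
    func shardFor(dbs []*sql.DB, key string) *sql.DB {
        h := fnv.New32a()
        h.Write([]byte(key))
        return dbs[h.Sum32()%uint32(len(dbs))]
    }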
Package sessions provides session support for net/http and valyala/fasthttp, with auto-GC and the ability to register an unlimited number of databases to Load and Update/Save sessions to an external server or to an external (NoSQL and/or SQL) database.

Usage net/http:

    // init a new sessions manager (if you use only one web framework inside
    // your app, you can use the package-level functions such as
    // sessions.Start/sessions.Destroy)
    manager := sessions.New(sessions.Config{})

    // start a session for a particular client
    manager.Start(http.ResponseWriter, *http.Request)

    // destroy a session from the server and client
    manager.Destroy(http.ResponseWriter, *http.Request)

Usage valyala/fasthttp:

    // init a new sessions manager (as above)
    manager := sessions.New(sessions.Config{})

    // start a session for a particular client
    manager.StartFasthttp(*fasthttp.RequestCtx)

    // destroy a session from the server and client
    manager.DestroyFasthttp(*fasthttp.RequestCtx)

Note that you can now use both fasthttp and net/http within the same sessions manager (.New) instance, so you can share sessions between a net/http app and a valyala/fasthttp app.
Package helix provides a high-availability dual-database client library designed to support a "Shared Nothing" architecture. Helix provides robustness through active-active dual writes, sticky reads, and asynchronous reconciliation across independent Cassandra clusters. Helix uses standard Go errors with clear semantics for dual-cluster operations. Write operations (Exec, ExecContext, batch.Exec) follow this error model: When both clusters fail, a types.DualClusterError is returned. Helix defines several sentinel errors for specific scenarios. Check for sentinel errors using errors.Is. Cluster-specific errors are wrapped in types.ClusterError. Context usage follows Go idioms; two equivalent patterns are supported: Both patterns respect context cancellation and deadlines. For dual-writes, each cluster operation gets an independent timeout context to prevent one slow cluster from blocking the other. Helix uses client-generated timestamps to ensure replay operations are idempotent. Without proper timestamps: Always configure a timestamp provider or use WithTimestamp(): Lightweight Transactions (INSERT ... IF NOT EXISTS, ScanCAS, etc.) are NOT safe in a shared-nothing dual-cluster architecture. Helix does not guarantee correctness for CAS/LWT operations. Example_contextUsage demonstrates two patterns for context handling in queries. Example_errorHandling demonstrates inspecting DualClusterError when both clusters fail.
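In the spirit of Example_errorHandling, a minimal sketch of inspecting these error types (the import path, and whether the types use pointer receivers, are assumptions):

    import (
        "errors"

        "example.com/helix/types" // placeholder; the real import path differs
    )

    // classifyWriteError sketches the error model: DualClusterError when both
    // clusters fail, ClusterError wrapping a single cluster's failure.
    func classifyWriteError(err error) string {
        var dual *types.DualClusterError
        if errors.As(err, &dual) {
            return "both clusters failed"
        }
        var single *types.ClusterError
        if errors.As(err, &single) {
            return "a cluster-specific failure was reported"
        }
        if err != nil {
            return "other failure"
        }
        return "ok"
    }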
Package lldb implements a low level database engine. The database model used could be considered a specific implementation of some small(est) intersection of models listed in [1]. As a settled term is lacking, it will be called here a 'Virtual memory model' (VMM). 2016-07-24: v1.0.4 brings some performance improvements. 2016-07-22: v1.0.3 brings some small performance improvements. 2016-07-12: v1.0.2 now uses packages from cznic/internal. 2016-07-12: v1.0.1 adds a license for testdata/fortunes.txt. 2016-07-11: First standalone release v1.0.0 of the package previously published as experimental (github.com/cznic/exp/lldb). A Filer is an abstraction of storage. A Filer may be a part of some process' virtual address space, an OS file, a networked remote file, etc. Persistence of the storage is optional, opaque to the VMM, and specific to a concrete Filer implementation. lldb provides a mechanism to allocate, reallocate (resize), deallocate (and later reclaim the unused) contiguous parts of a Filer, called blocks. Blocks are identified and referred to by a handle, an int64. In addition to the VMM-like services, lldb provides volatile and non-volatile BTrees. Keys and values of a BTree are limited in size to 64kB each (a bit more actually). Support for larger keys/values, if desired, can be built atop a BTree to certain limits. A handle is the abstracted storage counterpart of a memory address. There is one fundamental difference, though: resizing a block never results in a change to the handle which refers to the resized block, so a handle is more akin to a unique numeric id/key. Yet it shares one property of pointers - handles can be associated again with blocks after the original handle block was deallocated. In other words, a handle's uniqueness domain is the state of the database, not something comparable to e.g. an ever growing numbering sequence. Also, as with memory pointers, dangling handles can be created and blocks overwritten when such handles are used. Using a zero handle to refer to a block will not panic; however, the resulting error is effectively the same exceptional situation as dereferencing a nil pointer. Allocated/used blocks are limited in size to only a little bit more than 64kB. Bigger semantic entities/structures must be built in lldb's client code. The content of a block has no semantics attached; it's only a fully opaque `[]byte`. Use of "scalars" applies to EncodeScalars, DecodeScalars and Collate. The first two, the "to bytes" and "from bytes" functions, are suggested for handling multi-valued Allocator content items and/or keys/values of BTrees (using Collate for keys). Types called "scalar" are: Concrete implementations of some of the VMM interfaces are included to ease serving simple client code, for testing, and possibly as examples. More details are in the documentation of those implementations.
Package things3 provides a Go library for Things 3 on macOS with read-only database access and full URL Scheme support for creating and updating tasks. This package offers two main capabilities: All operations go through a single Client: For complex queries, use the type-safe fluent query builder: Create and update items via Things URL Scheme: Configure the client with functional options: The database path is discovered in the following order: The package uses integer-based enums that map directly to database values: For the official Things URL Scheme documentation, see: https://culturedcode.com/things/support/articles/2803573/
Package dynamodb provides the client and types for making API requests to Amazon DynamoDB. Amazon DynamoDB is a fully managed NoSQL database service that provides fast and predictable performance with seamless scalability. DynamoDB lets you offload the administrative burdens of operating and scaling a distributed database, so that you don't have to worry about hardware provisioning, setup and configuration, replication, software patching, or cluster scaling. With DynamoDB, you can create database tables that can store and retrieve any amount of data, and serve any level of request traffic. You can scale up or scale down your tables' throughput capacity without downtime or performance degradation, and use the AWS Management Console to monitor resource utilization and performance metrics. DynamoDB automatically spreads the data and traffic for your tables over a sufficient number of servers to handle your throughput and storage requirements, while maintaining consistent and fast performance. All of your data is stored on solid state disks (SSDs) and automatically replicated across multiple Availability Zones in an AWS region, providing built-in high availability and data durability. See https://docs.aws.amazon.com/goto/WebAPI/dynamodb-2012-08-10 for more information on this service. See the dynamodb package documentation for more information. https://docs.aws.amazon.com/sdk-for-go/api/service/dynamodb/ To use Amazon DynamoDB with the SDK, call the New function to create a new service client; with that client you can make API requests to the service. These clients are safe to use concurrently. See the SDK's documentation for more information on how to use the SDK. https://docs.aws.amazon.com/sdk-for-go/api/ See the aws.Config documentation for more information on configuring SDK clients. https://docs.aws.amazon.com/sdk-for-go/api/aws/#Config See the Amazon DynamoDB client DynamoDB for more information on creating a client for this service. https://docs.aws.amazon.com/sdk-for-go/api/service/dynamodb/#New Utility helpers to marshal and unmarshal AttributeValue to and from Go types can be found in the dynamodbattribute sub package. This package provides specialized functions for the common ways of working with AttributeValues, such as map[string]*AttributeValue, []*AttributeValue, and *AttributeValue directly. This is helpful for marshaling Go types for API operations such as PutItem, and for unmarshaling Query and Scan API responses. See the dynamodbattribute package documentation for more information. https://docs.aws.amazon.com/sdk-for-go/api/service/dynamodb/dynamodbattribute/ The expression package provides utility types and functions to build DynamoDB expressions for type-safe construction of API ExpressionAttributeNames and ExpressionAttributeValues. The package represents the various DynamoDB Expressions as structs named accordingly. For example, ConditionBuilder represents a DynamoDB Condition Expression, an UpdateBuilder represents a DynamoDB Update Expression, and so on. See the expression package documentation for more information. https://docs.aws.amazon.com/sdk-for-go/api/service/dynamodb/expression/
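A minimal sketch of creating a client and marshaling a Go value for PutItem with dynamodbattribute (the Book type, table name, and region are hypothetical):

    import (
        "github.com/aws/aws-sdk-go/aws"
        "github.com/aws/aws-sdk-go/aws/session"
        "github.com/aws/aws-sdk-go/service/dynamodb"
        "github.com/aws/aws-sdk-go/service/dynamodb/dynamodbattribute"
    )

    // Book is a hypothetical item type.
    type Book struct {
        ISBN  string `dynamodbav:"isbn"`
        Title string `dynamodbav:"title"`
    }

    // putBook creates a service client and writes one marshaled item.
    func putBook() error {
        sess := session.Must(session.NewSession(&aws.Config{
            Region: aws.String("us-west-2"),
        }))
        svc := dynamodb.New(sess) // safe for concurrent use

        item, err := dynamodbattribute.MarshalMap(Book{ISBN: "978-0", Title: "Example"})
        if err != nil {
            return err
        }
        _, err = svc.PutItem(&dynamodb.PutItemInput{
            TableName: aws.String("Books"),
            Item:      item,
        })
        return err
    }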
Package sdk is a client package for WASM functions running within a Tarmac server. This package provides user-friendly functions that wrap the Web Assembly Procedure Call (waPC) based functions of Tarmac. Guest WASM functions running inside Tarmac can use this library to call back the Tarmac host and perform host-level actions such as storing data within the database, logging specific data, or looking up configurations.
Package sessions provides tools to manage cookie-based web sessions. Special emphasis is placed on security by implementing OWASP recommendations, specifically the following features: In addition, the package provides the following functionality: While simple to use, the package offers a number of extensively documented configuration variables. It also does not assume specific backend technologies. That is, any session storage system may be used simply by implementing the PersistenceLayer interface (or parts of it). This package is currently not written to be run on multiple machines in a distributed fashion without a load balancer that implements sticky sessions. This may change in the future. Although some more configuration needs to happen for production readiness, the package's defaults allow you to get started very quickly. To get access to the current session, simply call Start(): By providing "true" instead of "false" to the Start() function, you can force the creation of a session, even if there previously was none. Once you have a session, you can identify a user across multiple HTTP requests. You may add values to the session, attach a user to it, cause its session ID to change, or destroy it again. For more extensive user-centered functions (for example, signing up, logging in and out, changing passwords etc.), see the subdirectory "users". Before putting your application into production, you must implement the NewSessionCookie function: You may choose a different expiry date, domain, and path, but the other fields are mandatory (given that you are using TLS, which you certainly should). You can change the name of the cookie by changing the SessionCookie variable. The default is the inconspicuous string "id". The following timeout values may be adjusted according to the requirements of your application: To further reduce the risk of session hijacking attacks, this package checks client IP addresses as well as user agent strings and destroys sessions if changes in these properties are detected. Refer to the AcceptRemoteIP and AcceptChangingUserAgent variables for more information. Sessions are stored in a local RAM cache (which is a simple map) whose size is defined by the MaxSessionCacheSize variable. If you set this variable to 0, no sessions are held locally. The SessionCacheExpiry controls when a session will be purged from the cache based on the last time it was used. The cache is write-through (except for session last access times). That is, every time a change is made to a session, that change is forwarded to the package's persistence layer to be saved. The persistence layer is a collection of functions which allow the storage and retrieval of objects from a permanent data store. For example, you may use an SQL database or a key-value store. See the documentation of PersistenceLayer for details on the functions to be implemented. If you need to implement only some of the functions, you may use ExtendablePersistenceLayer instead of creating your own class. The package default is to do nothing. That is, sessions are not persisted and therefore will get lost when purged from the local cache or when the application exits. Session objects implement gob.GobEncoder/gob.GobDecoder and json.Marshaler/json.Unmarshaler. While encoding to JSON allows you to easily inspect session attributes in your database, GOB serialization is preferred as it will restore session objects precisely.
(For example, the JSON package always unmarshals numbers into floats, even if they were originally integers.) It is recommended that you purge expired sessions from your data store from time to time, e.g. by using a cron job, because users may abandon your website, which will leave old sessions in your store. It is also recommended to call PurgeSessions() before exiting the program; this causes session last access times to be updated. This package provides a number of utility functions which may be useful in the context of session and user management. The CUID() function generates Base-62 "compact unique identifiers" suitable for user IDs. The RandomID() function generates random Base-62 strings of any length. The ReasonablePassword() function checks the strength of a password based on the recommendations of NIST SP 800-63B.
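Returning to the production checklist above, a sketch of providing NewSessionCookie, assuming it is the assignable package-level variable the text describes (the import path and the expiry values are illustrative):

    import (
        "net/http"
        "time"

        "github.com/rivo/sessions" // import path may differ in your setup
    )

    func init() {
        // Expiry, domain, and path are up to you; HttpOnly and Secure are
        // the mandatory fields called out above (Secure presumes TLS).
        sessions.NewSessionCookie = func() *http.Cookie {
            return &http.Cookie{
                Expires:  time.Now().Add(365 * 24 * time.Hour),
                MaxAge:   365 * 24 * 60 * 60,
                Path:     "/",
                HttpOnly: true,
                Secure:   true,
            }
        }
    }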
Package openplantbook provides a Go client library for the OpenPlantbook API. OpenPlantbook is a crowd-sourced database of plant care information, providing sensor data and care requirements for thousands of plants. The SDK supports two authentication methods: API Key (recommended for read-only operations): OAuth2 Client Credentials (for full API access): Get your credentials at: https://open.plantbook.io/ Search for plants: Get detailed plant information: The SDK supports extensive configuration through functional options: The SDK provides typed errors for common scenarios: Results are cached automatically: Cache can be customized or disabled: Client-side rate limiting is enabled by default (200 requests/day). This can be customized or disabled: For more information, see: https://github.com/rmrfslashbin/openplantbook-go