Package logging contains a Cloud Logging client suitable for writing logs. For reading logs, and working with sinks, metrics and monitored resources, see package cloud.google.com/go/logging/logadmin. This client uses Logging API v2. See https://cloud.google.com/logging/docs/api/v2/ for an introduction to the API. Use a Client to interact with the Cloud Logging API. For most use cases, you'll want to add log entries to a buffer to be periodically flushed (automatically and asynchronously) to the Cloud Logging service. You should call Client.Close before your program exits to flush any buffered log entries to the Cloud Logging service. For critical errors, you may want to send your log entries immediately. LogSync is slow and will block until the log entry has been sent, so it is not recommended for normal use. For cases when the runtime environment supports out-of-process log ingestion, such as a logging agent, you can opt in to write log entries to an io.Writer instead of ingesting them to the Cloud Logging service. Usually, you will use os.Stdout or os.Stderr as writers because Google Cloud logging agents are configured to capture logs from standard output. The entries are serialized to JSON and written as one-line strings following the structured logging format. See https://cloud.google.com/logging/docs/structured-logging#special-payload-fields for the format description. To instruct a Logger to redirect log entries, add the RedirectAsJSON() LoggerOption. An entry payload can be a string, as in the examples above. It can also be any value that can be marshaled to a JSON object, like a map[string]interface{} or a struct: If you have a []byte of JSON, wrap it in json.RawMessage: If you have a proto.Message and want to send it as a protobuf payload, marshal it to anypb.Any: You may want to use a standard log.Logger in your program. An Entry may have one of a number of severity levels associated with it. You can view Cloud logs for projects at https://console.cloud.google.com/logs/viewer. Use the dropdown at the top left. When running from a Google Cloud Platform VM, select "GCE VM Instance". Otherwise, select "Google Project" and then the project ID. Logs for organizations, folders and billing accounts can be viewed on the command line with the "gcloud logging read" command. To group all the log entries written during a single HTTP request, create two Loggers, a "parent" and a "child," with different log IDs. Both should be in the same project, and have the same MonitoredResource type and labels. - A child entry's timestamp must be within the time interval covered by the parent request. (i.e., before the parent.Timestamp and after the parent.Timestamp - parent.HTTPRequest.Latency. This assumes the parent.Timestamp marks the end of the request.) - The trace field must be populated in all of the entries and match exactly. You should observe the child log entries grouped under the parent on the console. The parent entry will not inherit the severity of its children; you must update the parent severity yourself.
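As a rough sketch of the buffered-write workflow described above (the project ID and log ID are placeholders, and error handling is simplified):

```
package main

import (
	"context"
	"log"

	"cloud.google.com/go/logging"
)

func main() {
	ctx := context.Background()

	client, err := logging.NewClient(ctx, "my-project") // placeholder project ID
	if err != nil {
		log.Fatalf("logging.NewClient: %v", err)
	}
	// Close flushes any buffered entries before the program exits.
	defer client.Close()

	lg := client.Logger("my-log") // placeholder log ID

	// Buffered, asynchronous write.
	lg.Log(logging.Entry{Payload: "something happened", Severity: logging.Info})

	// Synchronous write for critical errors; slow, so not for normal use.
	if err := lg.LogSync(ctx, logging.Entry{Payload: "critical failure", Severity: logging.Error}); err != nil {
		log.Printf("LogSync: %v", err)
	}

	// A standard *log.Logger backed by Cloud Logging.
	lg.StandardLogger(logging.Info).Println("via log.Logger")
}
```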
Package gosnowflake is a pure Go Snowflake driver for the database/sql package. Clients can use the database/sql package directly. For example: Use the Open() function to create a database handle with connection parameters: The Go Snowflake Driver supports the following connection syntaxes (or data source name (DSN) formats), where all parameters must be escaped; alternatively, use Config and DSN to construct a DSN string. For information about account identifiers, see the Snowflake documentation (https://docs.snowflake.com/en/user-guide/admin-account-identifier.html). The following example opens a database handle with the Snowflake account named "my_account" under the organization named "my_organization", where the username is "jsmith", password is "mypassword", database is "mydb", schema is "testschema", and warehouse is "mywh": The connection string (DSN) can contain both connection parameters (described below) and session parameters (https://docs.snowflake.com/en/sql-reference/parameters.html). The following connection parameters are supported: account <string>: Specifies your Snowflake account, where "<string>" is the account identifier assigned to your account by Snowflake. For information about account identifiers, see the Snowflake documentation (https://docs.snowflake.com/en/user-guide/admin-account-identifier.html). If you are using a global URL, then append the connection group and ".global" (e.g. "<account_identifier>-<connection_group>.global"). The account identifier and the connection group are separated by a dash ("-"), as shown above. This parameter is optional if your account identifier is specified after the "@" character in the connection string. region <string>: DEPRECATED. You may specify a region, such as "eu-central-1", with this parameter. However, since this parameter is deprecated, it is best to specify the region as part of the account parameter. For details, see the description of the account parameter. database: Specifies the database to use by default in the client session (can be changed after login). schema: Specifies the database schema to use by default in the client session (can be changed after login). warehouse: Specifies the virtual warehouse to use by default for queries, loading, etc. in the client session (can be changed after login). role: Specifies the role to use by default for accessing Snowflake objects in the client session (can be changed after login). passcode: Specifies the passcode provided by Duo when using multi-factor authentication (MFA) for login. passcodeInPassword: false by default. Set to true if the MFA passcode is embedded in the login password. Appends the MFA passcode to the end of the password. loginTimeout: Specifies the timeout, in seconds, for login. The default is 60 seconds. The login request gives up once this timeout elapses. requestTimeout: Specifies the timeout, in seconds, for a query to complete. 0 (zero) specifies that the driver should wait indefinitely. The default is 0 seconds. The query request gives up once this timeout elapses. authenticator: Specifies the authenticator to use for authenticating user credentials: To use the internal Snowflake authenticator, specify snowflake (Default). To authenticate through Okta, specify https://<okta_account_name>.okta.com (URL prefix for Okta). To authenticate using your IDP via a browser, specify externalbrowser. To authenticate via OAuth, specify oauth and provide an OAuth Access Token (see the token parameter below).
application: Identifies your application to Snowflake Support. insecureMode: false by default. Set to true to bypass the Online Certificate Status Protocol (OCSP) certificate revocation check. IMPORTANT: Change the default value for testing or emergency situations only. token: a token that can be used to authenticate. Should be used in conjunction with the "oauth" authenticator. client_session_keep_alive: Set to true to have a heartbeat in the background every hour to keep the connection alive, so that the connection session will never expire. Care should be taken in using this option, as it keeps the session open for as long as the process is alive. ocspFailOpen: true by default. Set to false to make the OCSP check fail closed. validateDefaultParameters: true by default. Set to false to disable checks on the existence of, and privileges for, the Database, Schema, Warehouse and Role when setting up the connection. tracing: Specifies the logging level to be used. Set to error by default. Valid values are trace, debug, info, print, warning, error, fatal, panic. disableQueryContextCache: Disables parsing of the query context returned from the server, as well as resending it to the server. Default value is false. clientConfigFile: Specifies the location of the client configuration JSON file, in which you can configure the Easy Logging feature. All other parameters are interpreted as session parameters (https://docs.snowflake.com/en/sql-reference/parameters.html). For example, the TIMESTAMP_OUTPUT_FORMAT session parameter can be set by adding: A complete connection string looks similar to the following: Session-level parameters can also be set by using the SQL command "ALTER SESSION" (https://docs.snowflake.com/en/sql-reference/sql/alter-session.html). Alternatively, use the OpenWithConfig() function to create a database handle with the specified Config. The Go Snowflake Driver honors the environment variables HTTP_PROXY, HTTPS_PROXY and NO_PROXY for the forward proxy setting. NO_PROXY specifies which hostname endings should be allowed to bypass the proxy server, e.g. no_proxy=.amazonaws.com means that Amazon S3 access does not need to go through the proxy. NO_PROXY does not support wildcards. Each value specified should be one of the following: The end of a hostname (or a complete hostname), for example: ".amazonaws.com" or "xy12345.snowflakecomputing.com". An IP address, for example "192.196.1.15". If more than one value is specified, values should be separated by commas, for example: By default, the driver's built-in logger exposes logrus's FieldLogger and defaults to the INFO level. Users can use SetLogger in driver.go to set a customized logger for the gosnowflake package. To enable debug logging for the driver, call SetLogLevel("debug") on the SFLogger interface, as shown in the demo code at cmd/logger.go. To redirect the logs, use the SFLogger.SetOutput method. A custom query tag can be set in the context. Each query run with this context will include the custom query tag as metadata that will appear in the Query Tag column in the Query History log. For example: A specific query request ID can be set in the context and will be passed through in place of the default randomized request ID. For example: If you need the query ID for your query, you have to use a raw connection. For queries: ``` ``` For execs: ``` ``` The result of your query can be retrieved by setting the query ID in the WithFetchResultByID context. ``` ``` From 0.5.0, signal handling responsibility has moved to the applications.
If you want to cancel a query/command by Ctrl+C, add an os.Interrupt trap to the context used by methods that can take a context parameter (e.g. QueryContext, ExecContext). See cmd/selectmany.go for the full example. The Go Snowflake Driver now supports the Arrow data format for data transfers between Snowflake and the Golang client. The Arrow data format avoids extra conversions between binary and textual representations of the data. The Arrow data format can improve performance and reduce memory consumption in clients. Snowflake continues to support the JSON data format. The data format is controlled by the session-level parameter GO_QUERY_RESULT_FORMAT. To use JSON format, execute: The valid values for the parameter are: If the user attempts to set the parameter to an invalid value, an error is returned. The parameter name and the parameter value are case-insensitive. This parameter can be set only at the session level. Usage notes: The Arrow data format reduces rounding errors in floating point numbers. You might see slightly different values for floating point numbers when using Arrow format than when using JSON format. In order to take advantage of the increased precision, you must pass in the context.Context object provided by the WithHigherPrecision function when querying. Traditionally, the rows.Scan() method returned a string when a variable of type interface{} was passed in. Turning on the flag ENABLE_HIGHER_PRECISION via WithHigherPrecision will return the natural, expected data type as well. For some numeric data types, the driver can retrieve larger values when using the Arrow format than when using the JSON format. For example, using Arrow format allows the full range of SQL NUMERIC(38,0) values to be retrieved, while using JSON format allows only values in the range supported by the Golang int64 data type. Users should ensure that Golang variables are declared using the appropriate data type for the full range of values contained in the column. For an example, see below. When using the Arrow format, the driver supports more Golang data types and more ways to convert SQL values to those Golang data types. The table below lists the supported Snowflake SQL data types and the corresponding Golang data types. The columns are: The SQL data type. The default Golang data type that is returned when you use snowflakeRows.Scan() to read data from Arrow data format via an interface{}. The possible Golang data types that can be returned when you use snowflakeRows.Scan() to read data from Arrow data format directly. The default Golang data type that is returned when you use snowflakeRows.Scan() to read data from JSON data format via an interface{}. (All returned values are strings.) The standard Golang data type that is returned when you use snowflakeRows.Scan() to read data from JSON data format directly.
Go Data Types for Scan()

====================================================================================================================
                       |                      ARROW                        |                    JSON
====================================================================================================================
SQL Data Type          | Default Go Data Type       | Supported Go Data    | Default Go Data Type   | Supported Go Data
                       | for Scan() interface{}     | Types for Scan()     | for Scan() interface{} | Types for Scan()
====================================================================================================================
BOOLEAN                | bool                                              | string                 | bool
--------------------------------------------------------------------------------------------------------------------
VARCHAR                | string                                            | string
--------------------------------------------------------------------------------------------------------------------
DOUBLE                 | float32, float64 [1], [2]                         | string                 | float32, float64
--------------------------------------------------------------------------------------------------------------------
INTEGER that           | int, int8, int16, int32, int64                    | string                 | int, int8, int16,
fits in int64          | [1], [2]                                          |                        | int32, int64
--------------------------------------------------------------------------------------------------------------------
INTEGER that doesn't   | int, int8, int16, int32, int64, *big.Int          | string                 | error
fit in int64           | [1], [2], [3], [4]                                |
--------------------------------------------------------------------------------------------------------------------
NUMBER(P, S)           | float32, float64, *big.Float                      | string                 | float32, float64
where S > 0            | [1], [2], [3], [5]                                |
--------------------------------------------------------------------------------------------------------------------
DATE                   | time.Time                                         | string                 | time.Time
--------------------------------------------------------------------------------------------------------------------
TIME                   | time.Time                                         | string                 | time.Time
--------------------------------------------------------------------------------------------------------------------
TIMESTAMP_LTZ          | time.Time                                         | string                 | time.Time
--------------------------------------------------------------------------------------------------------------------
TIMESTAMP_NTZ          | time.Time                                         | string                 | time.Time
--------------------------------------------------------------------------------------------------------------------
TIMESTAMP_TZ           | time.Time                                         | string                 | time.Time
--------------------------------------------------------------------------------------------------------------------
BINARY                 | []byte                                            | string                 | []byte
--------------------------------------------------------------------------------------------------------------------
ARRAY                  | string                                            | string
--------------------------------------------------------------------------------------------------------------------
OBJECT                 | string                                            | string
--------------------------------------------------------------------------------------------------------------------
VARIANT                | string                                            | string

[1] Converting from a higher precision data type to a lower precision data type via the snowflakeRows.Scan() method can lose low bits (lose precision), lose high bits (completely change the value), or result in error. [2] Attempting to convert from a higher precision data type to a lower precision data type via interface{} causes an error. [3] Higher precision data types like *big.Int and *big.Float can be accessed by querying with a context returned by WithHigherPrecision().
[4] You cannot directly Scan() into the alternative data types via snowflakeRows.Scan(), but can convert to those data types by using .Int64()/.String()/.Uint64() methods. For an example, see below. [5] You cannot directly Scan() into the alternative data types via snowflakeRows.Scan(), but can convert to those data types by using .Float32()/.String()/.Float64() methods. For an example, see below. Note: SQL NULL values are converted to Golang nil values, and vice-versa. The following example shows how to retrieve very large values using the math/big package. This example retrieves a large INTEGER value into an interface{} and then extracts a big.Int value from that interface. If the value fits into an int64, then the code also copies the value to a variable of type int64. Note that a context that enables higher precision must be passed in with the query. If the variable named "rows" is known to contain a big.Int, then you can use the following instead of scanning into an interface and then converting to a big.Int: If the variable named "rows" contains a big.Int, then each of the following fails: Similar code and rules also apply to big.Float values. If you are not sure what data type will be returned, you can use code similar to the following to check the data type of the returned value: ## Arrow batches You can retrieve data in a columnar format similar to the format a server returns. You must use the `WithArrowBatches` context, similar to the following: Limitations: 1. For some queries Snowflake may decide to return data in JSON format (examples: `SHOW PARAMETERS` or `ls @stage`). Such results cannot be used with the Arrow batches context. 2. Snowflake handles timestamps in a range that is wider than the range available in the Arrow timestamp type. Because of that, special treatment is needed (see below). ### Handling timestamps in Arrow batches Snowflake returns timestamps natively (from backend to driver) in multiple formats. The Arrow timestamp is an 8-byte data type, which is insufficient to handle the larger date and time ranges used by Snowflake. Also, Snowflake supports 0-9 (nanosecond) digit precision for seconds, while Arrow supports only 3 (millisecond), 6 (microsecond), and 9 (nanosecond) precision. Consequently, Snowflake uses a custom timestamp format in Arrow, which differs depending on the timestamp type and precision. If you want to use timestamps in Arrow batches, you have two options: ### Invalid UTF-8 characters in Arrow batches Snowflake previously allowed users to upload data with invalid UTF-8 characters. Consequently, Arrow records containing string columns in Snowflake could include these invalid UTF-8 characters. However, according to the Arrow specifications (https://arrow.apache.org/docs/cpp/api/datatype.html and https://github.com/apache/arrow/blob/a03d957b5b8d0425f9d5b6c98b6ee1efa56a1248/go/arrow/datatype.go#L73-L74), Arrow string columns should only contain UTF-8 characters. To address this issue and prevent potential downstream disruptions, the context option `enableArrowBatchesUtf8Validation` was introduced. When enabled, this feature iterates through all values in string columns, identifying and replacing any invalid characters with `�`. This ensures that Arrow records conform to the UTF-8 standards, preventing validation failures in downstream services like the Rust Arrow library that impose strict validation checks. ### WithHigherPrecision in Arrow batches To preserve BigDecimal values within Arrow batches, use `WithHigherPrecision`.
This offers two main benefits: it helps avoid precision loss and defers the conversion to upstream services. Without this setting, all non-zero scale numbers will be converted to float64, potentially resulting in loss of precision, and zero-scale numbers (DECIMAL256, DECIMAL128) will be converted to int64, which could lead to overflow. Binding allows a SQL statement to use a value that is stored in a Golang variable. Without binding, a SQL statement specifies values by specifying literals inside the statement. For example, the following statement uses the literal value "42" in an UPDATE statement: With binding, you can execute a SQL statement that uses a value that is inside a variable. For example: The "?" inside the "VALUES" clause specifies that the SQL statement uses the value from a variable. Binding data that involves time zones can require special handling. For details, see the section titled "Timestamps with Time Zones". Version 1.6.23 (and later) of the driver takes advantage of sql.Null types, which enable the proper handling of null parameters inside function calls, i.e.: Timestamp nullability has to be achieved by wrapping the sql.NullTime type, as Snowflake provides several date and time types which are mapped to a single Go time.Time type: Version 1.3.9 (and later) of the Go Snowflake Driver supports the ability to bind an array variable to a parameter in a SQL INSERT statement. You can use this technique to insert multiple rows in a single batch. As an example, the following code inserts rows into a table that contains integer, float, boolean, and string columns. The example binds arrays to the parameters in the INSERT statement. If the array contains SQL NULL values, use a []interface{} slice, which allows Golang nil values. This feature is available in version 1.6.12 (and later) of the driver. For example, For []interface{} slices containing time.Time values, a binding parameter flag is required for the preceding array variable in the Array() function. This feature is available in version 1.6.13 (and later) of the driver. For example, Note: For alternative ways to load data into the Snowflake database (including bulk loading using the COPY command), see Loading Data into Snowflake (https://docs.snowflake.com/en/user-guide-data-load.html). When you use array binding to insert a large number of values, the driver can improve performance by streaming the data (without creating files on the local machine) to a temporary stage for ingestion. The driver automatically does this when the number of values exceeds a threshold (no changes are needed to user code). In order for the driver to send the data to a temporary stage, the user must have the following privilege on the schema: If the user does not have this privilege, the driver falls back to sending the data with the query to the Snowflake database. In addition, the current database and schema for the session must be set. If these are not set, the CREATE TEMPORARY STAGE command executed by the driver can fail with the following error: For alternative ways to load data into the Snowflake database (including bulk loading using the COPY command), see Loading Data into Snowflake (https://docs.snowflake.com/en/user-guide-data-load.html). Go's database/sql package supports the ability to bind a parameter in a SQL statement to a time.Time variable. However, when the client binds data to send to the server, the driver cannot determine the correct Snowflake date/timestamp data type to associate with the binding parameter.
For example: To resolve this issue, a binding parameter flag is introduced that associates any subsequent time.Time type to the DATE, TIME, TIMESTAMP_LTZ, TIMESTAMP_NTZ or BINARY data type. The above example could be rewritten as follows: The driver fetches TIMESTAMP_TZ (timestamp with time zone) data using the offset-based Location types, which represent a collection of time offsets in use in a geographical area, such as CET (Central European Time) or UTC (Coordinated Universal Time). The offset-based Location data is generated and cached when a Go Snowflake Driver application starts, and if the given offset is not in the cache, it is generated dynamically. Currently, Snowflake does not support the name-based Location types (e.g. "America/Los_Angeles"). For more information about Location types, see the Go documentation for https://golang.org/pkg/time/#Location. Internally, this feature leverages the []byte data type. As a result, BINARY data cannot be bound without the binding parameter flag. In the following example, sf is an alias for the gosnowflake package: The driver directly downloads a result set from the cloud storage if the size is large. This is required to shift workloads from the Snowflake database to the clients for scale. The download is performed asynchronously by a goroutine named "Chunk Downloader", so that the driver can fetch the next result set while the application consumes the current result set. The application may change the number of result set chunk downloaders if required. Note that this does not reduce the memory footprint by itself. Consider Custom JSON Decoder. Custom JSON Decoder for Parsing Result Set (Experimental) The application may have the driver use a custom JSON decoder that incrementally parses the result set as follows. This option will reduce the memory footprint to half or even a quarter, but it can significantly degrade performance depending on the environment. The test cases running on a Travis Ubuntu box show a five times smaller memory footprint while being four times slower. Be cautious when using the option. The Go Snowflake Driver supports JWT (JSON Web Token) authentication. To enable this feature, construct the DSN with fields "authenticator=SNOWFLAKE_JWT&privateKey=<your_private_key>", or using a Config structure specifying: The <your_private_key> should be a base64 URL encoded PKCS8 RSA private key string. One way to encode a byte slice to base64 URL format is through the base64.URLEncoding.EncodeToString() function. On the server side, you can alter the public key with the SQL command: The <your_public_key> should be a base64 standard encoded PKI public key string. One way to encode a byte slice to base64 standard format is through the base64.StdEncoding.EncodeToString() function. To generate a valid key pair, you can execute the following commands in the shell: Note: As of February 2020, Golang's official library does not support passcode-encrypted PKCS8 private keys. For security purposes, Snowflake highly recommends that you store the passcode-encrypted private key on the disk and decrypt the key in your application using a library you trust. JWT tokens are recreated on each retry and they are valid (`exp` claim) for `jwtTimeout` seconds. Each retry timeout is configured by `jwtClientTimeout`. Retries are limited by the total time of `loginTimeout`. The driver allows authentication using an external browser. When a connection is created, the driver will open the browser window and ask the user to sign in.
To enable this feature, construct the DSN with the field "authenticator=EXTERNALBROWSER" or use a Config structure with the following Authenticator specified: External browser authentication implements a timeout mechanism. This prevents the driver from hanging indefinitely when the browser window is closed or not responding. The timeout defaults to 120s and can be changed by setting the DSN field "externalBrowserTimeout=240" (time in seconds) or using a Config structure with the following ExternalBrowserTimeout specified: This feature is available in version 1.3.8 or later of the driver. By default, Snowflake returns an error for queries issued with multiple statements. This restriction helps protect against SQL Injection attacks (https://en.wikipedia.org/wiki/SQL_injection). The multi-statement feature allows users to skip this restriction and execute multiple SQL statements through a single Golang function call. However, this opens up the possibility for SQL injection, so it should be used carefully. The risk can be reduced by specifying the exact number of statements to be executed, which makes it more difficult to inject a statement by appending it. More details are below. The Go Snowflake Driver provides two functions that can execute multiple SQL statements in a single call: To compose a multi-statement query, simply create a string that contains all the queries, separated by semicolons, in the order in which the statements should be executed. To protect against SQL Injection attacks while using the multi-statement feature, pass a Context that specifies the number of statements in the string. For example: When multiple queries are executed by a single call to QueryContext(), multiple result sets are returned. After you process the first result set, get the next result set (for the next SQL statement) by calling NextResultSet(). The following pseudo-code shows how to process multiple result sets: The function db.ExecContext() returns a single result, which is the sum of the number of rows changed by each individual statement. For example, if your multi-statement query executed two UPDATE statements, each of which updated 10 rows, then the result returned would be 20. Individual row counts for individual statements are not available. The following code shows how to retrieve the result of a multi-statement query executed through db.ExecContext(): Note: Because a multi-statement ExecContext() returns a single value, you cannot detect offsetting errors. For example, suppose you expected the return value to be 20 because you expected each UPDATE statement to update 10 rows. If one UPDATE statement updated 15 rows and the other UPDATE statement updated only 5 rows, the total would still be 20. You would see no indication that the UPDATE statements had not functioned as expected. The ExecContext() function does not return an error if passed a query (e.g. a SELECT statement). However, it still returns only a single value, not a result set, so using it to execute queries (or a mix of queries and non-query statements) is impractical. The QueryContext() function does not return an error if passed non-query statements (e.g. DML). The function returns a result set for each statement, whether or not the statement is a query. For each non-query statement, the result set contains a single row that contains a single column; the value is the number of rows changed by the statement. If you want to execute a mix of query and non-query statements (e.g.
a mix of SELECT and DML statements) in a multi-statement query, use QueryContext(). You can retrieve the result sets for the queries, and you can retrieve or ignore the row counts for the non-query statements. Note: PUT statements are not supported for multi-statement queries. If a SQL statement passed to ExecContext() or QueryContext() fails to compile or execute, that statement is aborted, and subsequent statements are not executed. Any statements prior to the aborted statement are unaffected. For example, if the statements below are run as one multi-statement query, the multi-statement query fails on the third statement, and an exception is thrown. If you then query the contents of the table named "test", the values 1 and 2 would be present. When using the QueryContext() and ExecContext() functions, Golang code can check for errors in the usual way. For example: Preparing statements and using bind variables are also not supported for multi-statement queries. The Go Snowflake Driver supports asynchronous execution of SQL statements. Asynchronous execution allows you to start executing a statement and then retrieve the result later without being blocked while waiting. While waiting for the result of a SQL statement, you can perform other tasks, including executing other SQL statements. Most of the steps to execute an asynchronous query are the same as the steps to execute a synchronous query. However, there is an additional step, which is that you must call the WithAsyncMode() function to update your Context object to specify that asynchronous mode is enabled. In the code below, the call to "WithAsyncMode()" is specific to asynchronous mode. The rest of the code is compatible with both asynchronous mode and synchronous mode. The function db.QueryContext() returns an object of type snowflakeRows regardless of whether the query is synchronous or asynchronous. However: The call to the Next() function of snowflakeRows is always synchronous (i.e. blocking). If the query has not yet completed and the snowflakeRows object (named "rows" in this example) has not been filled in yet, then rows.Next() waits until the result set has been filled in. More generally, calls to any Golang SQL API function implemented in snowflakeRows or snowflakeResult are blocking calls, and wait if results are not yet available. (Examples of other synchronous calls include: snowflakeRows.Err(), snowflakeRows.Columns(), snowflakeRows.ColumnTypes(), snowflakeRows.Scan(), and snowflakeResult.RowsAffected().) Because the example code above executes only one query and no other activity, there is no significant difference in behavior between asynchronous and synchronous behavior. The differences become significant if, for example, you want to perform some other activity after the query starts and before it completes. The example code below starts a query, which runs in the background, and then retrieves the results later. This example uses small SELECT statements that do not retrieve enough data to require asynchronous handling. However, the technique works for larger data sets, and for situations where the programmer might want to do other work after starting the queries and before retrieving the results. For a more elaborate example, please see cmd/async/async.go. The Go Snowflake Driver supports the PUT and GET commands. The PUT command copies a file from a local computer (the computer where the Golang client is running) to a stage on the cloud platform.
The GET command copies data files from a stage on the cloud platform to a local computer. See the following for information on the syntax and supported parameters: ## Using PUT The following example shows how to run a PUT command by passing a string to the db.Query() function: "<local_file>" should include the file path as well as the name. Snowflake recommends using an absolute path rather than a relative path. For example: Different client platforms (e.g. Linux, Windows) have different path name conventions. Ensure that you specify path names appropriately. This is particularly important on Windows, which uses the backslash character as both an escape character and as a separator in path names. To send information from a stream (rather than a file), use code similar to the code below. (The ReplaceAll() function is needed on Windows to handle backslashes in the path to the file.) Note: PUT statements are not supported for multi-statement queries. ## Using GET The following example shows how to run a GET command by passing a string to the db.Query() function: "<local_file>" should include the file path as well as the name. Snowflake recommends using an absolute path rather than a relative path. For example: ## Specifying temporary directory for encryption and compression Putting and getting files requires compression and/or encryption, which is done in the OS temporary directory. If you cannot use the default temporary directory for your OS, or you want to specify it yourself, you can use the "tmpDirPath" DSN parameter. Remember to encode slashes. Example: ## Using custom configuration for PUT/GET If you want to override some default configuration options, you can use the `WithFileTransferOptions` context. There are multiple config parameters, including progress bars or compression.
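Tying the connection notes above together, here is a rough sketch of building a DSN from a Config and running a query; the account identifier and credentials are placeholders:

```
package main

import (
	"context"
	"database/sql"
	"log"

	sf "github.com/snowflakedb/gosnowflake"
)

func main() {
	cfg := &sf.Config{
		Account:   "my_organization-my_account", // placeholder account identifier
		User:      "jsmith",
		Password:  "mypassword",
		Database:  "mydb",
		Schema:    "testschema",
		Warehouse: "mywh",
	}
	dsn, err := sf.DSN(cfg)
	if err != nil {
		log.Fatalf("DSN: %v", err)
	}

	db, err := sql.Open("snowflake", dsn)
	if err != nil {
		log.Fatalf("open: %v", err)
	}
	defer db.Close()

	var v int
	if err := db.QueryRowContext(context.Background(), "SELECT 1").Scan(&v); err != nil {
		log.Fatalf("query: %v", err)
	}
	log.Println("result:", v)
}
```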
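As a sketch of the higher-precision notes above (continuing with the db handle from the previous example and additionally importing math/big; the table and column names are hypothetical):

```
ctx := sf.WithHigherPrecision(context.Background())
rows, err := db.QueryContext(ctx, "SELECT big_col FROM my_table")
if err != nil {
	log.Fatal(err)
}
defer rows.Close()

for rows.Next() {
	var v interface{}
	if err := rows.Scan(&v); err != nil {
		log.Fatal(err)
	}
	// With higher precision enabled, very large integers arrive as *big.Int.
	if b, ok := v.(*big.Int); ok {
		if b.IsInt64() {
			n := b.Int64() // copy into int64 only when the value fits
			_ = n
		}
	}
}
```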
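And a sketch of array binding for batch inserts, as described above (again reusing the db handle; the table name and columns are hypothetical):

```
ints := []int{1, 2, 3}
strs := []string{"a", "b", "c"}

// Each Array() argument binds a whole column of values, inserting one row
// per element in a single batch.
if _, err := db.ExecContext(context.Background(),
	"INSERT INTO my_table (n, s) VALUES (?, ?)",
	sf.Array(&ints), sf.Array(&strs)); err != nil {
	log.Fatal(err)
}
```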
Package directoryservice provides the API client, operations, and parameter types for AWS Directory Service. Directory Service Directory Service is a web service that makes it easy for you to set up and run directories in the Amazon Web Services cloud, or connect your Amazon Web Services resources with an existing self-managed Microsoft Active Directory. This guide provides detailed information about Directory Service operations, data types, parameters, and errors. For information about Directory Service features, see Directory Service (https://aws.amazon.com/directoryservice/) and the Directory Service Administration Guide (http://docs.aws.amazon.com/directoryservice/latest/admin-guide/what_is.html). Amazon Web Services provides SDKs that consist of libraries and sample code for various programming languages and platforms (Java, Ruby, .Net, iOS, Android, etc.). The SDKs provide a convenient way to create programmatic access to Directory Service and other Amazon Web Services services. For more information about the Amazon Web Services SDKs, including how to download and install them, see Tools for Amazon Web Services (http://aws.amazon.com/tools/).
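A minimal sketch of creating a client with the AWS SDK for Go v2 and listing directories; the region/credential setup relies on the default configuration sources, and pagination is omitted:

```
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/config"
	"github.com/aws/aws-sdk-go-v2/service/directoryservice"
)

func main() {
	ctx := context.Background()

	cfg, err := config.LoadDefaultConfig(ctx)
	if err != nil {
		log.Fatalf("load config: %v", err)
	}

	client := directoryservice.NewFromConfig(cfg)

	out, err := client.DescribeDirectories(ctx, &directoryservice.DescribeDirectoriesInput{})
	if err != nil {
		log.Fatalf("DescribeDirectories: %v", err)
	}
	for _, d := range out.DirectoryDescriptions {
		fmt.Println(aws.ToString(d.Name))
	}
}
```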
Package stackdriver contains the OpenCensus exporters for Stackdriver Monitoring and Stackdriver Tracing. This exporter can be used to send metrics to Stackdriver Monitoring and traces to Stackdriver Trace. The package uses Application Default Credentials to authenticate by default. See: https://developers.google.com/identity/protocols/application-default-credentials Alternatively, pass the authentication options in both the MonitoringClientOptions and the TraceClientOptions fields of Options. This exporter supports exporting OpenCensus views to Stackdriver Monitoring. Each registered view becomes a metric in Stackdriver Monitoring, with the tags becoming labels. The aggregation function determines the metric kind: LastValue aggregations generate Gauge metrics and all other aggregations generate Cumulative metrics. In order to be able to push your stats to Stackdriver Monitoring, you must: These steps enable the API but don't require that your app is hosted on Google Cloud Platform. This exporter supports exporting Trace Spans to Stackdriver Trace. It also supports the Google "Cloud Trace" propagation format header.
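A minimal sketch of wiring the exporter into OpenCensus stats and trace, assuming Application Default Credentials and a placeholder project ID:

```
package main

import (
	"log"

	"contrib.go.opencensus.io/exporter/stackdriver"
	"go.opencensus.io/stats/view"
	"go.opencensus.io/trace"
)

func main() {
	exporter, err := stackdriver.NewExporter(stackdriver.Options{
		ProjectID: "my-project", // placeholder
	})
	if err != nil {
		log.Fatalf("stackdriver.NewExporter: %v", err)
	}
	// Flush any buffered metrics/spans before the program exits.
	defer exporter.Flush()

	// Registered views are exported as Stackdriver Monitoring metrics.
	view.RegisterExporter(exporter)
	// Sampled spans are exported to Stackdriver Trace.
	trace.RegisterExporter(exporter)
}
```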
Package applicationdiscoveryservice provides the API client, operations, and parameter types for AWS Application Discovery Service. Amazon Web Services Application Discovery Service Amazon Web Services Application Discovery Service (Application Discovery Service) helps you plan application migration projects. It automatically identifies servers, virtual machines (VMs), and network dependencies in your on-premises data centers. For more information, see the Amazon Web Services Application Discovery Service FAQ (http://aws.amazon.com/application-discovery/faqs/). Application Discovery Service offers three ways of performing discovery and collecting data about your on-premises servers: Agentless discovery using Amazon Web Services Application Discovery Service Agentless Collector (Agentless Collector), which doesn't require you to install an agent on each host. Agentless Collector gathers server information regardless of the operating systems, which minimizes the time required for initial on-premises infrastructure assessment. Agentless Collector doesn't collect information about network dependencies; only agent-based discovery collects that information. Agent-based discovery using the Amazon Web Services Application Discovery Agent (Application Discovery Agent), which you install on one or more hosts in your data center, collects a richer set of data than agentless discovery. The agent captures infrastructure and application information, including an inventory of running processes, system performance information, resource utilization, and network dependencies. The information collected by agents is secured at rest and in transit to the Application Discovery Service database in the Amazon Web Services cloud. For more information, see Amazon Web Services Application Discovery Agent (https://docs.aws.amazon.com/application-discovery/latest/userguide/discovery-agent.html). Amazon Web Services Partner Network (APN) solutions integrate with Application Discovery Service, enabling you to import details of your on-premises environment directly into Amazon Web Services Migration Hub (Migration Hub) without using Agentless Collector or Application Discovery Agent. Third-party application discovery tools can query Amazon Web Services Application Discovery Service, and they can write to the Application Discovery Service database using the public API. In this way, you can import data into Migration Hub and view it, so that you can associate applications with servers and track migrations. Working With This Guide This API reference provides descriptions, syntax, and usage examples for each of the actions and data types for Application Discovery Service. The topic for each action shows the API request parameters and the response. Alternatively, you can use one of the Amazon Web Services SDKs to access an API that is tailored to the programming language or platform that you're using. For more information, see Amazon Web Services SDKs (http://aws.amazon.com/tools/#SDKs). This guide is intended for use with the Amazon Web Services Application Discovery Service User Guide (https://docs.aws.amazon.com/application-discovery/latest/userguide/). All data is handled according to the Amazon Web Services Privacy Policy (https://aws.amazon.com/privacy/). You can operate Application Discovery Service offline to inspect collected data before it is shared with the service.
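A minimal sketch of creating an Application Discovery Service client with the AWS SDK for Go v2 and listing discovery agents; credential/region setup uses the default configuration sources, and pagination is omitted:

```
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/config"
	"github.com/aws/aws-sdk-go-v2/service/applicationdiscoveryservice"
)

func main() {
	ctx := context.Background()

	cfg, err := config.LoadDefaultConfig(ctx)
	if err != nil {
		log.Fatalf("load config: %v", err)
	}

	client := applicationdiscoveryservice.NewFromConfig(cfg)

	out, err := client.DescribeAgents(ctx, &applicationdiscoveryservice.DescribeAgentsInput{})
	if err != nil {
		log.Fatalf("DescribeAgents: %v", err)
	}
	for _, agent := range out.AgentsInfo {
		fmt.Println(aws.ToString(agent.AgentId), agent.Health)
	}
}
```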
Package cninfra is the parent package for all packages that are parts of the CN-Infra platform - a Golang platform for building cloud-native microservices. The CN-Infra platform is a modular platform comprising a core and a set of plugins. The core provides lifecycle management for plugins, while the plugins provide the functionality of the platform. A plugin can consist of one or more Golang packages. Out of the box, the CN-Infra platform provides reusable plugins for logging, health checks, messaging (e.g. Kafka), a common front-end API and back-end connectivity to various data stores (Etcd, Cassandra, Redis, ...), and REST and gRPC APIs. The CN-Infra platform can be extended by adding a new platform plugin, for example a new data store client or a message bus adapter. Also, each application built on top of the platform is basically just a set of application-specific plugins.
Package vfs provides a pluggable, extensible, and opinionated set of file system functionality for Go across a number of file system types such as os, S3, and GCS. When building our platform, initially we wrote a library that was something to the effect of Not only was it ugly, but because the behaviors of each "file system" were different, we had to constantly alter the file locations and pass a bucket string (even if the fs didn't know what a bucket was). We found a handful of third-party libraries that were interesting but none of them had everything we needed/wanted. Of particular inspiration was https://github.com/spf13/afero in its composition of the super-powerful stdlib io.* interfaces. Unfortunately, it didn't support Google Cloud Storage and there was still a lot of passing around of strings and structs. Few, if any, of the vfs-like libraries provided interfaces to easily and confidently create new file system backends. What we needed/wanted was the following (and more): Pre 1.17: Post 1.17: Upgrading from v5 to v6 With v6.0.0, the sftp.Options struct changed to accept an array of Key Exchange algorithms rather than a string. To update, change the syntax of the auth commands. becomes We provide vfssimple as a basic way of initializing file system backends (see each implementation's docs about authentication). vfssimple pulls in every c2fo/vfs backend. If you need to reduce the backend requirements (and app memory footprint) or add a third party backend, you'll need to implement your own "factory". See the backend doc for more info. You can then use those file systems to initialize locations which you'll be referencing frequently, or initialize files directly. You can perform a number of actions without any consideration for the underlying system's API or implementation details. File's io.* interfaces may be used directly: * none so far Feel free to send a pull request if you want to add your backend to the list. # Ideas See https://github.com/C2FO/vfs/discussions Contributing Distributed under the MIT license. See http://github.com/c2fo/vfs/License.md for more information. * absolute path - A path is said to be absolute if it provides the entire context needed to find a file, including the file system root. An absolute path must begin with a slash and may include . and .. directories. * file path - A file path ends with a filename and therefore may not end with a slash. It may be relative or absolute. * location path - A location/dir path must end with a slash. It may be relative or absolute. * relative path - A relative path is a way to locate a dir or file relative to another directory. A relative path may not begin with a slash but may include . and .. directories. * URI - A Uniform Resource Identifier (URI) is a string of characters that unambiguously identifies a particular resource. To guarantee uniformity, all URIs follow a predefined set of syntax rules, but also maintain extensibility through a separately defined hierarchical naming scheme (e.g. http://).
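A minimal sketch of using vfssimple and the stdlib io.* interfaces to copy a local file to S3; the URIs are placeholders and the backends must be authenticated as described in the backend docs:

```
package main

import (
	"io"
	"log"

	"github.com/c2fo/vfs/v6/vfssimple"
)

func main() {
	src, err := vfssimple.NewFile("file:///tmp/example.txt") // placeholder URI
	if err != nil {
		log.Fatalf("NewFile(src): %v", err)
	}
	dst, err := vfssimple.NewFile("s3://my-bucket/example.txt") // placeholder URI
	if err != nil {
		log.Fatalf("NewFile(dst): %v", err)
	}

	// File implements io.Reader/io.Writer, so stdlib helpers work directly.
	if _, err := io.Copy(dst, src); err != nil {
		log.Fatalf("copy: %v", err)
	}
	if err := dst.Close(); err != nil {
		log.Fatalf("close dst: %v", err)
	}
	if err := src.Close(); err != nil {
		log.Fatalf("close src: %v", err)
	}
}
```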
Package gcpjwt has Google Cloud Platform (Cloud KMS, IAM API, & AppEngine App Identity API) jwt-go implementations. Should work across virtually all environments, on or off of Google's Cloud Platform. It is highly recommended that you override the default algorithm implementations that you want to leverage a GCP service for in dgrijalva/jwt-go. Otherwise, you will have to manually pick the verification method for your JWTs, and they will place non-standard headers in the rendered JWT (with the exception of signJwt from the IAM API, which overwrites the header with its own). You should only need to override the algorithm(s) you plan to use. It is also incorrect to override overlapping algorithms, such as `gcpjwt.SigningMethodKMSRS256.Override()` and `gcpjwt.SigningMethodIAMJWT.Override()` Example: As long as you override a default algorithm implementation as shown above, using dgrijalva/jwt-go is mostly unchanged. Token creation is more or less done the same way as in the dgrijalva/jwt-go package. The key that you need to provide is always going to be a context.Context, usually with a configuration object loaded in: Example: Finally, the steps to validate a token should be straightforward. This library provides you with helper jwt.Keyfunc implementations to do the heavy lifting around getting the public certificates for verification: Example:
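As a rough sketch of the override-then-sign flow described above; the IAMConfig/NewIAMContext helper names, import path, and service account are assumptions based on typical usage of this package, not a definitive API reference:

```
package main

import (
	"context"
	"log"

	jwt "github.com/dgrijalva/jwt-go"
	gcpjwt "github.com/someone1/gcp-jwt-go" // assumed import path
)

func main() {
	// Override the default jwt-go implementation for the one algorithm we use.
	gcpjwt.SigningMethodIAMJWT.Override()

	// The signing "key" is always a context.Context carrying the configuration;
	// the config helper names below are assumptions.
	ctx := gcpjwt.NewIAMContext(context.Background(), &gcpjwt.IAMConfig{
		ServiceAccount: "app@my-project.iam.gserviceaccount.com", // placeholder
	})

	token := jwt.NewWithClaims(gcpjwt.SigningMethodIAMJWT, jwt.MapClaims{"sub": "1234"})
	signed, err := token.SignedString(ctx)
	if err != nil {
		log.Fatalf("SignedString: %v", err)
	}
	log.Println(signed)
}
```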
Package vfs provides a pluggable, extensible, and opinionated set of filesystem functionality for Go across a number of filesystem types such as os, S3, and GCS. When building our platform, initially we wrote a library that was something to the effect of Not only was it ugly, but because the behaviors of each "filesystem" were different, we had to constantly alter the file locations and pass a bucket string (even if the fs didn't know what a bucket was). We found a handful of third-party libraries that were interesting but none of them had everything we needed/wanted. Of particular inspiration was https://github.com/spf13/afero in its composition of the super-powerful stdlib io.* interfaces. Unfortunately, it didn't support Google Cloud Storage and there was still a lot of passing around of strings and structs. Few, if any, of the vfs-like libraries provided interfaces to easily and confidently create new filesystem backends. What we needed/wanted was the following (and more): Go install: Glide installation: We provide vfssimple as a basic way of initializing filesystem backends (see each implementation's docs about authentication). vfssimple pulls in every c2fo/vfs backend. If you need to reduce the backend requirements (and app memory footprint) or add a third party backend, you'll need to implement your own "factory". See the backend doc for more info. You can then use those file systems to initialize locations which you'll be referencing frequently, or initialize files directly. With a number of files and locations between s3 and the local file system, you can perform a number of actions without any consideration for the system's API or implementation details. Third-party Backends Feel free to send a pull request if you want to add your backend to the list. Things to add: Brought to you by the Enterprise Pipeline team at C2FO: John Judd - john.judd@c2fo.com Jason Coble - [@jasonkcoble](https://twitter.com/jasonkcoble) - jason@c2fo.com Chris Roush – chris.roush@c2fo.com https://github.com/c2fo/ Contributing Distributed under the MIT license. See http://github.com/c2fo/vfs/License.md for more information.
Package vfs provides a pluggable, extensible, and opinionated set of file system functionality for Go across a number of file system types such as os, S3, and GCS. When building our platform, initially we wrote a library that was something to the effect of Not only was it ugly, but because the behaviors of each "file system" were different, we had to constantly alter the file locations and pass a bucket string (even if the fs didn't know what a bucket was). We found a handful of third-party libraries that were interesting but none of them had everything we needed/wanted. Of particular inspiration was https://github.com/spf13/afero in its composition of the super-powerful stdlib io.* interfaces. Unfortunately, it didn't support Google Cloud Storage and there was still a lot of passing around of strings and structs. Few, if any, of the vfs-like libraries provided interfaces to easily and confidently create new file system backends. What we needed/wanted was the following (and more): Go install: We provide vfssimple as a basic way of initializing file system backends (see each implementation's docs about authentication). vfssimple pulls in every c2fo/vfs backend. If you need to reduce the backend requirements (and app memory footprint) or add a third party backend, you'll need to implement your own "factory". See the backend doc for more info. You can then use those file systems to initialize locations which you'll be referencing frequently, or initialize files directly. You can perform a number of actions without any consideration for the underlying system's API or implementation details. File's io.* interfaces may be used directly: * none so far Feel free to send a pull request if you want to add your backend to the list. Things to add: Brought to you by the Enterprise Pipeline team at C2FO: * John Judd - john.judd@c2fo.com * Jason Coble - [@jasonkcoble](https://twitter.com/jasonkcoble) - jason@c2fo.com * Chris Roush – chris.roush@c2fo.com * Moe Zeid - moe.zeid@c2fo.com https://github.com/c2fo/ Contributing Distributed under the MIT license. See http://github.com/c2fo/vfs/License.md for more information. * absolute path - A path is said to be absolute if it provides the entire context needed to find a file, including the file system root. An absolute path must begin with a slash and may include . and .. directories. * file path - A file path ends with a filename and therefore may not end with a slash. It may be relative or absolute. * location path - A location/dir path must end with a slash. It may be relative or absolute. * relative path - A relative path is a way to locate a dir or file relative to another directory. A relative path may not begin with a slash but may include . and .. directories. * URI - A Uniform Resource Identifier (URI) is a string of characters that unambiguously identifies a particular resource. To guarantee uniformity, all URIs follow a predefined set of syntax rules, but also maintain extensibility through a separately defined hierarchical naming scheme (e.g. http://).
Package logging contains a Stackdriver Logging client suitable for writing logs. For reading logs, and working with sinks, metrics and monitored resources, see package cloud.google.com/go/logging/logadmin. This client uses Logging API v2. See https://cloud.google.com/logging/docs/api/v2/ for an introduction to the API. Note: This package is in beta. Some backwards-incompatible changes may occur. Use a Client to interact with the Stackdriver Logging API. For most use cases, you'll want to add log entries to a buffer to be periodically flushed (automatically and asynchronously) to the Stackdriver Logging service. You should call Client.Close before your program exits to flush any buffered log entries to the Stackdriver Logging service. For critical errors, you may want to send your log entries immediately. LogSync is slow and will block until the log entry has been sent, so it is not recommended for normal use. An entry payload can be a string, as in the examples above. It can also be any value that can be marshaled to a JSON object, like a map[string]interface{} or a struct: If you have a []byte of JSON, wrap it in json.RawMessage: You may want to use a standard log.Logger in your program. An Entry may have one of a number of severity levels associated with it. You can view Stackdriver logs for projects at https://console.cloud.google.com/logs/viewer. Use the dropdown at the top left. When running from a Google Cloud Platform VM, select "GCE VM Instance". Otherwise, select "Google Project" and then the project ID. Logs for organizations, folders and billing accounts can be viewed on the command line with the "gcloud logging read" command. To group all the log entries written during a single HTTP request, create two Loggers, a "parent" and a "child," with different log IDs. Both should be in the same project, and have the same MonitoredResource type and labels. - Parent entries must have HTTPRequest.Request populated. (Strictly speaking, only the URL is necessary.) - A child entry's timestamp must be within the time interval covered by the parent request (i.e., older than parent.Timestamp, and newer than parent.Timestamp - parent.HTTPRequest.Latency, assuming the parent timestamp marks the end of the request). - The trace field must be populated in all of the entries and match exactly. You should observe the child log entries grouped under the parent on the console. The parent entry will not inherit the severity of its children; you must update the parent severity yourself.
Cozy Cloud is a personal platform as a service with a focus on data. Cozy Cloud can be seen as 4 layers, from inside to outside: 1. A place to keep your personal data 2. A core API to handle the data 3. Your web apps, and also the mobile & desktop clients 4. A coherent User Experience It's also a set of values: Simple, Versatile, Yours. These values mean a lot for Cozy Cloud in all aspects. From an architectural point of view, this translates to: - Simple to deploy and understand, not built as a galaxy of optimized microservices managed by Kubernetes that only experts can debug. - Versatile: it can be hosted on a Raspberry Pi for geeks, or scaled massively across multiple servers by specialized hosting providers. Users can install apps. - Yours: you own your data and you control it. If you want to take back your data to go elsewhere, you can.
Package gcpjwt has Google Cloud Platform (Cloud KMS, IAM API, & AppEngine App Identity API) jwt-go implementations. Should work across virtually all environments, on or off of Google's Cloud Platform. It is highly recommended that you override the default algorithm implementations that you want to leverage a GCP service for in dgrijalva/jwt-go. Otherwise you will have to manually pick the verification method for your JWTs, and they will place non-standard headers in the rendered JWT (with the exception of signJwt from the IAM API, which overwrites the header with its own). You should only need to override the algorithm(s) you plan to use. It is also incorrect to override overlapping algorithms, such as `gcpjwt.SigningMethodKMSRS256.Override()` and `gcpjwt.SigningMethodIAMJWT.Override()`. Example: As long as you override a default algorithm implementation as shown above, using the dgrijalva/jwt-go package is mostly unchanged. Token creation is more or less done the same way as in the dgrijalva/jwt-go package. The key that you need to provide is always going to be a context.Context, usually with a configuration object loaded in: Example: Finally, the steps to validate a token should be straightforward. This library provides you with helper jwt.Keyfunc implementations to do the heavy lifting around getting the public certificates for verification: Example:
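As a hedged sketch of the flow described above (the Override call comes from the text; the IAMConfig/NewIAMContext helper names, the ServiceAccount field, and the import path are assumptions for illustration):

    package main

    import (
        "context"
        "log"

        "github.com/dgrijalva/jwt-go"
        gcpjwt "github.com/someone1/gcp-jwt-go" // import path is an assumption
    )

    func main() {
        // Override the default jwt-go implementation so this algorithm is backed by the IAM API.
        gcpjwt.SigningMethodIAMJWT.Override()

        // The signing "key" is always a context.Context carrying the GCP configuration.
        // IAMConfig, NewIAMContext, and the ServiceAccount field are assumed names.
        ctx := gcpjwt.NewIAMContext(context.Background(), &gcpjwt.IAMConfig{
            ServiceAccount: "app@my-project.iam.gserviceaccount.com",
        })

        token := jwt.NewWithClaims(gcpjwt.SigningMethodIAMJWT, jwt.StandardClaims{
            Audience: "my-audience",
        })
        signed, err := token.SignedString(ctx)
        if err != nil {
            log.Fatal(err)
        }
        log.Println(signed)
    }

Validation follows the same shape: pass the same kind of context to one of the library's jwt.Keyfunc helpers and hand that Keyfunc to jwt.Parse.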
Package stackdriver contains the OpenCensus exporters for Stackdriver Monitoring and Stackdriver Tracing. This exporter can be used to send metrics to Stackdriver Monitoring and traces to Stackdriver Trace. The package uses Application Default Credentials to authenticate by default. See: https://developers.google.com/identity/protocols/application-default-credentials Alternatively, pass the authentication options in both the MonitoringClientOptions and the TraceClientOptions fields of Options. This exporter supports exporting OpenCensus views to Stackdriver Monitoring. Each registered view becomes a metric in Stackdriver Monitoring, with the tags becoming labels. The aggregation function determines the metric kind: LastValue aggregations generate Gauge metrics and all other aggregations generate Cumulative metrics. In order to be able to push your stats to Stackdriver Monitoring, you must: These steps enable the API but don't require that your app is hosted on Google Cloud Platform. This exporter supports exporting Trace Spans to Stackdriver Trace. It also supports the Google "Cloud Trace" propagation format header.
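A minimal sketch of wiring the exporter into OpenCensus (the project ID is a placeholder; the import path shown is the commonly used one and may differ by version):

    package main

    import (
        "log"

        "contrib.go.opencensus.io/exporter/stackdriver"
        "go.opencensus.io/stats/view"
        "go.opencensus.io/trace"
    )

    func main() {
        // Application Default Credentials are used unless MonitoringClientOptions or
        // TraceClientOptions supply something else.
        sd, err := stackdriver.NewExporter(stackdriver.Options{ProjectID: "my-project"})
        if err != nil {
            log.Fatal(err)
        }
        defer sd.Flush() // push any buffered metrics and spans before exit

        // Registered views become Stackdriver metrics; sampled spans go to Stackdriver Trace.
        view.RegisterExporter(sd)
        trace.RegisterExporter(sd)
    }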
Package vfs provides a pluggable, extensible, and opinionated set of filesystem functionality for Go across a number of filesystem types such as os, S3, and GCS. When building our platform, initially we wrote a library that was something to the effect of Not only was it ugly, but because the behaviors of each "filesystem" were different, we had to constantly alter the file locations and pass a bucket string (even if the fs didn't know what a bucket was). We found a handful of third-party libraries that were interesting, but none of them had everything we needed/wanted. Of particular inspiration was https://github.com/spf13/afero in its composition of the super-powerful stdlib io.* interfaces. Unfortunately, it didn't support Google Cloud Storage and there was still a lot of passing around of strings and structs. Few, if any, of the vfs-like libraries provided interfaces to easily and confidently create new filesystem backends. What we needed/wanted was the following (and more): Go install: Glide installation: We provide vfssimple as a basic way of initializing filesystem backends (see each implementation's docs about authentication). vfssimple pulls in every c2fo/vfs backend. If you need to reduce the backend requirements (and app memory footprint) or add a third-party backend, you'll need to implement your own "factory". See the backend doc for more info. You can then use those file systems to initialize locations which you'll be referencing frequently, or initialize files directly. With a number of files and locations between S3 and the local file system, you can perform a number of actions without any consideration for the system's API or implementation details. Third-party Backends: Feel free to send a pull request if you want to add your backend to the list. Things to add: Brought to you by the Enterprise Pipeline team at C2FO: John Judd - john.judd@c2fo.com Jason Coble - [@jasonkcoble](https://twitter.com/jasonkcoble) - jason@c2fo.com Chris Roush – chris.roush@c2fo.com https://github.com/c2fo/ Contributing: Distributed under the MIT license. See http://github.com/c2fo/vfs/License.md for more information.
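A minimal sketch of the vfssimple flow described above (the URIs are placeholders and the import path may vary by version; authentication setup is omitted, see each backend's docs):

    package main

    import (
        "io"
        "log"

        "github.com/c2fo/vfs/vfssimple" // import path may vary by version
    )

    func main() {
        // A local file and an S3 location, both addressed the same way: by URI.
        localFile, err := vfssimple.NewFile("file:///tmp/example.txt")
        if err != nil {
            log.Fatal(err)
        }
        s3Loc, err := vfssimple.NewLocation("s3://my-bucket/some/path/")
        if err != nil {
            log.Fatal(err)
        }

        // Create the target file on S3 and copy the local contents into it using the
        // plain io interfaces that vfs files implement.
        s3File, err := s3Loc.NewFile("example.txt")
        if err != nil {
            log.Fatal(err)
        }
        if _, err := io.Copy(s3File, localFile); err != nil {
            log.Fatal(err)
        }
        _ = s3File.Close()
        _ = localFile.Close()
    }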
Stateful Functions Go SDK Stateful Functions is an API that simplifies the building of **distributed stateful applications** with a **runtime built for serverless architectures**. It brings together the benefits of stateful stream processing, the processing of large datasets with low latency and bounded resource constraints, along with a runtime for modeling stateful entities that supports location transparency, concurrency, scaling, and resiliency. It is designed to work with modern architectures, like cloud-native deployments and popular event-driven FaaS platforms like AWS Lambda and Knative, and to provide out-of-the-box consistent state and messaging while preserving the serverless experience and elasticity of these platforms. The JVM-based Stateful Functions implementation has a RequestReply extension (a protocol and an implementation) that allows calling into any HTTP endpoint that implements that protocol. Although it is possible to implement this protocol independently, this is a minimal library for the Go programming language that: - Allows users to define and declare their functions in a convenient way. - Dispatches an invocation request sent from the JVM to the appropriate function previously declared.
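As a rough illustration only (the identifiers below - StatefulFunctionsBuilder, StatefulFunctionSpec, TypeNameFrom, StatefulFunctionPointer, AsHandler - and the import path are assumptions modeled on the official Go SDK and may not match this library's actual API), declaring a function and exposing it over the RequestReply protocol might look like this:

    package main

    import (
        "net/http"

        "github.com/apache/flink-statefun/statefun-sdk-go/v3/pkg/statefun" // assumed import path
    )

    // greet is a stateful function; state access and messaging are provided through ctx.
    func greet(ctx statefun.Context, msg statefun.Message) error {
        return nil
    }

    func main() {
        // Declare the function under a type name, then expose the dispatcher over HTTP.
        builder := statefun.StatefulFunctionsBuilder()
        builder.WithSpec(statefun.StatefulFunctionSpec{
            FunctionType: statefun.TypeNameFrom("example/greeter"),
            Function:     statefun.StatefulFunctionPointer(greet),
        })

        // The JVM runtime sends invocation requests to this endpoint.
        http.Handle("/statefun", builder.AsHandler())
        _ = http.ListenAndServe(":8000", nil)
    }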
Package gcppubsubdemo is a tool for teaching and learning Google Cloud Platform's pub/sub API. This package is not meant to be used as an API but it is documented like one for instructional purposes. Quick usage examples for the package are shown below. For more information, see the documentation for the individual items. To use the API for publishing, call the GetPublisher function to obtain a Publisher function to call. Call the Publisher function giving the data to publish as the only parameter. To shut down the Publisher, call the Publisher function with nil as the parameter. Failing to do so will leave the pub/sub client and its network operations initialized. Subscribe to a topic by calling the Subscribe function, giving it a callback to execute when data is received. The callback should return true when it no longer wants to receive messages, which shuts down the subscription. Basic logging is implemented using the standard log package. Logging is disabled by default. When enabled with a call to LogLevel(), log output goes to os.Stderr but can be redirected by using LogOutput().
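A hedged sketch of that flow follows; the function names come from the text above, but every signature (arguments and return types) and the import path are assumptions for illustration only:

    package main

    import (
        "log"
        "os"

        demo "example.com/gcppubsubdemo" // placeholder import path
    )

    func main() {
        // Assumed signature: GetPublisher(project, topic) (func([]byte) error, error).
        publish, err := demo.GetPublisher("my-project", "my-topic")
        if err != nil {
            log.Fatal(err)
        }
        if err := publish([]byte("hello")); err != nil {
            log.Fatal(err)
        }
        // Passing nil shuts the Publisher down and releases pub/sub resources.
        _ = publish(nil)

        // Assumed signature: Subscribe(project, subscription, callback func([]byte) bool) error.
        // Returning true from the callback stops message retrieval and shuts down the subscription.
        err = demo.Subscribe("my-project", "my-subscription", func(data []byte) bool {
            log.Printf("received: %s", data)
            return true
        })
        if err != nil {
            log.Fatal(err)
        }

        // Logging is disabled by default; assumed signatures for enabling and redirecting it.
        demo.LogLevel(1)
        demo.LogOutput(os.Stderr)
    }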
Package cloud contains types and common functions related to the Google Cloud Platform APIs.
Package cninfra is the parent package for all packages that are parts of the CN-Infra platform - a Golang platform for building cloud-native microservices. The CN-Infra platform is a modular platform comprising a core and a set of plugins. The core provides lifecycle management for plugins, while the plugins provide the functionality of the platform. A plugin can consist of one or more Golang packages. Out of the box, the CN-Infra platform provides reusable plugins for logging, health checks, messaging (e.g. Kafka), a common front-end API and back-end connectivity to various data stores (Etcd, Cassandra, Redis, ...), and REST and gRPC APIs. The CN-Infra platform can be extended by adding a new platform plugin, for example a new data store client or a message bus adapter. Also, each application built on top of the platform is basically just a set of application-specific plugins, as sketched below.
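As a purely illustrative sketch (the Plugin interface and the lifecycle loop below are assumptions, not the platform's actual API), an application as a set of plugins might look like this:

    package main

    import "log"

    // Plugin mirrors the idea of a CN-Infra plugin with a managed lifecycle; the exact
    // interface in the platform may differ (this one is an assumption).
    type Plugin interface {
        Init() error
        Close() error
    }

    // MyAppPlugin is a hypothetical application-specific plugin.
    type MyAppPlugin struct{}

    func (p *MyAppPlugin) Init() error  { log.Println("plugin initialized"); return nil }
    func (p *MyAppPlugin) Close() error { log.Println("plugin closed"); return nil }

    func main() {
        // The core's job, in miniature: run the lifecycle of all registered plugins.
        plugins := []Plugin{&MyAppPlugin{}}
        for _, p := range plugins {
            if err := p.Init(); err != nil {
                log.Fatal(err)
            }
        }
        defer func() {
            for _, p := range plugins {
                _ = p.Close()
            }
        }()
    }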
Package libretto is a Golang library to create Virtual Machines (VMs) on any cloud or Virtual Machine hosting platform, such as AWS, Azure, OpenStack, vSphere, or VirtualBox. Different providers have different utilities and API interfaces to achieve that, but the abstractions of their interfaces are quite similar. See the README.md file for help getting started.
Package directoryservice provides the client and types for making API requests to AWS Directory Service. AWS Directory Service is a web service that makes it easy for you to set up and run directories in the AWS cloud, or connect your AWS resources with an existing on-premises Microsoft Active Directory. This guide provides detailed information about AWS Directory Service operations, data types, parameters, and errors. For information about AWS Directory Service features, see AWS Directory Service (https://aws.amazon.com/directoryservice/) and the AWS Directory Service Administration Guide (http://docs.aws.amazon.com/directoryservice/latest/admin-guide/what_is.html). AWS provides SDKs that consist of libraries and sample code for various programming languages and platforms (Java, Ruby, .Net, iOS, Android, etc.). The SDKs provide a convenient way to create programmatic access to AWS Directory Service and other AWS services. For more information about the AWS SDKs, including how to download and install them, see Tools for Amazon Web Services (http://aws.amazon.com/tools/). See https://docs.aws.amazon.com/goto/WebAPI/ds-2015-04-16 for more information on this service. See the directoryservice package documentation for more information. https://docs.aws.amazon.com/sdk-for-go/api/service/directoryservice/ To use AWS Directory Service with the SDK, use the New function to create a new service client. With that client you can make API requests to the service. These clients are safe to use concurrently. See the SDK's documentation for more information on how to use the SDK. https://docs.aws.amazon.com/sdk-for-go/api/ See aws.Config documentation for more information on configuring SDK clients. https://docs.aws.amazon.com/sdk-for-go/api/aws/#Config See the AWS Directory Service client DirectoryService for more information on creating the client for this service. https://docs.aws.amazon.com/sdk-for-go/api/service/directoryservice/#New
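A minimal sketch using the AWS SDK for Go (the region and the chosen operation are placeholders; credentials come from the SDK's usual credential chain):

    package main

    import (
        "log"

        "github.com/aws/aws-sdk-go/aws"
        "github.com/aws/aws-sdk-go/aws/session"
        "github.com/aws/aws-sdk-go/service/directoryservice"
    )

    func main() {
        // Create a session and a Directory Service client; the client is safe for concurrent use.
        sess := session.Must(session.NewSession(&aws.Config{Region: aws.String("us-west-2")}))
        svc := directoryservice.New(sess)

        // Example API request: list the directories in the account.
        out, err := svc.DescribeDirectories(&directoryservice.DescribeDirectoriesInput{})
        if err != nil {
            log.Fatal(err)
        }
        for _, d := range out.DirectoryDescriptions {
            log.Println(aws.StringValue(d.Name))
        }
    }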
Package gosnowflake is a pure Go Snowflake driver for the database/sql package. Clients can use the database/sql package directly. For example: Use the Open() function to create a database handle with connection parameters: The Go Snowflake Driver supports the following connection syntaxes (or data source name (DSN) formats): where all parameters must be escaped or use Config and DSN to construct a DSN string. For information about account identifiers, see the Snowflake documentation (https://docs.snowflake.com/en/user-guide/admin-account-identifier.html). The following example opens a database handle with the Snowflake account named "my_account" under the organization named "my_organization", where the username is "jsmith", password is "mypassword", database is "mydb", schema is "testschema", and warehouse is "mywh": The connection string (DSN) can contain both connection parameters (described below) and session parameters (https://docs.snowflake.com/en/sql-reference/parameters.html). The following connection parameters are supported: account <string>: Specifies your Snowflake account, where "<string>" is the account identifier assigned to your account by Snowflake. For information about account identifiers, see the Snowflake documentation (https://docs.snowflake.com/en/user-guide/admin-account-identifier.html). If you are using a global URL, then append the connection group and ".global" (e.g. "<account_identifier>-<connection_group>.global"). The account identifier and the connection group are separated by a dash ("-"), as shown above. This parameter is optional if your account identifier is specified after the "@" character in the connection string. region <string>: DEPRECATED. You may specify a region, such as "eu-central-1", with this parameter. However, since this parameter is deprecated, it is best to specify the region as part of the account parameter. For details, see the description of the account parameter. database: Specifies the database to use by default in the client session (can be changed after login). schema: Specifies the database schema to use by default in the client session (can be changed after login). warehouse: Specifies the virtual warehouse to use by default for queries, loading, etc. in the client session (can be changed after login). role: Specifies the role to use by default for accessing Snowflake objects in the client session (can be changed after login). passcode: Specifies the passcode provided by Duo when using multi-factor authentication (MFA) for login. passcodeInPassword: false by default. Set to true if the MFA passcode is embedded in the login password. Appends the MFA passcode to the end of the password. loginTimeout: Specifies the timeout, in seconds, for login. The default is 60 seconds. The login request gives up after the timeout length if the HTTP response is success. authenticator: Specifies the authenticator to use for authenticating user credentials: To use the internal Snowflake authenticator, specify snowflake (Default). To authenticate through Okta, specify https://<okta_account_name>.okta.com (URL prefix for Okta). To authenticate using your IDP via a browser, specify externalbrowser. To authenticate via OAuth, specify oauth and provide an OAuth Access Token (see the token parameter below). application: Identifies your application to Snowflake Support. insecureMode: false by default. Set to true to bypass the Online Certificate Status Protocol (OCSP) certificate revocation check. 
IMPORTANT: Change the default value for testing or emergency situations only. token: a token that can be used to authenticate. Should be used in conjunction with the "oauth" authenticator. client_session_keep_alive: Set to true to have a heartbeat in the background every hour to keep the connection alive so that the connection session never expires. Care should be taken in using this option, as it keeps the access open for as long as the process is alive. ocspFailOpen: true by default. Set to false to make the OCSP check operate in fail-closed mode. validateDefaultParameters: true by default. Set to false to disable the existence and privilege checks for the Database, Schema, Warehouse, and Role when setting up the connection. All other parameters are interpreted as session parameters (https://docs.snowflake.com/en/sql-reference/parameters.html). For example, the TIMESTAMP_OUTPUT_FORMAT session parameter can be set by adding: A complete connection string looks similar to the following: Session-level parameters can also be set by using the SQL command "ALTER SESSION" (https://docs.snowflake.com/en/sql-reference/sql/alter-session.html). Alternatively, use the OpenWithConfig() function to create a database handle with the specified Config. The Go Snowflake Driver honors the environment variables HTTP_PROXY, HTTPS_PROXY and NO_PROXY for the forward proxy setting. NO_PROXY specifies which hostname endings should be allowed to bypass the proxy server, e.g. no_proxy=.amazonaws.com means that Amazon S3 access does not need to go through the proxy. NO_PROXY does not support wildcards. Each value specified should be one of the following: The end of a hostname (or a complete hostname), for example: ".amazonaws.com" or "xy12345.snowflakecomputing.com". An IP address, for example "192.196.1.15". If more than one value is specified, values should be separated by commas, for example: By default, the driver's built-in logger exposes logrus's FieldLogger and defaults to the INFO level. Users can use SetLogger in driver.go to set a customized logger for the gosnowflake package. To enable debug logging for the driver, users can call SetLogLevel("debug") on the SFLogger interface, as shown in the demo code at cmd/logger.go. To redirect the logs, use the SFLogger.SetOutput method. A specific query request ID can be set in the context and will be passed through in place of the default randomized request ID. For example: From 0.5.0, signal handling responsibility has moved to the applications. If you want to cancel a query/command by Ctrl+C, add an os.Interrupt trap in context to execute methods that can take the context parameter (e.g. QueryContext, ExecContext). See cmd/selectmany.go for the full example. The Go Snowflake Driver now supports the Arrow data format for data transfers between Snowflake and the Golang client. The Arrow data format avoids extra conversions between binary and textual representations of the data. The Arrow data format can improve performance and reduce memory consumption in clients. Snowflake continues to support the JSON data format. The data format is controlled by the session-level parameter GO_QUERY_RESULT_FORMAT. To use JSON format, execute: The valid values for the parameter are: If the user attempts to set the parameter to an invalid value, an error is returned. The parameter name and the parameter value are case-insensitive. This parameter can be set only at the session level. Usage notes: The Arrow data format reduces rounding errors in floating point numbers.
You might see slightly different values for floating point numbers when using Arrow format than when using JSON format. In order to take advantage of the increased precision, you must pass in the context.Context object provided by the WithHigherPrecision function when querying. Traditionally, the rows.Scan() method returned a string when a variable of type interface{} was passed in. Turning on the flag ENABLE_HIGHER_PRECISION via WithHigherPrecision will return the natural, expected data type as well. For some numeric data types, the driver can retrieve larger values when using the Arrow format than when using the JSON format. For example, using Arrow format allows the full range of SQL NUMERIC(38,0) values to be retrieved, while using JSON format allows only values in the range supported by the Golang int64 data type. Users should ensure that Golang variables are declared using the appropriate data type for the full range of values contained in the column. For an example, see below. When using the Arrow format, the driver supports more Golang data types and more ways to convert SQL values to those Golang data types. The table below lists the supported Snowflake SQL data types and the corresponding Golang data types. The columns are: 1. The SQL data type. 2. The default Golang data type that is returned when you use snowflakeRows.Scan() to read data from Arrow data format via an interface{}. 3. The possible Golang data types that can be returned when you use snowflakeRows.Scan() to read data from Arrow data format directly. 4. The default Golang data type that is returned when you use snowflakeRows.Scan() to read data from JSON data format via an interface{}. (All returned values are strings.) 5. The standard Golang data type that is returned when you use snowflakeRows.Scan() to read data from JSON data format directly.
    Go Data Types for Scan()
    =====================================================================================================================
                         |                   ARROW                        |                      JSON
    =====================================================================================================================
    SQL Data Type        | Default Go Data Type       | Supported Go Data | Default Go Data Type   | Supported Go Data
                         | for Scan() interface{}     | Types for Scan()  | for Scan() interface{} | Types for Scan()
    =====================================================================================================================
    BOOLEAN              | bool                                           | string                 | bool
    ---------------------------------------------------------------------------------------------------------------------
    VARCHAR              | string                                         | string                 |
    ---------------------------------------------------------------------------------------------------------------------
    DOUBLE               | float32, float64 [1], [2]                      | string                 | float32, float64
    ---------------------------------------------------------------------------------------------------------------------
    INTEGER that         | int, int8, int16, int32, int64                 | string                 | int, int8, int16,
    fits in int64        | [1], [2]                                       |                        | int32, int64
    ---------------------------------------------------------------------------------------------------------------------
    INTEGER that doesn't | int, int8, int16, int32, int64, *big.Int       | string                 | error
    fit in int64         | [1], [2], [3], [4]                             |                        |
    ---------------------------------------------------------------------------------------------------------------------
    NUMBER(P, S)         | float32, float64, *big.Float                   | string                 | float32, float64
    where S > 0          | [1], [2], [3], [5]                             |                        |
    ---------------------------------------------------------------------------------------------------------------------
    DATE                 | time.Time                                      | string                 | time.Time
    ---------------------------------------------------------------------------------------------------------------------
    TIME                 | time.Time                                      | string                 | time.Time
    ---------------------------------------------------------------------------------------------------------------------
    TIMESTAMP_LTZ        | time.Time                                      | string                 | time.Time
    ---------------------------------------------------------------------------------------------------------------------
    TIMESTAMP_NTZ        | time.Time                                      | string                 | time.Time
    ---------------------------------------------------------------------------------------------------------------------
    TIMESTAMP_TZ         | time.Time                                      | string                 | time.Time
    ---------------------------------------------------------------------------------------------------------------------
    BINARY               | []byte                                         | string                 | []byte
    ---------------------------------------------------------------------------------------------------------------------
    ARRAY                | string                                         | string                 |
    ---------------------------------------------------------------------------------------------------------------------
    OBJECT               | string                                         | string                 |
    ---------------------------------------------------------------------------------------------------------------------
    VARIANT              | string                                         | string                 |

[1] Converting from a higher precision data type to a lower precision data type via the snowflakeRows.Scan() method can lose low bits (lose precision), lose high bits (completely change the value), or result in error.
[2] Attempting to convert from a higher precision data type to a lower precision data type via interface{} causes an error.
[3] Higher precision data types like *big.Int and *big.Float can be accessed by querying with a context returned by WithHigherPrecision().
[4] You cannot directly Scan() into the alternative data types via snowflakeRows.Scan(), but can convert to those data types by using .Int64()/.String()/.Uint64() methods. For an example, see below. [5] You cannot directly Scan() into the alternative data types via snowflakeRows.Scan(), but can convert to those data types by using .Float32()/.String()/.Float64() methods. For an example, see below. Note: SQL NULL values are converted to Golang nil values, and vice-versa. The following example shows how to retrieve very large values using the math/big package. This example retrieves a large INTEGER value to an interface and then extracts a big.Int value from that interface. If the value fits into an int64, then the code also copies the value to a variable of type int64. Note that a context that enables higher precision must be passed in with the query. If the variable named "rows" is known to contain a big.Int, then you can use the following instead of scanning into an interface and then converting to a big.Int: If the variable named "rows" contains a big.Int, then each of the following fails: Similar code and rules also apply to big.Float values. If you are not sure what data type will be returned, you can use code similar to the following to check the data type of the returned value: Binding allows a SQL statement to use a value that is stored in a Golang variable. Without binding, a SQL statement specifies values by specifying literals inside the statement. For example, the following statement uses the literal value "42" in an UPDATE statement: With binding, you can execute a SQL statement that uses a value that is inside a variable. For example: The "?" inside the "VALUES" clause specifies that the SQL statement uses the value from a variable. Binding data that involves time zones can require special handling. For details, see the section titled "Timestamps with Time Zones". Version 1.3.9 (and later) of the Go Snowflake Driver supports the ability to bind an array variable to a parameter in a SQL INSERT statement. You can use this technique to insert multiple rows in a single batch. As an example, the following code inserts rows into a table that contains integer, float, boolean, and string columns. The example binds arrays to the parameters in the INSERT statement. Note: For alternative ways to load data into the Snowflake database (including bulk loading using the COPY command), see Loading Data into Snowflake (https://docs.snowflake.com/en/user-guide-data-load.html). When you use array binding to insert a large number of values, the driver can improve performance by streaming the data (without creating files on the local machine) to a temporary stage for ingestion. The driver automatically does this when the number of values exceeds a threshold (no changes are needed to user code). In order for the driver to send the data to a temporary stage, the user must have the following privilege on the schema: If the user does not have this privilege, the driver falls back to sending the data with the query to the Snowflake database. In addition, the current database and schema for the session must be set. If these are not set, the CREATE TEMPORARY STAGE command executed by the driver can fail with the following error: For alternative ways to load data into the Snowflake database (including bulk loading using the COPY command), see Loading Data into Snowflake (https://docs.snowflake.com/en/user-guide-data-load.html).
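A minimal sketch of the higher-precision retrieval described above (the DSN, table, and column names are placeholders):

    package main

    import (
        "context"
        "database/sql"
        "log"
        "math/big"

        sf "github.com/snowflakedb/gosnowflake"
    )

    func main() {
        // Placeholder DSN; replace with your own account, credentials, and database.
        db, err := sql.Open("snowflake", "jsmith:mypassword@my_organization-my_account/mydb/testschema?warehouse=mywh")
        if err != nil {
            log.Fatal(err)
        }
        defer db.Close()

        // Higher precision: NUMBER(38,0) values outside the int64 range are returned
        // as *big.Int instead of causing an error.
        ctx := sf.WithHigherPrecision(context.Background())

        rows, err := db.QueryContext(ctx, "SELECT c FROM t") // placeholder table and column
        if err != nil {
            log.Fatal(err)
        }
        defer rows.Close()

        for rows.Next() {
            var v interface{}
            if err := rows.Scan(&v); err != nil {
                log.Fatal(err)
            }
            if n, ok := v.(*big.Int); ok {
                log.Printf("big value: %s", n.String())
            } else {
                log.Printf("value: %v", v)
            }
        }
    }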
Go's database/sql package supports the ability to bind a parameter in a SQL statement to a time.Time variable. However, when the client binds data to send to the server, the driver cannot determine the correct Snowflake date/timestamp data type to associate with the binding parameter. For example: To resolve this issue, a binding parameter flag is introduced that associates any subsequent time.Time type to the DATE, TIME, TIMESTAMP_LTZ, TIMESTAMP_NTZ or BINARY data type. The above example could be rewritten as follows: The driver fetches TIMESTAMP_TZ (timestamp with time zone) data using the offset-based Location types, which represent a collection of time offsets in use in a geographical area, such as CET (Central European Time) or UTC (Coordinated Universal Time). The offset-based Location data is generated and cached when a Go Snowflake Driver application starts, and if the given offset is not in the cache, it is generated dynamically. Currently, Snowflake does not support the name-based Location types (e.g. "America/Los_Angeles"). For more information about Location types, see the Go documentation for https://golang.org/pkg/time/#Location. Internally, this feature leverages the []byte data type. As a result, BINARY data cannot be bound without the binding parameter flag. In the following example, sf is an alias for the gosnowflake package: The driver directly downloads a result set from the cloud storage if the size is large. This is required to shift workloads from the Snowflake database to the clients for scale. The download is performed asynchronously by goroutines named "Chunk Downloader", so that the driver can fetch the next result set while the application consumes the current result set. The application may change the number of result set chunk downloaders if required. Note that this does not, by itself, reduce the memory footprint. Consider a Custom JSON Decoder. Custom JSON Decoder for Parsing Result Set (Experimental) The application may have the driver use a custom JSON decoder that incrementally parses the result set as follows. This option will reduce the memory footprint to half or even a quarter, but it can significantly degrade the performance depending on the environment. The test cases running on a Travis Ubuntu box show a five times smaller memory footprint while being four times slower. Be cautious when using the option. The Go Snowflake Driver supports JWT (JSON Web Token) authentication. To enable this feature, construct the DSN with fields "authenticator=SNOWFLAKE_JWT&privateKey=<your_private_key>", or use a Config structure, specifying: The <your_private_key> should be a base64 URL-encoded PKCS8 RSA private key string. One way to encode a byte slice in base64 URL format is through the base64.URLEncoding.EncodeToString() function. On the server side, you can alter the public key with the SQL command: The <your_public_key> should be a base64 Standard encoded PKI public key string. One way to encode a byte slice in base64 Standard format is through the base64.StdEncoding.EncodeToString() function. To generate the valid key pair, you can execute the following commands in the shell: Note: As of February 2020, Golang's official library does not support passcode-encrypted PKCS8 private keys. For security purposes, Snowflake highly recommends that you store the passcode-encrypted private key on the disk and decrypt the key in your application using a library you trust. This feature is available in version 1.3.8 or later of the driver.
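A minimal sketch of the binding parameter flag described above (the DSN, table, and column names are placeholders; sf is the gosnowflake package alias):

    package main

    import (
        "database/sql"
        "log"
        "time"

        sf "github.com/snowflakedb/gosnowflake"
    )

    func main() {
        db, err := sql.Open("snowflake", "jsmith:mypassword@my_organization-my_account/mydb/testschema")
        if err != nil {
            log.Fatal(err)
        }
        defer db.Close()

        tm := time.Now()
        blob := []byte{0x01, 0x02, 0x03}

        // The flag arguments are not placeholders themselves; each flag tells the driver
        // which Snowflake type to use for the value(s) that follow it.
        _, err = db.Exec(
            "INSERT INTO t(ts_col, bin_col) VALUES(?, ?)", // placeholder table and columns
            sf.DataTypeTimestampNtz, tm,
            sf.DataTypeBinary, blob,
        )
        if err != nil {
            log.Fatal(err)
        }
    }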
By default, Snowflake returns an error for queries issued with multiple statements. This restriction helps protect against SQL Injection attacks (https://en.wikipedia.org/wiki/SQL_injection). The multi-statement feature allows users to skip this restriction and execute multiple SQL statements through a single Golang function call. However, this opens up the possibility for SQL injection, so it should be used carefully. The risk can be reduced by specifying the exact number of statements to be executed, which makes it more difficult to inject a statement by appending it. More details are below. The Go Snowflake Driver provides two functions that can execute multiple SQL statements in a single call: To compose a multi-statement query, simply create a string that contains all the queries, separated by semicolons, in the order in which the statements should be executed. To protect against SQL Injection attacks while using the multi-statement feature, pass a Context that specifies the number of statements in the string. For example: When multiple queries are executed by a single call to QueryContext(), multiple result sets are returned. After you process the first result set, get the next result set (for the next SQL statement) by calling NextResultSet(). The following pseudo-code shows how to process multiple result sets: The function db.ExecContext() returns a single result, which is the sum of the number of rows changed by each individual statement. For example, if your multi-statement query executed two UPDATE statements, each of which updated 10 rows, then the result returned would be 20. Individual row counts for individual statements are not available. The following code shows how to retrieve the result of a multi-statement query executed through db.ExecContext(): Note: Because a multi-statement ExecContext() returns a single value, you cannot detect offsetting errors. For example, suppose you expected the return value to be 20 because you expected each UPDATE statement to update 10 rows. If one UPDATE statement updated 15 rows and the other UPDATE statement updated only 5 rows, the total would still be 20. You would see no indication that the UPDATE statements had not functioned as expected. The ExecContext() function does not return an error if passed a query (e.g. a SELECT statement). However, it still returns only a single value, not a result set, so using it to execute queries (or a mix of queries and non-query statements) is impractical. The QueryContext() function does not return an error if passed non-query statements (e.g. DML). The function returns a result set for each statement, whether or not the statement is a query. For each non-query statement, the result set contains a single row that contains a single column; the value is the number of rows changed by the statement. If you want to execute a mix of query and non-query statements (e.g. a mix of SELECT and DML statements) in a multi-statement query, use QueryContext(). You can retrieve the result sets for the queries, and you can retrieve or ignore the row counts for the non-query statements. Note: PUT statements are not supported for multi-statement queries. If a SQL statement passed to ExecContext() or QueryContext() fails to compile or execute, that statement is aborted, and subsequent statements are not executed. Any statements prior to the aborted statement are unaffected. For example, if the statements below are run as one multi-statement query, the multi-statement query fails on the third statement, and an error is returned.
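One possible sketch of such a failing batch, driven through ExecContext() with a WithMultiStatement context (the table name "test" matches the discussion that follows; the statement text itself is illustrative):

    package example

    import (
        "context"
        "database/sql"

        sf "github.com/snowflakedb/gosnowflake"
    )

    func runBatch(db *sql.DB) error {
        multi := `CREATE OR REPLACE TABLE test(n int);
    INSERT INTO test VALUES (1), (2);
    INSERT INTO test VALUES ('not_an_integer'); -- execution fails here
    INSERT INTO test VALUES (3);`

        // Declare that exactly four statements are expected in this batch.
        ctx, err := sf.WithMultiStatement(context.Background(), 4)
        if err != nil {
            return err
        }

        // The third statement fails, so an error is returned and the fourth
        // statement is never executed; the earlier statements are unaffected.
        _, err = db.ExecContext(ctx, multi)
        return err
    }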
If you then query the contents of the table named "test", the values 1 and 2 would be present. When using the QueryContext() and ExecContext() functions, Go code can check for errors in the usual way. For example: Preparing statements and using bind variables are also not supported for multi-statement queries. The Go Snowflake Driver supports asynchronous execution of SQL statements. Asynchronous execution allows you to start executing a statement and then retrieve the result later without being blocked while waiting. While waiting for the result of a SQL statement, you can perform other tasks, including executing other SQL statements. Most of the steps to execute an asynchronous query are the same as the steps to execute a synchronous query. However, there is one additional step: you must call the WithAsyncMode() function to update your Context object, specifying that asynchronous mode is enabled. In the code below, the call to "WithAsyncMode()" is specific to asynchronous mode. The rest of the code is compatible with both asynchronous mode and synchronous mode. The function db.QueryContext() returns an object of type snowflakeRows regardless of whether the query is synchronous or asynchronous. However: The call to the Next() function of snowflakeRows is always synchronous (i.e. blocking). If the query has not yet completed and the snowflakeRows object (named "rows" in this example) has not been filled in yet, then rows.Next() waits until the result set has been filled in. More generally, calls to any Golang SQL API function implemented in snowflakeRows or snowflakeResult are blocking calls, and will wait if results are not yet available. (Examples of other synchronous calls include: snowflakeRows.Err(), snowflakeRows.Columns(), snowflakeRows.ColumnTypes(), snowflakeRows.Scan(), and snowflakeResult.RowsAffected().) Because the example code above executes only one query and no other activity, there is no significant difference in behavior between asynchronous and synchronous execution. The differences become significant if, for example, you want to perform some other activity after the query starts and before it completes. The example code below starts multiple queries, which run in the background, and then retrieves the results later. This example uses small SELECT statements that do not retrieve enough data to require asynchronous handling. However, the technique works for larger data sets, and for situations where the programmer might want to do other work after starting the queries and before retrieving the results. The Go Snowflake Driver supports the PUT and GET commands. The PUT command copies a file from a local computer (the computer where the Golang client is running) to a stage on the cloud platform. The GET command copies data files from a stage on the cloud platform to a local computer. See the following for information on the syntax and supported parameters: The following example shows how to run a PUT command by passing a string to the db.Query() function: "<local_file>" should include the file path as well as the name. Snowflake recommends using an absolute path rather than a relative path. For example: Different client platforms (e.g. Linux, Windows) have different path name conventions. Ensure that you specify path names appropriately. This is particularly important on Windows, which uses the backslash character as both an escape character and as a separator in path names. To send information from a stream (rather than a file), use code similar to the code below.
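(This is a sketch under assumptions: the stage name "mystage" is illustrative, and the data is attached to the context with gosnowflake's WithFileStream helper.)

    package example

    import (
        "context"
        "database/sql"
        "fmt"
        "os"
        "strings"

        sf "github.com/snowflakedb/gosnowflake"
    )

    func putFromStream(db *sql.DB, fname string) error {
        f, err := os.Open(fname)
        if err != nil {
            return err
        }
        defer f.Close()

        // The data is read from the stream attached to the context; the PUT
        // statement itself still needs a well-formed file path in its text.
        sqlText := fmt.Sprintf("PUT 'file://%v' @mystage auto_compress=true",
            strings.ReplaceAll(fname, "\\", "\\\\"))

        _, err = db.ExecContext(sf.WithFileStream(context.Background(), f), sqlText)
        return err
    }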
(The ReplaceAll() function is needed on Windows to handle backslashes in the path to the file.) Note: PUT statements are not supported for multi-statement queries. The following example shows how to run a GET command by passing a string to the db.Query() function: "<local_file>" should include the file path as well as the name. Snowflake recommends using an absolute path rather than a relative path. For example:
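(A sketch only; the stage name, file name, and local directory below are illustrative assumptions.)

    package example

    import (
        "database/sql"
        "log"
    )

    func getFile(db *sql.DB) {
        // Copy a staged file to a local directory; an absolute local path is
        // recommended, as noted above.
        rows, err := db.Query("GET @mystage/data.csv.gz 'file:///tmp/downloads/'")
        if err != nil {
            log.Fatal(err)
        }
        defer rows.Close()
    }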
Package libretto is a Golang library for creating virtual machines (VMs) on any cloud or VM hosting platform, such as AWS, Azure, OpenStack, vSphere, or VirtualBox. Different providers have different utilities and API interfaces for accomplishing this, but the abstractions of their interfaces are quite similar. See the README.md file for help getting started.
Package cloud contains types and common functions related to Google Cloud Platform APIs.
Package applicationdiscoveryservice provides the client and types for making API requests to AWS Application Discovery Service. AWS Application Discovery Service helps you plan application migration projects by automatically identifying servers, virtual machines (VMs), software, and software dependencies running in your on-premises data centers. Application Discovery Service also collects application performance data, which can help you assess the outcome of your migration. The data collected by Application Discovery Service is securely retained in an Amazon-hosted and managed database in the cloud. You can export the data as a CSV or XML file into your preferred visualization tool or cloud-migration solution to plan your migration. For more information, see the Application Discovery Service FAQ (http://aws.amazon.com/application-discovery/faqs/). Application Discovery Service offers two modes of operation. Agentless discovery mode is recommended for environments that use VMware vCenter Server. This mode doesn't require you to install an agent on each host. Agentless discovery gathers server information regardless of the operating systems, which minimizes the time required for initial on-premises infrastructure assessment. Agentless discovery doesn't collect information about software and software dependencies. It also doesn't work in non-VMware environments. We recommend that you use agent-based discovery for non-VMware environments and if you want to collect information about software and software dependencies. You can also run agent-based and agentless discovery simultaneously. Use agentless discovery to quickly complete the initial infrastructure assessment and then install agents on select hosts to gather information about software and software dependencies. Agent-based discovery mode collects a richer set of data than agentless discovery by using Amazon software, the AWS Application Discovery Agent, which you install on one or more hosts in your data center. The agent captures infrastructure and application information, including an inventory of installed software applications, system and process performance, resource utilization, and network dependencies between workloads. The information collected by agents is secured at rest and in transit to the Application Discovery Service database in the cloud. Application Discovery Service integrates with application discovery solutions from AWS Partner Network (APN) partners. Third-party application discovery tools can query Application Discovery Service and write to the Application Discovery Service database using a public API. You can then import the data into either a visualization tool or cloud-migration solution. Application Discovery Service doesn't gather sensitive information. All data is handled according to the AWS Privacy Policy (http://aws.amazon.com/privacy/). You can operate Application Discovery Service using offline mode to inspect collected data before it is shared with the service. Your AWS account must be granted access to Application Discovery Service, a process called whitelisting. This is true for AWS partners and customers alike. To request access, sign up for AWS Application Discovery Service here (http://aws.amazon.com/application-discovery/preview/). We send you information about how to get started. This API reference provides descriptions, syntax, and usage examples for each of the actions and data types for Application Discovery Service. The topic for each action shows the API request parameters and the response. 
Alternatively, you can use one of the AWS SDKs to access an API that is tailored to the programming language or platform that you're using. For more information, see AWS SDKs (http://aws.amazon.com/tools/#SDKs). This guide is intended for use with the AWS Application Discovery Service User Guide (http://docs.aws.amazon.com/application-discovery/latest/userguide/). See https://docs.aws.amazon.com/goto/WebAPI/discovery-2015-11-01 for more information on this service. See the applicationdiscoveryservice package documentation for more information. https://docs.aws.amazon.com/sdk-for-go/api/service/applicationdiscoveryservice/ To access AWS Application Discovery Service with the SDK, use the New function to create a new service client. With that client you can make API requests to the service. These clients are safe to use concurrently. See the SDK's documentation for more information on how to use the SDK. https://docs.aws.amazon.com/sdk-for-go/api/ See the aws.Config documentation for more information on configuring SDK clients. https://docs.aws.amazon.com/sdk-for-go/api/aws/#Config See the AWS Application Discovery Service client ApplicationDiscoveryService for more information on creating a client for this service. https://docs.aws.amazon.com/sdk-for-go/api/service/applicationdiscoveryservice/#New
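For illustration (the region is an assumption, and credentials are taken from the default credential chain), a client could be created and used like this:

    package main

    import (
        "fmt"
        "log"

        "github.com/aws/aws-sdk-go/aws"
        "github.com/aws/aws-sdk-go/aws/session"
        "github.com/aws/aws-sdk-go/service/applicationdiscoveryservice"
    )

    func main() {
        sess, err := session.NewSession(&aws.Config{Region: aws.String("us-west-2")})
        if err != nil {
            log.Fatal(err)
        }

        // New creates a service client; clients are safe for concurrent use.
        svc := applicationdiscoveryservice.New(sess)

        // Example request: list the discovery agents registered in the account.
        out, err := svc.DescribeAgents(&applicationdiscoveryservice.DescribeAgentsInput{})
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println(out)
    }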
Package gcpjwt has Google Cloud Platform (Cloud KMS, IAM API, & AppEngine App Identity API) jwt-go implementations. It should work across virtually all environments, on or off Google's Cloud Platform. It is highly recommended that you override, in dgrijalva/jwt-go, the default implementations of the algorithms for which you want to leverage a GCP service. Otherwise, you will have to manually pick the verification method for your JWTs, and the signing methods will place non-standard headers in the rendered JWT (with the exception of signJwt from the IAM API, which overwrites the header with its own). You should only need to override the algorithm(s) you plan to use. It is also incorrect to override overlapping algorithms, such as `gcpjwt.SigningMethodKMSRS256.Override()` and `gcpjwt.SigningMethodIAMJWT.Override()`. Example: As long as you override a default algorithm implementation as shown above, using dgrijalva/jwt-go is mostly unchanged. Token creation is done more or less the same way as in the dgrijalva/jwt-go package. The key that you need to provide is always a context.Context, usually with a configuration object loaded in: Example: Finally, the steps to validate a token should be straightforward. This library provides you with helper jwt.Keyfunc implementations that do the heavy lifting of fetching the public certificates for verification: Example:
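A minimal sketch of the IAM-based signing flow: the Override() call and the use of a context.Context as the signing key come from the description above, while the import path (github.com/someone54188/gcpjwt), the NewIAMContext/IAMConfig configuration helpers, and the service account name are assumptions for illustration only:

    package main

    import (
        "context"
        "log"

        "github.com/dgrijalva/jwt-go"
        "github.com/someone54188/gcpjwt"
    )

    func main() {
        // Override the default implementation so jwt-go picks the GCP-backed
        // signing method for this algorithm.
        gcpjwt.SigningMethodIAMJWT.Override()

        // The signing "key" is a context.Context, usually with a configuration
        // object loaded in (helper and field names here are assumptions).
        ctx := gcpjwt.NewIAMContext(context.Background(), &gcpjwt.IAMConfig{
            ServiceAccount: "my-sa@my-project.iam.gserviceaccount.com",
        })

        token := jwt.NewWithClaims(gcpjwt.SigningMethodIAMJWT, jwt.StandardClaims{
            Audience: "https://example.com",
        })
        signed, err := token.SignedString(ctx)
        if err != nil {
            log.Fatal(err)
        }
        log.Println(signed)
    }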