Package validator implements value validations for structs and individual fields based on tags. It can also handle Cross-Field and Cross-Struct validation for nested structs and has the ability to dive into arrays and maps of any type. See more examples at https://github.com/go-playground/validator/tree/v9/_examples. Doing things this way is actually how the standard library does it; see the file.Open method here: The authors return type "error" to avoid the issue discussed in the following, where err is always != nil: Validator returns only InvalidValidationError for bad validation input, nil, or ValidationErrors as type error; so, in your code all you need to do is check if the error returned is not nil, and if it's not, check if the error is InvalidValidationError (if necessary, most of the time it isn't) and type-assert it to ValidationErrors like so: err.(validator.ValidationErrors). Custom Validation functions can be added. Example: Cross-Field Validation can be done via the following tags: If, however, some custom cross-field validation is required, it can be done using a custom validation. Why not just have cross-field validation tags (i.e. only eqcsfield and not eqfield)? The reason is efficiency. If you want to check a field within the same struct, "eqfield" only has to find the field on the same struct (1 level). But if we used "eqcsfield" it could be multiple levels down. Example: Multiple validators on a field will process in the order defined. Example: Bad Validator definitions are not handled by the library. Example: Baked In Cross-Field validation only compares fields on the same struct. If Cross-Field + Cross-Struct validation is needed you should implement your own custom validator. Comma (",") is the default separator of validation tags. If you wish to have a comma included within the parameter (i.e. excludesall=,) you will need to use the UTF-8 hex representation 0x2C, which is replaced in the code as a comma, so the above will become excludesall=0x2C. Pipe ("|") is the 'or' validation tag separator. If you wish to have a pipe included within the parameter (i.e. excludesall=|) you will need to use the UTF-8 hex representation 0x7C, which is replaced in the code as a pipe, so the above will become excludesall=0x7C. Here is a list of the current built-in validators: Tells the validation to skip this struct field; this is particularly handy in ignoring embedded structs from being validated. (Usage: -) This is the 'or' operator allowing multiple validators to be used and accepted. (Usage: rgb|rgba) <-- this would allow either rgb or rgba colors to be accepted. This can also be combined with 'and', for example (Usage: omitempty,rgb|rgba). When a field that is a nested struct is encountered and contains this flag, any validation on the nested struct will be run, but none of the nested struct fields will be validated. This is useful if inside of your program you know the struct will be valid, but need to verify it has been assigned. NOTE: only "required" and "omitempty" can be used on a struct itself. Same as the structonly tag except that any struct level validations will not run. Allows conditional validation; for example, if a field is not set with a value (determined by the "required" validator) then other validation such as min or max won't run, but if a value is set validation will run. This tells the validator to dive into a slice, array or map and validate that level of the slice, array or map with the validation tags that follow.
Multidimensional nesting is also supported; each level you wish to dive into will require another dive tag. dive has some sub-tags, 'keys' & 'endkeys'; please see the Keys & EndKeys section just below. Example #1 Example #2 Keys & EndKeys These are to be used together directly after the dive tag and tell the validator that anything between 'keys' and 'endkeys' applies to the keys of a map and not the values; think of it like the 'dive' tag, but for map keys instead of values. Multidimensional nesting is also supported; each level you wish to validate will require another 'keys' and 'endkeys' tag. These tags are only valid for maps. Example #1 Example #2 This validates that the value is not the data type's default zero value. For numbers, ensures the value is not zero. For strings, ensures the value is not "". For slices, maps, pointers, interfaces, channels and functions, ensures the value is not nil. The field under validation must be present and not empty only if any of the other specified fields are present. For strings, ensures the value is not "". For slices, maps, pointers, interfaces, channels and functions, ensures the value is not nil. Examples: The field under validation must be present and not empty only if all of the other specified fields are present. For strings, ensures the value is not "". For slices, maps, pointers, interfaces, channels and functions, ensures the value is not nil. Example: The field under validation must be present and not empty only when any of the other specified fields are not present. For strings, ensures the value is not "". For slices, maps, pointers, interfaces, channels and functions, ensures the value is not nil. Examples: The field under validation must be present and not empty only when all of the other specified fields are not present. For strings, ensures the value is not "". For slices, maps, pointers, interfaces, channels and functions, ensures the value is not nil. Example: This validates that the value is the default value and is almost the opposite of required. For numbers, length will ensure that the value is equal to the parameter given. For strings, it checks that the string length is exactly that number of characters. For slices, arrays, and maps, it validates the number of items. For numbers, max will ensure that the value is less than or equal to the parameter given. For strings, it checks that the string length is at most that number of characters. For slices, arrays, and maps, it validates the number of items. For numbers, min will ensure that the value is greater than or equal to the parameter given. For strings, it checks that the string length is at least that number of characters. For slices, arrays, and maps, it validates the number of items. For strings & numbers, eq will ensure that the value is equal to the parameter given. For slices, arrays, and maps, it validates the number of items. For strings & numbers, ne will ensure that the value is not equal to the parameter given. For slices, arrays, and maps, it validates the number of items. For strings, ints, and uints, oneof will ensure that the value is one of the values in the parameter. The parameter should be a list of values separated by whitespace. Values may be strings or numbers. For numbers, gt will ensure that the value is greater than the parameter given. For strings, it checks that the string length is greater than that number of characters. For slices, arrays and maps, it validates the number of items. Example #1 Example #2 (time.Time) For time.Time ensures the time value is greater than time.Now.UTC().
Same as 'min' above. Kept both to make terminology with 'len' easier. Example #1 Example #2 (time.Time) For time.Time ensures the time value is greater than or equal to time.Now.UTC(). For numbers, lt will ensure that the value is less than the parameter given. For strings, it checks that the string length is less than that number of characters. For slices, arrays, and maps, it validates the number of items. Example #1 Example #2 (time.Time) For time.Time ensures the time value is less than time.Now.UTC(). Same as 'max' above. Kept both to make terminology with 'len' easier. Example #1 Example #2 (time.Time) For time.Time ensures the time value is less than or equal to time.Now.UTC(). This will validate the field value against another field's value, either within a struct or a passed-in field. Example #1: Example #2: Field Equals Another Field (relative) This does the same as eqfield except that it validates the field provided relative to the top level struct. This will validate the field value against another field's value, either within a struct or a passed-in field. Examples: Field Does Not Equal Another Field (relative) This does the same as nefield except that it validates the field provided relative to the top level struct. Only valid for Numbers and time.Time types, this will validate the field value against another field's value, either within a struct or a passed-in field. Usage examples are for validation of a Start and End date: Example #1: Example #2: This does the same as gtfield except that it validates the field provided relative to the top level struct. Only valid for Numbers and time.Time types, this will validate the field value against another field's value, either within a struct or a passed-in field. Usage examples are for validation of a Start and End date: Example #1: Example #2: This does the same as gtefield except that it validates the field provided relative to the top level struct. Only valid for Numbers and time.Time types, this will validate the field value against another field's value, either within a struct or a passed-in field. Usage examples are for validation of a Start and End date: Example #1: Example #2: This does the same as ltfield except that it validates the field provided relative to the top level struct. Only valid for Numbers and time.Time types, this will validate the field value against another field's value, either within a struct or a passed-in field. Usage examples are for validation of a Start and End date: Example #1: Example #2: This does the same as ltefield except that it validates the field provided relative to the top level struct. This does the same as contains except for struct fields. It should only be used with string types. See reflect.Value.String() for behavior on other types. This does the same as excludes except for struct fields. It should only be used with string types. See reflect.Value.String() for behavior on other types. For arrays & slices, unique will ensure that there are no duplicates. For maps, unique will ensure that there are no duplicate values. For slices of structs, unique will ensure that there are no duplicate values in a field of the struct specified via a parameter.
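To make the cross-field tags above concrete, here is a minimal sketch validating a Start and End date with gtfield, assuming the v9 import path (the struct and field names are illustrative, not from the package docs):

    package main

    import (
        "fmt"
        "time"

        "gopkg.in/go-playground/validator.v9"
    )

    type Event struct {
        Start time.Time `validate:"required"`
        End   time.Time `validate:"required,gtfield=Start"` // End must be after Start
    }

    func main() {
        validate := validator.New()
        err := validate.Struct(Event{Start: time.Now(), End: time.Now().Add(-time.Hour)})
        fmt.Println(err) // reports that 'End' failed on the 'gtfield' tag
    }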
This validates that a string value contains ASCII alpha characters only. This validates that a string value contains ASCII alphanumeric characters only. This validates that a string value contains unicode alpha characters only. This validates that a string value contains unicode alphanumeric characters only. This validates that a string value contains a basic numeric value; basic excludes exponents, etc. For integers or floats it returns true. This validates that a string value contains a valid hexadecimal. This validates that a string value contains a valid hex color, including the hashtag (#). This validates that a string value contains a valid rgb color. This validates that a string value contains a valid rgba color. This validates that a string value contains a valid hsl color. This validates that a string value contains a valid hsla color. This validates that a string value contains a valid email. This may not conform to all possibilities of any RFC standard, but neither does any email provider accept all possibilities. This validates that a string value contains a valid file path and that the file exists on the machine. This is done using os.Stat, which is a platform-independent function. This validates that a string value contains a valid URL. This will accept any URL the golang request URI accepts, but it must contain a scheme, for example http:// or rtmp://. This validates that a string value contains a valid URI. This will accept any URI the golang request URI accepts. This validates that a string value contains a valid URN according to the RFC 2141 spec. This validates that a string value contains a valid base64 value. Although an empty string is valid base64, this will report an empty string as an error; if you wish to accept an empty string as valid you can use this with the omitempty tag. This validates that a string value contains a valid base64 URL safe value according to the RFC 4648 spec. Although an empty string is a valid base64 URL safe value, this will report an empty string as an error; if you wish to accept an empty string as valid you can use this with the omitempty tag. This validates that a string value contains a valid bitcoin address. The format of the string is checked to ensure it matches one of the expected formats (P2PKH, P2SH) and performs checksum validation. Bitcoin Bech32 Address (segwit) This validates that a string value contains a valid bitcoin Bech32 address as defined by bip-0173 (https://github.com/bitcoin/bips/blob/master/bip-0173.mediawiki). Special thanks to Pieter Wuille for providing reference implementations. This validates that a string value contains a valid ethereum address. The format of the string is checked to ensure it matches the standard Ethereum address format. Full validation is blocked by https://github.com/golang/crypto/pull/28. This validates that a string value contains the substring value. This validates that a string value contains any Unicode code points in the substring value. This validates that a string value contains the supplied rune value. This validates that a string value does not contain the substring value. This validates that a string value does not contain any Unicode code points in the substring value. This validates that a string value does not contain the supplied rune value. This validates that a string value starts with the supplied string value. This validates that a string value ends with the supplied string value. This validates that a string value contains a valid isbn10 or isbn13 value.
This validates that a string value contains a valid isbn10 value. This validates that a string value contains a valid isbn13 value. This validates that a string value contains a valid UUID. Uppercase UUID values will not pass - use `uuid_rfc4122` instead. This validates that a string value contains a valid version 3 UUID. Uppercase UUID values will not pass - use `uuid3_rfc4122` instead. This validates that a string value contains a valid version 4 UUID. Uppercase UUID values will not pass - use `uuid4_rfc4122` instead. This validates that a string value contains a valid version 5 UUID. Uppercase UUID values will not pass - use `uuid5_rfc4122` instead. This validates that a string value contains only ASCII characters. NOTE: if the string is blank, this validates as true. This validates that a string value contains only printable ASCII characters. NOTE: if the string is blank, this validates as true. This validates that a string value contains one or more multibyte characters. NOTE: if the string is blank, this validates as true. This validates that a string value contains a valid DataURI. NOTE: this will also validate that the data portion is valid base64. This validates that a string value contains a valid latitude. This validates that a string value contains a valid longitude. This validates that a string value contains a valid U.S. Social Security Number. This validates that a string value contains a valid IP Address. This validates that a string value contains a valid v4 IP Address. This validates that a string value contains a valid v6 IP Address. This validates that a string value contains a valid CIDR Address. This validates that a string value contains a valid v4 CIDR Address. This validates that a string value contains a valid v6 CIDR Address. This validates that a string value contains a valid resolvable TCP Address. This validates that a string value contains a valid resolvable v4 TCP Address. This validates that a string value contains a valid resolvable v6 TCP Address. This validates that a string value contains a valid resolvable UDP Address. This validates that a string value contains a valid resolvable v4 UDP Address. This validates that a string value contains a valid resolvable v6 UDP Address. This validates that a string value contains a valid resolvable IP Address. This validates that a string value contains a valid resolvable v4 IP Address. This validates that a string value contains a valid resolvable v6 IP Address. This validates that a string value contains a valid Unix Address. This validates that a string value contains a valid MAC Address. Note: See Go's ParseMAC for accepted formats and types: This validates that a string value is a valid Hostname according to RFC 952 https://tools.ietf.org/html/rfc952 This validates that a string value is a valid Hostname according to RFC 1123 https://tools.ietf.org/html/rfc1123 Fully Qualified Domain Name (FQDN) This validates that a string value contains a valid FQDN. This validates that a string value appears to be an HTML element tag, including those described at https://developer.mozilla.org/en-US/docs/Web/HTML/Element This validates that a string value is a proper character reference in decimal or hexadecimal format. This validates that a string value is percent-encoded (URL encoded) according to https://tools.ietf.org/html/rfc3986#section-2.1 This validates that a string value contains a valid directory and that it exists on the machine. This is done using os.Stat, which is a platform-independent function.
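As a brief sketch of the string-format validators above (the tag choices are illustrative; the struct can be checked with validate.Struct as in the earlier example):

    type User struct {
        Email string `validate:"required,email"`
        UUID  string `validate:"omitempty,uuid4"`
        Home  string `validate:"omitempty,url"`
        IP    string `validate:"omitempty,ip"`
    }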
NOTE: When returning an error, the tag returned in "FieldError" will be the alias tag unless the dive tag is part of the alias. Everything after the dive tag is not reported as the alias tag. Also, in that case the "ActualTag" will be the actual tag within the alias that failed. Here is a list of the current built-in alias tags: Validator notes: A collection of validation rules that are frequently needed but are more complex than the ones found in the baked-in validators. A non-standard validator must be registered manually like you would with your own custom validation functions. Example of registration and use: Here is a list of the current non-standard validators: This package panics when bad input is provided; this is by design, as bad code like that should not make it to production.
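As a concrete companion to the registration and error-handling notes above, here is a minimal sketch assuming the v9 API (the "is-awesome" tag and "rgbish" alias are hypothetical names for illustration):

    validate := validator.New()

    // Register a custom validation function.
    _ = validate.RegisterValidation("is-awesome", func(fl validator.FieldLevel) bool {
        return fl.Field().String() == "awesome"
    })

    // Register an alias tag that expands to other tags.
    validate.RegisterAlias("rgbish", "rgb|rgba")

    if err := validate.Var("not-awesome", "is-awesome"); err != nil {
        // As described above: check for InvalidValidationError if necessary,
        // otherwise type-assert to ValidationErrors.
        for _, fe := range err.(validator.ValidationErrors) {
            fmt.Println(fe.Tag(), fe.ActualTag())
        }
    }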
Package cmd runs external commands with concurrent access to output and status. It wraps the Go standard library os/exec.Command to correctly handle reading output (STDOUT and STDERR) while a command is running, as well as killing a command. All operations are safe to call from multiple goroutines. A basic example that runs env and prints its output: Commands can be run synchronously (blocking) or asynchronously (non-blocking): Start returns a channel to which the final Status is sent when the command finishes for any reason. The first example blocks receiving on the channel. The second example is non-blocking because it saves the channel and receives on it later. Only one final status is sent to the channel; use Done for multiple goroutines to wait for the command to finish, then call Status to get the final status.
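A minimal sketch of both modes, assuming the github.com/go-cmd/cmd import path:

    package main

    import (
        "fmt"

        "github.com/go-cmd/cmd"
    )

    func main() {
        // Synchronous (blocking): receive the final Status directly.
        envCmd := cmd.NewCmd("env")
        status := <-envCmd.Start()
        for _, line := range status.Stdout {
            fmt.Println(line)
        }

        // Asynchronous (non-blocking): save the channel, receive later.
        lsCmd := cmd.NewCmd("ls", "-l")
        statusChan := lsCmd.Start()
        // ... do other work ...
        finalStatus := <-statusChan
        fmt.Println("exit code:", finalStatus.Exit)
    }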
Package kyber provides a toolbox of advanced cryptographic primitives, for applications that need more than straightforward signing and encryption. This top level package defines the interfaces to cryptographic primitives designed to be independent of specific cryptographic algorithms, to facilitate upgrading applications to new cryptographic algorithms or switching to alternative algorithms for experimentation purposes. This toolkit's public-key crypto API includes a kyber.Group interface supporting a broad class of group-based public-key primitives including DSA-style integer residue groups and elliptic curve groups. Users of this API can write higher-level crypto algorithms such as zero-knowledge proofs without knowing or caring exactly what kind of group, let alone which precise security parameters or elliptic curves, are being used. The kyber.Group interface supports the standard algebraic operations on group elements and scalars that nontrivial public-key algorithms tend to rely on. The interface uses additive group terminology typical for elliptic curves, such that point addition is homomorphically equivalent to adding their (potentially secret) scalar multipliers. But the API and its operations apply equally well to DSA-style integer groups. As a trivial example, generating a public/private keypair is as simple as: The first statement picks a private key (Scalar) from the suite's source of cryptographic random or pseudo-random bits, while the second performs elliptic curve scalar multiplication of the curve's standard base point (indicated by the 'nil' argument to Mul) by the scalar private key 'a'. Similarly, computing a Diffie-Hellman shared secret using Alice's private key 'a' and Bob's public key 'B' can be done via: Note that we use 'Mul' rather than 'Exp' here because the library uses the additive-group terminology common for elliptic curve crypto, rather than the multiplicative-group terminology of traditional integer groups - but the two are semantically equivalent and the interface itself works for both elliptic curve and integer groups. Various sub-packages provide several specific implementations of these cryptographic interfaces. In particular, the 'group/mod' sub-package provides implementations of modular integer groups underlying conventional DSA-style algorithms. The 'group/nist' package provides NIST-standardized elliptic curves built on the Go crypto library. The 'group/edwards25519' sub-package provides the kyber.Group interface using the popular Ed25519 curve. Other sub-packages build more interesting high-level cryptographic tools atop these primitive interfaces, including: - share: Polynomial commitment and verifiable Shamir secret splitting for implementing verifiable 't-of-n' threshold cryptographic schemes. This can be used to encrypt a message so that any 2 out of 3 receivers must work together to decrypt it, for example. - proof: An implementation of the general Camenisch/Stadler framework for discrete logarithm knowledge proofs. This system supports both interactive and non-interactive proofs of a wide variety of statements such as, "I know the secret x associated with public key X or I know the secret y associated with public key Y", without revealing anything about either secret or even which branch of the "or" clause is true. - sign: The sign directory contains different signature schemes.
- sign/anon provides anonymous and pseudonymous public-key encryption and signing, where the sender of a signed message or the receiver of an encrypted message is defined as an explicit anonymity set containing several public keys rather than just one. For example, a member of an organization's board of trustees might prove to be a member of the board without revealing which member she is. - sign/cosi provides a collective signature algorithm, where a group of signers create a unique, compact and efficiently verifiable signature using the Schnorr signature as a basis. - sign/eddsa provides a kyber-native implementation of the EdDSA signature scheme. - sign/schnorr provides a basic vanilla Schnorr signature scheme implementation. - shuffle: Verifiable cryptographic shuffles of ElGamal ciphertexts, which can be used to implement (for example) voting or auction schemes that keep the sources of individual votes or bids private without anyone having to trust more than one of the shufflers to shuffle votes/bids honestly. As should be obvious, this library is intended to be used by developers who are at least moderately knowledgeable about cryptography. If you want a crypto library that makes it easy to implement "basic crypto" functionality correctly - i.e., plain public-key encryption and signing - then NaCl secretbox (https://godoc.org/golang.org/x/crypto/nacl/secretbox) may be a better choice. This toolkit's purpose is to make it possible - and preferably easy - to do slightly more interesting things that most current crypto libraries don't support effectively. The one existing crypto library that this toolkit is probably most comparable to is the Charm rapid prototyping library for Python (https://charm-crypto.com/category/charm). This library incorporates and/or builds on existing code from a variety of sources, as documented in the relevant sub-packages. This library is offered as-is, and without a guarantee. It will need an independent security review before it should be considered ready for use in security-critical applications. If you integrate Kyber into your application it is YOUR RESPONSIBILITY to arrange for that audit. If you notice a possible security problem, please report it to dedis-security@epfl.ch.
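A minimal sketch of the keypair and Diffie-Hellman steps described above, assuming the v3 module path and the edwards25519 suite:

    package main

    import (
        "fmt"

        "go.dedis.ch/kyber/v3/group/edwards25519"
    )

    func main() {
        suite := edwards25519.NewBlakeSHA256Ed25519()

        // Alice picks a private scalar and derives her public point.
        a := suite.Scalar().Pick(suite.RandomStream())
        A := suite.Point().Mul(a, nil) // nil selects the standard base point

        // Bob does the same.
        b := suite.Scalar().Pick(suite.RandomStream())
        B := suite.Point().Mul(b, nil)

        // Diffie-Hellman: a*B == b*A.
        fmt.Println(suite.Point().Mul(a, B).Equal(suite.Point().Mul(b, A))) // true
    }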
Package is provides a lightweight extension to the standard library's testing capabilities. Comments on the assertion lines are used to add a description. The following failing test: Will output: The following code shows a range of useful ways you can use the helper methods:
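A short sketch of typical usage, assuming the github.com/matryer/is import path (Divide is a hypothetical function under test):

    func TestDivide(t *testing.T) {
        is := is.New(t)

        got, err := Divide(10, 2)
        is.NoErr(err)    // divide should not error
        is.Equal(got, 5) // result should be 5
    }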
Package chaincfg defines chain configuration parameters. In addition to the main Decred network, which is intended for the transfer of monetary value, there also exist two currently active standard networks: regression test and testnet (version 0). These networks are incompatible with each other (each having a different genesis block), and software should handle errors where input intended for one network is used on an application instance running on a different network. For library packages, chaincfg provides the ability to look up chain parameters and encoding magics when passed a *Params. Older APIs not updated to the new convention of passing a *Params may look up the parameters for a wire.DecredNet using ParamsForNet, but be aware that this usage is deprecated and will be removed from chaincfg in the future. For main packages, a (typically global) var may be assigned the address of one of the standard Param vars for use as the application's "active" network. When a network parameter is needed, it may then be looked up through this variable (either directly, or hidden in a library call). If an application does not use one of the three standard Decred networks, a new Params struct may be created which defines the parameters for the non-standard network. As a general rule of thumb, all network parameters should be unique to the network, but parameter collisions can still occur (unfortunately, this is the case with regtest and testnet sharing magics).
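A minimal sketch of the "active network" pattern described above, assuming the exported MainNetParams var and the pre-module import path:

    package main

    import (
        "fmt"

        "github.com/decred/dcrd/chaincfg"
    )

    // The application's "active" network.
    var activeNetParams = &chaincfg.MainNetParams

    func main() {
        fmt.Println("active network:", activeNetParams.Name)
    }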
Package properties provides functions for reading and writing ISO-8859-1 and UTF-8 encoded .properties files and has support for recursive property expansion. Java properties files are ISO-8859-1 encoded and use Unicode literals for characters outside the ISO character set. Unicode literals can be used in UTF-8 encoded properties files but aren't necessary. To load a single properties file use MustLoadFile(). To load multiple properties files use MustLoadFiles(), which loads the files in the given order and merges the result. Missing properties files can be ignored if the 'ignoreMissing' flag is set to true. Filenames can contain environment variables which are expanded before loading. All of the different key/value delimiters ' ', ':' and '=' are supported as well as the comment characters '!' and '#' and multi-line values. Properties stores all comments preceding a key and provides GetComments() and SetComments() methods to retrieve and update them. The convenience functions GetComment() and SetComment() allow access to the last comment. The WriteComment() method writes properties files including the comments and with the keys in the original order. This can be used for sanitizing properties files. Property expansion is recursive; circular references and malformed expressions are not allowed and cause an error. Expansion of environment variables is supported. The default property expansion format is ${key} but can be changed by setting different pre- and postfix values on the Properties object. Properties provides convenience functions for getting typed values with default values if the key does not exist or the type conversion failed. As an alternative, properties may be applied with the standard library's flag implementation at any time. Properties provides several MustXXX() convenience functions which will terminate the app if an error occurs. The behavior of the failure is configurable and the default is to call log.Fatal(err). To have the MustXXX() functions panic instead of logging the error, set a different ErrorHandler before you use the Properties package. You can also provide your own ErrorHandler function. The only requirement is that the error handler function must exit after handling the error. Properties can also be loaded into a struct via the Decode method; see the Decode() method for the full documentation. The following documents provide a description of the properties file format. http://en.wikipedia.org/wiki/.properties http://docs.oracle.com/javase/7/docs/api/java/util/Properties.html#load%28java.io.Reader%29
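A short sketch of loading a file and reading typed values with defaults, assuming the github.com/magiconair/properties import path (the file name is illustrative):

    package main

    import (
        "log"

        "github.com/magiconair/properties"
    )

    func main() {
        // Environment variables in the filename are expanded before loading.
        p := properties.MustLoadFile("${HOME}/config.properties", properties.UTF8)

        // Typed getters fall back to the default if the key is missing
        // or the conversion fails.
        host := p.GetString("host", "localhost")
        port := p.GetInt("port", 8080)
        log.Printf("%s:%d", host, port)
    }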
Package gosnowflake is a pure Go Snowflake driver for the database/sql package. Clients can use the database/sql package directly. For example: Use the Open() function to create a database handle with connection parameters: The Go Snowflake Driver supports the following connection syntaxes (or data source name (DSN) formats): where all parameters must be escaped, or use Config and DSN to construct a DSN string. For information about account identifiers, see the Snowflake documentation (https://docs.snowflake.com/en/user-guide/admin-account-identifier.html). The following example opens a database handle with the Snowflake account named "my_account" under the organization named "my_organization", where the username is "jsmith", password is "mypassword", database is "mydb", schema is "testschema", and warehouse is "mywh": The connection string (DSN) can contain both connection parameters (described below) and session parameters (https://docs.snowflake.com/en/sql-reference/parameters.html). The following connection parameters are supported: account <string>: Specifies your Snowflake account, where "<string>" is the account identifier assigned to your account by Snowflake. For information about account identifiers, see the Snowflake documentation (https://docs.snowflake.com/en/user-guide/admin-account-identifier.html). If you are using a global URL, then append the connection group and ".global" (e.g. "<account_identifier>-<connection_group>.global"). The account identifier and the connection group are separated by a dash ("-"), as shown above. This parameter is optional if your account identifier is specified after the "@" character in the connection string. region <string>: DEPRECATED. You may specify a region, such as "eu-central-1", with this parameter. However, since this parameter is deprecated, it is best to specify the region as part of the account parameter. For details, see the description of the account parameter. database: Specifies the database to use by default in the client session (can be changed after login). schema: Specifies the database schema to use by default in the client session (can be changed after login). warehouse: Specifies the virtual warehouse to use by default for queries, loading, etc. in the client session (can be changed after login). role: Specifies the role to use by default for accessing Snowflake objects in the client session (can be changed after login). passcode: Specifies the passcode provided by Duo when using multi-factor authentication (MFA) for login. passcodeInPassword: false by default. Set to true if the MFA passcode is embedded in the login password. Appends the MFA passcode to the end of the password. loginTimeout: Specifies the timeout, in seconds, for login. The default is 60 seconds. The login request gives up after the timeout length if no successful HTTP response is received. requestTimeout: Specifies the timeout, in seconds, for a query to complete. 0 (zero) specifies that the driver should wait indefinitely. The default is 0 seconds. The query request gives up after the timeout length if no successful HTTP response is received. authenticator: Specifies the authenticator to use for authenticating user credentials: To use the internal Snowflake authenticator, specify snowflake (default). If you want to cache your MFA logins, use the AuthTypeUsernamePasswordMFA authenticator. To use programmatic access tokens, specify programmatic_access_token. To authenticate through Okta, specify https://<okta_account_name>.okta.com (URL prefix for Okta).
To authenticate using your IDP via a browser, specify externalbrowser. To authenticate via OAuth with a token, specify oauth and provide an OAuth Access Token (see the token parameter below). To authenticate via the full OAuth flow, specify oauth_authorization_code or oauth_client_credentials and fill in the relevant parameters (oauthClientId, oauthClientSecret, oauthAuthorizationUrl, oauthTokenRequestUrl, oauthRedirectUri, oauthScope). Specify URLs if you want to use an external OAuth2 IdP, otherwise Snowflake will be used as the default IdP. If oauthScope is not configured, the role is used (giving the session:role:<roleName> scope). For more information, please refer to the official Snowflake documentation. application: Identifies your application to Snowflake Support. disableOCSPChecks: false by default. Set to true to bypass the Online Certificate Status Protocol (OCSP) certificate revocation check. IMPORTANT: Change the default value for testing or emergency situations only. insecureMode: deprecated. Use disableOCSPChecks instead. token: a token that can be used to authenticate. Should be used in conjunction with the "oauth" authenticator. client_session_keep_alive: Set to true to have a heartbeat in the background every hour to keep the connection alive, such that the connection session will never expire. Care should be taken in using this option as it keeps the access open for as long as the process is alive. ocspFailOpen: true by default. Set to false to make the OCSP check fail-closed. validateDefaultParameters: true by default. Set to false to disable checks on the existence of, and privileges for, the Database, Schema, Warehouse and Role when setting up the connection. tracing: Specifies the logging level to be used. Set to error by default. Valid values are trace, debug, info, print, warning, error, fatal, panic. disableQueryContextCache: disables parsing of the query context returned from the server and resending it to the server. Default value is false. clientConfigFile: specifies the location of the client configuration JSON file, in which you can configure the Easy Logging feature. disableSamlURLCheck: disables the SAML URL check. Default value is false. All other parameters are interpreted as session parameters (https://docs.snowflake.com/en/sql-reference/parameters.html). For example, the TIMESTAMP_OUTPUT_FORMAT session parameter can be set by adding: A complete connection string looks similar to the following: Session-level parameters can also be set by using the SQL command "ALTER SESSION" (https://docs.snowflake.com/en/sql-reference/sql/alter-session.html). Alternatively, use the OpenWithConfig() function to create a database handle with the specified Config. You can also connect to your warehouse using the connection config. The database/sql documentation recommends this approach when you want to take advantage of driver-specific connection features that aren't available in a connection string; each driver supports its own set of connection properties, often providing ways to customize the connection request specific to the DBMS. For example: If you are using this method, you don't need to pass a driver name to specify the driver type you want to connect with. Since the driver name is not needed, you can optionally bypass driver registration on startup. To do this, set `GOSNOWFLAKE_SKIP_REGISTERATION` in your environment. This is useful if you wish to register multiple versions of the driver.
Note: `GOSNOWFLAKE_SKIP_REGISTERATION` should not be used if sql.Open() is used as the method to connect to the server, as sql.Open will require registration so it can map the driver name to the driver type, which in this case is "snowflake" and SnowflakeDriver{}. You can load the connection configuration from a .toml file. With two environment variables, `SNOWFLAKE_HOME` (the `connections.toml` file directory) and `SNOWFLAKE_DEFAULT_CONNECTION_NAME` (the DSN name), the driver will search the config file and load the connection. You can find out how to use this connection method at ./cmd/tomlfileconnection or in the Snowflake docs: https://docs.snowflake.com/en/developer-guide/snowflake-cli-v2/connecting/specify-credentials The Go Snowflake Driver honors the environment variables HTTP_PROXY, HTTPS_PROXY and NO_PROXY for the forward proxy setting. NO_PROXY specifies which hostname endings should be allowed to bypass the proxy server, e.g. no_proxy=.amazonaws.com means that Amazon S3 access does not need to go through the proxy. NO_PROXY does not support wildcards. Each value specified should be one of the following: The end of a hostname (or a complete hostname), for example: ".amazonaws.com" or "xy12345.snowflakecomputing.com". An IP address, for example "192.196.1.15". If more than one value is specified, values should be separated by commas, for example: By default, the driver's built-in logger exposes logrus's FieldLogger and defaults to INFO level. Users can use SetLogger in driver.go to set a customized logger for the gosnowflake package. In order to enable debug logging for the driver, users can use SetLogLevel("debug") in the SFLogger interface, as shown in the demo code at cmd/logger.go. To redirect the logs, the SFLogger.SetOutput method can do the work. If you want to define S3 client logging, override the S3LoggingMode variable using configuration: https://pkg.go.dev/github.com/aws/aws-sdk-go-v2/aws#ClientLogMode Example: A custom query tag can be set in the context. Each query run with this context will include the custom query tag as metadata that will appear in the Query Tag column in the Query History log. For example: A specific query request ID can be set in the context and will be passed through in place of the default randomized request ID. For example: If you need the query ID for your query, you have to use the raw connection. For queries: For execs: The result of your query can be retrieved by setting the query ID in the WithFetchResultByID context. From 0.5.0, signal handling responsibility has moved to the applications. If you want to cancel a query/command by Ctrl+C, add an os.Interrupt trap in context to execute methods that can take the context parameter (e.g. QueryContext, ExecContext). See cmd/selectmany.go for the full example. The Go Snowflake Driver now supports the Arrow data format for data transfers between Snowflake and the Golang client. The Arrow data format avoids extra conversions between binary and textual representations of the data. The Arrow data format can improve performance and reduce memory consumption in clients. Snowflake continues to support the JSON data format. The data format is controlled by the session-level parameter GO_QUERY_RESULT_FORMAT. To use JSON format, execute: The valid values for the parameter are: If the user attempts to set the parameter to an invalid value, an error is returned. The parameter name and the parameter value are case-insensitive. This parameter can be set only at the session level.
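A minimal sketch of opening a handle with a DSN of the shape described above and switching the session result format (the account, credentials and object names are illustrative):

    package main

    import (
        "database/sql"
        "log"

        _ "github.com/snowflakedb/gosnowflake"
    )

    func main() {
        db, err := sql.Open("snowflake", "jsmith:mypassword@my_organization-my_account/mydb/testschema?warehouse=mywh")
        if err != nil {
            log.Fatal(err)
        }
        defer db.Close()

        // Switch the session to the JSON result format.
        if _, err = db.Exec("ALTER SESSION SET GO_QUERY_RESULT_FORMAT = 'JSON'"); err != nil {
            log.Fatal(err)
        }
    }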
Usage notes: The Arrow data format reduces rounding errors in floating point numbers. You might see slightly different values for floating point numbers when using Arrow format than when using JSON format. In order to take advantage of the increased precision, you must pass in the context.Context object provided by the WithHigherPrecision function when querying. Traditionally, the rows.Scan() method returned a string when a variable of type interface{} was passed in. Turning on the flag ENABLE_HIGHER_PRECISION via WithHigherPrecision will return the natural, expected data type as well. For some numeric data types, the driver can retrieve larger values when using the Arrow format than when using the JSON format. For example, using Arrow format allows the full range of SQL NUMERIC(38,0) values to be retrieved, while using JSON format allows only values in the range supported by the Golang int64 data type. Users should ensure that Golang variables are declared using the appropriate data type for the full range of values contained in the column. For an example, see below. When using the Arrow format, the driver supports more Golang data types and more ways to convert SQL values to those Golang data types. The table below lists the supported Snowflake SQL data types and the corresponding Golang data types. The columns are: 1. The SQL data type. 2. The default Golang data type that is returned when you use snowflakeRows.Scan() to read data from Arrow data format via an interface{}. 3. The possible Golang data types that can be returned when you use snowflakeRows.Scan() to read data from Arrow data format directly. 4. The default Golang data type that is returned when you use snowflakeRows.Scan() to read data from JSON data format via an interface{}. (All returned values are strings.) 5. The standard Golang data type that is returned when you use snowflakeRows.Scan() to read data from JSON data format directly.
Go Data Types for Scan()

    ===================================================================================================================
                         |                       ARROW                    |                     JSON
    ===================================================================================================================
    SQL Data Type        | Default Go Data Type        | Supported Go Data | Default Go Data Type   | Supported Go Data
                         | for Scan() interface{}      | Types for Scan()  | for Scan() interface{} | Types for Scan()
    ===================================================================================================================
    BOOLEAN              | bool                                            | string                 | bool
    -------------------------------------------------------------------------------------------------------------------
    VARCHAR              | string                                          | string
    -------------------------------------------------------------------------------------------------------------------
    DOUBLE               | float32, float64 [1], [2]                       | string                 | float32, float64
    -------------------------------------------------------------------------------------------------------------------
    INTEGER that         | int, int8, int16, int32, int64                  | string                 | int, int8, int16,
    fits in int64        | [1], [2]                                        |                        | int32, int64
    -------------------------------------------------------------------------------------------------------------------
    INTEGER that doesn't | int, int8, int16, int32, int64, *big.Int        | string                 | error
    fit in int64         | [1], [2], [3], [4]                              |
    -------------------------------------------------------------------------------------------------------------------
    NUMBER(P, S)         | float32, float64, *big.Float                    | string                 | float32, float64
    where S > 0          | [1], [2], [3], [5]                              |
    -------------------------------------------------------------------------------------------------------------------
    DATE                 | time.Time                                       | string                 | time.Time
    -------------------------------------------------------------------------------------------------------------------
    TIME                 | time.Time                                       | string                 | time.Time
    -------------------------------------------------------------------------------------------------------------------
    TIMESTAMP_LTZ        | time.Time                                       | string                 | time.Time
    -------------------------------------------------------------------------------------------------------------------
    TIMESTAMP_NTZ        | time.Time                                       | string                 | time.Time
    -------------------------------------------------------------------------------------------------------------------
    TIMESTAMP_TZ         | time.Time                                       | string                 | time.Time
    -------------------------------------------------------------------------------------------------------------------
    BINARY               | []byte                                          | string                 | []byte
    -------------------------------------------------------------------------------------------------------------------
    ARRAY [6]            | string / array                                  | string / array
    -------------------------------------------------------------------------------------------------------------------
    OBJECT [6]           | string / struct                                 | string / struct
    -------------------------------------------------------------------------------------------------------------------
    VARIANT              | string                                          | string
    -------------------------------------------------------------------------------------------------------------------
    MAP                  | map                                             | map

[1] Converting from a higher precision data type to a lower precision data type via the snowflakeRows.Scan() method can lose low bits (lose precision), lose high bits (completely change the value), or result in error. [2] Attempting to convert from a higher precision data type to a lower precision data type via interface{} causes an error.
[3] Higher precision data types like *big.Int and *big.Float can be accessed by querying with a context returned by WithHigherPrecision(). [4] You cannot directly Scan() into the alternative data types via snowflakeRows.Scan(), but you can convert to those data types by using the .Int64()/.String()/.Uint64() methods. For an example, see below. [5] You cannot directly Scan() into the alternative data types via snowflakeRows.Scan(), but you can convert to those data types by using the .Float32()/.String()/.Float64() methods. For an example, see below. [6] Arrays and objects can be either semistructured or structured; see more info in the section below. Note: SQL NULL values are converted to Golang nil values, and vice-versa. Snowflake supports two flavours of "structured data" - semistructured and structured. Semistructured types are variants, objects and arrays without a schema. When data is fetched, it's represented as strings and the client is responsible for its interpretation. Example table definition: The data does not have any corresponding schema, so values in the table may differ slightly. Semistructured variants, objects and arrays are always represented as strings for scanning: When inserting, a marker indicating the correct type must be used, for example: Structured types differ from semistructured types by having a specific schema. In all rows of the table, values must conform to this schema. Example table definition: To retrieve structured objects, follow these steps: 1. Create a struct implementing the sql.Scanner interface, example: a) b) Automatic scan goes through all fields in a struct and reads object fields. Struct fields have to be public. Embedded structs have to be pointers. The matching name is built using the struct field name with the first letter lowercased. Additionally, an `sf` tag can be added: - the first value is always the name of a field in the SQL object - additionally, the `ignore` parameter can be passed to omit this field 2. Use the WithStructuredTypesEnabled context while querying data. 3. Use it in a regular scan: See StructuredObject for all available operations including null support, embedding nested structs, etc. Retrieving arrays of simple types works exactly the same as for normal values - using the Scan function. You can use the WithMapValuesNullable and WithArrayValuesNullable contexts to handle null values in, respectively, maps and arrays of simple types in the database. In that case, sql null types will be used: If you want to scan an array of structs, you have to use the helper function ScanArrayOfScanners: Retrieving structured maps is very similar to retrieving arrays: To bind structured objects use: 1. Create a type which implements the StructuredObjectWriter interface, example: a) b) 2. Use an instance as a regular bind. 3. If you need to bind a nil value, use special syntax: Binding structured arrays is like binding any other parameter. The only difference is: if you want to insert an empty array (not nil but empty), you have to use: The following example shows how to retrieve very large values using the math/big package. This example retrieves a large INTEGER value to an interface and then extracts a big.Int value from that interface. If the value fits into an int64, then the code also copies the value to a variable of type int64. Note that a context that enables higher precision must be passed in with the query.
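A sketch of that flow, assuming an open db handle as in the earlier example (imports of context, fmt, log, math/big and gosnowflake are assumed); WithHigherPrecision makes the driver return *big.Int for integers outside the int64 range:

    ctx := gosnowflake.WithHigherPrecision(context.Background())
    rows, err := db.QueryContext(ctx, "SELECT 12345678901234567890123456789012345678")
    if err != nil {
        log.Fatal(err)
    }
    defer rows.Close()

    var v interface{}
    for rows.Next() {
        if err := rows.Scan(&v); err != nil {
            log.Fatal(err)
        }
        if b, ok := v.(*big.Int); ok {
            fmt.Println(b.String())
            if b.IsInt64() {
                n := b.Int64() // also keep an int64 copy when the value fits
                _ = n
            }
        }
    }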
If the variable named "rows" is known to contain a big.Int, then you can use the following instead of scanning into an interface and then converting to a big.Int: If the variable named "rows" contains a big.Int, then each of the following fails: Similar code and rules also apply to big.Float values. If you are not sure what data type will be returned, you can use code similar to the following to check the data type of the returned value: You can retrieve data in a columnar format similar to the format a server returns, without transposing them to rows. When working with the Arrow columnar format in the Go driver, ArrowBatch structs are used. These are structs mostly corresponding to data chunks received from the backend. They allow for access to specific arrow.Record structs. An ArrowBatch can exist in a state where the underlying data has not yet been loaded. The data is downloaded and translated only on demand. Translation options are retrieved from a context.Context interface, which is either passed from the query context or set by the user using the WithContext(ctx) method. In order to access them you must use the `WithArrowBatches` context, similar to the following: This returns []*ArrowBatch. ArrowBatch functions: GetRowCount(): Returns the number of rows in the ArrowBatch. Note that this returns 0 if the data has not yet been loaded, irrespective of its actual size. WithContext(ctx context.Context): Sets the context of the ArrowBatch to the one provided. Note that the context will not retroactively apply to data that has already been downloaded. For example: will produce the same result in records1 and records2, irrespective of the newly provided ctx. Contexts worth noting are: WithArrowBatchesTimestampOption, WithHigherPrecision and WithArrowBatchesUtf8Validation, described in more detail later. Fetch(): Returns the underlying records as *[]arrow.Record. When this function is called, the ArrowBatch checks whether the underlying data has already been loaded, and downloads it if not. Limitations: How to handle timestamps in Arrow batches: Snowflake returns timestamps natively (from backend to driver) in multiple formats. The Arrow timestamp is an 8-byte data type, which is insufficient to handle the larger date and time ranges used by Snowflake. Also, Snowflake supports 0-9 (nanosecond) digit precision for seconds, while Arrow supports only 3 (millisecond), 6 (microsecond), and 9 (nanosecond) precision. Consequently, Snowflake uses a custom timestamp format in Arrow, which differs depending on timestamp type and precision. If you want to use timestamps in Arrow batches, you have two options: How to handle invalid UTF-8 characters in Arrow batches: Snowflake previously allowed users to upload data with invalid UTF-8 characters. Consequently, Arrow records containing string columns in Snowflake could include these invalid UTF-8 characters. However, according to the Arrow specifications (https://arrow.apache.org/docs/cpp/api/datatype.html and https://github.com/apache/arrow/blob/a03d957b5b8d0425f9d5b6c98b6ee1efa56a1248/go/arrow/datatype.go#L73-L74), Arrow string columns should only contain UTF-8 characters. To address this issue and prevent potential downstream disruptions, the WithArrowBatchesUtf8Validation context is introduced. When enabled, this feature iterates through all values in string columns, identifying and replacing any invalid characters with `�`.
This ensures that Arrow records conform to the UTF-8 standards, preventing validation failures in downstream services like the Rust Arrow library that impose strict validation checks. How to handle higher precision in Arrow batches: To preserve BigDecimal values within Arrow batches, use WithHigherPrecision. This offers two main benefits: it helps avoid precision loss and defers the conversion to upstream services. Alternatively, without this setting, all non-zero scale numbers will be converted to float64, potentially resulting in loss of precision. Zero-scale numbers (DECIMAL256, DECIMAL128) will be converted to int64, which could lead to overflow. When using NUMBERs with non-zero scale, the value is returned as an integer type and a scale is provided in the record metadata. For example, when we have the value 123.45 coming from NUMBER(9, 4), it will be represented as 1234500 with a scale of 4. It is the client's responsibility to interpret it correctly. Also - see the limitations section above. How to handle JSON responses in Arrow batches: Due to technical limitations, the Snowflake backend may return JSON even if the client expects Arrow. In that case Arrow batches are not available and an error with code ErrNonArrowResponseInArrowBatches is returned. The response is parsed to regular rows. You can read the rows in the way described in the transform_batches_to_rows.go example. This has a strong limitation though - this is a very low-level API (the Go driver API), so there are no conversions ready. All values are returned as strings. An alternative approach is to rerun the query without enabling Arrow batches and use the general Go SQL API instead of the driver API. It can be optimized by using `WithRequestID`, so the backend returns results from the cache. Binding allows a SQL statement to use a value that is stored in a Golang variable. Without binding, a SQL statement specifies values by specifying literals inside the statement. For example, the following statement uses the literal value "42" in an UPDATE statement: With binding, you can execute a SQL statement that uses a value that is inside a variable. For example: The "?" inside the "VALUES" clause specifies that the SQL statement uses the value from a variable. Binding data that involves time zones can require special handling. For details, see the section titled "Timestamps with Time Zones". Version 1.6.23 (and later) of the driver takes advantage of sql.Null types, which enables the proper handling of null parameters inside function calls, i.e.: Timestamp nullability had to be achieved by wrapping the sql.NullTime type, as Snowflake provides several date and time types which are mapped to a single Go time.Time type: Version 1.3.9 (and later) of the Go Snowflake Driver supports the ability to bind an array variable to a parameter in a SQL INSERT statement. You can use this technique to insert multiple rows in a single batch. As an example, the following code inserts rows into a table that contains integer, float, boolean, and string columns. The example binds arrays to the parameters in the INSERT statement. If the array contains SQL NULL values, use the slice []interface{}, which allows Golang nil values. This feature is available in version 1.6.12 (and later) of the driver. For example, For slices []interface{} containing time.Time values, a binding parameter flag is required for the preceding array variable in the Array() function. This feature is available in version 1.6.13 (and later) of the driver.
For example, Note: For alternative ways to load data into the Snowflake database (including bulk loading using the COPY command), see Loading Data into Snowflake (https://docs.snowflake.com/en/user-guide-data-load.html). When you use array binding to insert a large number of values, the driver can improve performance by streaming the data (without creating files on the local machine) to a temporary stage for ingestion. The driver automatically does this when the number of values exceeds a threshold (no changes are needed to user code). In order for the driver to send the data to a temporary stage, the user must have the following privilege on the schema: If the user does not have this privilege, the driver falls back to sending the data with the query to the Snowflake database. In addition, the current database and schema for the session must be set. If these are not set, the CREATE TEMPORARY STAGE command executed by the driver can fail with the following error: For alternative ways to load data into the Snowflake database (including bulk loading using the COPY command), see Loading Data into Snowflake (https://docs.snowflake.com/en/user-guide-data-load.html). Go's database/sql package supports the ability to bind a parameter in a SQL statement to a time.Time variable. However, when the client binds data to send to the server, the driver cannot determine the correct Snowflake date/timestamp data type to associate with the binding parameter. For example: To resolve this issue, a binding parameter flag is introduced that associates any subsequent time.Time type to the DATE, TIME, TIMESTAMP_LTZ, TIMESTAMP_NTZ or BINARY data type. The above example could be rewritten as follows: The driver fetches TIMESTAMP_TZ (timestamp with time zone) data using the offset-based Location types, which represent a collection of time offsets in use in a geographical area, such as CET (Central European Time) or UTC (Coordinated Universal Time). The offset-based Location data is generated and cached when a Go Snowflake Driver application starts, and if the given offset is not in the cache, it is generated dynamically. Currently, Snowflake does not support the name-based Location types (e.g. "America/Los_Angeles"). For more information about Location types, see the Go documentation for https://golang.org/pkg/time/#Location. Internally, this feature leverages the []byte data type. As a result, BINARY data cannot be bound without the binding parameter flag. In the following example, sf is an alias for the gosnowflake package: The driver directly downloads a result set from the cloud storage if the size is large. It is required to shift workloads from the Snowflake database to the clients for scale. The download is performed asynchronously by a goroutine named "Chunk Downloader", so that the driver can fetch the next result set while the application consumes the current result set. The application may change the number of result set chunk downloaders if required. Note that this does not help reduce the memory footprint by itself. Consider the Custom JSON Decoder. Custom JSON Decoder for Parsing Result Set (Experimental) The application may have the driver use a custom JSON decoder that incrementally parses the result set as follows. This option will reduce the memory footprint to half or even a quarter, but it can significantly degrade performance depending on the environment. The test cases running on a Travis Ubuntu box show five times less memory footprint while being four times slower. Be cautious when using the option.
The Go Snowflake Driver supports JWT (JSON Web Token) authentication. To enable this feature, construct the DSN with the fields "authenticator=SNOWFLAKE_JWT&privateKey=<your_private_key>", or use a Config structure specifying: The <your_private_key> should be a base64 URL encoded PKCS8 rsa private key string. One way to encode a byte slice to base64 URL format is through the base64.URLEncoding.EncodeToString() function. On the server side, you can alter the public key with the SQL command: The <your_public_key> should be a base64 Standard encoded PKI public key string. One way to encode a byte slice to base64 Standard format is through the base64.StdEncoding.EncodeToString() function. To generate a valid key pair, you can execute the following commands in the shell: Note: As of February 2020, Golang's official library does not support passcode-encrypted PKCS8 private keys. For security purposes, Snowflake highly recommends that you store the passcode-encrypted private key on disk and decrypt the key in your application using a library you trust. JWT tokens are recreated on each retry and are valid (`exp` claim) for `jwtTimeout` seconds. Each retry timeout is configured by `jwtClientTimeout`. Retries are limited by the total time of `loginTimeout`. The driver allows authentication using an external browser. When a connection is created, the driver will open a browser window and ask the user to sign in. To enable this feature, construct the DSN with the field "authenticator=EXTERNALBROWSER" or use a Config structure with the following Authenticator specified: The external browser authentication implements a timeout mechanism, which prevents the driver from hanging indefinitely when the browser window is closed or not responding. The timeout defaults to 120s and can be changed by setting the DSN field "externalBrowserTimeout=240" (time in seconds) or using a Config structure with the following ExternalBrowserTimeout specified: This feature is available in version 1.3.8 or later of the driver. By default, Snowflake returns an error for queries issued with multiple statements. This restriction helps protect against SQL Injection attacks (https://en.wikipedia.org/wiki/SQL_injection). The multi-statement feature allows users to skip this restriction and execute multiple SQL statements through a single Golang function call. However, this opens up the possibility of SQL injection, so it should be used carefully. The risk can be reduced by specifying the exact number of statements to be executed, which makes it more difficult to inject a statement by appending it. More details are below. The Go Snowflake Driver provides two functions that can execute multiple SQL statements in a single call: To compose a multi-statement query, simply create a string that contains all the queries, separated by semicolons, in the order in which the statements should be executed. To protect against SQL Injection attacks while using the multi-statement feature, pass a Context that specifies the number of statements in the string. For example: When multiple queries are executed by a single call to QueryContext(), multiple result sets are returned. After you process the first result set, get the next result set (for the next SQL statement) by calling NextResultSet(). The following pseudo-code shows how to process multiple result sets: The function db.ExecContext() returns a single result, which is the sum of the number of rows changed by each individual statement.
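A minimal sketch of pinning the statement count with WithMultiStatement (the statements themselves are illustrative):

    import (
        "context"
        "database/sql"

        sf "github.com/snowflakedb/gosnowflake"
    )

    func runTwoStatements(db *sql.DB) error {
        // Declare that exactly two statements follow; a mismatch is an
        // error, which makes appending an injected statement harder.
        ctx, err := sf.WithMultiStatement(context.Background(), 2)
        if err != nil {
            return err
        }
        _, err = db.ExecContext(ctx,
            "UPDATE t1 SET v = v + 1; UPDATE t2 SET v = v + 1;")
        return err
    }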
For example, if your multi-statement query executed two UPDATE statements, each of which updated 10 rows, then the result returned would be 20. Individual row counts for individual statements are not available. The following code shows how to retrieve the result of a multi-statement query executed through db.ExecContext(): Note: Because a multi-statement ExecContext() returns a single value, you cannot detect offsetting errors. For example, suppose you expected the return value to be 20 because you expected each UPDATE statement to update 10 rows. If one UPDATE statement updated 15 rows and the other updated only 5 rows, the total would still be 20. You would see no indication that the UPDATE statements had not functioned as expected. The ExecContext() function does not return an error if passed a query (e.g. a SELECT statement). However, it still returns only a single value, not a result set, so using it to execute queries (or a mix of queries and non-query statements) is impractical. The QueryContext() function does not return an error if passed non-query statements (e.g. DML). The function returns a result set for each statement, whether or not the statement is a query. For each non-query statement, the result set contains a single row with a single column; the value is the number of rows changed by the statement. If you want to execute a mix of query and non-query statements (e.g. a mix of SELECT and DML statements) in a multi-statement query, use QueryContext(). You can retrieve the result sets for the queries, and you can retrieve or ignore the row counts for the non-query statements. Note: PUT statements are not supported for multi-statement queries. If a SQL statement passed to ExecContext() or QueryContext() fails to compile or execute, that statement is aborted and subsequent statements are not executed. Any statements prior to the aborted statement are unaffected. For example, if the statements below are run as one multi-statement query, the multi-statement query fails on the third statement and an error is returned. If you then query the contents of the table named "test", the values 1 and 2 would be present. When using the QueryContext() and ExecContext() functions, Golang code can check for errors in the usual way. For example: Preparing statements and using bind variables are also not supported for multi-statement queries.
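A sketch of the result-set loop described above, using only standard database/sql calls:

    import (
        "context"
        "database/sql"
    )

    func processResults(ctx context.Context, db *sql.DB, multiStmtQuery string) error {
        rows, err := db.QueryContext(ctx, multiStmtQuery)
        if err != nil {
            return err
        }
        defer rows.Close()
        for {
            for rows.Next() {
                // Scan the columns of the current result set here.
            }
            if !rows.NextResultSet() {
                break // no more result sets remain
            }
        }
        return rows.Err()
    }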
If the query has not yet completed and the snowflakeRows object (named "rows" in this example) has not been filled in yet, then rows.Next() waits until the result set has been filled in. More generally, calls to any Golang SQL API function implemented in snowflakeRows or snowflakeResult are blocking calls and wait if results are not yet available. (Examples of other synchronous calls include: snowflakeRows.Err(), snowflakeRows.Columns(), snowflakeRows.ColumnTypes(), snowflakeRows.Scan(), and snowflakeResult.RowsAffected().) Because the example code above executes only one query and no other activity, there is no significant difference between asynchronous and synchronous behavior. The differences become significant if, for example, you want to perform some other activity after the query starts and before it completes. The example code below starts a query, which runs in the background, and then retrieves the results later. This example uses small SELECT statements that do not retrieve enough data to require asynchronous handling. However, the technique works for larger data sets, and for situations where the programmer might want to do other work after starting the queries and before retrieving the results. For a more elaborate example, please see cmd/async/async.go. The Go Snowflake Driver supports the PUT and GET commands. The PUT command copies a file from a local computer (the computer where the Golang client is running) to a stage on the cloud platform. The GET command copies data files from a stage on the cloud platform to a local computer. See the following for information on the syntax and supported parameters: Using PUT: The following example shows how to run a PUT command by passing a string to the db.Query() function: "<local_file>" should include the file path as well as the name. Snowflake recommends using an absolute path rather than a relative path. For example: Different client platforms (e.g. Linux, Windows) have different path name conventions. Ensure that you specify path names appropriately. This is particularly important on Windows, which uses the backslash character as both an escape character and as a separator in path names. To send information from a stream (rather than a file) use code similar to the code below. (The ReplaceAll() function is needed on Windows to handle backslashes in the path to the file.) Note: PUT statements are not supported for multi-statement queries. Using GET: The following example shows how to run a GET command by passing a string to the db.Query() function: "<local_file>" should include the file path as well as the name. Snowflake recommends using an absolute path rather than a relative path. For example: To download a file into an in-memory stream (rather than a file) use code similar to the code below. Note: GET statements are not supported for multi-statement queries. Specifying a temporary directory for encryption and compression: Putting and getting requires compression and/or encryption, which is done in the OS temporary directory. If you cannot use the default temporary directory for your OS, or you want to specify a different one, you can use the "tmpDirPath" DSN parameter. Remember to encode slashes. Example: Using custom configuration for PUT/GET: If you want to override some of the default configuration options, you can use the `WithFileTransferOptions` context. There are multiple config parameters, including progress bars and compression.
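A minimal sketch of enabling asynchronous mode (the query is illustrative; sf aliases the gosnowflake package):

    import (
        "context"
        "database/sql"

        sf "github.com/snowflakedb/gosnowflake"
    )

    func runAsync(db *sql.DB) error {
        // WithAsyncMode marks the context so QueryContext returns
        // immediately while the driver fetches results in the background.
        ctx := sf.WithAsyncMode(context.Background())
        rows, err := db.QueryContext(ctx, "SELECT 1")
        if err != nil {
            return err
        }
        defer rows.Close()
        // Next() blocks until the result set is available; other work can
        // be performed between QueryContext and this loop.
        for rows.Next() {
            // Scan values here.
        }
        return rows.Err()
    }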
The default behaviour is to propagate to the caller any underlying errors encountered while executing calls associated with the PUT or GET commands, for increased awareness and easier handling and troubleshooting. This behaviour is governed by the `RaisePutGetError` flag on `SnowflakeFileTransferOptions` (default: `true`). If you wish to ignore those errors instead, you can set `RaisePutGetError: false`. Example snippet:
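A sketch of such a snippet (the stage and file names are hypothetical):

    import (
        "context"
        "database/sql"

        sf "github.com/snowflakedb/gosnowflake"
    )

    func putIgnoringErrors(db *sql.DB) error {
        ctx := sf.WithFileTransferOptions(context.Background(),
            &sf.SnowflakeFileTransferOptions{
                RaisePutGetError: false, // swallow underlying PUT/GET errors
            })
        _, err := db.ExecContext(ctx, "PUT file:///tmp/data.csv @my_stage")
        return err
    }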
Package complete is everything for bash completion and Go. Writing bash completion scripts is hard work, usually done in the bash scripting language. This package provides: * A library for bash completion for Go programs. * A tool for writing bash completion scripts in the Go language, for any Go or non-Go program. * Bash completion for the `go` command line (See ./gocomplete). * A library for bash-completion enabled flags (See ./compflag). * An easy way to install/uninstall the completion of the command. The library and tools are extensible such that any program can add its own logic, completion types or methodologies. ./gocomplete is the script for bash completion for the `go` command line. This is an example that uses the `complete` package on the `go` command - the `complete` package can also be used to implement any completions, see #usage. Install: 1. Type in your shell: 2. Restart your shell Uninstall by `COMP_UNINSTALL=1 gocomplete` Features: - Complete `go` command, including sub commands and flags. - Complete package names or `.go` files when necessary. - Complete test names after the `-run` flag. Supported shells: - [x] bash - [x] zsh - [x] fish The installation of completion for a command line tool is done automatically by this library by running the command line tool with the `COMP_INSTALL` environment variable set. Uninstalling the completion is similarly done with the `COMP_UNINSTALL` environment variable. For example, if a tool called `my-cli` uses this library, the completion can be installed by running `COMP_INSTALL=1 my-cli`. Add bash completion capabilities to any Go program. See ./example/command. This package also enables completing flags defined by the standard library `flag` package. To use this feature, simply call `complete.CommandLine` before `flag.Parse`. (See ./example/stdlib). If flag value completion is desired, it can be done by providing the standard library `flag.Var` function a `flag.Value` that also implements the `complete.Predictor` interface. For standard flags with values, it is possible to use the `github.com/posener/complete/v2/compflag` package. (See ./example/compflag). Instead of calling both `complete.CommandLine` and `flag.Parse`, one can call just `compflag.Parse`, which does both. For command line bash completion testing use the `complete.Test` function.
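As a sketch of the flag-completion setup described above (the flag itself is illustrative):

    package main

    import (
        "flag"

        "github.com/posener/complete/v2"
    )

    func main() {
        flag.String("name", "", "give your name")
        // Must run before flag.Parse so that COMP_INSTALL/COMP_UNINSTALL
        // and completion requests are handled first.
        complete.CommandLine()
        flag.Parse()
    }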
Package lumberjack provides a rolling logger. Note that this is v3.0 of lumberjack. Lumberjack is intended to be one part of a logging infrastructure. It is not an all-in-one solution, but instead is a pluggable component at the bottom of the logging stack that simply controls the files to which logs are written. Lumberjack plays well with any logging package that can write to an io.Writer, including the standard library's log package. Lumberjack assumes that only one process is writing to the output files. Using the same lumberjack configuration from multiple processes on the same machine will result in improper behavior. Letting outside processes write to or manipulate the file that lumberjack writes to will also result in improper behavior. To use lumberjack with the standard library's log package, just pass it into the SetOutput function when your application starts.
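A minimal sketch of that wiring; note the Logger fields shown here follow the widely known v2-style API and are an assumption, so consult the v3 documentation for the current constructor and option names:

    package main

    import (
        "log"

        "gopkg.in/natefinch/lumberjack.v2" // field names assumed from the v2 API
    )

    func main() {
        log.SetOutput(&lumberjack.Logger{
            Filename:   "/var/log/myapp/app.log", // hypothetical path
            MaxSize:    100,                      // megabytes before rotation
            MaxBackups: 3,
            MaxAge:     28, // days
        })
        log.Println("logging through a rolling file")
    }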
Package netaddr contains an IP address type that's in many ways better than the Go standard library's net.IP type. Building on that IP type, the package also contains IPPrefix, IPPort, IPRange, and IPSet types. Notably, this package's IP type takes less memory, is immutable, and is comparable (it supports == and can be used as a map key), among other benefits. See https://github.com/inetaf/netaddr for background. IP and IPPort are the only types in this package that support IPv6 zones. Other types silently drop any passed-in zones.
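A small sketch of the comparability property (the address is illustrative):

    import "inet.af/netaddr"

    func example() {
        ip := netaddr.MustParseIP("198.51.100.7")
        // IPs support == and map-key usage, unlike net.IP ([]byte).
        seen := map[netaddr.IP]bool{}
        seen[ip] = true
    }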
Package semver provides the ability to work with Semantic Versions (http://semver.org) in Go. Specifically it provides the ability to: To parse a semantic version use the `NewVersion` function. For example, If there is an error, the version wasn't parseable. The version object has methods to get the parts of the version, compare it to other versions, convert the version back into a string, and get the original string. For more details please see the documentation at https://godoc.org/github.com/Masterminds/semver. A set of versions can be sorted using the `sort` package from the standard library. For example, Checking a version against version constraints is one of the most featureful parts of the package. There are two elements to the comparisons. First, a comparison string is a list of comma-separated AND comparisons. These are then separated by || into OR comparisons. For example, `">= 1.2, < 3.0.0 || >= 4.2.3"` is looking for a comparison that's greater than or equal to 1.2 and less than 3.0.0, or is greater than or equal to 4.2.3. The basic comparisons are: There are multiple methods to handle ranges, and the first is hyphen ranges. These look like: The `x`, `X`, and `*` characters can be used as a wildcard character. This works for all comparison operators. When used on the `=` operator it falls back to the patch level comparison (see tilde below). For example, Tilde Range Comparisons (Patch) The tilde (`~`) comparison operator is for patch level ranges when a minor version is specified and major level changes when the minor number is missing. For example, Caret Range Comparisons (Major) The caret (`^`) comparison operator is for major level changes. This is useful when comparing API versions, as a major change is API breaking. For example,
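A small sketch of parsing a version and checking it against the constraint string quoted above:

    import (
        "fmt"

        "github.com/Masterminds/semver"
    )

    func check() error {
        v, err := semver.NewVersion("1.2.3")
        if err != nil {
            return err // the version wasn't parseable
        }
        c, err := semver.NewConstraint(">= 1.2, < 3.0.0 || >= 4.2.3")
        if err != nil {
            return err
        }
        // true: 1.2.3 satisfies the first AND range.
        fmt.Println(c.Check(v))
        return nil
    }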
Package scany is a set of packages for scanning data from a database into Go structs and more. scany isn't limited to any specific database. It integrates with database/sql, so any database with a database/sql driver is supported. It also works with the https://github.com/jackc/pgx native interface. Apart from the out-of-the-box support, scany can be easily extended to work with almost any database library. scany contains the following packages: The sqlscan package works with the database/sql standard library. The pgxscan package works with the github.com/jackc/pgx library's native interface. The dbscan package works with an abstract database and can be integrated with any library that has a concept of rows. This particular package implements the core scany features and contains all the logic. Both sqlscan and pgxscan use dbscan internally.
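A minimal sqlscan sketch (the User struct and query are illustrative):

    import (
        "context"
        "database/sql"

        "github.com/georgysavva/scany/sqlscan"
    )

    type User struct {
        ID    string
        Email string
    }

    func loadUsers(ctx context.Context, db *sql.DB) ([]*User, error) {
        var users []*User
        // Select runs the query and scans all rows into the slice,
        // matching columns to struct fields by name.
        err := sqlscan.Select(ctx, db, &users, `SELECT id, email FROM users`)
        return users, err
    }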
Package openssl is a light wrapper around OpenSSL for Go. It strives to provide a near-drop-in replacement for the Go standard library tls package, while allowing for: OpenSSL is battle-tested and optimized C. While Go's built-in library shows great promise, it is still young and, in some places, inefficient. This simple OpenSSL wrapper can often achieve at least a 2x speedup with the same cipher and protocol. On my lappytop, I get the following benchmarking speeds: Many systems support OpenSSL with a variety of plugins and modules for things such as hardware acceleration in embedded devices. OpenSSL allows for far greater configuration of corner cases and backwards compatibility (such as support of SSLv2). You shouldn't be using SSLv2 if you can help it, but sometimes you can't. Yeah yeah, Heartbleed. But according to the author of the standard library's TLS implementation, Go's TLS library is vulnerable to timing attacks. And whether or not OpenSSL received the appropriate amount of scrutiny pre-Heartbleed, it sure is receiving it now. Starting an HTTP server that uses OpenSSL is very easy. It's as simple as: Getting a net.Listener that uses OpenSSL is also easy: Making a client connection is straightforward too: Help wanted: To get this library to work with net/http's client, we had to fork net/http. It would be nice if an alternate HTTP client library supported the generality needed to use OpenSSL instead of crypto/tls.
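A sketch of the HTTP server case (the certificate file names are hypothetical):

    import (
        "net/http"

        "github.com/spacemonkeygo/openssl"
    )

    func serve(handler http.Handler) error {
        // Mirrors http.ListenAndServeTLS, but terminates TLS with OpenSSL.
        return openssl.ListenAndServeTLS(":8443", "cert.pem", "key.pem", handler)
    }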
Package chaincfg defines chain configuration parameters. In addition to the main Decred network, which is intended for the transfer of monetary value, there also exist two currently active standard networks: regression test and testnet (version 0). These networks are incompatible with each other (each having a different genesis block), and software should handle errors where input intended for one network is used on an application instance running on a different network. For library packages, chaincfg provides the ability to look up chain parameters and encoding magics when passed a *Params. Older APIs not updated to the new convention of passing a *Params may look up the parameters for a wire.DecredNet using ParamsForNet, but be aware that this usage is deprecated and will be removed from chaincfg in the future. For main packages, a (typically global) var may be assigned the address of one of the standard Param vars for use as the application's "active" network. When a network parameter is needed, it may then be looked up through this variable (either directly, or hidden in a library call). If an application does not use one of the standard Decred networks, a new Params struct may be created which defines the parameters for the non-standard network. As a general rule of thumb, all network parameters should be unique to the network, but parameter collisions can still occur.
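A sketch of the "active network" pattern described above; note that whether MainNetParams is exposed as a var or as a constructor depends on the chaincfg version, so this assumes the var form:

    import "github.com/decred/dcrd/chaincfg"

    // activeNetParams points at the standard Params for the network the
    // application is running on; newer module versions may expose
    // MainNetParams as a function instead of a var.
    var activeNetParams = &chaincfg.MainNetParams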
Package txscript implements the Decred transaction script language. This package provides data structures and functions to parse and execute decred transaction scripts. Decred transaction scripts are written in a stack-based, FORTH-like language. The Decred script language consists of a number of opcodes which fall into several categories, such as pushing and popping data to and from the stack, performing basic and bitwise arithmetic, conditional branching, comparing hashes, and checking cryptographic signatures. Scripts are processed from left to right and intentionally do not provide loops. The vast majority of Decred scripts at the time of this writing are of several standard forms which consist of a spender providing a public key and a signature which proves the spender owns the associated private key. This information is used to prove the spender is authorized to perform the transaction. One benefit of using a scripting language is added flexibility in specifying what conditions must be met in order to spend decred. Errors returned by this package are of type txscript.ErrorKind wrapped by txscript.Error which has full support for the standard library errors.Is and errors.As functions. This allows the caller to programmatically determine the specific error while still providing rich error messages with contextual information. See the constants defined with ErrorKind in the package documentation for a full list.
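A sketch of programmatic error inspection with errors.As (the import path assumes the v4 module; the handling is illustrative):

    import (
        "errors"

        "github.com/decred/dcrd/txscript/v4"
    )

    func isScriptError(err error) bool {
        var serr txscript.Error
        // txscript.Error wraps an ErrorKind, so errors.As recovers the
        // structured error while the message keeps its context.
        return errors.As(err, &serr)
    }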
Package log is a modification of the log package included in the Go standard library, adding support for leveled logging, colorized output, and JSON formatting. A predefined 'standard' logger is accessible through the helper functions Error[f|ln], Info[f|ln], Debug[f|ln], and Trace[f|ln]. The standard logger writes to standard error and prints the filename and line number of the call site of each logged message. Every log message is output on a separate line: if the message being printed does not end in a newline, the logger will add one.
Package reflections provides high level abstractions over the Go standard reflect library. In practice, the `reflect` library's API proves somewhat low-level and unintuitive. Using it can turn out pretty complex, daunting, and scary for simple things like accessing a structure field value or a field tag. The `reflections` package aims to make developers' lives easier when it comes to introspecting struct values at runtime. Its API takes inspiration from the Python language's `getattr`, `setattr`, and `hasattr` set of methods and provides simplified access to structure fields and tags.
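A small sketch of field and tag access (the struct and tag names are illustrative):

    import "github.com/oleiade/reflections"

    type Person struct {
        FirstName string `validate:"required"`
    }

    func example() {
        p := Person{FirstName: "Ada"}
        // GetField looks the field up by name at runtime.
        name, _ := reflections.GetField(p, "FirstName")
        // GetFieldTag reads a struct tag without touching reflect directly.
        tag, _ := reflections.GetFieldTag(p, "FirstName", "validate")
        _, _ = name, tag
    }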
Package scrape provides a client for interacting with GitHub using screen scraping. This is intended to be used as a supplement to the standard go-github library to access data that is not currently exposed by either the official REST or GraphQL APIs. Because of the nature of screen scraping, this package should be treated as HIGHLY EXPERIMENTAL, and potentially unstable. We make no promises relating to compatibility or stability of the exported API. Even though this package is distributed as part of the go-github library, it is explicitly exempt from any stability promises that may be implied by the library version number.
Package graph contains generic implementations of basic graph algorithms. The algorithms in this library can be applied to any graph data structure implementing the two Iterator methods: Order, which returns the number of vertices, and Visit, which iterates over the neighbors of a vertex. All algorithms operate on directed graphs with a fixed number of vertices, labeled from 0 to n-1, and edges with integer cost. An undirected edge {v, w} of cost c is represented by the two directed edges (v, w) and (w, v), both of cost c. A self-loop, an edge connecting a vertex to itself, is both directed and undirected. The type Mutable represents a directed graph with a fixed number of vertices and weighted edges that can be added or removed. The implementation uses hash maps to associate each vertex in the graph with its adjacent vertices. This gives constant time performance for all basic operations. The type Immutable is a compact representation of an immutable graph. The implementation uses lists to associate each vertex in the graph with its adjacent vertices. This makes for fast and predictable iteration: the Visit method produces its elements by reading from a fixed sorted precomputed list. This type supports multigraphs. The subpackage graph/build offers a tool for building virtual graphs. In a virtual graph no vertices or edges are stored in memory; they are instead computed as needed. New virtual graphs are constructed by composing and filtering a set of standard graphs, or by writing functions that describe the edges of a graph. The Basics example shows how to build a plain graph and how to efficiently use the Visit iterator, the key abstraction of this package. The DFS example contains a full implementation of depth-first search. Build a plain graph and visit all of its edges. Show how to use this package by implementing a complete depth-first search.
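A small sketch of building a Mutable graph and using the Visit iterator:

    import "github.com/yourbasic/graph"

    func example() {
        g := graph.New(3)  // three vertices: 0, 1, 2
        g.AddBoth(0, 1)    // undirected edge {0,1} as two directed edges
        g.AddCost(1, 2, 5) // directed edge (1,2) with cost 5
        // Visit iterates over the neighbors of vertex 1.
        g.Visit(1, func(w int, c int64) bool {
            _ = w            // neighbor vertex
            _ = c            // edge cost
            return false     // returning true would abort the iteration
        })
    }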
Package swag contains a bunch of helper functions for go-openapi and go-swagger projects. You may also use it standalone for your projects. This repo has only a few dependencies outside of the standard library:
ZCrypto is a research and data collection cryptography library, designed to be used for measuring and analyzing cryptographic deployments on the Internet. It is largely centered around the WebPKI. ZCrypto contains forks of the Golang X.509 and TLS libraries that speak old TLS versions and deprecated ciphers. ZCrypto provides more lenient and open access to X.509 certificates and TLS handshake state than its standard library counterparts. ZCrypto also contains a custom X.509 chain builder, designed for bulk chain building across large sets of certificates.
esc embeds files into go programs and provides http.FileSystem interfaces to them. It adds all named files or files recursively under named directories at the path specified. The output file provides an http.FileSystem interface with zero dependencies on packages outside the standard library. Usage: The flags are: After producing an output file, the assets may be accessed with the FS() function, which takes a flag to use local assets instead (for local development). FS(Must)?(Byte|String) returns an asset as a (byte slice|string). FSMust(Byte|String) panics if the asset is not found. esc can be invoked by go generate: Embedded assets can be served with HTTP using the http.FileServer. Assuming you have a directory structure similar to the following: Where main.go contains:
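A sketch of the go generate + FS pattern (the directory and flag values are illustrative; FS is the accessor esc emits into the generated file):

    package main

    //go:generate esc -o static.go -pkg main static

    import (
        "log"
        "net/http"
    )

    func main() {
        // FS(false) serves the embedded copies; FS(true) reads from the
        // local filesystem instead, which is handy during development.
        http.Handle("/", http.FileServer(FS(false)))
        log.Fatal(http.ListenAndServe(":8080", nil))
    }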
Package skipper provides an HTTP routing library with flexible configuration as well as runtime updates of the routing rules. Skipper works as an HTTP reverse proxy that is responsible for mapping incoming requests to multiple HTTP backend services, based on routes that are selected by the request attributes. At the same time, both the requests and the responses can be augmented by a filter chain that is specifically defined for each route. Optionally, it can provide a circuit breaker mechanism individually for each backend host. Skipper can load and update the route definitions from multiple data sources without being restarted. It provides a default executable command with a few built-in filters; however, its primary use case is to be extended with custom filters, predicates or data sources. For further information read 'Extending Skipper'. Skipper took the core design and inspiration from Vulcand: https://github.com/mailgun/vulcand. Skipper is 'go get' compatible. If needed, create a 'go workspace' first: Get the Skipper packages: Create a file with a route: Optionally, verify the syntax of the file: Start Skipper and make an HTTP request: The core of Skipper's request processing is implemented by a reverse proxy in the 'proxy' package. The proxy receives the incoming request and forwards it to the routing engine in order to receive the most specific matching route. When a route matches, the request is forwarded to all filters defined by it. The filters can modify the request or execute any kind of program logic. Once the request has been processed by all the filters, it is forwarded to the backend endpoint of the route. The response from the backend goes once again through all the filters in reverse order. Finally, it is mapped as the response of the original incoming request. Besides the default proxying mechanism, it is possible to define routes without a real network backend endpoint. One of these cases is called a 'shunt' backend, in which case one of the filters needs to handle the request and provide its own response (e.g. the 'static' filter). Actually, filters themselves can instruct the request flow to shunt by calling the Serve(*http.Response) method of the filter context. Another case of a route without a network backend is the 'loopback'. A loopback route can be used to match a request, modified by filters, against the lookup tree with different conditions and then execute a different route. One example scenario is to use a single route as an entry point to execute some calculation to get an A/B testing decision and then match the updated request metadata for the actual destination route. This way the calculation can be executed for only those requests that don't contain information about a previously calculated decision. For further details, see the 'proxy' and 'filters' package documentation. Finding a request's route happens by matching the request attributes to the conditions in the routes' definitions. Such definitions may have the following conditions: - method - path (optionally with wildcards) - path regular expressions - host regular expressions - headers - header regular expressions It is also possible to create custom predicates with any other matching criteria. The relation between the conditions in a route definition is 'and', meaning that a request must fulfill each condition to match a route. For further details, see the 'routing' package documentation. Filters are applied in order of definition to the request and in reverse order to the response.
They are used to modify request and response attributes, such as headers, or execute background tasks, like logging. Some filters may handle the requests without proxying them to service backends. Filters, depending on their implementation, may accept/require parameters that are set specifically for the route. For further details, see the 'filters' package documentation. Each route has one of the following backends: HTTP endpoint, shunt, loopback or dynamic. Backend endpoints can be any HTTP service. They are specified by their network address, including the protocol scheme, the domain name or the IP address, and optionally the port number: e.g. "https://www.example.org:4242". (The path and query are sent from the original request, or set by filters.) A shunt route means that Skipper handles the request alone and doesn't make requests to a backend service. In this case, it is the responsibility of one of the filters to generate the response. A loopback route executes the routing mechanism on the current state of the request from the start, including the route lookup. This way it serves as a form of an internal redirect. A dynamic route means that the final target will be defined in a filter. One of the filters in the chain must set the target backend URL explicitly. Route definitions consist of the following: - request matching conditions (predicates) - filter chain (optional) - backend The eskip package implements the in-memory and text representations of route definitions, including a parser. (Note to contributors: in order to stay compatible with 'go get', the generated part of the parser is stored in the repository. When changing the grammar, 'go generate' needs to be executed explicitly to update the parser.) For further details, see the 'eskip' package documentation. Skipper has filter implementations of basic auth and OAuth2. It can be integrated with tokeninfo based OAuth2 providers. For details, see: https://godoc.org/github.com/zalando/skipper/filters/auth. Skipper's route definitions are loaded from one or more data sources. It can receive incremental updates from those data sources at runtime. It provides the following data clients: - Kubernetes: Skipper can be used as part of a Kubernetes Ingress Controller implementation together with https://github.com/zalando-incubator/kube-ingress-aws-controller . In this scenario, Skipper uses the Kubernetes API's Ingress extensions as a source for routing. For a complete deployment example, see more details in: https://github.com/zalando-incubator/kubernetes-on-aws/ . - Innkeeper: the Innkeeper service implements a storage for large sets of Skipper routes, with an HTTP+JSON API, OAuth2 authentication and role management. See the 'innkeeper' package and https://github.com/zalando/innkeeper. - etcd: Skipper can load routes and receive updates from etcd clusters (https://github.com/coreos/etcd). See the 'etcd' package. - static file: package eskipfile implements a simple data client, which can load route definitions from a static file in eskip format. Currently, it loads the routes on startup. It doesn't support runtime updates. Skipper can use additional data sources, provided by extensions. Sources must implement the DataClient interface in the routing package. Skipper provides circuit breakers, configured either globally, based on backend hosts or based on individual routes. It supports two types of circuit breaker behavior: open on N consecutive failures, or open on N failures out of M requests.
For details, see: https://godoc.org/github.com/zalando/skipper/circuit. Skipper can be started with the default executable command 'skipper', or as a library built into an application. The easiest way to start Skipper as a library is to execute the 'Run' function of the current, root package. Each option accepted by the 'Run' function is wired in the default executable as well, as a command line flag. E.g. EtcdUrls becomes -etcd-urls as a comma separated list. For command line help, enter: An additional utility, eskip, can be used to verify, print, update and delete routes from/to files or etcd (Innkeeper on the roadmap). See the cmd/eskip command package, and/or enter in the command line: Skipper doesn't use dynamically loaded plugins; however, it can be used as a library, and it can be extended with custom predicates, filters and/or custom data sources. To create a custom predicate, one needs to implement the PredicateSpec interface in the routing package. Instances of the PredicateSpec are used internally by the routing package to create the actual Predicate objects as referenced in eskip routes, with concrete arguments. Example, randompredicate.go: In the above example, a custom predicate is created that can be referenced in eskip definitions with the name 'Random': To create a custom filter we need to implement the Spec interface of the filters package. 'Spec' is the specification of a filter, and it is used to create concrete filter instances while the raw route definitions are processed. Example, hellofilter.go: The above example creates a filter specification, and in the routes where it is included, the filter instances will set the 'X-Hello' header for each and every response. The name of the filter is 'hello', and in a route definition it is referenced as: The easiest way to create a custom Skipper variant is to implement the required filters (as in the example above) by importing the Skipper package, and starting it with the 'Run' command. Example, hello.go: A file containing the routes, routes.eskip: Start the custom router: The 'Run' function in the root Skipper package starts its own listener, but it doesn't provide the best composability. The proxy package, however, provides a standard http.Handler, so it is possible to use it in a more complex solution as a building block for routing. Skipper provides detailed logging of failures, and access logs in Apache log format. Skipper also collects detailed performance metrics, and exposes them on a separate listener endpoint for pulling snapshots. For details, see the 'logging' and 'metrics' packages documentation. The router's performance depends on the environment and on the used filters. Under ideal circumstances, and without filters, the biggest time factor is the route lookup. Skipper is able to scale to thousands of routes with logarithmic performance degradation. However, this comes at the cost of increased memory consumption, due to storing the whole lookup tree in a single structure. Benchmarks for the tree lookup can be run by: In case a more aggressive scale is needed, it is possible to set up Skipper in a cascade model, with multiple Skipper instances for specific route segments.
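As a sketch of the filter extension mechanism, mirroring the 'hello' example described above, a Spec and its Filter might look like this:

    import "github.com/zalando/skipper/filters"

    type helloSpec struct{}

    // Name is the identifier used to reference the filter in eskip routes.
    func (s *helloSpec) Name() string { return "hello" }

    // CreateFilter builds a concrete filter instance for each route that
    // references the "hello" filter.
    func (s *helloSpec) CreateFilter(args []interface{}) (filters.Filter, error) {
        return helloFilter{}, nil
    }

    type helloFilter struct{}

    func (f helloFilter) Request(ctx filters.FilterContext) {}

    func (f helloFilter) Response(ctx filters.FilterContext) {
        // Set the header on every response passing through the route.
        ctx.Response().Header.Set("X-Hello", "world")
    }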
Provides an HTTP Transport that implements the `RoundTripper` interface and can be used as a drop-in replacement for the standard library's, providing: This is a thin wrapper around `http.Transport` that sets dial timeouts and uses Go's internal timer scheduler to call the Go 1.1+ `CancelRequest()` API.
Package gofpdf implements a PDF document generator with high level support for text, drawing and images. - UTF-8 support - Choice of measurement unit, page format and margins - Page header and footer management - Automatic page breaks, line breaks, and text justification - Inclusion of JPEG, PNG, GIF, TIFF and basic path-only SVG images - Colors, gradients and alpha channel transparency - Outline bookmarks - Internal and external links - TrueType, Type1 and encoding support - Page compression - Lines, Bézier curves, arcs, and ellipses - Rotation, scaling, skewing, translation, and mirroring - Clipping - Document protection - Layers - Templates - Barcodes - Charting facility - Import PDFs as templates gofpdf has no dependencies other than the Go standard library. All tests pass on Linux, Mac and Windows platforms. gofpdf supports UTF-8 TrueType fonts and “right-to-left” languages. Note that Chinese, Japanese, and Korean characters may not be included in many general purpose fonts. For these languages, a specialized font (for example, NotoSansSC for simplified Chinese) can be used. Also, support is provided to automatically translate UTF-8 runes to code page encodings for languages that have fewer than 256 glyphs. This repository will not be maintained, at least for some unknown duration. But it is hoped that gofpdf has a bright future in the open source world. Due to Go’s promise of compatibility, gofpdf should continue to function without modification for a longer time than would be the case with many other languages. Forks should be based on the last viable commit. Tools such as active-forks can be used to select a fork that looks promising for your needs. If a particular fork looks like it has taken the lead in attracting followers, this README will be updated to point people in that direction. The efforts of all contributors to this project have been deeply appreciated. Best wishes to all of you. If you currently use the $GOPATH scheme, install the package with the following command. To test the installation, run The following Go code generates a simple PDF file. See the functions in the fpdf_test.go file (shown as examples in this documentation) for more advanced PDF examples. If an error occurs in an Fpdf method, an internal error field is set. After this occurs, Fpdf method calls typically return without performing any operations and the error state is retained. This error management scheme facilitates PDF generation since individual method calls do not need to be examined for failure; it is generally sufficient to wait until after Output() is called. For the same reason, if an error occurs in the calling application during PDF generation, it may be desirable for the application to transfer the error to the Fpdf instance by calling the SetError() method or the SetErrorf() method. At any time during the life cycle of the Fpdf instance, the error state can be determined with a call to Ok() or Err(). The error itself can be retrieved with a call to Error(). This package is a relatively straightforward translation from the original FPDF library written in PHP (despite the caveat in the introduction to Effective Go). The API names have been retained even though the Go idiom would suggest otherwise (for example, pdf.GetX() is used rather than simply pdf.X()). The similarity of the two libraries makes the original FPDF website a good source of information. It includes a forum and FAQ. However, some internal changes have been made. 
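A minimal document in the spirit of the simple example mentioned above (assuming the commonly used import path github.com/jung-kurt/gofpdf):

    import "github.com/jung-kurt/gofpdf"

    func hello() error {
        pdf := gofpdf.New("P", "mm", "A4", "")
        pdf.AddPage()
        pdf.SetFont("Arial", "B", 16)
        pdf.Cell(40, 10, "Hello, world")
        // Any error accumulated during generation surfaces here.
        return pdf.OutputFileAndClose("hello.pdf")
    }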
Page content is built up using buffers (of type bytes.Buffer) rather than repeated string concatenation. Errors are handled as explained above rather than panicking. Output is generated through an interface of type io.Writer or io.WriteCloser. A number of the original PHP methods behave differently based on the type of the arguments that are passed to them; in these cases additional methods have been exported to provide similar functionality. Font definition files are produced in JSON rather than PHP. A side effect of running go test ./... is the production of a number of example PDFs. These can be found in the gofpdf/pdf directory after the tests complete. Please note that these examples run in the context of a test. In order to run an example as a standalone application, you’ll need to examine fpdf_test.go for some helper routines, for example exampleFilename() and summary(). Example PDFs can be compared with reference copies in order to verify that they have been generated as expected. This comparison will be performed if a PDF with the same name as the example PDF is placed in the gofpdf/pdf/reference directory and if the third argument to ComparePDFFiles() in internal/example/example.go is true. (By default it is false.) The routine that summarizes an example will look for this file and, if found, will call ComparePDFFiles() to check the example PDF for equality with its reference PDF. If differences exist between the two files they will be printed to standard output and the test will fail. If the reference file is missing, the comparison is considered to succeed. In order to successfully compare two PDFs, the placement of internal resources must be consistent and the internal creation timestamps must be the same. To do this, the methods SetCatalogSort() and SetCreationDate() need to be called for both files. This is done automatically for all examples. Nothing special is required to use the standard PDF fonts (courier, helvetica, times, zapfdingbats) in your documents other than calling SetFont(). You should use AddUTF8Font() or AddUTF8FontFromBytes() to add a TrueType UTF-8 encoded font. Use the RTL() and LTR() methods to switch between “right-to-left” and “left-to-right” mode. In order to use a different non-UTF-8 TrueType or Type1 font, you will need to generate a font definition file and, if the font will be embedded into PDFs, a compressed version of the font file. This is done by calling the MakeFont function or using the included makefont command line utility. To create the utility, cd into the makefont subdirectory and run “go build”. This will produce a standalone executable named makefont. Select the appropriate encoding file from the font subdirectory and run the command as in the following example. In your PDF generation code, call AddFont() to load the font and, as with the standard fonts, SetFont() to begin using it. Most examples, including the package example, demonstrate this method. Good sources of free, open-source fonts include Google Fonts and DejaVu Fonts. The draw2d package is a two dimensional vector graphics library that can generate output in different forms. It uses gofpdf for its document production mode. gofpdf is a global community effort and you are invited to make it even better. If you have implemented a new feature or corrected a problem, please consider contributing your change to the project. A contribution that does not directly pertain to the core functionality of gofpdf should be placed in its own directory directly beneath the contrib directory.
Here are guidelines for making submissions. Your change should - be compatible with the MIT License - be properly documented - be formatted with go fmt - include an example in fpdf_test.go if appropriate - conform to the standards of golint and go vet, that is, golint . and go vet . should not generate any warnings - not diminish test coverage Pull requests are the preferred means of accepting your changes. gofpdf is released under the MIT License. It is copyrighted by Kurt Jung and the contributors acknowledged below. This package’s code and documentation are closely derived from the FPDF library created by Olivier Plathey, and a number of font and image resources are copied directly from it. Bruno Michel has provided valuable assistance with the code. Drawing support is adapted from the FPDF geometric figures script by David Hernández Sanz. Transparency support is adapted from the FPDF transparency script by Martin Hall-May. Support for gradients and clipping is adapted from FPDF scripts by Andreas Würmser. Support for outline bookmarks is adapted from Olivier Plathey by Manuel Cornes. Layer support is adapted from Olivier Plathey. Support for transformations is adapted from the FPDF transformation script by Moritz Wagner and Andreas Würmser. PDF protection is adapted from the work of Klemen Vodopivec for the FPDF product. Lawrence Kesteloot provided code to allow an image’s extent to be determined prior to placement. Support for vertical alignment within a cell was provided by Stefan Schroeder. Ivan Daniluk generalized the font and image loading code to use the Reader interface while maintaining backward compatibility. Anthony Starks provided code for the Polygon function. Robert Lillack provided the Beziergon function and corrected some naming issues with the internal curve function. Claudio Felber provided implementations for dashed line drawing and generalized font loading. Stani Michiels provided support for multi-segment path drawing with smooth line joins, line join styles, enhanced fill modes, and has helped greatly with package presentation and tests. Templating is adapted by Marcus Downing from the FPDF_Tpl library created by Jan Slabon and Setasign. Jelmer Snoeck contributed packages that generate a variety of barcodes and help with registering images on the web. Jelmer Snoeck and Guillermo Pascual augmented the basic HTML functionality with aligned text. Kent Quirk implemented backwards-compatible support for reading DPI from images that support it, and for setting DPI manually and then having it properly taken into account when calculating image size. Paulo Coutinho provided support for static embedded fonts. Dan Meyers added support for embedded JavaScript. David Fish added a generic alias-replacement function to enable, among other things, table of contents functionality. Andy Bakun identified and corrected a problem in which the internal catalogs were not sorted stably. Paul Montag added encoding and decoding functionality for templates, including images that are embedded in templates; this allows templates to be stored independently of gofpdf. Paul also added support for page boxes used in printing PDF documents. Wojciech Matusiak added support for word spacing. Artem Korotkiy added support of UTF-8 fonts. Dave Barnes added support for imported objects and templates. Brigham Thompson added support for rounded rectangles. Joe Westcott added underline functionality and optimized image storage.
Benoit KUGLER contributed support for rectangles with corners of unequal radius, modification times, and for file attachments and annotations. - Remove all legacy code page font support; use UTF-8 exclusively - Improve test coverage as reported by the coverage tool. Example demonstrates the generation of a simple PDF document. Note that since only core fonts are used (in this case Arial, a synonym for Helvetica), an empty string can be specified for the font directory in the call to New(). Note also that the example.Filename() and example.Summary() functions belong to a separate, internal package and are not part of the gofpdf library. If an error occurs at some point during the construction of the document, subsequent method calls exit immediately and the error is finally retrieved with the output call where it can be handled by the application.
Package sqlstruct provides some convenience functions for using structs with the Go standard library's database/sql package. The package matches struct field names to SQL query column names. A field can also specify a matching column with a "sql" tag, if it's different from the field name. Unexported fields or fields marked with `sql:"-"` are ignored, just like with the "encoding/json" package. For example: Aliased tables in a SQL statement may be scanned into a specific structure identified by the same alias, using the ColumnsAliased and ScanAliased functions:
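A small sketch of the tag conventions and a scan loop (the table and fields are illustrative):

    import (
        "database/sql"

        "github.com/kisielk/sqlstruct"
    )

    type User struct {
        ID    int    `sql:"id"`
        Name  string `sql:"name"`
        cache string // unexported: ignored
        Tmp   string `sql:"-"` // explicitly ignored
    }

    func load(db *sql.DB) ([]User, error) {
        rows, err := db.Query("SELECT id, name FROM users")
        if err != nil {
            return nil, err
        }
        defer rows.Close()
        var users []User
        for rows.Next() {
            var u User
            // Scan matches columns to fields by name or "sql" tag.
            if err := sqlstruct.Scan(&u, rows); err != nil {
                return nil, err
            }
            users = append(users, u)
        }
        return users, rows.Err()
    }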
Package log15 provides an opinionated, simple toolkit for best-practice logging that is both human and machine readable. It is modeled after the standard library's io and net/http packages. This package requires you to log only key/value pairs. Keys must be strings. Values may be any type that you like. The default output format is logfmt, but you may also choose to use JSON instead if that suits you. Here's how you log: This will output a line that looks like: To get started, you'll want to import the library: Now you're ready to start logging: Because recording a human-meaningful message is common and good practice, the first argument to every logging method is the value to the *implicit* key 'msg'. Additionally, the level you choose for a message will be automatically added with the key 'lvl', and so will the current timestamp with key 't'. You may supply any additional context as a set of key/value pairs to the logging function. log15 allows you to favor terseness, ordering, and speed over safety. This is a reasonable tradeoff for logging functions. You don't need to explicitly state keys/values; log15 understands that they alternate in the variadic argument list: If you really do favor your type-safety, you may choose to pass a log.Ctx instead: Frequently, you want to add context to a logger so that you can track actions associated with it. An http request is a good example. You can easily create new loggers that have context that is automatically included with each log line: This will output a log line that includes the path context that is attached to the logger: The Handler interface defines where log lines are printed to and how they are formatted. Handler is a single interface that is inspired by net/http's handler interface: Handlers can filter records, format them, or dispatch to multiple other Handlers. This package implements a number of Handlers for common logging patterns that are easily composed to create flexible, custom logging structures. Here's an example handler that prints logfmt output to Stdout: Here's an example handler that defers to two other handlers. One handler only prints records from the rpc package in logfmt to standard out. The other prints records at Error level or above in JSON formatted output to the file /var/log/service.json This package implements three Handlers that add debugging information to the context, CallerFileHandler, CallerFuncHandler and CallerStackHandler. Here's an example that adds the source file and line number of each logging call to the context. This will output a line that looks like: Here's an example that logs the call stack rather than just the call site. This will output a line that looks like: The "%+v" format instructs the handler to include the path of the source file relative to the compile time GOPATH. The github.com/go-stack/stack package documents the full list of formatting verbs and modifiers available. The Handler interface is so simple that it's also trivial to write your own. Let's create an example handler which tries to write to one handler, but if that fails it falls back to writing to another handler and includes the error that it encountered when trying to write to the primary. This might be useful when trying to log over a network socket, but if that fails you want to log those records to a file on disk. This pattern is so useful that a generic version that handles an arbitrary number of Handlers is included as part of this library called FailoverHandler.
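A small sketch of the implicit 'msg' key, alternating key/value context, and a derived logger (the values are illustrative):

    import log "github.com/inconshreveable/log15"

    func handle(path string) {
        // The implicit 'msg' key comes first, then alternating key/value pairs.
        log.Info("page accessed", "path", path, "user_id", 9001)

        // A derived logger carries its context on every line it writes.
        l := log.New("path", path)
        l.Warn("cache miss")
    }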
Sometimes, you want to log values that are extremely expensive to compute, but you don't want to pay the price of computing them if you haven't turned up your logging level to a high level of detail. This package provides a simple type to annotate a logging operation that you want to be evaluated lazily, just when it is about to be logged, so that it would not be evaluated if an upstream Handler filters it out. Just wrap any function which takes no arguments with the log.Lazy type. For example: If this message is not logged for any reason (like logging at the Error level), then factorRSAKey is never evaluated. The same log.Lazy mechanism can be used to attach context to a logger which you want to be evaluated when the message is logged, but not when the logger is created. For example, let's imagine a game where you have Player objects: You always want to log a player's name and whether they're alive or dead, so when you create the player object, you might do: Only now, even after a player has died, the logger will still report they are alive because the logging context is evaluated when the logger was created. By using the Lazy wrapper, we can defer the evaluation of whether the player is alive or not to each log message, so that the log records will reflect the player's current state no matter when the log message is written: If log15 detects that stdout is a terminal, it will configure the default handler for it (which is log.StdoutHandler) to use TerminalFormat. This format logs records nicely for your terminal, including color-coded output based on log level. Because log15 allows you to step around the type system, there are a few ways you can specify invalid arguments to the logging functions. You could, for example, wrap something that is not a zero-argument function with log.Lazy or pass a context key that is not a string. Since logging libraries are typically the mechanism by which errors are reported, it would be onerous for the logging functions to return errors. Instead, log15 handles errors by making these guarantees to you: - Any log record containing an error will still be printed with the error explained to you as part of the log record. - Any log record containing an error will include the context key LOG15_ERROR, enabling you to easily (and if you like, automatically) detect if any of your logging calls are passing bad values. Understanding this, you might wonder why the Handler interface can return an error value in its Log method. Handlers are encouraged to return errors only if they fail to write their log records out to an external source like if the syslog daemon is not responding. This allows the construction of useful handlers which cope with those failures like the FailoverHandler. log15 is intended to be useful for library authors as a way to provide configurable logging to users of their library. Best practice for use in a library is to always disable all output for your logger by default and to provide a public Logger instance that consumers of your library can configure. Like so: Users of your library may then enable it if they like: The ability to attach context to a logger is a powerful one. Where should you do it and why? I favor embedding a Logger directly into any persistent object in my application and adding unique, tracing context keys to it. For instance, imagine I am writing a web browser: When a new tab is created, I assign a logger to it with the url of the tab as context so it can easily be traced through the logs.
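A sketch of the Lazy wrapper; factorRSAKey is a hypothetical stand-in for an expensive computation:

    import log "github.com/inconshreveable/log15"

    // factorRSAKey stands in for an expensive computation (hypothetical).
    func factorRSAKey() []uint64 { return nil }

    func example() {
        // The wrapped function runs only if the record is actually logged,
        // so the computation is skipped when Debug records are filtered out.
        log.Debug("ran factorization",
            "factors", log.Lazy{Fn: func() []uint64 { return factorRSAKey() }})
    }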
Now, whenever we perform any operation with the tab, we'll log with its embedded logger and it will include the tab title automatically: There's only one problem. What if the tab url changes? We could use log.Lazy to make sure the current url is always written, but that would mean that we couldn't trace a tab's full lifetime through our logs after the user navigates to a new URL. Instead, think about what values to attach to your loggers the same way you think about what to use as a key in a SQL database schema. If it's possible to use a natural key that is unique for the lifetime of the object, do so. But otherwise, log15's ext package has a handy RandId function to let you generate what you might call "surrogate keys". They're just random hex identifiers to use for tracing. Back to our Tab example, we would prefer to set up our Logger like so: Now we'll have a unique traceable identifier even across loading new urls, but we'll still be able to see the tab's current url in the log messages. For all Handler functions which can return an error, there is a version of that function which returns no error but panics on failure. They are all available on the Must object. For example: All of the following excellent projects inspired the design of this library: code.google.com/p/log4go github.com/op/go-logging github.com/technoweenie/grohl github.com/Sirupsen/logrus github.com/kr/logfmt github.com/spacemonkeygo/spacelog golang's stdlib, notably io and net/http https://xkcd.com/927/
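Back to the Tab example, a sketch of the surrogate-key setup, assuming the RandId helper in the log15/ext package:

    import (
        log "github.com/inconshreveable/log15"
        "github.com/inconshreveable/log15/ext"
    )

    type Tab struct {
        url    string
        logger log.Logger
    }

    func NewTab(url string) *Tab {
        t := &Tab{url: url}
        // The random id traces the tab across navigations, while the lazy
        // url context always reflects the tab's current location.
        t.logger = log.New(
            "id", ext.RandId(8),
            "url", log.Lazy{Fn: func() string { return t.url }},
        )
        return t
    }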
Package gooxml provides creation, reading, and writing of ECMA 376 Office Open XML documents, spreadsheets and presentations. It is still early in development, but is progressing quickly. This library takes a slightly different approach from others, in that it starts by trying to support all of the ECMA-376 standard when marshaling/unmarshaling XML documents. From there it adds wrappers around the ECMA-376 derived types that provide a more convenient interface. The raw XML based types reside in the `schema/` directory. These types are always accessible from the wrapper types via an `X()` method that returns the raw type. Except for the base documents (document.Document, spreadsheet.Workbook and presentation.Presentation), the other wrapper types are value types with non-pointer methods. They exist solely to modify and return data from one or more XML types. The packages of interest are baliance.com/gooxml/document, baliance.com/gooxml/spreadsheet and baliance.com/gooxml/presentation.
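For a feel of the wrapper API, a sketch of creating a document, assuming wrapper methods along the lines of AddParagraph, AddRun and SaveToFile:

    package main

    import "baliance.com/gooxml/document"

    func main() {
        doc := document.New()
        para := doc.AddParagraph()
        run := para.AddRun()
        run.AddText("Hello, ECMA-376!")

        // The X() method exposes the raw ECMA-376 schema type when the
        // wrapper lacks a convenience method.
        _ = para.X()

        if err := doc.SaveToFile("hello.docx"); err != nil {
            panic(err)
        }
    }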
Package libdns defines core interfaces that should be implemented by packages that interact with DNS providers. These interfaces are small and idiomatic Go interfaces with well-defined semantics for the purposes of reading and manipulating DNS records using DNS provider APIs. This documentation uses the definitions for terms from RFC 9499: https://datatracker.ietf.org/doc/html/rfc9499 This package represents records with the Record interface, which is any type that can transform itself into the RR struct. This interface is implemented by the various record abstractions this package offers: RR structs, where the data is serialized as a single opaque string as if in a zone file, being a type-agnostic Resource Record (that is, a name, type, class, TTL, and data); and individual RR-type structures, where the data is parsed into its separate fields for easier manipulation by Go programs (for example: SRV, TXT, and ServiceBinding types). This hybrid design grants great flexibility for both DNS provider packages and consumer Go programs. Record values should not be compared primitively (==) unless they are RR, because other struct types contain maps, for which equality is not defined; additionally, some packages may attach custom data to each RR struct-type's `ProviderData` field, whose values might not be comparable either. The `ProviderData` field is not portable across providers, or possibly even zones. Because it is not portable, and we want to ensure that RR structs remain both portable and comparable, the `RR()` method does not preserve `ProviderData` in its return value. Users of libdns packages should check the documentation of provider packages, as some may use the `ProviderData` field to reduce API calls / increase efficiency. But implementations should never rely on `ProviderData` for correctness if at all possible (and should clearly document it if they do). Implementations of the libdns interfaces should accept as input any Record value, and should return as output the concrete struct types that implement the Record interface (i.e. Address, TXT, ServiceBinding, etc). This is important to ensure the provider libraries are robust and also predictable: callers can reliably type-switch on the output to immediately access structured data about each record without the possibility of errors. Returned values should be of types defined by this package to make type-assertions reliable. Records are described independently of any particular zone, a convention that grants records portability across zones. As such, record names are partially qualified, i.e. relative to the zone. For example, a record called “sub” in zone “example.com.” represents a fully-qualified domain name (FQDN) of “sub.example.com.”. Implementations should expect that input records conform to this standard, while also ensuring that output records do; adjustments to record names may need to be made before or after provider API calls, for example, to maintain consistency with all other libdns packages. Helper functions are available in this package to convert between relative and absolute names; see RelativeName and AbsoluteName. Although zone names are a required input, libdns does not coerce any particular representation of DNS zones; only records. Since zone name and records are separate inputs in libdns interfaces, it is up to the caller to maintain the pairing between a zone's name and its records.
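A sketch of consuming returned records with a type switch, assuming a provider that implements a libdns.RecordGetter-style interface:

    import (
        "context"
        "fmt"

        "github.com/libdns/libdns"
    )

    func printRecords(ctx context.Context, p libdns.RecordGetter, zone string) error {
        recs, err := p.GetRecords(ctx, zone)
        if err != nil {
            return err
        }
        for _, rec := range recs {
            // Providers return concrete types, so type-switching is reliable.
            switch r := rec.(type) {
            case libdns.Address:
                fmt.Println("address:", libdns.AbsoluteName(r.Name, zone), r.IP)
            case libdns.TXT:
                fmt.Println("txt:", r.Name, r.Text)
            default:
                rr := rec.RR() // type-agnostic fallback
                fmt.Println(rr.Type, rr.Name, rr.Data)
            }
        }
        return nil
    }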
All interface implementations must be safe for concurrent/parallel use, meaning 1) no data races, and 2) simultaneous method calls must result in either both of their expected outcomes or an error. For example, if libdns.RecordAppender.AppendRecords is called simultaneously, and two API requests are made to the provider at the same time, the result of both requests must be visible after they both complete; if the provider does not synchronize the writing of the zone file and one request overwrites the other, then the client implementation must take care to synchronize on behalf of the incompetent provider. This synchronization need not be global; for example, the scope of synchronization might only need to be within the same zone, allowing multiple requests at once as long as all of them are for different zones. (Exact logic depends on the provider.) Some providers' APIs may enforce rate limits or have sporadic errors. It is generally expected that libdns provider packages implement basic retry logic (e.g. retry up to 3-5 times with backoff in the event of a connection error or some HTTP error that may be recoverable, including 5xx or 429s) when it is safe to do so, as in the sketch below. Retrying/recovering from errors should not add substantial latency, though. If it will take longer than a couple of seconds, it is best to return an error.
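The bounded retry loop might look something like this sketch; the retry classification shown is illustrative, not part of libdns:

    import (
        "context"
        "errors"
        "net"
        "time"
    )

    func doWithRetry(ctx context.Context, call func() error) error {
        var err error
        for attempt := 0; attempt < 4; attempt++ {
            if err = call(); err == nil {
                return nil
            }
            if !isRetryable(err) {
                return err
            }
            // Linear backoff; total wait stays under a couple of seconds.
            select {
            case <-time.After(time.Duration(attempt+1) * 250 * time.Millisecond):
            case <-ctx.Done():
                return ctx.Err()
            }
        }
        return err
    }

    // isRetryable is a stand-in; a real implementation would also treat
    // HTTP 429 and 5xx responses as recoverable.
    func isRetryable(err error) bool {
        var netErr net.Error
        return errors.As(err, &netErr) && netErr.Timeout()
    }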
Package shortid enables the generation of short, unique, non-sequential and by default URL friendly Ids. The package is heavily inspired by the node.js https://github.com/dylang/shortid library. The standard Id length is 9 symbols when generated at a rate of 1 Id per millisecond; occasionally it reaches 11 (at the rate of a few thousand Ids per millisecond), and very rarely it can go beyond that during continuous generation at full throttle on high-performance hardware. A test generating 500k Ids at full throttle on conventional hardware generated the following Ids at the head and the tail (length > 9 is expected for this test): The package guarantees the generation of unique Ids with zero collisions for 34 years (1/1/2016-1/1/2050) using the same worker Id within a single (although concurrent) application, provided application restarts take longer than 1 millisecond. The package supports up to 32 workers, all providing unique sequences. Although heavily inspired by the node.js shortid library, this is not a simple Go port. The algorithm uses less randomness than the original node.js implementation, which makes it possible to extend the life span as well as reduce and guarantee the length. In general terms, each Id has the following 3 pieces of information encoded: the millisecond (first 8 symbols), the worker Id (9th symbol), and a running concurrent counter within the same millisecond, only if required, over all remaining symbols. The element of randomness per symbol is 1/2 for the worker and the millisecond and 0 for the counter. Here 0 means no randomness, i.e. every value is encoded using a 64-base alphabet; 1/2 means one of two matching symbols of the supplied alphabet, 1/4 one of four matching symbols. The original algorithm of the node.js module uses 1/4 throughout. All methods accepting the parameters that govern the randomness are exported and can be used to directly implement an algorithm with e.g. more randomness, but with longer Ids and shorter life spans.
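Typical usage might look like this sketch, assuming the github.com/teris-io/shortid import path and its New/Generate API:

    import (
        "fmt"

        "github.com/teris-io/shortid"
    )

    func main() {
        // Worker Id 1 of the 32 available; the seed governs the alphabet shuffle.
        sid, err := shortid.New(1, shortid.DefaultABC, 2342)
        if err != nil {
            panic(err)
        }
        id, err := sid.Generate()
        if err != nil {
            panic(err)
        }
        fmt.Println(id) // 9 symbols at ordinary generation rates
    }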
Package grip provides a flexible logging package for basic Go programs. Drawing inspiration from Go and Python's standard library logging, as well as systemd's journal service and other logging systems, Grip provides a number of very powerful logging abstractions in one high-level package. The central type of the grip package is the Journaler type, instances of which provide a distinct log capturing system. For ease, following the Go standard library, the grip package provides parallel public methods that use an internal "standard" Journaler instance in the grip package, which has some defaults configured and may be sufficient for many use cases. The send.Sender interface provides a way of changing the logging backend, and the send package provides a number of alternate implementations of logging systems, including: systemd's journal, logging to standard output, logging to a file, and generic syslog support. The message.Composer interface is the representation of all messages. They are implemented to provide a raw structured form as well as a string representation for more conventional logging output. Furthermore they are intended to be easy to produce, and to defer more expensive processing until they're being logged, to prevent expensive operations producing messages that are below threshold. Logging helpers exist for the following levels: These methods accept both strings (message content) and types that implement the message.Composer interface. Composer types make it possible to delay generating a message unless the logger is over the logging threshold. Use this to avoid expensive serialization operations for suppressed logging operations. All levels also have additional methods with `ln` and `f` appended to the end of the method name which allow Println() and Printf() style functionality. You must pass printf/println-style arguments to these methods. The conditional logging methods take two arguments, a Boolean and a message argument. Messages can be strings, objects that implement the message.Composer interface, or errors. If the condition boolean is true, the threshold level is met, and the message to log is not an empty string, then it logs the resolved message. Use conditional logging methods to potentially suppress log messages based on situations orthogonal to log level, with "log sometimes" or "log rarely" semantics. Combine with Composers to avoid expensive message building operations. The MultiCatcher type makes it possible to collect errors from a group of operations and then aggregate them as a single error.
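A sketch of the leveled and conditional helpers, assuming package-level functions along the lines of grip.Info, grip.Infof and grip.InfoWhen:

    import (
        "github.com/mongodb/grip"
        "github.com/mongodb/grip/message"
    )

    func handleRequest(userID string, cacheMiss bool) {
        // Plain string message at the Info level.
        grip.Info("request started")

        // Printf-style variant.
        grip.Infof("handling request for user %s", userID)

        // Conditional logging, orthogonal to the level threshold; the
        // message.Fields composer defers serialization until needed.
        grip.InfoWhen(cacheMiss, message.Fields{
            "event": "cache miss",
            "user":  userID,
        })
    }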
Package blockchain implements Decred block handling and chain selection rules. The Decred block handling and chain selection rules are an integral, and quite likely the most important, part of Decred. At its core, Decred is a distributed consensus of which blocks are valid and which ones will comprise the main block chain (public ledger) that ultimately determines accepted transactions, so it is extremely important that fully validating nodes agree on all rules. At a high level, this package provides support for inserting new blocks into the block chain according to the aforementioned rules. It includes functionality such as rejecting duplicate blocks, ensuring blocks and transactions follow all rules, and best chain selection along with reorganization. Since this package does not deal with other Decred specifics such as network communication or wallets, it provides a notification system which gives the caller a high level of flexibility in how they want to react to certain events such as newly connected main chain blocks which might result in wallet updates. Before a block is allowed into the block chain, it must go through an intensive series of validation rules. The following list serves as a general outline of those rules to provide some intuition into what is going on under the hood, but is by no means exhaustive: This package supports headers-first semantics such that block data can be processed out of order so long as the associated header is already known. The headers themselves, however, must be processed in the correct order since headers that do not properly connect are rejected. In other words, orphan headers are not allowed. The processing code always maintains the best chain as the branch tip that has the most cumulative proof of work, so it is important to keep that in mind when considering errors returned from processing blocks. Notably, due to the ability to process blocks out of order, and the fact blocks can only be fully validated once all of their ancestors have the block data available, it is to be expected that no error is returned immediately for blocks that are valid enough to make it to the point they require the remaining ancestor block data to be fully validated even though they might ultimately end up failing validation. Similarly, because the data for a block becoming available makes any of its direct descendants that already have their data available eligible for validation, an error being returned does not necessarily mean the block being processed is the one that failed validation. Errors returned by this package have full support for the standard library errors.Is and errors.As methods and are either the raw errors provided by underlying calls or of type blockchain.RuleError, possibly wrapped in a blockchain.MultiError. This allows the caller to differentiate between unexpected errors, such as database errors, versus errors due to rule violations through errors.As. In addition, callers can programmatically determine the specific rule violation by making use of errors.Is with any of the wrapped error kinds.
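A sketch of the error handling just described (the import path shown is illustrative):

    import (
        "errors"

        "github.com/decred/dcrd/blockchain/v4"
    )

    func classify(err error) {
        // Separate rule violations from unexpected failures such as
        // database errors.
        var rerr blockchain.RuleError
        if !errors.As(err, &rerr) {
            return // unexpected error; not a rule violation
        }
        // Programmatically determine the specific rule violation.
        if errors.Is(err, blockchain.ErrDuplicateBlock) {
            // handle the duplicate block case
        }
    }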
This package provides an alternative to the standard library log package. The actual logging functions never return errors. If you are logging something, you really don't want to be worried about the logging having trouble. Modules have names that are defined by dotted strings. There is a root module that has the name `""`. Each module (except the root module) has a parent, identified by the part of the name without the last dotted value. * the parent of "first.second.third" is "first.second" * the parent of "first.second" is "first" * the parent of "first" is "" (the root module) Each module can specify its own severity level. Logging calls that are of a lower severity than the module's effective severity level are not written out. Loggers are created through their Context. There is a default global context that is used if you just want simple use. Contexts are used where you may want different sets of writers for different loggers. Most use cases are fine with just using the default global context. Loggers are created using the GetLogger function. The default global context has one writer registered, which will write to Stderr, and the root module, which will only emit warnings and above. If you want to continue using the default logger, but have it emit all logging levels, you need to do the following. To make loggo produce colored output, you can do the following, having imported github.com/juju/loggo/loggocolor:
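For example, a minimal sketch, assuming ConfigureLoggers and the loggocolor writer behave as described above:

    import (
        "os"

        "github.com/juju/loggo"
        "github.com/juju/loggo/loggocolor"
    )

    func main() {
        logger := loggo.GetLogger("first.second")

        // Have the root module emit every level instead of only warnings and above.
        loggo.ConfigureLoggers("<root>=TRACE")

        // Colored output via the loggocolor writer.
        loggo.ReplaceDefaultWriter(loggocolor.NewWriter(os.Stderr))

        logger.Debugf("now visible: %d", 42)
    }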
Package gopher provides an implementation of the Gopher protocol (RFC 1436). Much of the API is similar in design to the net/http package of the standard library. To build custom Gopher servers, implement handler functions or the `Handler{}` interface. Implementing a client is as simple as calling `gopher.Get(uri)` and passing in a `uri` such as `"gopher://gopher.floodgap.com/"`.
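A sketch of the client side; the import path and the shape of the response are assumptions, not part of this doc:

    import (
        "fmt"

        gopher "github.com/prologic/go-gopher"
    )

    func main() {
        res, err := gopher.Get("gopher://gopher.floodgap.com/")
        if err != nil {
            panic(err)
        }
        fmt.Println(res) // print the returned listing
    }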
Package scany is a set of packages for scanning data from a database into Go structs and more. scany isn't limited to any specific database. It integrates with database/sql, so any database with a database/sql driver is supported. It also works with the https://github.com/jackc/pgx native interface. Apart from the out of the box support, scany can be easily extended to work with almost any database library. scany contains the following packages: The sqlscan package works with the database/sql standard library. The pgxscan package works with the github.com/jackc/pgx library's native interface. The dbscan package works with an abstract database and can be integrated with any library that has a concept of rows. This particular package implements core scany features and contains all the logic. Both sqlscan and pgxscan use dbscan internally.
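For example, with sqlscan (a sketch; the table and columns are made up):

    import (
        "context"
        "database/sql"

        "github.com/georgysavva/scany/sqlscan"
    )

    type User struct {
        UserID string
        Name   string
    }

    func loadUsers(ctx context.Context, db *sql.DB) ([]*User, error) {
        var users []*User
        // sqlscan matches result columns to struct fields and scans all rows.
        err := sqlscan.Select(ctx, db, &users, `SELECT user_id, name FROM users`)
        return users, err
    }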
Package puddle is a generic resource pool. Puddle is a tiny generic resource pool library for Go that uses the standard context library to signal cancellation of acquires. It is designed to contain the minimum functionality a resource pool needs that cannot be implemented without concurrency concerns. For example, a database connection pool may use puddle internally and implement health checks and keep-alive behavior without needing to implement any concurrent code of its own.
Package kivik provides a generic interface to CouchDB or CouchDB-like databases. The kivik package must be used in conjunction with a database driver. See https://github.com/go-kivik/kivik/wiki/Kivik-database-drivers for a list. The kivik driver system is modeled after the standard library's sql and sql/driver packages, although the client API is completely different due to the different database models implemented by SQL and NoSQL databases such as CouchDB.
Package errors provides error creation and matching for all wallet systems. It is imported as errors and takes over the role of the standard library errors package.
Package b implements the B+tree flavor of a BTree. 2016-07-16: Update benchmark results to newer Go version. Add a note on concurrency. 2014-06-26: Lower GC pressure by recycling things. 2014-04-18: Added new method Put. Tree.{Clear,Delete,Put,Set} mutate the tree. One can use e.g. a sync.Mutex.Lock/Unlock (or sync.RWMutex.Lock/Unlock) to wrap those calls if they are to be invoked concurrently. Tree.{First,Get,Last,Len,Seek,SeekFirst,SeekLast} read but do not mutate the tree. One can use e.g. a sync.RWMutex.RLock/RUnlock to wrap those calls if they are to be invoked concurrently with any of the tree mutating methods. Enumerator.{Next,Prev} mutate the enumerator and read but do not mutate the tree. One can use e.g. a sync.RWMutex.RLock/RUnlock to wrap those calls if they are to be invoked concurrently with any of the tree mutating methods. A separate mutex for the enumerator, or the whole tree in a simplified variant, is necessary if the enumerator's Next/Prev methods per se are to be invoked concurrently. Keys and their associated values are interface{} typed, similar to all of the containers in the standard library. Semiautomatic production of a type specific variant of this package is supported via This command will write to stdout a version of the btree.go file where every key type occurrence is replaced by the word 'KEY' and every value type occurrence is replaced by the word 'VALUE'. Then you have to replace these tokens with your desired type(s), using any technique you're comfortable with. This is how, for example, 'example/int.go' was created: No other changes to int.go are necessary, it compiles just fine. Running the benchmarks for 1000 keys on a machine with Intel i5-4670 CPU @ 3.4GHz, Go 1.7rc1.
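A sketch of the sync.RWMutex wrapping described above (the modernc.org/b import path and int keys are illustrative):

    import (
        "sync"

        "modernc.org/b"
    )

    type safeTree struct {
        mu sync.RWMutex
        t  *b.Tree
    }

    func newSafeTree() *safeTree {
        return &safeTree{t: b.TreeNew(func(x, y interface{}) int {
            return x.(int) - y.(int)
        })}
    }

    func (s *safeTree) Set(k, v interface{}) {
        s.mu.Lock() // Set mutates the tree: exclusive lock
        defer s.mu.Unlock()
        s.t.Set(k, v)
    }

    func (s *safeTree) Get(k interface{}) (interface{}, bool) {
        s.mu.RLock() // Get only reads: shared lock
        defer s.mu.RUnlock()
        return s.t.Get(k)
    }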
Package edwards25519 implements group logic for the twisted Edwards curve -x^2 + y^2 = 1 - (121665/121666)*x^2*y^2. This is better known as the Edwards curve equivalent to Curve25519, and is the curve used by the Ed25519 signature scheme. Most users don't need this package, and should instead use crypto/ed25519 for signatures, golang.org/x/crypto/curve25519 for Diffie-Hellman, or github.com/gtank/ristretto255 for prime order group logic. However, developers who do need to interact with low-level edwards25519 operations can use this package, which is an extended version of crypto/internal/edwards25519 from the standard library repackaged as an importable module.
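For those who do need the low-level operations, a sketch of fixed-base scalar multiplication, assuming the filippo.io/edwards25519 module:

    import (
        "crypto/rand"

        "filippo.io/edwards25519"
    )

    func main() {
        var seed [64]byte
        if _, err := rand.Read(seed[:]); err != nil {
            panic(err)
        }
        // Reduce 64 uniform bytes to a scalar modulo the group order.
        s, err := edwards25519.NewScalar().SetUniformBytes(seed[:])
        if err != nil {
            panic(err)
        }
        // p = s*B, where B is the canonical generator.
        p := edwards25519.NewIdentityPoint().ScalarBaseMult(s)
        _ = p.Bytes() // 32-byte canonical encoding
    }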
Package log implements a simple level logging package. It defines a type, Logger, with methods for formatting output at specific verbosity levels. It maintains API compatibility with the standard library "log" package by embedding a 'log.Logger' struct.
Package kivik provides a generic interface to CouchDB or CouchDB-like databases. The kivik package must be used in conjunction with a database driver. The officially supported drivers are: The Filesystem and Memory drivers are also available, but in early stages of development, and so many features do not yet work: The kivik driver system is modeled after the standard library's `sql` and `sql/driver` packages, although the client API is completely different due to the different database models implemented by SQL and NoSQL databases such as CouchDB. CouchDB stores JSON, so Kivik translates Go data structures to and from JSON as necessary. The conversion between Go data types and JSON, and vice versa, is handled automatically according to the rules and behavior described in the documentation for the standard library's `encoding/json` package (https://golang.org/pkg/encoding/json). One would be well-advised to become familiar with using `json` struct field tags (https://golang.org/pkg/encoding/json/#Marshal) when working with JSON documents. Most Kivik methods take `context.Context` as their first argument. This allows the cancellation of blocking operations in the case that the result is no longer needed. A typical use case for a web application would be to cancel a Kivik request if the remote HTTP client has disconnected, rendering the results of the query irrelevant. To learn more about Go's contexts, read the `context` package documentation (https://golang.org/pkg/context/) and read the Go blog post "Go Concurrency Patterns: Context" (https://blog.golang.org/context) for example code. If in doubt, you can pass `context.TODO()` as the context variable. Example: Kivik returns errors that embed an HTTP status code. In most cases, this is the HTTP status code returned by the server. The embedded HTTP status code may be accessed easily using the StatusCode() method, or with a type assertion to `interface { StatusCode() int }`. Example: Any error that does not conform to this interface will be assumed to represent an http.StatusInternalServerError status code. For common usage, authentication should be as simple as including the authentication credentials in the connection DSN. For example: This will connect to `localhost` on port 5984, using the username `admin` and the password `abc123`. When connecting to CouchDB (as in the above example), this will use cookie auth (https://docs.couchdb.org/en/stable/api/server/authn.html?highlight=cookie%20auth#cookie-authentication). Depending on which driver you use, there may be other ways to authenticate, as well. At the moment, the CouchDB driver is the only official driver which offers additional authentication methods. Please refer to the CouchDB package documentation for details (https://pkg.go.dev/github.com/go-kivik/couchdb/v3). With a client handle in hand, you can create a database handle with the DB() method to interact with a specific database.
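Putting the pieces together, a sketch of connecting, fetching a document, and inspecting the embedded status code (the document and database names are made up):

    import (
        "context"
        "fmt"

        _ "github.com/go-kivik/couchdb/v3" // the CouchDB driver
        kivik "github.com/go-kivik/kivik/v3"
    )

    func main() {
        client, err := kivik.New("couch", "http://admin:abc123@localhost:5984/")
        if err != nil {
            panic(err)
        }
        db := client.DB(context.TODO(), "animals")

        var doc struct {
            ID   string `json:"_id"`
            Feet int    `json:"feet"`
        }
        if err := db.Get(context.TODO(), "cow").ScanDoc(&doc); err != nil {
            // Kivik errors embed an HTTP status code.
            if sc, ok := err.(interface{ StatusCode() int }); ok {
                fmt.Println("status:", sc.StatusCode())
            }
        }
    }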
Package puddle is a generic resource pool with a type-parametrized API. Puddle is a tiny generic resource pool library for Go that uses the standard context library to signal cancellation of acquires. It is designed to contain the minimum functionality a resource pool needs that cannot be implemented without concurrency concerns. For example, a database connection pool may use puddle internally and implement health checks and keep-alive behavior without needing to implement any concurrent code of its own.
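A sketch of the type-parametrized API, assuming the github.com/jackc/puddle/v2 import path:

    import (
        "context"
        "net"

        "github.com/jackc/puddle/v2"
    )

    func main() {
        pool, err := puddle.NewPool(&puddle.Config[net.Conn]{
            Constructor: func(ctx context.Context) (net.Conn, error) {
                return net.Dial("tcp", "127.0.0.1:5432")
            },
            Destructor: func(conn net.Conn) { conn.Close() },
            MaxSize:    8,
        })
        if err != nil {
            panic(err)
        }
        res, err := pool.Acquire(context.Background())
        if err != nil {
            panic(err)
        }
        defer res.Release() // return the connection to the pool
        _ = res.Value()     // the net.Conn itself
    }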
Package nacl is a pure Go implementation of the NaCl cryptography library. Compared with the implementation in golang.org/x/crypto/nacl, this library offers all of the APIs present in NaCl, as well as some utilities for generating and loading keys and nonces, and encrypting messages. NaCl's goal is to provide all of the core operations needed to build higher-level cryptographic tools, as well as to demonstrate how to implement these tools in Go. Compared with the equivalent packages in the Go standard library and x/crypto package, we replace some function calls with their equivalents in this package, and make more use of return values (versus writing into a caller-supplied byte array). Most functions should be compatible with their C/C++ counterparts in the library here: https://nacl.cr.yp.to/. In many cases the tests are ported directly to this library.
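A sketch of sealing and opening a message, assuming key and secretbox helpers of roughly this shape:

    import (
        "fmt"

        "github.com/kevinburke/nacl"
        "github.com/kevinburke/nacl/secretbox"
    )

    func main() {
        key := nacl.NewKey() // a fresh random 32-byte key

        // EasySeal generates a random nonce and prepends it to the ciphertext.
        box := secretbox.EasySeal([]byte("hello"), key)

        plain, err := secretbox.EasyOpen(box, key)
        if err != nil {
            panic(err)
        }
        fmt.Println(string(plain)) // "hello"
    }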
Package telnet provides TELNET and TELNETS client and server implementations in a style similar to the "net/http" library that is part of the Go standard library, including support for "middleware"; TELNETS is secure TELNET, with the TELNET protocol over a secured TLS (or SSL) connection. ListenAndServe starts a (un-secure) TELNET server with a given address and handler. ListenAndServeTLS starts a (secure) TELNETS server with a given address and handler, using the specified "cert.pem" and "key.pem" files. Example TELNET Client: DialToAndCall creates a (un-secure) TELNET client, which connects to a given address using the specified caller. Example TELNETS Client: DialToAndCallTLS creates a (secure) TELNETS client, which connects to a given address using the specified caller. If you are communicating over the open Internet, you should be using (the secure) TELNETS protocol and ListenAndServeTLS. If you are communicating just on localhost, then using just (the un-secure) TELNET protocol and telnet.ListenAndServe may be OK. If you are not sure which to use, use TELNETS and ListenAndServeTLS. The previous two example servers were very simple. Specifically, they just echoed back whatever you submitted to them. If you typed: ... it would send back: (Exactly the same data you sent it.) A more useful TELNET server can be made using the "github.com/reiver/go-telnet/telsh" sub-package. The `telsh` sub-package provides "middleware" that enables you to create a "shell" interface (also called a "command line interface" or "CLI") which most people would expect when using TELNET or TELNETS. For example: Note that in the example, so far, we have registered 2 commands: `date` and `animate`. For this to actually work, we need to have code for the `date` and `animate` commands. The actual implementation for the `date` command could be done like the following: Note that your "real" work is in the `dateHandlerFunc` func. And the actual implementation for the `animate` command could be done as follows: Again, note that your "real" work is in the `animateHandlerFunc` func. If you are using the telnet.ListenAndServeTLS func or the telnet.Server.ListenAndServeTLS method, you will need to supply "cert.pem" and "key.pem" files. If you do not already have these files, the Go source code contains a tool for generating these files for you. It can be found at: So, for example, if your `$GOROOT` is the "/usr/local/go" directory, then it would be at: If you run the command: ... then you get the help information for "generate_cert.go". Of course, you would replace or set `$GOROOT` with whatever your path actually is. Again, for example, if your `$GOROOT` is the "/usr/local/go" directory, then it would be: To demonstrate the usage of "generate_cert.go", you might run the following to generate certificates that were bound to the hosts `127.0.0.1` and `localhost`: If you are not sure where "generate_cert.go" is on your computer, on Linux and Unix based systems, you might be able to find the file with the command: (If it finds it, it should output the full path to this file.) You can make a simple (un-secure) TELNET client with code like the following: You can make a simple (secure) TELNETS client with code like the following: The TELNET protocol is best known for providing a means of connecting to a remote computer, using a (text-based) shell interface, and being able to interact with it, (more or less) as if you were sitting at that computer. (Shells are also known as command-line interfaces or CLIs.)
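As a minimal sketch of the echo server and client mentioned above, assuming the EchoHandler and StandardCaller helpers:

    import "github.com/reiver/go-telnet"

    // Server side: echo back whatever the client sends (plain TELNET).
    func runServer() error {
        return telnet.ListenAndServe(":5555", telnet.EchoHandler)
    }

    // Client side: connect and drive the session with the standard caller.
    func runClient() error {
        return telnet.DialToAndCall("localhost:5555", telnet.StandardCaller)
    }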
Although this was the original usage of the TELNET protocol, it can be (and is) used for other purposes as well. The TELNET protocol came from an era in computing when text-based shell interfaces were the common way of interacting with computers. The common interface for computers during this era was a keyboard and a monochromatic (i.e., single color) text-based monitor called a "video terminal". (The word "video" in that era of computing did not refer to things such as movies. It was instead meant to contrast with paper. In particular, the teletype machines, which were typewriter-like devices that had a keyboard, but instead of having a monitor had paper that was printed onto.) In that era, in the early days of office computers, it was rare that an individual would have a computer at their desk. (A single computer was much too expensive.) Instead, there would be a single central computer that everyone would share. The style of computer used (for the single central shared computer) was called a "mainframe". What individuals would have at their desks, instead of their own computer, would be some type of video terminal. The different types of video terminals had names such as: • VT52 • VT100 • VT220 • VT240 ("VT" in those names was short for "video terminal".) To understand this era, we need to go back a bit in time to what came before it: teletypes. Terminal codes (also sometimes called 'terminal control codes') are used to issue various kinds of commands to the terminal. (Note that 'terminal control codes' are a completely separate concept from 'TELNET commands', and the two should NOT be conflated or confused.) The most common types of 'terminal codes' are the 'ANSI escape codes'. (Although there are other types too.) ANSI escape codes (also sometimes called 'ANSI escape sequences') are a common type of 'terminal code' used to do things such as: • moving the cursor, • erasing the display, • erasing the line, • setting the graphics mode, • setting the foreground color, • setting the background color, • setting the screen resolution, and • setting keyboard strings. One of the abilities of ANSI escape codes is to set the foreground color. Here is a table showing codes for this: (Note that in the `[]byte` the first `byte` is the number `27` (which is the "escape" character), and the third and fourth characters are **not** number literals, but instead character literals such as `'3'`.) Another of the abilities of ANSI escape codes is to set the background color. (Note that in the `[]byte` the first `byte` is the number `27` (which is the "escape" character), and the third and fourth characters are **not** number literals, but instead character literals such as `'4'`.) In Go code, if I wanted to use an ANSI escape code to use a blue background, a white foreground, and bold, I could do that with the ANSI escape code: Note that it starts with byte value 27, which we have encoded in hexadecimal as \x1b. That is followed by the '[' character. Coming after that is the sub-string "44", which is the code that sets our background color to blue. We follow that with the ';' character (which separates codes). And after that comes the sub-string "37", which is the code that sets our foreground color to white. After that, we follow with another ";" character (which, again, separates codes). And then we follow with the sub-string "1", which is the code that makes things bold. And finally, the ANSI escape sequence is finished off with the 'm' character.
To show this in a more complete example, our `dateHandlerFunc` from before could incorporate ANSI escape sequences as follows: Note that in that example, in addition to using the ANSI escape sequence "\x1b[44;37;1m" to set the background color to blue, set the foreground color to white, and make it bold, we also used the ANSI escape sequence "\x1b[0m" to reset the background and foreground colors and boldness back to "normal".
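A sketch of that colored output; the function signature is illustrative, since the real work would live inside the handler registered with the telsh sub-package:

    import (
        "fmt"
        "io"
        "time"
    )

    func writeColoredDate(stdout io.Writer) {
        const (
            blueWhiteBold = "\x1b[44;37;1m" // 44=blue bg, 37=white fg, 1=bold
            reset         = "\x1b[0m"       // back to normal colors and weight
        )
        fmt.Fprintf(stdout, "%s%s%s\r\n",
            blueWhiteBold, time.Now().Format(time.RFC1123), reset)
    }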
Package jose aims to provide an implementation of the JavaScript Object Signing and Encryption set of standards. It implements encryption and signing based on the JSON Web Encryption and JSON Web Signature standards, with optional JSON Web Token support available in a sub-package. The library supports both the compact and JWS/JWE JSON Serialization formats, and has optional support for multiple recipients.
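For example, signing and verifying a payload in the compact serialization, assuming the gopkg.in/square/go-jose.v2 import path:

    import (
        "crypto/rand"
        "crypto/rsa"

        jose "gopkg.in/square/go-jose.v2"
    )

    func main() {
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            panic(err)
        }
        signer, err := jose.NewSigner(jose.SigningKey{Algorithm: jose.RS256, Key: key}, nil)
        if err != nil {
            panic(err)
        }
        obj, err := signer.Sign([]byte(`{"hello":"world"}`))
        if err != nil {
            panic(err)
        }
        compact, err := obj.CompactSerialize() // the compact serialization format
        if err != nil {
            panic(err)
        }
        parsed, err := jose.ParseSigned(compact)
        if err != nil {
            panic(err)
        }
        if _, err := parsed.Verify(&key.PublicKey); err != nil {
            panic(err) // signature did not verify
        }
    }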