Package dns implements a full-featured interface to the Domain Name System. Both server- and client-side programming are supported. The package allows complete control over what is sent out to the DNS. The API follows the less-is-more principle by presenting a small, clean interface. It supports (asynchronous) querying/replying, incoming/outgoing zone transfers, TSIG, EDNS0, dynamic updates, notifies and DNSSEC validation/signing. Note that domain names MUST be fully qualified before sending them; unqualified names in a message will result in a packing failure.

Resource records are native types; they are not stored in wire format. A new resource record can be built field by field, parsed directly from a string, or parsed relying on the default origin (.), TTL (3600) and class (IN) when those suit you.

In the DNS, messages are exchanged, and these messages contain resource records (sets). The usual pattern is to create a message and set its question section, qualifying the domain name first if you are not certain it is fully qualified. The message m is then a message with the question section set to ask for the MX records of the miek.nl. zone. The question section can also be filled in manually, which is slightly more verbose but more flexible.

After creating a message it can be sent, for example by querying the DNS synchronously at a server configured on 127.0.0.1 and port 53. Suppressing multiple outstanding queries (with the same question, type and class) is as easy as setting a single option on the client. More advanced options are available using a net.Dialer and the corresponding API; for example it is possible to set a timeout, or to specify a source IP address and port to use for the connection. If these "advanced" features are not needed, a simple UDP query can be sent with a single package-level call.

When this function returns you will get a DNS message. A DNS message consists of four sections. The question section: in.Question, the answer section: in.Answer, the authority section: in.Ns and the additional section: in.Extra. Each of these sections (except the Question section) contains a []RR, so the rdata of, say, a TXT RR that is the first RR in the Answer section can be reached with a type assertion.

Both domain names and TXT character strings are converted to presentation form both when unpacked and when converted to strings. For TXT character strings, tabs, carriage returns and line feeds will be converted to \t, \r and \n respectively. Backslashes and quotation marks will be escaped. Bytes below 32 and above 127 will be converted to \DDD form. For domain names, in addition to the above rules, brackets, periods, spaces, semicolons and the at symbol are escaped.

DNSSEC (DNS Security Extensions) adds a layer of security to the DNS. It uses public key cryptography to sign resource records. The public keys are stored in DNSKEY records and the signatures in RRSIG records. Requesting DNSSEC information for a zone is done by adding the DO (DNSSEC OK) bit to a request. Signature generation, signature verification and key generation are all supported.

Dynamic updates reuse the DNS message format, but rename three of the sections. Question is Zone, Answer is Prerequisite, Authority is Update; only the Additional section is not renamed. See RFC 2136 for the gory details. You can set a rather complex set of rules for the existence or absence of certain resource records or names in a zone to specify whether resource records should be added or removed. The table from RFC 2136, supplemented with the Go DNS functions, shows which functions exist to specify the prerequisites. The prerequisite section can also be left empty; a combined sketch of these patterns follows.
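The following is a hedged sketch that ties together several of the patterns described above: creating a record from its string form, sending a plain MX query, and building a dynamic update with one of the prerequisite helpers. The server address and names are illustrative only, and the import path assumes github.com/miekg/dns.

    package main

    import (
        "fmt"
        "log"

        "github.com/miekg/dns"
    )

    func main() {
        c := new(dns.Client)

        // Query the MX records for miek.nl.; Fqdn qualifies the name in case
        // it is not already fully qualified.
        q := new(dns.Msg)
        q.SetQuestion(dns.Fqdn("miek.nl"), dns.TypeMX)
        in, _, err := c.Exchange(q, "127.0.0.1:53")
        if err != nil {
            log.Fatal(err)
        }
        if len(in.Answer) > 0 {
            // Access the rdata of the first answer via a type assertion.
            if mx, ok := in.Answer[0].(*dns.MX); ok {
                fmt.Println(mx.Mx, mx.Preference)
            }
        }

        // Dynamic update: insert an A record, with a prerequisite that the
        // name is already in use (one of the prerequisite helpers above).
        rr, err := dns.NewRR("host.miek.nl. 3600 IN A 192.0.2.1")
        if err != nil {
            log.Fatal(err)
        }
        u := new(dns.Msg)
        u.SetUpdate("miek.nl.")
        u.NameUsed([]dns.RR{rr})
        u.Insert([]dns.RR{rr})
        if _, _, err := c.Exchange(u, "127.0.0.1:53"); err != nil {
            log.Fatal(err)
        }
    }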
If you have decided on the prerequisites you can tell which RRs should be added or deleted. The next table shows the options you have and which functions to call.

A TSIG, or transaction signature, adds an HMAC TSIG record to each message sent. The supported algorithms include HmacSHA1, HmacSHA256 and HmacSHA512. The basic pattern when querying with a TSIG uses a key name such as "axfr." (note that these key names must be fully qualified, as they are domain names) and a base64 secret such as "so6ZGir4GPAqINNh9U5c3A==". If an incoming message contains a TSIG record it MUST be the last record in the additional section (RFC 2845 3.2). This means that you should make the call to SetTsig last, right before executing the query. If you make any changes to the RRset after calling SetTsig() the signature will be incorrect.

When requesting a zone transfer with TSIG (almost all TSIG usage is when requesting zone transfers), the basic use pattern is to request an AXFR for miek.nl. with the TSIG key named "axfr." and secret "so6ZGir4GPAqINNh9U5c3A==", using the server 176.58.119.54; a sketch follows this description. You can then read the records from the transfer as they come in. Each envelope is checked with TSIG. If something is not correct an error is returned.

A custom TSIG implementation can be used. This requires additional code to perform any session establishment and signature generation/verification. The client must be configured with an implementation of the TsigProvider interface. The same basic pattern applies when validating and replying to a message that has TSIG set.

RFC 6895 sets aside a range of type codes for private use. This range is 65,280 - 65,534 (0xFF00 - 0xFFFE). When experimenting with new resource records these can be used, before requesting an official type code from IANA. See https://miek.nl/2014/september/21/idn-and-private-rr-in-go-dns/ for more information.

EDNS0 is an extension mechanism for the DNS defined in RFC 2671 and updated by RFC 6891. It defines a new RR type, the OPT RR, which is then completely abused. The rdata of an OPT RR consists of a slice of EDNS0 (RFC 6891) interfaces. Currently only a few have been standardized: EDNS0_NSID (RFC 5001) and EDNS0_SUBNET (RFC 7871). Note that these options may be combined in an OPT RR, and a server can check which (if any) options are set on an incoming message.

SIG(0), from RFC 2931, works like TSIG, except that SIG(0) uses public key cryptography instead of the shared secret approach of TSIG. Supported algorithms: ECDSAP256SHA256, ECDSAP384SHA384, RSASHA1, RSASHA256 and RSASHA512. Signing subsequent messages in multi-message sessions is not implemented.
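A hedged sketch of the TSIG-signed AXFR pattern described above, using the illustrative key name, secret and server address from the text; the import path assumes github.com/miekg/dns.

    package main

    import (
        "fmt"
        "log"
        "time"

        "github.com/miekg/dns"
    )

    func main() {
        secret := map[string]string{"axfr.": "so6ZGir4GPAqINNh9U5c3A=="}

        m := new(dns.Msg)
        m.SetAxfr("miek.nl.")
        // SetTsig must be called last, after all other changes to the message.
        m.SetTsig("axfr.", dns.HmacSHA256, 300, time.Now().Unix())

        t := new(dns.Transfer)
        t.TsigSecret = secret

        env, err := t.In(m, "176.58.119.54:53")
        if err != nil {
            log.Fatal(err)
        }
        // Each envelope has already been checked with TSIG.
        for e := range env {
            if e.Error != nil {
                log.Fatal(e.Error)
            }
            for _, rr := range e.RR {
                fmt.Println(rr.String())
            }
        }
    }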
Package advancedtls provides gRPC transport credentials that allow easy configuration of advanced TLS features. The APIs here give the user more customizable control to fit their security landscape, thus the "advanced" moniker. This package provides both interfaces and generally useful implementations of those interfaces, for example periodic credential reloading, support for certificate revocation lists, and customizable certificate verification behaviors. If the provided implementations do not fit a given use case, a custom implementation of the interface can be injected.
Package cap provides all the Linux Capabilities userspace library API bindings in native Go. Capabilities are a feature of the Linux kernel that allow fine-grained permissions to perform privileged operations. Privileged operations are required to do irregular system level operations from code. You can read more about how Capabilities are intended to work in the paper linked from the project site noted below. This package supports native Go bindings for all the features described in that paper as well as supporting subsequent changes to the kernel for other styles of inheritable Capability. Some simple things you can do with this package are sketched in the example at the end of this description.

The "cap" package operates with POSIX semantics for security state. That is, all OS threads are kept in sync at all times. The package "kernel.org/pub/linux/libs/security/libcap/psx" is used to implement POSIX semantics system calls that manipulate thread state uniformly over the whole Go (and any CGo linked) process runtime. Note, if the Go runtime syscall interface contains the Linux variant syscall.AllThreadsSyscall() API (it debuted in go1.16; see https://github.com/golang/go/issues/1435 for its history) then the "libcap/psx" package will use that to invoke Capability setting system calls in pure Go binaries. With such an enhanced Go runtime, to force this behavior, use the CGO_ENABLED=0 environment variable.

POSIX semantics are more secure than trying to manage privilege at a thread level when those threads share a common memory image as they do under Linux: it is trivial to exploit a vulnerability in one thread of a process to cause execution on any other thread. So any imbalance in security state in such cases will readily create an opportunity for a privilege escalation vulnerability. POSIX semantics also work well with Go, which deliberately tries to insulate the user from worrying about the number of OS threads that are actually running in their program. Indeed, Go can efficiently launch and manage tens of thousands of concurrent goroutines without bogging the program or wider system down. It does this by aggressively migrating idle threads to make progress on unblocked goroutines. So, inconsistent security state across OS threads can also lead to program misbehavior.

The only exception to this process-wide common security state is the cap.Launcher related functionality. This briefly locks an OS thread to a goroutine in order to launch another executable - the robust implementation of this kind of support is quite subtle, so please read its documentation carefully if you find that you need it.

See https://sites.google.com/site/fullycapable/ for recent updates, some more complete walk-through examples of ways of using 'cap.Set's etc and information on how to file bugs.

Copyright (c) 2019-21 Andrew G. Morgan <morgan@kernel.org> The cap and psx packages are licensed with a (you choose) BSD 3-clause or GPL2. See LICENSE file for details.
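Below is a minimal sketch of inspecting process capabilities with this package; it only reads state (GetProc and GetFlag) and modifies nothing, and CAP_SETUID is just an arbitrary capability chosen for illustration.

    package main

    import (
        "fmt"
        "log"

        "kernel.org/pub/linux/libs/security/libcap/cap"
    )

    func main() {
        // Snapshot the capabilities of the current process.
        c := cap.GetProc()
        fmt.Println("current caps:", c)

        // Check whether CAP_SETUID is in the effective set.
        on, err := c.GetFlag(cap.Effective, cap.SETUID)
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println("effective CAP_SETUID:", on)
    }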
Package age implements file encryption according to the age-encryption.org/v1 specification. For most use cases, use the Encrypt and Decrypt functions with X25519Recipient and X25519Identity. If passphrase encryption is required, use ScryptRecipient and ScryptIdentity. For compatibility with existing SSH keys use the filippo.io/age/agessh package. age encrypted files are binary and not malleable. For encoding them as text, use the filippo.io/age/armor package. age does not have a global keyring. Instead, since age keys are small, textual, and cheap, you are encouraged to generate dedicated keys for each task and application. Recipient public keys can be passed around as command line flags and in config files, while secret keys should be stored in dedicated files, through secret management systems, or as environment variables. There is no default path for age keys. Instead, they should be stored at application-specific paths. The CLI supports files where private keys are listed one per line, ignoring empty lines and lines starting with "#". These files can be parsed with ParseIdentities. When integrating age into a new system, it's recommended that you only support X25519 keys, and not SSH keys. The latter are supported for manual encryption operations. If you need to tie into existing key management infrastructure, you might want to consider implementing your own Recipient and Identity. Files encrypted with a stable version (not alpha, beta, or release candidate) of age, or with any v1.0.0 beta or release candidate, will decrypt with any later versions of the v1 API. This might change in v2, in which case v1 will be maintained with security fixes for compatibility with older files. If decrypting an older file poses a security risk, doing so might require an explicit opt-in in the API.
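A minimal sketch of the Encrypt/Decrypt pattern with a freshly generated X25519 key pair; in practice the identity would be loaded from a dedicated key file or secret store rather than generated inline.

    package main

    import (
        "bytes"
        "fmt"
        "io"
        "log"

        "filippo.io/age"
    )

    func main() {
        identity, err := age.GenerateX25519Identity()
        if err != nil {
            log.Fatal(err)
        }
        recipient := identity.Recipient()

        // Encrypt to the recipient's public key.
        var ciphertext bytes.Buffer
        w, err := age.Encrypt(&ciphertext, recipient)
        if err != nil {
            log.Fatal(err)
        }
        if _, err := io.WriteString(w, "hello age"); err != nil {
            log.Fatal(err)
        }
        if err := w.Close(); err != nil {
            log.Fatal(err)
        }

        // Decrypt with the matching identity (private key).
        r, err := age.Decrypt(&ciphertext, identity)
        if err != nil {
            log.Fatal(err)
        }
        plaintext, err := io.ReadAll(r)
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println(string(plaintext))
    }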
Package gocql implements a fast and robust Cassandra driver for the Go programming language.

Pass a list of initial node IP addresses to NewCluster to create a new cluster configuration; a port can also be specified as part of each address. It is recommended to use the value set in the Cassandra config for broadcast_address or listen_address, an IP address rather than a domain name. This is because events from Cassandra will use the configured IP address, which is used to index connected hosts. If the domain name specified resolves to more than one IP address then the driver may connect multiple times to the same host, and will not mark the node as down or up from events. You can then customize more options (see ClusterConfig).

The driver tries to automatically detect the protocol version to use if not set, but you might want to set the protocol version explicitly, as it's not defined which version will be used in certain situations (for example during an upgrade of the cluster when some of the nodes support a different set of protocol versions than other nodes). The driver advertises the module name and version in the STARTUP message, so servers are able to detect the version. If you use a replace directive in go.mod, the driver will send information about the replacement module instead.

When ready, create a session from the configuration, and don't forget to Close the session once you are done with it.

The CQL protocol uses a SASL-based authentication mechanism and so consists of an exchange of server challenge and client response pairs. The details of the exchanged messages depend on the authenticator used. To use authentication, set ClusterConfig.Authenticator or ClusterConfig.AuthProvider. PasswordAuthenticator is provided for username/password authentication.

It is possible to secure traffic between the client and server with TLS. To use TLS, set the ClusterConfig.SslOpts field. SslOptions embeds *tls.Config so you can set that directly. There are also helpers to load keys/certificates from files. Warning: due to historical reasons, SslOptions is insecure by default, so you need to set EnableHostVerification to true if no Config is set. Most users should set SslOptions.Config to a *tls.Config; SslOptions and Config.InsecureSkipVerify interact, so review both when configuring TLS.

To route queries to the local DC first, use DCAwareRoundRobinPolicy, passing the name of the datacenter you primarily want to connect to (for example dc1, as configured in the database). The driver can route queries to nodes that hold data replicas based on partition key (preferring the local DC). Note that TokenAwareHostPolicy can take options such as gocql.ShuffleReplicas and gocql.NonLocalReplicasFallback. We recommend running with a token aware host policy in production for maximum performance. The driver can only use token-aware routing for queries where all partition key columns are query parameters, so bind the partition key values as parameters rather than embedding them in the query text. The DCAwareRoundRobinPolicy can be replaced with RackAwareRoundRobinPolicy, which takes two parameters, datacenter and rack. Instead of dividing hosts into two tiers (local datacenter and remote datacenters) it divides hosts into three (the local rack, the rest of the local datacenter, and everything else). RackAwareRoundRobinPolicy can be combined with TokenAwareHostPolicy in the same way as DCAwareRoundRobinPolicy.

Create queries with Session.Query. Query values must not be reused between different executions and must not be modified after starting execution of the query.
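A hedged sketch of the cluster configuration and session creation described above; the contact points, keyspace, credentials and datacenter name are placeholders.

    package main

    import (
        "log"

        "github.com/gocql/gocql"
    )

    func main() {
        cluster := gocql.NewCluster("192.168.1.1", "192.168.1.2", "192.168.1.3")
        cluster.Keyspace = "example"
        cluster.Consistency = gocql.Quorum

        // Optional username/password authentication.
        cluster.Authenticator = gocql.PasswordAuthenticator{
            Username: "user",
            Password: "password",
        }

        // Route queries to the local datacenter first, with token awareness.
        cluster.PoolConfig.HostSelectionPolicy = gocql.TokenAwareHostPolicy(
            gocql.DCAwareRoundRobinPolicy("dc1"),
        )

        session, err := cluster.CreateSession()
        if err != nil {
            log.Fatal(err)
        }
        defer session.Close()
    }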
To execute a query without reading results, use Query.Exec. A single row can be read by calling Query.Scan. Multiple rows can be read using Iter.Scanner. See Example for a complete example.

The driver automatically prepares DML queries (SELECT/INSERT/UPDATE/DELETE/BATCH statements) and maintains a cache of prepared statements. The CQL protocol does not support preparing other query types. When using CQL protocol >= 4, it is possible to use gocql.UnsetValue as the bound value of a column. This will cause the database to ignore writing the column. The main advantage is the ability to keep the same prepared statement even when you don't want to update some fields, where before you needed to make another prepared statement.

Session is safe to use from multiple goroutines, so to execute multiple concurrent queries, just execute them from several worker goroutines. Gocql provides a synchronous-looking API (as recommended for Go APIs) and the queries are executed asynchronously at the protocol level.

Null values are unmarshalled as the zero value of the type. If you need to distinguish, for example, between a text column being null and an empty string, you can unmarshal into a *string variable instead of string. See Example_nulls for a full example.

The driver reuses the backing memory of slices when unmarshalling. This is an optimization so that a buffer does not need to be allocated for every processed row. However, you need to be careful when storing the slices to other memory structures. When you want to save the data for later use, pass a new slice every time. A common pattern is to declare the slice variable within the scanner loop.

The driver supports paging of results with automatic prefetch, see ClusterConfig.PageSize, Session.SetPrefetch, Query.PageSize, and Query.Prefetch. It is also possible to control the paging manually with Query.PageState (this disables automatic prefetch). Manual paging is useful if you want to store the page state externally, for example in a URL to allow users to browse pages of a result. You might want to sign/encrypt the paging state when exposing it externally since it contains data from primary keys. Paging state is specific to the CQL protocol version and the exact query used. It is meant as opaque state that should not be modified. If you send paging state from a different query or protocol version, then the behaviour is not defined (you might get unexpected results or an error from the server). For example, do not send paging state returned by a node using protocol version 3 to a node using protocol version 4. Also, when using protocol version 4, paging state between Cassandra 2.2 and 3.0 is incompatible (https://issues.apache.org/jira/browse/CASSANDRA-10880). The driver does not check whether the paging state is from the same protocol version/statement. You might want to validate that yourself, as this could be a problem if you store paging state externally. For example, if you store paging state in a URL, the URLs might become broken when you upgrade your cluster.

Call Query.PageState(nil) to fetch just the first page of the query results. Pass the page state returned by Iter.PageState to Query.PageState of a subsequent query to get the next page. If the length of the slice returned by Iter.PageState is zero, there are no more pages available (or an error occurred). Using too low a value for PageSize will negatively affect performance; a value below 100 is probably too low.
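A hedged sketch of the manual paging pattern described above, against a hypothetical events table; it also shows declaring the scan target inside the scanner loop so slice memory is not shared between rows.

    package example

    import "github.com/gocql/gocql"

    // fetchPage fetches one page of results and returns the state needed for
    // the next page. An empty returned state means there are no more pages.
    func fetchPage(session *gocql.Session, pageState []byte) ([]string, []byte, error) {
        // Setting PageState disables automatic prefetch; the Iter holds a
        // single page and its PageState() yields the state for the next one.
        iter := session.Query(`SELECT payload FROM events`).
            PageSize(100).
            PageState(pageState).
            Iter()
        nextPageState := iter.PageState()

        var payloads []string
        scanner := iter.Scanner()
        for scanner.Next() {
            // Declare the scan target inside the loop so backing memory is
            // not shared between iterations.
            var p string
            if err := scanner.Scan(&p); err != nil {
                return nil, nil, err
            }
            payloads = append(payloads, p)
        }
        if err := scanner.Err(); err != nil {
            return nil, nil, err
        }
        return payloads, nextPageState, nil
    }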
While Cassandra currently returns exactly PageSize items (except for the last page) in a page, the protocol authors explicitly reserved the right to return a smaller or larger number of items in a page for performance reasons, so don't rely on the page having the exact count of items. See Example_paging for an example of manual paging.

There are certain situations when you don't know the list of columns in advance, mainly when the query is supplied by the user. Iter.Columns, Iter.RowData, Iter.MapScan and Iter.SliceMap can be used to handle this case. See Example_dynamicColumns.

The CQL protocol supports sending batches of DML statements (INSERT/UPDATE/DELETE) and so does gocql. Use Session.NewBatch to create a new batch and then fill in the details of the individual queries. Then execute the batch with Session.ExecuteBatch. Logged batches ensure atomicity: either all or none of the operations in the batch will succeed, but they carry overhead to ensure this property. Unlogged batches don't have the overhead of logged batches, but don't guarantee atomicity. Updates of counters are handled specially by Cassandra, so batches of counter updates have to use the CounterBatch type. A counter batch can only contain statements to update counters. For unlogged batches it is recommended to send only single-partition batches (i.e. all statements in the batch should involve only a single partition). A multi-partition batch needs to be split by the coordinator node and re-sent to the correct nodes. With single-partition batches you can send the batch directly to the node for the partition without incurring the additional network hop. It is also possible to pass an entire BEGIN BATCH .. APPLY BATCH statement to Query.Exec. There are differences in how those are executed: a BEGIN BATCH statement passed to Query.Exec is prepared as a whole, as a single statement, while Session.ExecuteBatch prepares the individual statements in the batch. If you have variable-length batches using the same statement, using Session.ExecuteBatch is more efficient. See Example_batch for an example.

Query.ScanCAS or Query.MapScanCAS can be used to execute a single-statement lightweight transaction (an INSERT/UPDATE .. IF statement) and read its result. See the example for Query.MapScanCAS. Multiple-statement lightweight transactions can be executed as a logged batch that contains at least one conditional statement. All the conditions must return true for the batch to be applied. You can use Session.ExecuteBatchCAS and Session.MapExecuteBatchCAS when executing the batch to learn about the result of the LWT. See the example for Session.MapExecuteBatchCAS.

Queries can be marked as idempotent. Marking a query as idempotent tells the driver that the query can be executed multiple times without affecting its result. Non-idempotent queries are not eligible for retrying or speculative execution. Idempotent queries are retried in case of errors based on the configured RetryPolicy. Queries can be retried even before they fail, by setting a SpeculativeExecutionPolicy. The policy can cause the driver to retry on a different node if the query is taking longer than a specified delay, even before the driver receives an error or timeout from the server. When a query is speculatively executed, the original execution is still running; the two parallel executions of the query race to return a result, and the first result received is returned.
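A hedged sketch combining two of the patterns above: a logged batch against a hypothetical events table, and a query marked idempotent so it is eligible for retries and speculative execution. The table, statements and policy values are illustrative.

    package example

    import (
        "time"

        "github.com/gocql/gocql"
    )

    // batchAndRetry executes a logged batch and then an idempotent read with
    // retry and speculative execution policies attached.
    func batchAndRetry(session *gocql.Session, pk string) error {
        b := session.NewBatch(gocql.LoggedBatch)
        b.Query(`INSERT INTO events (pk, seq, payload) VALUES (?, ?, ?)`, pk, 1, "a")
        b.Query(`INSERT INTO events (pk, seq, payload) VALUES (?, ?, ?)`, pk, 2, "b")
        if err := session.ExecuteBatch(b); err != nil {
            return err
        }

        var n int
        return session.Query(`SELECT count(*) FROM events WHERE pk = ?`, pk).
            Idempotent(true).
            RetryPolicy(&gocql.SimpleRetryPolicy{NumRetries: 3}).
            SetSpeculativeExecutionPolicy(&gocql.SimpleSpeculativeExecution{
                NumAttempts:  1,
                TimeoutDelay: 100 * time.Millisecond,
            }).
            Scan(&n)
    }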
UDTs can be marshaled and unmarshaled from/to a map[string]interface{} or a Go struct (or a type implementing the UDTUnmarshaler, UDTMarshaler, Unmarshaler or Marshaler interfaces). For structs, the cql tag can be used to specify the CQL field name to be mapped to a struct field, as in the sketch after this description. See Example_userDefinedTypesMap, Example_userDefinedTypesStruct, ExampleUDTMarshaler, ExampleUDTUnmarshaler.

It is possible to provide observer implementations that can be used to gather metrics. The CQL protocol also supports tracing of queries. When enabled, the database will write information about internal events that happened during execution of the query. You can use Query.Trace to request tracing and receive the session ID that the database used to store the trace information in the system_traces.sessions and system_traces.events tables. NewTraceWriter returns an implementation of Tracer that writes the events to a writer. Gathering trace information might be essential for debugging and optimizing queries, but writing traces has overhead, so this feature should not be used on production systems with very high load unless you know what you are doing.

Example_batch demonstrates how to execute a batch of statements. Example_dynamicColumns demonstrates how to handle a dynamic column list. Example_marshalerUnmarshaler demonstrates how to implement a Marshaler and Unmarshaler. Example_nulls demonstrates how to distinguish between null and zero value when needed; null values are unmarshalled as the zero value of the type, and if you need to distinguish, for example, between a text column being null and an empty string, you can unmarshal into a *string field. Example_paging demonstrates how to manually fetch pages and use page state; see also the package documentation about paging. Example_set demonstrates how to use sets. Example_userDefinedTypesMap demonstrates how to work with user-defined types as maps; see also Example_userDefinedTypesStruct and the examples for UDTMarshaler and UDTUnmarshaler if you want to map to structs. Example_userDefinedTypesStruct demonstrates how to work with user-defined types as structs; see also the examples for UDTMarshaler and UDTUnmarshaler if you need more control/better performance.
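A small sketch of the struct mapping mentioned above, for a hypothetical address UDT; the cql tags name the CQL fields.

    package example

    // Address maps a hypothetical CQL user-defined type
    //   CREATE TYPE address (street text, city text, zip_code text);
    // onto a Go struct using cql tags.
    type Address struct {
        Street  string `cql:"street"`
        City    string `cql:"city"`
        ZipCode string `cql:"zip_code"`
    }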
Package sts provides the API client, operations, and parameter types for AWS Security Token Service. Security Token Service (STS) enables you to request temporary, limited-privilege credentials for users. This guide provides descriptions of the STS API. For more information about using this service, see Temporary Security Credentials.
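A minimal sketch of requesting temporary credentials with the AWS SDK for Go v2; the role ARN and session name are placeholders, not values from this documentation.

    package main

    import (
        "context"
        "fmt"
        "log"

        "github.com/aws/aws-sdk-go-v2/aws"
        "github.com/aws/aws-sdk-go-v2/config"
        "github.com/aws/aws-sdk-go-v2/service/sts"
    )

    func main() {
        ctx := context.Background()
        cfg, err := config.LoadDefaultConfig(ctx)
        if err != nil {
            log.Fatal(err)
        }
        client := sts.NewFromConfig(cfg)

        // Request temporary credentials for a role; the ARN is a placeholder.
        out, err := client.AssumeRole(ctx, &sts.AssumeRoleInput{
            RoleArn:         aws.String("arn:aws:iam::123456789012:role/example"),
            RoleSessionName: aws.String("example-session"),
        })
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println(*out.Credentials.AccessKeyId)
    }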
Package ecr provides the API client, operations, and parameter types for Amazon Elastic Container Registry. Amazon Elastic Container Registry (Amazon ECR) is a managed container image registry service. Customers can use the familiar Docker CLI, or their preferred client, to push, pull, and manage images. Amazon ECR provides a secure, scalable, and reliable registry for your Docker or Open Container Initiative (OCI) images. Amazon ECR supports private repositories with resource-based permissions using IAM so that specific users or Amazon EC2 instances can access repositories and images. Amazon ECR has service endpoints in each supported Region. For more information, see Amazon ECR endpoints in the Amazon Web Services General Reference.
Package ecrpublic provides the API client, operations, and parameter types for Amazon Elastic Container Registry Public. Amazon Elastic Container Registry Public (Amazon ECR Public) is a managed container image registry service. Amazon ECR provides both public and private registries to host your container images. You can use the Docker CLI or your preferred client to push, pull, and manage images. Amazon ECR provides a secure, scalable, and reliable registry for your Docker or Open Container Initiative (OCI) images. Amazon ECR supports public repositories with this API. For information about the Amazon ECR API for private repositories, see Amazon Elastic Container Registry API Reference.
Package kms provides the API client, operations, and parameter types for AWS Key Management Service. Key Management Service (KMS) is an encryption and key management web service. This guide describes the KMS operations that you can call programmatically. For general information about KMS, see the Key Management Service Developer Guide. KMS has replaced the term customer master key (CMK) with KMS key and KMS key. The concept has not changed. To prevent breaking changes, KMS is keeping some variations of this term.

Amazon Web Services provides SDKs that consist of libraries and sample code for various programming languages and platforms (Java, Ruby, .Net, macOS, Android, etc.). The SDKs provide a convenient way to create programmatic access to KMS and other Amazon Web Services services. For example, the SDKs take care of tasks such as signing requests (see below), managing errors, and retrying requests automatically. For more information about the Amazon Web Services SDKs, including how to download and install them, see Tools for Amazon Web Services. We recommend that you use the Amazon Web Services SDKs to make programmatic API calls to KMS. If you need to use FIPS 140-2 validated cryptographic modules when communicating with Amazon Web Services, use the FIPS endpoint in your preferred Amazon Web Services Region. For more information about the available FIPS endpoints, see Service endpoints in the Key Management Service topic of the Amazon Web Services General Reference.

All KMS API calls must be signed and transmitted using Transport Layer Security (TLS). KMS recommends you always use the latest supported TLS version. Clients must also support cipher suites with Perfect Forward Secrecy (PFS) such as Ephemeral Diffie-Hellman (DHE) or Elliptic Curve Ephemeral Diffie-Hellman (ECDHE). Most modern systems such as Java 7 and later support these modes. Requests must be signed using an access key ID and a secret access key. We strongly recommend that you do not use your Amazon Web Services account root access key ID and secret access key for everyday work. You can use the access key ID and secret access key for an IAM user or you can use the Security Token Service (STS) to generate temporary security credentials and use those to sign requests. All KMS requests must be signed with Signature Version 4.

KMS supports CloudTrail, a service that logs Amazon Web Services API calls and related events for your Amazon Web Services account and delivers them to an Amazon S3 bucket that you specify. By using the information collected by CloudTrail, you can determine what requests were made to KMS, who made the request, when it was made, and so on. To learn more about CloudTrail, including how to turn it on and find your log files, see the CloudTrail User Guide. For more information about credentials and request signing, see the following: Amazon Web Services Security Credentials, Temporary Security Credentials, Signature Version 4 Signing Process.

Of the API operations discussed in this guide, the following will prove the most useful for most applications. You will likely perform operations other than these, such as creating keys and assigning policies, by using the console.
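A minimal sketch of calling KMS from the AWS SDK for Go v2, encrypting and then decrypting a small payload; the key alias is a placeholder.

    package main

    import (
        "context"
        "fmt"
        "log"

        "github.com/aws/aws-sdk-go-v2/aws"
        "github.com/aws/aws-sdk-go-v2/config"
        "github.com/aws/aws-sdk-go-v2/service/kms"
    )

    func main() {
        ctx := context.Background()
        cfg, err := config.LoadDefaultConfig(ctx)
        if err != nil {
            log.Fatal(err)
        }
        client := kms.NewFromConfig(cfg)

        // Encrypt a small payload under a KMS key; the key ID is a placeholder.
        enc, err := client.Encrypt(ctx, &kms.EncryptInput{
            KeyId:     aws.String("alias/example"),
            Plaintext: []byte("hello"),
        })
        if err != nil {
            log.Fatal(err)
        }

        // Decrypt it again; KMS locates the key from the ciphertext metadata.
        dec, err := client.Decrypt(ctx, &kms.DecryptInput{
            CiphertextBlob: enc.CiphertextBlob,
        })
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println(string(dec.Plaintext))
    }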
Package ssm provides the API client, operations, and parameter types for Amazon Simple Systems Manager (SSM). Amazon Web Services Systems Manager is the operations hub for your Amazon Web Services applications and resources and a secure end-to-end management solution for hybrid cloud environments that enables safe and secure operations at scale. This reference is intended to be used with the Amazon Web Services Systems Manager User Guide. To get started, see Setting up Amazon Web Services Systems Manager. Related resources: For information about each of the capabilities that comprise Systems Manager, see Systems Manager capabilities in the Amazon Web Services Systems Manager User Guide. For details about predefined runbooks for Automation, a capability of Amazon Web Services Systems Manager, see the Systems Manager Automation runbook reference. For information about AppConfig, a capability of Systems Manager, see the AppConfig User Guide and the AppConfig API Reference. For information about Incident Manager, a capability of Systems Manager, see the Systems Manager Incident Manager User Guide and the Systems Manager Incident Manager API Reference.
Package iam provides the API client, operations, and parameter types for AWS Identity and Access Management. Identity and Access Management (IAM) is a web service for securely controlling access to Amazon Web Services services. With IAM, you can centrally manage users, security credentials such as access keys, and permissions that control which Amazon Web Services resources users and applications can access. For more information about IAM, see Identity and Access Management (IAM) and the Identity and Access Management User Guide.
Package apigateway provides the API client, operations, and parameter types for Amazon API Gateway. Amazon API Gateway helps developers deliver robust, secure, and scalable mobile and web application back ends. API Gateway allows developers to securely connect mobile and web applications to APIs that run on Lambda, Amazon EC2, or other publicly addressable web services that are hosted outside of AWS.
Package securityhub provides the API client, operations, and parameter types for AWS SecurityHub. Security Hub provides you with a comprehensive view of your security state in Amazon Web Services and helps you assess your Amazon Web Services environment against security industry standards and best practices. Security Hub collects security data across Amazon Web Services accounts, Amazon Web Services services, and supported third-party products and helps you analyze your security trends and identify the highest priority security issues. To help you manage the security state of your organization, Security Hub supports multiple security standards. These include the Amazon Web Services Foundational Security Best Practices (FSBP) standard developed by Amazon Web Services, and external compliance frameworks such as the Center for Internet Security (CIS), the Payment Card Industry Data Security Standard (PCI DSS), and the National Institute of Standards and Technology (NIST). Each standard includes several security controls, each of which represents a security best practice. Security Hub runs checks against security controls and generates control findings to help you assess your compliance against security best practices. In addition to generating control findings, Security Hub also receives findings from other Amazon Web Services services, such as Amazon GuardDuty and Amazon Inspector, and supported third-party products. This gives you a single pane of glass into a variety of security-related issues. You can also send Security Hub findings to other Amazon Web Services services and supported third-party products. Security Hub offers automation features that help you triage and remediate security issues. For example, you can use automation rules to automatically update critical findings when a security check fails. You can also leverage the integration with Amazon EventBridge to trigger automatic responses to specific findings. This guide, the Security Hub API Reference, provides information about the Security Hub API. This includes supported resources, HTTP methods, parameters, and schemas. If you're new to Security Hub, you might find it helpful to also review the Security Hub User Guide. The user guide explains key concepts and provides procedures that demonstrate how to use Security Hub features. It also provides information about topics such as integrating Security Hub with other Amazon Web Services services. In addition to interacting with Security Hub by making calls to the Security Hub API, you can use a current version of an Amazon Web Services command line tool or SDK. Amazon Web Services provides tools and SDKs that consist of libraries and sample code for various languages and platforms, such as PowerShell, Java, Go, Python, C++, and .NET. These tools and SDKs provide convenient, programmatic access to Security Hub and other Amazon Web Services services. They also handle tasks such as signing requests, managing errors, and retrying requests automatically. For information about installing and using the Amazon Web Services tools and SDKs, see Tools to Build on Amazon Web Services. With the exception of operations that are related to central configuration, Security Hub API requests are executed only in the Amazon Web Services Region that is currently active or in the specific Amazon Web Services Region that you specify in your request. Any configuration or settings change that results from the operation is applied only to that Region.
To make the same change in other Regions, call the same API operation in each Region in which you want to apply the change. When you use central configuration, API requests for enabling Security Hub, standards, and controls are executed in the home Region and all linked Regions. For a list of central configuration operations, see the Central configuration terms and concepts section of the Security Hub User Guide. The following throttling limits apply to Security Hub API operations.

- BatchEnableStandards - RateLimit of 1 request per second. BurstLimit of 1 request per second.

- GetFindings - RateLimit of 3 requests per second. BurstLimit of 6 requests per second.

- BatchImportFindings - RateLimit of 10 requests per second. BurstLimit of 30 requests per second.

- BatchUpdateFindings - RateLimit of 10 requests per second. BurstLimit of 30 requests per second.

- UpdateStandardsControl - RateLimit of 1 request per second. BurstLimit of 5 requests per second.

- All other operations - RateLimit of 10 requests per second. BurstLimit of 30 requests per second.
Package memguard implements a secure software enclave for the storage of sensitive information in memory. There are two main container objects exposed in this API. Enclave objects encrypt data and store the ciphertext whereas LockedBuffers are more like guarded memory allocations. There is a limit on the maximum number of LockedBuffer objects that can exist at any one time, imposed by the system's mlock limits. There is no limit on Enclaves. The general workflow is to store sensitive information in Enclaves when it is not immediately needed and decrypt it when and where it is needed. After use, the LockedBuffer should be destroyed. If you need access to the data inside a LockedBuffer in a type not covered by any methods provided by this API, you can type-cast the allocation's memory to whatever type you want. This is of course an unsafe operation and so care must be taken to ensure that the cast is valid and does not result in memory unsafety. Further examples of code and interesting use-cases can be found in the examples subpackage. Several functions exist to make the mass purging of data very easy. It is recommended to make use of them when appropriate. Core dumps are disabled by default. If you absolutely require them, you can enable them by using unix.Setrlimit to set RLIMIT_CORE to an appropriate value.
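A minimal sketch of the Enclave/LockedBuffer workflow described above, assuming the github.com/awnumar/memguard import path; the secret is a placeholder.

    package main

    import (
        "fmt"

        "github.com/awnumar/memguard"
    )

    func main() {
        // Purge the session and wipe sensitive buffers on exit.
        defer memguard.Purge()

        // Seal a secret inside an Enclave; the source buffer is wiped.
        enclave := memguard.NewEnclave([]byte("secret value"))

        // Decrypt it into a guarded LockedBuffer only while it is needed.
        buf, err := enclave.Open()
        if err != nil {
            memguard.SafePanic(err)
        }
        defer buf.Destroy()

        fmt.Println(len(buf.Bytes()))
    }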
Package csrf (gorilla/csrf) provides Cross Site Request Forgery (CSRF) prevention middleware for Go web applications & services. It includes:

* The `csrf.Protect` middleware/handler, which provides CSRF protection on routes attached to a router or a sub-router.

* A `csrf.Token` function that provides the token to pass into your response, whether that be an HTML form or a JSON response body.

* ... and a `csrf.TemplateField` helper that you can pass into your `html/template` templates to replace a `{{ .csrfField }}` template tag with a hidden input field.

gorilla/csrf is easy to use: add the middleware to your router or individual handlers, as in the sketch after this description, and then collect the token with `csrf.Token(r)` before passing it to the template, JSON body or HTTP header (you pick!). gorilla/csrf inspects the form body (first) and HTTP headers (second) on subsequent POST/PUT/PATCH/DELETE/etc. requests for the token.

Note that the authentication key passed to `csrf.Protect([]byte(key))` should be 32 bytes long and persist across application restarts. Generating a random key won't allow you to authenticate existing cookies and will break your CSRF validation.

Here's the common use case: HTML forms you want to provide CSRF protection for, in order to guard against malicious POST requests. Note that the CSRF middleware will (by necessity) consume the request body if the token is passed via POST form values. If you need to consume this in your handler, insert your own middleware earlier in the chain to capture the request body.

You can also send the CSRF token in the response header. This approach is useful if you're using a front-end JavaScript framework like Ember or Angular, or are providing a JSON API. If you're writing a client that's supposed to mimic browser behavior, make sure to send back the CSRF cookie (the default name is _gorilla_csrf, but this can be changed with the CookieName Option) along with either the X-CSRF-Token header or the gorilla.csrf.Token form field.

In addition: getting CSRF protection right is important, so here's some background:

* This library generates unique-per-request (masked) tokens as a mitigation against the BREACH attack (http://breachattack.com/).

* The 'base' (unmasked) token is stored in the session, which means that multiple browser tabs won't cause a user problems as their per-request token is compared with the base token.

* It operates on a "whitelist only" approach where safe (non-mutating) HTTP methods (GET, HEAD, OPTIONS, TRACE) are the *only* methods where token validation is not enforced.

* The design is based on the battle-tested Django (https://docs.djangoproject.com/en/1.8/ref/csrf/) and Ruby on Rails (http://api.rubyonrails.org/classes/ActionController/RequestForgeryProtection.html) approaches.

* Cookies are authenticated and based on the securecookie (https://github.com/gorilla/securecookie) library. They're also Secure (issued over HTTPS only) and are HttpOnly by default, because sane defaults are important.

* Go's `crypto/rand` library is used to generate the 32 byte (256 bit) tokens and the one-time-pad used for masking them.

This library does not seek to be adventurous.
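A hedged sketch of wiring the middleware, template tag and token helpers together; the key, route and template are placeholders.

    package main

    import (
        "html/template"
        "log"
        "net/http"

        "github.com/gorilla/csrf"
    )

    var form = template.Must(template.New("signup").Parse(
        `<form method="POST" action="/signup">{{ .csrfField }}</form>`))

    func main() {
        mux := http.NewServeMux()
        mux.HandleFunc("/signup", showForm)

        // The 32-byte key is a placeholder; use a persistent key in practice.
        // For local plain-HTTP testing the csrf.Secure(false) option can be
        // passed to Protect, since cookies are Secure by default.
        key := []byte("32-byte-long-auth-key-goes-here!")
        log.Fatal(http.ListenAndServe(":8000", csrf.Protect(key)(mux)))
    }

    func showForm(w http.ResponseWriter, r *http.Request) {
        // TemplateField fills the {{ .csrfField }} tag with a hidden input;
        // csrf.Token(r) could instead be sent in a header or JSON body.
        form.Execute(w, map[string]interface{}{
            csrf.TemplateTag: csrf.TemplateField(r),
        })
    }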
Package neptune provides the API client, operations, and parameter types for Amazon Neptune. Amazon Neptune is a fast, reliable, fully-managed graph database service that makes it easy to build and run applications that work with highly connected datasets. The core of Amazon Neptune is a purpose-built, high-performance graph database engine optimized for storing billions of relationships and querying the graph with millisecond latency. Amazon Neptune supports the popular graph models Property Graph and W3C's RDF, and their respective query languages Apache TinkerPop Gremlin and SPARQL, allowing you to easily build queries that efficiently navigate highly connected datasets. Neptune powers graph use cases such as recommendation engines, fraud detection, knowledge graphs, drug discovery, and network security. This interface reference for Amazon Neptune contains documentation for a programming or command line interface you can use to manage Amazon Neptune. Note that Amazon Neptune is asynchronous, which means that some interfaces might require techniques such as polling or callback functions to determine when a command has been applied. In this reference, the parameter descriptions indicate whether a command is applied immediately, on the next instance reboot, or during the maintenance window. The reference structure is as follows, and we list some related topics from the user guide after it.
Package configservice provides the API client, operations, and parameter types for AWS Config. Config provides a way to keep track of the configurations of all the Amazon Web Services resources associated with your Amazon Web Services account. You can use Config to get the current and historical configurations of each Amazon Web Services resource and also to get information about the relationship between the resources. An Amazon Web Services resource can be an Amazon Elastic Compute Cloud (Amazon EC2) instance, an Elastic Block Store (EBS) volume, an elastic network interface (ENI), or a security group. For a complete list of resources currently supported by Config, see Supported Amazon Web Services resources. You can access and manage Config through the Amazon Web Services Management Console, the Amazon Web Services Command Line Interface (Amazon Web Services CLI), the Config API, or the Amazon Web Services SDKs for Config. This reference guide contains documentation for the Config API and the Amazon Web Services CLI commands that you can use to manage Config. The Config API uses the Signature Version 4 protocol for signing requests. For more information about how to sign a request with this protocol, see Signature Version 4 Signing Process. For detailed information about Config features and their associated actions or commands, as well as how to work with the Amazon Web Services Management Console, see What Is Config in the Config Developer Guide.
Package appconfig provides the API client, operations, and parameter types for Amazon AppConfig. AppConfig feature flags and dynamic configurations help software builders quickly and securely adjust application behavior in production environments without full code deployments. AppConfig speeds up software release frequency, improves application resiliency, and helps you address emergent issues more quickly. With feature flags, you can gradually release new capabilities to users and measure the impact of those changes before fully deploying the new capabilities to all users. With operational flags and dynamic configurations, you can update block lists, allow lists, throttling limits, logging verbosity, and perform other operational tuning to quickly respond to issues in production environments. AppConfig is a capability of Amazon Web Services Systems Manager.

Despite the fact that application configuration content can vary greatly from application to application, AppConfig supports the following use cases, which cover a broad spectrum of customer needs: Feature flags and toggles - Safely release new capabilities to your customers in a controlled environment. Instantly roll back changes if you experience a problem. Application tuning - Carefully introduce application changes while testing the impact of those changes with users in production environments. Allow list or block list - Control access to premium features or instantly block specific users without deploying new code. Centralized configuration storage - Keep your configuration data organized and consistent across all of your workloads. You can use AppConfig to deploy configuration data stored in the AppConfig hosted configuration store, Secrets Manager, Systems Manager Parameter Store, or Amazon S3.

This section provides a high-level description of how AppConfig works and how you get started.

1. Identify configuration values in code you want to manage in the cloud. Before you start creating AppConfig artifacts, we recommend you identify configuration data in your code that you want to dynamically manage using AppConfig. Good examples include feature flags or toggles, allow and block lists, logging verbosity, service limits, and throttling rules, to name a few. If your configuration data already exists in the cloud, you can take advantage of AppConfig validation, deployment, and extension features to further streamline configuration data management.

2. Create an application namespace. To create a namespace, you create an AppConfig artifact called an application. An application is simply an organizational construct like a folder.

3. Create environments. For each AppConfig application, you define one or more environments. An environment is a logical grouping of targets, such as applications in a Beta or Production environment, Lambda functions, or containers. You can also define environments for application subcomponents, such as the Web, Mobile, and Back-end components. You can configure Amazon CloudWatch alarms for each environment. The system monitors alarms during a configuration deployment. If an alarm is triggered, the system rolls back the configuration.

4. Create a configuration profile. A configuration profile includes, among other things, a URI that enables AppConfig to locate your configuration data in its stored location and a profile type. AppConfig supports two configuration profile types: feature flags and freeform configurations.
Feature flag configuration profiles store their data in the AppConfig hosted configuration store, and the URI is simply hosted. For freeform configuration profiles, you can store your data in the AppConfig hosted configuration store or any Amazon Web Services service that integrates with AppConfig, as described in Creating a free form configuration profile in the AppConfig User Guide. A configuration profile can also include optional validators to ensure your configuration data is syntactically and semantically correct. AppConfig performs a check using the validators when you start a deployment. If any errors are detected, the deployment rolls back to the previous configuration data.

5. Deploy configuration data. When you create a new deployment, you specify the following:

- An application ID
- A configuration profile ID
- A configuration version
- An environment ID where you want to deploy the configuration data
- A deployment strategy ID that defines how fast you want the changes to take effect

When you call the StartDeployment API action, AppConfig performs the following tasks:

- Retrieves the configuration data from the underlying data store by using the location URI in the configuration profile.
- Verifies the configuration data is syntactically and semantically correct by using the validators you specified when you created your configuration profile.
- Caches a copy of the data so it is ready to be retrieved by your application. This cached copy is called the deployed data.

6. Retrieve the configuration. You can configure AppConfig Agent as a local host and have the agent poll AppConfig for configuration updates. The agent calls the StartConfigurationSession and GetLatestConfiguration API actions and caches your configuration data locally. To retrieve the data, your application makes an HTTP call to the localhost server. AppConfig Agent supports several use cases, as described in Simplified retrieval methods in the AppConfig User Guide. If AppConfig Agent isn't supported for your use case, you can configure your application to poll AppConfig for configuration updates by directly calling the StartConfigurationSession and GetLatestConfiguration API actions.

This reference is intended to be used with the AppConfig User Guide.
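A hedged sketch of starting a deployment with the AWS SDK for Go v2 appconfig client; all IDs are placeholders for artifacts created in the earlier steps, and the input fields are assumed to mirror the values listed for StartDeployment above.

    package main

    import (
        "context"
        "fmt"
        "log"

        "github.com/aws/aws-sdk-go-v2/aws"
        "github.com/aws/aws-sdk-go-v2/config"
        "github.com/aws/aws-sdk-go-v2/service/appconfig"
    )

    func main() {
        ctx := context.Background()
        cfg, err := config.LoadDefaultConfig(ctx)
        if err != nil {
            log.Fatal(err)
        }
        client := appconfig.NewFromConfig(cfg)

        // All IDs below are placeholders for artifacts created beforehand.
        out, err := client.StartDeployment(ctx, &appconfig.StartDeploymentInput{
            ApplicationId:          aws.String("abc1234"),
            EnvironmentId:          aws.String("def5678"),
            ConfigurationProfileId: aws.String("ghi9012"),
            ConfigurationVersion:   aws.String("1"),
            DeploymentStrategyId:   aws.String("jkl3456"),
        })
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println("deployment state:", out.State)
    }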
Package eventbridge provides the API client, operations, and parameter types for Amazon EventBridge. Amazon EventBridge helps you to respond to state changes in your Amazon Web Services resources. When your resources change state, they automatically send events to an event stream. You can create rules that match selected events in the stream and route them to targets to take action. You can also use rules to take action on a predetermined schedule. For example, you can configure rules to:

- Automatically invoke a Lambda function to update DNS entries when an event notifies you that an Amazon EC2 instance enters the running state.

- Direct specific API records from CloudTrail to an Amazon Kinesis data stream for detailed analysis of potential security or availability risks.

- Periodically invoke a built-in target to create a snapshot of an Amazon EBS volume.

For more information about the features of Amazon EventBridge, see the Amazon EventBridge User Guide.
Package graphql-go-tools is a library for creating GraphQL services in the Go programming language. GraphQL is a query language for APIs and a runtime for fulfilling those queries with your existing data. GraphQL provides a complete and understandable description of the data in your API, gives clients the power to ask for exactly what they need and nothing more, makes it easier to evolve APIs over time, and enables powerful developer tools. Source: https://graphql.org

This library is intended to be a set of low level building blocks to write high performance and secure GraphQL applications. Use cases could range from writing layer seven GraphQL proxies, firewalls, caches, etc. You would usually not use this library to write a GraphQL server yourself but to build tools for the GraphQL ecosystem. To achieve this goal the library has zero dependencies at its core functionality. It has a full implementation of the GraphQL AST and supports lexing, parsing, validation, normalization, introspection, query planning as well as query execution, etc. With the execution package it's possible to write a fully functional GraphQL server that is capable of mediating between various protocols and formats. In its current state you can use the following DataSources to resolve fields:

- Static data (embed static data into a schema to extend a field in a simple way)

- HTTP JSON APIs (combine multiple RESTful APIs into one single GraphQL endpoint, nesting is possible)

- GraphQL APIs (you can combine multiple GraphQL APIs into one single GraphQL endpoint, nesting is possible)

- WebAssembly/WASM Lambdas (e.g. resolve a field using a Rust lambda)

If you're looking for a ready-to-use solution that has all this functionality packaged as a Gateway, have a look at: https://github.com/jensneuse/graphql-gateway

Created by Jens Neuse
Package guardduty provides the API client, operations, and parameter types for Amazon GuardDuty. Amazon GuardDuty is a continuous security monitoring service that analyzes and processes the following foundational data sources - VPC flow logs, Amazon Web Services CloudTrail management event logs, CloudTrail S3 data event logs, EKS audit logs, DNS logs, Amazon EBS volume data, and runtime activity belonging to container workloads, such as Amazon EKS, Amazon ECS (including Amazon Web Services Fargate), and Amazon EC2 instances. It uses threat intelligence feeds, such as lists of malicious IPs and domains, and machine learning to identify unexpected, potentially unauthorized, and malicious activity within your Amazon Web Services environment. This can include issues like escalations of privileges, uses of exposed credentials, communication with malicious IPs or domains, or the presence of malware on your Amazon EC2 instances and container workloads. For example, GuardDuty can detect compromised EC2 instances and container workloads serving malware, or mining bitcoin. GuardDuty also monitors Amazon Web Services account access behavior for signs of compromise, such as unauthorized infrastructure deployments like EC2 instances deployed in a Region that has never been used, or unusual API calls like a password policy change to reduce password strength. GuardDuty informs you about the status of your Amazon Web Services environment by producing security findings that you can view in the GuardDuty console or through Amazon EventBridge. For more information, see the Amazon GuardDuty User Guide.
Package rolesanywhere provides the API client, operations, and parameter types for IAM Roles Anywhere. Identity and Access Management Roles Anywhere provides a secure way for your workloads such as servers, containers, and applications that run outside of Amazon Web Services to obtain temporary Amazon Web Services credentials. Your workloads can use the same IAM policies and roles you have for native Amazon Web Services applications to access Amazon Web Services resources. Using IAM Roles Anywhere eliminates the need to manage long-term credentials for workloads running outside of Amazon Web Services. To use IAM Roles Anywhere, your workloads must use X.509 certificates issued by their certificate authority (CA). You register the CA with IAM Roles Anywhere as a trust anchor to establish trust between your public key infrastructure (PKI) and IAM Roles Anywhere. If you don't manage your own PKI system, you can use Private Certificate Authority to create a CA and then use that to establish trust with IAM Roles Anywhere. This guide describes the IAM Roles Anywhere operations that you can call programmatically. For more information about IAM Roles Anywhere, see the IAM Roles Anywhere User Guide.
Package kyber provides a toolbox of advanced cryptographic primitives for applications that need more than straightforward signing and encryption. This top level package defines the interfaces to cryptographic primitives designed to be independent of specific cryptographic algorithms, to facilitate upgrading applications to new cryptographic algorithms or switching to alternative algorithms for experimentation purposes.

This toolkit's public-key crypto API includes a kyber.Group interface supporting a broad class of group-based public-key primitives including DSA-style integer residue groups and elliptic curve groups. Users of this API can write higher-level crypto algorithms such as zero-knowledge proofs without knowing or caring exactly what kind of group, let alone which precise security parameters or elliptic curves, are being used. The kyber.Group interface supports the standard algebraic operations on group elements and scalars that nontrivial public-key algorithms tend to rely on. The interface uses additive group terminology typical for elliptic curves, such that point addition is homomorphically equivalent to adding their (potentially secret) scalar multipliers. But the API and its operations apply equally well to DSA-style integer groups.

As a trivial example, generating a public/private keypair takes two statements (a sketch appears at the end of this package description). The first statement picks a private key (Scalar) from the suite's source of cryptographic random or pseudo-random bits, while the second performs elliptic curve scalar multiplication of the curve's standard base point (indicated by the 'nil' argument to Mul) by the scalar private key 'a'. Similarly, computing a Diffie-Hellman shared secret using Alice's private key 'a' and Bob's public key 'B' is a single further call to Mul. Note that we use 'Mul' rather than 'Exp' here because the library uses the additive-group terminology common for elliptic curve crypto, rather than the multiplicative-group terminology of traditional integer groups - but the two are semantically equivalent and the interface itself works for both elliptic curve and integer groups.

Various sub-packages provide several specific implementations of these cryptographic interfaces. In particular, the 'group/mod' sub-package provides implementations of modular integer groups underlying conventional DSA-style algorithms. The 'group/nist' package provides NIST-standardized elliptic curves built on the Go crypto library. The 'group/edwards25519' sub-package provides the kyber.Group interface using the popular Ed25519 curve.

Other sub-packages build more interesting high-level cryptographic tools atop these primitive interfaces, including:

- share: Polynomial commitment and verifiable Shamir secret splitting for implementing verifiable 't-of-n' threshold cryptographic schemes. This can be used to encrypt a message so that any 2 out of 3 receivers must work together to decrypt it, for example.

- proof: An implementation of the general Camenisch/Stadler framework for discrete logarithm knowledge proofs. This system supports both interactive and non-interactive proofs of a wide variety of statements such as, "I know the secret x associated with public key X or I know the secret y associated with public key Y", without revealing anything about either secret or even which branch of the "or" clause is true.

- sign: The sign directory contains different signature schemes.
- sign/anon provides anonymous and pseudonymous public-key encryption and signing, where the sender of a signed message or the receiver of an encrypted message is defined as an explicit anonymity set containing several public keys rather than just one. For example, a member of an organization's board of trustees might prove to be a member of the board without revealing which member she is. - sign/cosi provides a collective signature algorithm, where a group of signers create a unique, compact and efficiently verifiable signature using the Schnorr signature as a basis. - sign/eddsa provides a kyber-native implementation of the EdDSA signature scheme. - sign/schnorr provides a basic vanilla Schnorr signature scheme implementation. - shuffle: Verifiable cryptographic shuffles of ElGamal ciphertexts, which can be used to implement (for example) voting or auction schemes that keep the sources of individual votes or bids private without anyone having to trust more than one of the shuffler(s) to shuffle votes/bids honestly. As should be obvious, this library is intended to be used by developers who are at least moderately knowledgeable about cryptography. If you want a crypto library that makes it easy to implement "basic crypto" functionality correctly - i.e., plain public-key encryption and signing - then [NaCl secretbox](https://godoc.org/golang.org/x/crypto/nacl/secretbox) may be a better choice. This toolkit's purpose is to make it possible - and preferably easy - to do slightly more interesting things that most current crypto libraries don't support effectively. The one existing crypto library that this toolkit is probably most comparable to is the Charm rapid prototyping library for Python (https://charm-crypto.com/category/charm). This library incorporates and/or builds on existing code from a variety of sources, as documented in the relevant sub-packages. This library is offered as-is, and without a guarantee. It will need an independent security review before it should be considered ready for use in security-critical applications. If you integrate Kyber into your application it is YOUR RESPONSIBILITY to arrange for that audit. If you notice a possible security problem, please report it to dedis-security@epfl.ch.
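As a concrete illustration of the keypair-generation and Diffie-Hellman patterns described above, here is a minimal sketch using the edwards25519 suite; the import path and suite constructor are assumptions based on kyber v3 and may differ in your version.

```go
package main

import (
	"fmt"

	"go.dedis.ch/kyber/v3/group/edwards25519" // assumed kyber v3 import path
)

func main() {
	suite := edwards25519.NewBlakeSHA256Ed25519()

	// Alice: pick a private scalar and derive the public point (keypair).
	a := suite.Scalar().Pick(suite.RandomStream()) // private key
	A := suite.Point().Mul(a, nil)                 // public key = a * standard base point

	// Bob does the same.
	b := suite.Scalar().Pick(suite.RandomStream())
	B := suite.Point().Mul(b, nil)

	// Diffie-Hellman: both sides compute the same shared point.
	sharedAlice := suite.Point().Mul(a, B)
	sharedBob := suite.Point().Mul(b, A)
	fmt.Println(sharedAlice.Equal(sharedBob)) // true
}
```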
Package inspector2 provides the API client, operations, and parameter types for Inspector2. Amazon Inspector is a vulnerability discovery service that automates continuous scanning for security vulnerabilities within your Amazon EC2, Amazon ECR, and Amazon Web Services Lambda environments.
Package iot provides the API client, operations, and parameter types for AWS IoT. IoT provides secure, bi-directional communication between Internet-connected devices (such as sensors, actuators, embedded devices, or smart appliances) and the Amazon Web Services cloud. You can discover your custom IoT-Data endpoint to communicate with, configure rules for data processing and integration with other services, organize resources associated with each device (Registry), configure logging, and create and manage policies and credentials to authenticate devices. The service endpoints that expose this API are listed in Amazon Web Services IoT Core Endpoints and Quotas. You must use the endpoint for the region that has the resources you want to access. The service name used by Amazon Web Services Signature Version 4 to sign the request is: execute-api. For more information about how IoT works, see the Developer Guide. For information about how to use the credentials provider for IoT, see Authorizing Direct Calls to Amazon Web Services Services.
Package cognitoidentity provides the API client, operations, and parameter types for Amazon Cognito Identity. Amazon Cognito Federated Identities is a web service that delivers scoped temporary credentials to mobile devices and other untrusted environments. It uniquely identifies a device and supplies the user with a consistent identity over the lifetime of an application. Using Amazon Cognito Federated Identities, you can enable authentication with one or more third-party identity providers (Facebook, Google, or Login with Amazon) or an Amazon Cognito user pool, and you can also choose to support unauthenticated access from your app. Cognito delivers a unique identifier for each user and acts as an OpenID token provider trusted by AWS Security Token Service (STS) to access temporary, limited-privilege AWS credentials. For a description of the authentication flow from the Amazon Cognito Developer Guide see Authentication Flow. For more information see Amazon Cognito Federated Identities.
Package cloudwatchevents provides the API client, operations, and parameter types for Amazon CloudWatch Events. Amazon EventBridge helps you to respond to state changes in your Amazon Web Services resources. When your resources change state, they automatically send events to an event stream. You can create rules that match selected events in the stream and route them to targets to take action. You can also use rules to take action on a predetermined schedule. For example, you can configure rules to: Automatically invoke a Lambda function to update DNS entries when an event notifies you that an Amazon EC2 instance enters the running state. Direct specific API records from CloudTrail to an Amazon Kinesis data stream for detailed analysis of potential security or availability risks. Periodically invoke a built-in target to create a snapshot of an Amazon EBS volume. For more information about the features of Amazon EventBridge, see the Amazon EventBridge User Guide.
Package gosnowflake is a pure Go Snowflake driver for the database/sql package. Clients can use the database/sql package directly. For example: Use the Open() function to create a database handle with connection parameters: The Go Snowflake Driver supports the following connection syntaxes (or data source name (DSN) formats): where all parameters must be escaped or use Config and DSN to construct a DSN string. For information about account identifiers, see the Snowflake documentation (https://docs.snowflake.com/en/user-guide/admin-account-identifier.html). The following example opens a database handle with the Snowflake account named "my_account" under the organization named "my_organization", where the username is "jsmith", password is "mypassword", database is "mydb", schema is "testschema", and warehouse is "mywh": The connection string (DSN) can contain both connection parameters (described below) and session parameters (https://docs.snowflake.com/en/sql-reference/parameters.html). The following connection parameters are supported: account <string>: Specifies your Snowflake account, where "<string>" is the account identifier assigned to your account by Snowflake. For information about account identifiers, see the Snowflake documentation (https://docs.snowflake.com/en/user-guide/admin-account-identifier.html). If you are using a global URL, then append the connection group and ".global" (e.g. "<account_identifier>-<connection_group>.global"). The account identifier and the connection group are separated by a dash ("-"), as shown above. This parameter is optional if your account identifier is specified after the "@" character in the connection string. region <string>: DEPRECATED. You may specify a region, such as "eu-central-1", with this parameter. However, since this parameter is deprecated, it is best to specify the region as part of the account parameter. For details, see the description of the account parameter. database: Specifies the database to use by default in the client session (can be changed after login). schema: Specifies the database schema to use by default in the client session (can be changed after login). warehouse: Specifies the virtual warehouse to use by default for queries, loading, etc. in the client session (can be changed after login). role: Specifies the role to use by default for accessing Snowflake objects in the client session (can be changed after login). passcode: Specifies the passcode provided by Duo when using multi-factor authentication (MFA) for login. passcodeInPassword: false by default. Set to true if the MFA passcode is embedded in the login password. Appends the MFA passcode to the end of the password. loginTimeout: Specifies the timeout, in seconds, for login. The default is 60 seconds. The login request gives up after the timeout length if the HTTP response is success. requestTimeout: Specifies the timeout, in seconds, for a query to complete. 0 (zero) specifies that the driver should wait indefinitely. The default is 0 seconds. The query request gives up after the timeout length if the HTTP response is success. authenticator: Specifies the authenticator to use for authenticating user credentials: To use the internal Snowflake authenticator, specify snowflake (Default). If you want to cache your MFA logins, use AuthTypeUsernamePasswordMFA authenticator. To authenticate through Okta, specify https://<okta_account_name>.okta.com (URL prefix for Okta). To authenticate using your IDP via a browser, specify externalbrowser. 
To authenticate via OAuth, specify oauth and provide an OAuth Access Token (see the token parameter below). application: Identifies your application to Snowflake Support. disableOCSPChecks: false by default. Set to true to bypass the Online Certificate Status Protocol (OCSP) certificate revocation check. IMPORTANT: Change the default value for testing or emergency situations only. insecureMode: deprecated. Use disableOCSPChecks instead. token: a token that can be used to authenticate. Should be used in conjunction with the "oauth" authenticator. client_session_keep_alive: Set to true to have a heartbeat in the background every hour to keep the connection alive, so that the connection session never expires. Care should be taken when using this option, as it keeps the connection open for as long as the process is alive. ocspFailOpen: true by default. Set to false to make the OCSP check fail closed. validateDefaultParameters: true by default. Set to false to disable checks on the existence of and privileges for the Database, Schema, Warehouse and Role when setting up the connection. tracing: Specifies the logging level to be used. Set to error by default. Valid values are trace, debug, info, print, warning, error, fatal, panic. disableQueryContextCache: disables parsing of the query context returned from the server, as well as resending it to the server. The default value is false. clientConfigFile: specifies the location of the client configuration json file. In this file you can configure the Easy Logging feature. disableSamlURLCheck: disables the SAML URL check. The default value is false. All other parameters are interpreted as session parameters (https://docs.snowflake.com/en/sql-reference/parameters.html). For example, the TIMESTAMP_OUTPUT_FORMAT session parameter can be set by adding: A complete connection string looks similar to the following: Session-level parameters can also be set by using the SQL command "ALTER SESSION" (https://docs.snowflake.com/en/sql-reference/sql/alter-session.html). Alternatively, use the OpenWithConfig() function to create a database handle with the specified Config. # Connection Config You can also connect to your warehouse using the connection config. The database/sql library recommends this approach when you want to take advantage of driver-specific connection features that aren't available in a connection string: each driver supports its own set of connection properties, often providing ways to customize the connection request specific to the DBMS. For example: If you are using this method, you don't need to pass a driver name to specify the driver type to which you want to connect. Since the driver name is not needed, you can optionally bypass driver registration on startup. To do this, set `GOSNOWFLAKE_SKIP_REGISTERATION` in your environment. This is useful if you wish to register multiple versions of the driver. Note: GOSNOWFLAKE_SKIP_REGISTERATION should not be used if sql.Open() is used as the method to connect to the server, as sql.Open will require registration so it can map the driver name to the driver type, which in this case is "snowflake" and SnowflakeDriver{}. You can load the connection configuration from a .toml file. With the two environment variables SNOWFLAKE_HOME (the connections.toml file directory) and SNOWFLAKE_DEFAULT_CONNECTION_NAME (the DSN name), the driver will search for the config file and load the connection.
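A minimal sketch of the two basic connection approaches described above - a DSN string passed to sql.Open and a Config struct converted with DSN() - might look like the following; the account, user, credential, and warehouse values are placeholders, and the exact Config fields should be checked against your driver version.

```go
package main

import (
	"database/sql"
	"log"

	sf "github.com/snowflakedb/gosnowflake"
)

func main() {
	// Option 1: plain DSN string (user:password@account/database/schema?warehouse=...).
	db, err := sql.Open("snowflake", "jsmith:mypassword@my_organization-my_account/mydb/testschema?warehouse=mywh")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	// Option 2: build the DSN from a Config struct instead of hand-escaping parameters.
	cfg := &sf.Config{
		Account:   "my_organization-my_account",
		User:      "jsmith",
		Password:  "mypassword",
		Database:  "mydb",
		Schema:    "testschema",
		Warehouse: "mywh",
	}
	dsn, err := sf.DSN(cfg)
	if err != nil {
		log.Fatal(err)
	}
	db2, err := sql.Open("snowflake", dsn)
	if err != nil {
		log.Fatal(err)
	}
	defer db2.Close()
}
```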
You can find an example of this toml-based connection at ./cmd/tomlfileconnection or in the Snowflake documentation: https://docs.snowflake.com/en/developer-guide/snowflake-cli-v2/connecting/specify-credentials The Go Snowflake Driver honors the environment variables HTTP_PROXY, HTTPS_PROXY and NO_PROXY for the forward proxy setting. NO_PROXY specifies which hostname endings should be allowed to bypass the proxy server, e.g. no_proxy=.amazonaws.com means that Amazon S3 access does not need to go through the proxy. NO_PROXY does not support wildcards. Each value specified should be one of the following: The end of a hostname (or a complete hostname), for example: ".amazonaws.com" or "xy12345.snowflakecomputing.com". An IP address, for example "192.196.1.15". If more than one value is specified, values should be separated by commas, for example: By default, the driver's built-in logger exposes logrus's FieldLogger and defaults to the INFO level. Users can use SetLogger in driver.go to set a customized logger for the gosnowflake package. In order to enable debug logging for the driver, users can call SetLogLevel("debug") on the SFLogger interface, as shown in the demo code at cmd/logger.go. To redirect the logs, the SFLogger.SetOutput method can do the work. If you want to define S3 client logging, override the S3LoggingMode variable using the configuration described at https://pkg.go.dev/github.com/aws/aws-sdk-go-v2/aws#ClientLogMode Example: A custom query tag can be set in the context. Each query run with this context will include the custom query tag as metadata that will appear in the Query Tag column in the Query History log. For example: A specific query request ID can be set in the context and will be passed through in place of the default randomized request ID. For example: If you need the query ID for your query, you have to use a raw connection. For queries: For execs: The result of your query can be retrieved by setting the query ID in the WithFetchResultByID context. From 0.5.0, signal handling responsibility has moved to the applications. If you want to cancel a query/command by Ctrl+C, add an os.Interrupt trap in the context used to execute methods that can take the context parameter (e.g. QueryContext, ExecContext). See cmd/selectmany.go for the full example. The Go Snowflake Driver now supports the Arrow data format for data transfers between Snowflake and the Golang client. The Arrow data format avoids extra conversions between binary and textual representations of the data. The Arrow data format can improve performance and reduce memory consumption in clients. Snowflake continues to support the JSON data format. The data format is controlled by the session-level parameter GO_QUERY_RESULT_FORMAT. To use JSON format, execute: The valid values for the parameter are: If the user attempts to set the parameter to an invalid value, an error is returned. The parameter name and the parameter value are case-insensitive. This parameter can be set only at the session level. Usage notes: The Arrow data format reduces rounding errors in floating point numbers. You might see slightly different values for floating point numbers when using Arrow format than when using JSON format. In order to take advantage of the increased precision, you must pass in the context.Context object provided by the WithHigherPrecision function when querying. Traditionally, the rows.Scan() method returned a string when a variable of type interface{} was passed in.
Turning on the flag ENABLE_HIGHER_PRECISION via WithHigherPrecision will return the natural, expected data type as well. For some numeric data types, the driver can retrieve larger values when using the Arrow format than when using the JSON format. For example, using Arrow format allows the full range of SQL NUMERIC(38,0) values to be retrieved, while using JSON format allows only values in the range supported by the Golang int64 data type. Users should ensure that Golang variables are declared using the appropriate data type for the full range of values contained in the column. For an example, see below. When using the Arrow format, the driver supports more Golang data types and more ways to convert SQL values to those Golang data types. The table below lists the supported Snowflake SQL data types and the corresponding Golang data types. The columns are: The SQL data type. The default Golang data type that is returned when you use snowflakeRows.Scan() to read data from Arrow data format via an interface{}. The possible Golang data types that can be returned when you use snowflakeRows.Scan() to read data from Arrow data format directly. The default Golang data type that is returned when you use snowflakeRows.Scan() to read data from JSON data format via an interface{}. (All returned values are strings.) The standard Golang data type that is returned when you use snowflakeRows.Scan() to read data from JSON data format directly.

Go Data Types for Scan()

  SQL Data Type                     | ARROW: Go data types for Scan()                              | JSON: default Go data type | JSON: supported Go data
                                    | (default for interface{} and supported for direct Scan())    | for Scan() interface{}     | types for Scan()
  ----------------------------------|--------------------------------------------------------------|----------------------------|--------------------------------
  BOOLEAN                           | bool                                                         | string                     | bool
  VARCHAR                           | string                                                       | string                     |
  DOUBLE                            | float32, float64 [1], [2]                                    | string                     | float32, float64
  INTEGER that fits in int64        | int, int8, int16, int32, int64 [1], [2]                      | string                     | int, int8, int16, int32, int64
  INTEGER that doesn't fit in int64 | int, int8, int16, int32, int64, *big.Int [1], [2], [3], [4]  | string                     | error
  NUMBER(P, S) where S > 0          | float32, float64, *big.Float [1], [2], [3], [5]              | string                     | float32, float64
  DATE                              | time.Time                                                    | string                     | time.Time
  TIME                              | time.Time                                                    | string                     | time.Time
  TIMESTAMP_LTZ                     | time.Time                                                    | string                     | time.Time
  TIMESTAMP_NTZ                     | time.Time                                                    | string                     | time.Time
  TIMESTAMP_TZ                      | time.Time                                                    | string                     | time.Time
  BINARY                            | []byte                                                       | string                     | []byte
  ARRAY [6]                         | string / array                                               | string / array             |
  OBJECT [6]                        | string / struct                                              | string / struct            |
  VARIANT                           | string                                                       | string                     |
  MAP                               | map                                                          | map                        |

[1] Converting from a higher precision data type to a lower precision data type via the snowflakeRows.Scan() method can lose low bits (lose precision), lose high bits (completely change the value), or result in error.
[2] Attempting to convert from a higher precision data type to a lower precision data type via interface{} causes an error.
[3] Higher precision data types like *big.Int and *big.Float can be accessed by querying with a context returned by WithHigherPrecision().
[4] You cannot directly Scan() into the alternative data types via snowflakeRows.Scan(), but can convert to those data types by using .Int64()/.String()/.Uint64() methods. For an example, see below.
[5] You cannot directly Scan() into the alternative data types via snowflakeRows.Scan(), but can convert to those data types by using .Float32()/.String()/.Float64() methods. For an example, see below.
[6] Arrays and objects can be either semistructured or structured, see more info in the section below.

Note: SQL NULL values are converted to Golang nil values, and vice-versa. Snowflake supports two flavours of "structured data" - semistructured and structured. Semistructured types are variants, objects and arrays without schema. When data is fetched, it's represented as strings and the client is responsible for its interpretation. Example table definition: The data does not have any corresponding schema, so values in the table may differ slightly. Semistructured variants, objects and arrays are always represented as strings for scanning: When inserting, a marker indicating the correct type must be used, for example: Structured types differ from semistructured types by having a specific schema. In all rows of the table, values must conform to this schema. Example table definition: To retrieve structured objects, follow these steps: 1. Create a struct implementing the sql.Scanner interface, example: a) b) The automatic scan goes through all fields in the struct and reads the object fields. Struct fields have to be public. Embedded structs have to be pointers. The matching name is built from the struct field name with the first letter lowercased. Additionally, an `sf` tag can be added: the first value is always the name of the field in the SQL object; additionally, an `ignore` parameter can be passed to omit this field. 2. Use the WithStructuredTypesEnabled context while querying data. 3. Use it in a regular scan: See StructuredObject for all available operations including null support, embedding nested structs, etc.
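A rough sketch of these steps follows; the StructuredObject accessor names (GetString, GetInt64) and the WithStructuredTypesEnabled signature are assumptions and should be checked against the driver version you use.

```go
// Assumes db is an open *sql.DB and sf is an alias for
// "github.com/snowflakedb/gosnowflake". Table, column, and field names are placeholders.

// simpleObject maps a structured OBJECT(s VARCHAR, i NUMBER) column.
type simpleObject struct {
	s string
	i int64
}

// Scan implements sql.Scanner; for structured columns the driver passes a StructuredObject.
func (o *simpleObject) Scan(val any) error {
	st, ok := val.(sf.StructuredObject)
	if !ok {
		return fmt.Errorf("expected StructuredObject, got %T", val)
	}
	var err error
	if o.s, err = st.GetString("s"); err != nil { // accessor names assumed
		return err
	}
	if o.i, err = st.GetInt64("i"); err != nil {
		return err
	}
	return nil
}

func queryStructured(db *sql.DB) error {
	// Structured types have to be enabled on the query context.
	ctx := sf.WithStructuredTypesEnabled(context.Background())
	rows, err := db.QueryContext(ctx, "SELECT obj_col FROM example_table")
	if err != nil {
		return err
	}
	defer rows.Close()
	for rows.Next() {
		var o simpleObject
		if err := rows.Scan(&o); err != nil {
			return err
		}
		log.Printf("%s %d", o.s, o.i)
	}
	return rows.Err()
}
```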
Retrieving an array of simple types works exactly the same as for normal values - using the Scan function. You can use the WithMapValuesNullable and WithArrayValuesNullable contexts to handle null values in, respectively, maps and arrays of simple types in the database. In that case, sql null types will be used: If you want to scan an array of structs, you have to use the helper function ScanArrayOfScanners: Retrieving structured maps is very similar to retrieving arrays: To bind structured objects use: 1. Create a type which implements the StructuredObjectWriter interface, example: a) b) 2. Use an instance as a regular bind. 3. If you need to bind a nil value, use the special syntax: Binding structured arrays is like binding any other parameter. The only difference is that if you want to insert an empty array (not nil but empty), you have to use: The following example shows how to retrieve very large values using the math/big package. This example retrieves a large INTEGER value to an interface and then extracts a big.Int value from that interface. If the value fits into an int64, then the code also copies the value to a variable of type int64. Note that a context that enables higher precision must be passed in with the query. If the variable named "rows" is known to contain a big.Int, then you can use the following instead of scanning into an interface and then converting to a big.Int: If the variable named "rows" contains a big.Int, then each of the following fails: Similar code and rules also apply to big.Float values. If you are not sure what data type will be returned, you can use code similar to the following to check the data type of the returned value: You can retrieve data in a columnar format similar to the format a server returns, without transposing it to rows. When working with the Arrow columnar format in the Go driver, ArrowBatch structs are used. These are structs mostly corresponding to data chunks received from the backend. They allow for access to specific arrow.Record structs. An ArrowBatch can exist in a state where the underlying data has not yet been loaded. The data is downloaded and translated only on demand. Translation options are retrieved from a context.Context interface, which is either passed from the query context or set by the user using the WithContext(ctx) method. In order to access them you must use the `WithArrowBatches` context, similar to the following: This returns []*ArrowBatch. ArrowBatch functions: GetRowCount(): Returns the number of rows in the ArrowBatch. Note that this returns 0 if the data has not yet been loaded, irrespective of its actual size. WithContext(ctx context.Context): Sets the context of the ArrowBatch to the one provided. Note that the context will not retroactively apply to data that has already been downloaded. For example: will produce the same result in records1 and records2, irrespective of the newly provided ctx. Contexts worth noting are: WithArrowBatchesTimestampOption, WithHigherPrecision and WithArrowBatchesUtf8Validation, described in more detail later. Fetch(): Returns the underlying records as *[]arrow.Record. When this function is called, the ArrowBatch checks whether the underlying data has already been loaded, and downloads it if not. Limitations: How to handle timestamps in Arrow batches: Snowflake returns timestamps natively (from backend to driver) in multiple formats. The Arrow timestamp is an 8-byte data type, which is insufficient to handle the larger date and time ranges used by Snowflake.
Also, Snowflake supports 0-9 (nanosecond) digit precision for seconds, while Arrow supports only 3 (millisecond), 6 (microsecond), and 9 (nanosecond) precision. Consequently, Snowflake uses a custom timestamp format in Arrow, which differs depending on the timestamp type and precision. If you want to use timestamps in Arrow batches, you have two options: How to handle invalid UTF-8 characters in Arrow batches: Snowflake previously allowed users to upload data with invalid UTF-8 characters. Consequently, Arrow records containing string columns in Snowflake could include these invalid UTF-8 characters. However, according to the Arrow specifications (https://arrow.apache.org/docs/cpp/api/datatype.html and https://github.com/apache/arrow/blob/a03d957b5b8d0425f9d5b6c98b6ee1efa56a1248/go/arrow/datatype.go#L73-L74), Arrow string columns should only contain UTF-8 characters. To address this issue and prevent potential downstream disruptions, the WithArrowBatchesUtf8Validation context is introduced. When enabled, this feature iterates through all values in string columns, identifying and replacing any invalid characters with `�`. This ensures that Arrow records conform to the UTF-8 standards, preventing validation failures in downstream services like the Rust Arrow library that impose strict validation checks. How to handle higher precision in Arrow batches: To preserve BigDecimal values within Arrow batches, use WithHigherPrecision. This offers two main benefits: it helps avoid precision loss and defers the conversion to upstream services. Alternatively, without this setting, all non-zero scale numbers will be converted to float64, potentially resulting in loss of precision. Zero-scale numbers (DECIMAL256, DECIMAL128) will be converted to int64, which could lead to overflow. When using NUMBERs with non-zero scale, the value is returned as an integer type and a scale is provided in the record metadata. For example, a value of 123.45 that comes from NUMBER(9, 4) will be represented as 1234500 with a scale equal to 4. It is the client's responsibility to interpret it correctly. Also, see the limitations section above. Binding allows a SQL statement to use a value that is stored in a Golang variable. Without binding, a SQL statement specifies values by specifying literals inside the statement. For example, the following statement uses the literal value "42" in an UPDATE statement: With binding, you can execute a SQL statement that uses a value that is inside a variable. For example: The "?" inside the "VALUES" clause specifies that the SQL statement uses the value from a variable. Binding data that involves time zones can require special handling. For details, see the section titled "Timestamps with Time Zones". Version 1.6.23 (and later) of the driver takes advantage of sql.Null types, which enable the proper handling of null parameters inside function calls, i.e.: Timestamp nullability had to be achieved by wrapping the sql.NullTime type, as Snowflake provides several date and time types which are all mapped to a single Go time.Time type: Version 1.3.9 (and later) of the Go Snowflake Driver supports the ability to bind an array variable to a parameter in a SQL INSERT statement. You can use this technique to insert multiple rows in a single batch. As an example, the following code inserts rows into a table that contains integer, float, boolean, and string columns. The example binds arrays to the parameters in the INSERT statement, as sketched below.
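A minimal sketch of such an array-binding INSERT, assuming db is an open *sql.DB, sf aliases the gosnowflake package, and the table and column names are placeholders:

```go
// Assumes db is an open *sql.DB and sf is an alias for
// "github.com/snowflakedb/gosnowflake". Table/column names are placeholders.
func insertBatch(db *sql.DB) error {
	ids := []int{1, 2, 3}
	ratios := []float64{1.5, 2.5, 3.5}
	flags := []bool{true, false, true}
	names := []string{"alpha", "beta", "gamma"}

	// Each sf.Array() binds a whole Go slice to one placeholder; the driver
	// inserts one row per slice element in a single batch.
	_, err := db.Exec(
		"INSERT INTO example_table (id, ratio, flag, name) VALUES (?, ?, ?, ?)",
		sf.Array(ids), sf.Array(ratios), sf.Array(flags), sf.Array(names),
	)
	return err
}
```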
If the array contains SQL NULL values, use slice []interface{}, which allows Golang nil values. This feature is available in version 1.6.12 (and later) of the driver. For example, For slices []interface{} containing time.Time values, a binding parameter flag is required for the preceding array variable in the Array() function. This feature is available in version 1.6.13 (and later) of the driver. For example, Note: For alternative ways to load data into the Snowflake database (including bulk loading using the COPY command), see Loading Data into Snowflake (https://docs.snowflake.com/en/user-guide-data-load.html). When you use array binding to insert a large number of values, the driver can improve performance by streaming the data (without creating files on the local machine) to a temporary stage for ingestion. The driver automatically does this when the number of values exceeds a threshold (no changes are needed to user code). In order for the driver to send the data to a temporary stage, the user must have the following privilege on the schema: If the user does not have this privilege, the driver falls back to sending the data with the query to the Snowflake database. In addition, the current database and schema for the session must be set. If these are not set, the CREATE TEMPORARY STAGE command executed by the driver can fail with the following error: For alternative ways to load data into the Snowflake database (including bulk loading using the COPY command), see Loading Data into Snowflake (https://docs.snowflake.com/en/user-guide-data-load.html). Go's database/sql package supports the ability to bind a parameter in a SQL statement to a time.Time variable. However, when the client binds data to send to the server, the driver cannot determine the correct Snowflake date/timestamp data type to associate with the binding parameter. For example: To resolve this issue, a binding parameter flag is introduced that associates any subsequent time.Time type to the DATE, TIME, TIMESTAMP_LTZ, TIMESTAMP_NTZ or BINARY data type. The above example could be rewritten as follows: The driver fetches TIMESTAMP_TZ (timestamp with time zone) data using the offset-based Location types, which represent a collection of time offsets in use in a geographical area, such as CET (Central European Time) or UTC (Coordinated Universal Time). The offset-based Location data is generated and cached when a Go Snowflake Driver application starts, and if the given offset is not in the cache, it is generated dynamically. Currently, Snowflake does not support the name-based Location types (e.g. "America/Los_Angeles"). For more information about Location types, see the Go documentation for https://golang.org/pkg/time/#Location. Internally, this feature leverages the []byte data type. As a result, BINARY data cannot be bound without the binding parameter flag. In the following example, sf is an alias for the gosnowflake package: The driver directly downloads a result set from the cloud storage if the size is large. It is required to shift workloads from the Snowflake database to the clients for scale. The download takes place asynchronously in a goroutine named "Chunk Downloader", so that the driver can fetch the next result set while the application consumes the current result set. The application may change the number of result set chunk downloaders if required. Note that this does not help reduce the memory footprint by itself. Consider the Custom JSON Decoder described in the next section.
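Before moving on to the custom JSON decoder, here is a minimal sketch of the binding parameter flag described above; db, the table, and the column names are placeholders, and sf aliases the gosnowflake package:

```go
// Assumes db is an open *sql.DB and sf is an alias for
// "github.com/snowflakedb/gosnowflake".
func insertTimestamps(db *sql.DB) error {
	now := time.Now()
	// Each flag value tells the driver which Snowflake type the immediately
	// following time.Time argument should be bound as.
	_, err := db.Exec(
		"INSERT INTO ts_table (ntz, ltz, d, t) VALUES (?, ?, ?, ?)",
		sf.DataTypeTimestampNtz, now, // TIMESTAMP_NTZ
		sf.DataTypeTimestampLtz, now, // TIMESTAMP_LTZ
		sf.DataTypeDate, now,         // DATE
		sf.DataTypeTime, now,         // TIME
	)
	return err
}
```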
Custom JSON Decoder for Parsing Result Set (Experimental) The application may have the driver use a custom JSON decoder that incrementally parses the result set as follows. This option will reduce the memory footprint to half or even a quarter, but it can significantly degrade performance depending on the environment. The test cases running on a Travis Ubuntu box show a five times smaller memory footprint while being four times slower. Be cautious when using this option. The Go Snowflake Driver supports JWT (JSON Web Token) authentication. To enable this feature, construct the DSN with fields "authenticator=SNOWFLAKE_JWT&privateKey=<your_private_key>", or use a Config structure specifying: The <your_private_key> should be a base64 URL-encoded PKCS8 RSA private key string. One way to encode a byte slice to base64 URL format is through the base64.URLEncoding.EncodeToString() function. On the server side, you can alter the public key with the SQL command: The <your_public_key> should be a base64 standard-encoded PKI public key string. One way to encode a byte slice to base64 standard format is through the base64.StdEncoding.EncodeToString() function. To generate a valid key pair, you can execute the following commands in the shell: Note: As of February 2020, Golang's official library does not support passcode-encrypted PKCS8 private keys. For security purposes, Snowflake highly recommends that you store the passcode-encrypted private key on the disk and decrypt the key in your application using a library you trust. JWT tokens are recreated on each retry and they are valid (`exp` claim) for `jwtTimeout` seconds. Each retry timeout is configured by `jwtClientTimeout`. Retries are limited by the total time of `loginTimeout`. The driver allows authentication using an external browser. When a connection is created, the driver will open the browser window and ask the user to sign in. To enable this feature, construct the DSN with the field "authenticator=EXTERNALBROWSER" or use a Config structure with the Authenticator specified: The external browser authentication implements a timeout mechanism. This prevents the driver from hanging interminably when the browser window is closed or not responding. The timeout defaults to 120s and can be changed by setting the DSN field "externalBrowserTimeout=240" (time in seconds) or using a Config structure with ExternalBrowserTimeout specified: This feature is available in version 1.3.8 or later of the driver. By default, Snowflake returns an error for queries issued with multiple statements. This restriction helps protect against SQL Injection attacks (https://en.wikipedia.org/wiki/SQL_injection). The multi-statement feature allows users to skip this restriction and execute multiple SQL statements through a single Golang function call. However, this opens up the possibility for SQL injection, so it should be used carefully. The risk can be reduced by specifying the exact number of statements to be executed, which makes it more difficult to inject a statement by appending it. More details are below. The Go Snowflake Driver provides two functions that can execute multiple SQL statements in a single call: To compose a multi-statement query, simply create a string that contains all the queries, separated by semicolons, in the order in which the statements should be executed. To protect against SQL Injection attacks while using the multi-statement feature, pass a Context that specifies the number of statements in the string.
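A minimal sketch of this pattern, assuming db is an open *sql.DB, sf aliases the gosnowflake package, and the statements themselves are placeholders:

```go
// Assumes db is an open *sql.DB and sf is an alias for
// "github.com/snowflakedb/gosnowflake".
func runTwoStatements(db *sql.DB) error {
	multi := "DELETE FROM test_table; INSERT INTO test_table VALUES (1), (2);"

	// Declare that exactly two statements are expected; an appended,
	// injected statement would then cause the call to fail.
	ctx, err := sf.WithMultiStatement(context.Background(), 2)
	if err != nil {
		return err
	}
	res, err := db.ExecContext(ctx, multi)
	if err != nil {
		return err
	}
	n, err := res.RowsAffected() // sum of rows changed by all statements
	if err != nil {
		return err
	}
	log.Printf("total rows affected: %d", n)
	return nil
}
```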
For example: When multiple queries are executed by a single call to QueryContext(), multiple result sets are returned. After you process the first result set, get the next result set (for the next SQL statement) by calling NextResultSet(). The following pseudo-code shows how to process multiple result sets: The function db.ExecContext() returns a single result, which is the sum of the number of rows changed by each individual statement. For example, if your multi-statement query executed two UPDATE statements, each of which updated 10 rows, then the result returned would be 20. Individual row counts for individual statements are not available. The following code shows how to retrieve the result of a multi-statement query executed through db.ExecContext(): Note: Because a multi-statement ExecContext() returns a single value, you cannot detect offsetting errors. For example, suppose you expected the return value to be 20 because you expected each UPDATE statement to update 10 rows. If one UPDATE statement updated 15 rows and the other UPDATE statement updated only 5 rows, the total would still be 20. You would see no indication that the UPDATES had not functioned as expected. The ExecContext() function does not return an error if passed a query (e.g. a SELECT statement). However, it still returns only a single value, not a result set, so using it to execute queries (or a mix of queries and non-query statements) is impractical. The QueryContext() function does not return an error if passed non-query statements (e.g. DML). The function returns a result set for each statement, whether or not the statement is a query. For each non-query statement, the result set contains a single row that contains a single column; the value is the number of rows changed by the statement. If you want to execute a mix of query and non-query statements (e.g. a mix of SELECT and DML statements) in a multi-statement query, use QueryContext(). You can retrieve the result sets for the queries, and you can retrieve or ignore the row counts for the non-query statements. Note: PUT statements are not supported for multi-statement queries. If a SQL statement passed to ExecQuery() or QueryContext() fails to compile or execute, that statement is aborted, and subsequent statements are not executed. Any statements prior to the aborted statement are unaffected. For example, if the statements below are run as one multi-statement query, the multi-statement query fails on the third statement, and an exception is thrown. If you then query the contents of the table named "test", the values 1 and 2 would be present. When using the QueryContext() and ExecContext() functions, golang code can check for errors the usual way. For example: Preparing statements and using bind variables are also not supported for multi-statement queries. The Go Snowflake Driver supports asynchronous execution of SQL statements. Asynchronous execution allows you to start executing a statement and then retrieve the result later without being blocked while waiting. While waiting for the result of a SQL statement, you can perform other tasks, including executing other SQL statements. Most of the steps to execute an asynchronous query are the same as the steps to execute a synchronous query. However, there is an additional step, which is that you must call the WithAsyncMode() function to update your Context object to specify that asynchronous mode is enabled. In the code below, the call to "WithAsyncMode()" is specific to asynchronous mode. 
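A minimal sketch of this pattern, assuming db is an open *sql.DB, sf aliases the gosnowflake package, and the table name is a placeholder:

```go
// Assumes db is an open *sql.DB and sf is an alias for
// "github.com/snowflakedb/gosnowflake".
func asyncCount(db *sql.DB) error {
	// WithAsyncMode is the only asynchronous-specific call.
	ctx := sf.WithAsyncMode(context.Background())

	rows, err := db.QueryContext(ctx, "SELECT COUNT(*) FROM example_table")
	if err != nil {
		return err
	}
	defer rows.Close()

	// ... other work can be done here while the query runs in Snowflake ...

	var count int64
	for rows.Next() { // Next() blocks until the result set is available
		if err := rows.Scan(&count); err != nil {
			return err
		}
	}
	log.Printf("count: %d", count)
	return rows.Err()
}
```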
The rest of the code is compatible with both asynchronous mode and synchronous mode. The function db.QueryContext() returns an object of type snowflakeRows regardless of whether the query is synchronous or asynchronous. However: The call to the Next() function of snowflakeRows is always synchronous (i.e. blocking). If the query has not yet completed and the snowflakeRows object (named "rows" in this example) has not been filled in yet, then rows.Next() waits until the result set has been filled in. More generally, calls to any Golang SQL API function implemented in snowflakeRows or snowflakeResult are blocking calls, and wait if results are not yet available. (Examples of other synchronous calls include: snowflakeRows.Err(), snowflakeRows.Columns(), snowflakeRows.columnTypes(), snowflakeRows.Scan(), and snowflakeResult.RowsAffected().) Because the example code above executes only one query and no other activity, there is no significant difference between asynchronous and synchronous behavior. The differences become significant if, for example, you want to perform some other activity after the query starts and before it completes. The example code below starts a query, which runs in the background, and then retrieves the results later. This example uses small SELECT statements that do not retrieve enough data to require asynchronous handling. However, the technique works for larger data sets, and for situations where the programmer might want to do other work after starting the queries and before retrieving the results. For a more elaborate example, please see cmd/async/async.go. The Go Snowflake Driver supports the PUT and GET commands. The PUT command copies a file from a local computer (the computer where the Golang client is running) to a stage on the cloud platform. The GET command copies data files from a stage on the cloud platform to a local computer. See the following for information on the syntax and supported parameters: Using PUT: The following example shows how to run a PUT command by passing a string to the db.Query() function: "<local_file>" should include the file path as well as the name. Snowflake recommends using an absolute path rather than a relative path. For example: Different client platforms (e.g. Linux, Windows) have different path name conventions. Ensure that you specify path names appropriately. This is particularly important on Windows, which uses the backslash character as both an escape character and as a separator in path names. To send information from a stream (rather than a file) use code similar to the code below. (The ReplaceAll() function is needed on Windows to handle backslashes in the path to the file.) Note: PUT statements are not supported for multi-statement queries. Using GET: The following example shows how to run a GET command by passing a string to the db.Query() function: "<local_file>" should include the file path as well as the name. Snowflake recommends using an absolute path rather than a relative path. For example: To download a file into an in-memory stream (rather than a file) use code similar to the code below. Note: GET statements are not supported for multi-statement queries. Specifying a temporary directory for encryption and compression: Putting and getting requires compression and/or encryption, which is done in the OS temporary directory. If you cannot use the default temporary directory for your OS or you want to specify it yourself, you can use the "tmpDirPath" DSN parameter. Remember to encode slashes.
Example: Using custom configuration for PUT/GET: If you want to override some default configuration options, you can use the `WithFileTransferOptions` context. There are multiple config parameters, including progress bars and compression.
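As a closing sketch of the PUT and GET usage described above; the stage, file, and directory names are placeholders, and db is an open *sql.DB:

```go
// Assumes db is an open *sql.DB; the stage and paths are placeholders.
func putThenGet(db *sql.DB) error {
	// Upload a local file to a named stage; Snowflake recommends absolute paths.
	rows, err := db.Query("PUT file:///tmp/data/example.csv @my_stage AUTO_COMPRESS=TRUE")
	if err != nil {
		return err
	}
	rows.Close()

	// Download the staged (compressed) file into a local directory.
	rows, err = db.Query("GET @my_stage/example.csv.gz file:///tmp/downloads/")
	if err != nil {
		return err
	}
	rows.Close()
	return nil
}
```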
Package glacier provides the API client, operations, and parameter types for Amazon Glacier. Glacier is an extremely low-cost storage service that provides secure, durable, and easy-to-use storage for data backup and archival. With Glacier, customers can store their data cost effectively for months, years, or decades. Glacier also enables customers to offload the administrative burdens of operating and scaling storage to AWS, so they don't have to worry about capacity planning, hardware provisioning, data replication, hardware failure and recovery, or time-consuming hardware migrations. Glacier is a great storage choice when low storage cost is paramount and your data is rarely retrieved. If your application requires fast or frequent access to your data, consider using Amazon S3. For more information, see Amazon Simple Storage Service (Amazon S3). You can store any kind of data in any format. There is no maximum limit on the total amount of data you can store in Glacier. If you are a first-time user of Glacier, we recommend that you begin by reading the following sections in the Amazon S3 Glacier Developer Guide: What is Amazon S3 Glacier Getting Started with Amazon S3 Glacier
Package inspector provides the API client, operations, and parameter types for Amazon Inspector. Amazon Inspector enables you to analyze the behavior of your AWS resources and to identify potential security issues. For more information, see Amazon Inspector User Guide.
Package ssoadmin provides the API client, operations, and parameter types for AWS Single Sign-On Admin. IAM Identity Center (successor to Single Sign-On) helps you securely create, or connect, your workforce identities and manage their access centrally across Amazon Web Services accounts and applications. IAM Identity Center is the recommended approach for workforce authentication and authorization in Amazon Web Services, for organizations of any size and type. IAM Identity Center uses the sso and identitystore API namespaces. This reference guide provides information on single sign-on operations which could be used for access management of Amazon Web Services accounts. For information about IAM Identity Center features, see the IAM Identity Center User Guide. Many operations in the IAM Identity Center APIs rely on identifiers for users and groups, known as principals. For more information about how to work with principals and principal IDs in IAM Identity Center, see the Identity Store API Reference. Amazon Web Services provides SDKs that consist of libraries and sample code for various programming languages and platforms (Java, Ruby, .Net, iOS, Android, and more). The SDKs provide a convenient way to create programmatic access to IAM Identity Center and other Amazon Web Services services. For more information about the Amazon Web Services SDKs, including how to download and install them, see Tools for Amazon Web Services.
Package ec2instanceconnect provides the API client, operations, and parameter types for AWS EC2 Instance Connect. This is the Amazon EC2 Instance Connect API Reference. It provides descriptions, syntax, and usage examples for each of the actions for Amazon EC2 Instance Connect. Amazon EC2 Instance Connect enables system administrators to publish one-time use SSH public keys to EC2, providing users a simple and secure way to connect to their instances. To view the Amazon EC2 Instance Connect content in the Amazon EC2 User Guide, see Connect to your Linux instance using EC2 Instance Connect. For Amazon EC2 APIs, see the Amazon EC2 API Reference.
Package ram provides the API client, operations, and parameter types for AWS Resource Access Manager. This is the Resource Access Manager API Reference. This documentation provides descriptions and syntax for each of the actions and data types in RAM. RAM is a service that helps you securely share your Amazon Web Services resources to other Amazon Web Services accounts. If you use Organizations to manage your accounts, then you can share your resources with your entire organization or to organizational units (OUs). For supported resource types, you can also share resources with individual Identity and Access Management (IAM) roles and users. To learn more about RAM, see the following resources: Resource Access Manager product page Resource Access Manager User Guide
Package transfer provides the API client, operations, and parameter types for AWS Transfer Family. Transfer Family is a fully managed service that enables the transfer of files over the File Transfer Protocol (FTP), File Transfer Protocol over SSL (FTPS), or Secure Shell (SSH) File Transfer Protocol (SFTP) directly into and out of Amazon Simple Storage Service (Amazon S3) or Amazon EFS. Additionally, you can use Applicability Statement 2 (AS2) to transfer files into and out of Amazon S3. Amazon Web Services helps you seamlessly migrate your file transfer workflows to Transfer Family by integrating with existing authentication systems, and providing DNS routing with Amazon Route 53 so nothing changes for your customers and partners, or their applications. With your data in Amazon S3, you can use it with Amazon Web Services services for processing, analytics, machine learning, and archiving. Getting started with Transfer Family is easy since there is no infrastructure to buy and set up.
Package sio implements the DARE format. It provides an API for securely encrypting and decrypting I/O streams using io.Reader and io.Writer.
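A rough sketch of how the package is typically used, assuming the minio/sio import path and a 32-byte key; check the package documentation for the exact Config fields:

```go
package main

import (
	"bytes"
	"crypto/rand"
	"io"
	"log"
	"os"

	"github.com/minio/sio" // assumed import path
)

func main() {
	// A random 256-bit key; in practice derive it from a KDF or key-management system.
	key := make([]byte, 32)
	if _, err := io.ReadFull(rand.Reader, key); err != nil {
		log.Fatal(err)
	}

	plaintext := bytes.NewReader([]byte("hello, DARE"))
	var sealed bytes.Buffer

	// Encrypt: read from plaintext, write DARE-encrypted data to sealed.
	if _, err := sio.Encrypt(&sealed, plaintext, sio.Config{Key: key}); err != nil {
		log.Fatal(err)
	}

	// Decrypt: read the sealed stream and write the recovered plaintext to stdout.
	if _, err := sio.Decrypt(os.Stdout, &sealed, sio.Config{Key: key}); err != nil {
		log.Fatal(err)
	}
}
```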
Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved. Licensed under the Apache License, Version 2.0 (the "License"). You may not use this file except in compliance with the License. A copy of the License is located at or in the "license" file accompanying this file. This file is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. Package firecracker provides a library to interact with the Firecracker API. Firecracker is an open-source virtualization technology that is purpose-built for creating and managing secure, multi-tenant containers and functions-based services. See https://firecracker-microvm.github.io/ for more details. This library requires Go 1.11 and can be used with Go modules. BUG(aws): There are some Firecracker features that are not yet supported by the SDK. These are tracked as GitHub issues with the firecracker-feature label: https://github.com/firecracker-microvm/firecracker-go-sdk/issues?q=is%3Aissue+is%3Aopen+label%3Afirecracker-feature This library is licensed under the Apache 2.0 License.
This is a GSSAPI provider for Go, which expects to be initialized with the name of a dynamically loadable module which can be dlopen'd to get at a C language binding GSSAPI library. The GSSAPI concepts are explained in RFC 2743, "Generic Security Service Application Program Interface Version 2, Update 1". The API calls for C, together with a number of values for constants, come from RFC 2744, "Generic Security Service API Version 2 : C-bindings". Note that the basic GSSAPI bindings for C use the Latin-1 character set. UTF-8 interfaces are specified in RFC 5178, "Generic Security Service Application Program Interface (GSS-API) Internationalization and Domain-Based Service Names and Name Type", in 2008. Looking in 2013, this API does not appear to be provided by either MIT or Heimdal. This API applies solely to hostnames though, which can also be supplied in ACE encoding, bypassing the issue. For now, we assume that hostnames and usercodes are all ASCII-ish and pass UTF-8 into the library. Patches for more comprehensive support welcome.
Package apprunner provides the API client, operations, and parameter types for AWS App Runner. App Runner is an application service that provides a fast, simple, and cost-effective way to go directly from an existing container image or source code to a running service in the Amazon Web Services Cloud in seconds. You don't need to learn new technologies, decide which compute service to use, or understand how to provision and configure Amazon Web Services resources. App Runner connects directly to your container registry or source code repository. It provides an automatic delivery pipeline with fully managed operations, high performance, scalability, and security. For more information about App Runner, see the App Runner Developer Guide. For release information, see the App Runner Release Notes. To install the Software Development Kits (SDKs), Integrated Development Environment (IDE) Toolkits, and command line tools that you can use to access the API, see Tools for Amazon Web Services. For a list of Region-specific endpoints that App Runner supports, see App Runner endpoints and quotas in the Amazon Web Services General Reference.
Package health provides the API client, operations, and parameter types for AWS Health APIs and Notifications. The Health API provides access to the Health information that appears in the Health Dashboard. You can use the API operations to get information about events that might affect your Amazon Web Services services and resources. You must have a Business, Enterprise On-Ramp, or Enterprise Support plan from Amazon Web Services Support to use the Health API. If you call the Health API from an Amazon Web Services account that doesn't have a Business, Enterprise On-Ramp, or Enterprise Support plan, you receive a SubscriptionRequiredException error. For API access, you need an access key ID and a secret access key. Use temporary credentials instead of long-term access keys when possible. Temporary credentials include an access key ID, a secret access key, and a security token that indicates when the credentials expire. For more information, see Best practices for managing Amazon Web Services access keys in the Amazon Web Services General Reference. You can use the Health endpoint health.us-east-1.amazonaws.com (HTTPS) to call the Health API operations. Health supports a multi-Region application architecture and has two regional endpoints in an active-passive configuration. You can use the high availability endpoint example to determine which Amazon Web Services Region is active, so that you can get the latest information from the API. For more information, see Accessing the Health API in the Health User Guide. For authentication of requests, Health uses the Signature Version 4 Signing Process. If your Amazon Web Services account is part of Organizations, you can use the Health organizational view feature. This feature provides a centralized view of Health events across all accounts in your organization. You can aggregate Health events in real time to identify accounts in your organization that are affected by an operational event or get notified of security vulnerabilities. Use the organizational view API operations to enable this feature and return event information. For more information, see Aggregating Health events in the Health User Guide. When you use the Health API operations to return Health events, see the following recommendations: Use the eventScopeCode parameter to specify whether to return Health events that are public or account-specific. Use pagination to view all events from the response. For example, if you call the DescribeEventsForOrganization operation to get all events in your organization, you might receive several page results. Specify the nextToken in the next request to return more results.
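A hedged sketch of the nextToken pagination recommendation above, using the aws-sdk-go-v2 client; the input/output field names (NextToken, Events) are assumptions and should be checked against the generated types:

```go
package main

import (
	"context"
	"log"

	"github.com/aws/aws-sdk-go-v2/config"
	"github.com/aws/aws-sdk-go-v2/service/health"
)

func main() {
	ctx := context.Background()
	cfg, err := config.LoadDefaultConfig(ctx)
	if err != nil {
		log.Fatal(err)
	}
	client := health.NewFromConfig(cfg)

	// Page through organizational events by following NextToken.
	var token *string
	for {
		out, err := client.DescribeEventsForOrganization(ctx, &health.DescribeEventsForOrganizationInput{
			NextToken: token, // field name assumed for the paginated input
		})
		if err != nil {
			log.Fatal(err)
		}
		// Inspect out.Events here (field name assumed from the API's "events" list).
		if out.NextToken == nil {
			break // no more pages
		}
		token = out.NextToken
	}
}
```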
Package gamelift provides the API client, operations, and parameter types for Amazon GameLift. Amazon GameLift provides solutions for hosting session-based multiplayer game servers in the cloud, including tools for deploying, operating, and scaling game servers. Built on Amazon Web Services global computing infrastructure, GameLift helps you deliver high-performance, high-reliability, low-cost game servers while dynamically scaling your resource usage to meet player demand. Get more information on these Amazon GameLift solutions in the Amazon GameLift Developer Guide. Amazon GameLift managed hosting -- Amazon GameLift offers a fully managed service to set up and maintain computing machines for hosting, manage game session and player session life cycle, and handle security, storage, and performance tracking. You can use automatic scaling tools to balance player demand and hosting costs, configure your game session management to minimize player latency, and add FlexMatch for matchmaking. Managed hosting with Realtime Servers -- With Amazon GameLift Realtime Servers, you can quickly configure and set up ready-to-go game servers for your game. Realtime Servers provides a game server framework with core Amazon GameLift infrastructure already built in. Then use the full range of Amazon GameLift managed hosting features, including FlexMatch, for your game. Amazon GameLift FleetIQ -- Use Amazon GameLift FleetIQ as a standalone service while hosting your games using EC2 instances and Auto Scaling groups. Amazon GameLift FleetIQ provides optimizations for game hosting, including boosting the viability of low-cost Spot Instances for game hosting. For a complete solution, pair the Amazon GameLift FleetIQ and FlexMatch standalone services. Amazon GameLift FlexMatch -- Add matchmaking to your game hosting solution. FlexMatch is a customizable matchmaking service for multiplayer games. Use FlexMatch as integrated with Amazon GameLift managed hosting or incorporate FlexMatch as a standalone service into your own hosting solution. This reference guide describes the low-level service API for Amazon GameLift. With each topic in this guide, you can find links to language-specific SDK guides and the Amazon Web Services CLI reference. Useful links: Amazon GameLift API operations listed by tasks; Amazon GameLift tools and resources.
Package mailyak provides a simple interface for generating MIME compliant emails, and optionally sending them over SMTP. Both plain-text and HTML email body content is supported, and their types implement io.Writer allowing easy composition directly from templating engines, etc. Attachments are fully supported including inline attachments, with anything that implements io.Reader suitable as a source (like files on disk, in-memory buffers, etc). The raw MIME content can be retrieved using MimeBuf(), typically used with an API service such as Amazon SES that does not require using an SMTP interface. MailYak supports both plain-text SMTP (which is automatically upgraded to a secure connection with STARTTLS if supported by the SMTP server) and explicit TLS connections.
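A minimal composition-and-send sketch, assuming the v3 module path github.com/domodwyer/mailyak/v3; the SMTP host, credentials, and addresses are placeholders:

	package main

	import (
		"log"
		"net/smtp"
		"strings"

		"github.com/domodwyer/mailyak/v3"
	)

	func main() {
		// The host, credentials, and addresses below are placeholders.
		mail := mailyak.New("mail.example.com:25",
			smtp.PlainAuth("", "user", "pass", "mail.example.com"))

		mail.To("to@example.com")
		mail.From("from@example.com")
		mail.FromName("Example Sender")
		mail.Subject("Hello")

		// Plain() and HTML() return body parts that implement io.Writer, so
		// templates, io.Copy, etc. can write into them directly; Set() is a
		// convenience for literal strings.
		mail.Plain().Set("A plain-text body.")
		mail.HTML().Set("<h1>An HTML body.</h1>")

		// Anything implementing io.Reader can be attached.
		mail.Attach("notes.txt", strings.NewReader("attachment content"))

		// Send over SMTP (upgraded via STARTTLS when the server supports it).
		if err := mail.Send(); err != nil {
			log.Fatal(err)
		}
	}

For API-based delivery (for example Amazon SES), MimeBuf() can be called instead of Send() to obtain the raw MIME content, as noted above.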
Package datasync provides the API client, operations, and parameter types for AWS DataSync. DataSync is an online data movement and discovery service that simplifies data migration and helps you quickly, easily, and securely transfer your file or object data to, from, and between Amazon Web Services storage services. This API interface reference includes documentation for using DataSync programmatically. For complete information, see the DataSync User Guide.
Package ivschat provides the API client, operations, and parameter types for Amazon Interactive Video Service Chat. The Amazon IVS Chat control-plane API enables you to create and manage Amazon IVS Chat resources. You also need to integrate with the Amazon IVS Chat Messaging API, to enable users to interact with chat rooms in real time. The API is an AWS regional service. For a list of supported regions and Amazon IVS Chat HTTPS service endpoints, see the Amazon IVS Chat information on the Amazon IVS page in the AWS General Reference. This document describes HTTP operations. There is a separate messaging API for managing Chat resources; see the Amazon IVS Chat Messaging API Reference. Notes on terminology: You create service applications using the Amazon IVS Chat API. We refer to these as applications. You create front-end client applications (browser and Android/iOS apps) using the Amazon IVS Chat Messaging API. We refer to these as clients. The following resources are part of Amazon IVS Chat: LoggingConfiguration — A configuration that allows customers to store and record sent messages in a chat room. See the Logging Configuration endpoints for more information. Room — The central Amazon IVS Chat resource through which clients connect to and exchange chat messages. See the Room endpoints for more information. A tag is a metadata label that you assign to an AWS resource. A tag comprises a key and a value, both set by you. For example, you might set a tag as topic:nature to label a particular video category. See Best practices and strategies in Tagging Amazon Web Services Resources and Tag Editor for details, including restrictions that apply to tags and "Tag naming limits and requirements"; Amazon IVS Chat has no service-specific constraints beyond what is documented there. Tags can help you identify and organize your AWS resources. For example, you can use the same tag for different resources to indicate that they are related. You can also use tags to manage access (see Access Tags). The Amazon IVS Chat API has these tag-related operations: TagResource, UntagResource, and ListTagsForResource. The following resource supports tagging: Room. At most 50 tags can be applied to a resource. Your Amazon IVS Chat applications (service applications and clients) must be authenticated and authorized to access Amazon IVS Chat resources. Note the differences between these concepts: Authentication is about verifying identity. Requests to the Amazon IVS Chat API must be signed to verify your identity. Authorization is about granting permissions. Your IAM roles need to have permissions for Amazon IVS Chat API requests. Users (viewers) connect to a room using secure access tokens that you create using the CreateChatToken operation through the AWS SDK. You call CreateChatToken for every user’s chat session, passing identity and authorization information about the user. HTTP API requests must be signed with an AWS SigV4 signature using your AWS security credentials. The AWS Command Line Interface (CLI) and the AWS SDKs take care of signing the underlying API calls for you. However, if your application calls the Amazon IVS Chat HTTP API directly, it’s your responsibility to sign the requests. You generate a signature using valid AWS credentials for an IAM role that has permission to perform the requested action. For example, DeleteMessage requests must be made using an IAM role that has the ivschat:DeleteMessage permission.
For more information: Authentication and generating signatures — See Authenticating Requests (Amazon Web Services Signature Version 4) in the Amazon Web Services General Reference. Managing Amazon IVS permissions — See Identity and Access Management on the Security page of the Amazon IVS User Guide. Amazon Resource Names (ARNs): ARNs uniquely identify AWS resources. An ARN is required when you need to specify a resource unambiguously across all of AWS, such as in IAM policies and API calls. For more information, see Amazon Resource Names in the AWS General Reference.
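A minimal sketch of issuing a chat token with the aws-sdk-go-v2 ivschat client, following the CreateChatToken flow described above; the room ARN and user ID are placeholders, and the field names follow the SDK's generated types:

	package main

	import (
		"context"
		"fmt"
		"log"

		"github.com/aws/aws-sdk-go-v2/aws"
		"github.com/aws/aws-sdk-go-v2/config"
		"github.com/aws/aws-sdk-go-v2/service/ivschat"
	)

	func main() {
		ctx := context.Background()

		cfg, err := config.LoadDefaultConfig(ctx)
		if err != nil {
			log.Fatal(err)
		}
		client := ivschat.NewFromConfig(cfg)

		// Issue a token for one user's chat session in a given room.
		// The room ARN and user ID below are placeholders.
		out, err := client.CreateChatToken(ctx, &ivschat.CreateChatTokenInput{
			RoomIdentifier: aws.String("arn:aws:ivschat:us-west-2:123456789012:room/abcdefgh1234"),
			UserId:         aws.String("user-1234"),
		})
		if err != nil {
			log.Fatal(err)
		}

		// Hand the token to the front-end client, which uses it with the Messaging API.
		fmt.Println(aws.ToString(out.Token))
	}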
Package imagebuilder provides the API client, operations, and parameter types for EC2 Image Builder. EC2 Image Builder is a fully managed Amazon Web Services service that makes it easier to automate the creation, management, and deployment of customized, secure, and up-to-date "golden" server images that are pre-installed and pre-configured with software and settings to meet specific IT standards.
Package storagegateway provides the API client, operations, and parameter types for AWS Storage Gateway. Amazon FSx File Gateway is no longer available to new customers. Existing customers of FSx File Gateway can continue to use the service normally. For capabilities similar to FSx File Gateway, visit this blog post. Storage Gateway is the service that connects an on-premises software appliance with cloud-based storage to provide seamless and secure integration between an organization's on-premises IT environment and the Amazon Web Services storage infrastructure. The service enables you to securely upload data to the Amazon Web Services Cloud for cost-effective backup and rapid disaster recovery. Use the following links to get started using the Storage Gateway Service API Reference: Storage Gateway required request headers, Signing requests, Error responses, Operations in Storage Gateway, and Storage Gateway endpoints and quotas. Storage Gateway resource IDs are in uppercase. When you use these resource IDs with the Amazon EC2 API, EC2 expects resource IDs in lowercase. You must change your resource ID to lowercase to use it with the EC2 API. For example, in Storage Gateway the ID for a volume might be vol-AA22BB012345DAF670. When you use this ID with the EC2 API, you must change it to vol-aa22bb012345daf670. Otherwise, the EC2 API might not behave as expected. IDs for Storage Gateway volumes and Amazon EBS snapshots created from gateway volumes are changing to a longer format. Starting in December 2016, all new volumes and snapshots will be created with a 17-character string. Starting in April 2016, you will be able to use these longer IDs so you can test your systems with the new format. For more information, see Longer EC2 and EBS resource IDs. For example, a volume Amazon Resource Name (ARN) with the longer volume ID format looks like the following: arn:aws:storagegateway:us-west-2:111122223333:gateway/sgw-12A3456B/volume/vol-1122AABBCCDDEEFFG. A snapshot ID with the longer ID format looks like the following: snap-78e226633445566ee. For more information, see Announcement: Heads-up – Longer Storage Gateway volume and snapshot IDs coming in 2016.
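A trivial sketch of the uppercase-to-lowercase ID conversion described above, using only the standard library; the variable names are illustrative and the ID is the example volume ID from the text:

	package main

	import (
		"fmt"
		"strings"
	)

	func main() {
		// Storage Gateway reports resource IDs in uppercase; the EC2 API expects lowercase.
		sgwVolumeID := "vol-AA22BB012345DAF670"
		ec2VolumeID := strings.ToLower(sgwVolumeID)
		fmt.Println(ec2VolumeID) // vol-aa22bb012345daf670
	}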
Package opensearchserverless provides the API client, operations, and parameter types for OpenSearch Service Serverless. Use the Amazon OpenSearch Serverless API to create, configure, and manage OpenSearch Serverless collections and security policies. OpenSearch Serverless is an on-demand, pre-provisioned serverless configuration for Amazon OpenSearch Service. OpenSearch Serverless removes the operational complexities of provisioning, configuring, and tuning your OpenSearch clusters. It enables you to easily search and analyze petabytes of data without having to worry about the underlying infrastructure and data management. To learn more about OpenSearch Serverless, see What is Amazon OpenSearch Serverless?
Package worklink provides the API client, operations, and parameter types for Amazon WorkLink. Amazon WorkLink is a cloud-based service that provides secure access to internal websites and web apps from iOS and Android phones. In a single step, your users, such as employees, can access internal websites as efficiently as they access any other public website. They enter a URL in their web browser, or choose a link to an internal website in an email. Amazon WorkLink authenticates the user's access and securely renders authorized internal web content in a secure rendering service in the AWS cloud. Amazon WorkLink doesn't download or store any internal web content on mobile devices.
Package grafana provides the API client, operations, and parameter types for Amazon Managed Grafana. Amazon Managed Grafana is a fully managed and secure data visualization service that you can use to instantly query, correlate, and visualize operational metrics, logs, and traces from multiple sources. Amazon Managed Grafana makes it easy to deploy, operate, and scale Grafana, a widely deployed data visualization tool that is popular for its extensible data support. With Amazon Managed Grafana, you create logically isolated Grafana servers called workspaces. In a workspace, you can create Grafana dashboards and visualizations to analyze your metrics, logs, and traces without having to build, package, or deploy any hardware to run Grafana servers.
Package graphql-go-tools is a library for creating GraphQL services in the Go programming language. GraphQL is a query language for APIs and a runtime for fulfilling those queries with your existing data. GraphQL provides a complete and understandable description of the data in your API, gives clients the power to ask for exactly what they need and nothing more, makes it easier to evolve APIs over time, and enables powerful developer tools. Source: https://graphql.org This library is intended to be a set of low-level building blocks for writing high-performance and secure GraphQL applications. Use cases include layer-seven GraphQL proxies, firewalls, caches, and similar tooling. You would usually not use this library to write a GraphQL server yourself but to build tools for the GraphQL ecosystem. To achieve this goal the library has zero dependencies for its core functionality. It has a full implementation of the GraphQL AST and supports lexing, parsing, validation, normalization, introspection, query planning, and query execution. With the execution package it's possible to write a fully functional GraphQL server that is capable of mediating between various protocols and formats. In its current state you can use the following DataSources to resolve fields: - Static data (embed static data into a schema to extend a field in a simple way) - HTTP JSON APIs (combine multiple RESTful APIs into one single GraphQL endpoint, nesting is possible) - GraphQL APIs (combine multiple GraphQL APIs into one single GraphQL endpoint, nesting is possible) - WebAssembly/WASM lambdas (e.g. resolve a field using a Rust lambda) If you're looking for a ready-to-use solution that has all this functionality packaged as a gateway, have a look at: https://wundergraph.com Created by Jens Neuse
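A rough sketch of the parsing and validation building blocks mentioned above. The import paths assume the v1 module layout (github.com/jensneuse/graphql-go-tools/pkg/...); newer releases live under github.com/wundergraph/graphql-go-tools and may differ, so treat the exact package names and signatures as assumptions:

	package main

	import (
		"fmt"
		"log"

		"github.com/jensneuse/graphql-go-tools/pkg/astparser"
		"github.com/jensneuse/graphql-go-tools/pkg/asttransform"
		"github.com/jensneuse/graphql-go-tools/pkg/astvalidation"
		"github.com/jensneuse/graphql-go-tools/pkg/operationreport"
	)

	const schema = `
	type Query {
		hello: String
	}
	`

	const query = `
	query Hello {
		hello
	}
	`

	func main() {
		// Parse the schema (type system definition) and the operation into ASTs.
		definition, report := astparser.ParseGraphqlDocumentString(schema)
		if report.HasErrors() {
			log.Fatalf("schema parse errors: %+v", report)
		}
		// Merge the built-in scalars and directives into the schema before validating against it.
		if err := asttransform.MergeDefinitionWithBaseSchema(&definition); err != nil {
			log.Fatal(err)
		}

		operation, report := astparser.ParseGraphqlDocumentString(query)
		if report.HasErrors() {
			log.Fatalf("operation parse errors: %+v", report)
		}

		// Validate the operation against the schema using the default rule set.
		validator := astvalidation.DefaultOperationValidator()
		var validationReport operationreport.Report
		validator.Validate(&operation, &definition, &validationReport)
		if validationReport.HasErrors() {
			log.Fatalf("validation errors: %+v", validationReport)
		}

		fmt.Println("operation is valid against the schema")
	}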