Package dns implements a full featured interface to the Domain Name System. Both server- and client-side programming is supported. The package allows complete control over what is sent out to the DNS. The API follows the less-is-more principle, by presenting a small, clean interface. It supports (asynchronous) querying/replying, incoming/outgoing zone transfers, TSIG, EDNS0, dynamic updates, notifies and DNSSEC validation/signing. Note that domain names MUST be fully qualified before sending them; unqualified names in a message will result in a packing failure. Resource records are native types. They are not stored in wire format. Basic usage pattern for creating a new resource record: Or directly from a string: Or when the default origin (.) and TTL (3600) and class (IN) suit you: Or even: In the DNS, messages are exchanged; these messages contain resource records (sets). Use pattern for creating a message: Or when not certain if the domain name is fully qualified: The message m is now a message with the question section set to ask the MX records for the miek.nl. zone. The following is slightly more verbose, but more flexible: After creating a message it can be sent. Basic use pattern for synchronously querying the DNS at a server configured on 127.0.0.1 and port 53: Suppressing multiple outstanding queries (with the same question, type and class) is as easy as setting: More advanced options are available using a net.Dialer and the corresponding API. For example it is possible to set a timeout, or to specify a source IP address and port to use for the connection: If these "advanced" features are not needed, a simple UDP query can be sent, with: When this function returns you will get a DNS message. A DNS message consists of four sections. The question section: in.Question, the answer section: in.Answer, the authority section: in.Ns and the additional section: in.Extra. Each of these sections (except the Question section) contains a []RR. Basic use pattern for accessing the rdata of a TXT RR as the first RR in the Answer section: Both domain names and TXT character strings are converted to presentation form both when unpacked and when converted to strings. For TXT character strings, tabs, carriage returns and line feeds will be converted to \t, \r and \n respectively. Backslashes and quotation marks will be escaped. Bytes below 32 and above 127 will be converted to \DDD form. For domain names, in addition to the above rules, brackets, periods, spaces, semicolons and the at symbol are escaped. DNSSEC (DNS Security Extensions) adds a layer of security to the DNS. It uses public key cryptography to sign resource records. The public keys are stored in DNSKEY records and the signatures in RRSIG records. Requesting DNSSEC information for a zone is done by adding the DO (DNSSEC OK) bit to a request. Signature generation, signature verification and key generation are all supported. Dynamic updates reuse the DNS message format, but rename three of the sections: Question is Zone, Answer is Prerequisite, and Authority is Update; only the Additional section is not renamed. See RFC 2136 for the gory details. You can set a rather complex set of rules for the existence or absence of certain resource records or names in a zone to specify whether resource records should be added or removed. The table from RFC 2136, supplemented with the Go DNS functions, shows which functions exist to specify the prerequisites. The prerequisite section can also be left empty.
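As a rough, hedged sketch (not taken from the package's own examples), a dynamic update message with a single prerequisite could be put together like this; the zone and record below are placeholders:

    package main

    import (
        "fmt"
        "log"

        "github.com/miekg/dns"
    )

    func main() {
        // Build a dynamic update for the example.org. zone (the Zone section).
        m := new(dns.Msg)
        m.SetUpdate("example.org.")

        // Prerequisite: the name must not already be in use.
        rr, err := dns.NewRR("service.example.org. 3600 IN A 192.0.2.1")
        if err != nil {
            log.Fatal(err)
        }
        m.NameNotUsed([]dns.RR{rr})

        fmt.Println(m.String())
    }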
If you have decided on the prerequisites you can tell what RRs should be added or deleted. The next table shows the options you have and what functions to call. A TSIG, or transaction signature, adds an HMAC TSIG record to each message sent. The supported algorithms include: HmacSHA1, HmacSHA256 and HmacSHA512. Basic use pattern when querying with a TSIG name "axfr." (note that these key names must be fully qualified - as they are domain names) and the base64 secret "so6ZGir4GPAqINNh9U5c3A==": If an incoming message contains a TSIG record it MUST be the last record in the additional section (RFC 2845 3.2). This means that you should make the call to SetTsig last, right before executing the query. If you make any changes to the RRset after calling SetTsig() the signature will be incorrect. When requesting a zone transfer with TSIG (almost all TSIG usage is when requesting zone transfers), this is the basic use pattern. In this example we request an AXFR for miek.nl. with the TSIG key named "axfr." and secret "so6ZGir4GPAqINNh9U5c3A==", using the server 176.58.119.54: You can now read the records from the transfer as they come in. Each envelope is checked with TSIG. If something is not correct an error is returned. A custom TSIG implementation can be used. This requires additional code to perform any session establishment and signature generation/verification. The client must be configured with an implementation of the TsigProvider interface: Basic use pattern for validating and replying to a message that has TSIG set. RFC 6895 sets aside a range of type codes for private use. This range is 65,280 - 65,534 (0xFF00 - 0xFFFE). When experimenting with new Resource Records these can be used, before requesting an official type code from IANA. See https://miek.nl/2014/september/21/idn-and-private-rr-in-go-dns/ for more information. EDNS0 is an extension mechanism for the DNS defined in RFC 2671 and updated by RFC 6891. It defines a new RR type, the OPT RR, which is then completely abused. Basic use pattern for creating an (empty) OPT RR: The rdata of an OPT RR consists of a slice of EDNS0 (RFC 6891) interfaces. Currently only a few have been standardized: EDNS0_NSID (RFC 5001) and EDNS0_SUBNET (RFC 7871). Note that these options may be combined in an OPT RR. Basic use pattern for a server to check if (and which) options are set: SIG(0) is defined in RFC 2931. It works like TSIG, except that SIG(0) uses public key cryptography instead of the shared secret approach of TSIG. Supported algorithms: ECDSAP256SHA256, ECDSAP384SHA384, RSASHA1, RSASHA256 and RSASHA512. Signing subsequent messages in multi-message sessions is not implemented.
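To tie the TSIG and zone transfer pieces together, here is a hedged sketch of the AXFR request described above, assuming the Transfer type and SetTsig behave as documented:

    package main

    import (
        "fmt"
        "log"
        "time"

        "github.com/miekg/dns"
    )

    func main() {
        m := new(dns.Msg)
        m.SetAxfr("miek.nl.")
        m.SetTsig("axfr.", dns.HmacSHA256, 300, time.Now().Unix())

        t := new(dns.Transfer)
        t.TsigSecret = map[string]string{"axfr.": "so6ZGir4GPAqINNh9U5c3A=="}

        env, err := t.In(m, "176.58.119.54:53")
        if err != nil {
            log.Fatal(err)
        }
        // Each envelope has already been checked against the TSIG.
        for e := range env {
            if e.Error != nil {
                log.Fatal(e.Error)
            }
            for _, rr := range e.RR {
                fmt.Println(rr)
            }
        }
    }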
Package cap provides all the Linux Capabilities userspace library API bindings in native Go. Capabilities are a feature of the Linux kernel that allow fine-grained permissions to perform privileged operations. Privileged operations are required to do irregular system level operations from code. You can read more about how Capabilities are intended to work here: This package supports native Go bindings for all the features described in that paper as well as supporting subsequent changes to the kernel for other styles of inheritable Capability. Some simple things you can do with this package are: The "cap" package operates with POSIX semantics for security state. That is, all OS threads are kept in sync at all times. The package "kernel.org/pub/linux/libs/security/libcap/psx" is used to implement POSIX semantics system calls that manipulate thread state uniformly over the whole Go (and any CGo linked) process runtime. Note, if the Go runtime syscall interface contains the Linux variant syscall.AllThreadsSyscall() API (it debuted in go1.16; see https://github.com/golang/go/issues/1435 for its history) then the "libcap/psx" package will use that to invoke Capability setting system calls in pure Go binaries. With such an enhanced Go runtime, to force this behavior, use the CGO_ENABLED=0 environment variable. POSIX semantics are more secure than trying to manage privilege at a thread level when those threads share a common memory image as they do under Linux: it is trivial to exploit a vulnerability in one thread of a process to cause execution on any other thread. So, any imbalance in security state in such cases will readily create an opportunity for a privilege escalation vulnerability. POSIX semantics also work well with Go, which deliberately tries to insulate the user from worrying about the number of OS threads that are actually running in their program. Indeed, Go can efficiently launch and manage tens of thousands of concurrent goroutines without bogging the program or wider system down. It does this by aggressively migrating idle threads to make progress on unblocked goroutines. So, inconsistent security state across OS threads can also lead to program misbehavior. The only exception to this process-wide common security state is the cap.Launcher related functionality. This briefly locks an OS thread to a goroutine in order to launch another executable - the robust implementation of this kind of support is quite subtle, so please read its documentation carefully, if you find that you need it. See https://sites.google.com/site/fullycapable/ for recent updates, some more complete walk-through examples of ways of using 'cap.Set's etc and information on how to file bugs. Copyright (c) 2019-21 Andrew G. Morgan <morgan@kernel.org> The cap and psx packages are licensed with a (you choose) BSD 3-clause or GPL2. See LICENSE file for details.
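As a small, hedged illustration (assuming the cap.GetProc and Set.GetFlag APIs work as described in the documentation), inspecting the current process capabilities might look like this:

    package main

    import (
        "fmt"
        "log"

        "kernel.org/pub/linux/libs/security/libcap/cap"
    )

    func main() {
        // Read the capability Set of the current process.
        c := cap.GetProc()
        fmt.Println("current caps:", c)

        // Check whether CAP_SETUID is present in the Effective flag.
        on, err := c.GetFlag(cap.Effective, cap.SETUID)
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println("effective CAP_SETUID:", on)
    }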
Package age implements file encryption according to the age-encryption.org/v1 specification. For most use cases, use the Encrypt and Decrypt functions with X25519Recipient and X25519Identity. If passphrase encryption is required, use ScryptRecipient and ScryptIdentity. For compatibility with existing SSH keys use the filippo.io/age/agessh package. age encrypted files are binary and not malleable. For encoding them as text, use the filippo.io/age/armor package. age does not have a global keyring. Instead, since age keys are small, textual, and cheap, you are encouraged to generate dedicated keys for each task and application. Recipient public keys can be passed around as command line flags and in config files, while secret keys should be stored in dedicated files, through secret management systems, or as environment variables. There is no default path for age keys. Instead, they should be stored at application-specific paths. The CLI supports files where private keys are listed one per line, ignoring empty lines and lines starting with "#". These files can be parsed with ParseIdentities. When integrating age into a new system, it's recommended that you only support X25519 keys, and not SSH keys. The latter are supported for manual encryption operations. If you need to tie into existing key management infrastructure, you might want to consider implementing your own Recipient and Identity. Files encrypted with a stable version (not alpha, beta, or release candidate) of age, or with any v1.0.0 beta or release candidate, will decrypt with any later versions of the v1 API. This might change in v2, in which case v1 will be maintained with security fixes for compatibility with older files. If decrypting an older file poses a security risk, doing so might require an explicit opt-in in the API.
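A minimal sketch of the X25519 round trip described above, generating a throwaway identity in place of a key loaded from a file or secret manager:

    package main

    import (
        "bytes"
        "fmt"
        "io"
        "log"

        "filippo.io/age"
    )

    func main() {
        // Generate a new X25519 key pair. In practice the identity would be
        // loaded from a dedicated key file or a secret management system.
        identity, err := age.GenerateX25519Identity()
        if err != nil {
            log.Fatal(err)
        }
        recipient := identity.Recipient()

        // Encrypt a message to the recipient's public key.
        buf := &bytes.Buffer{}
        w, err := age.Encrypt(buf, recipient)
        if err != nil {
            log.Fatal(err)
        }
        if _, err := io.WriteString(w, "hello, age"); err != nil {
            log.Fatal(err)
        }
        if err := w.Close(); err != nil {
            log.Fatal(err)
        }

        // Decrypt it back with the matching identity.
        r, err := age.Decrypt(buf, identity)
        if err != nil {
            log.Fatal(err)
        }
        out, err := io.ReadAll(r)
        if err != nil {
            log.Fatal(err)
        }
        fmt.Printf("%s\n", out)
    }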
Package gocql implements a fast and robust Cassandra driver for the Go programming language. Pass a list of initial node IP addresses to NewCluster to create a new cluster configuration: Port can be specified as part of the address, the above is equivalent to: It is recommended to use the value set in the Cassandra config for broadcast_address or listen_address: an IP address, not a domain name. This is because events from Cassandra will use the configured IP address, which is used to index connected hosts. If the domain name specified resolves to more than 1 IP address then the driver may connect multiple times to the same host, and will not mark the node being down or up from events. Then you can customize more options (see ClusterConfig): The driver tries to automatically detect the protocol version to use if not set, but you might want to set the protocol version explicitly, as it's not defined which version will be used in certain situations (for example during upgrade of the cluster when some of the nodes support a different set of protocol versions than other nodes). The driver advertises the module name and version in the STARTUP message, so servers are able to detect the version. If you use a replace directive in go.mod, the driver will send information about the replacement module instead. When ready, create a session from the configuration. Don't forget to Close the session once you are done with it: The CQL protocol uses a SASL-based authentication mechanism and so consists of an exchange of server challenge and client response pairs. The details of the exchanged messages depend on the authenticator used. To use authentication, set ClusterConfig.Authenticator or ClusterConfig.AuthProvider. PasswordAuthenticator is provided to use for username/password authentication: It is possible to secure traffic between the client and server with TLS. To use TLS, set the ClusterConfig.SslOpts field. SslOptions embeds *tls.Config so you can set that directly. There are also helpers to load keys/certificates from files. Warning: Due to historical reasons, the SslOptions is insecure by default, so you need to set EnableHostVerification to true if no Config is set. Most users should set SslOptions.Config to a *tls.Config. SslOptions and Config.InsecureSkipVerify interact as follows: For example: To route queries to the local DC first, use DCAwareRoundRobinPolicy. For example, if the datacenter you primarily want to connect to is called dc1 (as configured in the database): The driver can route queries to nodes that hold data replicas based on partition key (preferring local DC). Note that TokenAwareHostPolicy can take options such as gocql.ShuffleReplicas and gocql.NonLocalReplicasFallback. We recommend running with a token aware host policy in production for maximum performance. The driver can only use token-aware routing for queries where all partition key columns are query parameters. For example, instead of use The DCAwareRoundRobinPolicy can be replaced with RackAwareRoundRobinPolicy, which takes two parameters, datacenter and rack. Instead of dividing hosts with two tiers (local datacenter and remote datacenters) it divides hosts into three (the local rack, the rest of the local datacenter, and everything else). RackAwareRoundRobinPolicy can be combined with TokenAwareHostPolicy in the same way as DCAwareRoundRobinPolicy. Create queries with Session.Query. Query values must not be reused between different executions and must not be modified after starting execution of the query.
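A hedged sketch that puts the configuration steps above together; the addresses, keyspace, credentials and datacenter name are placeholders:

    package main

    import (
        "log"

        "github.com/gocql/gocql"
    )

    func main() {
        cluster := gocql.NewCluster("192.0.2.10", "192.0.2.11")
        cluster.Keyspace = "example"
        cluster.Consistency = gocql.Quorum
        cluster.Authenticator = gocql.PasswordAuthenticator{
            Username: "user",
            Password: "password",
        }
        // Token-aware routing, preferring the local datacenter dc1.
        cluster.PoolConfig.HostSelectionPolicy = gocql.TokenAwareHostPolicy(gocql.DCAwareRoundRobinPolicy("dc1"))

        session, err := cluster.CreateSession()
        if err != nil {
            log.Fatal(err)
        }
        defer session.Close()
    }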
To execute a query without reading results, use Query.Exec: A single row can be read by calling Query.Scan: Multiple rows can be read using Iter.Scanner: See Example for a complete example. The driver automatically prepares DML queries (SELECT/INSERT/UPDATE/DELETE/BATCH statements) and maintains a cache of prepared statements. The CQL protocol does not support preparing other query types. When using CQL protocol >= 4, it is possible to use gocql.UnsetValue as the bound value of a column. This will cause the database to ignore writing the column. The main advantage is the ability to keep the same prepared statement even when you don't want to update some fields, where before you needed to make another prepared statement. Session is safe to use from multiple goroutines, so to execute multiple concurrent queries, just execute them from several worker goroutines. Gocql provides a synchronous-looking API (as recommended for Go APIs) and the queries are executed asynchronously at the protocol level. Null values are unmarshalled as the zero value of the type. If you need to distinguish for example between a text column being null and an empty string, you can unmarshal into a *string variable instead of string. See Example_nulls for a full example. The driver reuses backing memory of slices when unmarshalling. This is an optimization so that a buffer does not need to be allocated for every processed row. However, you need to be careful when storing the slices to other memory structures. When you want to save the data for later use, pass a new slice every time. A common pattern is to declare the slice variable within the scanner loop: The driver supports paging of results with automatic prefetch, see ClusterConfig.PageSize, Session.SetPrefetch, Query.PageSize, and Query.Prefetch. It is also possible to control the paging manually with Query.PageState (this disables automatic prefetch). Manual paging is useful if you want to store the page state externally, for example in a URL to allow users to browse pages in a result. You might want to sign/encrypt the paging state when exposing it externally since it contains data from primary keys. Paging state is specific to the CQL protocol version and the exact query used. It is meant as opaque state that should not be modified. If you send paging state from a different query or protocol version, then the behaviour is not defined (you might get unexpected results or an error from the server). For example, do not send paging state returned by a node using protocol version 3 to a node using protocol version 4. Also, when using protocol version 4, paging state between Cassandra 2.2 and 3.0 is incompatible (https://issues.apache.org/jira/browse/CASSANDRA-10880). The driver does not check whether the paging state is from the same protocol version/statement. You might want to validate yourself as this could be a problem if you store paging state externally. For example, if you store paging state in a URL, the URLs might become broken when you upgrade your cluster. Call Query.PageState(nil) to fetch just the first page of the query results. Pass the page state returned by Iter.PageState to Query.PageState of a subsequent query to get the next page. If the length of the slice returned by Iter.PageState is zero, there are no more pages available (or an error occurred). Using too low a value of PageSize will negatively affect performance; a value below 100 is probably too low.
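As a hedged sketch of the manual paging pattern just described (the table and column names are made up), a helper might fetch one page at a time and hand back the state for the next call:

    package example

    import (
        "context"

        "github.com/gocql/gocql"
    )

    // fetchPage executes one page of a query and returns the rows together
    // with the paging state to pass to the next call. A nil pageState fetches
    // the first page; a zero-length returned state means there are no more pages.
    func fetchPage(ctx context.Context, session *gocql.Session, pageState []byte) ([]string, []byte, error) {
        iter := session.Query(`SELECT name FROM users`).
            WithContext(ctx).
            PageSize(100).
            PageState(pageState).
            Iter()

        var (
            names []string
            n     string
        )
        for iter.Scan(&n) {
            names = append(names, n)
        }
        next := iter.PageState()
        return names, next, iter.Close()
    }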
While Cassandra currently returns exactly PageSize items (except for the last page) in a page, the protocol authors explicitly reserved the right to return a smaller or larger number of items in a page for performance reasons, so don't rely on the page having the exact count of items. See Example_paging for an example of manual paging. There are certain situations when you don't know the list of columns in advance, mainly when the query is supplied by the user. Iter.Columns, Iter.RowData, Iter.MapScan and Iter.SliceMap can be used to handle this case. See Example_dynamicColumns. The CQL protocol supports sending batches of DML statements (INSERT/UPDATE/DELETE) and so does gocql. Use Session.NewBatch to create a new batch and then fill in the details of individual queries. Then execute the batch with Session.ExecuteBatch. Logged batches ensure atomicity: either all or none of the operations in the batch will succeed, but they have overhead to ensure this property. Unlogged batches don't have the overhead of logged batches, but don't guarantee atomicity. Updates of counters are handled specially by Cassandra, so batches of counter updates have to use the CounterBatch type. A counter batch can only contain statements to update counters. For unlogged batches it is recommended to send only single-partition batches (i.e. all statements in the batch should involve only a single partition). A multi-partition batch needs to be split by the coordinator node and re-sent to the correct nodes. With single-partition batches you can send the batch directly to the node for the partition without incurring the additional network hop. It is also possible to pass an entire BEGIN BATCH .. APPLY BATCH statement to Query.Exec. There are differences in how those are executed. A BEGIN BATCH statement passed to Query.Exec is prepared as a whole in a single statement. Session.ExecuteBatch prepares individual statements in the batch. If you have variable-length batches using the same statement, using Session.ExecuteBatch is more efficient. See Example_batch for an example. Query.ScanCAS or Query.MapScanCAS can be used to execute a single-statement lightweight transaction (an INSERT/UPDATE .. IF statement) and read its result. See the example for Query.MapScanCAS. Multiple-statement lightweight transactions can be executed as a logged batch that contains at least one conditional statement. All the conditions must return true for the batch to be applied. You can use Session.ExecuteBatchCAS and Session.MapExecuteBatchCAS when executing the batch to learn about the result of the LWT. See the example for Session.MapExecuteBatchCAS. Queries can be marked as idempotent. Marking the query as idempotent tells the driver that the query can be executed multiple times without affecting its result. Non-idempotent queries are not eligible for retrying nor speculative execution. Idempotent queries are retried in case of errors based on the configured RetryPolicy. Queries can be retried even before they fail by setting a SpeculativeExecutionPolicy. The policy can cause the driver to retry on a different node if the query is taking longer than a specified delay even before the driver receives an error or timeout from the server. When a query is speculatively executed, the original execution is still executing. The two parallel executions of the query race to return a result; the first result received will be returned.
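A hedged sketch of a logged batch and a single-statement lightweight transaction, using made-up table names:

    package example

    import (
        "context"

        "github.com/gocql/gocql"
    )

    func batchAndCAS(ctx context.Context, session *gocql.Session, id gocql.UUID) error {
        // A logged batch: either all statements succeed or none do.
        b := session.NewBatch(gocql.LoggedBatch).WithContext(ctx)
        b.Query(`INSERT INTO users (id, name) VALUES (?, ?)`, id, "alice")
        b.Query(`INSERT INTO user_emails (id, email) VALUES (?, ?)`, id, "alice@example.com")
        if err := session.ExecuteBatch(b); err != nil {
            return err
        }

        // A single-statement lightweight transaction; the previous values are
        // returned in the map when the insert was not applied.
        previous := map[string]interface{}{}
        applied, err := session.Query(
            `INSERT INTO users (id, name) VALUES (?, ?) IF NOT EXISTS`,
            id, "alice").WithContext(ctx).MapScanCAS(previous)
        if err != nil {
            return err
        }
        _ = applied // false if the row already existed
        return nil
    }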
UDTs can be (un)marshaled from/to a map[string]interface{} or a Go struct (or a type implementing the UDTUnmarshaler, UDTMarshaler, Unmarshaler or Marshaler interfaces). For structs, the cql tag can be used to specify the CQL field name to be mapped to a struct field: See Example_userDefinedTypesMap, Example_userDefinedTypesStruct, ExampleUDTMarshaler, ExampleUDTUnmarshaler. It is possible to provide observer implementations that could be used to gather metrics: The CQL protocol also supports tracing of queries. When enabled, the database will write information about internal events that happened during execution of the query. You can use Query.Trace to request tracing and receive the session ID that the database used to store the trace information in the system_traces.sessions and system_traces.events tables. NewTraceWriter returns an implementation of Tracer that writes the events to a writer. Gathering trace information might be essential for debugging and optimizing queries, but writing traces has overhead, so this feature should not be used on production systems with very high load unless you know what you are doing. Example_batch demonstrates how to execute a batch of statements. Example_dynamicColumns demonstrates how to handle a dynamic column list. Example_marshalerUnmarshaler demonstrates how to implement a Marshaler and Unmarshaler. Example_nulls demonstrates how to distinguish between null and zero value when needed. Null values are unmarshalled as the zero value of the type. If you need to distinguish for example between a text column being null and an empty string, you can unmarshal into a *string field. Example_paging demonstrates how to manually fetch pages and use page state. See also the package documentation about paging. Example_set demonstrates how to use sets. Example_userDefinedTypesMap demonstrates how to work with user-defined types as maps. See also Example_userDefinedTypesStruct and the examples for UDTMarshaler and UDTUnmarshaler if you want to map to structs. Example_userDefinedTypesStruct demonstrates how to work with user-defined types as structs. See also the examples for UDTMarshaler and UDTUnmarshaler if you need more control/better performance.
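For illustration only (the CQL type, table and column are hypothetical), a UDT might be mapped onto a struct with cql tags like this:

    package example

    import "github.com/gocql/gocql"

    // Coordinates maps the hypothetical CQL type
    //   CREATE TYPE coordinates (x double, y double);
    // onto a Go struct using cql field tags.
    type Coordinates struct {
        X float64 `cql:"x"`
        Y float64 `cql:"y"`
    }

    // readPoint scans a UDT column directly into the struct.
    func readPoint(session *gocql.Session, id gocql.UUID) (Coordinates, error) {
        var pos Coordinates
        err := session.Query(`SELECT pos FROM points WHERE id = ?`, id).Scan(&pos)
        return pos, err
    }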
Package sts provides the API client, operations, and parameter types for AWS Security Token Service. Security Token Service Security Token Service (STS) enables you to request temporary, limited-privilege credentials for users. This guide provides descriptions of the STS API. For more information about using this service, see Temporary Security Credentials (https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_temp.html) .
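As a hedged sketch of creating the STS API client with the AWS SDK for Go v2 and issuing a simple call:

    package main

    import (
        "context"
        "fmt"
        "log"

        "github.com/aws/aws-sdk-go-v2/config"
        "github.com/aws/aws-sdk-go-v2/service/sts"
    )

    func main() {
        cfg, err := config.LoadDefaultConfig(context.TODO())
        if err != nil {
            log.Fatal(err)
        }
        client := sts.NewFromConfig(cfg)

        out, err := client.GetCallerIdentity(context.TODO(), &sts.GetCallerIdentityInput{})
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println("caller ARN:", *out.Arn)
    }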
Package ecr provides the API client, operations, and parameter types for Amazon EC2 Container Registry. Amazon Elastic Container Registry Amazon Elastic Container Registry (Amazon ECR) is a managed container image registry service. Customers can use the familiar Docker CLI, or their preferred client, to push, pull, and manage images. Amazon ECR provides a secure, scalable, and reliable registry for your Docker or Open Container Initiative (OCI) images. Amazon ECR supports private repositories with resource-based permissions using IAM so that specific users or Amazon EC2 instances can access repositories and images. Amazon ECR has service endpoints in each supported Region. For more information, see Amazon ECR endpoints (https://docs.aws.amazon.com/general/latest/gr/ecr.html) in the Amazon Web Services General Reference.
Package ec2 provides the API client, operations, and parameter types for Amazon Elastic Compute Cloud. Amazon Elastic Compute Cloud Amazon Elastic Compute Cloud (Amazon EC2) provides secure and resizable computing capacity in the Amazon Web Services Cloud. Using Amazon EC2 eliminates the need to invest in hardware up front, so you can develop and deploy applications faster. Amazon Virtual Private Cloud (Amazon VPC) enables you to provision a logically isolated section of the Amazon Web Services Cloud where you can launch Amazon Web Services resources in a virtual network that you've defined. Amazon Elastic Block Store (Amazon EBS) provides block level storage volumes for use with EC2 instances. EBS volumes are highly available and reliable storage volumes that can be attached to any running instance and used like a hard drive. To learn more, see the following resources:
Package ecrpublic provides the API client, operations, and parameter types for Amazon Elastic Container Registry Public. Amazon Elastic Container Registry Public Amazon Elastic Container Registry Public (Amazon ECR Public) is a managed container image registry service. Amazon ECR provides both public and private registries to host your container images. You can use the Docker CLI or your preferred client to push, pull, and manage images. Amazon ECR provides a secure, scalable, and reliable registry for your Docker or Open Container Initiative (OCI) images. Amazon ECR supports public repositories with this API. For information about the Amazon ECR API for private repositories, see Amazon Elastic Container Registry API Reference (https://docs.aws.amazon.com/AmazonECR/latest/APIReference/Welcome.html) .
Package kms provides the API client, operations, and parameter types for AWS Key Management Service. Key Management Service Key Management Service (KMS) is an encryption and key management web service. This guide describes the KMS operations that you can call programmatically. For general information about KMS, see the Key Management Service Developer Guide (https://docs.aws.amazon.com/kms/latest/developerguide/) . KMS has replaced the term customer master key (CMK) with KMS key and KMS keys. The concept has not changed. To prevent breaking changes, KMS is keeping some variations of this term. Amazon Web Services provides SDKs that consist of libraries and sample code for various programming languages and platforms (Java, Ruby, .Net, macOS, Android, etc.). The SDKs provide a convenient way to create programmatic access to KMS and other Amazon Web Services services. For example, the SDKs take care of tasks such as signing requests (see below), managing errors, and retrying requests automatically. For more information about the Amazon Web Services SDKs, including how to download and install them, see Tools for Amazon Web Services (http://aws.amazon.com/tools/) . We recommend that you use the Amazon Web Services SDKs to make programmatic API calls to KMS. If you need to use FIPS 140-2 validated cryptographic modules when communicating with Amazon Web Services, use the FIPS endpoint in your preferred Amazon Web Services Region. For more information about the available FIPS endpoints, see Service endpoints (https://docs.aws.amazon.com/general/latest/gr/kms.html#kms_region) in the Key Management Service topic of the Amazon Web Services General Reference. All KMS API calls must be signed and be transmitted using Transport Layer Security (TLS). KMS recommends you always use the latest supported TLS version. Clients must also support cipher suites with Perfect Forward Secrecy (PFS) such as Ephemeral Diffie-Hellman (DHE) or Elliptic Curve Ephemeral Diffie-Hellman (ECDHE). Most modern systems such as Java 7 and later support these modes. Signing Requests Requests must be signed using an access key ID and a secret access key. We strongly recommend that you do not use your Amazon Web Services account root access key ID and secret access key for everyday work. You can use the access key ID and secret access key for an IAM user or you can use the Security Token Service (STS) to generate temporary security credentials and use those to sign requests. All KMS requests must be signed with Signature Version 4 (https://docs.aws.amazon.com/general/latest/gr/signature-version-4.html) . Logging API Requests KMS supports CloudTrail, a service that logs Amazon Web Services API calls and related events for your Amazon Web Services account and delivers them to an Amazon S3 bucket that you specify. By using the information collected by CloudTrail, you can determine what requests were made to KMS, who made the request, when it was made, and so on. To learn more about CloudTrail, including how to turn it on and find your log files, see the CloudTrail User Guide (https://docs.aws.amazon.com/awscloudtrail/latest/userguide/) . Additional Resources For more information about credentials and request signing, see the following: Commonly Used API Operations Of the API operations discussed in this guide, the following will prove the most useful for most applications. You will likely perform operations other than these, such as creating keys and assigning policies, by using the console.
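By way of a hedged sketch (the key alias is a placeholder), an encrypt/decrypt round trip with the Go SDK client might look like this:

    package main

    import (
        "context"
        "fmt"
        "log"

        "github.com/aws/aws-sdk-go-v2/aws"
        "github.com/aws/aws-sdk-go-v2/config"
        "github.com/aws/aws-sdk-go-v2/service/kms"
    )

    func main() {
        ctx := context.TODO()
        cfg, err := config.LoadDefaultConfig(ctx)
        if err != nil {
            log.Fatal(err)
        }
        client := kms.NewFromConfig(cfg)

        // "alias/my-key" stands in for a KMS key in your account.
        enc, err := client.Encrypt(ctx, &kms.EncryptInput{
            KeyId:     aws.String("alias/my-key"),
            Plaintext: []byte("hello"),
        })
        if err != nil {
            log.Fatal(err)
        }

        dec, err := client.Decrypt(ctx, &kms.DecryptInput{
            CiphertextBlob: enc.CiphertextBlob,
        })
        if err != nil {
            log.Fatal(err)
        }
        fmt.Printf("%s\n", dec.Plaintext)
    }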
Package ssm provides the API client, operations, and parameter types for Amazon Simple Systems Manager (SSM). Amazon Web Services Systems Manager is the operations hub for your Amazon Web Services applications and resources and a secure end-to-end management solution for hybrid cloud environments that enables safe and secure operations at scale. This reference is intended to be used with the Amazon Web Services Systems Manager User Guide (https://docs.aws.amazon.com/systems-manager/latest/userguide/) . To get started, see Setting up Amazon Web Services Systems Manager (https://docs.aws.amazon.com/systems-manager/latest/userguide/systems-manager-setting-up.html) . Related resources
Package iam provides the API client, operations, and parameter types for AWS Identity and Access Management. Identity and Access Management Identity and Access Management (IAM) is a web service for securely controlling access to Amazon Web Services services. With IAM, you can centrally manage users, security credentials such as access keys, and permissions that control which Amazon Web Services resources users and applications can access. For more information about IAM, see Identity and Access Management (IAM) (http://aws.amazon.com/iam/) and the Identity and Access Management User Guide (https://docs.aws.amazon.com/IAM/latest/UserGuide/) .
Package apigateway provides the API client, operations, and parameter types for Amazon API Gateway. Amazon API Gateway Amazon API Gateway helps developers deliver robust, secure, and scalable mobile and web application back ends. API Gateway allows developers to securely connect mobile and web applications to APIs that run on Lambda, Amazon EC2, or other publicly addressable web services that are hosted outside of AWS.
Package securityhub provides the API client, operations, and parameter types for AWS SecurityHub. Security Hub provides you with a comprehensive view of the security state of your Amazon Web Services environment and resources. It also provides you with the readiness status of your environment based on controls from supported security standards. Security Hub collects security data from Amazon Web Services accounts, services, and integrated third-party products and helps you analyze security trends in your environment to identify the highest priority security issues. For more information about Security Hub, see the Security Hub User Guide (https://docs.aws.amazon.com/securityhub/latest/userguide/what-is-securityhub.html) . When you use operations in the Security Hub API, the requests are executed only in the Amazon Web Services Region that is currently active or in the specific Amazon Web Services Region that you specify in your request. Any configuration or settings change that results from the operation is applied only to that Region. To make the same change in other Regions, run the same command for each Region in which you want to apply the change. For example, if your Region is set to us-west-2 , when you use CreateMembers to add a member account to Security Hub, the association of the member account with the administrator account is created only in the us-west-2 Region. Security Hub must be enabled for the member account in the same Region that the invitation was sent from. The following throttling limits apply to using Security Hub API operations.
Package memguard implements a secure software enclave for the storage of sensitive information in memory. There are two main container objects exposed in this API. Enclave objects encrypt data and store the ciphertext whereas LockedBuffers are more like guarded memory allocations. There is a limit on the maximum number of LockedBuffer objects that can exist at any one time, imposed by the system's mlock limits. There is no limit on Enclaves. The general workflow is to store sensitive information in Enclaves when it is not immediately needed and decrypt it when and where it is needed. After use, the LockedBuffer should be destroyed. If you need access to the data inside a LockedBuffer in a type not covered by any methods provided by this API, you can type-cast the allocation's memory to whatever type you want. This is of course an unsafe operation and so care must be taken to ensure that the cast is valid and does not result in memory unsafety. Further examples of code and interesting use-cases can be found in the examples subpackage. Several functions exist to make the mass purging of data very easy. It is recommended to make use of them when appropriate. Core dumps are disabled by default. If you absolutely require them, you can enable them by using unix.Setrlimit to set RLIMIT_CORE to an appropriate value.
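A brief, hedged sketch of the Enclave/LockedBuffer workflow described above:

    package main

    import (
        "fmt"

        "github.com/awnumar/memguard"
    )

    func main() {
        // Purge the session and wipe sensitive buffers on interrupt/exit.
        memguard.CatchInterrupt()
        defer memguard.Purge()

        // Seal a secret into an Enclave while it is not immediately needed.
        enclave := memguard.NewEnclave([]byte("my secret value"))

        // Decrypt it into a LockedBuffer only when it is needed.
        buf, err := enclave.Open()
        if err != nil {
            memguard.SafePanic(err)
        }
        defer buf.Destroy()

        fmt.Println("secret length:", len(buf.Bytes()))
    }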
Package csrf (gorilla/csrf) provides Cross Site Request Forgery (CSRF) prevention middleware for Go web applications & services. It includes: * The `csrf.Protect` middleware/handler provides CSRF protection on routes attached to a router or a sub-router. * A `csrf.Token` function that provides the token to pass into your response, whether that be an HTML form or a JSON response body. * ... and a `csrf.TemplateField` helper that you can pass into your `html/template` templates to replace a `{{ .csrfField }}` template tag with a hidden input field. gorilla/csrf is easy to use: add the middleware to individual handlers with the below: ... and then collect the token with `csrf.Token(r)` before passing it to the template, JSON body or HTTP header (you pick!). gorilla/csrf inspects the form body (first) and HTTP headers (second) on subsequent POST/PUT/PATCH/DELETE/etc. requests for the token. Note that the authentication key passed to `csrf.Protect([]byte(key))` should be 32 bytes long and persist across application restarts. Generating a random key won't allow you to authenticate existing cookies and will break your CSRF validation. Here's the common use-case: HTML forms you want to provide CSRF protection for, in order to protect against malicious POST requests being made: Note that the CSRF middleware will (by necessity) consume the request body if the token is passed via POST form values. If you need to consume this in your handler, insert your own middleware earlier in the chain to capture the request body. You can also send the CSRF token in the response header. This approach is useful if you're using a front-end JavaScript framework like Ember or Angular, or are providing a JSON API: If you're writing a client that's supposed to mimic browser behavior, make sure to send back the CSRF cookie (the default name is _gorilla_csrf, but this can be changed with the CookieName Option) along with either the X-CSRF-Token header or the gorilla.csrf.Token form field. In addition: getting CSRF protection right is important, so here's some background: * This library generates unique-per-request (masked) tokens as a mitigation against the BREACH attack (http://breachattack.com/). * The 'base' (unmasked) token is stored in the session, which means that multiple browser tabs won't cause a user problems as their per-request token is compared with the base token. * Operates on a "whitelist only" approach where safe (non-mutating) HTTP methods (GET, HEAD, OPTIONS, TRACE) are the *only* methods where token validation is not enforced. * The design is based on the battle-tested Django (https://docs.djangoproject.com/en/1.8/ref/csrf/) and Ruby on Rails (http://api.rubyonrails.org/classes/ActionController/RequestForgeryProtection.html) approaches. * Cookies are authenticated and based on the securecookie (https://github.com/gorilla/securecookie) library. They're also Secure (issued over HTTPS only) and are HttpOnly by default, because sane defaults are important. * Go's `crypto/rand` library is used to generate the 32 byte (256 bit) tokens and the one-time-pad used for masking them. This library does not seek to be adventurous.
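A hedged sketch of wiring the middleware into a gorilla/mux router; the key below is a placeholder and must be replaced with a persistent 32-byte secret:

    package main

    import (
        "net/http"

        "github.com/gorilla/csrf"
        "github.com/gorilla/mux"
    )

    func main() {
        r := mux.NewRouter()
        r.HandleFunc("/signup", func(w http.ResponseWriter, req *http.Request) {
            // Expose the token to a JSON/SPA client via a response header...
            w.Header().Set("X-CSRF-Token", csrf.Token(req))
            // ...or inject csrf.TemplateField(req) into an HTML form template.
        })

        // Placeholder key: use a persistent 32-byte secret in production.
        // Cookies are Secure by default, so this assumes the site is served
        // over HTTPS.
        CSRF := csrf.Protect([]byte("32-byte-long-auth-key-goes-here!"))
        http.ListenAndServe(":8000", CSRF(r))
    }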
Package neptune provides the API client, operations, and parameter types for Amazon Neptune. Amazon Neptune Amazon Neptune is a fast, reliable, fully-managed graph database service that makes it easy to build and run applications that work with highly connected datasets. The core of Amazon Neptune is a purpose-built, high-performance graph database engine optimized for storing billions of relationships and querying the graph with millisecond latency. Amazon Neptune supports the popular graph models Property Graph and W3C's RDF, and their respective query languages Apache TinkerPop Gremlin and SPARQL, allowing you to easily build queries that efficiently navigate highly connected datasets. Neptune powers graph use cases such as recommendation engines, fraud detection, knowledge graphs, drug discovery, and network security. This interface reference for Amazon Neptune contains documentation for a programming or command line interface you can use to manage Amazon Neptune. Note that Amazon Neptune is asynchronous, which means that some interfaces might require techniques such as polling or callback functions to determine when a command has been applied. In this reference, the parameter descriptions indicate whether a command is applied immediately, on the next instance reboot, or during the maintenance window. The reference structure is as follows, and we list some related topics from the user guide below.
Package configservice provides the API client, operations, and parameter types for AWS Config. Config Config provides a way to keep track of the configurations of all the Amazon Web Services resources associated with your Amazon Web Services account. You can use Config to get the current and historical configurations of each Amazon Web Services resource and also to get information about the relationship between the resources. An Amazon Web Services resource can be an Amazon Elastic Compute Cloud (Amazon EC2) instance, an Elastic Block Store (EBS) volume, an elastic network interface (ENI), or a security group. For a complete list of resources currently supported by Config, see Supported Amazon Web Services resources (https://docs.aws.amazon.com/config/latest/developerguide/resource-config-reference.html#supported-resources) . You can access and manage Config through the Amazon Web Services Management Console, the Amazon Web Services Command Line Interface (Amazon Web Services CLI), the Config API, or the Amazon Web Services SDKs for Config. This reference guide contains documentation for the Config API and the Amazon Web Services CLI commands that you can use to manage Config. The Config API uses the Signature Version 4 protocol for signing requests. For more information about how to sign a request with this protocol, see Signature Version 4 Signing Process (https://docs.aws.amazon.com/general/latest/gr/signature-version-4.html) . For detailed information about Config features and their associated actions or commands, as well as how to work with the Amazon Web Services Management Console, see What Is Config (https://docs.aws.amazon.com/config/latest/developerguide/WhatIsConfig.html) in the Config Developer Guide.
Package graphql-go-tools is a library for creating GraphQL services using the Go programming language. GraphQL is a query language for APIs and a runtime for fulfilling those queries with your existing data. GraphQL provides a complete and understandable description of the data in your API, gives clients the power to ask for exactly what they need and nothing more, makes it easier to evolve APIs over time, and enables powerful developer tools. Source: https://graphql.org This library is intended to be a set of low level building blocks to write high performance and secure GraphQL applications. Use cases could range from writing layer seven GraphQL proxies, firewalls, caches etc. You would usually not use this library to write a GraphQL server yourself but to build tools for the GraphQL ecosystem. To achieve this goal the library has zero dependencies in its core functionality. It has a full implementation of the GraphQL AST and supports lexing, parsing, validation, normalization, introspection, query planning as well as query execution, etc. With the execution package it's possible to write a fully functional GraphQL server that is capable of mediating between various protocols and formats. In its current state you can use the following DataSources to resolve fields: - Static data (embed static data into a schema to extend a field in a simple way) - HTTP JSON APIs (combine multiple Restful APIs into one single GraphQL Endpoint, nesting is possible) - GraphQL APIs (you can combine multiple GraphQL APIs into one single GraphQL Endpoint, nesting is possible) - Webassembly/WASM Lambdas (e.g. resolve a field using a Rust lambda) If you're looking for a ready-to-use solution that has all this functionality packaged as a Gateway have a look at: https://github.com/jensneuse/graphql-gateway Created by Jens Neuse
Package guardduty provides the API client, operations, and parameter types for Amazon GuardDuty. Amazon GuardDuty is a continuous security monitoring service that analyzes and processes the following data sources: VPC flow logs, Amazon Web Services CloudTrail management event logs, CloudTrail S3 data event logs, EKS audit logs, DNS logs, and Amazon EBS volume data. It uses threat intelligence feeds, such as lists of malicious IPs and domains, and machine learning to identify unexpected, potentially unauthorized, and malicious activity within your Amazon Web Services environment. This can include issues like escalations of privileges, uses of exposed credentials, communication with malicious IPs or domains, or the presence of malware on your Amazon EC2 instances and container workloads. For example, GuardDuty can detect compromised EC2 instances and container workloads serving malware, or mining bitcoin. GuardDuty also monitors Amazon Web Services account access behavior for signs of compromise, such as unauthorized infrastructure deployments like EC2 instances deployed in a Region that has never been used, or unusual API calls like a password policy change to reduce password strength. GuardDuty informs you about the status of your Amazon Web Services environment by producing security findings that you can view in the GuardDuty console or through Amazon EventBridge. For more information, see the Amazon GuardDuty User Guide (https://docs.aws.amazon.com/guardduty/latest/ug/what-is-guardduty.html) .
Package kyber provides a toolbox of advanced cryptographic primitives, for applications that need more than straightforward signing and encryption. This top level package defines the interfaces to cryptographic primitives designed to be independent of specific cryptographic algorithms, to facilitate upgrading applications to new cryptographic algorithms or switching to alternative algorithms for experimentation purposes. This toolkit's public-key crypto API includes a kyber.Group interface supporting a broad class of group-based public-key primitives including DSA-style integer residue groups and elliptic curve groups. Users of this API can write higher-level crypto algorithms such as zero-knowledge proofs without knowing or caring exactly what kind of group, let alone which precise security parameters or elliptic curves, are being used. The kyber.Group interface supports the standard algebraic operations on group elements and scalars that nontrivial public-key algorithms tend to rely on. The interface uses additive group terminology typical for elliptic curves, such that point addition is homomorphically equivalent to adding their (potentially secret) scalar multipliers. But the API and its operations apply equally well to DSA-style integer groups. As a trivial example, generating a public/private keypair is as simple as: The first statement picks a private key (Scalar) from the suite's source of cryptographic random or pseudo-random bits, while the second performs elliptic curve scalar multiplication of the curve's standard base point (indicated by the 'nil' argument to Mul) by the scalar private key 'a'. Similarly, computing a Diffie-Hellman shared secret using Alice's private key 'a' and Bob's public key 'B' can be done via: Note that we use 'Mul' rather than 'Exp' here because the library uses the additive-group terminology common for elliptic curve crypto, rather than the multiplicative-group terminology of traditional integer groups - but the two are semantically equivalent and the interface itself works for both elliptic curve and integer groups. Various sub-packages provide several specific implementations of these cryptographic interfaces. In particular, the 'group/mod' sub-package provides implementations of modular integer groups underlying conventional DSA-style algorithms. The `group/nist` package provides NIST-standardized elliptic curves built on the Go crypto library. The 'group/edwards25519' sub-package provides the kyber.Group interface using the popular Ed25519 curve. Other sub-packages build more interesting high-level cryptographic tools atop these primitive interfaces, including: - share: Polynomial commitment and verifiable Shamir secret splitting for implementing verifiable 't-of-n' threshold cryptographic schemes. This can be used to encrypt a message so that any 2 out of 3 receivers must work together to decrypt it, for example. - proof: An implementation of the general Camenisch/Stadler framework for discrete logarithm knowledge proofs. This system supports both interactive and non-interactive proofs of a wide variety of statements such as, "I know the secret x associated with public key X or I know the secret y associated with public key Y", without revealing anything about either secret or even which branch of the "or" clause is true. - sign: The sign directory contains different signature schemes.
- sign/anon provides anonymous and pseudonymous public-key encryption and signing, where the sender of a signed message or the receiver of an encrypted message is defined as an explicit anonymity set containing several public keys rather than just one. For example, a member of an organization's board of trustees might prove to be a member of the board without revealing which member she is. - sign/cosi provides a collective signature algorithm, where a bunch of signers create a unique, compact and efficiently verifiable signature using the Schnorr signature as a basis. - sign/eddsa provides a kyber-native implementation of the EdDSA signature scheme. - sign/schnorr provides a basic vanilla Schnorr signature scheme implementation. - shuffle: Verifiable cryptographic shuffles of ElGamal ciphertexts, which can be used to implement (for example) voting or auction schemes that keep the sources of individual votes or bids private without anyone having to trust more than one of the shuffler(s) to shuffle votes/bids honestly. As should be obvious, this library is intended to be used by developers who are at least moderately knowledgeable about cryptography. If you want a crypto library that makes it easy to implement "basic crypto" functionality correctly - i.e., plain public-key encryption and signing - then [NaCl secretbox](https://godoc.org/golang.org/x/crypto/nacl/secretbox) may be a better choice. This toolkit's purpose is to make it possible - and preferably easy - to do slightly more interesting things that most current crypto libraries don't support effectively. The one existing crypto library that this toolkit is probably most comparable to is the Charm rapid prototyping library for Python (https://charm-crypto.com/category/charm). This library incorporates and/or builds on existing code from a variety of sources, as documented in the relevant sub-packages. This library is offered as-is, and without a guarantee. It will need an independent security review before it should be considered ready for use in security-critical applications. If you integrate Kyber into your application it is YOUR RESPONSIBILITY to arrange for that audit. If you notice a possible security problem, please report it to dedis-security@epfl.ch.
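As a hedged sketch of the keypair and Diffie-Hellman examples referred to above, using the edwards25519 suite (the variable names follow the prose; this is illustrative, not canonical):

    package main

    import (
        "fmt"

        "go.dedis.ch/kyber/v3/group/edwards25519"
    )

    func main() {
        suite := edwards25519.NewBlakeSHA256Ed25519()

        // Alice's keypair: a private scalar and the matching public point.
        a := suite.Scalar().Pick(suite.RandomStream())
        A := suite.Point().Mul(a, nil)

        // Bob's keypair.
        b := suite.Scalar().Pick(suite.RandomStream())
        B := suite.Point().Mul(b, nil)

        // Both sides derive the same Diffie-Hellman shared secret.
        sharedA := suite.Point().Mul(a, B)
        sharedB := suite.Point().Mul(b, A)
        fmt.Println(sharedA.Equal(sharedB)) // true
    }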
Package rolesanywhere provides the API client, operations, and parameter types for IAM Roles Anywhere. Identity and Access Management Roles Anywhere provides a secure way for your workloads such as servers, containers, and applications that run outside of Amazon Web Services to obtain temporary Amazon Web Services credentials. Your workloads can use the same IAM policies and roles you have for native Amazon Web Services applications to access Amazon Web Services resources. Using IAM Roles Anywhere eliminates the need to manage long-term credentials for workloads running outside of Amazon Web Services. To use IAM Roles Anywhere, your workloads must use X.509 certificates issued by their certificate authority (CA). You register the CA with IAM Roles Anywhere as a trust anchor to establish trust between your public key infrastructure (PKI) and IAM Roles Anywhere. If you don't manage your own PKI system, you can use Private Certificate Authority to create a CA and then use that to establish trust with IAM Roles Anywhere. This guide describes the IAM Roles Anywhere operations that you can call programmatically. For more information about IAM Roles Anywhere, see the IAM Roles Anywhere User Guide (https://docs.aws.amazon.com/rolesanywhere/latest/userguide/introduction.html) .
Package inspector2 provides the API client, operations, and parameter types for Inspector2. Amazon Inspector is a vulnerability discovery service that automates continuous scanning for security vulnerabilities within your Amazon EC2 and Amazon ECR environments.
Package iot provides the API client, operations, and parameter types for AWS IoT. IoT IoT provides secure, bi-directional communication between Internet-connected devices (such as sensors, actuators, embedded devices, or smart appliances) and the Amazon Web Services cloud. You can discover your custom IoT-Data endpoint to communicate with, configure rules for data processing and integration with other services, organize resources associated with each device (Registry), configure logging, and create and manage policies and credentials to authenticate devices. The service endpoints that expose this API are listed in Amazon Web Services IoT Core Endpoints and Quotas (https://docs.aws.amazon.com/general/latest/gr/iot-core.html) . You must use the endpoint for the region that has the resources you want to access. The service name used by Amazon Web Services Signature Version 4 (https://docs.aws.amazon.com/general/latest/gr/signature-version-4.html) to sign the request is: execute-api. For more information about how IoT works, see the Developer Guide (https://docs.aws.amazon.com/iot/latest/developerguide/aws-iot-how-it-works.html) . For information about how to use the credentials provider for IoT, see Authorizing Direct Calls to Amazon Web Services Services (https://docs.aws.amazon.com/iot/latest/developerguide/authorizing-direct-aws.html) .
Package cognitoidentity provides the API client, operations, and parameter types for Amazon Cognito Identity. Amazon Cognito Federated Identities Amazon Cognito Federated Identities is a web service that delivers scoped temporary credentials to mobile devices and other untrusted environments. It uniquely identifies a device and supplies the user with a consistent identity over the lifetime of an application. Using Amazon Cognito Federated Identities, you can enable authentication with one or more third-party identity providers (Facebook, Google, or Login with Amazon) or an Amazon Cognito user pool, and you can also choose to support unauthenticated access from your app. Cognito delivers a unique identifier for each user and acts as an OpenID token provider trusted by AWS Security Token Service (STS) to access temporary, limited-privilege AWS credentials. For a description of the authentication flow from the Amazon Cognito Developer Guide see Authentication Flow (https://docs.aws.amazon.com/cognito/latest/developerguide/authentication-flow.html) . For more information see Amazon Cognito Federated Identities (https://docs.aws.amazon.com/cognito/latest/developerguide/cognito-identity.html) .
Package gosnowflake is a pure Go Snowflake driver for the database/sql package. Clients can use the database/sql package directly. For example: Use the Open() function to create a database handle with connection parameters: The Go Snowflake Driver supports the following connection syntaxes (or data source name (DSN) formats): where all parameters must be escaped or use Config and DSN to construct a DSN string. For information about account identifiers, see the Snowflake documentation (https://docs.snowflake.com/en/user-guide/admin-account-identifier.html). The following example opens a database handle with the Snowflake account named "my_account" under the organization named "my_organization", where the username is "jsmith", password is "mypassword", database is "mydb", schema is "testschema", and warehouse is "mywh": The connection string (DSN) can contain both connection parameters (described below) and session parameters (https://docs.snowflake.com/en/sql-reference/parameters.html). The following connection parameters are supported: account <string>: Specifies your Snowflake account, where "<string>" is the account identifier assigned to your account by Snowflake. For information about account identifiers, see the Snowflake documentation (https://docs.snowflake.com/en/user-guide/admin-account-identifier.html). If you are using a global URL, then append the connection group and ".global" (e.g. "<account_identifier>-<connection_group>.global"). The account identifier and the connection group are separated by a dash ("-"), as shown above. This parameter is optional if your account identifier is specified after the "@" character in the connection string. region <string>: DEPRECATED. You may specify a region, such as "eu-central-1", with this parameter. However, since this parameter is deprecated, it is best to specify the region as part of the account parameter. For details, see the description of the account parameter. database: Specifies the database to use by default in the client session (can be changed after login). schema: Specifies the database schema to use by default in the client session (can be changed after login). warehouse: Specifies the virtual warehouse to use by default for queries, loading, etc. in the client session (can be changed after login). role: Specifies the role to use by default for accessing Snowflake objects in the client session (can be changed after login). passcode: Specifies the passcode provided by Duo when using multi-factor authentication (MFA) for login. passcodeInPassword: false by default. Set to true if the MFA passcode is embedded in the login password. Appends the MFA passcode to the end of the password. loginTimeout: Specifies the timeout, in seconds, for login. The default is 60 seconds. The login request gives up after the timeout length if the HTTP response is success. authenticator: Specifies the authenticator to use for authenticating user credentials: To use the internal Snowflake authenticator, specify snowflake (Default). To authenticate through Okta, specify https://<okta_account_name>.okta.com (URL prefix for Okta). To authenticate using your IDP via a browser, specify externalbrowser. To authenticate via OAuth, specify oauth and provide an OAuth Access Token (see the token parameter below). application: Identifies your application to Snowflake Support. insecureMode: false by default. Set to true to bypass the Online Certificate Status Protocol (OCSP) certificate revocation check. 
IMPORTANT: Change the default value for testing or emergency situations only.

token: a token that can be used to authenticate. Should be used in conjunction with the "oauth" authenticator.

client_session_keep_alive: Set to true to have a heartbeat in the background every hour to keep the connection alive, so that the connection session never expires. Use this option with care, as it keeps the access open for as long as the process is alive.

ocspFailOpen: true by default. Set to false to make the OCSP check fail closed.

validateDefaultParameters: true by default. Set to false to disable the existence and privilege checks for the Database, Schema, Warehouse and Role when setting up the connection.

tracing: Specifies the logging level to be used. Set to error by default. Valid values are trace, debug, info, print, warning, error, fatal, panic.

All other parameters are interpreted as session parameters (https://docs.snowflake.com/en/sql-reference/parameters.html). For example, the TIMESTAMP_OUTPUT_FORMAT session parameter can be set by adding: A complete connection string looks similar to the following: Session-level parameters can also be set by using the SQL command "ALTER SESSION" (https://docs.snowflake.com/en/sql-reference/sql/alter-session.html). Alternatively, use the OpenWithConfig() function to create a database handle with the specified Config. The Go Snowflake Driver honors the environment variables HTTP_PROXY, HTTPS_PROXY and NO_PROXY for the forward proxy setting. NO_PROXY specifies which hostname endings should be allowed to bypass the proxy server, e.g. no_proxy=.amazonaws.com means that Amazon S3 access does not need to go through the proxy. NO_PROXY does not support wildcards. Each value specified should be either the end of a hostname (or a complete hostname), for example ".amazonaws.com" or "xy12345.snowflakecomputing.com", or an IP address, for example "192.196.1.15". If more than one value is specified, values should be separated by commas, for example: By default, the driver's built-in logger exposes logrus's FieldLogger and defaults to the INFO level. Users can use SetLogger in driver.go to set a customized logger for the gosnowflake package. To enable debug logging for the driver, call SetLogLevel("debug") on the SFLogger interface, as shown in the demo code at cmd/logger.go. To redirect the logs, use the SFLogger.SetOutput method. A specific query request ID can be set in the context and will be passed through in place of the default randomized request ID. For example: From 0.5.0, signal handling responsibility has moved to the applications. If you want to cancel a query/command with Ctrl+C, add an os.Interrupt trap to the context used to execute methods that take a context parameter (e.g. QueryContext, ExecContext). See cmd/selectmany.go for the full example. The Go Snowflake Driver now supports the Arrow data format for data transfers between Snowflake and the Golang client. The Arrow data format avoids extra conversions between binary and textual representations of the data. The Arrow data format can improve performance and reduce memory consumption in clients. Snowflake continues to support the JSON data format. The data format is controlled by the session-level parameter GO_QUERY_RESULT_FORMAT. To use JSON format, execute: The valid values for the parameter are: If the user attempts to set the parameter to an invalid value, an error is returned. The parameter name and the parameter value are case-insensitive.
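A minimal sketch of switching a session to the JSON result format as described above, assuming db is an already-opened *sql.DB handle for the snowflake driver:

    import "database/sql"

    // setJSONResultFormat switches the current session to the JSON result format.
    func setJSONResultFormat(db *sql.DB) error {
        _, err := db.Exec("ALTER SESSION SET GO_QUERY_RESULT_FORMAT = 'JSON'")
        return err
    }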
This parameter can be set only at the session level. Usage notes: The Arrow data format reduces rounding errors in floating point numbers. You might see slightly different values for floating point numbers when using Arrow format than when using JSON format. In order to take advantage of the increased precision, you must pass in the context.Context object provided by the WithHigherPrecision function when querying. Traditionally, the rows.Scan() method returned a string when a variable of type interface{} was passed in. Turning on the flag ENABLE_HIGHER_PRECISION via WithHigherPrecision will return the natural, expected data type as well. For some numeric data types, the driver can retrieve larger values when using the Arrow format than when using the JSON format. For example, using Arrow format allows the full range of SQL NUMERIC(38,0) values to be retrieved, while using JSON format allows only values in the range supported by the Golang int64 data type. Users should ensure that Golang variables are declared using the appropriate data type for the full range of values contained in the column. For an example, see below. When using the Arrow format, the driver supports more Golang data types and more ways to convert SQL values to those Golang data types. The table below lists the supported Snowflake SQL data types and the corresponding Golang data types. The columns are: The SQL data type. The default Golang data type that is returned when you use snowflakeRows.Scan() to read data from Arrow data format via an interface{}. The possible Golang data types that can be returned when you use snowflakeRows.Scan() to read data from Arrow data format directly. The default Golang data type that is returned when you use snowflakeRows.Scan() to read data from JSON data format via an interface{}. (All returned values are strings.) The standard Golang data type that is returned when you use snowflakeRows.Scan() to read data from JSON data format directly.
Go Data Types for Scan()

  SQL Data Type                      | ARROW: Go Data Types for Scan()                              | JSON: Default Go Data Type | JSON: Supported Go Data Types
  -----------------------------------+--------------------------------------------------------------+----------------------------+------------------------------
  BOOLEAN                            | bool                                                         | string                     | bool
  VARCHAR                            | string                                                       | string                     | string
  DOUBLE                             | float32, float64 [1], [2]                                    | string                     | float32, float64
  INTEGER that fits in int64         | int, int8, int16, int32, int64 [1], [2]                      | string                     | int, int8, int16, int32, int64
  INTEGER that doesn't fit in int64  | int, int8, int16, int32, int64, *big.Int [1], [2], [3], [4]  | string                     | error
  NUMBER(P, S) where S > 0           | float32, float64, *big.Float [1], [2], [3], [5]              | string                     | float32, float64
  DATE                               | time.Time                                                    | string                     | time.Time
  TIME                               | time.Time                                                    | string                     | time.Time
  TIMESTAMP_LTZ                      | time.Time                                                    | string                     | time.Time
  TIMESTAMP_NTZ                      | time.Time                                                    | string                     | time.Time
  TIMESTAMP_TZ                       | time.Time                                                    | string                     | time.Time
  BINARY                             | []byte                                                       | string                     | []byte
  ARRAY                              | string                                                       | string                     | string
  OBJECT                             | string                                                       | string                     | string
  VARIANT                            | string                                                       | string                     | string

(In the ARROW column, the default Go data type for Scan() into an interface{} and the supported Go data types for scanning directly are listed together, as they coincide.)

[1] Converting from a higher precision data type to a lower precision data type via the snowflakeRows.Scan() method can lose low bits (lose precision), lose high bits (completely change the value), or result in error.

[2] Attempting to convert from a higher precision data type to a lower precision data type via interface{} causes an error.

[3] Higher precision data types like *big.Int and *big.Float can be accessed by querying with a context returned by WithHigherPrecision().
[4] You cannot directly Scan() into the alternative data types via snowflakeRows.Scan(), but can convert to those data types by using .Int64()/.String()/.Uint64() methods. For an example, see below. [5] You cannot directly Scan() into the alternative data types via snowflakeRows.Scan(), but can convert to those data types by using .Float32()/.String()/.Float64() methods. For an example, see below. Note: SQL NULL values are converted to Golang nil values, and vice-versa. The following example shows how to retrieve very large values using the math/big package. This example retrieves a large INTEGER value to an interface and then extracts a big.Int value from that interface. If the value fits into an int64, then the code also copies the value to a variable of type int64. Note that a context that enables higher precision must be passed in with the query. If the variable named "rows" is known to contain a big.Int, then you can use the following instead of scanning into an interface and then converting to a big.Int: If the variable named "rows" contains a big.Int, then each of the following fails: Similar code and rules also apply to big.Float values. If you are not sure what data type will be returned, you can use code similar to the following to check the data type of the returned value: Binding allows a SQL statement to use a value that is stored in a Golang variable. Without binding, a SQL statement specifies values by specifying literals inside the statement. For example, the following statement uses the literal value "42" in an UPDATE statement: With binding, you can execute a SQL statement that uses a value that is inside a variable. For example: The "?" inside the "VALUES" clause specifies that the SQL statement uses the value from a variable. Binding data that involves time zones can require special handling. For details, see the section titled "Timestamps with Time Zones". Version 1.6.23 (and later) of the driver takes advantage of sql.Null types, which enables the proper handling of null parameters inside function calls, i.e.: Timestamp nullability had to be achieved by wrapping the sql.NullTime type, since Snowflake provides several date and time types that all map to the single Go time.Time type: Version 1.3.9 (and later) of the Go Snowflake Driver supports the ability to bind an array variable to a parameter in a SQL INSERT statement. You can use this technique to insert multiple rows in a single batch. As an example, the following code inserts rows into a table that contains integer, float, boolean, and string columns. The example binds arrays to the parameters in the INSERT statement. If the array contains SQL NULL values, use a slice of type []interface{}, which allows Golang nil values. This feature is available in version 1.6.12 (and later) of the driver. For example: For slices []interface{} containing time.Time values, a binding parameter flag is required for the preceding array variable in the Array() function. This feature is available in version 1.6.13 (and later) of the driver. For example: Note: For alternative ways to load data into the Snowflake database (including bulk loading using the COPY command), see Loading Data into Snowflake (https://docs.snowflake.com/en/user-guide-data-load.html). When you use array binding to insert a large number of values, the driver can improve performance by streaming the data (without creating files on the local machine) to a temporary stage for ingestion.
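As a rough sketch of the array binding described above, assuming db is an open *sql.DB and sf aliases the gosnowflake package (the table and column names here are made up for illustration):

    import (
        "database/sql"

        sf "github.com/snowflakedb/gosnowflake"
    )

    // insertBatch inserts all rows in one batch by binding whole slices to the
    // INSERT parameters via sf.Array().
    func insertBatch(db *sql.DB) error {
        ints := []int{1, 2, 3}
        floats := []float64{0.1, 2.3, 4.5}
        bools := []bool{true, false, true}
        strs := []string{"a", "b", "c"}
        _, err := db.Exec(
            "INSERT INTO example_tbl (c1, c2, c3, c4) VALUES (?, ?, ?, ?)",
            sf.Array(&ints), sf.Array(&floats), sf.Array(&bools), sf.Array(&strs),
        )
        return err
    }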
The driver streams the data to a temporary stage automatically when the number of values exceeds a threshold (no changes are needed to user code). In order for the driver to send the data to a temporary stage, the user must have the following privilege on the schema: If the user does not have this privilege, the driver falls back to sending the data with the query to the Snowflake database. In addition, the current database and schema for the session must be set. If these are not set, the CREATE TEMPORARY STAGE command executed by the driver can fail with the following error: For alternative ways to load data into the Snowflake database (including bulk loading using the COPY command), see Loading Data into Snowflake (https://docs.snowflake.com/en/user-guide-data-load.html). Go's database/sql package supports the ability to bind a parameter in a SQL statement to a time.Time variable. However, when the client binds data to send to the server, the driver cannot determine the correct Snowflake date/timestamp data type to associate with the binding parameter. For example: To resolve this issue, a binding parameter flag is introduced that associates any subsequent time.Time type with the DATE, TIME, TIMESTAMP_LTZ, TIMESTAMP_NTZ or BINARY data type. The above example could be rewritten as follows: The driver fetches TIMESTAMP_TZ (timestamp with time zone) data using the offset-based Location types, which represent a collection of time offsets in use in a geographical area, such as CET (Central European Time) or UTC (Coordinated Universal Time). The offset-based Location data is generated and cached when a Go Snowflake Driver application starts, and if the given offset is not in the cache, it is generated dynamically. Currently, Snowflake does not support the name-based Location types (e.g. "America/Los_Angeles"). For more information about Location types, see the Go documentation for https://golang.org/pkg/time/#Location. Internally, this feature leverages the []byte data type. As a result, BINARY data cannot be bound without the binding parameter flag. In the following example, sf is an alias for the gosnowflake package: The driver downloads a result set directly from cloud storage if it is large, shifting that workload from the Snowflake database to the clients for scalability. The download is performed asynchronously by a goroutine named "Chunk Downloader", so that the driver can fetch the next result set while the application consumes the current one. The application may change the number of result set chunk downloaders if required. Note that this does not reduce the memory footprint by itself; consider the custom JSON decoder below. Custom JSON Decoder for Parsing Result Set (Experimental): The application may have the driver use a custom JSON decoder that incrementally parses the result set as follows. This option can reduce the memory footprint to a half or even a quarter, but it can significantly degrade performance depending on the environment. The test cases running on a Travis Ubuntu box show about one fifth of the memory footprint but run about four times slower. Be cautious when using this option. The Go Snowflake Driver supports JWT (JSON Web Token) authentication. To enable this feature, construct the DSN with the fields "authenticator=SNOWFLAKE_JWT&privateKey=<your_private_key>", or use a Config structure specifying: The <your_private_key> should be a base64 URL-encoded PKCS8 RSA private key string. One way to encode a byte slice in base64 URL format is with the base64.URLEncoding.EncodeToString() function.
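A minimal sketch of such a Config for key-pair (JWT) authentication, assuming the RSA private key has already been loaded and decrypted; the account and user values are the placeholders used earlier:

    import (
        "crypto/rsa"
        "database/sql"

        sf "github.com/snowflakedb/gosnowflake"
    )

    // openWithJWT builds a DSN from a Config that uses key-pair (JWT) authentication.
    func openWithJWT(privKey *rsa.PrivateKey) (*sql.DB, error) {
        cfg := &sf.Config{
            Account:       "my_organization-my_account", // placeholder account identifier
            User:          "jsmith",                     // placeholder user
            Authenticator: sf.AuthTypeJwt,
            PrivateKey:    privKey,
        }
        dsn, err := sf.DSN(cfg)
        if err != nil {
            return nil, err
        }
        return sql.Open("snowflake", dsn)
    }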
On the server side, you can alter the public key with the SQL command: The <your_public_key> should be a base64 standard-encoded PKI public key string. One way to encode a byte slice in base64 standard format is with the base64.StdEncoding.EncodeToString() function. To generate a valid key pair, you can execute the following commands in the shell: Note: As of February 2020, Golang's official library does not support passcode-encrypted PKCS8 private keys. For security purposes, Snowflake highly recommends that you store the passcode-encrypted private key on disk and decrypt the key in your application using a library you trust. JWT tokens are recreated on each retry and they are valid (`exp` claim) for `jwtTimeout` seconds. Each retry timeout is configured by `jwtClientTimeout`. Retries are limited by the total time of `loginTimeout`. The driver also supports authenticating via an external browser. When a connection is created, the driver opens a browser window and asks the user to sign in. To enable this feature, construct the DSN with the field "authenticator=EXTERNALBROWSER" or use a Config structure with the following Authenticator specified: The external browser authentication implements a timeout mechanism. This prevents the driver from hanging indefinitely when the browser window is closed or not responding. The timeout defaults to 120s and can be changed by setting the DSN field "externalBrowserTimeout=240" (time in seconds) or by using a Config structure with the following ExternalBrowserTimeout specified: This feature is available in version 1.3.8 or later of the driver. By default, Snowflake returns an error for queries issued with multiple statements. This restriction helps protect against SQL Injection attacks (https://en.wikipedia.org/wiki/SQL_injection). The multi-statement feature allows users to skip this restriction and execute multiple SQL statements through a single Golang function call. However, this opens up the possibility of SQL injection, so it should be used carefully. The risk can be reduced by specifying the exact number of statements to be executed, which makes it more difficult to inject a statement by appending it. More details are below. The Go Snowflake Driver provides two functions that can execute multiple SQL statements in a single call: To compose a multi-statement query, simply create a string that contains all the queries, separated by semicolons, in the order in which the statements should be executed. To protect against SQL Injection attacks while using the multi-statement feature, pass a Context that specifies the number of statements in the string. For example: When multiple queries are executed by a single call to QueryContext(), multiple result sets are returned. After you process the first result set, get the next result set (for the next SQL statement) by calling NextResultSet(). The following pseudo-code shows how to process multiple result sets: The function db.ExecContext() returns a single result, which is the sum of the number of rows changed by each individual statement. For example, if your multi-statement query executed two UPDATE statements, each of which updated 10 rows, then the result returned would be 20. Individual row counts for individual statements are not available. The following code shows how to retrieve the result of a multi-statement query executed through db.ExecContext(): Note: Because a multi-statement ExecContext() returns a single value, you cannot detect offsetting errors.
For example, suppose you expected the return value to be 20 because you expected each UPDATE statement to update 10 rows. If one UPDATE statement updated 15 rows and the other UPDATE statement updated only 5 rows, the total would still be 20. You would see no indication that the UPDATE statements had not functioned as expected. The ExecContext() function does not return an error if passed a query (e.g. a SELECT statement). However, it still returns only a single value, not a result set, so using it to execute queries (or a mix of queries and non-query statements) is impractical. The QueryContext() function does not return an error if passed non-query statements (e.g. DML). The function returns a result set for each statement, whether or not the statement is a query. For each non-query statement, the result set contains a single row that contains a single column; the value is the number of rows changed by the statement. If you want to execute a mix of query and non-query statements (e.g. a mix of SELECT and DML statements) in a multi-statement query, use QueryContext(). You can retrieve the result sets for the queries, and you can retrieve or ignore the row counts for the non-query statements. Note: PUT statements are not supported for multi-statement queries. If a SQL statement passed to ExecContext() or QueryContext() fails to compile or execute, that statement is aborted, and subsequent statements are not executed. Any statements prior to the aborted statement are unaffected. For example, if the statements below are run as one multi-statement query, the multi-statement query fails on the third statement, and an error is returned. If you then query the contents of the table named "test", the values 1 and 2 would be present. When using the QueryContext() and ExecContext() functions, Golang code can check for errors in the usual way. For example: Preparing statements and using bind variables are also not supported for multi-statement queries. The Go Snowflake Driver supports asynchronous execution of SQL statements. Asynchronous execution allows you to start executing a statement and then retrieve the result later without being blocked while waiting. While waiting for the result of a SQL statement, you can perform other tasks, including executing other SQL statements. Most of the steps to execute an asynchronous query are the same as the steps to execute a synchronous query. However, there is an additional step, which is that you must call the WithAsyncMode() function to update your Context object to specify that asynchronous mode is enabled. In the code below, the call to "WithAsyncMode()" is specific to asynchronous mode. The rest of the code is compatible with both asynchronous mode and synchronous mode. The function db.QueryContext() returns an object of type snowflakeRows regardless of whether the query is synchronous or asynchronous. However: The call to the Next() function of snowflakeRows is always synchronous (i.e. blocking). If the query has not yet completed and the snowflakeRows object (named "rows" in this example) has not been filled in yet, then rows.Next() waits until the result set has been filled in. More generally, calls to any Golang SQL API function implemented in snowflakeRows or snowflakeResult are blocking calls, and wait if results are not yet available. (Examples of other synchronous calls include: snowflakeRows.Err(), snowflakeRows.Columns(), snowflakeRows.ColumnTypes(), snowflakeRows.Scan(), and snowflakeResult.RowsAffected().)
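A minimal sketch of that pattern, assuming db is an open *sql.DB and sf aliases the gosnowflake package (a trivial SELECT stands in for a real query):

    import (
        "context"
        "database/sql"

        sf "github.com/snowflakedb/gosnowflake"
    )

    // queryAsync starts the query in asynchronous mode; rows.Next() blocks until
    // the result set has been filled in.
    func queryAsync(db *sql.DB) error {
        ctx := sf.WithAsyncMode(context.Background())
        rows, err := db.QueryContext(ctx, "SELECT 1")
        if err != nil {
            return err
        }
        defer rows.Close()
        var v int
        for rows.Next() {
            if err := rows.Scan(&v); err != nil {
                return err
            }
        }
        return rows.Err()
    }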
Because the example code above executes only one query and no other activity, there is no significant difference in behavior between asynchronous and synchronous execution. The differences become significant if, for example, you want to perform some other activity after the query starts and before it completes. The example code below starts a query, which runs in the background, and then retrieves the results later. This example uses small SELECT statements that do not retrieve enough data to require asynchronous handling. However, the technique works for larger data sets, and for situations where the programmer might want to do other work after starting the queries and before retrieving the results. For a more elaborate example, please see cmd/async/async.go. The Go Snowflake Driver supports the PUT and GET commands. The PUT command copies a file from a local computer (the computer where the Golang client is running) to a stage on the cloud platform. The GET command copies data files from a stage on the cloud platform to a local computer. See the following for information on the syntax and supported parameters: ## Using PUT The following example shows how to run a PUT command by passing a string to the db.Query() function: "<local_file>" should include the file path as well as the name. Snowflake recommends using an absolute path rather than a relative path. For example: Different client platforms (e.g. Linux, Windows) have different path name conventions. Ensure that you specify path names appropriately. This is particularly important on Windows, which uses the backslash character as both an escape character and as a separator in path names. To send information from a stream (rather than a file), use code similar to the code below. (The ReplaceAll() function is needed on Windows to handle backslashes in the path to the file.) Note: PUT statements are not supported for multi-statement queries. ## Using GET The following example shows how to run a GET command by passing a string to the db.Query() function: "<local_file>" should include the file path as well as the name. Snowflake recommends using an absolute path rather than a relative path. For example: ## Specifying temporary directory for encryption and compression Putting and getting requires compression and/or encryption, which is done in the OS temporary directory. If you cannot use the default temporary directory for your OS, or you want to specify a different one, you can use the "tmpDirPath" DSN parameter. Remember to encode slashes. Example:
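As a sketch, a DSN like the following points the driver at /other/tmp, with the slashes percent-encoded (the credentials and account are the placeholders used earlier):

    import (
        "database/sql"

        _ "github.com/snowflakedb/gosnowflake"
    )

    // openWithTmpDir uses /other/tmp for compression and encryption scratch space;
    // note the percent-encoded slashes (%2F) in the tmpDirPath value.
    func openWithTmpDir() (*sql.DB, error) {
        return sql.Open("snowflake",
            "jsmith:mypassword@my_organization-my_account/mydb/testschema?tmpDirPath=%2Fother%2Ftmp")
    }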
Package glacier provides the API client, operations, and parameter types for Amazon Glacier. Amazon S3 Glacier (Glacier) is a storage solution for "cold data." Glacier is an extremely low-cost storage service that provides secure, durable, and easy-to-use storage for data backup and archival. With Glacier, customers can store their data cost effectively for months, years, or decades. Glacier also enables customers to offload the administrative burdens of operating and scaling storage to AWS, so they don't have to worry about capacity planning, hardware provisioning, data replication, hardware failure and recovery, or time-consuming hardware migrations. Glacier is a great storage choice when low storage cost is paramount and your data is rarely retrieved. If your application requires fast or frequent access to your data, consider using Amazon S3. For more information, see Amazon Simple Storage Service (Amazon S3) (http://aws.amazon.com/s3/) . You can store any kind of data in any format. There is no maximum limit on the total amount of data you can store in Glacier. If you are a first-time user of Glacier, we recommend that you begin by reading the following sections in the Amazon S3 Glacier Developer Guide:
Package inspector provides the API client, operations, and parameter types for Amazon Inspector. Amazon Inspector Amazon Inspector enables you to analyze the behavior of your AWS resources and to identify potential security issues. For more information, see Amazon Inspector User Guide (https://docs.aws.amazon.com/inspector/latest/userguide/inspector_introduction.html) .
Package ssoadmin provides the API client, operations, and parameter types for AWS Single Sign-On Admin. IAM Identity Center (successor to Single Sign-On) helps you securely create, or connect, your workforce identities and manage their access centrally across Amazon Web Services accounts and applications. IAM Identity Center is the recommended approach for workforce authentication and authorization in Amazon Web Services, for organizations of any size and type. IAM Identity Center uses the sso and identitystore API namespaces. This reference guide provides information on single sign-on operations which could be used for access management of Amazon Web Services accounts. For information about IAM Identity Center features, see the IAM Identity Center User Guide (https://docs.aws.amazon.com/singlesignon/latest/userguide/what-is.html) . Many operations in the IAM Identity Center APIs rely on identifiers for users and groups, known as principals. For more information about how to work with principals and principal IDs in IAM Identity Center, see the Identity Store API Reference (https://docs.aws.amazon.com/singlesignon/latest/IdentityStoreAPIReference/welcome.html) . Amazon Web Services provides SDKs that consist of libraries and sample code for various programming languages and platforms (Java, Ruby, .Net, iOS, Android, and more). The SDKs provide a convenient way to create programmatic access to IAM Identity Center and other Amazon Web Services services. For more information about the Amazon Web Services SDKs, including how to download and install them, see Tools for Amazon Web Services (http://aws.amazon.com/tools/) .
Package ec2instanceconnect provides the API client, operations, and parameter types for AWS EC2 Instance Connect. Amazon EC2 Instance Connect enables system administrators to publish one-time use SSH public keys to EC2, providing users a simple and secure way to connect to their instances.
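A minimal sketch with the AWS SDK for Go v2 (the instance ID, OS user, and key material are placeholders):

    import (
        "context"
        "log"

        "github.com/aws/aws-sdk-go-v2/aws"
        "github.com/aws/aws-sdk-go-v2/config"
        "github.com/aws/aws-sdk-go-v2/service/ec2instanceconnect"
    )

    func main() {
        ctx := context.Background()
        cfg, err := config.LoadDefaultConfig(ctx)
        if err != nil {
            log.Fatal(err)
        }
        client := ec2instanceconnect.NewFromConfig(cfg)

        // Publish a short-lived SSH public key to the instance; it can then be used
        // to open an SSH session as the given OS user for a brief window.
        _, err = client.SendSSHPublicKey(ctx, &ec2instanceconnect.SendSSHPublicKeyInput{
            InstanceId:     aws.String("i-0123456789abcdef0"),              // placeholder
            InstanceOSUser: aws.String("ec2-user"),                         // placeholder
            SSHPublicKey:   aws.String("ssh-ed25519 AAAAC3Example user@host"), // placeholder
        })
        if err != nil {
            log.Fatal(err)
        }
    }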
Package ram provides the API client, operations, and parameter types for AWS Resource Access Manager. This is the Resource Access Manager API Reference. This documentation provides descriptions and syntax for each of the actions and data types in RAM. RAM is a service that helps you securely share your Amazon Web Services resources with other Amazon Web Services accounts. If you use Organizations to manage your accounts, then you can share your resources with your entire organization or with organizational units (OUs). For supported resource types, you can also share resources with individual Identity and Access Management (IAM) roles and users. To learn more about RAM, see the following resources:
Package sio implements the DARE format. It provides an API for secure en/decrypting IO operations using io.Reader and io.Writer.
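For instance, a minimal sketch of encrypting a stream into the DARE format (assuming the github.com/minio/sio import path and a freshly generated 256-bit key):

    import (
        "crypto/rand"
        "io"

        "github.com/minio/sio"
    )

    // encryptStream wraps the plaintext read from src into the DARE format and
    // writes the ciphertext to dst, returning the key used.
    func encryptStream(dst io.Writer, src io.Reader) ([]byte, error) {
        key := make([]byte, 32) // in practice, derive and store this key properly
        if _, err := io.ReadFull(rand.Reader, key); err != nil {
            return nil, err
        }
        if _, err := sio.Encrypt(dst, src, sio.Config{Key: key}); err != nil {
            return nil, err
        }
        return key, nil
    }

Decryption is the mirror image: sio.Decrypt with the same Config and key.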
Package transfer provides the API client, operations, and parameter types for AWS Transfer Family. Transfer Family is a fully managed service that enables the transfer of files over the File Transfer Protocol (FTP), File Transfer Protocol over SSL (FTPS), or Secure Shell (SSH) File Transfer Protocol (SFTP) directly into and out of Amazon Simple Storage Service (Amazon S3) or Amazon EFS. Additionally, you can use Applicability Statement 2 (AS2) to transfer files into and out of Amazon S3. Amazon Web Services helps you seamlessly migrate your file transfer workflows to Transfer Family by integrating with existing authentication systems, and providing DNS routing with Amazon Route 53 so nothing changes for your customers and partners, or their applications. With your data in Amazon S3, you can use it with Amazon Web Services for processing, analytics, machine learning, and archiving. Getting started with Transfer Family is easy since there is no infrastructure to buy and set up.
Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved. Licensed under the Apache License, Version 2.0 (the "License"). You may not use this file except in compliance with the License. A copy of the License is located at or in the "license" file accompanying this file. This file is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. Package firecracker provides a library to interact with the Firecracker API. Firecracker is an open-source virtualization technology that is purpose-built for creating and managing secure, multi-tenant containers and functions-based services. See https://firecracker-microvm.github.io/ for more details. This library requires Go 1.11 and can be used with Go modules. BUG(aws): There are some Firecracker features that are not yet supported by the SDK. These are tracked as GitHub issues with the firecracker-feature label: https://github.com/firecracker-microvm/firecracker-go-sdk/issues?q=is%3Aissue+is%3Aopen+label%3Afirecracker-feature This library is licensed under the Apache 2.0 License.
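A rough, incomplete sketch of starting a microVM with this SDK (the paths are placeholders, and a real configuration also needs machine resources and at least a root drive, which are omitted here):

    import (
        "context"
        "log"

        firecracker "github.com/firecracker-microvm/firecracker-go-sdk"
    )

    func main() {
        ctx := context.Background()
        cfg := firecracker.Config{
            SocketPath:      "/tmp/firecracker.sock", // placeholder
            KernelImagePath: "/path/to/vmlinux",      // placeholder
            KernelArgs:      "console=ttyS0 reboot=k panic=1",
            // Machine resources and drives are omitted; a runnable VM needs both.
        }
        m, err := firecracker.NewMachine(ctx, cfg)
        if err != nil {
            log.Fatal(err)
        }
        if err := m.Start(ctx); err != nil {
            log.Fatal(err)
        }
    }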
Package apprunner provides the API client, operations, and parameter types for AWS App Runner. App Runner App Runner is an application service that provides a fast, simple, and cost-effective way to go directly from an existing container image or source code to a running service in the Amazon Web Services Cloud in seconds. You don't need to learn new technologies, decide which compute service to use, or understand how to provision and configure Amazon Web Services resources. App Runner connects directly to your container registry or source code repository. It provides an automatic delivery pipeline with fully managed operations, high performance, scalability, and security. For more information about App Runner, see the App Runner Developer Guide (https://docs.aws.amazon.com/apprunner/latest/dg/) . For release information, see the App Runner Release Notes (https://docs.aws.amazon.com/apprunner/latest/relnotes/) . To install the Software Development Kits (SDKs), Integrated Development Environment (IDE) Toolkits, and command line tools that you can use to access the API, see Tools for Amazon Web Services (http://aws.amazon.com/tools/) . Endpoints For a list of Region-specific endpoints that App Runner supports, see App Runner endpoints and quotas (https://docs.aws.amazon.com/general/latest/gr/apprunner.html) in the Amazon Web Services General Reference.
Package health provides the API client, operations, and parameter types for AWS Health APIs and Notifications. Health The Health API provides access to the Health information that appears in the Health Dashboard (https://health.aws.amazon.com/health/home) . You can use the API operations to get information about events that might affect your Amazon Web Services and resources. You must have a Business, Enterprise On-Ramp, or Enterprise Support plan from Amazon Web Services Support (http://aws.amazon.com/premiumsupport/) to use the Health API. If you call the Health API from an Amazon Web Services account that doesn't have a Business, Enterprise On-Ramp, or Enterprise Support plan, you receive a SubscriptionRequiredException error. For API access, you need an access key ID and a secret access key. Use temporary credentials instead of long-term access keys when possible. Temporary credentials include an access key ID, a secret access key, and a security token that indicates when the credentials expire. For more information, see Best practices for managing Amazon Web Services access keys (https://docs.aws.amazon.com/general/latest/gr/aws-access-keys-best-practices.html) in the Amazon Web Services General Reference. You can use the Health endpoint health.us-east-1.amazonaws.com (HTTPS) to call the Health API operations. Health supports a multi-Region application architecture and has two regional endpoints in an active-passive configuration. You can use the high availability endpoint example to determine which Amazon Web Services Region is active, so that you can get the latest information from the API. For more information, see Accessing the Health API (https://docs.aws.amazon.com/health/latest/ug/health-api.html) in the Health User Guide. For authentication of requests, Health uses the Signature Version 4 Signing Process (https://docs.aws.amazon.com/general/latest/gr/signature-version-4.html) . If your Amazon Web Services account is part of Organizations, you can use the Health organizational view feature. This feature provides a centralized view of Health events across all accounts in your organization. You can aggregate Health events in real time to identify accounts in your organization that are affected by an operational event or get notified of security vulnerabilities. Use the organizational view API operations to enable this feature and return event information. For more information, see Aggregating Health events (https://docs.aws.amazon.com/health/latest/ug/aggregate-events.html) in the Health User Guide. When you use the Health API operations to return Health events, see the following recommendations:
Package mailyak provides a simple interface for generating MIME compliant emails, and optionally sending them over SMTP. Both plain-text and HTML email body content is supported, and their types implement io.Writer allowing easy composition directly from templating engines, etc. Attachments are fully supported including inline attachments, with anything that implements io.Reader suitable as a source (like files on disk, in-memory buffers, etc). The raw MIME content can be retrieved using MimeBuf(), typically used with an API service such as Amazon SES that does not require using an SMTP interface. MailYak supports both plain-text SMTP (which is automatically upgraded to a secure connection with STARTTLS if supported by the SMTP server) and explicit TLS connections.
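A minimal sketch of composing and sending a message (the host, credentials, and addresses are placeholders; the import path assumes the v3 module):

    import (
        "net/smtp"

        "github.com/domodwyer/mailyak/v3"
    )

    func sendExample() error {
        // New takes the SMTP host:port and an smtp.Auth; the connection is upgraded
        // with STARTTLS automatically when the server supports it.
        mail := mailyak.New("mail.example.com:25",
            smtp.PlainAuth("", "user", "pass", "mail.example.com"))

        mail.To("dom@example.com")
        mail.From("jsmith@example.com")
        mail.FromName("J. Smith")
        mail.Subject("Business proposition")
        mail.Plain().Set("Plain text body")
        mail.HTML().Set("<h1>HTML body</h1>")

        return mail.Send()
    }

To hand the raw MIME content to an API such as Amazon SES instead, call MimeBuf() rather than Send().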
Package datasync provides the API client, operations, and parameter types for AWS DataSync. DataSync DataSync is an online data movement and discovery service that simplifies data migration and helps you quickly, easily, and securely transfer your file or object data to, from, and between Amazon Web Services storage services. This API interface reference includes documentation for using DataSync programmatically. For complete information, see the DataSync User Guide (https://docs.aws.amazon.com/datasync/latest/userguide/what-is-datasync.html) .
Package ivschat provides the API client, operations, and parameter types for Amazon Interactive Video Service Chat. Introduction The Amazon IVS Chat control-plane API enables you to create and manage Amazon IVS Chat resources. You also need to integrate with the Amazon IVS Chat Messaging API (https://docs.aws.amazon.com/ivs/latest/chatmsgapireference/chat-messaging-api.html) , to enable users to interact with chat rooms in real time. The API is an AWS regional service. For a list of supported regions and Amazon IVS Chat HTTPS service endpoints, see the Amazon IVS Chat information on the Amazon IVS page (https://docs.aws.amazon.com/general/latest/gr/ivs.html) in the AWS General Reference. Notes on terminology: Resources The following resources are part of Amazon IVS Chat: Tagging A tag is a metadata label that you assign to an AWS resource. A tag comprises a key and a value, both set by you. For example, you might set a tag as topic:nature to label a particular video category. See Tagging AWS Resources (https://docs.aws.amazon.com/general/latest/gr/aws_tagging.html) for more information, including restrictions that apply to tags and "Tag naming limits and requirements"; Amazon IVS Chat has no service-specific constraints beyond what is documented there. Tags can help you identify and organize your AWS resources. For example, you can use the same tag for different resources to indicate that they are related. You can also use tags to manage access (see Access Tags (https://docs.aws.amazon.com/IAM/latest/UserGuide/access_tags.html) ). The Amazon IVS Chat API has these tag-related endpoints: TagResource , UntagResource , and ListTagsForResource . The following resource supports tagging: Room. At most 50 tags can be applied to a resource. API Access Security Your Amazon IVS Chat applications (service applications and clients) must be authenticated and authorized to access Amazon IVS Chat resources. Note the differences between these concepts: Users (viewers) connect to a room using secure access tokens that you create using the CreateChatToken endpoint through the AWS SDK. You call CreateChatToken for every user’s chat session, passing identity and authorization information about the user. Signing API Requests HTTP API requests must be signed with an AWS SigV4 signature using your AWS security credentials. The AWS Command Line Interface (CLI) and the AWS SDKs take care of signing the underlying API calls for you. However, if your application calls the Amazon IVS Chat HTTP API directly, it’s your responsibility to sign the requests. You generate a signature using valid AWS credentials for an IAM role that has permission to perform the requested action. For example, DeleteMessage requests must be made using an IAM role that has the ivschat:DeleteMessage permission. For more information: Amazon Resource Names (ARNs) ARNs uniquely identify AWS resources. An ARN is required when you need to specify a resource unambiguously across all of AWS, such as in IAM policies and API calls. For more information, see Amazon Resource Names (https://docs.aws.amazon.com/general/latest/gr/aws-arns-and-namespaces.html) in the AWS General Reference. Messaging Endpoints Chat Token Endpoint Room Endpoints Logging Configuration Endpoints Tags Endpoints All the above are HTTP operations. There is a separate messaging API for managing Chat resources; see the Amazon IVS Chat Messaging API Reference (https://docs.aws.amazon.com/ivs/latest/chatmsgapireference/chat-messaging-api.html) .
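A minimal sketch of creating a chat token for a user with the AWS SDK for Go v2 (the room identifier and user ID are placeholders):

    import (
        "context"
        "fmt"
        "log"

        "github.com/aws/aws-sdk-go-v2/aws"
        "github.com/aws/aws-sdk-go-v2/config"
        "github.com/aws/aws-sdk-go-v2/service/ivschat"
    )

    func main() {
        ctx := context.Background()
        cfg, err := config.LoadDefaultConfig(ctx)
        if err != nil {
            log.Fatal(err)
        }
        client := ivschat.NewFromConfig(cfg)

        // One token per user chat session; the user ID lets messages be
        // attributed and moderated.
        out, err := client.CreateChatToken(ctx, &ivschat.CreateChatTokenInput{
            RoomIdentifier: aws.String("arn:aws:ivschat:us-west-2:123456789012:room/AbCdef123456"), // placeholder
            UserId:         aws.String("user-1234"),                                                // placeholder
        })
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println(aws.ToString(out.Token))
    }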
Package imagebuilder provides the API client, operations, and parameter types for EC2 Image Builder. EC2 Image Builder is a fully managed Amazon Web Services service that makes it easier to automate the creation, management, and deployment of customized, secure, and up-to-date "golden" server images that are pre-installed and pre-configured with software and settings to meet specific IT standards.
Package storagegateway provides the API client, operations, and parameter types for AWS Storage Gateway. Storage Gateway Service Storage Gateway is the service that connects an on-premises software appliance with cloud-based storage to provide seamless and secure integration between an organization's on-premises IT environment and the Amazon Web Services storage infrastructure. The service enables you to securely upload data to the Amazon Web Services Cloud for cost effective backup and rapid disaster recovery. Use the following links to get started using the Storage Gateway Service API Reference: Storage Gateway resource IDs are in uppercase. When you use these resource IDs with the Amazon EC2 API, EC2 expects resource IDs in lowercase. You must change your resource ID to lowercase to use it with the EC2 API. For example, in Storage Gateway the ID for a volume might be vol-AA22BB012345DAF670 . When you use this ID with the EC2 API, you must change it to vol-aa22bb012345daf670 . Otherwise, the EC2 API might not behave as expected. IDs for Storage Gateway volumes and Amazon EBS snapshots created from gateway volumes are changing to a longer format. Starting in December 2016, all new volumes and snapshots will be created with a 17-character string. Starting in April 2016, you will be able to use these longer IDs so you can test your systems with the new format. For more information, see Longer EC2 and EBS resource IDs (http://aws.amazon.com/ec2/faqs/#longer-ids) . For example, a volume Amazon Resource Name (ARN) with the longer volume ID format looks like the following: arn:aws:storagegateway:us-west-2:111122223333:gateway/sgw-12A3456B/volume/vol-1122AABBCCDDEEFFG . A snapshot ID with the longer ID format looks like the following: snap-78e226633445566ee . For more information, see Announcement: Heads-up – Longer Storage Gateway volume and snapshot IDs coming in 2016 (http://forums.aws.amazon.com/ann.jspa?annID=3557) .
Package opensearchserverless provides the API client, operations, and parameter types for OpenSearch Service Serverless. Use the Amazon OpenSearch Serverless API to create, configure, and manage OpenSearch Serverless collections and security policies. OpenSearch Serverless is an on-demand, pre-provisioned serverless configuration for Amazon OpenSearch Service. OpenSearch Serverless removes the operational complexities of provisioning, configuring, and tuning your OpenSearch clusters. It enables you to easily search and analyze petabytes of data without having to worry about the underlying infrastructure and data management. To learn more about OpenSearch Serverless, see What is Amazon OpenSearch Serverless? (https://docs.aws.amazon.com/opensearch-service/latest/developerguide/serverless-overview.html)
Package worklink provides the API client, operations, and parameter types for Amazon WorkLink. Amazon WorkLink is a cloud-based service that provides secure access to internal websites and web apps from iOS and Android phones. In a single step, your users, such as employees, can access internal websites as efficiently as they access any other public website. They enter a URL in their web browser, or choose a link to an internal website in an email. Amazon WorkLink authenticates the user's access and securely renders authorized internal web content in a secure rendering service in the AWS cloud. Amazon WorkLink doesn't download or store any internal web content on mobile devices.
Package grafana provides the API client, operations, and parameter types for Amazon Managed Grafana. Amazon Managed Grafana is a fully managed and secure data visualization service that you can use to instantly query, correlate, and visualize operational metrics, logs, and traces from multiple sources. Amazon Managed Grafana makes it easy to deploy, operate, and scale Grafana, a widely deployed data visualization tool that is popular for its extensible data support. With Amazon Managed Grafana, you create logically isolated Grafana servers called workspaces. In a workspace, you can create Grafana dashboards and visualizations to analyze your metrics, logs, and traces without having to build, package, or deploy any hardware to run Grafana servers.
This is a GSSAPI provider for Go, which expects to be initialized with the name of a dynamically loadable module which can be dlopen'd to get at a C language binding GSSAPI library. The GSSAPI concepts are explained in RFC 2743, "Generic Security Service Application Program Interface Version 2, Update 1". The API calls for C, together with a number of values for constants, come from RFC 2744, "Generic Security Service API Version 2 : C-bindings". Note that the basic GSSAPI bindings for C use the Latin-1 character set. UTF-8 interfaces are specified in RFC 5178, "Generic Security Service Application Program Interface (GSS-API) Internationalization and Domain-Based Service Names and Name Type", in 2008. Looking in 2013, this API does not appear to be provided by either MIT or Heimdal. This API applies solely to hostnames though, which can also be supplied in ACE encoding, bypassing the issue. For now, we assume that hostnames and usercodes are all ASCII-ish and pass UTF-8 into the library. Patches for more comprehensive support welcome.
Package appstream provides the API client, operations, and parameter types for Amazon AppStream. Amazon AppStream 2.0 This is the Amazon AppStream 2.0 API Reference. This documentation provides descriptions and syntax for each of the actions and data types in AppStream 2.0. AppStream 2.0 is a fully managed, secure application streaming service that lets you stream desktop applications to users without rewriting applications. AppStream 2.0 manages the AWS resources that are required to host and run your applications, scales automatically, and provides access to your users on demand. You can call the AppStream 2.0 API operations by using an interface VPC endpoint (interface endpoint). For more information, see Access AppStream 2.0 API Operations and CLI Commands Through an Interface VPC Endpoint (https://docs.aws.amazon.com/appstream2/latest/developerguide/access-api-cli-through-interface-vpc-endpoint.html) in the Amazon AppStream 2.0 Administration Guide. To learn more about AppStream 2.0, see the following resources:
Package golangNeo4jBoltDriver implements a driver for the Neo4J Bolt Protocol. The driver is compatible with Golang's sql.driver interface, but aims to implement a more complete featureset in line with what Neo4J and Bolt provides. As such, there are multiple interfaces the user can choose from. It's highly recommended that the user use the Neo4J-specific interfaces as they are more flexible and efficient than the provided sql.driver compatible methods. The interface tries to be consistent throughout. The sql.driver interfaces are standard, but the Neo4J-specific ones contain a naming convention of either "Neo" or "Pipeline". The "Neo" ones are the basic interfaces for making queries to Neo4j and it's expected that these would be used the most. The "Pipeline" ones are to support Bolt's pipelining features. Pipelines allow the user to send Neo4j many queries at once and have them executed by the database concurrently. This is useful if you have a bunch of queries that aren't necessarily dependent on one another, and you want to get better performance. The internal APIs will also pipeline statements where they can reliably do so, but by manually using the pipelining feature you can maximize your throughput. The API provides connection pooling using the `NewDriverPool` method. This allows you to pass it the maximum number of open connections to be used in the pool. Once this limit is hit, any new clients will have to wait for a connection to become available again. The sql driver is registered as "neo4j-bolt". The sql.driver interface is much more limited than what bolt and neo4j supports. In some cases, concessions were made in order to make that interface work with the neo4j way of doing things. The main instance of this is the marshalling of objects to/from the sql.driver.Value interface. In order to support object types that aren't supported by this interface, the internal encoding package is used to marshal these objects to byte strings. This ultimately makes for a less efficient and more 'clunky' implementation. A glaring instance of this is passing parameters. Neo4j expects named parameters but the driver interface can only really support positional parameters. To get around this, the user must create a map[string]interface{} of their parameters and marshal it to a driver.Value using the encoding.Marshal function. Similarly, the user must unmarshal data returned from the queries using the encoding.Unmarshal function, then use type assertions to retrieve the proper type. In most cases the driver will return the data from Neo4j as the proper Go-specific types. For integers they always come back as int64 and floats always come back as float64. This is for the convenience of the user and acts similarly to Go's JSON interface. This prevents the user from having to use reflection to get these values. Internally, the types are always transmitted over the wire with as few bytes as possible. There are also cases where no Go-specific type matches the returned values, such as when you query for a node, relationship, or path. The driver exposes specific structs which represent this data in the 'structures.graph' package. There are 4 types - Node, Relationship, UnboundRelationship, and Path. The driver returns interface{} objects which must have their types properly asserted to get the data out. There are some limitations to the types of collections the driver supports. Specifically, maps should always be of type map[string]interface{} and lists should always be of type []interface{}.
It doesn't seem that the Bolt protocol supports uint64 either, so the biggest number it can send right now is the int64 max. The URL format is: `bolt://(user):(password)@(host):(port)` Schema must be `bolt`. User and password are only necessary if you are authenticating. TLS is supported by using query parameters on the connection string, like so: `bolt://host:port?tls=true&tls_no_verify=false` The supported query params are: * timeout - the number of seconds to set the connection timeout to. Defaults to 60 seconds. * tls - Set to 'true' or '1' if you want to use TLS encryption * tls_no_verify - Set to 'true' or '1' if you want to accept any server certificate (for testing, not secure) * tls_ca_cert_file - path to a custom ca cert for a self-signed TLS cert * tls_cert_file - path to a cert file for this client (need to verify this is processed by Neo4j) * tls_key_file - path to a key file for this client (need to verify this is processed by Neo4j) Errors returned from the API support wrapping, so if you receive an error from the library, it might be wrapping other errors. You can get the innermost error by using the `InnerMost` method. Failure messages from Neo4J are reported, along with their metadata, as an error. To get the failure message metadata from a wrapped error, call `err.(*errors.Error).InnerMost().(messages.FailureMessage).Metadata` If there is an error with the database connection, you should get a sql/driver ErrBadConn as per the best practice recommendations of the Golang SQL Driver. However, this error may be wrapped, so you might have to call `InnerMost` to get it, as specified above.
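A minimal sketch of the named-parameter workaround described above, assuming the github.com/johnnadratowski/golang-neo4j-bolt-driver import path (the credentials and Cypher query are placeholders):

    import (
        "database/sql"

        _ "github.com/johnnadratowski/golang-neo4j-bolt-driver"
        "github.com/johnnadratowski/golang-neo4j-bolt-driver/encoding"
    )

    // queryWithParams marshals a map of named parameters into a single
    // driver.Value so it can be passed through the positional sql interface.
    func queryWithParams() error {
        db, err := sql.Open("neo4j-bolt", "bolt://user:password@localhost:7687")
        if err != nil {
            return err
        }
        defer db.Close()

        params, err := encoding.Marshal(map[string]interface{}{"name": "Alice"})
        if err != nil {
            return err
        }
        rows, err := db.Query("MATCH (n:Person {name: {name}}) RETURN n.name", params)
        if err != nil {
            return err
        }
        defer rows.Close()
        var raw []byte
        for rows.Next() {
            if err := rows.Scan(&raw); err != nil {
                return err
            }
            // raw would be decoded with encoding.Unmarshal and a type assertion.
        }
        return rows.Err()
    }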
Package synthetics provides the API client, operations, and parameter types for Synthetics. Amazon CloudWatch Synthetics You can use Amazon CloudWatch Synthetics to continually monitor your services. You can create and manage canaries, which are modular, lightweight scripts that monitor your endpoints and APIs from the outside-in. You can set up your canaries to run 24 hours a day, once per minute. The canaries help you check the availability and latency of your web services and troubleshoot anomalies by investigating load time data, screenshots of the UI, logs, and metrics. The canaries seamlessly integrate with CloudWatch ServiceLens to help you trace the causes of impacted nodes in your applications. For more information, see Using ServiceLens to Monitor the Health of Your Applications (https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/ServiceLens.html) in the Amazon CloudWatch User Guide. Before you create and manage canaries, be aware of the security considerations. For more information, see Security Considerations for Synthetics Canaries (https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/servicelens_canaries_security.html) .
Package snowball provides the API client, operations, and parameter types for Amazon Import/Export Snowball. The Amazon Web Services Snow Family provides a petabyte-scale data transport solution that uses secure devices to transfer large amounts of data between your on-premises data centers and Amazon Simple Storage Service (Amazon S3). The Snow Family commands described here provide access to the same functionality that is available in the Amazon Web Services Snow Family Management Console, which enables you to create and manage jobs for a Snow Family device. To transfer data locally with a Snow Family device, you'll need to use the Snowball Edge client, the Amazon S3 API Interface for Snowball, or OpsHub for Snow Family. For more information, see the User Guide (https://docs.aws.amazon.com/AWSImportExport/latest/ug/api-reference.html).
Package securitylake provides the API client, operations, and parameter types for Amazon Security Lake. Amazon Security Lake is a fully managed security data lake service. You can use Security Lake to automatically centralize security data from cloud, on-premises, and custom sources into a data lake that's stored in your Amazon Web Services account. Amazon Web Services Organizations is an account management service that lets you consolidate multiple Amazon Web Services accounts into an organization that you create and centrally manage. With Organizations, you can create member accounts and invite existing accounts to join your organization. Security Lake helps you analyze security data for a more complete understanding of your security posture across the entire organization. It can also help you improve the protection of your workloads, applications, and data. The data lake is backed by Amazon Simple Storage Service (Amazon S3) buckets, and you retain ownership over your data. Amazon Security Lake integrates with CloudTrail, a service that provides a record of actions taken by a user, role, or an Amazon Web Services service. In Security Lake, CloudTrail captures API calls for Security Lake as events. The calls captured include calls from the Security Lake console and code calls to the Security Lake API operations. If you create a trail, you can enable continuous delivery of CloudTrail events to an Amazon S3 bucket, including events for Security Lake. If you don't configure a trail, you can still view the most recent events in the CloudTrail console in Event history. Using the information collected by CloudTrail you can determine the request that was made to Security Lake, the IP address from which the request was made, who made the request, when it was made, and additional details. To learn more about Security Lake information in CloudTrail, see the Amazon Security Lake User Guide (https://docs.aws.amazon.com/security-lake/latest/userguide/securitylake-cloudtrail.html) . Security Lake automates the collection of security-related log and event data from integrated Amazon Web Services and third-party services. It also helps you manage the lifecycle of data with customizable retention and replication settings. Security Lake converts ingested data into Apache Parquet format and a standard open-source schema called the Open Cybersecurity Schema Framework (OCSF). Other Amazon Web Services and third-party services can subscribe to the data that's stored in Security Lake for incident response and security data analytics.