Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved. Licensed under the Apache License, Version 2.0 (the "License"). You may not use this file except in compliance with the License. A copy of the License is located at http://aws.amazon.com/apache2.0/ or in the "license" file accompanying this file. This file is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. Package firecracker provides a library to interact with the Firecracker API. Firecracker is an open-source virtualization technology that is purpose-built for creating and managing secure, multi-tenant containers and functions-based services. See https://firecracker-microvm.github.io/ for more details. This library requires Go 1.23 or later and can be used with Go modules. BUG(aws): There are some Firecracker features that are not yet supported by the SDK. These are tracked as GitHub issues with the firecracker-feature label: https://github.com/firecracker-microvm/firecracker-go-sdk/issues?q=is%3Aissue+is%3Aopen+label%3Afirecracker-feature This library is licensed under the Apache 2.0 License.
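A hedged sketch of the basic machine lifecycle with this SDK (the socket, kernel, and rootfs paths below are placeholders, not values from this documentation):

	package main

	import (
		"context"
		"log"

		firecracker "github.com/firecracker-microvm/firecracker-go-sdk"
		"github.com/firecracker-microvm/firecracker-go-sdk/client/models"
	)

	func main() {
		ctx := context.Background()

		// Paths below are placeholders; point them at a real kernel
		// image and root filesystem.
		cfg := firecracker.Config{
			SocketPath:      "/tmp/firecracker.sock",
			KernelImagePath: "vmlinux",
			Drives:          firecracker.NewDrivesBuilder("rootfs.ext4").Build(),
			MachineCfg: models.MachineConfiguration{
				VcpuCount:  firecracker.Int64(1),
				MemSizeMib: firecracker.Int64(512),
			},
		}

		m, err := firecracker.NewMachine(ctx, cfg)
		if err != nil {
			log.Fatal(err)
		}
		if err := m.Start(ctx); err != nil {
			log.Fatal(err)
		}
		// Wait blocks until the VMM exits.
		if err := m.Wait(ctx); err != nil {
			log.Fatal(err)
		}
	}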
Package dns implements a full featured interface to the Domain Name System. Both server- and client-side programming is supported. The package allows complete control over what is sent out to the DNS. The API follows the less-is-more principle, by presenting a small, clean interface. It supports (asynchronous) querying/replying, incoming/outgoing zone transfers, TSIG, EDNS0, dynamic updates, notifies and DNSSEC validation/signing. Note that domain names MUST be fully qualified before sending them; unqualified names in a message will result in a packing failure. Resource records are native types. They are not stored in wire format. Basic usage pattern for creating a new resource record: Or directly from a string: Or when the default origin (.) and TTL (3600) and class (IN) suit you: Or even: In the DNS, messages are exchanged; these messages contain resource records (sets). Use pattern for creating a message: Or when not certain if the domain name is fully qualified: The message m is now a message with the question section set to ask for the MX records of the miek.nl. zone. The following is slightly more verbose, but more flexible: After creating a message it can be sent. Basic use pattern for synchronously querying the DNS at a server configured on 127.0.0.1 and port 53: Suppressing multiple outstanding queries (with the same question, type and class) is as easy as setting: More advanced options are available using a net.Dialer and the corresponding API. For example it is possible to set a timeout, or to specify a source IP address and port to use for the connection: If these "advanced" features are not needed, a simple UDP query can be sent with: When this function returns you will get a dns message. A dns message consists of four sections. The question section: in.Question, the answer section: in.Answer, the authority section: in.Ns and the additional section: in.Extra. Each of these sections (except the Question section) contains a []RR. Basic use pattern for accessing the rdata of a TXT RR as the first RR in the Answer section: Both domain names and TXT character strings are converted to presentation form both when unpacked and when converted to strings. For TXT character strings, tabs, carriage returns and line feeds will be converted to \t, \r and \n respectively. Backslashes and quotation marks will be escaped. Bytes below 32 and above 127 will be converted to \DDD form. For domain names, in addition to the above rules, brackets, periods, spaces, semicolons and the at symbol are escaped. DNSSEC (DNS Security Extension) adds a layer of security to the DNS. It uses public key cryptography to sign resource records. The public keys are stored in DNSKEY records and the signatures in RRSIG records. Requesting DNSSEC information for a zone is done by adding the DO (DNSSEC OK) bit to a request. Signature generation, signature verification and key generation are all supported. Dynamic updates reuse the DNS message format, but rename three of the sections. Question is Zone, Answer is Prerequisite, Authority is Update; only the Additional section is not renamed. See RFC 2136 for the gory details. You can set a rather complex set of rules for the existence or absence of certain resource records or names in a zone to specify if resource records should be added or removed. The table from RFC 2136, supplemented with the Go DNS functions, shows which functions exist to specify the prerequisites. The prerequisite section can also be left empty.
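A minimal sketch of the basic patterns described above - creating a record from a string, building an MX question for miek.nl., querying a server on 127.0.0.1:53, and reading the answer section:

	package main

	import (
		"fmt"
		"log"

		"github.com/miekg/dns"
	)

	func main() {
		// Create a resource record directly from a string.
		rr, err := dns.NewRR("miek.nl. 3600 IN MX 10 mx.miek.nl.")
		if err != nil {
			log.Fatal(err)
		}
		fmt.Println(rr)

		// Build a question for the MX records of miek.nl.
		// Fqdn qualifies the name if it is not fully qualified yet.
		m := new(dns.Msg)
		m.SetQuestion(dns.Fqdn("miek.nl"), dns.TypeMX)

		// Synchronous query against a server on 127.0.0.1:53.
		c := new(dns.Client)
		in, _, err := c.Exchange(m, "127.0.0.1:53")
		if err != nil {
			log.Fatal(err)
		}

		// Access the rdata of the first RR in the answer section.
		if len(in.Answer) > 0 {
			if mx, ok := in.Answer[0].(*dns.MX); ok {
				fmt.Println(mx.Mx)
			}
		}
	}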
If you have decided on the prerequisites you can tell what RRs should be added or deleted. The next table shows the options you have and what functions to call. A TSIG, or transaction signature, adds an HMAC TSIG record to each message sent. The supported algorithms include: HmacMD5, HmacSHA1, HmacSHA256 and HmacSHA512. Basic use pattern when querying with a TSIG name "axfr." (note that these key names must be fully qualified - as they are domain names) and the base64 secret "so6ZGir4GPAqINNh9U5c3A==": If an incoming message contains a TSIG record it MUST be the last record in the additional section (RFC 2845 3.2). This means that you should make the call to SetTsig last, right before executing the query. If you make any changes to the RRset after calling SetTsig() the signature will be incorrect. When requesting a zone transfer with TSIG (almost all TSIG usage is when requesting zone transfers), this is the basic use pattern. In this example we request an AXFR for miek.nl. with the TSIG key named "axfr." and secret "so6ZGir4GPAqINNh9U5c3A==", using the server 176.58.119.54: You can now read the records from the transfer as they come in. Each envelope is checked with TSIG. If something is not correct an error is returned. Basic use pattern for validating and replying to a message that has TSIG set. RFC 6895 sets aside a range of type codes for private use. This range is 65,280 - 65,534 (0xFF00 - 0xFFFE). When experimenting with new Resource Records these can be used, before requesting an official type code from IANA. See https://miek.nl/2014/September/21/idn-and-private-rr-in-go-dns/ for more information. EDNS0 is an extension mechanism for the DNS defined in RFC 2671 and updated by RFC 6891. It defines a new RR type, the OPT RR, which is then completely abused. Basic use pattern for creating an (empty) OPT RR: The rdata of an OPT RR consists of a slice of EDNS0 (RFC 6891) interfaces. Currently only a few have been standardized: EDNS0_NSID (RFC 5001) and EDNS0_SUBNET (draft-vandergaast-edns-client-subnet-02). Note that these options may be combined in an OPT RR. Basic use pattern for a server to check if (and which) options are set: SIG(0) From RFC 2931: It works like TSIG, except that SIG(0) uses public key cryptography, instead of the shared secret approach in TSIG. Supported algorithms: DSA, ECDSAP256SHA256, ECDSAP384SHA384, RSASHA1, RSASHA256 and RSASHA512. Signing subsequent messages in multi-message sessions is not implemented.
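A minimal sketch of the TSIG-signed zone transfer described above, using the key name and secret from the text (HmacSHA256 is chosen here; any supported algorithm works). For EDNS0, m.SetEdns0(4096, true) builds the OPT RR and sets the DO bit:

	package main

	import (
		"log"
		"time"

		"github.com/miekg/dns"
	)

	func main() {
		// AXFR for miek.nl. signed with the TSIG key "axfr." from the text.
		t := new(dns.Transfer)
		t.TsigSecret = map[string]string{"axfr.": "so6ZGir4GPAqINNh9U5c3A=="}

		m := new(dns.Msg)
		m.SetAxfr("miek.nl.")
		// SetTsig must be called last, after all other changes to the message.
		m.SetTsig("axfr.", dns.HmacSHA256, 300, time.Now().Unix())

		env, err := t.In(m, "176.58.119.54:53")
		if err != nil {
			log.Fatal(err)
		}
		// Each envelope has already been checked with TSIG.
		for e := range env {
			if e.Error != nil {
				log.Fatal(e.Error)
			}
			for _, rr := range e.RR {
				log.Println(rr)
			}
		}
	}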
Command goat provides an implementation of a BitTorrent tracker, written in Go. goat can be built using Go 1.1+. It can be downloaded, built, and installed, simply by running: In addition, goat depends on a MySQL server for data storage. After creating a database and user for goat, its database schema may be imported from the SQL files located in 'res/'. goat will not run unless MySQL is installed, and a database and user are properly configured for its use. Optionally, goat can be built to use ql (https://github.com/cznic/ql) as its storage backend. This is done by supplying the 'ql' tag in the go get command: A blank ql database file is located under 'res/ql/goat.db', and will be copied to '~/.config/goat/goat.db' on UNIX systems. goat is then able to use ql as its storage backend, for those who do not wish to use an external MySQL backend. goat is capable of listening for torrent traffic in three modes: HTTP, HTTPS, and UDP. HTTP/HTTPS are the recommended methods, and are required in order for goat to serve its API and to allow use of private tracker passkeys. HTTP is considered the standard mode of operation for goat. HTTP allows gathering a great number of metrics, use of passkeys, use of a client whitelist, and access to goat's RESTful API, when configured. For most trackers, this will be the only listener necessary for goat to function properly. The HTTPS listener provides a method to encrypt traffic to the tracker, but must be used with caution. Unless the SSL certificate in use is signed by a proper certificate authority, it will distress most clients, and they may outright refuse to announce to it. If you are in possession of a certificate signed by a certificate authority, this mode may be preferable, as it provides added security for your clients. The UDP listener is the most unusual method of the three, and should only be used for public trackers. The BitTorrent UDP tracker protocol specifies a very specific packet format, meaning that additional information or parameters cannot be packed into a UDP datagram in a standard way. The UDP tracker may be the fastest and least bandwidth-intensive, but as stated, should only be used for public trackers. A new feature added to goat, in order to allow better interoperability with many languages, is a RESTful API, which is served using the HTTP or HTTPS listeners. This API enables easy retrieval of tracker statistics, while allowing goat to run as a completely independent process. It should be noted that the API is only enabled when configured, and when an HTTP or HTTPS listener is enabled. Without a transport mechanism, the API will be inaccessible. The API features several modes of authentication, including HTTP Basic for login and HMAC-SHA1 for other calls. Upon logging into the API using HTTP Basic with a username and password pair, an API public key and secret will be generated. The public key is used as the username for HTTP Basic authentication, and the secret key is used to calculate an HMAC-SHA1 signature for the password. As part of API signature generation, a random nonce value must be generated and added to the request. It is added to the password portion of the HTTP Basic request, and also to the string which is used to create the signature. Nonce values must be changed on every request, or the request will fail.
The current pseudocode format of the HMAC-SHA1 signature is as follows: The proper format for an HTTP Basic request is as follows: When the public key, nonce, and API signature are sent via HTTP Basic, the server will verify the signature. Successful authentication will allow access to the API. This list contains all API calls currently recognized by goat. Each call must be authenticated using the aforementioned methods. Request an API public key and secret key for this user. The public key, user ID, and secret key are used to authenticate further API calls. The expire time indicates when this key is set to expire. Further API calls will extend the expiration time. Retrieve a list of all files tracked by goat. Some extended attributes are omitted to reduce strain on the database, and to provide a more general overview. Retrieve extended attributes about a specific file with matching ID. This provides counts for the number of completions, seeders, and leechers, and a list of fileUser relationships associated with a given file. Retrieve a variety of metrics about the current status of goat, including its PID, hostname, memory usage, number of HTTP/UDP hits, etc. Create a user with the specified username, password, and torrent limit. Retrieve a list of all users registered to goat, including their ID, torrent limit, and username. Retrieve information about a single user with matching ID, including their ID, torrent limit, and username. goat is configured using a JSON file, which will be created under '~/.config/goat/config.json' on UNIX systems. Here is an example configuration, describing the settings available to the user.
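Returning to the API authentication scheme: since the pseudocode itself is not shown above, here is a hypothetical Go sketch of the described flow, where the nonce is added to the password portion of the HTTP Basic pair and also folded into the HMAC-SHA1 signature. The signing-string layout and the nonce/signature separator are assumptions for illustration, not goat's documented format.

	package main

	import (
		"crypto/hmac"
		"crypto/sha1"
		"encoding/base64"
		"encoding/hex"
		"fmt"
	)

	// sign produces an HMAC-SHA1 signature for an API request. The
	// signing-string layout (nonce/method/resource joined by "-") is an
	// assumption for illustration, not goat's documented format.
	func sign(secret, nonce, method, resource string) string {
		mac := hmac.New(sha1.New, []byte(secret))
		fmt.Fprintf(mac, "%s-%s-%s", nonce, method, resource)
		return hex.EncodeToString(mac.Sum(nil))
	}

	func main() {
		publicKey := "examplePublicKey" // returned by the login call
		secretKey := "exampleSecretKey"
		nonce := "deadbeef" // must be freshly generated on every request

		sig := sign(secretKey, nonce, "GET", "/api/files")

		// HTTP Basic: public key as username; the nonce and signature
		// form the password portion (separator assumed here).
		auth := base64.StdEncoding.EncodeToString(
			[]byte(publicKey + ":" + nonce + "/" + sig))
		fmt.Println("Authorization: Basic " + auth)
	}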
Package goBolt implements drivers for the Neo4J Bolt Protocol versions 1-4. There are some limitations to the types of collections the driver supports. Specifically, maps should always be of type map[string]interface{} and lists should always be of type []interface{}. It doesn't seem that the Bolt protocol supports uint64 either, so the biggest number it can send right now is the int64 max. The URL format is: `bolt://(user):(password)@(host):(port)` Schema must be `bolt`. User and password are only necessary if you are authenticating. TLS is supported by using query parameters on the connection string, like so: `bolt://host:port?tls=true&tls_no_verify=false` The supported query params are:

* timeout - the number of seconds to set the connection timeout to. Defaults to 60 seconds.
* tls - set to 'true' or '1' if you want to use TLS encryption
* tls_no_verify - set to 'true' or '1' if you want to accept any server certificate (for testing, not secure)
* tls_ca_cert_file - path to a custom CA cert for a self-signed TLS cert
* tls_cert_file - path to a cert file for this client (need to verify this is processed by Neo4j)
* tls_key_file - path to a key file for this client (need to verify this is processed by Neo4j)

Errors returned from the API support wrapping, so if you receive an error from the library, it might be wrapping other errors. You can get the innermost error by using the `InnerMost` method. Failure messages from Neo4J are reported, along with their metadata, as an error. In order to get the failure message metadata from a wrapped error, you can do so by calling `err.(*errors.Error).InnerMost().(messages.FailureMessage).Metadata` If there is an error with the database connection, you should get a sql/driver ErrBadConn as per the best practice recommendations of the Golang SQL Driver. However, this error may be wrapped, so you might have to call `InnerMost` to get it, as specified above.
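Expanding the quoted `InnerMost` expression into a hedged sketch (the errors and messages identifiers are this library's own packages referenced above; the surrounding function and its import wiring are illustrative):

	// handleBoltErr inspects a wrapped error from the driver, following the
	// err.(*errors.Error).InnerMost().(messages.FailureMessage).Metadata
	// pattern quoted above. The errors and messages packages are the ones
	// shipped with this library; their exact import paths are not shown here.
	func handleBoltErr(err error) {
		if bErr, ok := err.(*errors.Error); ok {
			if failure, ok := bErr.InnerMost().(messages.FailureMessage); ok {
				log.Printf("Neo4J failure metadata: %v", failure.Metadata)
			}
		}
	}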
Package sasl implements the Simple Authentication and Security Layer (SASL) as defined by RFC 4422. Most users of this package will only need to create a Negotiator using NewClient or NewServer and call its Step method repeatedly. Authors implementing SASL mechanisms other than the builtin ones will want to create a Mechanism struct which will likely use the other methods on the Negotiator. Be advised: This API is still unstable and is subject to change.
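A minimal client-side sketch for the builtin PLAIN mechanism, assuming the mellium.im/sasl import path and placeholder credentials:

	package main

	import (
		"fmt"
		"log"

		"mellium.im/sasl"
	)

	func main() {
		// Create a client-side Negotiator for the PLAIN mechanism.
		c := sasl.NewClient(
			sasl.Plain,
			sasl.Credentials(func() ([]byte, []byte, []byte) {
				// username, password, identity (placeholders)
				return []byte("user"), []byte("pencil"), nil
			}),
		)

		// Step is called repeatedly with server challenges; PLAIN only
		// needs the initial response.
		more, resp, err := c.Step(nil)
		if err != nil {
			log.Fatal(err)
		}
		fmt.Printf("more=%v initial response=%q\n", more, resp)
	}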
Package graphql-go-tools is a library for creating GraphQL services in the Go programming language. GraphQL is a query language for APIs and a runtime for fulfilling those queries with your existing data. GraphQL provides a complete and understandable description of the data in your API, gives clients the power to ask for exactly what they need and nothing more, makes it easier to evolve APIs over time, and enables powerful developer tools. Source: https://graphql.org This library is intended to be a set of low level building blocks to write high performance and secure GraphQL applications. Use cases could range from writing layer seven GraphQL proxies, firewalls, caches etc. You would usually not use this library to write a GraphQL server yourself but to build tools for the GraphQL ecosystem. To achieve this goal the library has zero dependencies at its core functionality. It has a full implementation of the GraphQL AST and supports lexing, parsing, validation, normalization, introspection, query planning as well as query execution. With the execution package it's possible to write a fully functional GraphQL server that is capable of mediating between various protocols and formats. In its current state you can use the following DataSources to resolve fields:

- Static data (embed static data into a schema to extend a field in a simple way)
- HTTP JSON APIs (combine multiple RESTful APIs into one single GraphQL Endpoint, nesting is possible)
- GraphQL APIs (you can combine multiple GraphQL APIs into one single GraphQL Endpoint, nesting is possible)
- WebAssembly/WASM Lambdas (e.g. resolve a field using a Rust lambda)

If you're looking for a ready-to-use solution that has all this functionality packaged as a Gateway have a look at: https://wundergraph.com Created by Jens Neuse
Package gocql implements a fast and robust Cassandra driver for the Go programming language. Pass a list of initial node IP addresses to NewCluster to create a new cluster configuration: Port can be specified as part of the address; the above is equivalent to: It is recommended to use the value set in the Cassandra config for broadcast_address or listen_address - an IP address, not a domain name. This is because events from Cassandra will use the configured IP address, which is used to index connected hosts. If the domain name specified resolves to more than 1 IP address then the driver may connect multiple times to the same host, and will not mark the node as being down or up from events. Then you can customize more options (see ClusterConfig): The driver tries to automatically detect the protocol version to use if not set, but you might want to set the protocol version explicitly, as it's not defined which version will be used in certain situations (for example during an upgrade of the cluster when some of the nodes support a different set of protocol versions than other nodes). The driver advertises the module name and version in the STARTUP message, so servers are able to detect the version. If you use a replace directive in go.mod, the driver will send information about the replacement module instead. When ready, create a session from the configuration. Don't forget to Close the session once you are done with it: The CQL protocol uses a SASL-based authentication mechanism and so consists of an exchange of server challenges and client response pairs. The details of the exchanged messages depend on the authenticator used. To use authentication, set ClusterConfig.Authenticator or ClusterConfig.AuthProvider. PasswordAuthenticator is provided to use for username/password authentication: It is possible to secure traffic between the client and server with TLS. To use TLS, set the ClusterConfig.SslOpts field. SslOptions embeds *tls.Config so you can set that directly. There are also helpers to load keys/certificates from files. Warning: Due to historical reasons, SslOptions is insecure by default, so you need to set EnableHostVerification to true if no Config is set. Most users should set SslOptions.Config to a *tls.Config. SslOptions and Config.InsecureSkipVerify interact as follows: For example: To route queries to the local DC first, use DCAwareRoundRobinPolicy. For example, if the datacenter you want to primarily connect to is called dc1 (as configured in the database): The driver can route queries to nodes that hold data replicas based on partition key (preferring the local DC). Note that TokenAwareHostPolicy can take options such as gocql.ShuffleReplicas and gocql.NonLocalReplicasFallback. We recommend running with a token-aware host policy in production for maximum performance. The driver can only use token-aware routing for queries where all partition key columns are query parameters. For example, instead of embedding the partition key literal in the query string, pass it as a bound query parameter (see the sketch below). The DCAwareRoundRobinPolicy can be replaced with RackAwareRoundRobinPolicy, which takes two parameters, datacenter and rack. Instead of dividing hosts with two tiers (local datacenter and remote datacenters) it divides hosts into three (the local rack, the rest of the local datacenter, and everything else). RackAwareRoundRobinPolicy can be combined with TokenAwareHostPolicy in the same way as DCAwareRoundRobinPolicy. Create queries with Session.Query. Query values must not be reused between different executions and must not be modified after starting execution of the query.
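A hedged sketch pulling the pieces above together: cluster creation, password authentication, a token-aware policy wrapping a DC-aware policy for the dc1 datacenter mentioned above, and a query whose partition key is a bound parameter (keyspace, table, and credentials are placeholders):

	package main

	import (
		"log"

		"github.com/gocql/gocql"
	)

	func main() {
		cluster := gocql.NewCluster("192.168.1.1", "192.168.1.2")
		cluster.Keyspace = "example"
		cluster.Authenticator = gocql.PasswordAuthenticator{
			Username: "user",
			Password: "password",
		}
		// Route to the local DC first, with token-aware routing on top.
		cluster.PoolConfig.HostSelectionPolicy = gocql.TokenAwareHostPolicy(
			gocql.DCAwareRoundRobinPolicy("dc1"),
		)

		session, err := cluster.CreateSession()
		if err != nil {
			log.Fatal(err)
		}
		defer session.Close()

		// Token-aware routing needs partition key columns as bound
		// parameters: `... WHERE pk = ?`, not an embedded literal.
		var value string
		if err := session.Query(
			"SELECT value FROM mytable WHERE pk = ?", "hello",
		).Scan(&value); err != nil {
			log.Fatal(err)
		}
		log.Println(value)
	}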
To execute a query without reading results, use Query.Exec: A single row can be read by calling Query.Scan: Multiple rows can be read using Iter.Scanner: See Example for a complete example. The driver automatically prepares DML queries (SELECT/INSERT/UPDATE/DELETE/BATCH statements) and maintains a cache of prepared statements. The CQL protocol does not support preparing other query types. When using CQL protocol >= 4, it is possible to use gocql.UnsetValue as the bound value of a column. This will cause the database to ignore writing the column. The main advantage is the ability to keep the same prepared statement even when you don't want to update some fields, where before you needed to make another prepared statement. Session is safe to use from multiple goroutines, so to execute multiple concurrent queries, just execute them from several worker goroutines. Gocql provides a synchronous-looking API (as recommended for Go APIs) and the queries are executed asynchronously at the protocol level. Null values are unmarshalled as the zero value of the type. If you need to distinguish, for example, between a text column being null and an empty string, you can unmarshal into a *string variable instead of string. See Example_nulls for a full example. The driver reuses backing memory of slices when unmarshalling. This is an optimization so that a buffer does not need to be allocated for every processed row. However, you need to be careful when storing the slices to other memory structures. When you want to save the data for later use, pass a new slice every time. A common pattern is to declare the slice variable within the scanner loop: The driver supports paging of results with automatic prefetch, see ClusterConfig.PageSize, Session.SetPrefetch, Query.PageSize, and Query.Prefetch. It is also possible to control the paging manually with Query.PageState (this disables automatic prefetch). Manual paging is useful if you want to store the page state externally, for example in a URL to allow users to browse pages in a result. You might want to sign/encrypt the paging state when exposing it externally since it contains data from primary keys. Paging state is specific to the CQL protocol version and the exact query used. It is meant as opaque state that should not be modified. If you send paging state from a different query or protocol version, then the behaviour is not defined (you might get unexpected results or an error from the server). For example, do not send paging state returned by a node using protocol version 3 to a node using protocol version 4. Also, when using protocol version 4, paging state between Cassandra 2.2 and 3.0 is incompatible (https://issues.apache.org/jira/browse/CASSANDRA-10880). The driver does not check whether the paging state is from the same protocol version/statement. You might want to validate this yourself, as it could be a problem if you store paging state externally. For example, if you store paging state in a URL, the URLs might become broken when you upgrade your cluster. Call Query.PageState(nil) to fetch just the first page of the query results. Pass the page state returned by Iter.PageState to Query.PageState of a subsequent query to get the next page. If the length of the slice returned by Iter.PageState is zero, there are no more pages available (or an error occurred). Using too low a value of PageSize will negatively affect performance; a value below 100 is probably too low.
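Returning to the basic execution patterns at the start of this section, a short sketch reusing a session created as in the previous example (table and column names are placeholders):

	// basicPatterns shows Query.Exec, Query.Scan and Iter.Scanner. It
	// assumes the imports from the previous sketch.
	func basicPatterns(session *gocql.Session, id gocql.UUID) error {
		// Execute a query without reading results.
		if err := session.Query(
			"INSERT INTO users (id, name) VALUES (?, ?)", id, "alice",
		).Exec(); err != nil {
			return err
		}

		// Read a single row.
		var name string
		if err := session.Query(
			"SELECT name FROM users WHERE id = ?", id,
		).Scan(&name); err != nil {
			return err
		}

		// Read multiple rows; declare scan targets inside the loop so
		// each row gets fresh backing memory (see the slice-reuse note).
		scanner := session.Query("SELECT name FROM users").Iter().Scanner()
		for scanner.Next() {
			var n string
			if err := scanner.Scan(&n); err != nil {
				return err
			}
			log.Println(n)
		}
		return scanner.Err()
	}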
While Cassandra currently returns exactly PageSize items (except for the last page) in a page, the protocol authors explicitly reserved the right to return a smaller or larger number of items in a page for performance reasons, so don't rely on the page having the exact count of items. See Example_paging for an example of manual paging. There are certain situations when you don't know the list of columns in advance, mainly when the query is supplied by the user. Iter.Columns, Iter.RowData, Iter.MapScan and Iter.SliceMap can be used to handle this case. See Example_dynamicColumns. The CQL protocol supports sending batches of DML statements (INSERT/UPDATE/DELETE) and so does gocql. Use Session.NewBatch to create a new batch and then fill in the details of individual queries. Then execute the batch with Session.ExecuteBatch. Logged batches ensure atomicity: either all or none of the operations in the batch will succeed, but they have overhead to ensure this property. Unlogged batches don't have the overhead of logged batches, but don't guarantee atomicity. Updates of counters are handled specially by Cassandra, so batches of counter updates have to use the CounterBatch type. A counter batch can only contain statements to update counters. For unlogged batches it is recommended to send only single-partition batches (i.e. all statements in the batch should involve only a single partition). A multi-partition batch needs to be split by the coordinator node and re-sent to the correct nodes. With single-partition batches you can send the batch directly to the node for the partition without incurring the additional network hop. It is also possible to pass an entire BEGIN BATCH .. APPLY BATCH statement to Query.Exec. There are differences in how those are executed. A BEGIN BATCH statement passed to Query.Exec is prepared as a whole in a single statement. Session.ExecuteBatch prepares the individual statements in the batch. If you have variable-length batches using the same statement, using Session.ExecuteBatch is more efficient. See Example_batch for an example. Query.ScanCAS or Query.MapScanCAS can be used to execute a single-statement lightweight transaction (an INSERT/UPDATE .. IF statement) and read its result. See the example for Query.MapScanCAS. Multiple-statement lightweight transactions can be executed as a logged batch that contains at least one conditional statement. All the conditions must return true for the batch to be applied. You can use Session.ExecuteBatchCAS and Session.MapExecuteBatchCAS when executing the batch to learn about the result of the LWT. See the example for Session.MapExecuteBatchCAS. Queries can be marked as idempotent. Marking a query as idempotent tells the driver that the query can be executed multiple times without affecting its result. Non-idempotent queries are not eligible for retrying or speculative execution. Idempotent queries are retried in case of errors based on the configured RetryPolicy. Queries can be retried even before they fail by setting a SpeculativeExecutionPolicy. The policy can cause the driver to retry on a different node if the query is taking longer than a specified delay, even before the driver receives an error or timeout from the server. When a query is speculatively executed, the original execution is still executing. The two parallel executions of the query race to return a result; the first received result will be returned.
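A minimal sketch of the batch API described above, reusing an existing session (statements and values are placeholders):

	// insertPair executes two inserts atomically in a logged batch.
	func insertPair(session *gocql.Session, id1, id2 gocql.UUID) error {
		b := session.NewBatch(gocql.LoggedBatch)
		b.Query("INSERT INTO users (id, name) VALUES (?, ?)", id1, "alice")
		b.Query("INSERT INTO users (id, name) VALUES (?, ?)", id2, "bob")
		return session.ExecuteBatch(b)
	}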
UDTs can be (un)marshaled from/to a map[string]interface{} or a Go struct (or a type implementing the UDTUnmarshaler, UDTMarshaler, Unmarshaler or Marshaler interfaces). For structs, the cql tag can be used to specify the CQL field name a struct field maps to: See Example_userDefinedTypesMap, Example_userDefinedTypesStruct, ExampleUDTMarshaler, ExampleUDTUnmarshaler. It is possible to provide observer implementations that could be used to gather metrics: The CQL protocol also supports tracing of queries. When enabled, the database will write information about internal events that happened during execution of the query. You can use Query.Trace to request tracing and receive the session ID that the database used to store the trace information in the system_traces.sessions and system_traces.events tables. NewTraceWriter returns an implementation of Tracer that writes the events to a writer. Gathering trace information might be essential for debugging and optimizing queries, but writing traces has overhead, so this feature should not be used on production systems with very high load unless you know what you are doing. Example_batch demonstrates how to execute a batch of statements. Example_dynamicColumns demonstrates how to handle a dynamic column list. Example_marshalerUnmarshaler demonstrates how to implement a Marshaler and Unmarshaler. Example_nulls demonstrates how to distinguish between null and zero values when needed. Null values are unmarshalled as the zero value of the type. If you need to distinguish, for example, between a text column being null and an empty string, you can unmarshal into a *string field. Example_paging demonstrates how to manually fetch pages and use page state. See also the package documentation about paging. Example_set demonstrates how to use sets. Example_userDefinedTypesMap demonstrates how to work with user-defined types as maps. See also Example_userDefinedTypesStruct and the examples for UDTMarshaler and UDTUnmarshaler if you want to map to structs. Example_userDefinedTypesStruct demonstrates how to work with user-defined types as structs. See also the examples for UDTMarshaler and UDTUnmarshaler if you need more control/better performance.
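A short sketch of the struct mapping for UDTs described above, using the cql tag (the UDT shape, table, and column names are placeholders):

	// Address maps a CQL UDT to a struct; cql tags bind the CQL field names.
	type Address struct {
		City   string `cql:"city"`
		Street string `cql:"street"`
		Zip    string `cql:"zip"`
	}

	func readAddress(session *gocql.Session, id gocql.UUID) (Address, error) {
		var addr Address
		err := session.Query(
			"SELECT address FROM users WHERE id = ?", id,
		).Scan(&addr)
		return addr, err
	}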
Package pulsedav provides a WebDAV server with S3-compatible storage backend. The server supports basic WebDAV operations with a focus on file uploads to S3. File uploads (PUT requests) are stored in the S3 bucket under the path "incoming/{userID}/{filename}". Other WebDAV operations are not supported. Server Configuration: Usage: Required environment variables: Optional environment variables: Security Features: Authentication: The server uses Basic Authentication and validates credentials against an external authentication API. The API must accept POST requests with username/password and return a user ID that will be used in the S3 path structure. WebDAV Operations:
Package uma extends the OpenAPI spec to describe UMA resources and scopes in an API. It can generate an http middleware from an OpenAPI spec in order to enforce UMA permissions. 1. Define resource types A UMA resource type contains fields that tend to stay the same for resources of the same type. Define resource types by adding an `x-uma-resource-types` section to the root level of the spec, e.g. 2. Enable UMA in security schemes UMA access control only works with the oauth2 or openIdConnect security schemes. To enable, add the key `x-uma-enabled` to the security scheme: 3. Define resources UMA resources can be defined at the root level or at individual paths. A root-level resource represents the default resource at all paths. A path-level resource overrides the root-level resource. e.g. Each resource object has 2 keys: type and name. Type must be one of the types defined earlier. Name is a name template string that can be rendered using path parameters. 4. Define scopes UMA scopes are simply oauth2 and openIdConnect scopes. They will work as UMA scopes as long as the security scheme is uma-enabled: 5. Generate code This package can generate the relevant Go code from the OpenAPI spec: The generated code will work with any server compatible with the net/http package. It can work by itself or alongside other generated code, such as that generated by github.com/deepmap/oapi-codegen 6. Use the generated code
Package ecr provides the client and types for making API requests to Amazon EC2 Container Registry. Amazon Elastic Container Registry (Amazon ECR) is a managed Docker registry service. Customers can use the familiar Docker CLI to push, pull, and manage images. Amazon ECR provides a secure, scalable, and reliable registry. Amazon ECR supports private Docker repositories with resource-based permissions using IAM so that specific users or Amazon EC2 instances can access repositories and images. Developers can use the Docker CLI to author and manage images. See https://docs.aws.amazon.com/goto/WebAPI/ecr-2015-09-21 for more information on this service. See the ecr package documentation for more information. https://docs.aws.amazon.com/sdk-for-go/api/service/ecr/ To use Amazon EC2 Container Registry with the SDK, use the New function to create a new service client. With that client you can make API requests to the service. These clients are safe to use concurrently. See the SDK's documentation for more information on how to use the SDK. https://docs.aws.amazon.com/sdk-for-go/api/ See aws.Config documentation for more information on configuring SDK clients. https://docs.aws.amazon.com/sdk-for-go/api/aws/#Config See the Amazon EC2 Container Registry client ECR for more information on creating a client for this service. https://docs.aws.amazon.com/sdk-for-go/api/service/ecr/#New
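A hedged sketch of creating the client with New and calling an operation (the region is a placeholder):

	package main

	import (
		"fmt"
		"log"

		"github.com/aws/aws-sdk-go/aws"
		"github.com/aws/aws-sdk-go/aws/session"
		"github.com/aws/aws-sdk-go/service/ecr"
	)

	func main() {
		sess := session.Must(session.NewSession(&aws.Config{
			Region: aws.String("us-east-1"), // region is a placeholder
		}))
		svc := ecr.New(sess)

		// List repositories in the registry.
		out, err := svc.DescribeRepositories(&ecr.DescribeRepositoriesInput{})
		if err != nil {
			log.Fatal(err)
		}
		for _, repo := range out.Repositories {
			fmt.Println(aws.StringValue(repo.RepositoryName))
		}
	}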
Package ovirtclient provides a human-friendly Go client for the oVirt Engine. It provides an abstraction layer for the oVirt API, as well as a mocking facility for testing purposes. This documentation contains two parts. This introduction explains setting up the client with the credentials. The API doc contains the individual API calls. When reading the API doc, start with the Client interface: it contains all components of the API. The individual APIs, their documentation and examples are located in subinterfaces, such as DiskClient. There are several ways to create a client instance. The most basic way is to use the New() function as follows: The mock client simulates the oVirt engine behavior in-memory without needing an actual running engine. This is a good way to provide a testing facility. It can be created using the NewMock method: That's it! However, to make it really useful, you will need the test helper which can set up test fixtures. The test helper can work in two ways: either it sets up test fixtures in the mock client, or it sets up a live connection and identifies a usable storage domain, cluster, etc. for testing purposes. The ovirtclient.NewMockTestHelper() function can be used to create a test helper with a mock client in the backend: The easiest way to set up the test helper for a live connection is by using environment variables. To do that, you can use the ovirtclient.NewLiveTestHelperFromEnv() function: This function will inspect environment variables to determine if a connection to a live oVirt engine can be established. The following environment variables are supported: URL of the oVirt engine API. Mandatory. The username for the oVirt engine. Mandatory. The password for the oVirt engine. Mandatory. A file containing the CA certificate in PEM format. Provide the CA certificate in PEM format directly. Disable certificate verification if set. Not recommended. The cluster to use for testing. Will be automatically chosen if not provided. ID of the blank template. Will be automatically chosen if not provided. Storage domain to use for testing. Will be automatically chosen if not provided. VNIC profile to use for testing. Will be automatically chosen if not provided. You can also create the test helper manually: This library provides extensive logging. Each API interaction is logged on the debug level, and other messages are added on other levels. In order to provide logging this library uses the go-ovirt-client-log (https://github.com/oVirt/go-ovirt-client-log) interface definition. As long as your logger implements this interface, you will be able to receive log messages. The logging library also provides a few built-in loggers. For example, you can log via the default Go log interface: Or, you can also log in tests: You can also disable logging: Finally, we also provide an adapter library for klog here: https://github.com/oVirt/go-ovirt-client-log-klog Modern-day oVirt engines run secured with TLS. This means that the client needs a way to verify the certificate the server is presenting. This is controlled by the tls parameter of the New() function. You can implement your own source by implementing the TLSProvider interface, but the package also includes a ready-to-use provider. Create the provider using the TLS() function: This provider has several functions. The easiest to set up is using the system trust root for certificates. However, this won't work on Windows: Now you need to add your oVirt engine certificate to your system trust root.
If you don't want to, or can't, add the certificate to the system trust root, you can also provide it directly to the client. Finally, you can also disable certificate verification. Do we need to say that this is a very, very bad idea? The configured tls variable can then be passed to the New() function to create an oVirt client. This library attempts to retry API calls that can be retried if possible. Each function has a sensible retry policy. However, you may want to customize the retries by passing one or more retry flags. The following retry flags are supported: This strategy will stop retries when the context parameter is canceled. This strategy adds a wait time after each attempt, which is increased by the given factor on each try. The default is a backoff with a factor of 2. This strategy will cancel retries if the error in question is a permanent error. This is enabled by default. This strategy will abort retries if a maximum number of tries is reached. On complex calls the retries are counted per underlying API call. This strategy will abort retries if a certain amount of time has elapsed for the higher-level call. This strategy will abort retries if a certain underlying API call takes longer than the specified duration.
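A hedged sketch of wiring the TLS provider into New() (the import paths, version suffixes, logger constructor, and engine URL are assumptions based on the packages referenced above, not verified specifics):

	package main

	import (
		"log"

		ovirtclient "github.com/ovirt/go-ovirt-client"        // module path may carry a version suffix
		ovirtclientlog "github.com/ovirt/go-ovirt-client-log" // ditto
	)

	func main() {
		// Verify the engine certificate against the system trust root;
		// chain CACertsFromFile(...) instead if you ship the CA yourself.
		tls := ovirtclient.TLS().CACertsFromSystem()

		client, err := ovirtclient.New(
			"https://engine.example.com/ovirt-engine/api", // placeholder URL
			"admin@internal",
			"password",
			tls,
			ovirtclientlog.NewGoLogger(), // log via the default Go log interface
			nil,                          // extra settings
		)
		if err != nil {
			log.Fatal(err)
		}
		_ = client
	}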
Package securityir provides the API client, operations, and parameter types for Security Incident Response. This guide documents the action and response elements for customer use of the service.
Package validation ensures data integrity by enforcing rules about data formats, required fields, and other constraints. It aims to improve API security, provide a better user experience through informative error messages, and enhance overall code maintainability by offering a structured approach to validation logic.
The Escher HTTP request signing framework is intended to provide a secure way for clients to sign HTTP requests, and for servers to check the integrity of these messages. The goal of the protocol is to introduce an authentication solution for REST API services that is more secure than the currently available protocols. RFC 2617 (HTTP Authentication) defines Basic and Digest Access Authentication. They’re widely used, but Basic Access Authentication doesn’t encrypt the secret and doesn’t add integrity checks to the requests. Digest Access Authentication sends the secret encrypted, but its algorithm - creating a checksum from a nonce using MD5 - should not be considered highly secure these days, and, as with Basic Access Authentication, there’s no way to check the integrity of the message. RFC 6749 (OAuth 2.0 Authorization) enables a third-party application to obtain limited access to an HTTP service, either on behalf of a resource owner by orchestrating an approval interaction between the resource owner and the HTTP service, or by allowing the third-party application to obtain access on its own behalf. This is not helpful in a machine-to-machine communication situation, like REST API authentication, because typically there’s no third-party user involved. Additionally, after a token is obtained from the authorization endpoint, it is used with no encryption, and doesn’t provide integrity checking, or prevent replaying of messages. OAuth 2.0 is a stateful protocol which needs a database to store the tokens for client sessions. Amazon and other service providers created protocols addressing these issues; however, there is no public standard with open source implementations available from them. As Escher is based on a publicly documented protocol that is widely used in the wild, the specification does not include novel techniques. 2. Signing an HTTP Request Escher defines a stateless signature generation mechanism. The signature is calculated from the key parts of the HTTP request, a service identifier string called the credential scope, and a client key and client secret. The signature generation steps are: canonicalizing the request, creating a string to calculate the signature, and adding the signature to the original request. Escher supports two hash algorithms: SHA256 and SHA512, designed by the NSA (U.S. National Security Agency). 2.1. Canonicalizing the Request In order to calculate a checksum from the key HTTP request parts, the HTTP request method, the request URI, the query parts, the headers, and the request body have to be canonicalized. The output of the canonicalization step will be a string including the request parts separated by LF (line feed, “\n”) characters. The string will be used to calculate a checksum for the request. 2.1.1. The HTTP method The HTTP method defined by RFC 2616 (Hypertext Transfer Protocol) is case sensitive, and must be available in upper case; no transformation has to be applied: POST 2.1.2. The Path The path is the absolute path of the URL. It starts with a slash (/) character, and does not include the query part (and the question mark). Escher follows the rules defined by RFC 3986 (Uniform Resource Identifier) to normalize the path. Basically it means: convert relative paths to absolute, and remove redundant path components.
URI-encode each path component: the “reserved characters” defined by RFC 3986 (Uniform Resource Identifier) have to be kept as they are (no encoding applied); all other characters have to be percent encoded, including SPACE (to %20, instead of +); non-ASCII, UTF-8 characters should be percent encoded to 2 or more pieces (á to %C3%A1); percent encoded hexadecimal numbers have to be upper cased (e.g. a%c2%b1b to a%C2%B1b). Normalize empty paths to /. For example: 2.1.3. The Query String RFC 3986 (Uniform Resource Identifier) should provide guidance for canonicalization of the query string, but here’s the complete list of the rules to be applied: URI-encode each query parameter name and value; the “reserved characters” defined by RFC 3986 (Uniform Resource Identifier) have to be kept as they are (no encoding applied); all other characters have to be percent encoded, including SPACE (to %20, instead of +); non-ASCII, UTF-8 characters should be percent encoded to 2 or more pieces (á to %C3%A1); percent encoded hexadecimal numbers have to be upper cased (e.g. a%c2%b1b to a%C2%B1b). Normalize empty query strings to an empty string. Sort query parameters by the encoded parameter names (ASCII order). Do not shorten parameter values if their parameter name is the same (key=B&key=A is a valid output); the order of parameters in a URL may be significant (this is not defined by the HTTP standard). Separate parameter names and values by = signs, and include = for empty values, too. Separate parameters by &. For example: 2.1.4. The Headers To canonicalize the headers, the following rules have to be followed: Lower case the header names. Separate header names and values by a :, with no spaces. Sort header names in alphabetical order (ASCII). Group headers with the same name into a single header, and separate their values by commas, without sorting. Trim header values, keeping all the spaces between quote characters ("). For example: 2.1.5. Signed Headers The list of headers to include when calculating the signature. Lower cased values of header names, separated by ;, like this: date;host 2.1.6. Body Checksum A checksum for the request body, aka the payload, has to be calculated. Escher supports SHA-256 and SHA-512 algorithms for checksum calculation. If the request contains no body, an empty string has to be used as the input for the hash algorithm. The selected algorithm will be added later to the authorization header, so the server will be able to use the same algorithm for validation. The checksum of the body has to be presented as a lower cased hexadecimal string, for example: 2.1.7. Concatenating the Canonicalized Parts All the steps above produce a row of data, except the headers canonicalization, as it creates one row per header. These have to be concatenated with LF (line feed, “\n”) characters into a string. An example: 2.2. Creating the Signature The next step is creating another string which will be directly used to calculate the signature. 2.2.1. Algorithm ID The algorithm ID comes from the algo_prefix (default value is ESR) and the algorithm used to calculate checksums during the signing process. The string algo_prefix, “HMAC”, and the algorithm name should be concatenated with dashes, like this: 2.2.2. Long Date The long date is the request date in the ISO 8601 basic format, like YYYYMMDD + T + HHMMSS + Z. Note that the basic format uses no punctuation. An example: This date has to be added later, too, as a date header (default header name is X-Escher-Date). 2.2.3.
Date and Credential Scope The next piece of information is the short date and the credential scope, concatenated with a / character. The short date is the date part of the request date, in ISO 8601 basic format; the credential scope is defined by the service. Example: This will be added later, too, as part of the authorization header (default header name is X-Escher-Auth). 2.2.4. Checksum of the Canonicalized Request Take the output of step 2.1.7., and create a checksum from the canonicalized request string. This checksum has to be represented as a lower cased hexadecimal string, too. Something like this will be the output: 2.2.5. Concatenating the Signing String Concatenate the outputs of steps 2.2. with LF characters. Example output: 2.2.6. The Signing Key The signing key is based on the algo_prefix, the client secret, the parts of the credential scope, and the request date. Take the algo_prefix and concatenate the client secret to it. First apply the HMAC algorithm to the request date, then apply the actual value on each of the credential scope parts (split at /). The end result is a binary signing key. Pseudo code: 2.2.7. Create the Signature The signature is created from the outputs of steps 2.2.5. (Signing String) and 2.2.6. (Signing Key). With the selected algorithm, create a checksum. It has to be represented as a lower cased hexadecimal string. Something like this will be the output: 2.3. Adding the Signature to the Request The final step of the Escher signing process is adding the Signature to the request. Escher adds a new header to the request; by default, the header name is X-Escher-Auth. The header value will include the algorithm ID (see 2.2.1.), the client key, the short date and the credential scope (see 2.2.3.), the signed headers string (see 2.1.5.) and finally the signature (see 2.2.7.). The values of these inputs have to be concatenated like this: 3. Presigning a URL The URL presigning process is very similar to the request signing procedure. But for a URL, there are no headers and no request body, so the calculation of the Signature is different. Also, the Signature cannot be added to the headers, but is included as query parameters. A significant difference is that presigning allows defining an expiration time. By default, it is 86400 secs, 24 hours. The current time and the expiration time will be included in the URL, and the server has to check if the URL is expired. 3.1. Canonicalizing the URL to Presign The canonicalization for URL presigning is the same process as for HTTP requests; in this section we will cover the differences only. 3.1.1. The HTTP method The HTTP method for presigned URLs is fixed to: For example: 3.1.3. The Query String The query is coming from the URL, but the algorithm, credentials, date, expiration time, and signed headers have to be added to the query parts. 3.1.4. The Headers A URL has no headers, so Escher creates the Host header based on the URL’s domain information, and adds it to the canonicalized request. For example: 3.1.5. Signed Headers It will be host, as that’s the only header included. Example:
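As a worked sketch of the signing-key derivation from section 2.2.6 and the final signature from 2.2.7, assuming SHA256 and the default ESR algo_prefix (the credential scope, date, and signing string below are illustrative values, not ones from this specification):

	package main

	import (
		"crypto/hmac"
		"crypto/sha256"
		"encoding/hex"
		"fmt"
		"strings"
	)

	// signingKey derives the binary signing key described in section 2.2.6:
	// start from algo_prefix + client secret, HMAC the short date, then fold
	// in each credential-scope part (split at /).
	func signingKey(algoPrefix, secret, shortDate, credentialScope string) []byte {
		key := []byte(algoPrefix + secret)
		for _, part := range append([]string{shortDate},
			strings.Split(credentialScope, "/")...) {
			mac := hmac.New(sha256.New, key)
			mac.Write([]byte(part))
			key = mac.Sum(nil)
		}
		return key
	}

	func main() {
		key := signingKey("ESR", "very-secret", "20240101", "eu/api/escher_request")

		// The final signature (2.2.7) is an HMAC of the signing string
		// (2.2.5) with this key, hex encoded in lower case.
		mac := hmac.New(sha256.New, key)
		mac.Write([]byte("signing string goes here"))
		fmt.Println(hex.EncodeToString(mac.Sum(nil)))
	}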
Package iam provides the client and types for making API requests to AWS Identity and Access Management. AWS Identity and Access Management (IAM) is a web service that you can use to manage users and user permissions under your AWS account. This guide provides descriptions of IAM actions that you can call programmatically. For general information about IAM, see AWS Identity and Access Management (IAM) (http://aws.amazon.com/iam/). For the user guide for IAM, see Using IAM (http://docs.aws.amazon.com/IAM/latest/UserGuide/). AWS provides SDKs that consist of libraries and sample code for various programming languages and platforms (Java, Ruby, .NET, iOS, Android, etc.). The SDKs provide a convenient way to create programmatic access to IAM and AWS. For example, the SDKs take care of tasks such as cryptographically signing requests (see below), managing errors, and retrying requests automatically. For information about the AWS SDKs, including how to download and install them, see the Tools for Amazon Web Services (http://aws.amazon.com/tools/) page. We recommend that you use the AWS SDKs to make programmatic API calls to IAM. However, you can also use the IAM Query API to make direct calls to the IAM web service. To learn more about the IAM Query API, see Making Query Requests (http://docs.aws.amazon.com/IAM/latest/UserGuide/IAM_UsingQueryAPI.html) in the Using IAM guide. IAM supports GET and POST requests for all actions. That is, the API does not require you to use GET for some actions and POST for others. However, GET requests are subject to the size limitation of a URL. Therefore, for operations that require larger sizes, use a POST request. Requests must be signed using an access key ID and a secret access key. We strongly recommend that you do not use your AWS account access key ID and secret access key for everyday work with IAM. You can use the access key ID and secret access key for an IAM user or you can use the AWS Security Token Service to generate temporary security credentials and use those to sign requests. To sign requests, we recommend that you use Signature Version 4 (http://docs.aws.amazon.com/general/latest/gr/signature-version-4.html). If you have an existing application that uses Signature Version 2, you do not have to update it to use Signature Version 4. However, some operations now require Signature Version 4. The documentation for operations that require version 4 indicates this requirement. For more information, see the following: AWS Security Credentials (http://docs.aws.amazon.com/general/latest/gr/aws-security-credentials.html). This topic provides general information about the types of credentials used for accessing AWS. IAM Best Practices (http://docs.aws.amazon.com/IAM/latest/UserGuide/IAMBestPractices.html). This topic presents a list of suggestions for using the IAM service to help secure your AWS resources. Signing AWS API Requests (http://docs.aws.amazon.com/general/latest/gr/signing_aws_api_requests.html). This set of topics walks you through the process of signing a request using an access key ID and secret access key. See https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08 for more information on this service. See the iam package documentation for more information. https://docs.aws.amazon.com/sdk-for-go/api/service/iam/ To use AWS Identity and Access Management with the SDK, use the New function to create a new service client. With that client you can make API requests to the service. These clients are safe to use concurrently.
See the SDK's documentation for more information on how to use the SDK. https://docs.aws.amazon.com/sdk-for-go/api/ See the aws.Config documentation for more information on configuring SDK clients. https://docs.aws.amazon.com/sdk-for-go/api/aws/#Config See the AWS Identity and Access Management client IAM for more information on creating a client for this service. https://docs.aws.amazon.com/sdk-for-go/api/service/iam/#New
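A minimal sketch of that pattern, assuming the v1 aws-sdk-go import paths; the region and the ListUsers call are illustrative only:

	package main

	import (
		"fmt"
		"log"

		"github.com/aws/aws-sdk-go/aws"
		"github.com/aws/aws-sdk-go/aws/session"
		"github.com/aws/aws-sdk-go/service/iam"
	)

	func main() {
		// Create a session; credentials and region are resolved from the
		// environment or shared config, per the SDK defaults.
		sess, err := session.NewSession(&aws.Config{Region: aws.String("us-east-1")})
		if err != nil {
			log.Fatal(err)
		}

		// New creates the IAM service client; it is safe for concurrent use.
		svc := iam.New(sess)

		// Example call: list up to ten users in the account.
		out, err := svc.ListUsers(&iam.ListUsersInput{MaxItems: aws.Int64(10)})
		if err != nil {
			log.Fatal(err)
		}
		for _, u := range out.Users {
			fmt.Println(aws.StringValue(u.UserName))
		}
	}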
Package egoscale is a mapping of the CloudStack API (http://cloudstack.apache.org/api.html) for Go. It has been designed against the Exoscale (https://www.exoscale.com/) infrastructure but should fit other CloudStack services.

To build a request, construct the adequate struct. This library expects a pointer for efficiency reasons only. The response is a struct corresponding to the data at stake; e.g. DeployVirtualMachine gives a VirtualMachine, as a pointer as well, to avoid big copies. The fields within the struct, however, are not pointers.

Find below some examples of how egoscale may be used to interact with a CloudStack endpoint, especially Exoscale itself. If anything feels odd or unclear, please let us know: https://github.com/exoscale/egoscale/issues

This example deploys a virtual machine while controlling the job status as it goes. It enables finer control over errors, e.g. HTTP timeouts, and eventually a way to kill the job off (from the client side).

As this library is mostly an HTTP client, you can reuse all the existing tools around it. Nota bene: when running the tests or the egoscale library via another tool, e.g. the exo cli (or the cs cli), the environment variable EXOSCALE_TRACE=prefix does the above configuration for you. As a developer using egoscale as a library, you'll find it more convenient to plug your favorite io.Writer as its Logger.

All the APIs available on the server are provided by the API Discovery plugin.

Security Groups provide a way to isolate traffic to VMs. Rules are added via the two Authorization commands. Security Groups also implement the generic List, Get and Delete interfaces (Listable, Gettable and Deletable). See: http://docs.cloudstack.apache.org/projects/cloudstack-administration/en/stable/networking_and_traffic.html#security-groups

A Zone corresponds to a Data Center. You may list them. Zone implements the Listable interface, which lets you perform a list in two different ways: the first exposes the underlying CloudStack request, while the second hides it so that you only manipulate the structs you are interested in.

An Elastic IP is a way to attach an IP address to many Virtual Machines. The API side of the story configures the external environment, like the routing. Some work is required within the machine to properly configure the interfaces. See: http://docs.cloudstack.apache.org/projects/cloudstack-administration/en/latest/networking_and_traffic.html#about-elastic-ips
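A minimal sketch of the request/response pattern described above, assuming a ListZones command with a matching ListZonesResponse struct as in the egoscale API; the endpoint and credentials are placeholders:

	package main

	import (
		"fmt"
		"log"

		"github.com/exoscale/egoscale"
	)

	func main() {
		// Substitute your own endpoint and credentials.
		cs := egoscale.NewClient("https://api.exoscale.com/compute", "API_KEY", "API_SECRET")

		// Build the request as a struct and pass a pointer to it.
		resp, err := cs.Request(&egoscale.ListZones{})
		if err != nil {
			log.Fatal(err)
		}

		// The response is the matching response struct.
		zones := resp.(*egoscale.ListZonesResponse)
		for _, z := range zones.Zone {
			fmt.Println(z.Name)
		}
	}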
Package ssh implements an SSH client and server. SSH is a transport security protocol, an authentication protocol and a family of application protocols. The most typical application-level protocol is a remote shell, and this is specifically implemented. However, the multiplexed nature of SSH is exposed to users who wish to support other protocols. References: This package does not fall under the stability promise of the Go language itself, so its API may be changed when pressing needs arise.
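For illustration, a minimal client sketch, assuming this is the golang.org/x/crypto/ssh package; the address, credentials and command are placeholders, and host key checking is disabled only to keep the sketch short:

	package main

	import (
		"fmt"
		"log"

		"golang.org/x/crypto/ssh"
	)

	func main() {
		// A minimal client config. Do not skip host key verification in
		// production; use ssh.FixedHostKey or a known_hosts callback.
		config := &ssh.ClientConfig{
			User:            "demo",
			Auth:            []ssh.AuthMethod{ssh.Password("password")},
			HostKeyCallback: ssh.InsecureIgnoreHostKey(),
		}

		client, err := ssh.Dial("tcp", "example.com:22", config)
		if err != nil {
			log.Fatal(err)
		}
		defer client.Close()

		// Each session runs one remote command over the multiplexed connection.
		session, err := client.NewSession()
		if err != nil {
			log.Fatal(err)
		}
		defer session.Close()

		out, err := session.Output("uname -a")
		if err != nil {
			log.Fatal(err)
		}
		fmt.Printf("%s", out)
	}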
Package sasl implements the Simple Authentication and Security Layer (SASL) as defined by RFC 4422. Most users of this package will only need to create a Negotiator using NewClient or NewServer and call its Step method repeatedly. Authors implementing SASL mechanisms other than the built-in ones will want to create a Mechanism struct, which will likely use the other methods on the Negotiator. Be advised: This API is still unstable and is subject to change.
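A minimal sketch of the client-side Step loop, assuming a PLAIN mechanism exposed as sasl.Plain and a Credentials option as in mellium.im/sasl; verify the exact signatures against the package documentation:

	package main

	import (
		"fmt"
		"log"

		"mellium.im/sasl"
	)

	func main() {
		// Create a client-side Negotiator for the PLAIN mechanism, with
		// credentials supplied lazily via the Credentials option.
		c := sasl.NewClient(
			sasl.Plain,
			sasl.Credentials(func() ([]byte, []byte, []byte) {
				return []byte("user"), []byte("pen(cil"), nil
			}),
		)

		// Step is called repeatedly, feeding in each server challenge,
		// until the negotiator reports that no more steps are needed.
		more, resp, err := c.Step(nil)
		if err != nil {
			log.Fatal(err)
		}
		fmt.Printf("more=%v initial response=%q\n", more, resp)
	}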
If you have decided on the prerequisites you can tell what RRs should be added or deleted. The next table shows the options you have and what functions to call.

A TSIG, or transaction signature, adds an HMAC TSIG record to each message sent. The supported algorithms include: HmacMD5, HmacSHA1, HmacSHA256 and HmacSHA512. Basic use pattern when querying with a TSIG name "axfr." (note that these key names must be fully qualified - as they are domain names) and the base64 secret "so6ZGir4GPAqINNh9U5c3A==": If an incoming message contains a TSIG record it MUST be the last record in the additional section (RFC 2845, section 3.2). This means that you should make the call to SetTsig last, right before executing the query. If you make any changes to the RRset after calling SetTsig() the signature will be incorrect.

When requesting a zone transfer with TSIG (almost all TSIG usage is when requesting zone transfers), this is the basic use pattern. In this example we request an AXFR for miek.nl. with the TSIG key named "axfr." and the secret "so6ZGir4GPAqINNh9U5c3A==", using the server 176.58.119.54: You can now read the records from the transfer as they come in. Each envelope is checked with TSIG. If something is not correct an error is returned.

Basic use pattern for validating and replying to a message that has TSIG set.

RFC 6895 sets aside a range of type codes for private use. This range is 65,280 - 65,534 (0xFF00 - 0xFFFE). When experimenting with new Resource Records these can be used, before requesting an official type code from IANA. See https://miek.nl/2014/September/21/idn-and-private-rr-in-go-dns/ for more information.

EDNS0 is an extension mechanism for the DNS defined in RFC 2671 and updated by RFC 6891. It defines a new RR type, the OPT RR, which is then completely abused. Basic use pattern for creating an (empty) OPT RR: The rdata of an OPT RR consists of a slice of EDNS0 (RFC 6891) interfaces. Currently only a few have been standardized: EDNS0_NSID (RFC 5001) and EDNS0_SUBNET (draft-vandergaast-edns-client-subnet-02). Note that these options may be combined in an OPT RR. Basic use pattern for a server to check if (and which) options are set:

SIG(0), from RFC 2931, works like TSIG, except that SIG(0) uses public key cryptography instead of the shared secret approach of TSIG. Supported algorithms: DSA, ECDSAP256SHA256, ECDSAP384SHA384, RSASHA1, RSASHA256 and RSASHA512. Signing subsequent messages in multi-message sessions is not implemented.
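A minimal sketch of the TSIG query pattern just described, using the key name and secret from the text; the choice of HmacSHA256, the server address and the github.com/miekg/dns import path are assumptions:

	package main

	import (
		"fmt"
		"log"
		"time"

		"github.com/miekg/dns"
	)

	func main() {
		c := new(dns.Client)
		// The client needs the shared secret, keyed by the fully
		// qualified TSIG key name.
		c.TsigSecret = map[string]string{"axfr.": "so6ZGir4GPAqINNh9U5c3A=="}

		m := new(dns.Msg)
		m.SetQuestion("miek.nl.", dns.TypeMX)
		// SetTsig must be the last change to the message, so that the
		// TSIG record is the final record in the additional section.
		m.SetTsig("axfr.", dns.HmacSHA256, 300, time.Now().Unix())

		in, _, err := c.Exchange(m, "127.0.0.1:53")
		if err != nil {
			log.Fatal(err)
		}
		fmt.Println(in)
	}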
Package memguard implements a secure software enclave for the storage of sensitive information in memory.

There are two main container objects exposed in this API. Enclave objects encrypt data and store the ciphertext, whereas LockedBuffers are more like guarded memory allocations. There is a limit on the maximum number of LockedBuffer objects that can exist at any one time, imposed by the system's mlock limits. There is no limit on Enclaves.

The general workflow is to store sensitive information in Enclaves when it is not immediately needed and decrypt it when and where it is needed. After use, the LockedBuffer should be destroyed.

If you need access to the data inside a LockedBuffer in a type not covered by any methods provided by this API, you can type-cast the allocation's memory to whatever type you want. This is of course an unsafe operation, and so care must be taken to ensure that the cast is valid and does not result in memory unsafety. Further examples of code and interesting use-cases can be found in the examples subpackage.

Several functions exist to make the mass purging of data very easy. It is recommended to make use of them when appropriate.

Core dumps are disabled by default. If you absolutely require them, you can enable them by using unix.Setrlimit to set RLIMIT_CORE to an appropriate value.
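A minimal sketch of that workflow, assuming the github.com/awnumar/memguard API (NewEnclave, Open, Destroy, Purge); the secret value is a placeholder:

	package main

	import (
		"fmt"
		"log"

		"github.com/awnumar/memguard"
	)

	func main() {
		// Purge all sensitive material before the process exits.
		defer memguard.Purge()

		// Seal the secret inside an Enclave while it is not needed;
		// NewEnclave encrypts the data and wipes the source slice.
		enclave := memguard.NewEnclave([]byte("swordfish"))

		// Decrypt it into a LockedBuffer only when it is needed...
		buf, err := enclave.Open()
		if err != nil {
			log.Fatal(err)
		}
		// ...and destroy the buffer as soon as you are done with it.
		defer buf.Destroy()

		fmt.Printf("secret is %d bytes long\n", len(buf.Bytes()))
	}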