Package sarama is a pure Go client library for dealing with Apache Kafka (versions 0.8 and later). It includes a high-level API for easily producing and consuming messages, and a low-level API for controlling bytes on the wire when the high-level API is insufficient. Usage examples for the high-level APIs are provided inline with their full documentation. To produce messages, use either the AsyncProducer or the SyncProducer. The AsyncProducer accepts messages on a channel and produces them asynchronously in the background as efficiently as possible; it is preferred in most cases. The SyncProducer provides a method which will block until Kafka acknowledges the message as produced. This can be useful but comes with two caveats: it will generally be less efficient, and the actual durability guarantees depend on the configured value of `Producer.RequiredAcks`. There are configurations where a message acknowledged by the SyncProducer can still sometimes be lost. To consume messages, use the Consumer. Note that Sarama's Consumer implementation does not currently support automatic consumer-group rebalancing and offset tracking. For Zookeeper-based tracking (Kafka 0.8.2 and earlier), the https://github.com/wvanbergen/kafka library builds on Sarama to add this support. For Kafka-based tracking (Kafka 0.9 and later), the https://github.com/bsm/sarama-cluster library builds on Sarama to add this support. For lower-level needs, the Broker and Request/Response objects permit precise control over each connection and message sent on the wire; the Client provides higher-level metadata management that is shared between the producers and the consumer. The Request/Response objects and properties are mostly undocumented, as they line up exactly with the protocol fields documented by Kafka at https://cwiki.apache.org/confluence/display/KAFKA/A+Guide+To+The+Kafka+Protocol. Metrics are exposed through the https://github.com/rcrowley/go-metrics library in a local registry. Both broker-related and producer-related metrics are gathered; note that we do not gather specific metrics for seed brokers, but they are included in the "all brokers" metrics.
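As a brief illustration of the producer API described above, here is a minimal sketch using the SyncProducer; the broker address and topic are placeholders.

	package main

	import (
		"log"

		"github.com/Shopify/sarama"
	)

	func main() {
		config := sarama.NewConfig()
		config.Producer.RequiredAcks = sarama.WaitForAll
		config.Producer.Return.Successes = true // required by the SyncProducer

		// "localhost:9092" and "my_topic" are placeholders.
		producer, err := sarama.NewSyncProducer([]string{"localhost:9092"}, config)
		if err != nil {
			log.Fatal(err)
		}
		defer producer.Close()

		partition, offset, err := producer.SendMessage(&sarama.ProducerMessage{
			Topic: "my_topic",
			Value: sarama.StringEncoder("hello"),
		})
		if err != nil {
			log.Fatal(err)
		}
		log.Printf("stored at partition %d, offset %d", partition, offset)
	}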
Package libvirt-go-xml defines structs for parsing libvirt XML schemas. The libvirt API uses XML schemas/documents to describe the configuration of many of its managed objects. Thus when using the libvirt-go package, it is often necessary to either parse or format XML documents. This package defines a set of Go structs which have been annotated for use with the encoding/xml API to manage libvirt XML documents. Example creating a domain XML document from configuration: Example parsing a domain XML document, in combination with libvirt-go
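A minimal sketch of the first example (creating a domain XML document from configuration); the import path and the domain settings are illustrative, so check the ones used by your vendored copy of the library.

	package main

	import (
		"fmt"

		libvirtxml "github.com/libvirt/libvirt-go-xml"
	)

	func main() {
		// Build a domain configuration in Go and marshal it to XML.
		domcfg := &libvirtxml.Domain{
			Type: "kvm",
			Name: "demo",
			Memory: &libvirtxml.DomainMemory{
				Value: 512,
				Unit:  "MiB",
			},
		}

		xmldoc, err := domcfg.Marshal()
		if err != nil {
			panic(err)
		}
		fmt.Println(xmldoc)
	}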
Package irma contains generic IRMA structs and logic of use to all IRMA participants. It parses irma_configuration folders to scheme managers, issuers, credential types and public keys; it contains various messages from the IRMA protocol; it parses IRMA metadata attributes; and it contains attribute and credential verification logic.
Package webgo is a lightweight framework for building web apps. It has a multiplexer, a middleware plugging mechanism and context management of its own. The primary goal of webgo is to get out of the developer's way as much as possible, i.e. it does not force you to build your app in any particular pattern; instead it just helps you get the trivial things done faster and more easily, e.g. 1. getting named URI parameters, 2. a multiplexer for regex matching of URIs and the like, 3. injecting special app-level configurations or other such objects into the request context as required.
Package rollbar is a Golang Rollbar client that makes it easy to report errors to Rollbar with full stacktraces. Basic Usage: This package is designed to be used via the functions exposed at the root of the `rollbar` package. These work by managing a single instance of the `Client` type that is configurable via the setter functions at the root of the package. If you wish for more fine-grained control over the client or you wish to have multiple independent clients then you can create and manage your own instances of the `Client` type. We provide two implementations of the `Transport` interface, `AsyncTransport` and `SyncTransport`. These manage the communication with the network layer. The Async version uses a buffered channel to communicate with the Rollbar API in a separate goroutine. The Sync version is fully synchronous. It is possible to create your own `Transport` and configure a Client to use your preferred implementation. Go does not provide a mechanism for handling all panics automatically, therefore we provide two functions `Wrap` and `WrapAndWait` to make working with panics easier. They both take a function and then report to Rollbar if that function panics. They use the recover mechanism to capture the panic, and therefore if you wish your process to have the normal behaviour on panic (i.e. to crash), you will need to re-panic the result of calling `Wrap`. For example, see the sketch at the end of this section. The above pattern of calling `Wrap(...)` and then `Wait(...)` can be combined via `WrapAndWait(...)`. When `WrapAndWait(...)` returns, if there was a panic it has already been sent to the Rollbar API. The error is still returned by this function if there is one. Due to the nature of the `error` type in Go, it can be difficult to attribute errors to their origin without doing some extra work. To account for this, we define the interface `CauseStacker`: One can implement this interface for custom Error types to be able to build up a chain of stack traces. In order to get the correct stacks, callers must call BuildStack on their own at the time that the cause is wrapped. This is the least intrusive mechanism for gathering this information due to the decisions made by the Go runtime to not track this information.
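A minimal sketch of the panic-handling pattern described above, assuming the package-level setters and `WrapAndWait`; the token value and the function that panics are placeholders.

	package main

	import (
		"github.com/rollbar/rollbar-go"
	)

	func mightPanic() {
		panic("something went wrong")
	}

	func main() {
		rollbar.SetToken("MY_ROLLBAR_TOKEN") // placeholder token
		rollbar.SetEnvironment("production")

		if r := rollbar.WrapAndWait(func() {
			mightPanic()
		}); r != nil {
			// The panic has already been reported to Rollbar; re-panic to
			// keep the usual crash-on-panic behaviour.
			panic(r)
		}
	}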
Package entityresolution provides the API client, operations, and parameter types for AWS EntityResolution. Welcome to the Entity Resolution API Reference. Entity Resolution is an Amazon Web Services service that provides pre-configured entity resolution capabilities that enable developers and analysts at advertising and marketing companies to build an accurate and complete view of their consumers. With Entity Resolution, you can match source records containing consumer identifiers, such as name, email address, and phone number. This is true even when these records have incomplete or conflicting identifiers. For example, Entity Resolution can effectively match a source record from a customer relationship management (CRM) system with a source record from a marketing system containing campaign information. To learn more about Entity Resolution concepts, procedures, and best practices, see the Entity Resolution User Guide.
Package elasticache provides the client and types for making API requests to Amazon ElastiCache. Amazon ElastiCache is a web service that makes it easier to set up, operate, and scale a distributed cache in the cloud. With ElastiCache, customers get all of the benefits of a high-performance, in-memory cache with less of the administrative burden involved in launching and managing a distributed cache. The service makes setup, scaling, and cluster failure handling much simpler than in a self-managed cache deployment. In addition, through integration with Amazon CloudWatch, customers get enhanced visibility into the key performance statistics associated with their cache and can receive alarms if a part of their cache runs hot. See https://docs.aws.amazon.com/goto/WebAPI/elasticache-2015-02-02 for more information on this service. See elasticache package documentation for more information. https://docs.aws.amazon.com/sdk-for-go/api/service/elasticache/ To contact Amazon ElastiCache with the SDK, use the New function to create a new service client. With that client you can make API requests to the service. These clients are safe to use concurrently. See the SDK's documentation for more information on how to use the SDK. https://docs.aws.amazon.com/sdk-for-go/api/ See aws.Config documentation for more information on configuring SDK clients. https://docs.aws.amazon.com/sdk-for-go/api/aws/#Config See the Amazon ElastiCache client ElastiCache for more information on creating a client for this service. https://docs.aws.amazon.com/sdk-for-go/api/service/elasticache/#New
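For illustration, a minimal sketch of creating an ElastiCache client with the New function and making a request; the region is a placeholder and credentials are assumed to come from the environment.

	package main

	import (
		"fmt"
		"log"

		"github.com/aws/aws-sdk-go/aws"
		"github.com/aws/aws-sdk-go/aws/session"
		"github.com/aws/aws-sdk-go/service/elasticache"
	)

	func main() {
		sess := session.Must(session.NewSession(&aws.Config{
			Region: aws.String("us-east-1"), // placeholder region
		}))
		svc := elasticache.New(sess)

		out, err := svc.DescribeCacheClusters(&elasticache.DescribeCacheClustersInput{})
		if err != nil {
			log.Fatal(err)
		}
		fmt.Println(out.CacheClusters)
	}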
Package session provides a convenient way to store session data (such as a user ID) securely in a web browser cookie or other authentication token. Cookie values generated by this package use modern authenticated encryption, so they can't be inspected or altered by client processes. Most users of this package will use functions Set and Get, which manage cookies directly. An analogous pair of functions, Encode and Decode, help when the session data will be stored somewhere other than a browser cookie; for example, an API token configured by hand in an API client process.
Package configure is an easy-to-use multi-layer configuration system. Examples can be found in the example folder (http://github.com/paked/configure/blob/master/examples/) as well as a getting started guide in the main README file (http://github.com/paked/configure). configure makes use of Checkers, which are used to retrieve values from their respective data sources. There are three built-in Checkers: Environment, Flag and JSON. Environment retrieves environment variables. Flag retrieves variables within the flags of a command. JSON retrieves values from a JSON file/blob. Checkers can be essentially thought of as "middleware for configuration"; in fact parts of the package API were inspired by negroni (https://github.com/codegangsta/negroni, the awesome net/http middleware manager) and the standard library's flag package. It is very easy to create your own Checkers; all they have to do is satisfy the Checker interface: that is, an Int method, a String method and a Bool method. These functions are used to retrieve their respective data types. A Setup method is also required, where the Checker should initialize itself and throw any errors. If you do create your own Checkers I would be more than happy to add a link to the README in the GitHub repository.
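As a sketch of what a custom Checker might look like: the method set below (Setup plus Int, String and Bool lookups) follows the description above, but the exact signatures are assumptions, so check the package's Checker interface before relying on them.

	package main

	import (
		"fmt"
		"strconv"
	)

	// staticChecker serves values from a fixed map; it is only meant to
	// illustrate the shape of a Checker.
	type staticChecker struct {
		values map[string]string
	}

	func (c staticChecker) Setup() error { return nil }

	func (c staticChecker) Int(name string) (int64, error) {
		v, ok := c.values[name]
		if !ok {
			return 0, fmt.Errorf("no value for %q", name)
		}
		return strconv.ParseInt(v, 10, 64)
	}

	func (c staticChecker) String(name string) (string, error) {
		v, ok := c.values[name]
		if !ok {
			return "", fmt.Errorf("no value for %q", name)
		}
		return v, nil
	}

	func (c staticChecker) Bool(name string) (bool, error) {
		v, ok := c.values[name]
		if !ok {
			return false, fmt.Errorf("no value for %q", name)
		}
		return strconv.ParseBool(v)
	}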
Package peer provides a common base for creating and managing Bitcoin network peers. This package builds upon the wire package, which provides the fundamental primitives necessary to speak the bitcoin wire protocol, in order to simplify the process of creating fully functional peers. In essence, it provides a common base for creating concurrent safe fully validating nodes, Simplified Payment Verification (SPV) nodes, proxies, etc. A quick overview of the major features peer provides is as follows:

 - Provides a basic concurrent safe bitcoin peer for handling bitcoin communications via the peer-to-peer protocol
 - Full duplex reading and writing of bitcoin protocol messages
 - Automatic handling of the initial handshake process including protocol version negotiation
 - Asynchronous message queuing of outbound messages with optional channel for notification when the message is actually sent
 - Flexible peer configuration: the caller is responsible for creating outgoing connections and listening for incoming connections so they have flexibility to establish connections as they see fit (proxies, etc); user agent name and version; bitcoin network; service support signalling (full nodes, bloom filters, etc); maximum supported protocol version; ability to register callbacks for handling bitcoin protocol messages
 - Inventory message batching and send trickling with known inventory detection and avoidance
 - Automatic periodic keep-alive pinging and pong responses
 - Random nonce generation and self connection detection
 - Proper handling of bloom filter related commands when the caller does not specify the related flag to signal support: disconnects the peer when the protocol version is high enough; does not invoke the related callbacks for older protocol versions
 - Snapshottable peer statistics such as the total number of bytes read and written, the remote address, user agent, and negotiated protocol version
 - Helper functions for pushing addresses, getblocks, getheaders, and reject messages (these could all be sent manually via the standard message output function, but the helpers provide additional nice functionality such as duplicate filtering and address randomization)
 - Ability to wait for shutdown/disconnect
 - Comprehensive test coverage

All peer configuration is handled with the Config struct. This allows the caller to specify things such as the user agent name and version, the bitcoin network to use, which services it supports, and callbacks to invoke when bitcoin messages are received. See the documentation for each field of the Config struct for more details. A peer can either be inbound or outbound. The caller is responsible for establishing the connection to remote peers and listening for incoming peers. This provides high flexibility for things such as connecting via proxies, acting as a proxy, creating bridge peers, choosing whether to listen for inbound peers, etc. The NewOutboundPeer and NewInboundPeer functions must be followed by calling Connect with a net.Conn instance to the peer. This will start all async I/O goroutines and initiate the protocol negotiation process. Once finished with the peer, call Disconnect to disconnect from the peer and clean up all resources. WaitForDisconnect can be used to block until peer disconnection and resource cleanup has completed. In order to do anything useful with a peer, it is necessary to react to bitcoin messages.
This is accomplished by creating an instance of the MessageListeners struct with the callbacks to be invoked specified and setting the Listeners field of the Config struct specified when creating a peer to it. For convenience, a callback hook for all of the currently supported bitcoin messages is exposed which receives the peer instance and the concrete message type. In addition, a hook for OnRead is provided so even custom message types for which this package does not directly provide a hook, as long as they implement the wire.Message interface, can be used. Finally, the OnWrite hook is provided, which in conjunction with OnRead, can be used to track server-wide byte counts. It is often useful to use closures which encapsulate state when specifying the callback handlers. This provides a clean method for accessing that state when callbacks are invoked. The QueueMessage function provides the fundamental means to send messages to the remote peer. As the name implies, this employs a non-blocking queue. A done channel which will be notified when the message is actually sent can optionally be specified. There are certain message types which are better sent using other functions which provide additional functionality. Of special interest are inventory messages. Rather than manually sending MsgInv messages via QueueMessage, the inventory vectors should be queued using the QueueInventory function. It employs batching and trickling along with intelligent known remote peer inventory detection and avoidance through the use of a most-recently-used algorithm. In addition to the bare QueueMessage function previously described, the PushAddrMsg, PushGetBlocksMsg, PushGetHeadersMsg, and PushRejectMsg functions are provided as a convenience. While it is of course possible to create and send these messages manually via QueueMessage, these helper functions provide additional useful functionality that is typically desired. For example, the PushAddrMsg function automatically limits the addresses to the maximum number allowed by the message and randomizes the chosen addresses when there are too many. This allows the caller to simply provide a slice of known addresses, such as that returned by the addrmgr package, without having to worry about the details. Next, the PushGetBlocksMsg and PushGetHeadersMsg functions will construct proper messages using a block locator and ignore back-to-back duplicate requests. Finally, the PushRejectMsg function can be used to easily create and send an appropriate reject message based on the provided parameters and optionally provides a flag to cause it to block until the message is actually sent. A snapshot of the current peer statistics can be obtained with the StatsSnapshot function. This includes statistics such as the total number of bytes read and written, the remote address, user agent, and negotiated protocol version. This package provides extensive logging capabilities through the UseLogger function which allows a btclog.Logger to be specified. For example, logging at the debug level provides summaries of every message sent and received, and logging at the trace level provides full dumps of parsed messages as well as the raw message bytes using a format similar to hexdump -C. This package supports all BIPs supported by the wire package. (https://godoc.org/github.com/p9c/pod/wire#hdr-Bitcoin_Improvement_Proposals) This example demonstrates the basic process for initializing and creating an outbound peer. Peers negotiate by exchanging version and verack messages. 
For demonstration, a simple handler for the version message is attached to the peer.
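A minimal sketch of that outbound-peer example, assuming btcsuite-style peer, wire and chaincfg packages; the import paths, simnet address and listener signature are assumptions and may differ between versions of this module.

	package main

	import (
		"fmt"
		"log"
		"net"

		"github.com/btcsuite/btcd/chaincfg"
		"github.com/btcsuite/btcd/peer"
		"github.com/btcsuite/btcd/wire"
	)

	func main() {
		// Configure the peer with a user agent, network parameters and a
		// listener for the remote version message.
		peerCfg := &peer.Config{
			UserAgentName:    "peerexample",
			UserAgentVersion: "1.0.0",
			ChainParams:      &chaincfg.SimNetParams,
			Listeners: peer.MessageListeners{
				OnVersion: func(p *peer.Peer, msg *wire.MsgVersion) {
					fmt.Println("outbound: received version")
				},
			},
		}

		p, err := peer.NewOutboundPeer(peerCfg, "127.0.0.1:18555")
		if err != nil {
			log.Fatal(err)
		}

		// The caller establishes the connection and hands it to the peer,
		// following the Connect call described above.
		conn, err := net.Dial("tcp", p.Addr())
		if err != nil {
			log.Fatal(err)
		}
		p.Connect(conn)

		// Disconnect and block until cleanup has completed.
		p.Disconnect()
		p.WaitForDisconnect()
	}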
Inertia is the command line interface that helps you set up your remote for continuous deployment and allows you to manage your deployment through configuration options and various commands. This document contains basic usage instructions, but a new usage guide is also available here: https://inertia.ubclaunchpad.com/ Inertia can be installed in several ways: Users of other platforms can install the Inertia CLI from the Releases page, found here: https://github.com/ubclaunchpad/inertia/releases/latest To help with usage, most relevant documentation can be seen by using the --help flag on any command: Documentation can also be triggered by simply entering a command without the prerequisite arguments or additional commands: Inertia has two "core" sets of commands - one that primarily handles local configuration, and one that allows you to control your remote VPS instances and their associated deployments. For local configuration, most commands will build off of the root "inertia ..." command. For example, a typical set of commands to set up a project might look like: The other set of commands are based on a remote VPS configuration, and the available commands can be seen by running: In the previous example, the next steps to set up a deployment might be: Some of these commands offer a --stream flag that allows you to view realtime log feedback from the daemon. More documentation on Inertia, how it works, and how to use it can be found in the project repository: https://github.com/ubclaunchpad/inertia/tree/master
Package gourdiantoken provides a JWT-based token management system for access and refresh tokens. Features: - Creation and verification of access and refresh tokens - Configurable token expiration and claims - Support for both symmetric key signing (HMAC) and asymmetric key signing (RSA, ECDSA) - Integration with UUID for token and session IDs - Customizable token claims (e.g., roles, permissions)
Package gofidential provides a simple, flexible, and scalable interface for loading .env files and managing different environments. It exports the gofidential.Load function, which loads a .env file into a struct, and the gofidential.Environment struct, which allows users to configure different environments. This package is ideal for applications of all sizes, from small hobby projects to full-blown enterprise apps, especially applications that require a robust secrets loader that supports environment-specific configurations (such as development, testing, and production) and can also scale seamlessly. It also includes built-in type validation to ensure the environment variables meet the required value criteria. The library enforces strict parsing rules, which make .env files much more predictable and less error-prone. The parsing rules are defined by the library's own standard, the GoFidential Standard v1 (GFSv1), which aims to provide a more predictable, best-practice set of rules for writing .env files. Example usage:
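The package's exact signatures are not reproduced here, so the following is only a hypothetical sketch of the Load call described above; the import path, struct tags, field names and Load signature are illustrative assumptions rather than the confirmed API.

	package main

	import (
		"fmt"
		"log"

		"github.com/gofidential/gofidential" // assumed import path
	)

	// Config is a hypothetical struct that Load is assumed to fill from the
	// active environment's .env file.
	type Config struct {
		DatabaseURL string `env:"DATABASE_URL"`
		Port        int    `env:"PORT"`
	}

	func main() {
		var cfg Config
		if err := gofidential.Load(&cfg); err != nil { // assumed signature
			log.Fatal(err)
		}
		fmt.Println(cfg.DatabaseURL, cfg.Port)
	}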
Package transfermanager implements the Amazon S3 Transfer Manager, a high-level S3 client library, and is the new iteration of the original feature/s3/manager module implemented for the AWS SDK for Go v2. This module is currently in a BETA release state. It is not subject to the same backwards-compatibility guarantees provided by the generally-available (GA) AWS SDK for Go v2. Features may be added or removed without warning, and APIs may break. For the current GA transfer manager for the AWS SDK for Go v2, see feature/s3/manager. The package implements a high-level S3 client; it also exposes several opt-in hooks that configure an http.Transport and may convey performance/reliability enhancements in certain user environments.
Package pgx is a PostgreSQL database driver. pgx provides lower level access to PostgreSQL than the standard database/sql. It remains as similar to the database/sql interface as possible while providing better speed and access to PostgreSQL specific features. Import github.com/jackc/pgx/stdlib to use pgx as a database/sql compatible driver. pgx implements Query and Scan in the familiar database/sql style. pgx also implements QueryRow in the same style as database/sql. Use Exec to execute a query that does not return a result set. Connection pool usage is explicit and configurable. In pgx, a connection can be created and managed directly, or a connection pool with a configurable maximum number of connections can be used. The connection pool offers an after connect hook that allows every connection to be automatically set up before being made available in the connection pool. It delegates methods such as QueryRow to an automatically checked out and released connection so you can avoid manually acquiring and releasing connections when you do not need that level of control. pgx maps between all common base types directly between Go and PostgreSQL. In particular: pgx can map nulls in two ways. The first is the pgtype package, which provides types that have a data field and a status field. They work in a similar fashion to database/sql. The second is to use a pointer to a pointer. pgx maps between int16, int32, int64, float32, float64, and string Go slices and the equivalent PostgreSQL array type. Go slices of native types do not support nulls, so if a PostgreSQL array that contains a null is read into a native Go slice an error will occur. The pgtype package includes many more array types for PostgreSQL types that do not directly map to native Go types. pgx includes built-in support to marshal and unmarshal between Go types and the PostgreSQL JSON and JSONB types. pgx encodes from net.IPNet to and from inet and cidr PostgreSQL types. In addition, as a convenience pgx will encode from a net.IP; it will assume a /32 netmask for IPv4 and a /128 for IPv6. pgx includes support for the common data types like integers, floats, strings, dates, and times that have direct mappings between Go and SQL. In addition, pgx uses the github.com/jackc/pgx/pgtype library to support more types. See the documentation for that library for instructions on how to implement custom types. See example_custom_type_test.go for an example of a custom type for the PostgreSQL point type. pgx also includes support for custom types implementing the database/sql.Scanner and database/sql/driver.Valuer interfaces. If pgx cannot natively encode a type and that type is a renamed type (e.g. type MyTime time.Time), pgx will attempt to encode the underlying type. While this is usually desired behavior, it can produce surprising behavior if one of the underlying type and the renamed type implements database/sql interfaces and the other implements pgx interfaces. It is recommended that this situation be avoided by implementing pgx interfaces on the renamed type. []byte passed as arguments to Query, QueryRow, and Exec are passed unmodified to PostgreSQL. Transactions are started by calling Begin or BeginEx. The BeginEx variant can create a transaction with a specified isolation level. Use CopyFrom to efficiently insert multiple rows at a time using the PostgreSQL copy protocol. CopyFrom accepts a CopyFromSource interface. If the data is already in a [][]interface{} use CopyFromRows to wrap it in a CopyFromSource interface. 
Or implement CopyFromSource to avoid buffering the entire data set in memory. CopyFrom can be faster than an insert with as few as 5 rows. pgx can listen to the PostgreSQL notification system with the WaitForNotification function. It takes a maximum time to wait for a notification. The pgx ConnConfig struct has a TLSConfig field. If this field is nil, then TLS will be disabled. If it is present, then it will be used to configure the TLS connection. This allows total configuration of the TLS connection. pgx has never explicitly supported Postgres < 9.6's `ssl_renegotiation` option. As of v3.3.0, it doesn't send `ssl_renegotiation: 0` either to support Redshift (https://github.com/jackc/pgx/pull/476). If you need TLS Renegotiation, consider supplying `ConnConfig.TLSConfig` with a non-zero `Renegotiation` value and if it's not the default on your server, set `ssl_renegotiation` via `ConnConfig.RuntimeParams`. pgx defines a simple logger interface. Connections optionally accept a logger that satisfies this interface. Set LogLevel to control logging verbosity. Adapters for github.com/inconshreveable/log15, github.com/sirupsen/logrus, and the testing log are provided in the log directory.
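As a brief illustration of the Query/QueryRow style described above, here is a minimal sketch; the connection parameters and table are placeholders.

	package main

	import (
		"fmt"
		"log"

		"github.com/jackc/pgx"
	)

	func main() {
		conn, err := pgx.Connect(pgx.ConnConfig{
			Host:     "localhost", // placeholder connection settings
			User:     "postgres",
			Database: "mydb",
		})
		if err != nil {
			log.Fatal(err)
		}
		defer conn.Close()

		// QueryRow in the familiar database/sql style.
		var name string
		var weight int64
		err = conn.QueryRow("select name, weight from widgets where id=$1", 42).Scan(&name, &weight)
		if err != nil {
			log.Fatal(err)
		}
		fmt.Println(name, weight)
	}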
Package ecr provides the client and types for making API requests to Amazon EC2 Container Registry. Amazon Elastic Container Registry (Amazon ECR) is a managed Docker registry service. Customers can use the familiar Docker CLI to push, pull, and manage images. Amazon ECR provides a secure, scalable, and reliable registry. Amazon ECR supports private Docker repositories with resource-based permissions using IAM so that specific users or Amazon EC2 instances can access repositories and images. Developers can use the Docker CLI to author and manage images. See https://docs.aws.amazon.com/goto/WebAPI/ecr-2015-09-21 for more information on this service. See ecr package documentation for more information. https://docs.aws.amazon.com/sdk-for-go/api/service/ecr/ To contact Amazon EC2 Container Registry with the SDK, use the New function to create a new service client. With that client you can make API requests to the service. These clients are safe to use concurrently. See the SDK's documentation for more information on how to use the SDK. https://docs.aws.amazon.com/sdk-for-go/api/ See aws.Config documentation for more information on configuring SDK clients. https://docs.aws.amazon.com/sdk-for-go/api/aws/#Config See the Amazon EC2 Container Registry client ECR for more information on creating a client for this service. https://docs.aws.amazon.com/sdk-for-go/api/service/ecr/#New
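For illustration, a minimal sketch of creating an ECR client with New and listing repositories; credentials and region are assumed to come from the environment.

	package main

	import (
		"fmt"
		"log"

		"github.com/aws/aws-sdk-go/aws/session"
		"github.com/aws/aws-sdk-go/service/ecr"
	)

	func main() {
		sess := session.Must(session.NewSession())
		svc := ecr.New(sess)

		out, err := svc.DescribeRepositories(&ecr.DescribeRepositoriesInput{})
		if err != nil {
			log.Fatal(err)
		}
		for _, repo := range out.Repositories {
			fmt.Println(*repo.RepositoryName)
		}
	}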
Package juice is a lightweight and efficient SQL mapping framework for Go. It provides a robust and thread-safe database connection management system, with support for multiple database drivers and connection pooling with configurable parameters. It offers a simple way to execute SQL queries and map results to Go structs, with support for both XML-based SQL configurations and raw SQL statements, and it includes a set of utilities for mapping database query results to Go data structures. For more information and examples, visit: https://github.com/go-juicedev/juice
Dnsclay implements a DNS server that translates DNS UPDATE (RFC 2136) and DNS AXFR (RFC 5936, zone transfers) requests to the many custom cloud DNS operator APIs for managing DNS records/zones. Dnsclay keeps a local copy of the records, periodically synchronizes its copy with authoritative data at the cloud DNS operator, and sends DNS NOTIFY (RFC 1996) messages to configured listeners when any records change. Dnsclay also has a web interface for managing the configured zones, and for viewing and editing records. Most cloud DNS operators implement their own custom APIs for changing DNS records. Application developers are tempted to add support for long lists of those custom APIs to their applications so they can make automated DNS changes (even just for handling ACME verification through DNS). This is time-consuming and error-prone. Developers can instead settle on the standard DNS interfaces with UPDATE/AXFR/NOTIFY, talking either directly to DNS servers that implement them (like BIND, Knot), or talking to dnsclay which does the translating. Dnsclay implements TLS with the option for client certificate authentication (mutual TLS) based on public keys (ignoring certificate name/expiration/constraints, keeping it simple). DNS TSIG (RFC 8945) is also supported. Dnsclay helps diagnose errors by returning error responses with Extended DNS Errors (RFC 8914) to requests with EDNS0. Dnsclay does not answer regular DNS queries for records (recursive or authoritative), with the exception of giving authoritative answers to SOA queries. Clients can use this to check if the zone has been updated before deciding to do an AXFR of the full zone. One of the implemented backend providers, "rfc2136", connects to DNS servers implementing the standard DNS UPDATE/AXFR protocols, making dnsclay a web-based zone editor for standard DNS servers. Like secondary DNS servers, dnsclay periodically fetches the SOA record from authoritative name servers, and does an AXFR if the zone serial changes. Dnsclay also periodically does a full sync regardless of SOA serial, since some DNS operators don't change the serial when a zone changes. In such cases, dnsclay will keep track of its own serial, so its clients can properly detect zone changes. The "refresh interval" from the SOA record is not used, since it is often configured to work only with the setup of the primary/secondary servers of the DNS operator. After a change to a zone, either because of DNS UPDATE through dnsclay or by dnsclay detecting a record change at the DNS operator, dnsclay will temporarily shorten the interval at which it checks again for new updates, speculating that more changes are coming. Timely notification of DNS record changes is useful during lock-step changes like key rollovers. Cloud DNS operators typically don't have a mechanism to notify applications of changes to records. DNS UPDATE/AXFR/NOTIFY may look relatively complicated to application developers interested in making automated DNS changes. They may be expecting an HTTP/JSON API. If one is standardized, dnsclay could implement it. Changes in a DNS UPDATE request must be applied atomically: either all the changes in a request must be applied, or none. Dnsclay cannot implement this requirement for all requests. With the libdns API, records cannot be added and removed atomically. Cloud DNS operators may have unexpected limitations. If standard DNS resource record types are not implemented, adding them may result in an error. 
The dnsclay server does not process multiple messages on a single TCP connection in parallel. It reads a request, processes it and writes a response, and then starts on the next request. Multiple connections, and UDP packets, are handled in parallel. The following providers are implemented in dnsclay, with community-provided implementations maintained at https://github.com/libdns:
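To show what talking RFC 2136 to dnsclay can look like from Go, here is a minimal sketch using the github.com/miekg/dns library (the client library, record, zone and listen address are illustrative choices, not part of dnsclay itself).

	package main

	import (
		"log"

		"github.com/miekg/dns"
	)

	func main() {
		// Record to add; the zone and server address are placeholders.
		rr, err := dns.NewRR(`_acme-challenge.example.org. 60 IN TXT "token-value"`)
		if err != nil {
			log.Fatal(err)
		}

		m := new(dns.Msg)
		m.SetUpdate("example.org.")
		m.Insert([]dns.RR{rr})

		c := &dns.Client{Net: "tcp"}
		resp, _, err := c.Exchange(m, "127.0.0.1:53")
		if err != nil {
			log.Fatal(err)
		}
		if resp.Rcode != dns.RcodeSuccess {
			log.Fatalf("update refused: %s", dns.RcodeToString[resp.Rcode])
		}
	}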
Package gizmo is a toolkit that provides packages to put together server and pubsub daemons with the following features: The `config` packages The `config` package contains a handful of useful functions to load configuration structs from JSON files, JSON blobs in Consul k/v or environment variables. The subpackages contain structs meant for managing common configuration options and credentials. There are currently configs for: The package also has a generic `Config` type in the `config/combined` subpackage that contains all of the above types. It's meant to be a 'catch all' convenience struct that many applications should be able to use. The `server` package This package is the bulk of the toolkit and relies on `server.Config` for managing any `Server` implementations. A server must implement the following interface: The package offers 2 server implementations: `SimpleServer`, which is capable of handling basic HTTP and JSON requests via 5 of the available `Service` implementations: `SimpleService`, `JSONService`, `ContextService`, `MixedService` and `MixedContextService`. A Service and these implementations will be defined below. `RPCServer`, which is capable of serving a gRPC server on one port and JSON endpoints on another. This kind of server can only handle the `RPCService` implementation. The `Service` interface is minimal to allow for maximum flexibility: The 5 service types that are accepted and hostable on the `SimpleServer`: Where `JSONEndpoint`, `JSONContextEndpoint`, `ContextHandler` and `ContextHandlerFunc` are defined as: Also, the one service type that works with an `RPCServer`: The `Middleware(..)` functions offer each service a 'hook' to wrap each of its endpoints. This may be handy for adding additional headers or context to the request. This is also the point where other, third-party middleware could easily be plugged in (e.g. OAuth, tracing, metrics, logging, etc.) The `pubsub` package contains two generic interfaces for publishing data to queues and subscribing and consuming data from those queues. Where a `SubscriberMessage` is an interface that gives implementations a hook for acknowledging/deleting messages. Take a look at the docs for each implementation in `pubsub` to see how they behave. There are currently 3 implementations of each type of `pubsub` interface: For pubsub via Amazon's SNS/SQS, you can use the `pubsub/aws` package. For pubsub via Google's Pubsub, you can use the `pubsub/gcp` package. For pubsub via Kafka topics, you can use the `pubsub/kafka` package. For publishing via HTTP, you can use the `pubsub/http` package. The `pubsub/pubsubtest` package This package contains 'test' implementations of the `pubsub.Publisher` and `pubsub.Subscriber` interfaces that will allow developers to easily mock out and test their `pubsub` implementations: The `web` package contains a handful of very useful functions for parsing types from request queries and payloads. For examples of how to use the gizmo `server` and `pubsub` packages, take a look at the 'examples' subdirectory. The Gizmo logo was based on the Go mascot designed by Renée French and copyrighted under the Creative Commons Attribution 3.0 license.
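A minimal sketch of a `SimpleServer` service, assuming a v1-style package API (`server.Init`, `server.Register`, `server.Run`) and the `SimpleService` endpoint map described above; the config literal and route are placeholders, and the exact Config location has moved between gizmo versions.

	package main

	import (
		"fmt"
		"log"
		"net/http"

		"github.com/NYTimes/gizmo/server"
	)

	type helloService struct{}

	func (s *helloService) Prefix() string { return "/svc/v1" }

	// Middleware wraps every endpoint; here it is a no-op pass-through.
	func (s *helloService) Middleware(h http.Handler) http.Handler { return h }

	func (s *helloService) Endpoints() map[string]map[string]http.HandlerFunc {
		return map[string]map[string]http.HandlerFunc{
			"/hello": {
				"GET": func(w http.ResponseWriter, r *http.Request) {
					fmt.Fprintln(w, "hello")
				},
			},
		}
	}

	func main() {
		server.Init("example", &server.Config{})
		if err := server.Register(&helloService{}); err != nil {
			log.Fatal(err)
		}
		if err := server.Run(); err != nil {
			log.Fatal(err)
		}
	}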
Package apicurio provides a Go client library for interacting with the Apicurio Registry. The library enables developers to seamlessly integrate with the Apicurio Registry for managing, evolving, and validating schemas in a variety of serialization formats. Features: - CRUD operations on schemas (artifacts) including creation, retrieval, update, and deletion. - Management of schema versions, branches, and metadata. - Group-based organization for schemas to support multi-tenancy. - Schema validation and compatibility checks for supported formats such as Avro, Protobuf, and JSON Schema. - System-level operations such as retrieving registry status and configuration. Structure: The library is structured into the following key components: 1. **Client**: Provides an entry point for interacting with the registry. Use the `client.NewApicurioClient` function to create a new client instance. 2. **APIs**: Contains modular functions for specific operations such as managing artifacts, branches, versions, groups, and performing administrative tasks. 3. **Models**: Defines data structures for requests, responses, and errors used across the library. Example Usage: The following example demonstrates how to create a new artifact and retrieve its metadata:
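The original example code is not reproduced here, so the following is only a hypothetical sketch: apart from `client.NewApicurioClient`, which the text names, every identifier below (the constructor arguments and the CreateArtifact/GetArtifactMetadata calls) is an illustrative placeholder rather than the library's confirmed API.

	// Hypothetical sketch only; see the repository for the real API.
	c, err := client.NewApicurioClient( /* registry URL and options */ )
	if err != nil {
		log.Fatal(err)
	}

	// Placeholder calls illustrating the intended flow: create an artifact
	// in a group, then fetch its metadata.
	artifact, err := c.CreateArtifact("my-group", "user-avro", avroSchema)
	if err != nil {
		log.Fatal(err)
	}
	meta, err := c.GetArtifactMetadata("my-group", artifact.ID)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(meta)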
Pact Go enables consumer driven contract testing, providing a mock service and DSL for the consumer project, and interaction playback and verification for the service provider project. Consumer side Pact testing is an isolated test that ensures a given component is able to collaborate with another (remote) component. Pact will automatically start a Mock server in the background that will act as the collaborators' test double. This implies that any interactions expected on the Mock server will be validated, meaning a test will fail if all interactions were not completed, or if unexpected interactions were found: A typical consumer-side test would look something like this: If this test completed successfully, a Pact file should have been written to ./pacts/my_consumer-my_provider.json containing all of the interactions expected to occur between the Consumer and Provider. In addition to verbatim value matching, you have 3 useful matching functions in the `dsl` package that can increase expressiveness and reduce brittle test cases. Here is a complex example that shows how all 3 terms can be used together: This example will result in a response body from the mock server that looks like: See the examples in the dsl package and the matcher tests (https://github.com/pact-foundation/pact-go/blob/master/dsl/matcher_test.go) for more matching examples. NOTE: You will need to use valid Ruby regular expressions (http://ruby-doc.org/core-2.1.5/Regexp.html) and double escape backslashes. Read more about flexible matching (https://github.com/realestate-com-au/pact/wiki/Regular-expressions-and-type-matching-with-Pact). Provider side Pact testing involves verifying that the contract - the Pact file - can be satisfied by the Provider. A typical Provider side test would look something like: Note that `PactURLs` can be a list of local pact files or remote-based URLs (possibly from a Pact Broker - http://docs.pact.io/documentation/sharings_pacts.html). Pact reads the specified pact files (from remote or local sources) and replays the interactions against a running Provider. If all of the interactions are met we can say that both sides of the contract are satisfied and the test passes. When validating a Provider, you have 3 options to provide the Pact files: 1. Use "PactURLs" to specify the exact set of pacts to be replayed: 2. Use "PactBroker" to automatically find all of the latest consumers: 3. Use "PactBroker" and "Tags" to automatically find all of the latest consumers: Options 2 and 3 are particularly useful when you want to validate that your Provider is able to meet the contracts of what's in Production and also the latest in development. See this [article](http://rea.tech/enter-the-pact-matrix-or-how-to-decouple-the-release-cycles-of-your-microservices/) for more on this strategy. Each interaction in a pact should be verified in isolation, with no context maintained from the previous interactions. So how do you test a request that requires data to exist on the provider? Provider states are how you achieve this using Pact. Provider states also allow the consumer to make the same request with different expected responses (e.g. different response codes, or the same resource with a different subset of data). States are configured on the consumer side when you issue a dsl.Given() clause with a corresponding request/response pair. Configuring the provider is a little more involved, and (currently) requires 2 running API endpoints to retrieve and configure available states during the verification process. 
The two options you must provide to the dsl.VerifyRequest are: Example routes using the standard Go http package might look like this; note that the `/states` endpoint returns a list of available states for each known consumer: See the examples or read more at http://docs.pact.io/documentation/provider_states.html. See the Pact Broker (http://docs.pact.io/documentation/sharings_pacts.html) documentation for more details on the Broker and this article (http://rea.tech/enter-the-pact-matrix-or-how-to-decouple-the-release-cycles-of-your-microservices/) on how to make it work for you. Publishing using Go code: Publishing from the CLI: Use a cURL request like the following to PUT the pact to the right location, specifying your consumer name, provider name and consumer version. The following flags are required to use basic authentication when publishing or retrieving Pact files to/from a Pact Broker: Pact Go uses a simple log utility (logutils - https://github.com/hashicorp/logutils) to filter log messages. The CLI already contains flags to manage this; should you want to control the log level in your tests, you can set it like so:
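A minimal sketch of the consumer-side test described earlier, assuming the pact-go v1 `dsl` package; matcher and field types have changed between pact-go releases, so treat the exact types here as assumptions.

	package user_test

	import (
		"fmt"
		"net/http"
		"testing"

		"github.com/pact-foundation/pact-go/dsl"
	)

	func TestUserConsumer(t *testing.T) {
		pact := &dsl.Pact{
			Consumer: "my_consumer",
			Provider: "my_provider",
		}
		defer pact.Teardown()

		pact.
			AddInteraction().
			Given("User 42 exists").
			UponReceiving("A request for user 42").
			WithRequest(dsl.Request{
				Method: "GET",
				Path:   dsl.String("/users/42"),
			}).
			WillRespondWith(dsl.Response{
				Status: 200,
				Body:   dsl.Like(map[string]interface{}{"name": "Jean"}),
			})

		// The test function makes the real call against the mock server.
		if err := pact.Verify(func() error {
			_, err := http.Get(fmt.Sprintf("http://localhost:%d/users/42", pact.Server.Port))
			return err
		}); err != nil {
			t.Fatal(err)
		}
	}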
Package xrgrpc is a gRPC client library for Cisco IOS XR devices. It exposes different RPCs to manage the device(s). The objective is to have a single interface to retrieve info from the device, apply configs to it, generate telemetry streams and program the RIB/FIB. The GetConfig service retrieves the configuration from a target device for the YANG paths specified. The ShowCmdJSONOutput and ShowCmdTextOutput services return show command output JSON-encoded or as unstructured text, respectively. Tutorials to set up a testbed have been posted at: - Programming IOS-XR with gRPC and Go: https://xrdocs.github.io/programmability/tutorials/2017-08-04-programming-ios-xr-with-grpc-and-go/ - Validate the intent of network config changes: https://xrdocs.github.io/programmability/tutorials/2017-08-14-validate-the-intent-of-network-config-changes/
Package jobqueue manages running and scheduling jobs. Applications using jobqueue first create a Manager. One manager handles one or more topics. There is one processor per topic. Applications need to register topics and their processors before starting the manager. Once started, the manager initializes the list of workers that will work on the actual jobs. At the beginning, all workers are idle. The manager has a Store to implement persistent storage. By default, an in-memory store is used. There is a MySQL-based persistent store in the "mysql" package. New jobs are added to the manager via the Add method. The manager asks the store to create the job. A scheduler inside the manager periodically asks the Store for jobs in the Waiting state. The scheduler will tell idle workers to handle those jobs. The number of concurrent jobs can be specified via the manager option SetConcurrency. A job in jobqueue is always in one of these four states: Waiting (to be executed), Working (currently busy working on a job), Succeeded (completed successfully), and Failed (failed to complete successfully even after retrying). A job can be configured to be retried. To do so, specify the MaxRetry field in Job. Only if the number of retries exceeds the MaxRetry value does the job get marked as failed. Otherwise, it gets put back into the Waiting state and rescheduled (after some backoff time). The backoff function is exponential by default (see backoff.go). However, one can specify a custom backoff function via the manager option SetBackoffFunc. If the manager crashes and gets restarted, the Store gets started via the Start method. This gives the store implementation a chance to do cleanup. E.g. the MySQL-based store implementation moves all jobs still marked as Working into the Failed state. Notice that you are responsible for preventing two concurrent managers from accessing the same database!
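A minimal sketch of the flow described above (create a manager, register a topic processor, start, add a job). The exact signatures of New, Register, Add and the processor function are assumptions based on this description, not the verified API.

	m := jobqueue.New() // concurrency is configurable via the SetConcurrency manager option

	// Register the topic and its processor before starting the manager
	// (signatures assumed).
	if err := m.Register("welcome-email", func(job *jobqueue.Job) error {
		// do the actual work for this job here
		return nil
	}); err != nil {
		log.Fatal(err)
	}

	if err := m.Start(); err != nil {
		log.Fatal(err)
	}

	// Add a job; the store keeps it in the Waiting state until an idle
	// worker picks it up, and MaxRetry controls how often it is retried.
	if err := m.Add(&jobqueue.Job{Topic: "welcome-email", MaxRetry: 3}); err != nil {
		log.Fatal(err)
	}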
Package dsql provides the API client, operations, and parameter types for Amazon Aurora DSQL. This is an interface reference for Amazon Aurora DSQL. It contains documentation for one of the programming or command line interfaces you can use to manage Amazon Aurora DSQL. Amazon Aurora DSQL is a serverless, distributed SQL database suitable for workloads of any size. Aurora DSQL is available in both single-Region and multi-Region configurations, so your clusters and databases are always available even if an Availability Zone or an Amazon Web Services Region is unavailable. Aurora DSQL lets you focus on using your data to acquire new insights for your business and customers.
Package gocui allows creating console user interfaces. Create a new GUI: Set GUI managers: Managers are in charge of the GUI's layout and can be used to build widgets. On each iteration of the GUI's main loop, the Layout function of each configured manager is executed. Managers are used to set up and update the application's main views, and these can be freely changed during execution. Also, it is important to mention that a main loop iteration is executed on each reported event (key-press, mouse event, window resize, etc). GUIs are composed of Views; you can think of them as buffers. Views implement the io.ReadWriter interface, so you can just write to them if you want to modify their content. The same is valid for reading. Create and initialize a view with absolute coordinates: Views can also be created using relative coordinates: Configure keybindings: gocui implements full mouse support that can be enabled with: Mouse events are handled like any other keybinding: IMPORTANT: Views can only be created, destroyed or updated in three ways: from the Layout function within managers, from keybinding callbacks or via *Gui.Update(). The reason for this is that it allows gocui to be concurrent-safe. So, if you want to update your GUI from a goroutine, you must use *Gui.Update(). For example: By default, gocui provides a basic editing mode. This mode can be extended and customized by creating a new Editor and assigning it to *View.Editor: DefaultEditor can be taken as an example to create your own custom Editor: Colored text: Views can display colored text using ANSI colors. For example: For more information, see the examples in folder "_examples/".
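A minimal sketch putting the pieces above together (a manager, a view and a keybinding), assuming the jroimartin/gocui flavour of the API where NewGui takes an output mode.

	package main

	import (
		"fmt"
		"log"

		"github.com/jroimartin/gocui"
	)

	func layout(g *gocui.Gui) error {
		maxX, maxY := g.Size()
		// ErrUnknownView is returned the first time the view is created.
		if v, err := g.SetView("hello", maxX/2-7, maxY/2, maxX/2+7, maxY/2+2); err != nil {
			if err != gocui.ErrUnknownView {
				return err
			}
			fmt.Fprintln(v, "Hello world!")
		}
		return nil
	}

	func quit(g *gocui.Gui, v *gocui.View) error {
		return gocui.ErrQuit
	}

	func main() {
		g, err := gocui.NewGui(gocui.OutputNormal)
		if err != nil {
			log.Panicln(err)
		}
		defer g.Close()

		g.SetManagerFunc(layout)

		if err := g.SetKeybinding("", gocui.KeyCtrlC, gocui.ModNone, quit); err != nil {
			log.Panicln(err)
		}

		if err := g.MainLoop(); err != nil && err != gocui.ErrQuit {
			log.Panicln(err)
		}
	}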