Package sarama is a pure Go client library for dealing with Apache Kafka (versions 0.8 and later). It includes a high-level API for easily producing and consuming messages, and a low-level API for controlling bytes on the wire when the high-level API is insufficient. Usage examples for the high-level APIs are provided inline with their full documentation. To produce messages, use either the AsyncProducer or the SyncProducer. The AsyncProducer accepts messages on a channel and produces them asynchronously in the background as efficiently as possible; it is preferred in most cases. The SyncProducer provides a method which will block until Kafka acknowledges the message as produced. This can be useful but comes with two caveats: it will generally be less efficient, and the actual durability guarantees depend on the configured value of `Producer.RequiredAcks`. There are configurations where a message acknowledged by the SyncProducer can still sometimes be lost. To consume messages, use the Consumer. Note that Sarama's Consumer implementation does not currently support automatic consumer-group rebalancing and offset tracking. For Zookeeper-based tracking (Kafka 0.8.2 and earlier), the https://github.com/wvanbergen/kafka library builds on Sarama to add this support. For Kafka-based tracking (Kafka 0.9 and later), the https://github.com/bsm/sarama-cluster library builds on Sarama to add this support. For lower-level needs, the Broker and Request/Response objects permit precise control over each connection and message sent on the wire; the Client provides higher-level metadata management that is shared between the producers and the consumer. The Request/Response objects and properties are mostly undocumented, as they line up exactly with the protocol fields documented by Kafka at https://cwiki.apache.org/confluence/display/KAFKA/A+Guide+To+The+Kafka+Protocol Metrics are exposed through https://github.com/rcrowley/go-metrics library in a local registry. Broker related metrics: Note that we do not gather specific metrics for seed brokers but they are part of the "all brokers" metrics. Producer related metrics:
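For illustration, here is a minimal SyncProducer sketch (the broker address and topic are placeholders); the AsyncProducer follows the same configuration pattern but delivers successes and errors on channels:

    package main

    import (
        "log"

        "github.com/Shopify/sarama"
    )

    func main() {
        config := sarama.NewConfig()
        config.Producer.RequiredAcks = sarama.WaitForAll // strongest durability setting
        config.Producer.Return.Successes = true          // required by the SyncProducer

        producer, err := sarama.NewSyncProducer([]string{"localhost:9092"}, config)
        if err != nil {
            log.Fatal(err)
        }
        defer producer.Close()

        partition, offset, err := producer.SendMessage(&sarama.ProducerMessage{
            Topic: "example-topic",
            Value: sarama.StringEncoder("hello"),
        })
        if err != nil {
            log.Fatal(err)
        }
        log.Printf("message stored at partition %d, offset %d", partition, offset)
    }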
Package libvirt-go-xml defines structs for parsing libvirt XML schemas. The libvirt API uses XML schemas/documents to describe the configuration of many of its managed objects. Thus, when using the libvirt-go package, it is often necessary to either parse or format XML documents. This package defines a set of Go structs which have been annotated for use with the encoding/xml API to manage libvirt XML documents. Example creating a domain XML document from configuration: Example parsing a domain XML document, in combination with libvirt-go
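As a rough sketch of both directions (the Domain struct has many more fields than the two shown here):

    package main

    import (
        "fmt"
        "log"

        libvirtxml "github.com/libvirt/libvirt-go-xml"
    )

    func main() {
        // Build a domain config and format it as an XML document.
        domcfg := &libvirtxml.Domain{Type: "kvm", Name: "demo"}
        xmldoc, err := domcfg.Marshal()
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println(xmldoc)

        // Parse an existing domain XML document back into the struct.
        parsed := &libvirtxml.Domain{}
        if err := parsed.Unmarshal(xmldoc); err != nil {
            log.Fatal(err)
        }
        fmt.Println(parsed.Name)
    }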
Package irma contains generic IRMA structs and logic of use to all IRMA participants. It parses irma_configuration folders to scheme managers, issuers, credential types and public keys; it contains various messages from the IRMA protocol; it parses IRMA metadata attributes; and it contains attribute and credential verification logic.
Package webgo is a lightweight framework for building web apps. It has a multiplexer, middleware plugging mechanism & context management of its own. The primary goal of webgo is to get out of the developer's way as much as possible; i.e., it does not force you to build your app in any particular pattern, but instead just helps you get all the trivial things done faster and easier, e.g. 1. Getting named URI parameters. 2. A multiplexer for regex matching of URIs and such. 3. Injecting special app-level configurations or any such objects into the request context as required.
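A rough usage sketch follows; the exact signatures of NewRouter, Context and the response helpers vary between webgo versions, so treat the names here as assumptions rather than a definitive reference:

    package main

    import (
        "net/http"

        "github.com/bnkamalesh/webgo"
    )

    func hello(w http.ResponseWriter, r *http.Request) {
        // Named URI parameters are read from the request context.
        params := webgo.Context(r).Params()
        webgo.R200(w, params)
    }

    func main() {
        cfg := &webgo.Config{Host: "", Port: "8080"}
        routes := []*webgo.Route{
            {
                Name:     "hello",
                Method:   http.MethodGet,
                Pattern:  "/hello/:name",
                Handlers: []http.HandlerFunc{hello},
            },
        }
        webgo.NewRouter(cfg, routes).Start()
    }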
Package rollbar is a Golang Rollbar client that makes it easy to report errors to Rollbar with full stacktraces. Basic Usage This package is designed to be used via the functions exposed at the root of the `rollbar` package. These work by managing a single instance of the `Client` type that is configurable via the setter functions at the root of the package. If you wish for more fine-grained control over the client, or you wish to have multiple independent clients, then you can create and manage your own instances of the `Client` type. We provide two implementations of the `Transport` interface, `AsyncTransport` and `SyncTransport`. These manage the communication with the network layer. The Async version uses a buffered channel to communicate with the Rollbar API in a separate goroutine. The Sync version is fully synchronous. It is possible to create your own `Transport` and configure a Client to use your preferred implementation. Go does not provide a mechanism for handling all panics automatically, therefore we provide two functions `Wrap` and `WrapAndWait` to make working with panics easier. They both take a function and then report to Rollbar if that function panics. They use the recover mechanism to capture the panic, and therefore if you wish your process to have the normal behaviour on panic (i.e. to crash), you will need to re-panic the result of calling `Wrap`. For example, The above pattern of calling `Wrap(...)` and then `Wait(...)` can be combined via `WrapAndWait(...)`. When `WrapAndWait(...)` returns, if there was a panic, it has already been sent to the Rollbar API. The error is still returned by this function if there is one. Due to the nature of the `error` type in Go, it can be difficult to attribute errors to their original origin without doing some extra work. To account for this, we define the interface `CauseStacker`: One can implement this interface for custom Error types to be able to build up a chain of stack traces. In order to get the correct stacks, callers must call BuildStack on their own at the time that the cause is wrapped. This is the least intrusive mechanism for gathering this information due to the decisions made by the Go runtime to not track this information.
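A minimal sketch of the reporting and panic-wrapping pattern described above (the access token is a placeholder):

    package main

    import (
        "errors"

        "github.com/rollbar/rollbar-go"
    )

    func risky() {
        panic(errors.New("boom"))
    }

    func main() {
        rollbar.SetToken("MY_ACCESS_TOKEN") // placeholder
        rollbar.SetEnvironment("production")

        // Plain error reporting; Wait flushes queued items.
        rollbar.Error(errors.New("something went wrong"))
        rollbar.Wait()

        // Report a panic and wait for delivery before deciding what to do next.
        if res := rollbar.WrapAndWait(risky); res != nil {
            // The panic has already been sent; re-panic if crashing is desired.
            panic(res)
        }
    }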
Package entityresolution provides the API client, operations, and parameter types for AWS EntityResolution. Welcome to the Entity Resolution API Reference. Entity Resolution is an Amazon Web Services service that provides pre-configured entity resolution capabilities that enable developers and analysts at advertising and marketing companies to build an accurate and complete view of their consumers. With Entity Resolution, you can match source records containing consumer identifiers, such as name, email address, and phone number. This is true even when these records have incomplete or conflicting identifiers. For example, Entity Resolution can effectively match a source record from a customer relationship management (CRM) system with a source record from a marketing system containing campaign information. To learn more about Entity Resolution concepts, procedures, and best practices, see the Entity Resolution User Guide.
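A hedged sketch of creating a client with the AWS SDK for Go v2 and issuing a simple call (ListMatchingWorkflows is used here only as an illustrative operation):

    package main

    import (
        "context"
        "fmt"
        "log"

        "github.com/aws/aws-sdk-go-v2/config"
        "github.com/aws/aws-sdk-go-v2/service/entityresolution"
    )

    func main() {
        cfg, err := config.LoadDefaultConfig(context.TODO())
        if err != nil {
            log.Fatal(err)
        }
        client := entityresolution.NewFromConfig(cfg)

        out, err := client.ListMatchingWorkflows(context.TODO(), &entityresolution.ListMatchingWorkflowsInput{})
        if err != nil {
            log.Fatal(err)
        }
        fmt.Printf("%+v\n", out)
    }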
Package elasticache provides the client and types for making API requests to Amazon ElastiCache. Amazon ElastiCache is a web service that makes it easier to set up, operate, and scale a distributed cache in the cloud. With ElastiCache, customers get all of the benefits of a high-performance, in-memory cache with less of the administrative burden involved in launching and managing a distributed cache. The service makes setup, scaling, and cluster failure handling much simpler than in a self-managed cache deployment. In addition, through integration with Amazon CloudWatch, customers get enhanced visibility into the key performance statistics associated with their cache and can receive alarms if a part of their cache runs hot. See https://docs.aws.amazon.com/goto/WebAPI/elasticache-2015-02-02 for more information on this service. See elasticache package documentation for more information. https://docs.aws.amazon.com/sdk-for-go/api/service/elasticache/ To contact Amazon ElastiCache with the SDK, use the New function to create a new service client. With that client you can make API requests to the service. These clients are safe to use concurrently. See the SDK's documentation for more information on how to use the SDK. https://docs.aws.amazon.com/sdk-for-go/api/ See aws.Config documentation for more information on configuring SDK clients. https://docs.aws.amazon.com/sdk-for-go/api/aws/#Config See the Amazon ElastiCache client ElastiCache for more information on creating a client for this service. https://docs.aws.amazon.com/sdk-for-go/api/service/elasticache/#New
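For example, a minimal sketch that creates a client from a shared session and lists cache clusters:

    package main

    import (
        "fmt"
        "log"

        "github.com/aws/aws-sdk-go/aws"
        "github.com/aws/aws-sdk-go/aws/session"
        "github.com/aws/aws-sdk-go/service/elasticache"
    )

    func main() {
        sess := session.Must(session.NewSession())
        svc := elasticache.New(sess)

        out, err := svc.DescribeCacheClusters(&elasticache.DescribeCacheClustersInput{})
        if err != nil {
            log.Fatal(err)
        }
        for _, c := range out.CacheClusters {
            fmt.Println(aws.StringValue(c.CacheClusterId))
        }
    }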
Package session provides a convenient way to store session data (such as a user ID) securely in a web browser cookie or other authentication token. Cookie values generated by this package use modern authenticated encryption, so they can't be inspected or altered by client processes. Most users of this package will use functions Set and Get, which manage cookies directly. An analogous pair of functions, Encode and Decode, help when the session data will be stored somewhere other than a browser cookie; for example, an API token configured by hand in an API client process.
Package configure is an easy to use multi-layer configuration system. Examples can be found in the example folder (http://github.com/paked/configure/blob/master/examples/) as well as a getting started guide in the main README file (http://github.com/paked/configure). configure makes use of Checkers, which are used to retrieve values from their respective data sources. There are three built-in Checkers: Environment, Flag and JSON. Environment retrieves environment variables. Flag retrieves variables within the flags of a command. JSON retrieves values from a JSON file/blob. Checkers can essentially be thought of as "middleware for configuration"; in fact, parts of the package API were inspired by negroni (https://github.com/codegangsta/negroni, the awesome net/http middleware manager) and the standard library's flag package. It is very easy to create your own Checkers; all they have to do is satisfy the Checker interface. That is: an Int method, a String method and a Bool method. These functions are used to retrieve their respective data types. A Setup method is also required, where the Checker should initialize itself and report any errors. If you do create your own Checkers I would be more than happy to add a link to the README in the github repository.
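As a sketch, a custom Checker backed by a simple map might look like the following; the method signatures are assumptions inferred from the description above, so check them against the actual Checker interface before use:

    package checker

    import (
        "fmt"
        "strconv"
    )

    // MapChecker is a hypothetical Checker that reads values from a map.
    type MapChecker struct {
        values map[string]string
    }

    // Setup initializes the checker; a real checker would load its data source here.
    func (c *MapChecker) Setup() error { return nil }

    func (c *MapChecker) get(name string) (string, error) {
        v, ok := c.values[name]
        if !ok {
            return "", fmt.Errorf("no value for %q", name)
        }
        return v, nil
    }

    func (c *MapChecker) Int(name string) (int64, error) {
        v, err := c.get(name)
        if err != nil {
            return 0, err
        }
        return strconv.ParseInt(v, 10, 64)
    }

    func (c *MapChecker) String(name string) (string, error) { return c.get(name) }

    func (c *MapChecker) Bool(name string) (bool, error) {
        v, err := c.get(name)
        if err != nil {
            return false, err
        }
        return strconv.ParseBool(v)
    }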
Package peer provides a common base for creating and managing Bitcoin network peers. This package builds upon the wire package, which provides the fundamental primitives necessary to speak the bitcoin wire protocol, in order to simplify the process of creating fully functional peers. In essence, it provides a common base for creating concurrent safe fully validating nodes, Simplified Payment Verification (SPV) nodes, proxies, etc. A quick overview of the major features peer provides is as follows:

 - Provides a basic concurrent safe bitcoin peer for handling bitcoin communications via the peer-to-peer protocol
 - Full duplex reading and writing of bitcoin protocol messages
 - Automatic handling of the initial handshake process including protocol version negotiation
 - Asynchronous message queuing of outbound messages with optional channel for notification when the message is actually sent
 - Flexible peer configuration
   - Caller is responsible for creating outgoing connections and listening for incoming connections so they have flexibility to establish connections as they see fit (proxies, etc)
   - User agent name and version
   - Bitcoin network
   - Service support signalling (full nodes, bloom filters, etc)
   - Maximum supported protocol version
   - Ability to register callbacks for handling bitcoin protocol messages
 - Inventory message batching and send trickling with known inventory detection and avoidance
 - Automatic periodic keep-alive pinging and pong responses
 - Random Nonce generation and self connection detection
 - Proper handling of bloom filter related commands when the caller does not specify the related flag to signal support
   - Disconnects the peer when the protocol version is high enough
   - Does not invoke the related callbacks for older protocol versions
 - Snapshottable peer statistics such as the total number of bytes read and written, the remote address, user agent, and negotiated protocol version
 - Helper functions pushing addresses, getblocks, getheaders, and reject messages
   - These could all be sent manually via the standard message output function, but the helpers provide additional nice functionality such as duplicate filtering and address randomization
 - Ability to wait for shutdown/disconnect
 - Comprehensive test coverage

All peer configuration is handled with the Config struct. This allows the caller to specify things such as the user agent name and version, the bitcoin network to use, which services it supports, and callbacks to invoke when bitcoin messages are received. See the documentation for each field of the Config struct for more details. A peer can either be inbound or outbound. The caller is responsible for establishing the connection to remote peers and listening for incoming peers. This provides high flexibility for things such as connecting via proxies, acting as a proxy, creating bridge peers, choosing whether to listen for inbound peers, etc. The NewOutboundPeer and NewInboundPeer functions must be followed by calling Connect with a net.Conn instance to the peer. This will start all async I/O goroutines and initiate the protocol negotiation process. Once finished with the peer, call Disconnect to disconnect from the peer and clean up all resources. WaitForDisconnect can be used to block until peer disconnection and resource cleanup has completed. In order to do anything useful with a peer, it is necessary to react to bitcoin messages.
This is accomplished by creating an instance of the MessageListeners struct with the callbacks to be invoked specified and setting the Listeners field of the Config struct specified when creating a peer to it. For convenience, a callback hook for all of the currently supported bitcoin messages is exposed which receives the peer instance and the concrete message type. In addition, a hook for OnRead is provided so even custom message types for which this package does not directly provide a hook, as long as they implement the wire.Message interface, can be used. Finally, the OnWrite hook is provided, which in conjunction with OnRead, can be used to track server-wide byte counts. It is often useful to use closures which encapsulate state when specifying the callback handlers. This provides a clean method for accessing that state when callbacks are invoked. The QueueMessage function provides the fundamental means to send messages to the remote peer. As the name implies, this employs a non-blocking queue. A done channel which will be notified when the message is actually sent can optionally be specified. There are certain message types which are better sent using other functions which provide additional functionality. Of special interest are inventory messages. Rather than manually sending MsgInv messages via QueueMessage, the inventory vectors should be queued using the QueueInventory function. It employs batching and trickling along with intelligent known remote peer inventory detection and avoidance through the use of a most-recently used algorithm. In addition to the bare QueueMessage function previously described, the PushAddrMsg, PushGetBlocksMsg, PushGetHeadersMsg, and PushRejectMsg functions are provided as a convenience. While it is of course possible to create and send these messages manually via QueueMessage, these helper functions provide additional useful functionality that is typically desired. For example, the PushAddrMsg function automatically limits the addresses to the maximum number allowed by the message and randomizes the chosen addresses when there are too many. This allows the caller to simply provide a slice of known addresses, such as that returned by the addrmgr package, without having to worry about the details. Next, the PushGetBlocksMsg and PushGetHeadersMsg functions will construct proper messages using a block locator and ignore back to back duplicate requests. Finally, the PushRejectMsg function can be used to easily create and send an appropriate reject message based on the provided parameters as well as optionally provides a flag to cause it to block until the message is actually sent. A snapshot of the current peer statistics can be obtained with the StatsSnapshot function. This includes statistics such as the total number of bytes read and written, the remote address, user agent, and negotiated protocol version. This package provides extensive logging capabilities through the UseLogger function which allows a btclog.Logger to be specified. For example, logging at the debug level provides summaries of every message sent and received, and logging at the trace level provides full dumps of parsed messages as well as the raw message bytes using a format similar to hexdump -C. This package supports all BIPs supported by the wire package. (https://godoc.org/github.com/p9c/pod/wire#hdr-Bitcoin_Improvement_Proposals) This example demonstrates the basic process for initializing and creating an outbound peer. Peers negotiate by exchanging version and verack messages.
For demonstration, a simple handler for the version message is attached to the peer.
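A rough sketch of that flow is below. It follows the steps described above (NewOutboundPeer, then Connect with a net.Conn); import paths are shown for upstream btcd and the exact listener signatures differ between versions and forks, so treat this as an outline rather than a drop-in example:

    package main

    import (
        "fmt"
        "log"
        "net"

        "github.com/btcsuite/btcd/chaincfg"
        "github.com/btcsuite/btcd/peer"
        "github.com/btcsuite/btcd/wire"
    )

    func main() {
        peerCfg := &peer.Config{
            UserAgentName:    "peerexample",
            UserAgentVersion: "1.0.0",
            ChainParams:      &chaincfg.SimNetParams,
            Listeners: peer.MessageListeners{
                // Simple handler invoked when the remote peer sends its version message.
                OnVersion: func(p *peer.Peer, msg *wire.MsgVersion) {
                    fmt.Println("outbound: received version")
                },
            },
        }

        p, err := peer.NewOutboundPeer(peerCfg, "127.0.0.1:18555")
        if err != nil {
            log.Fatal(err)
        }

        conn, err := net.Dial("tcp", p.Addr())
        if err != nil {
            log.Fatal(err)
        }
        p.Connect(conn) // starts the async I/O goroutines and version negotiation

        // Block until the peer disconnects and cleanup finishes.
        p.WaitForDisconnect()
    }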
Inertia is the command line interface that helps you set up your remote for continuous deployment and allows you to manage your deployment through configuration options and various commands. This document contains basic usage instructions, but a new usage guide is also available here: https://inertia.ubclaunchpad.com/ Inertia can be installed in several ways: Users of other platforms can install the Inertia CLI from the Releases page, found here: https://github.com/ubclaunchpad/inertia/releases/latest To help with usage, most relevant documentation can be seen by using the --help flag on any command: Documentation can also be triggered by simply entering a command without the prerequisite arguments or additional commands: Inertia has two "core" sets of commands - one that primarily handles local configuration, and one that allows you to control your remote VPS instances and their associated deployments. For local configuration, most commands will build off of the root "inertia ..." command. For example, a typical set of commands to set up a project might look like: The other set of commands are based on a remote VPS configuration, and the available commands can be seen by running: In the previous example, the next steps to set up a deployment might be: Some of these commands offer a --stream flag that allows you to view realtime log feedback from the daemon. More documentation on Inertia, how it works, and how to use it can be found in the project repository: https://github.com/ubclaunchpad/inertia/tree/master
Package pgx is a PostgreSQL database driver. pgx provides lower level access to PostgreSQL than the standard database/sql. It remains as similar to the database/sql interface as possible while providing better speed and access to PostgreSQL specific features. Import github.com/jackc/pgx/stdlib to use pgx as a database/sql compatible driver. pgx implements Query and Scan in the familiar database/sql style. pgx also implements QueryRow in the same style as database/sql. Use Exec to execute a query that does not return a result set. Connection pool usage is explicit and configurable. In pgx, a connection can be created and managed directly, or a connection pool with a configurable maximum connections can be used. The connection pool offers an after connect hook that allows every connection to be automatically set up before being made available in the connection pool. It delegates methods such as QueryRow to an automatically checked out and released connection so you can avoid manually acquiring and releasing connections when you do not need that level of control. pgx maps between all common base types directly between Go and PostgreSQL. In particular: pgx can map nulls in two ways. The first is package pgtype provides types that have a data field and a status field. They work in a similar fashion to database/sql. The second is to use a pointer to a pointer. pgx maps between int16, int32, int64, float32, float64, and string Go slices and the equivalent PostgreSQL array type. Go slices of native types do not support nulls, so if a PostgreSQL array that contains a null is read into a native Go slice an error will occur. The pgtype package includes many more array types for PostgreSQL types that do not directly map to native Go types. pgx includes built-in support to marshal and unmarshal between Go types and the PostgreSQL JSON and JSONB. pgx encodes from net.IPNet to and from inet and cidr PostgreSQL types. In addition, as a convenience pgx will encode from a net.IP; it will assume a /32 netmask for IPv4 and a /128 for IPv6. pgx includes support for the common data types like integers, floats, strings, dates, and times that have direct mappings between Go and SQL. In addition, pgx uses the github.com/jackc/pgx/pgtype library to support more types. See the documentation for that library for instructions on how to implement custom types. See example_custom_type_test.go for an example of a custom type for the PostgreSQL point type. pgx also includes support for custom types implementing the database/sql.Scanner and database/sql/driver.Valuer interfaces. If pgx cannot natively encode a type and that type is a renamed type (e.g. type MyTime time.Time) pgx will attempt to encode the underlying type. While this is usually desired behavior, it can produce surprising behavior if one of the underlying type and the renamed type implements database/sql interfaces and the other implements pgx interfaces. It is recommended that this situation be avoided by implementing pgx interfaces on the renamed type. []byte passed as arguments to Query, QueryRow, and Exec are passed unmodified to PostgreSQL. Transactions are started by calling Begin or BeginEx. The BeginEx variant can create a transaction with a specified isolation level. Use CopyFrom to efficiently insert multiple rows at a time using the PostgreSQL copy protocol. CopyFrom accepts a CopyFromSource interface. If the data is already in a [][]interface{} use CopyFromRows to wrap it in a CopyFromSource interface.
Or implement CopyFromSource to avoid buffering the entire data set in memory. CopyFrom can be faster than an insert with as few as 5 rows. pgx can listen to the PostgreSQL notification system with the WaitForNotification function. It takes a maximum time to wait for a notification. The pgx ConnConfig struct has a TLSConfig field. If this field is nil, then TLS will be disabled. If it is present, then it will be used to configure the TLS connection. This allows total configuration of the TLS connection. pgx has never explicitly supported Postgres < 9.6's `ssl_renegotiation` option. As of v3.3.0, it doesn't send `ssl_renegotiation: 0` either to support Redshift (https://github.com/jackc/pgx/pull/476). If you need TLS Renegotiation, consider supplying `ConnConfig.TLSConfig` with a non-zero `Renegotiation` value and if it's not the default on your server, set `ssl_renegotiation` via `ConnConfig.RuntimeParams`. pgx defines a simple logger interface. Connections optionally accept a logger that satisfies this interface. Set LogLevel to control logging verbosity. Adapters for github.com/inconshreveable/log15, github.com/sirupsen/logrus, and the testing log are provided in the log directory.
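For reference, a small sketch covering Connect, QueryRow and Query in the database/sql-like style described above (the connection URI is a placeholder):

    package main

    import (
        "fmt"
        "log"

        "github.com/jackc/pgx"
    )

    func main() {
        connConfig, err := pgx.ParseURI("postgres://user:secret@localhost:5432/mydb") // placeholder
        if err != nil {
            log.Fatal(err)
        }
        conn, err := pgx.Connect(connConfig)
        if err != nil {
            log.Fatal(err)
        }
        defer conn.Close()

        var greeting string
        if err := conn.QueryRow("select 'hello'").Scan(&greeting); err != nil {
            log.Fatal(err)
        }
        fmt.Println(greeting)

        rows, err := conn.Query("select n from generate_series(1, 3) n")
        if err != nil {
            log.Fatal(err)
        }
        for rows.Next() {
            var n int64
            if err := rows.Scan(&n); err != nil {
                log.Fatal(err)
            }
            fmt.Println(n)
        }
        if rows.Err() != nil {
            log.Fatal(rows.Err())
        }
    }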
Package xrgrpc is a gRPC client library for Cisco IOS XR devices. It exposes different RPCs to manage the device(s). The objective is to have a single interface to retrieve info from the device, apply configs to it, generate telemetry streams and program the RIB/FIB. The GetConfig service retrieves the configuration from a target device for the YANG paths specified. ShowCmdJSONOutput and ShowCmdTextOutput services return show command outputs JSON encoded or as unstructured text, respectively. Tutorials to set up a testbed have been posted at: - Programming IOS-XR with gRPC and Go: https://xrdocs.github.io/programmability/tutorials/2017-08-04-programming-ios-xr-with-grpc-and-go/ - Validate the intent of network config changes: https://xrdocs.github.io/programmability/tutorials/2017-08-14-validate-the-intent-of-network-config-changes/
Package jobqueue manages running and scheduling jobs. Applications using jobqueue first create a Manager. One manager handles one or more topics. There is one processor per topic. Applications need to register topics and their processors before starting the manager. Once started, the manager initializes the list of workers that will work on the actual jobs. At the beginning, all workers are idle. The manager has a Store to implement persistent storage. By default, an in-memory store is used. There is a MySQL-based persistent store in the "mysql" package. New jobs are added to the manager via the Add method. The manager asks the store to create the job. A scheduler inside the manager periodically asks the Store for jobs in the Waiting state. The scheduler will tell idle workers to handle those jobs. The number of concurrent jobs can be specified via the manager option SetConcurrency. A job in jobqueue is always in one of these four states: Waiting (to be executed), Working (currently busy working on a job), Succeeded (completed successfully), and Failed (failed to complete successfully even after retrying). A job can be configured to be retried. To do so, specify the MaxRetry field in Job. Only when the number of retries exceeds the MaxRetry value does the job get marked as failed. Otherwise, it gets put back into the Waiting state and rescheduled (after some backoff time). The backoff function is exponential by default (see backoff.go). However, one can specify a custom backoff function via the manager option SetBackoffFunc. If the manager crashes and gets restarted, the Store gets started via the Start method. This gives the store implementation a chance to do cleanup. E.g. the MySQL-based store implementation moves all jobs still marked as Working into the Failed state. Note that you are responsible for ensuring that two concurrent managers do not try to access the same database!
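A hypothetical sketch of that lifecycle follows; the function names and signatures below (New, Register, Start, Add, SetConcurrency) are assumptions based on the description above and may not match the actual API exactly:

    // Hypothetical sketch; names and signatures are assumptions.
    m := jobqueue.New(jobqueue.SetConcurrency(0, 4)) // in-memory store by default

    // Register a topic and its processor before starting the manager.
    _ = m.Register("emails", func(job *jobqueue.Job) error {
        // ... process the job here ...
        return nil
    })

    _ = m.Start()

    // Add a job: it begins in the Waiting state and is retried (with backoff)
    // until MaxRetry is exceeded, at which point it is marked Failed.
    _ = m.Add(&jobqueue.Job{Topic: "emails", MaxRetry: 3})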
Package gocui allows creating console user interfaces. Create a new GUI: Set GUI managers: Managers are in charge of the GUI's layout and can be used to build widgets. On each iteration of the GUI's main loop, the Layout function of each configured manager is executed. Managers are used to set up and update the application's main views, and they can be freely changed during execution. Also, it is important to mention that a main loop iteration is executed on each reported event (key-press, mouse event, window resize, etc). GUIs are composed of Views; you can think of them as buffers. Views implement the io.ReadWriter interface, so you can just write to them if you want to modify their content. The same is valid for reading. Create and initialize a view with absolute coordinates: Views can also be created using relative coordinates: Configure keybindings: gocui implements full mouse support that can be enabled with: Mouse events are handled like any other keybinding: IMPORTANT: Views can only be created, destroyed or updated in three ways: from the Layout function within managers, from keybinding callbacks or via *Gui.Update(). The reason for this is that it allows gocui to be concurrent-safe. So, if you want to update your GUI from a goroutine, you must use *Gui.Update(). For example: By default, gocui provides a basic edition mode. This mode can be extended and customized by creating a new Editor and assigning it to *View.Editor: DefaultEditor can be taken as an example to create your own custom Editor: Colored text: Views allow adding colored text using ANSI colors. For example: For more information, see the examples in folder "_examples/".
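A compact sketch of a manager function, a keybinding and the main loop (the view name, coordinates and quit key below are arbitrary choices):

    package main

    import (
        "fmt"
        "log"

        "github.com/jroimartin/gocui"
    )

    func layout(g *gocui.Gui) error {
        maxX, maxY := g.Size()
        // SetView returns ErrUnknownView the first time the view is created.
        if v, err := g.SetView("hello", maxX/2-7, maxY/2, maxX/2+7, maxY/2+2); err != nil {
            if err != gocui.ErrUnknownView {
                return err
            }
            fmt.Fprintln(v, "Hello world!")
        }
        return nil
    }

    func quit(g *gocui.Gui, v *gocui.View) error {
        return gocui.ErrQuit
    }

    func main() {
        g, err := gocui.NewGui(gocui.OutputNormal)
        if err != nil {
            log.Panicln(err)
        }
        defer g.Close()

        g.SetManagerFunc(layout)

        if err := g.SetKeybinding("", gocui.KeyCtrlC, gocui.ModNone, quit); err != nil {
            log.Panicln(err)
        }
        if err := g.MainLoop(); err != nil && err != gocui.ErrQuit {
            log.Panicln(err)
        }
    }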
Package nakadi is a client library for the Nakadi event broker. It provides convenient access to many features of Nakadi's API. The package can be used to manage event type definitions. The EventAPI can be used to inspect existing event types, and it further allows creating new event types and updating existing ones. The SubscriptionAPI provides subscription management: existing subscriptions can be fetched from Nakadi and of course it is also possible to create new ones. The PublishAPI of this package is used to broadcast event types of all event type categories via Nakadi. Last but not least, the package also implements a StreamAPI, which enables event processing on top of Nakadi's subscription based high level API. To make the communication with Nakadi more resilient, all sub APIs of this package can be configured to retry failed requests using an exponential back-off algorithm.
Package controllerruntime aliases common functions and types to improve discoverability and reduce the number of imports for simple Controllers. This example creates a simple application Controller that is configured for ReplicaSets and Pods. * Create a new application for ReplicaSets that manages Pods owned by the ReplicaSet and calls into ReplicaSetReconciler. * Start the application. TODO(pwittrock): Update this example when we have better dependency injection support
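A hedged sketch using a recent controller-runtime builder API (signatures have changed across releases, so the ctx-based Reconcile and Start shown here may differ from the version this package documents):

    package main

    import (
        "context"

        appsv1 "k8s.io/api/apps/v1"
        corev1 "k8s.io/api/core/v1"
        ctrl "sigs.k8s.io/controller-runtime"
        "sigs.k8s.io/controller-runtime/pkg/client"
    )

    // ReplicaSetReconciler reconciles ReplicaSets and the Pods they own.
    type ReplicaSetReconciler struct {
        client.Client
    }

    func (r *ReplicaSetReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
        // Fetch the ReplicaSet named by req and reconcile it here.
        return ctrl.Result{}, nil
    }

    func main() {
        mgr, err := ctrl.NewManager(ctrl.GetConfigOrDie(), ctrl.Options{})
        if err != nil {
            panic(err)
        }

        err = ctrl.NewControllerManagedBy(mgr).
            For(&appsv1.ReplicaSet{}). // watch ReplicaSets
            Owns(&corev1.Pod{}).       // and the Pods they own
            Complete(&ReplicaSetReconciler{Client: mgr.GetClient()})
        if err != nil {
            panic(err)
        }

        if err := mgr.Start(ctrl.SetupSignalHandler()); err != nil {
            panic(err)
        }
    }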
Package ipset provides bindings for the Linux userspace ipset utility http://ipset.netfilter.org/ipset.man.html Ipset allows managing iptables rules in complex environments where otherwise iptables rules would become too huge or would have to be updated too often. Similarly, this package provides bindings to configure ipset programmatically. Because ipset is typically used in environments with large ipset configurations, it is not practical to rely on simple command lines like `ipset add` or `ipset create`, since thousands of `create` calls would result in thousands of forks. Instead, this package utilizes the interactive mode provided by `ipset -` to execute bulks of create/delete/add/flush/swap calls in one session. The internal object to start and control an interactive session is called `Handle`, which implements `io.Writer` and writes directly into ipset stdin. However, some commands still make more sense when executed one by one, like `test`; for that reason this package also provides a set of functions called `oneshots` (Add/Delete/etc...) which can be used when an exit code is needed. Since ipset can export its configuration as XML, this package provides structures that can be used to parse the ipset XML config. Logging: this package is mostly silent to avoid messing with ipset stderr, but some debug logging can be enabled using the RLOG_TRACE_LEVEL=3 environment variable. A typical interactive session works as follows (pros: useful to create/delete large sets; cons: no error handling), as assembled in the sketch below:

 - Acquire the handle: handle, _ := ipset.NewHandle()
 - Start the session. This is the point where the ipset binary is executed and stdin/stdout are attached: _ = handle.Start()
 - Call Add/Delete/etc methods of the handle: newSet, _ = ipset.NewSet("mynewset", SetHashNetIface, SetWithComment()) followed by _ = handle.Create(newSet)
 - When you are done, shut down the session. This will send a shutdown signal to the ipset binary, which should exit: _ = handle.Quit()
 - And clean up the resources. After a successful Quit() call the ipset binary should be terminated, but resources allocated for the handler might still be in use, like stdin/out/err pipes: ctx, cancel := context.WithTimeout(...) followed by _ = handle.Wait(ctx)

That's it. A non-interactive session might be useful for commands that require a distinct error code (pros: clear error and output; cons: fork per call). This package uses options functions as a way to specify the desired configuration. This is done to keep default signatures simple like `NewHandle()` while allowing flexible configuration when needed. Learn more about options functions at https://commandcenter.blogspot.co.nz/2014/01/self-referential-functions-and-design.html
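Putting the interactive workflow together (import paths omitted; the calls are taken from the steps above, with SetHashNetIface and SetWithComment qualified by the package name as they would be from outside the package):

    // Acquire the handle and start the interactive session.
    handle, _ := ipset.NewHandle()
    _ = handle.Start()

    // Queue bulk commands through the handle.
    newSet, _ := ipset.NewSet("mynewset", ipset.SetHashNetIface, ipset.SetWithComment())
    _ = handle.Create(newSet)

    // Shut down the session; the ipset binary should exit.
    _ = handle.Quit()

    // Release remaining resources (stdin/out/err pipes) once the process is gone.
    ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
    defer cancel()
    _ = handle.Wait(ctx)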
Package rds provides the client and types for making API requests to Amazon Relational Database Service. Amazon Relational Database Service (Amazon RDS) is a web service that makes it easier to set up, operate, and scale a relational database in the cloud. It provides cost-efficient, resizable capacity for an industry-standard relational database and manages common database administration tasks, freeing up developers to focus on what makes their applications and businesses unique. Amazon RDS gives you access to the capabilities of a MySQL, MariaDB, PostgreSQL, Microsoft SQL Server, Oracle, or Amazon Aurora database server. These capabilities mean that the code, applications, and tools you already use today with your existing databases work with Amazon RDS without modification. Amazon RDS automatically backs up your database and maintains the database software that powers your DB instance. Amazon RDS is flexible: you can scale your DB instance's compute resources and storage capacity to meet your application's demand. As with all Amazon Web Services, there are no up-front investments, and you pay only for the resources you use. This interface reference for Amazon RDS contains documentation for a programming or command line interface you can use to manage Amazon RDS. Note that Amazon RDS is asynchronous, which means that some interfaces might require techniques such as polling or callback functions to determine when a command has been applied. In this reference, the parameter descriptions indicate whether a command is applied immediately, on the next instance reboot, or during the maintenance window. The reference structure is as follows, and we list following some related topics from the user guide. Amazon RDS API Reference For the alphabetical list of API actions, see API Actions (http://docs.aws.amazon.com/AmazonRDS/latest/APIReference/API_Operations.html). For the alphabetical list of data types, see Data Types (http://docs.aws.amazon.com/AmazonRDS/latest/APIReference/API_Types.html). For a list of common query parameters, see Common Parameters (http://docs.aws.amazon.com/AmazonRDS/latest/APIReference/CommonParameters.html). For descriptions of the error codes, see Common Errors (http://docs.aws.amazon.com/AmazonRDS/latest/APIReference/CommonErrors.html). Amazon RDS User Guide For a summary of the Amazon RDS interfaces, see Available RDS Interfaces (http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Welcome.html#Welcome.Interfaces). For more information about how to use the Query API, see Using the Query API (http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Using_the_Query_API.html). See https://docs.aws.amazon.com/goto/WebAPI/rds-2014-10-31 for more information on this service. See rds package documentation for more information. https://docs.aws.amazon.com/sdk-for-go/api/service/rds/ To contact Amazon Relational Database Service with the SDK, use the New function to create a new service client. With that client you can make API requests to the service. These clients are safe to use concurrently. See the SDK's documentation for more information on how to use the SDK. https://docs.aws.amazon.com/sdk-for-go/api/ See aws.Config documentation for more information on configuring SDK clients. https://docs.aws.amazon.com/sdk-for-go/api/aws/#Config See the Amazon Relational Database Service client RDS for more information on creating a client for this service.
https://docs.aws.amazon.com/sdk-for-go/api/service/rds/#New The rdsutils package's BuildAuthToken function provides a connection authentication token builder. Given an endpoint of the RDS database, AWS region, DB user, and AWS credentials, the function will create a presigned URL to use as the authentication token for the database's connection. The following example shows how to use BuildAuthToken to create an authentication token for connecting to a MySQL database in RDS. See the rdsutils package for more information. http://docs.aws.amazon.com/sdk-for-go/api/service/rds/rdsutils/
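A sketch of that flow with the MySQL driver (the endpoint, region, user and database name below are placeholders):

    package main

    import (
        "database/sql"
        "fmt"
        "log"

        "github.com/aws/aws-sdk-go/aws/session"
        "github.com/aws/aws-sdk-go/service/rds/rdsutils"
        _ "github.com/go-sql-driver/mysql"
    )

    func main() {
        sess := session.Must(session.NewSession())

        endpoint := "mydb.cluster-123456789012.us-east-1.rds.amazonaws.com:3306" // placeholder
        region := "us-east-1"
        user := "dbuser"

        token, err := rdsutils.BuildAuthToken(endpoint, region, user, sess.Config.Credentials)
        if err != nil {
            log.Fatal(err)
        }

        dsn := fmt.Sprintf("%s:%s@tcp(%s)/mydb?tls=true&allowCleartextPasswords=true", user, token, endpoint)
        db, err := sql.Open("mysql", dsn)
        if err != nil {
            log.Fatal(err)
        }
        defer db.Close()

        if err := db.Ping(); err != nil {
            log.Fatal(err)
        }
    }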
Package resourcegroupstaggingapi provides the client and types for making API requests to AWS Resource Groups Tagging API. This guide describes the API operations for resource groups tagging. A tag is a label that you assign to an AWS resource. A tag consists of a key and a value, both of which you define. For example, if you have two Amazon EC2 instances, you might assign both a tag key of "Stack." But the value of "Stack" might be "Testing" for one and "Production" for the other. Tagging can help you organize your resources and enables you to simplify resource management, access management and cost allocation. For more information about tagging, see Working with Tag Editor (http://docs.aws.amazon.com/awsconsolehelpdocs/latest/gsg/tag-editor.html) and Working with Resource Groups (http://docs.aws.amazon.com/awsconsolehelpdocs/latest/gsg/resource-groups.html). For more information about permissions you need to use the resource groups tagging APIs, see Obtaining Permissions for Resource Groups (http://docs.aws.amazon.com/awsconsolehelpdocs/latest/gsg/obtaining-permissions-for-resource-groups.html) and Obtaining Permissions for Tagging (http://docs.aws.amazon.com/awsconsolehelpdocs/latest/gsg/obtaining-permissions-for-tagging.html). You can use the resource groups tagging APIs to complete the following tasks: Tag and untag supported resources located in the specified region for the AWS account Use tag-based filters to search for resources located in the specified region for the AWS account List all existing tag keys in the specified region for the AWS account List all existing values for the specified key in the specified region for the AWS account Not all resources can have tags. For a list of resources that you can tag, see Supported Resources (http://docs.aws.amazon.com/awsconsolehelpdocs/latest/gsg/supported-resources.html) in the AWS Resource Groups and Tag Editor User Guide. To make full use of the resource groups tagging APIs, you might need additional IAM permissions, including permission to access the resources of individual services as well as permission to view and apply tags to those resources. For more information, see Obtaining Permissions for Tagging (http://docs.aws.amazon.com/awsconsolehelpdocs/latest/gsg/obtaining-permissions-for-tagging.html) in the AWS Resource Groups and Tag Editor User Guide. See https://docs.aws.amazon.com/goto/WebAPI/resourcegroupstaggingapi-2017-01-26 for more information on this service. See resourcegroupstaggingapi package documentation for more information. https://docs.aws.amazon.com/sdk-for-go/api/service/resourcegroupstaggingapi/ To contact the AWS Resource Groups Tagging API with the SDK, use the New function to create a new service client. With that client you can make API requests to the service. These clients are safe to use concurrently. See the SDK's documentation for more information on how to use the SDK. https://docs.aws.amazon.com/sdk-for-go/api/ See aws.Config documentation for more information on configuring SDK clients. https://docs.aws.amazon.com/sdk-for-go/api/aws/#Config See the AWS Resource Groups Tagging API client ResourceGroupsTaggingAPI for more information on creating a client for this service. https://docs.aws.amazon.com/sdk-for-go/api/service/resourcegroupstaggingapi/#New
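For example, a hedged sketch that finds resources carrying the tag Stack=Production via GetResources:

    package main

    import (
        "fmt"
        "log"

        "github.com/aws/aws-sdk-go/aws"
        "github.com/aws/aws-sdk-go/aws/session"
        "github.com/aws/aws-sdk-go/service/resourcegroupstaggingapi"
    )

    func main() {
        sess := session.Must(session.NewSession())
        svc := resourcegroupstaggingapi.New(sess)

        out, err := svc.GetResources(&resourcegroupstaggingapi.GetResourcesInput{
            TagFilters: []*resourcegroupstaggingapi.TagFilter{
                {Key: aws.String("Stack"), Values: []*string{aws.String("Production")}},
            },
        })
        if err != nil {
            log.Fatal(err)
        }
        for _, m := range out.ResourceTagMappingList {
            fmt.Println(aws.StringValue(m.ResourceARN))
        }
    }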
Package rmhttp implements a lightweight wrapper around the standard library web server provided by http.Server and http.ServeMux, and adds an intuitive fluent interface for easy use and configuration of route grouping, centralised error handling, logging, CORS, panic recovery, SSL configuration, header management, timeouts and middleware.
Package main provides methods for setting up and managing a container environment. This includes setting environment variables, mounting directories and files, and configuring services such as Docker within the container. Copyright: Excoriate alex_torres@outlook.com License: MIT Package main provides utility functions for working with Docker containers. This package includes constants and a function for getting the Docker-in-Docker image. Copyright: Excoriate License: Apache-2.0 A Dagger module for Terragrunt, batteries included. A powerful Dagger module for managing Terragrunt, Terraform, and OpenTofu in a containerized environment. This module provides a comprehensive set of features for Infrastructure as Code, including: - Flexible base image built using APKO for a secure and optimized container environment. - Multi-tool support for Terragrunt, Terraform, and OpenTofu. - Customizable configurations for Terragrunt and Terraform settings. - Caching mechanisms for improved performance. - Optional AWS CLI integration. - Fine-grained control over directory permissions. - Easy management of environment variables. - Secure handling of sensitive information like Terraform tokens. - Execution flexibility to run Terragrunt, Terraform, or shell commands within the container. The module is designed to be highly configurable and extensible. Package main provides functionality for managing infrastructure-as-code toolkit versions.
Package main provides methods for setting up and managing a container environment. This includes setting environment variables, mounting directories and files, and configuring services such as Docker within the container. Copyright: Excoriate alex_torres@outlook.com License: MIT Package main provides utility functions for working with Docker containers. This package includes constants and a function for getting the Docker-in-Docker image. Copyright: Excoriate License: Apache-2.0 Package main provides the Gotoolbox Dagger module and related functions. This module has been generated via dagger init and serves as a reference to basic module structure as you get started with Dagger. The module demonstrates usage of arguments and return types using simple echo and grep commands. The functions can be called from the dagger CLI or from one of the SDKs. The first line in this comment block is a short description line and the rest is a long description with more detail on the module's purpose or usage, if appropriate. All modules should have a short description.
Extensible Go library for creating fast, SSR-first frontends while avoiding vanilla templating downsides. Creating asynchronous and dynamic layout parts is a complex problem for larger projects using `html/template`. This library tries to simplify that process. Let's go straight into a simple example. Then, we will dig into the details of how it works, step by step. Kyoto provides simple net/http handlers and function wrappers to handle page rendering and serving. See the functions inside of the nethttp.go file for details and advanced usage. Example: Kyoto provides a way to define components. It's a very common approach for modern libraries to manage frontend parts. In kyoto, each component is a context receiver which returns its state. Each component becomes a part of the page or top-level component, which executes the component asynchronously and gets a state future object. In that way, your components execute in a non-blocking way. Pages are just top-level components, where you can configure rendering and page-related stuff. Example: As an option, you can wrap a component with another function to accept additional parameters from the top-level page/component. Example: Kyoto provides a context, which holds common objects like http.ResponseWriter, *http.Request, etc. See kyoto.Context for details. Example: Kyoto provides a set of parameters and functions to provide a comfortable template building process. You can configure template building parameters with the kyoto.TemplateConf configuration. See template.go for available functions and kyoto.TemplateConfiguration for configuration details. Example: Kyoto provides a way to simplify building dynamic UIs. For this purpose it has a feature named actions. The logic is pretty simple. The client calls an action (sends a request to the server). The action executes on the server side, and the server sends updated component markup to the client, which will be morphed into the DOM. That's it. To use actions, you need to go through a few steps. You'll need to include a client into the page (JS functions for communication) and register an actions handler for the needed component. Let's start by including the client. Then, let's register an actions handler for the needed component. That's all! Now we're ready to use actions to provide a dynamic UI. Example: In this example you can see the modifications made to the quick start example. First, we've added a state and name into our component's markup. In this way we preserve our component's state between actions and can find the component root. Unfortunately, we have to manually provide a component name for now; we haven't found a way to provide it dynamically. Second, we've added a reload button with an onclick function call. We're using the Action function provided by the client. Action triggering will be described in detail later. Third, we've added an action handler inside of our component. This handler will be executed when a client calls an action with the corresponding name. It's highly recommended to keep a component's state as small as possible. It will be transmitted on each action call. Kyoto has multiple ways to trigger actions. We will check them one by one. This is the simplest way to trigger an action. It's just a function call with a referer (usually 'this', f.e. a button) as the first argument (used to determine the root), the action name as the second argument and arguments as the rest. Arguments must be JSON serializable. It's possible to trigger an action of another component. If you want to call an action of the parent component, use the $ prefix in the action name.
If you want to call an action of a component by id, use <id:action> as the action name. This is a specific action which is triggered when a form is submitted. Usually it is called in the onsubmit="..." attribute of a form. You'll need to implement a 'Submit' action to handle this trigger. This is a special HTML attribute which will trigger an action on page load. This may be useful for lazy loading of components. With these special HTML attributes you can trigger an action at an interval. Useful for components that must be updated over time (f.e. charts, stats, etc). You can use this trigger with the ssa:poll and ssa:poll.interval HTML attributes. This attribute allows you to trigger an action when an element is visible on the screen. May be useful for lazy loading. Kyoto provides a way to control action flow. For now, it's possible to control the display style on a component call and push multiple UI updates to the client during a single action. Because kyoto makes a roundtrip to the server every time an action is triggered on the page, there are cases where the page may not react immediately to a user event (like a click). That's why the library provides a way to easily control display attributes on an action call. You can use this HTML attribute to control display during an action call. At the end of the action, the layout will be restored. A small note: don't forget to set a default display for loading elements like spinners and loaders. You can push multiple component UI updates during a single action call. Just call kyoto.ActionFlush(ctx, state) to initiate an update. Kyoto provides a way to control action rendering. There are currently at least two rendering options after an action call: morph (default) and replace. Morph will try to morph the received markup into the current one with the morphdom library. In case of an error, or in explicit "replace" mode, the markup will be replaced with x.outerHTML = '...'.
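As a hypothetical sketch of the "component as context receiver" idea described above (the exact Context type and composition helpers differ between kyoto versions, so the signature below is an assumption; imports of net/http and encoding/json are omitted):

    // ComponentUUIDState is the state returned (and later serialized) for the component.
    type ComponentUUIDState struct {
        UUID string
    }

    // ComponentUUID is a component: a context receiver that returns its state.
    func ComponentUUID(ctx *kyoto.Context) (state ComponentUUIDState) {
        resp, err := http.Get("http://httpbin.org/uuid")
        if err != nil {
            return
        }
        defer resp.Body.Close()

        var data struct {
            UUID string `json:"uuid"`
        }
        _ = json.NewDecoder(resp.Body).Decode(&data)
        state.UUID = data.UUID
        return
    }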
Package api is the root of the packages used to access Google Cloud Services. See https://godoc.org/google.golang.org/api for a full list of sub-packages. Within api there exist numerous clients which connect to Google APIs, and various utility packages. All clients in sub-packages are configurable via client options. These options are described here: https://godoc.org/google.golang.org/api/option. All the clients in sub-packages support authentication via Google Application Default Credentials (see https://cloud.google.com/docs/authentication/production), or by providing a JSON key file for a Service Account. See the authentication examples in https://godoc.org/google.golang.org/api/transport for more details. Due to the auto-generated nature of this collection of libraries, complete APIs or specific versions can appear or go away without notice. As a result, you should always locally vendor any API(s) that your code relies upon. Google APIs follow semver as specified by https://cloud.google.com/apis/design/versioning. The code generator and the code it produces - the libraries in the google.golang.org/api/... subpackages - are beta. Note that versioning and stability is strictly not communicated through Go modules. Go modules are used only for dependency management. Many parameters are specified using ints. However, underlying APIs might operate on a finer granularity, expecting int64, int32, uint64, or uint32, all of which have different maximum values. Subsequently, specifying an int parameter in one of these clients may result in an error from the API because the value is too large. To see the exact type of int that the API expects, you can inspect the API's discovery doc. A global catalogue pointing to the discovery doc of APIs can be found at https://www.googleapis.com/discovery/v1/apis. This field can be found on all Request/Response structs in the generated clients. All of these types have the JSON `omitempty` field tag present on their fields. This means if a type is set to its default value it will not be marshalled. Sometimes you may actually want to send a default value, for instance sending an int of `0`. In this case you can override the `omitempty` feature by adding the field name to the `ForceSendFields` slice. See docs on any struct for more details. This may be used to include empty fields in Patch requests. This field can be found on all Request/Response structs in the generated clients. It can be used to send JSON null values for the listed fields. By default, fields with empty values are omitted from API requests because of the presence of the `omitempty` field tag on all fields. However, any field with an empty value appearing in NullFields will be sent to the server as null. It is an error if a field in this list has a non-empty value. This may be used to include null fields in Patch requests. An error returned by a client's Do method may be cast to a *googleapi.Error or unwrapped to an *apierror.APIError. The https://pkg.go.dev/google.golang.org/api/googleapi#Error type is useful for getting the HTTP status code: The https://pkg.go.dev/github.com/googleapis/gax-go/v2/apierror#APIError type is useful for inspecting structured details of the underlying API response, such as the reason for the error and the error domain, which is typically the registered service name of the tool or product that generated the error: If an API call returns an Operation, that means it could take some time to complete the work initiated by the API call.
Applications that are interested in the end result of the operation they initiated should wait until the Operation.Done field indicates it is finished. To do this, use the service's Operation client, and a loop, like so:
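A hedged sketch of that loop using the Compute Engine client as an example (computeService is a *compute.Service from google.golang.org/api/compute/v1; project, zone and instance are assumed inputs; other services expose analogous Operations clients):

    // waitForInsert inserts an instance and polls the zonal operation until it is done.
    func waitForInsert(computeService *compute.Service, project, zone string, instance *compute.Instance) error {
        op, err := computeService.Instances.Insert(project, zone, instance).Do()
        if err != nil {
            // Inspect the HTTP status code if the error came from the API.
            var gErr *googleapi.Error
            if errors.As(err, &gErr) {
                return fmt.Errorf("insert failed with HTTP %d: %s", gErr.Code, gErr.Message)
            }
            return err
        }
        for op.Status != "DONE" {
            time.Sleep(2 * time.Second)
            op, err = computeService.ZoneOperations.Get(project, zone, op.Name).Do()
            if err != nil {
                return err
            }
        }
        if op.Error != nil {
            return fmt.Errorf("operation finished with errors: %+v", op.Error.Errors)
        }
        return nil
    }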
Package cgi implements the common gateway interface (CGI) for Caddy 2, a modern, full-featured, easy-to-use web server. It has been forked from the fantastic work of Kurt Jung who wrote that plugin for Caddy 1. This plugin lets you generate dynamic content on your website by means of command line scripts. To collect information about the inbound HTTP request, your script examines certain environment variables such as PATH_INFO and QUERY_STRING. Then, to return a dynamically generated web page to the client, your script simply writes content to standard output. In the case of POST requests, your script reads additional inbound content from standard input. The advantage of CGI is that you do not need to fuss with server startup and persistence, long term memory management, sockets, and crash recovery. Your script is called when a request matches one of the patterns that you specify in your Caddyfile. As soon as your script completes its response, it terminates. This simplicity makes CGI a perfect complement to the straightforward operation and configuration of Caddy. The benefits of Caddy, including HTTPS by default, basic access authentication, and lots of middleware options extend easily to your CGI scripts. CGI has some disadvantages. For one, Caddy needs to start a new process for each request. This can adversely impact performance and, if resources are shared between CGI applications, may require the use of some interprocess synchronization mechanism such as a file lock. Your server's responsiveness could in some circumstances be affected, such as when your web server is hit with very high demand, when your script's dependencies require a long startup, or when concurrently running scripts take a long time to respond. However, in many cases, such as using a pre-compiled CGI application like fossil or a Lua script, the impact will generally be insignificant. Another restriction of CGI is that scripts will be run with the same permissions as Caddy itself. This can sometimes be less than ideal, for example when your script needs to read or write files associated with a different owner. Serving dynamic content exposes your server to more potential threats than serving static pages. There are a number of considerations of which you should be aware when using CGI applications. CGI scripts should be located outside of Caddy's document root. Otherwise, an inadvertent misconfiguration could result in Caddy delivering the script as an ordinary static resource. At best, this could merely confuse the site visitor. At worst, it could expose sensitive internal information that should not leave the server. Mistrust the contents of PATH_INFO, QUERY_STRING and standard input. Most of the environment variables available to your CGI program are inherently safe because they originate with Caddy and cannot be modified by external users. This is not the case with PATH_INFO, QUERY_STRING and, in the case of POST actions, the contents of standard input. Be sure to validate and sanitize all inbound content. If you use a CGI library or framework to process your scripts, make sure you understand its limitations. An error in a CGI application is generally handled within the application itself and reported in the headers it returns. Your CGI application can be executed directly or indirectly. In the direct case, the application can be a compiled native executable or it can be a shell script that contains as its first line a shebang that identifies the interpreter to which the file's name should be passed. 
Caddy must have permission to execute the application. On POSIX systems this will mean making sure the application's ownership and permission bits are set appropriately; on Windows, this may involve properly setting up the filename extension association. In the indirect case, the name of the CGI script is passed to an interpreter such as lua, perl, or python. To use it: this module needs to be installed (obviously), and the directive needs to be registered in the Caddyfile: The basic cgi directive lets you add a handler in the current Caddy router location with a given script and optional arguments. The matcher is a default Caddy matcher that is used to restrict the scope of this directive. The directive can be repeated any reasonable number of times. Here is the basic syntax: For example: When a request such as https://example.com/report or https://example.com/report/weekly arrives, the cgi middleware will detect the match and invoke the script named /usr/local/cgi-bin/report. The current working directory will be the same as Caddy itself. Here, it is assumed that the script is self-contained, for example a pre-compiled CGI application or a shell script. Here is an example of a standalone script, similar to one used in the cgi plugin's test suite: The environment variables PATH_INFO and QUERY_STRING are populated and passed to the script automatically. There are a number of other standard CGI variables included that are described below. If you need to pass any special environment variables or allow any environment variables that are part of Caddy's process to pass to your script, you will need to use the advanced directive syntax described below. Beware that in Caddy v2 it is (currently) not possible to separate the path left of the matcher from the full URL. Therefore, if you require your CGI program to know the SCRIPT_NAME, make sure to pass that explicitly: In order to specify custom environment variables, pass along one or more environment variables known to Caddy, or specify more than one match pattern for a given rule, you will need to use the advanced directive syntax. That looks like this: For example, The script_name subdirective helps the cgi module to separate the path to the script from the (virtual) path afterwards (which shall be passed to the script). env can be used to define a list of key=value environment variable pairs that shall be passed to the script. pass_env can be used to define a list of environment variables of the Caddy process that shall be passed to the script. If your CGI application runs properly at the command line but fails to run from Caddy, it is possible that certain environment variables may be missing. For example, the Ruby gem loader evidently requires the HOME environment variable to be set; you can do this with the subdirective pass_env HOME. Another class of problematic applications requires the COMPUTERNAME variable. The pass_all_env subdirective instructs Caddy to pass each environment variable it knows about to the CGI executable. This addresses a common frustration that is caused when an executable requires an environment variable and fails without a descriptive error message when the variable cannot be found. These applications often run fine from the command prompt but fail when invoked with CGI. The risk with this subdirective is that a lot of server information is shared with the CGI executable. Use this subdirective only with CGI applications that you trust not to leak this information.
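To make the flow of information concrete, the following is a hedged sketch of a small CGI program written in Go that examines a few of these variables and echoes any POST body; the variables shown and the output format are purely illustrative.

    // A tiny CGI program: read request details from the environment and from
    // standard input, then write a response to standard output.
    package main

    import (
        "fmt"
        "io"
        "os"
    )

    func main() {
        body, _ := io.ReadAll(os.Stdin) // POST data, if any, arrives on stdin

        // A CGI response starts with headers followed by a blank line.
        fmt.Print("Content-Type: text/plain\n\n")
        fmt.Println("PATH_INFO:    ", os.Getenv("PATH_INFO"))
        fmt.Println("QUERY_STRING: ", os.Getenv("QUERY_STRING"))
        fmt.Println("POST_DATA:    ", string(body))
    }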
buffer_limit is used when an HTTP request has Transfer-Encoding: chunked. The Go CGI handler refuses to handle these kinds of requests, see https://github.com/golang/go/issues/5613. To work around this, the chunked request is buffered by Caddy and sent to the CGI application as a whole with the correct CONTENT_LENGTH set. The buffer_limit setting marks a threshold between buffering in memory and using a temporary file. Every request body smaller than the buffer_limit is buffered in-memory. It accepts all formats supported by go-humanize. Default: 4MiB. (An example of this is git push, if the objects to push are larger than the http.postBuffer.) With the unbuffered_output subdirective it is possible to instruct the CGI handler to flush output from the CGI script as soon as possible. By default, the output is buffered into chunks before it is written, in order to optimize network usage and allow the Content-Length to be determined. When unbuffered, bytes will be written as soon as possible. This will also force the response to be written in chunked encoding. If you run into unexpected results with the CGI plugin, you can examine the environment in which your CGI application runs. To enter inspection mode, add the subdirective inspect to your CGI configuration block. This is a development option that should not be used in production. When in inspection mode, the plugin will respond to matching requests with a page that displays variables of interest. In particular, it will show the replacement value of {match} and the environment variables to which your CGI application has access. For example, consider this example CGI block: When you request a matching URL, for example, the Caddy server will deliver a text page similar to the following. The CGI application (in this case, wapptclsh) will not be called. This information can be used to diagnose problems with how a CGI application is called. To return to operation mode, remove or comment out the inspect subdirective. In this example, the Caddyfile looks like this: Note that a request for /show gets mapped to a script named /usr/local/cgi-bin/report/gen. There is no need for any element of the script name to match any element of the match pattern. The contents of /usr/local/cgi-bin/report/gen are: The purpose of this script is to show how request information gets communicated to a CGI script. Note that POST data must be read from standard input. In this particular case, posted data gets stored in the variable POST_DATA. Your script may use a different method to read POST content. Secondly, the SCRIPT_EXEC variable is not a CGI standard. It is provided by this middleware and contains the entire command line, including all arguments, with which the CGI script was executed. When a browser makes a request, the response looks like: When a client makes a POST request, such as with the following command, the response looks the same except for the following lines: This small example demonstrates how to write a CGI program in Go. The use of a bytes.Buffer makes it easy to report the content length in the CGI header. When this program is compiled and installed as /usr/local/bin/servertime, the following directive in your Caddy file will make it available: The module expects the scripts you want it to execute to actually exist. A non-existing or non-executable file is considered a setup error and will yield an HTTP 500.
If you want to make sure that only existing scripts are executed, use a more specific matcher, as explained in the Caddy docs. Example: When a URL like /cgi/foo/bar.pl is called, it checks whether the local file ./app/foo/bar.pl exists and only then proceeds with calling the CGI script.
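A hedged sketch of the servertime-style program mentioned above could look like the following; it buffers the body in a bytes.Buffer so an accurate Content-Length header can be emitted before the body is written. The program name and output format are illustrative.

    // A small CGI program that reports the server time. Buffering the body in a
    // bytes.Buffer makes it easy to emit an accurate Content-Length header.
    package main

    import (
        "bytes"
        "fmt"
        "os"
        "time"
    )

    func main() {
        var buf bytes.Buffer
        fmt.Fprintf(&buf, "The server time is %s\n", time.Now().Format(time.RFC1123))

        fmt.Printf("Content-Type: text/plain\n")
        fmt.Printf("Content-Length: %d\n\n", buf.Len())
        os.Stdout.Write(buf.Bytes())
    }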
Package marketplacereporting provides the API client, operations, and parameter types for AWS Marketplace Reporting Service. The Amazon Web Services Marketplace GetBuyerDashboard API enables you to get a procurement insights dashboard programmatically. The API gets the agreement and cost analysis dashboards with data for all of the Amazon Web Services accounts in your Amazon Web Services Organization. To use the Amazon Web Services Marketplace Reporting API, you must complete the following prerequisites: Enable all features for your organization. For more information, see Enabling all features for an organization with Organizations, in the Organizations User Guide. Call the service as the Organizations management account or an account registered as a delegated administrator for the procurement insights service. For more information about management accounts, see Tutorial: Creating and configuring an organization and Managing the management account with Organizations, both in the Organizations User Guide. For more information about delegated administrators, see Using delegated administrators, in the Amazon Web Services Marketplace documentation. Access can be shared only by registering the desired linked account as a delegated administrator.
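Once the prerequisites are met, calling the API from Go would typically follow the standard aws-sdk-go-v2 client pattern, roughly as sketched below; the dashboard ARN, the embedding domain, and the exact input field names (DashboardIdentifier, EmbeddingDomains) are assumptions to be checked against the generated GetBuyerDashboardInput type.

    package main

    import (
        "context"
        "fmt"
        "log"

        "github.com/aws/aws-sdk-go-v2/aws"
        "github.com/aws/aws-sdk-go-v2/config"
        "github.com/aws/aws-sdk-go-v2/service/marketplacereporting"
    )

    func main() {
        ctx := context.Background()

        // Load credentials and region from the default sources (environment, shared config, ...).
        cfg, err := config.LoadDefaultConfig(ctx)
        if err != nil {
            log.Fatal(err)
        }

        client := marketplacereporting.NewFromConfig(cfg)

        // Request an embeddable procurement insights dashboard.
        // The field names and values below are assumptions for illustration only.
        out, err := client.GetBuyerDashboard(ctx, &marketplacereporting.GetBuyerDashboardInput{
            DashboardIdentifier: aws.String("arn:aws:aws-marketplace::aws:dashboard/example"),
            EmbeddingDomains:    []string{"https://example.com"},
        })
        if err != nil {
            log.Fatal(err)
        }

        // Print the whole response rather than assuming specific output field names.
        fmt.Printf("%+v\n", out)
    }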
Package fabricsdk enables Go developers to build solutions that interact with Hyperledger Fabric. pkg/fabsdk: The main package of the Fabric SDK. This package enables creation of contexts based on configuration. These contexts are used by the client packages listed below. Reference: https://godoc.org/github.com/hyperledger/fabric-sdk-go/pkg/fabsdk pkg/client/channel: Provides channel transaction capabilities. Reference: https://godoc.org/github.com/hyperledger/fabric-sdk-go/pkg/client/channel pkg/client/event: Provides channel event capabilities. Reference: https://godoc.org/github.com/hyperledger/fabric-sdk-go/pkg/client/event pkg/client/ledger: Enables queries to a channel's underlying ledger. Reference: https://godoc.org/github.com/hyperledger/fabric-sdk-go/pkg/client/ledger pkg/client/resmgmt: Provides resource management capabilities such as installing chaincode. Reference: https://godoc.org/github.com/hyperledger/fabric-sdk-go/pkg/client/resmgmt pkg/client/msp: Enables identity management capability. Reference: https://godoc.org/github.com/hyperledger/fabric-sdk-go/pkg/client/msp Basic workflow
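As a rough, hedged illustration of that basic workflow, the sketch below creates an SDK instance from a connection profile, obtains a channel context, and invokes chaincode through the channel client; the configuration file path, channel name, user, organization, and chaincode details are all hypothetical.

    package main

    import (
        "fmt"
        "log"

        "github.com/hyperledger/fabric-sdk-go/pkg/client/channel"
        "github.com/hyperledger/fabric-sdk-go/pkg/core/config"
        "github.com/hyperledger/fabric-sdk-go/pkg/fabsdk"
    )

    func main() {
        // Create the SDK from a connection profile (path is hypothetical).
        sdk, err := fabsdk.New(config.FromFile("./config.yaml"))
        if err != nil {
            log.Fatal(err)
        }
        defer sdk.Close()

        // Build a context for a specific channel, user, and organization (all hypothetical).
        ctx := sdk.ChannelContext("mychannel", fabsdk.WithUser("User1"), fabsdk.WithOrg("Org1"))

        // Create a channel client and execute a chaincode transaction.
        client, err := channel.New(ctx)
        if err != nil {
            log.Fatal(err)
        }

        resp, err := client.Execute(channel.Request{
            ChaincodeID: "example_cc",
            Fcn:         "invoke",
            Args:        [][]byte{[]byte("a"), []byte("b"), []byte("10")},
        })
        if err != nil {
            log.Fatal(err)
        }
        fmt.Printf("transaction ID: %s\n", resp.TransactionID)
    }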
Package main provides methods for setting up and managing a container environment. This includes setting environment variables, mounting directories and files, and configuring services such as Docker within the container. Copyright: Excoriate alex_torres@outlook.com License: MIT Package main provides utility functions for working with Docker containers. This package includes constants and a function for getting the Docker-in-Docker image. Copyright: Excoriate License: Apache-2.0 Package main provides the ModuleTemplateLight Dagger module and related functions. This module has been generated via dagger init and serves as a reference for basic module structure as you get started with Dagger. The module demonstrates usage of arguments and return types using simple echo and grep commands. The functions can be called from the dagger CLI or from one of the SDKs. The first line in this comment block is a short description line and the rest is a long description with more detail on the module's purpose or usage, if appropriate. All modules should have a short description.
Package main provides methods for setting up and managing a container environment. This includes setting environment variables, mounting directories and files, and configuring services such as Docker within the container. Copyright: Excoriate alex_torres@outlook.com License: MIT Package main provides utility functions for working with Docker containers. This package includes constants and a function for getting the Docker-in-Docker image. Copyright: Excoriate License: Apache-2.0 Package main provides the ModuleTemplate Dagger module and related functions. This module has been generated via dagger init and serves as a reference for basic module structure as you get started with Dagger. The module demonstrates usage of arguments and return types using simple echo and grep commands. The functions can be called from the dagger CLI or from one of the SDKs. The first line in this comment block is a short description line and the rest is a long description with more detail on the module's purpose or usage, if appropriate. All modules should have a short description.
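For orientation, the module generated by dagger init typically looks something like the hedged sketch below; the function names (ContainerEcho, GrepDir) and the base image follow the default Go template, and the dag global plus the Container and Directory types are provided by Dagger's generated bindings, so this compiles only inside a Dagger module.

    // A generated module for ModuleTemplate functions.
    package main

    import "context"

    type ModuleTemplate struct{}

    // ContainerEcho returns a container that echoes whatever string argument is provided.
    func (m *ModuleTemplate) ContainerEcho(stringArg string) *Container {
        return dag.Container().From("alpine:latest").WithExec([]string{"echo", stringArg})
    }

    // GrepDir returns lines that match a pattern in the files of the provided Directory.
    func (m *ModuleTemplate) GrepDir(ctx context.Context, directoryArg *Directory, pattern string) (string, error) {
        return dag.Container().
            From("alpine:latest").
            WithMountedDirectory("/mnt", directoryArg).
            WithWorkdir("/mnt").
            WithExec([]string{"grep", "-R", pattern, "."}).
            Stdout(ctx)
    }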
Package gconf is an extensible and powerful Go configuration manager. This package is aimed at
Pact Go enables consumer-driven contract testing, providing a mock service and DSL for the consumer project, and interaction playback and verification for the service provider project. A consumer-side Pact test is an isolated test that ensures a given component is able to collaborate with another (remote) component. Pact will automatically start a Mock server in the background that will act as the collaborator's test double. Any interactions expected on the Mock server will be validated, meaning a test will fail if not all expected interactions were completed, or if unexpected interactions were found: A typical consumer-side test would look something like this: If this test completed successfully, a Pact file should have been written to ./pacts/my_consumer-my_provider.json containing all of the interactions expected to occur between the Consumer and Provider. In addition to verbatim value matching, you have 3 useful matching functions in the `dsl` package that can increase expressiveness and reduce brittle test cases. Here is a complex example that shows how all 3 terms can be used together: This example will result in a response body from the mock server that looks like: See the examples in the dsl package and the matcher tests (https://github.com/you54f/pact-go/v2/blob/master/dsl/matcher_test.go) for more matching examples. NOTE: You will need to use valid Ruby regular expressions (http://ruby-doc.org/core-2.1.5/Regexp.html) and double escape backslashes. Read more about flexible matching (https://github.com/pact-foundation/pact-ruby/wiki/Regular-expressions-and-type-matching-with-Pact). Provider-side Pact testing involves verifying that the contract - the Pact file - can be satisfied by the Provider. A typical provider-side test would look something like this: `VerifyProvider` will handle all verifications, treating them as subtests and giving you granular test reporting. If you don't like this behaviour, you may call `VerifyProviderRaw` directly and handle the errors manually. Note that `PactURLs` may be a list of local pact files or remote URLs (possibly from a Pact Broker - http://docs.pact.io/documentation/sharings_pacts.html). Pact reads the specified pact files (from remote or local sources) and replays the interactions against a running Provider. If all of the interactions are met, we can say that both sides of the contract are satisfied and the test passes. When validating a Provider, you have 3 options to provide the Pact files: 1. Use "PactURLs" to specify the exact set of pacts to be replayed: Options 2 and 3 are particularly useful when you want to validate that your Provider is able to meet the contracts of what's in Production and also the latest in development. See this article (http://rea.tech/enter-the-pact-matrix-or-how-to-decouple-the-release-cycles-of-your-microservices/) for more on this strategy. Each interaction in a pact should be verified in isolation, with no context maintained from the previous interactions. So how do you test a request that requires data to exist on the provider? Provider states are how you achieve this using Pact. Provider states also allow the consumer to make the same request with different expected responses (e.g. different response codes, or the same resource with a different subset of data). States are configured on the consumer side when you issue a dsl.Given() clause with a corresponding request/response pair.
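As an illustration, a consumer-side test along these lines might look like the hedged sketch below, assuming the v1-style dsl package at github.com/pact-foundation/pact-go; the consumer and provider names, the endpoint, and the expected body are hypothetical, and field types in the dsl package can differ between Pact Go versions.

    package client_test

    import (
        "fmt"
        "net/http"
        "testing"

        "github.com/pact-foundation/pact-go/dsl"
    )

    func TestClientGetsUser(t *testing.T) {
        // Create the Pact mock provider; the names here are hypothetical.
        pact := &dsl.Pact{Consumer: "my_consumer", Provider: "my_provider"}
        defer pact.Teardown()

        // Describe the expected interaction, including a provider state via Given().
        pact.
            AddInteraction().
            Given("User 42 exists").
            UponReceiving("A request for user 42").
            WithRequest(dsl.Request{Method: "GET", Path: dsl.String("/users/42")}).
            WillRespondWith(dsl.Response{
                Status:  200,
                Headers: dsl.MapMatcher{"Content-Type": dsl.String("application/json")},
                Body:    dsl.Like(map[string]interface{}{"id": 42, "name": "Jean-Marie"}),
            })

        // Exercise the consumer code against the mock server; Verify fails the test
        // if the expected interactions were not all completed.
        if err := pact.Verify(func() error {
            resp, err := http.Get(fmt.Sprintf("http://localhost:%d/users/42", pact.Server.Port))
            if err != nil {
                return err
            }
            return resp.Body.Close()
        }); err != nil {
            t.Fatal(err)
        }
    }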
Configuring the provider is a little more involved, and (currently) requires running an API endpoint to configure any provider states (http://docs.pact.io/documentation/provider_states.html) during the verification process. The option you must provide to the dsl.VerifyRequest is: An example route using the standard Go http package might look like this: See the examples or read more at http://docs.pact.io/documentation/provider_states.html. See the Pact Broker (http://docs.pact.io/documentation/sharings_pacts.html) documentation for more details on the Broker and this article (http://rea.tech/enter-the-pact-matrix-or-how-to-decouple-the-release-cycles-of-your-microservices/) on how to make it work for you. Publishing using Go code: Publishing from the CLI: Use a cURL request like the following to PUT the pact to the right location, specifying your consumer name, provider name, and consumer version. The following flags are required to use basic authentication when publishing or retrieving Pact files to/from a Pact Broker: Pact Go uses a simple log utility (logutils - https://github.com/hashicorp/logutils) to filter log messages. The CLI already contains flags to manage this; should you want to control the log level in your tests, you can set it like so:
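Returning to provider verification, a hedged sketch might look like the following, again assuming the v1-style API; the provider base URL, pact file location, and the VerifyRequest fields used here (taken from the types package) are assumptions that may differ between Pact Go versions.

    package provider_test

    import (
        "testing"

        "github.com/pact-foundation/pact-go/dsl"
        "github.com/pact-foundation/pact-go/types"
    )

    func TestProvider(t *testing.T) {
        pact := &dsl.Pact{Provider: "my_provider"}

        // Assumes the provider API and its state-setup endpoint are already
        // running locally on port 8000 (hypothetical).
        _, err := pact.VerifyProvider(t, types.VerifyRequest{
            ProviderBaseURL:        "http://localhost:8000",
            PactURLs:               []string{"./pacts/my_consumer-my_provider.json"},
            ProviderStatesSetupURL: "http://localhost:8000/setup",
        })
        if err != nil {
            t.Fatal(err)
        }
    }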