Package seelog implements logging functionality with flexible dispatching, filtering, and formatting.

To create a logger, use one of the LoggerFrom* constructor funcs. Deferring a Flush call is important because if you are using asynchronous logger behavior, you may otherwise lose some messages when you close your application: messages are processed in another non-blocking goroutine, so you explicitly defer flushing all messages before closing. A logger created using one of the LoggerFrom* funcs can be used directly by calling one of the main log funcs.

Having loggers as variables is convenient if you are writing your own package with internal logging or if you have several loggers with different options. But for most standalone apps it is more convenient to use the package level funcs and vars. There is a package level var 'Current' made for this. You can replace it with another logger using 'ReplaceLogger' and then use the package level funcs. Once the 'Current' logger has been replaced with a 'ReplaceLogger' call it becomes equal to the logger variable created from config, so the package level funcs do the same thing as calling the corresponding methods on that logger. This way you are able to use the package level funcs instead of passing the logger variable around.

The main point of seelog is to configure the logger via config files rather than in code. The configuration is read by the LoggerFrom* funcs. These funcs read xml configuration from different sources and try to create a logger using it. All the configuration features are covered in detail in the official wiki: https://github.com/cihub/seelog/wiki. There are many sections covering different aspects of seelog; after you understand the core concepts, check the 'Reference' section on the main wiki page to get the up-to-date list of dispatchers, receivers, formats, and logger types. As an example of what configs can express, consider a logger with an adaptive timeout between log messages (check the logger types reference) which logs to the console, all.log, and errors.log depending on the log level, with output formats that also depend on the log level; such a logger can use only log level 'debug' and higher (minlevel is set) for all files with names that don't start with 'test', while for files starting with 'test' it prohibits all levels below 'error'.

Although configuration using code is not recommended, it is sometimes needed and it is possible to do with seelog. Basically, what you need to do to get started is to create constraints, exceptions, and a dispatcher tree (same as with config). Most of the New* functions in this package are used to provide such capabilities. For example, configuration in code can build an async loop logger that logs to a simple split dispatcher with a console receiver, uses a specified format, and is filtered using top-level min/max constraints and one exception for the 'main.go' file; this demonstrates most of the configuration features.

To learn seelog features faster you should check the examples package: https://github.com/cihub/seelog-examples. It contains many example configs and use cases.
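As a minimal sketch of the wiring described above (the inline xml config, console output, and format string are illustrative choices, not part of the package docs), a standalone app might look like this:

```go
package main

import (
	log "github.com/cihub/seelog"
)

func main() {
	// Flush any buffered messages before the program exits; without this,
	// an asynchronous logger may drop messages that are still queued.
	defer log.Flush()

	// Build a logger from an inline xml config; it could also be read
	// from a file with LoggerFromConfigAsFile.
	logger, err := log.LoggerFromConfigAsString(`
<seelog type="sync">
    <outputs formatid="main">
        <console/>
    </outputs>
    <formats>
        <format id="main" format="%Date %Time [%LEVEL] %Msg%n"/>
    </formats>
</seelog>`)
	if err != nil {
		panic(err)
	}

	// Replace the package level 'Current' logger so the package level
	// funcs (Trace, Debug, Info, ...) use the configured logger.
	if err := log.ReplaceLogger(logger); err != nil {
		panic(err)
	}

	log.Info("hello from seelog")
}
```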
Package btcrpcclient implements a websocket-enabled Bitcoin JSON-RPC client. This client provides a robust and easy to use client for interfacing with a Bitcoin RPC server that uses a btcd/bitcoin core compatible Bitcoin JSON-RPC API. This client has been tested with btcd (https://github.com/btcsuite/btcd), btcwallet (https://github.com/btcsuite/btcwallet), and bitcoin core (https://github.com/bitcoin/bitcoin).

In addition to the compatible standard HTTP POST JSON-RPC API, btcd and btcwallet provide a websocket interface that is more efficient than the standard HTTP POST method of accessing RPC. The section below discusses the differences between HTTP POST and websockets. By default, this client assumes the RPC server supports websockets and has TLS enabled. In practice, this currently means it assumes you are talking to btcd or btcwallet by default. However, configuration options are provided to fall back to HTTP POST and disable TLS to support talking with inferior bitcoin core style RPC servers.

In HTTP POST-based JSON-RPC, every request creates a new HTTP connection, issues the call, waits for the response, and closes the connection. This adds quite a bit of overhead to every call and lacks flexibility for features such as notifications. In contrast, the websocket-based JSON-RPC interface provided by btcd and btcwallet only uses a single connection that remains open and allows asynchronous bi-directional communication. The websocket interface supports all of the same commands as HTTP POST, but they can be invoked without having to go through a connect/disconnect cycle for every call. In addition, the websocket interface provides other nice features such as the ability to register for asynchronous notifications of various events.

The client provides both a synchronous (blocking) and asynchronous API. The synchronous (blocking) API is typically sufficient for most use cases. It works by issuing the RPC and blocking until the response is received. This allows straightforward code where you have the response as soon as the function returns. The asynchronous API works on the concept of futures. When you invoke the async version of a command, it will quickly return an instance of a type that promises to provide the result of the RPC at some future time. In the background, the RPC call is issued and the result is stored in the returned instance. Invoking the Receive method on the returned instance will either return the result immediately if it has already arrived, or block until it has. This is useful since it provides the caller with greater control over concurrency.

The first important part of notifications is to realize that they will only work when connected via websockets. This should intuitively make sense because HTTP POST mode does not keep a connection open! All notifications provided by btcd require registration to opt in. For example, if you want to be notified when funds are received by a set of addresses, you register the addresses via the NotifyReceived (or NotifyReceivedAsync) function. Notifications are exposed by the client through the use of callback handlers which are set up via a NotificationHandlers instance that is specified by the caller when creating the client. It is important that these notification handlers complete quickly since they are intentionally run in the main read loop and will block further reads until they complete. This provides the caller with the flexibility to decide what to do when notifications are coming in faster than they are being handled.
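Here is a rough sketch of the synchronous and asynchronous APIs over websockets. The host, credentials, and certificate path are placeholder assumptions; GetBlockCount/GetBlockCountAsync are used as representative calls:

```go
package main

import (
	"io/ioutil"
	"log"
	"path/filepath"

	"github.com/btcsuite/btcrpcclient"
	"github.com/btcsuite/btcutil"
)

func main() {
	// Websocket mode assumes TLS, so load the server's RPC certificate
	// (path shown here is the default btcd location; adjust as needed).
	certs, err := ioutil.ReadFile(filepath.Join(
		btcutil.AppDataDir("btcd", false), "rpc.cert"))
	if err != nil {
		log.Fatal(err)
	}

	// Connect using websockets; Endpoint "ws" selects the websocket API.
	connCfg := &btcrpcclient.ConnConfig{
		Host:         "localhost:8334",
		Endpoint:     "ws",
		User:         "yourrpcuser",
		Pass:         "yourrpcpass",
		Certificates: certs,
	}
	client, err := btcrpcclient.New(connCfg, nil) // nil: no notification handlers
	if err != nil {
		log.Fatal(err)
	}
	defer client.Shutdown()

	// Synchronous (blocking) call: returns once the response arrives.
	count, err := client.GetBlockCount()
	if err != nil {
		log.Fatal(err)
	}
	log.Printf("block count: %d", count)

	// Asynchronous call: returns a future immediately; Receive blocks
	// until the result is available.
	future := client.GetBlockCountAsync()
	count, err = future.Receive()
	if err != nil {
		log.Fatal(err)
	}
	log.Printf("block count (async): %d", count)
}
```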
In particular, because notification handlers run in the read loop, issuing a blocking RPC call from a callback handler will cause a deadlock: further server responses won't be read until the callback returns, but the callback would be waiting for a response. Thus, any additional RPCs must be issued in a completely decoupled manner.

By default, when running in websockets mode, this client will automatically keep trying to reconnect to the RPC server should the connection be lost. There is a back-off in between each connection attempt until it reaches one try per minute. Once a connection is re-established, all previously registered notifications are automatically re-registered and any in-flight commands are re-issued. This means from the caller's perspective, the request simply takes longer to complete. The caller may invoke the Shutdown method on the client to force the client to cease reconnect attempts and return ErrClientShutdown for all outstanding commands. The automatic reconnection can be disabled by setting the DisableAutoReconnect flag to true in the connection config when creating the client.

Minor RPC Server Differences and Chain/Wallet Separation

Some of the commands are extensions specific to a particular RPC server. For example, the DebugLevel call is an extension only provided by btcd (and btcwallet passthrough). Therefore if you call one of these commands against an RPC server that doesn't provide them, you will get an unimplemented error from the server. An effort has been made to call out which commands are extensions in their documentation. Also, it is important to realize that btcd intentionally separates the wallet functionality into a separate process named btcwallet. This means if you are connected to the btcd RPC server directly, only the RPCs which are related to chain services will be available. Depending on your application, you might only need chain-related RPCs. In contrast, btcwallet provides pass through treatment for chain-related RPCs, so it supports them in addition to wallet-related RPCs.

There are 3 categories of errors that will be returned throughout this package: errors related to the client connection, errors that occur before communicating with the server, and errors returned by the server. The first category of errors are typically one of ErrInvalidAuth, ErrInvalidEndpoint, ErrClientDisconnect, or ErrClientShutdown. NOTE: The ErrClientDisconnect will not be returned unless the DisableAutoReconnect flag is set, since the client automatically handles reconnect by default as previously described. The second category of errors typically indicates a programmer error and as such the type can vary, but usually will be best handled by simply showing/logging it. The third category of errors, that is, errors returned by the server, can be detected by type asserting the error to a *btcjson.RPCError; for example, this can be used to detect that a command is unimplemented by the remote RPC server, as shown below. Full-blown client examples can be found in the examples directory of the repository.
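Continuing from the client created in the sketch above, and assuming the btcjson package from the btcsuite project is imported, server-side errors can be inspected roughly like this:

```go
	// DebugLevel is a btcd extension; other servers may not implement it.
	_, err = client.DebugLevel("show")
	if err != nil {
		// Errors returned by the server arrive as *btcjson.RPCError values.
		if rpcErr, ok := err.(*btcjson.RPCError); ok {
			log.Printf("server error %d: %s", rpcErr.Code, rpcErr.Message)
		} else {
			// Connection or programmer errors fall into the other categories.
			log.Printf("non-server error: %v", err)
		}
	}
```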
Package poller is a file-descriptor multiplexer. It allows concurrent Read and Write operations from and to multiple file-descriptors without allocating one OS thread for every blocked operation. It operates similarly to Go's netpoller (which multiplexes network connections) without requiring special support from the Go runtime. It can be used with tty devices, character devices, pipes, FIFOs, and any file-descriptor that is poll-able (can be used with select(2), epoll(7), etc.)

In addition, package poller allows the user to set timeouts (deadlines) for read and write operations, and also allows for safe cancellation of blocked read and write operations; a Close from another goroutine safely cancels ongoing (blocked) read and write operations. Typical usage is sketched in the example below.

All operations on poller FDs are thread-safe; you can use the same FD from multiple goroutines. It is, for example, safe to close a file descriptor blocked on a Read or Write call from another goroutine.

Linux systems are supported using an implementation based on epoll(7). For other POSIX systems (that is, most Unix-like systems), a more portable fallback implementation based on select(2) is provided. It has the same semantics as the Linux epoll(7)-based implementation, but is expected to be of lower performance. The select(2)-based implementation uses CGo for some ancillary select-related operations. Ideally, system-specific io-multiplexing or async-io facilities (e.g. kqueue(2), or /dev/poll(7)) should be used to provide higher performance implementations for other systems. Patches for this will be greatly appreciated. If you wish, you can build package poller on Linux to use the select(2)-based implementation instead of the epoll(7) one. To do this define the build-tag "noepoll". Normally, there is no reason to do this.
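A sketch of the typical usage mentioned above, assuming the import path github.com/npat-efault/poller, an Open/O_RW constructor, and SetReadDeadline on the returned FD (the device path is a placeholder):

```go
package main

import (
	"log"
	"time"

	"github.com/npat-efault/poller"
)

func main() {
	// Open a poll-able device (placeholder path) for reading and writing.
	fd, err := poller.Open("/dev/ttyS0", poller.O_RW)
	if err != nil {
		log.Fatal(err)
	}
	// Close from any goroutine safely cancels blocked Reads and Writes.
	defer fd.Close()

	// Give up if no data arrives within five seconds.
	if err := fd.SetReadDeadline(time.Now().Add(5 * time.Second)); err != nil {
		log.Fatal(err)
	}

	buf := make([]byte, 128)
	n, err := fd.Read(buf) // blocks without tying up an OS thread
	if err != nil {
		log.Fatal(err)
	}
	log.Printf("read %d bytes: %q", n, buf[:n])
}
```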
Package workerpool provides simple async workers.
Package xkafka provides consumers & producers to work efficiently with Kafka. `Message` is the core data structure for xkafka. `xkafka` comes with consumer and producer implementations that support the features described below.

## Consumer

The processing mode is determined by the xkafka.Concurrency option. By default, the consumer is initialized with `enable.auto.offset.store=false`. The offset is "stored" after the message is processed. The offset is "committed" based on the `auto.commit.interval.ms` option.

It is important to understand the difference between "store" and "commit". The offset is "stored" in the consumer's memory and is "committed" to Kafka. The offset is "stored" after the message is processed and the `message.Status` is Success or Skip. The stored offsets are automatically committed, unless the `ManualCommit` option is enabled.

### Error Handling

By default, xkafka.Consumer will stop processing, commit the last stored offset, and exit if there is a Kafka error or if the handler returns an error. Errors can be handled by using one or more of the following options:

- within the handler implementation
- using error handling & retry middlewares
- through the catch-all xkafka.ErrorHandler option

xkafka.ErrorHandler is called for every error that is not handled by the handler or the middlewares. It is also called for errors returned by the underlying Kafka client. A rough consumer sketch follows at the end of this section.

### Sequential Processing

Sequential processing is the default mode. It is the same as xkafka.Concurrency(1).

### Async Processing

Async processing is enabled by setting xkafka.Concurrency to a value greater than 1. The consumer will use a pool of goroutines to process messages concurrently. Offsets are stored and committed in the order that the messages are received.

### Manual Commit

By default, the consumer automatically commits the offset based on the `auto.commit.interval.ms` option, asynchronously in the background. The consumer can be configured to commit the offset manually by setting the `xkafka.EnableManualCommit` option to true. When ManualCommit is enabled, the consumer will synchronously commit the offset after each message is processed.

NOTE: Enabling ManualCommit adds an overhead to each message. It is recommended to use ManualCommit only when necessary.
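A rough sketch of a concurrent consumer. Only Concurrency and ErrorHandler are named in the text above; the NewConsumer constructor, HandlerFunc adapter, AckSuccess method, and Run method are assumptions made for illustration:

```go
package main

import (
	"context"
	"log"

	"github.com/gojekfarm/xtools/xkafka"
)

func main() {
	ctx := context.Background()

	// The handler processes one message; returning an error stops the
	// consumer unless a middleware or the ErrorHandler option handles it.
	handler := xkafka.HandlerFunc(func(ctx context.Context, msg *xkafka.Message) error {
		log.Printf("received: %s", msg.Value)
		msg.AckSuccess() // assumed helper: mark Success so the offset is stored
		return nil
	})

	// NewConsumer is an assumed constructor name; broker/topic options are
	// omitted for brevity.
	consumer, err := xkafka.NewConsumer(
		"example-consumer",
		handler,
		xkafka.Concurrency(10), // async mode: pool of 10 goroutines
		xkafka.ErrorHandler(func(err error) error {
			log.Printf("consume error: %v", err)
			return nil // returning nil keeps the consumer running
		}),
	)
	if err != nil {
		log.Fatal(err)
	}

	if err := consumer.Run(ctx); err != nil {
		log.Fatal(err)
	}
}
```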
IronFunctions daemon

Refer to the detailed documentation at https://github.com/iron-io/functions/tree/master/docs

Environment Variables:

DB_URL - The database URL to use, in URL format. See [Databases](databases/README.md) for more information. Default: BoltDB in the current working directory, `bolt.db`.

MQ_URL - The message queue to use, in URL format. See [Message Queues](mqs/README.md) for more information. Default: BoltDB in the current working directory, `queue.db`.

API_URL - The primary IronFunctions API URL that this instance will talk to. In a production environment, this would be your load balancer URL.

PORT - The port to run on. Default: `8080`.

NUM_ASYNC - The number of async runners in the functions process. Default: 1.

LOG_LEVEL - Set to `DEBUG` to enable debugging. Default: INFO.
Package chromedp is a high level Chrome DevTools Protocol client that simplifies driving browsers for scraping, unit testing, or profiling web pages using the CDP. chromedp requires no third-party dependencies, implementing the async Chrome DevTools Protocol entirely in Go. This package includes a number of simple examples. Additionally, chromedp/examples contains more complex examples.
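A minimal sketch of driving a browser with chromedp (the URL is an arbitrary example; Navigate and Title are standard actions run through chromedp.Run):

```go
package main

import (
	"context"
	"log"

	"github.com/chromedp/chromedp"
)

func main() {
	// Create a browser context; cancel releases the browser when done.
	ctx, cancel := chromedp.NewContext(context.Background())
	defer cancel()

	// Run a sequence of actions: navigate and read the page title.
	var title string
	err := chromedp.Run(ctx,
		chromedp.Navigate("https://pkg.go.dev/"),
		chromedp.Title(&title),
	)
	if err != nil {
		log.Fatal(err)
	}
	log.Printf("page title: %q", title)
}
```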