Package sprout provides types and utilities for implementing client and server programs that speak the Sprout Protocol. The Sprout Protocol is specified here:

https://man.sr.ht/~whereswaldon/arborchat/specifications/sprout.md

NOTE: this package requires using a fork of golang.org/x/crypto, and you must therefore include the following in your `go.mod`:

This package exports several important types.

The Conn type wraps a connection-oriented transport (usually a TCP connection) and provides methods for sending sprout messages and reading sprout messages off of the connection. It has a number of exported fields which are functions that should handle incoming messages. These must be set by the user, and their behavior should conform to the Sprout specification. If using a Conn directly, be sure to invoke the ReadMessage() method properly to ensure that you receive replies.

The Worker type wraps a Conn and provides automatic implementations of both the handler functions for each sprout message and the processing loop that reads new messages and dispatches their handlers. It has an exported embedded Conn, so you can send messages on a Worker by calling Conn methods via struct embedding.

The Conn type has both synchronous and asynchronous methods for sending messages. The synchronous ones block until they receive a response or their timeout channel emits a value. Details on how to use these methods follow.

Note: The Send* methods

The non-Async methods block until they get a response or until their timeout is reached. There are several cases in which they will return an error:

- There is a network problem sending the message or receiving the response
- There is a problem creating the outbound message or parsing the inbound response
- The status message received in response is not sprout.StatusOk. In this case, the error will be of type sprout.Status

The recommended way to invoke synchronous Send*() methods is with a time.Ticker as the input channel, as shown in the second sketch below.

Note: The Send*Async methods

The Async versions of each send operation provide more granular control over blocking behavior. They return a chan interface{}, but will never send anything other than a sprout.Status or a sprout.Response over that channel, so it is safe to assume that the received value is one of those two types.

The Async versions also return a handle for the request called a MessageID. This can be used to cancel the request in the event that it never receives a response or the response no longer matters. This can be done manually using the Cancel() method on the Conn type. The synchronous version of each send method handles this for you, but it must be done manually with the async variants. An example of the appropriate use of an async method:
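The sketch below uses SendExampleAsync as an illustrative placeholder; substitute a real Send*Async method and its actual arguments, which are not reproduced here.

	responseChan, messageID, err := conn.SendExampleAsync(data)
	if err != nil {
		// the message could not be constructed or transmitted
		return err
	}
	select {
	case result := <-responseChan:
		switch value := result.(type) {
		case sprout.Status:
			// the peer answered with a status message
			_ = value
		case sprout.Response:
			// the peer answered with a full response message
			_ = value
		}
	case <-time.After(5 * time.Second):
		// no answer is coming (or we no longer care); cancel the request so
		// the Conn stops tracking its MessageID. Cancel is assumed here to
		// accept the MessageID returned above.
		conn.Cancel(messageID)
	}

And the synchronous variant mentioned above, again with SendExample standing in for a real Send* method and the final argument assumed to be the timeout channel:

	ticker := time.NewTicker(5 * time.Second)
	defer ticker.Stop()
	if err := conn.SendExample(data, ticker.C); err != nil {
		// err is a network failure, a construct/parse failure, or a
		// sprout.Status other than sprout.StatusOk
		return err
	}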
Package async is a library for asynchronous programming. Since Go has already done a great job of bringing green/virtual threads to life, this library only implements a single-threaded Executor type, which some refer to as an async runtime. One can create as many executors as they like.

While Go excels at forking, async, on the other hand, excels at joining. Want to execute pieces of code from various goroutines in a single-threaded way? An Executor is designed to run tasks spawned in various goroutines sequentially. This comes in handy when one wants to do a series of operations on a single thread, for example, to read or update states that are not safe for concurrent access, to write data to the console, to update one's user interface, etc.

Beware that there is no backpressure: task spawning is designed not to block. If spawning outruns execution, an executor can easily consume a lot of memory over time. To mitigate this, one can introduce a semaphore per hot spot.

A Task can be reactive. A task is spawned with a Coroutine to take care of it. In this user-provided function, one can return a specific Result to tell a coroutine to watch and await some events (e.g. Signal, State and Memo, etc.), and the coroutine re-runs the task whenever any of these events notifies. This is useful when one wants to do something repeatedly. It works like a loop. To exit this loop, just return a Result that ends the coroutine from within the task function. Simple.

A Coroutine can also transit from one Task to another, just like a state machine can transit from one state to another. This is done by returning another specific Result from within a task function. A coroutine can transit from one task to another until a task ends it. With the ability to transit, async is able to provide more advanced control structures, like Block, Loop and Func, to ease the process of writing async code. The experience now feels similar to that of writing sync code.

It's not recommended to have channel operations in an async Task for a Coroutine to do, since they tend to block. For an Executor, if one coroutine blocks, no other coroutines can run. So instead of passing data around, one would just handle data in place.

One of the advantages of passing data over channels is the ability to reduce allocation. Unfortunately, async tasks always escape to the heap, and any variable they capture also escapes to the heap. One should stay alert and take measures in hot spots, for example by reusing the same task repeatedly.

This example demonstrates how to spawn tasks with different paths. The lower the path, the higher the priority. This example creates a task with path "aa" for additional computations and another task with path "zz" for printing results. The former runs before the latter because "aa" < "zz".

This example demonstrates how to add a function call before a task re-runs, or after a task ends.

This example demonstrates how a task can conditionally depend on a state.

This example demonstrates how a memo can conditionally depend on a state.

This example demonstrates how to end a task. It creates a task that prints the value of a state whenever it changes. The task only prints 0, 1, 2 and 3 because it is ended after 3.

This example demonstrates how to use memos to memoize cheap computations. Memos are evaluated lazily. They take effect only when they are acquired.

This example demonstrates how to set up an autorun function to run an executor in a goroutine automatically whenever a coroutine is spawned or resumed.
This example demonstrates how a coroutine can transit from one task to another.
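The core idea above, running tasks spawned from many goroutines sequentially on a single goroutine, can be illustrated without this package's API. The toyExecutor below is a hypothetical stand-in, not this package's Executor; it only shows why serializing work onto one goroutine makes unsynchronized state safe to touch.

	package main

	import (
		"fmt"
		"sync"
	)

	// toyExecutor is a minimal stand-in for a single-threaded executor: every
	// spawned function runs on the same goroutine, one after another.
	type toyExecutor struct {
		tasks chan func()
	}

	func newToyExecutor() *toyExecutor {
		e := &toyExecutor{tasks: make(chan func(), 64)}
		go func() {
			for task := range e.tasks {
				task() // tasks never run concurrently with one another
			}
		}()
		return e
	}

	// Spawn does not block the caller while the buffer has room, mirroring the
	// "no backpressure" caveat above.
	func (e *toyExecutor) Spawn(task func()) { e.tasks <- task }

	func main() {
		e := newToyExecutor()
		counter := 0 // only ever touched from the executor goroutine

		var wg sync.WaitGroup
		for i := 0; i < 10; i++ {
			wg.Add(1)
			go func() {
				defer wg.Done()
				e.Spawn(func() { counter++ }) // safe without a mutex
			}()
		}
		wg.Wait()

		done := make(chan struct{})
		e.Spawn(func() { fmt.Println("counter:", counter); close(done) })
		<-done
	}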
Package async provides simplistic asynchronous routines for the masses.
Package rpcclient implements a websocket-enabled Decred JSON-RPC client.

This client provides a robust and easy to use client for interfacing with a Decred RPC server that uses a mostly btcd/bitcoin core style Decred JSON-RPC API. This client has been tested with dcrd (https://github.com/decred/dcrd) and dcrwallet (https://github.com/decred/dcrwallet).

In addition to the compatible standard HTTP POST JSON-RPC API, dcrd and dcrwallet provide a websocket interface that is more efficient than the standard HTTP POST method of accessing RPC. The section below discusses the differences between HTTP POST and websockets.

By default, this client assumes the RPC server supports websockets and has TLS enabled. In practice, this currently means it assumes you are talking to dcrd or dcrwallet by default. However, configuration options are provided to fall back to HTTP POST and disable TLS to support talking with inferior bitcoin core style RPC servers.

Websockets vs HTTP POST

In HTTP POST-based JSON-RPC, every request creates a new HTTP connection, issues the call, waits for the response, and closes the connection. This adds quite a bit of overhead to every call and lacks flexibility for features such as notifications.

In contrast, the websocket-based JSON-RPC interface provided by dcrd and dcrwallet only uses a single connection that remains open and allows asynchronous bi-directional communication. The websocket interface supports all of the same commands as HTTP POST, but they can be invoked without having to go through a connect/disconnect cycle for every call. In addition, the websocket interface provides other nice features such as the ability to register for asynchronous notifications of various events.

Synchronous vs Asynchronous API

The client provides both a synchronous (blocking) and asynchronous API. The synchronous (blocking) API is typically sufficient for most use cases. It works by issuing the RPC and blocking until the response is received. This allows straightforward code where you have the response as soon as the function returns.

The asynchronous API works on the concept of futures. When you invoke the async version of a command, it will quickly return an instance of a type that promises to provide the result of the RPC at some future time. In the background, the RPC call is issued and the result is stored in the returned instance. Invoking the Receive method on the returned instance will either return the result immediately if it has already arrived, or block until it has. This is useful since it provides the caller with greater control over concurrency.

Notifications

The first important part of notifications is to realize that they will only work when connected via websockets. This should intuitively make sense because HTTP POST mode does not keep a connection open!

All notifications provided by dcrd require registration to opt in. For example, if you want to be notified when funds are received by a set of addresses, you register the addresses via the NotifyReceived (or NotifyReceivedAsync) function.

Notification Handlers

Notifications are exposed by the client through the use of callback handlers which are set up via a NotificationHandlers instance that is specified by the caller when creating the client.

It is important that these notification handlers complete quickly since they are intentionally in the main read loop and will block further reads until they complete. This provides the caller with the flexibility to decide what to do when notifications are coming in faster than they are being handled.
In particular this means issuing a blocking RPC call from a callback handler will cause a deadlock, since more server responses won't be read until the callback returns, but the callback would be waiting for a response. Thus, any additional RPCs must be issued in a completely decoupled manner.

Automatic Reconnection

By default, when running in websockets mode, this client will automatically keep trying to reconnect to the RPC server should the connection be lost. There is a back-off in between each connection attempt until it reaches one try per minute. Once a connection is re-established, all previously registered notifications are automatically re-registered and any in-flight commands are re-issued. This means from the caller's perspective, the request simply takes longer to complete.

The caller may invoke the Shutdown method on the client to force the client to cease reconnect attempts and return ErrClientShutdown for all outstanding commands. The automatic reconnection can be disabled by setting the DisableAutoReconnect flag to true in the connection config when creating the client.

Minor RPC Server Differences and Chain/Wallet Separation

Some of the commands are extensions specific to a particular RPC server. For example, the DebugLevel call is an extension only provided by dcrd (and dcrwallet passthrough). Therefore if you call one of these commands against an RPC server that doesn't provide them, you will get an unimplemented error from the server. An effort has been made to call out which commands are extensions in their documentation.

Also, it is important to realize that dcrd intentionally separates the wallet functionality into a separate process named dcrwallet. This means if you are connected to the dcrd RPC server directly, only the RPCs which are related to chain services will be available. Depending on your application, you might only need chain-related RPCs. In contrast, dcrwallet provides pass-through treatment for chain-related RPCs, so it supports them in addition to wallet-related RPCs.

Errors

There are 3 categories of errors that will be returned throughout this package: errors related to the client connection, errors that indicate a programmer error, and errors returned by the RPC server.

The first category of errors are typically one of ErrInvalidAuth, ErrInvalidEndpoint, ErrClientDisconnect, or ErrClientShutdown. NOTE: The ErrClientDisconnect will not be returned unless the DisableAutoReconnect flag is set since the client automatically handles reconnect by default as previously described.

The second category of errors typically indicates a programmer error and as such the type can vary, but usually will be best handled by simply showing/logging it.

The third category of errors, that is errors returned by the server, can be detected by type asserting the error to a *dcrjson.RPCError. For example, to detect if a command is unimplemented by the remote RPC server, see the second sketch below.

The following full-blown client examples are in the examples directory:
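Those examples are not reproduced here, but the following hedged sketch shows the basic flow of creating a client and using the synchronous and asynchronous APIs described above. The import path version, the ConnConfig field values, and whether the RPC methods take a context.Context vary between rpcclient versions, so treat the exact signatures as assumptions rather than the definitive API.

	package main

	import (
		"context"
		"log"
		"os"

		"github.com/decred/dcrd/rpcclient/v8" // illustrative version
	)

	func main() {
		ctx := context.Background()

		// dcrd generates rpc.cert on first run; websockets and TLS are the
		// defaults described above.
		certs, err := os.ReadFile("rpc.cert")
		if err != nil {
			log.Fatal(err)
		}

		connCfg := &rpcclient.ConnConfig{
			Host:         "localhost:9109",
			Endpoint:     "ws",
			User:         "rpcuser",
			Pass:         "rpcpass",
			Certificates: certs,
		}
		client, err := rpcclient.New(connCfg, nil) // nil: no notification handlers
		if err != nil {
			log.Fatal(err)
		}
		defer client.Shutdown()

		// Synchronous (blocking) call: the result is available as soon as the
		// function returns.
		count, err := client.GetBlockCount(ctx)
		if err != nil {
			log.Fatal(err)
		}
		log.Println("block count:", count)

		// Asynchronous call: a future is returned immediately; Receive blocks
		// only when the result is actually needed.
		future := client.GetBlockCountAsync(ctx)
		// ... issue other RPCs or do unrelated work here ...
		count, err = future.Receive()
		if err != nil {
			log.Fatal(err)
		}
		log.Println("block count (via future):", count)
	}

As for the third error category, a server-side "unimplemented command" error can be detected by type asserting to *dcrjson.RPCError. The ErrRPCMethodNotFound code used below is an assumption; check the dcrjson constants for your version.

	import "github.com/decred/dcrd/dcrjson/v4" // illustrative version

	// isUnimplemented reports whether err is a server error indicating that
	// the remote RPC server does not implement the requested command.
	func isUnimplemented(err error) bool {
		if jerr, ok := err.(*dcrjson.RPCError); ok {
			// Assumed constant for "method not found"/unimplemented commands.
			return jerr.Code == dcrjson.ErrRPCMethodNotFound
		}
		return false
	}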
Package pipe allows building and executing DSP pipelines.

This package offers an opinionated perspective on DSP. It is based on the idea that signal processing can have up to three stages: reading the signal (source), processing it (processor), and writing it (sink). It implies the following constraints:

The current implementation supports two execution modes: sync and async. In async mode, every stage of every line is executed by its own goroutine, and channels are used to communicate between them. Sync mode allows running one or more lines in the same goroutine; in this case, lines and the stages within them are executed sequentially, as provided. Pipe allows different modes to be used in the same run.

Each stage in the pipeline is implemented by components. For example, wav.Source reads a signal from a wav file and vst2.Processor processes a signal with a vst2 plugin. Components are instantiated with allocator functions: allocator functions return component structures and pre-allocate all required resources and structures. This reduces the number of allocations during pipeline execution and improves latency.

Component structures consist of mutability, a run closure and a flush hook. The run closure is the function which will be called during the pipeline run. The flush hook is triggered when the pipe is done or is interrupted by an error or timeout; it allows proper clean-up logic to be executed. For mutability, refer to the mutability package documentation.

To run the pipeline, one first needs to build it. It starts with a line definition: a line defines the order in which DSP components form the pipeline. Once the line is defined, components can be bound together. This is done by creating a pipe: New executes all allocators provided by the lines and binds the components together into the pipe. Once the pipe is built, it can be executed. To do that, the Start method should be called. Start will start and asynchronously run all DSP components until any of the following happens: the source is done, the context is done, or an error occurs in any of the components.
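The flow just described (define a line, build a pipe with New, then Start it) can be sketched roughly as follows. The pipe.Line fields, the pipe.New and Start signatures, and the wav/vst2 allocator calls are assumptions inferred from the description above, not the verified API of this package.

	// Assumed API shapes throughout; flagged inline.

	// 1. Define a line: the order in which DSP components form the pipeline.
	line := pipe.Line{
		Source:     wav.Source(input),                       // assumed allocator call
		Processors: pipe.Processors(vst2.Processor(plugin)), // assumed allocator call
		Sink:       wav.Sink(output),                        // assumed allocator call
	}

	// 2. Bind the components together: New runs the allocators and builds the pipe.
	p, err := pipe.New(bufferSize, line) // assumed signature
	if err != nil {
		log.Fatal(err)
	}

	// 3. Execute: Start asynchronously runs all components until the source is
	// done, the context is done, or a component returns an error.
	ctx, cancel := context.WithCancel(context.Background())
	defer cancel()
	if err := pipe.Wait(p.Start(ctx)); err != nil { // assumed: Start returns a channel drained by Wait
		log.Fatal(err)
	}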