Package tview implements rich widgets for terminal based user interfaces. The widgets provided with this package are useful for data exploration and data entry. The package implements the following widgets: The package also provides Application which is used to poll the event queue and draw widgets on screen. The following is a very basic example showing a box with the title "Hello, world!": First, we create a box primitive with a border and a title. Then we create an application, set the box as its root primitive, and run the event loop. The application exits when the application's Application.Stop function is called or when Ctrl-C is pressed. You will find more demos in the "demos" subdirectory. It also contains a presentation (written using tview) which gives an overview of the different widgets and how they can be used. Throughout this package, styles are specified using the tcell.Style type. Styles specify colors with the tcell.Color type. Functions such as tcell.GetColor, tcell.NewHexColor, and tcell.NewRGBColor can be used to create colors from W3C color names or RGB values. The tcell.Style type also allows you to specify text attributes such as "bold" or "italic" or a URL which some terminals use to display hyperlinks. Almost all strings which are displayed may contain style tags. A style tag's content is always wrapped in square brackets. In its simplest form, a style tag specifies the foreground color of the text. Colors in these tags are W3C color names or six hexadecimal digits following a hash tag. Examples: A style tag changes the style of the characters following that style tag. There is no style stack and no nesting of style tags. Style tags are used in almost everything from box titles, list text, form item labels, to table cells. In a TextView, this functionality has to be switched on explicitly. See the TextView documentation for more information. 
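The "Hello, world!" example described above looks roughly like this (a sketch following the tview README, assuming the github.com/rivo/tview import path):

```go
package main

import "github.com/rivo/tview"

func main() {
	// First, create a box primitive with a border and a title.
	box := tview.NewBox().
		SetBorder(true).
		SetTitle("Hello, world!")

	// Then create an application, set the box as its root primitive,
	// and run the event loop. Application.Stop or Ctrl-C exits.
	if err := tview.NewApplication().SetRoot(box, true).Run(); err != nil {
		panic(err)
	}
}
```

Note that SetBorder and SetTitle are Box functions, so the chain above works only because a bare Box is being configured; see the note on method chaining below.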
A style tag's full format looks like this: Each of the four fields can be left blank and trailing fields can be omitted. (Empty square brackets "[]", however, are not considered style tags.) Fields that are not specified will be left unchanged. A field with just a dash ("-") means "reset to default". You can specify the following flags to turn on certain attributes (some flags may not be supported by your terminal): Use uppercase letters to turn off the corresponding attribute, for example, "B" to turn off bold. Uppercase letters have no effect if the attribute was not previously set. Setting a URL allows you to turn a piece of text into a hyperlink in some terminals. Specify a dash ("-") to specify the end of the hyperlink. Hyperlinks must only contain single-byte characters (e.g. ASCII) and they may not contain bracket characters ("[" or "]"). Examples: In the rare event that you want to display a string such as "[red]" or "[#00ff1a]" without applying its effect, you need to put an opening square bracket before the closing square bracket. Note that the text inside the brackets will be matched less strictly than region or color tags. I.e. any character that may be used in color or region tags will be recognized. Examples: You can use the Escape() function to insert brackets automatically where needed. When primitives are instantiated, they are initialized with colors taken from the global Styles variable. You may change this variable to adapt the look and feel of the primitives to your preferred style. Note that most terminals will not report information about their color theme. This package therefore does not support using the terminal's color theme. The default style is a dark theme and you must change the Styles variable to switch to a light (or other) theme. This package supports all Unicode characters supported by your terminal. If your terminal supports mouse events, you can enable mouse support for your application by calling Application.EnableMouse.
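As an illustration of the escaping rule, a tag such as "[red]" can be neutralized by inserting an opening bracket before its closing bracket. The sketch below mimics that rule with a plain regular expression; it is not tview's actual Escape() implementation, just a demonstration of the transformation:

```go
package main

import (
	"fmt"
	"regexp"
)

// tagPattern matches tag-like "[...]" spans with no nested brackets.
var tagPattern = regexp.MustCompile(`\[([^\[\]]+)\]`)

// escapeTags inserts an opening bracket before each tag's closing
// bracket, so "[red]" becomes "[red[]" and is displayed literally
// as "[red]" instead of changing the text color.
func escapeTags(s string) string {
	return tagPattern.ReplaceAllString(s, "[$1[]")
}

func main() {
	fmt.Println(escapeTags("[red]danger[-]")) // [red[]danger[-[]
}
```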
Note that this may interfere with your terminal's default mouse behavior. Mouse support is disabled by default. Many functions in this package are not thread-safe. For many applications, this is not an issue: If your code makes changes in response to key events, the corresponding callback function will execute in the main goroutine and thus will not cause any race conditions. (Exceptions to this are documented.) If you access your primitives from other goroutines, however, you will need to synchronize execution. The easiest way to do this is to call Application.QueueUpdate or Application.QueueUpdateDraw (see the function documentation for details): One exception to this is the io.Writer interface implemented by TextView. You can safely write to a TextView from any goroutine. See the TextView documentation for details. You can also call Application.Draw from any goroutine without having to wrap it in Application.QueueUpdate. And, as mentioned above, key event callbacks are executed in the main goroutine and thus should not use Application.QueueUpdate as that may lead to deadlocks. It is also not necessary to call Application.Draw from such callbacks as it will be called automatically. All widgets listed above contain the Box type. All of Box's functions are therefore available for all widgets, too. Please note that if you are using the functions of Box on a subclass, they will return a *Box, not the subclass. This is a Golang limitation. So while tview supports method chaining in many places, these chains must be broken when using Box's functions. Example: You will need to call Box.SetBorder separately: All widgets also implement the Primitive interface. The tview package's rendering is based on version 2 of https://github.com/gdamore/tcell. It uses types and constants from that package (e.g. colors, styles, and keyboard values).
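The two points above, updating primitives from another goroutine and breaking a method chain at a Box function, can be sketched like this (assuming the github.com/rivo/tview import path; the TextView contents are placeholders):

```go
package main

import "github.com/rivo/tview"

func main() {
	app := tview.NewApplication()
	textView := tview.NewTextView()

	// Box.SetBorder returns a *Box, not a *TextView, so the chain
	// must be broken: configure the TextView first, then the border.
	textView.SetText("Hello")
	textView.SetBorder(true)

	// From any goroutine other than the main one, wrap updates in
	// QueueUpdateDraw so they run in the main goroutine and are
	// followed by a redraw.
	go func() {
		app.QueueUpdateDraw(func() {
			textView.SetText("updated from a goroutine")
		})
	}()

	if err := app.SetRoot(textView, true).Run(); err != nil {
		panic(err)
	}
}
```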
Package amqp is an AMQP 0.9.1 client with RabbitMQ extensions. Understand the AMQP 0.9.1 messaging model by reviewing these links first. Much of the terminology in this library directly relates to AMQP concepts. Most other broker clients publish to queues, but in AMQP, clients publish to Exchanges instead. AMQP is programmable, meaning that both the producers and consumers agree on the configuration of the broker, instead of requiring an operator or system configuration that declares the logical topology in the broker. The routing between producers and consumer queues is via Bindings. These bindings form the logical topology of the broker. In this library, a message sent from a publisher is called a "Publishing" and a message received by a consumer is called a "Delivery". The fields of Publishings and Deliveries are close but not exact mappings to the underlying wire format to maintain stronger types. Many other libraries will combine message properties with message headers. In this library, the well-known message properties are strongly typed fields on the Publishings and Deliveries, whereas the user-defined headers are in the Headers field. The method naming closely matches the protocol's method name with positional parameters mapping to named protocol message fields. The motivation here is to present a comprehensive view over all possible interactions with the server. Generally, methods that map to protocol methods of the "basic" class will be elided in this interface, and "select" methods of various channel mode selectors will be elided as well, for example Channel.Confirm and Channel.Tx. The library is intentionally designed to be synchronous, where responses for each protocol message are required to be received in an RPC manner. Some methods have a noWait parameter like Channel.QueueDeclare, and some methods are asynchronous like Channel.Publish.
The error values should still be checked for these methods as they will indicate IO failures like when the underlying connection closes. Clients of this library may be interested in receiving some of the protocol messages other than Deliveries, like basic.ack methods while a channel is in confirm mode. The Notify* methods with Connection and Channel receivers model the pattern of asynchronous events like closes due to exceptions, or messages that are sent out of band from an RPC call like basic.ack or basic.flow. Any asynchronous events, including Deliveries and Publishings, must always have a receiver until the corresponding chans are closed. Without asynchronous receivers, the synchronous methods will block. It's important as a client to an AMQP topology to ensure the state of the broker matches your expectations. For both publish and consume use cases, make sure you declare the queues, exchanges and bindings you expect to exist prior to calling Channel.Publish or Channel.Consume. SSL/TLS - Secure connections: When Dial encounters an amqps:// scheme, it will use the zero value of a tls.Config. This will only perform server certificate and host verification. Use DialTLS when you wish to provide a client certificate (recommended), include a private certificate authority's certificate in the cert chain for server validity, or run insecure by not verifying the server certificate. DialTLS will use the provided tls.Config when it encounters an amqps:// scheme and will dial a plain connection when it encounters an amqp:// scheme. SSL/TLS in RabbitMQ is documented here: http://www.rabbitmq.com/ssl.html This exports a Session object that wraps this library. It automatically reconnects when the connection fails, and blocks all pushes until the connection succeeds. It also confirms every outgoing message, so none are lost. It doesn't automatically ack each message, but leaves that to the parent process, since it is usage-dependent.
Try running this in one terminal, and `rabbitmq-server` in another. Stop & restart RabbitMQ to see how the queue reacts.
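A minimal declare-then-publish sketch following the pattern described above (assuming a local broker at amqp://guest:guest@localhost:5672/ and the github.com/streadway/amqp import path):

```go
package main

import (
	"log"

	"github.com/streadway/amqp"
)

func main() {
	// Dial, open a channel, and declare the topology we rely on
	// before publishing (or consuming), as recommended above.
	conn, err := amqp.Dial("amqp://guest:guest@localhost:5672/")
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	ch, err := conn.Channel()
	if err != nil {
		log.Fatal(err)
	}
	defer ch.Close()

	// QueueDeclare(name, durable, autoDelete, exclusive, noWait, args)
	q, err := ch.QueueDeclare("hello", false, false, false, false, nil)
	if err != nil {
		log.Fatal(err)
	}

	// Publish to the default exchange, routed by queue name. Publish
	// is asynchronous; its error only reflects IO failures.
	err = ch.Publish("", q.Name, false, false, amqp.Publishing{
		ContentType: "text/plain",
		Body:        []byte("hello"),
	})
	if err != nil {
		log.Fatal(err)
	}
}
```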
Package queue provides a lock-free queue and a two-lock concurrent queue which use the algorithm proposed by Michael and Scott (https://doi.org/10.1145/248052.248106). See the pseudocode at https://www.cs.rochester.edu/research/synchronization/pseudocode/queues.html. It will be refactored after Go generics are released.
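The two-lock variant of the Michael & Scott algorithm uses a dummy head node and separate head/tail mutexes so an enqueuer and a dequeuer never contend with each other. A stdlib sketch of the idea (not this package's implementation):

```go
package main

import (
	"fmt"
	"sync"
)

// node is a singly linked list node; the queue always keeps a dummy
// node at the head, as in the Michael & Scott two-lock algorithm.
type node struct {
	value int
	next  *node
}

// TwoLockQueue guards head and tail with separate mutexes, so
// Enqueue and Dequeue can proceed concurrently.
type TwoLockQueue struct {
	head, tail         *node
	headLock, tailLock sync.Mutex
}

func NewTwoLockQueue() *TwoLockQueue {
	dummy := &node{}
	return &TwoLockQueue{head: dummy, tail: dummy}
}

func (q *TwoLockQueue) Enqueue(v int) {
	n := &node{value: v}
	q.tailLock.Lock()
	q.tail.next = n
	q.tail = n
	q.tailLock.Unlock()
}

func (q *TwoLockQueue) Dequeue() (int, bool) {
	q.headLock.Lock()
	defer q.headLock.Unlock()
	first := q.head.next
	if first == nil {
		return 0, false // queue is empty
	}
	q.head = first // the old dummy node becomes garbage
	return first.value, true
}

func main() {
	q := NewTwoLockQueue()
	q.Enqueue(1)
	q.Enqueue(2)
	v, _ := q.Dequeue()
	fmt.Println(v) // 1
}
```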
Package bigqueue provides an embedded, fast and persistent queue written in pure Go using memory-mapped files. bigqueue is currently not thread safe. To use bigqueue in a parallel context, a write lock needs to be acquired (even for read APIs). Create or open a bigqueue: bigqueue persists the data of the queue in multiple Arenas. Each Arena is a file on disk that is mapped into memory (RAM) using the mmap syscall. The default size of each Arena is set to 128MB. It is possible to create a bigqueue with a custom Arena size: bigqueue also allows setting the maximum possible memory that it can use. By default, the maximum memory is set to [3 x Arena Size]. In this case, bigqueue will never allocate more memory than `4KB*10=40KB`. This memory is above and beyond the memory used in buffers for copying data. bigqueue allows setting up a periodic flush based on either elapsed time or the number of mutate (enqueue/dequeue) operations. Flush syncs the in-memory changes of all memory-mapped files with disk. *This is a best effort flush*. Elapsed time and the number of mutate operations are only checked upon an enqueue/dequeue. This is how we can set these options: In this case, a flush is done after every two mutate operations. In this case, a flush is done after one minute elapses and an Enqueue/Dequeue is called. Write to bigqueue: bigqueue allows writing string data directly, avoiding conversion to `[]byte`: Read from bigqueue: we can also read string data from bigqueue: Check whether bigqueue has non-zero elements: bigqueue allows reading data using consumers, similar to Kafka. This allows multiple consumers to read data at different offsets (not in a thread safe manner yet). The offsets of each consumer are persisted on disk and can be retrieved by creating a consumer with the same name. Data will be read from the same offset where it was left off. We can create a new consumer as follows.
The offsets of a new consumer are set at the start of the queue wherever the first non-deleted element is. We can also copy an existing consumer. This will create a consumer that will have the same offsets into the queue as that of the existing consumer. Now, read operations can be performed on the consumer:
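Putting the pieces together, a hedged sketch of basic usage, assuming the github.com/grandecola/bigqueue API (NewMmapQueue, SetArenaSize, EnqueueString, DequeueString); the queue directory is a placeholder:

```go
package main

import (
	"fmt"
	"log"

	"github.com/grandecola/bigqueue"
)

func main() {
	// Create (or open) a queue backed by memory-mapped arena files,
	// here with a custom arena size of 4KB.
	bq, err := bigqueue.NewMmapQueue("/tmp/bq", bigqueue.SetArenaSize(4*1024))
	if err != nil {
		log.Fatal(err)
	}
	defer bq.Close()

	// Write string data directly, avoiding a []byte conversion.
	if err := bq.EnqueueString("hello"); err != nil {
		log.Fatal(err)
	}

	// Read it back if the queue has elements.
	if !bq.IsEmpty() {
		elem, err := bq.DequeueString()
		if err != nil {
			log.Fatal(err)
		}
		fmt.Println(elem)
	}
}
```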
Package rabbitpubsub provides a pubsub implementation for RabbitMQ. Use OpenTopic to construct a *pubsub.Topic, and/or OpenSubscription to construct a *pubsub.Subscription. RabbitMQ follows the AMQP specification, which uses different terminology than the Go CDK Pub/Sub. A Pub/Sub topic is an AMQP exchange. The exchange kind should be "fanout" to match the Pub/Sub model, although publishing will work with any kind of exchange. A Pub/Sub subscription is an AMQP queue. The queue should be bound to the exchange that is the topic of the subscription. See the package example for details. For pubsub.OpenTopic and pubsub.OpenSubscription, rabbitpubsub registers for the scheme "rabbit". The default URL opener will connect to a default server based on the environment variable "RABBIT_SERVER_URL". To customize the URL opener, or for more details on the URL format, see URLOpener. See https://gocloud.dev/concepts/urls/ for background information. RabbitMQ supports at-least-once semantics; applications must call Message.Ack after processing a message, or it will be redelivered. See https://godoc.org/gocloud.dev/pubsub#hdr-At_most_once_and_At_least_once_Delivery for more background. rabbitpubsub exposes the following types for As:
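A sketch of constructing a topic from an existing AMQP connection (the exchange name "myexchange" and server address are placeholders; a fanout exchange with that name is assumed to have been declared):

```go
package main

import (
	"context"
	"log"

	"github.com/streadway/amqp"
	"gocloud.dev/pubsub/rabbitpubsub"
)

func main() {
	ctx := context.Background()

	conn, err := amqp.Dial("amqp://guest:guest@localhost:5672/")
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	// The topic corresponds to the AMQP exchange "myexchange".
	topic := rabbitpubsub.OpenTopic(conn, "myexchange", nil)
	defer topic.Shutdown(ctx)

	// Messages are then published via topic.Send(ctx, &pubsub.Message{...}).
}
```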
Example_libSQL demonstrates use of River's SQLite driver with libSQL (a SQLite fork). Example_sqlite demonstrates use of River's SQLite driver.
Package redisqueue provides a producer and consumer of a queue that uses Redis streams (https://redis.io/topics/streams-intro). The features of this package include: Here's an example of a producer that inserts 1000 messages into a queue: And here's an example of a consumer that reads the messages off of that queue:
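The producer and consumer described above can be sketched roughly as follows, assuming the github.com/robinjoseph08/redisqueue import path and a Redis server on the default address; the stream name "example" is a placeholder:

```go
package main

import (
	"fmt"
	"log"

	"github.com/robinjoseph08/redisqueue"
)

func main() {
	// Producer: insert 1000 messages into the "example" stream.
	p, err := redisqueue.NewProducer()
	if err != nil {
		log.Fatal(err)
	}
	for i := 0; i < 1000; i++ {
		err := p.Enqueue(&redisqueue.Message{
			Stream: "example",
			Values: map[string]interface{}{"index": i},
		})
		if err != nil {
			log.Fatal(err)
		}
	}

	// Consumer: register a handler for the stream and start reading.
	c, err := redisqueue.NewConsumer()
	if err != nil {
		log.Fatal(err)
	}
	c.Register("example", func(msg *redisqueue.Message) error {
		fmt.Println("received", msg.Values["index"])
		return nil // nil acks the message
	})
	c.Run() // blocks while consuming
}
```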
Package sqs provides the API client, operations, and parameter types for Amazon Simple Queue Service. Welcome to the Amazon SQS API Reference. Amazon SQS is a reliable, highly scalable hosted queue for storing messages as they travel between applications or microservices. Amazon SQS moves data between distributed application components and helps you decouple these components. For information on the permissions you need to use this API, see Identity and access management in the Amazon SQS Developer Guide. You can use Amazon Web Services SDKs to access Amazon SQS using your favorite programming language. The SDKs perform tasks such as the following automatically:

  - Cryptographically sign your service requests
  - Retry requests
  - Handle error responses

For more information, see the following resources:

  - Amazon SQS Product Page
  - Making API Requests
  - Amazon SQS Message Attributes
  - Amazon SQS Dead-Letter Queues
  - Amazon SQS in the Command Line Interface
  - Regions and Endpoints
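A minimal SendMessage sketch with the AWS SDK for Go v2 (the queue URL is a placeholder; credentials and region are assumed to come from the environment or shared config):

```go
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/config"
	"github.com/aws/aws-sdk-go-v2/service/sqs"
)

func main() {
	ctx := context.Background()

	// Load credentials and region; the SDK handles request signing
	// and retries automatically.
	cfg, err := config.LoadDefaultConfig(ctx)
	if err != nil {
		log.Fatal(err)
	}
	client := sqs.NewFromConfig(cfg)

	// Send a message to an existing queue.
	out, err := client.SendMessage(ctx, &sqs.SendMessageInput{
		QueueUrl:    aws.String("https://sqs.us-east-1.amazonaws.com/123456789012/my-queue"),
		MessageBody: aws.String("hello"),
	})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("sent message", *out.MessageId)
}
```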
Package zap provides fast, structured, leveled logging. For applications that log in the hot path, reflection-based serialization and string formatting are prohibitively expensive - they're CPU-intensive and make many small allocations. Put differently, using json.Marshal and fmt.Fprintf to log tons of interface{} makes your application slow. Zap takes a different approach. It includes a reflection-free, zero-allocation JSON encoder, and the base Logger strives to avoid serialization overhead and allocations wherever possible. By building the high-level SugaredLogger on that foundation, zap lets users choose when they need to count every allocation and when they'd prefer a more familiar, loosely typed API. In contexts where performance is nice, but not critical, use the SugaredLogger. It's 4-10x faster than other structured logging packages and supports both structured and printf-style logging. Like log15 and go-kit, the SugaredLogger's structured logging APIs are loosely typed and accept a variadic number of key-value pairs. (For more advanced use cases, they also accept strongly typed fields - see the SugaredLogger.With documentation for details.) By default, loggers are unbuffered. However, since zap's low-level APIs allow buffering, calling Sync before letting your process exit is a good habit. In the rare contexts where every microsecond and every allocation matter, use the Logger. It's even faster than the SugaredLogger and allocates far less, but it only supports strongly-typed, structured logging. Choosing between the Logger and SugaredLogger doesn't need to be an application-wide decision: converting between the two is simple and inexpensive. The simplest way to build a Logger is to use zap's opinionated presets: NewExample, NewProduction, and NewDevelopment. These presets build a logger with a single function call: Presets are fine for small projects, but larger projects and organizations naturally require a bit more customization. 
For most users, zap's Config struct strikes the right balance between flexibility and convenience. See the package-level BasicConfiguration example for sample code. More unusual configurations (splitting output between files, sending logs to a message queue, etc.) are possible, but require direct use of go.uber.org/zap/zapcore. See the package-level AdvancedConfiguration example for sample code. The zap package itself is a relatively thin wrapper around the interfaces in go.uber.org/zap/zapcore. Extending zap to support a new encoding (e.g., BSON), a new log sink (e.g., Kafka), or something more exotic (perhaps an exception aggregation service, like Sentry or Rollbar) typically requires implementing the zapcore.Encoder, zapcore.WriteSyncer, or zapcore.Core interfaces. See the zapcore documentation for details. Similarly, package authors can use the high-performance Encoder and Core implementations in the zapcore package to build their own loggers. An FAQ covering everything from installation errors to design decisions is available at https://github.com/uber-go/zap/blob/master/FAQ.md.
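The Logger/SugaredLogger split described above looks like this in practice (a sketch using the NewProduction preset; field names and values are placeholders):

```go
package main

import "go.uber.org/zap"

func main() {
	// An opinionated production preset; Sync flushes any buffered
	// entries before the process exits.
	logger, _ := zap.NewProduction()
	defer logger.Sync()

	// The base Logger takes strongly typed, allocation-free fields.
	logger.Info("fetched url",
		zap.String("url", "https://example.com"),
		zap.Int("attempt", 3),
	)

	// The SugaredLogger trades some performance for a loosely typed,
	// printf-style API; converting between the two is cheap.
	sugar := logger.Sugar()
	sugar.Infow("fetched url", "url", "https://example.com", "attempt", 3)
	sugar.Infof("fetched %s", "https://example.com")
}
```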
Package pubsublite provides an easy way to publish and receive messages using the Pub/Sub Lite service. Google Pub/Sub services are designed to provide reliable, many-to-many, asynchronous messaging between applications. Publisher applications can send messages to a topic and other applications can subscribe to that topic to receive the messages. By decoupling senders and receivers, Google Pub/Sub allows developers to communicate between independently written applications. Compared to Cloud Pub/Sub, Pub/Sub Lite provides partitioned data storage with predefined throughput and storage capacity. Guidance on how to choose between Cloud Pub/Sub and Pub/Sub Lite is available at https://cloud.google.com/pubsub/docs/choosing-pubsub-or-lite. More information about Pub/Sub Lite is available at https://cloud.google.com/pubsub/lite. See https://pkg.go.dev/cloud.google.com/go for authentication, timeouts, connection pooling and similar aspects of this package. Examples can be found at https://pkg.go.dev/cloud.google.com/go/pubsublite#pkg-examples and https://pkg.go.dev/cloud.google.com/go/pubsublite/pscompat#pkg-examples. Complete sample programs can be found at https://github.com/GoogleCloudPlatform/golang-samples/tree/master/pubsublite. The cloud.google.com/go/pubsublite/pscompat subpackage contains clients for publishing and receiving messages, which have similar interfaces to their pubsub.Topic and pubsub.Subscription counterparts in cloud.google.com/go/pubsub. The following examples demonstrate how to declare common interfaces: https://pkg.go.dev/cloud.google.com/go/pubsublite/pscompat#example-NewPublisherClient-Interface and https://pkg.go.dev/cloud.google.com/go/pubsublite/pscompat#example-NewSubscriberClient-Interface. The following imports are required for code snippets below: Messages are published to topics. Pub/Sub Lite topics may be created like so: Close must be called to release resources when an AdminClient is no longer required. 
See https://cloud.google.com/pubsub/lite/docs/topics for more information about how Pub/Sub Lite topics are configured. See https://cloud.google.com/pubsub/lite/docs/locations for the list of locations where Pub/Sub Lite is available. Pub/Sub Lite uses gRPC streams extensively for high throughput. For more differences, see https://pkg.go.dev/cloud.google.com/go/pubsublite/pscompat. To publish messages to a topic, first create a PublisherClient: Then call Publish: Publish queues the message for publishing and returns immediately. When enough messages have accumulated, or enough time has elapsed, the batch of messages is sent to the Pub/Sub Lite service. Thresholds for batching can be configured in PublishSettings. Publish returns a PublishResult, which behaves like a future; its Get method blocks until the message has been sent (or has failed to be sent) to the service: Once you've finished publishing all messages, call Stop to flush all messages to the service and close gRPC streams. The PublisherClient can no longer be used after it has been stopped or has terminated due to a permanent error. PublisherClients are expected to be long-lived and used for the duration of the application, rather than for publishing small batches of messages. Stop must be called to release resources when a PublisherClient is no longer required. See https://cloud.google.com/pubsub/lite/docs/publishing for more information about publishing. To receive messages published to a topic, create a subscription to the topic. There may be more than one subscription per topic; each message that is published to the topic will be delivered to all of its subscriptions. Pub/Sub Lite subscriptions may be created like so: See https://cloud.google.com/pubsub/lite/docs/subscriptions for more information about how subscriptions are configured. To receive messages for a subscription, first create a SubscriberClient: Messages are then consumed from a subscription via callback.
The callback may be invoked concurrently by multiple goroutines (one per partition that the subscriber client is connected to). Receive blocks until either the context is canceled or a permanent error occurs. To terminate a call to Receive, cancel its context: Clients must call pubsub.Message.Ack() or pubsub.Message.Nack() for every message received. Pub/Sub Lite does not have ACK deadlines. Pub/Sub Lite also does not actually have the concept of NACK. The default behavior terminates the SubscriberClient. In Pub/Sub Lite, only a single subscriber for a given subscription is connected to any partition at a time, and there is no other client that may be able to handle messages. See https://cloud.google.com/pubsub/lite/docs/subscribing for more information about receiving messages. Pub/Sub Lite utilizes gRPC streams extensively. gRPC allows a maximum of 100 streams per connection. Internally, the library uses a default connection pool size of 8, which supports up to 800 topic partitions. To alter the connection pool size, pass a ClientOption to pscompat.NewPublisherClient and pscompat.NewSubscriberClient:
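The publish and receive flows described above can be sketched as follows (resource paths are placeholders; the topic and subscription are assumed to exist):

```go
package main

import (
	"context"
	"fmt"
	"log"

	"cloud.google.com/go/pubsub"
	"cloud.google.com/go/pubsublite/pscompat"
)

func main() {
	ctx := context.Background()
	const topic = "projects/my-project/locations/us-central1-a/topics/my-topic"
	const subscription = "projects/my-project/locations/us-central1-a/subscriptions/my-sub"

	// Publish: the returned PublishResult behaves like a future.
	publisher, err := pscompat.NewPublisherClient(ctx, topic)
	if err != nil {
		log.Fatal(err)
	}
	result := publisher.Publish(ctx, &pubsub.Message{Data: []byte("payload")})
	id, err := result.Get(ctx) // blocks until sent (or failed)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("published", id)
	publisher.Stop() // flush messages and close gRPC streams

	// Receive: the callback may run concurrently, one goroutine per
	// partition; every message must be Ack'ed. Cancel the context to
	// terminate Receive.
	subscriber, err := pscompat.NewSubscriberClient(ctx, subscription)
	if err != nil {
		log.Fatal(err)
	}
	cctx, cancel := context.WithCancel(ctx)
	defer cancel()
	err = subscriber.Receive(cctx, func(ctx context.Context, m *pubsub.Message) {
		fmt.Printf("got: %s\n", m.Data)
		m.Ack()
	})
	if err != nil {
		log.Fatal(err)
	}
}
```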
Package asynq provides a framework for a Redis-based distributed task queue. Asynq uses Redis as a message broker. To connect to Redis, specify the connection using one of the RedisConnOpt types. The Client is used to enqueue a task. The Server is used to run the task processing workers with a given handler. Handler is an interface type with a method which takes a task and returns an error. Handler should return nil if the processing is successful, otherwise return a non-nil error. If the handler panics or returns a non-nil error, the task will be retried in the future. Example of a type that implements the Handler interface.
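A minimal sketch of such a handler plus the client/server setup (assuming the github.com/hibiken/asynq import path; the task type name and payload are placeholders):

```go
package main

import (
	"context"
	"log"

	"github.com/hibiken/asynq"
)

// EmailProcessor implements the asynq.Handler interface.
type EmailProcessor struct{}

func (p *EmailProcessor) ProcessTask(ctx context.Context, t *asynq.Task) error {
	// Return nil on success; a non-nil error (or a panic) causes the
	// task to be retried in the future.
	log.Printf("processing %s: %s", t.Type(), t.Payload())
	return nil
}

func main() {
	redis := asynq.RedisClientOpt{Addr: "localhost:6379"}

	// The Client enqueues tasks...
	client := asynq.NewClient(redis)
	defer client.Close()
	if _, err := client.Enqueue(asynq.NewTask("email:welcome", []byte(`{"user":42}`))); err != nil {
		log.Fatal(err)
	}

	// ...and the Server runs the worker processes with a handler.
	srv := asynq.NewServer(redis, asynq.Config{Concurrency: 10})
	if err := srv.Run(&EmailProcessor{}); err != nil {
		log.Fatal(err)
	}
}
```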
Package workerpool queues work to a limited number of goroutines. The purpose of the worker pool is to limit the concurrency of tasks executed by the workers. This is useful when performing tasks that consume significant resources (CPU, memory, etc.), where running too many tasks at the same time would exhaust those resources. A task is a function submitted to the worker pool for execution. Submitting tasks to this worker pool will not block, regardless of the number of tasks. Incoming tasks are immediately dispatched to an available worker. If no worker is immediately available, or there are already tasks waiting for an available worker, then the task is put on a waiting queue to wait for an available worker. The intent of the worker pool is to limit the concurrency of task execution, not to limit the number of tasks queued to be executed. Therefore, this unbounded input of tasks is acceptable as the tasks cannot be discarded. If the number of inbound tasks is too many to even queue for pending processing, then the solution is outside the scope of workerpool. It should be solved by distributing load over multiple systems, and/or storing input for pending processing in intermediate storage such as a database, file system, distributed message queue, etc. This worker pool uses a single dispatcher goroutine to read tasks from the input task queue and dispatch them to worker goroutines. This allows for a small input channel, and lets the dispatcher queue as many tasks as are submitted when there are no available workers. Additionally, the dispatcher can adjust the number of workers as appropriate for the work load, without having to utilize locked counters and checks incurred on task submission. When no tasks have been submitted for a period of time, a worker is removed by the dispatcher. This is done until there are no more workers to remove. The minimum number of workers is always zero, because the time to start new workers is insignificant.
It is advisable to use different worker pools for tasks that are bound by different resources, or that have different resource use patterns. For example, tasks that use X Mb of memory may need different concurrency limits than tasks that use Y Mb of memory. In this implementation, when there are no available workers to handle incoming tasks, the tasks are put on a waiting queue. In the implementations mentioned in the credits below, these tasks were instead passed to goroutines. Using a queue is faster and has less memory overhead than creating a separate goroutine for each waiting task, allowing a much higher number of waiting tasks. Also, using a waiting queue ensures that tasks are given to workers in the order the tasks were received. This implementation builds on ideas from the following: http://marcio.io/2015/07/handling-1-million-requests-per-minute-with-golang http://nesv.github.io/golang/2014/02/25/worker-queues-in-go.html
Package queue provides multiple thread-safe generic queue implementations. Currently, there are four available implementations: A blocking queue, which provides methods that wait for the queue to have available elements when attempting to retrieve an element, and waits for a free slot when attempting to insert an element. A priority queue based on a container.Heap. The elements in the queue must implement the Lesser interface, and are ordered based on the Less method. The head of the queue is always the highest priority element. A circular queue, which is a queue that uses a fixed-size slice as if it were connected end-to-end. When the queue is full, adding a new element to the queue overwrites the oldest element. A linked queue, implemented as a singly linked list, offering O(1) time complexity for enqueue and dequeue operations. The queue maintains pointers to both the head (front) and tail (end) of the list for efficient operations without the need for traversal.
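The circular-queue behavior, overwriting the oldest element when full, can be sketched with generics in the standard library. This is an illustration of the semantics, not the package's implementation, and it is not thread-safe as written:

```go
package main

import "fmt"

// Circular is a fixed-size ring buffer: when full, Offer overwrites
// the oldest element instead of failing.
type Circular[T any] struct {
	buf        []T
	head, size int
}

func NewCircular[T any](capacity int) *Circular[T] {
	return &Circular[T]{buf: make([]T, capacity)}
}

func (q *Circular[T]) Offer(v T) {
	tail := (q.head + q.size) % len(q.buf)
	q.buf[tail] = v
	if q.size == len(q.buf) {
		q.head = (q.head + 1) % len(q.buf) // overwrite oldest
	} else {
		q.size++
	}
}

func (q *Circular[T]) Get() (T, bool) {
	var zero T
	if q.size == 0 {
		return zero, false
	}
	v := q.buf[q.head]
	q.head = (q.head + 1) % len(q.buf)
	q.size--
	return v, true
}

func main() {
	q := NewCircular[int](3)
	for _, v := range []int{1, 2, 3, 4} { // 4 overwrites 1
		q.Offer(v)
	}
	v, _ := q.Get()
	fmt.Println(v) // 2
}
```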
Package aprot is a Go library for building type-safe real-time APIs with automatic TypeScript client generation. It supports both WebSocket and SSE+HTTP transports. aprot follows a define-register-serve-generate workflow: The generated client includes standalone functions, React hooks (optional), typed error checking, enum const objects, and push event handlers, all derived from your Go source code. For details on the generated TypeScript API (hook behavior, cancellation, loading states), browse the committed example output at example/react/client/src/api/ and example/vanilla/client/static/api/ in the repository. Handler methods are ordinary Go methods on a struct. They must accept context.Context as the first parameter and return either error (void) or (T, error): Parameters are positional: each Go parameter becomes a separate argument in the TypeScript client. Parameter names are extracted from Go source via AST parsing, so the names you choose in Go are the names your TypeScript client uses. Handlers may return iter.Seq or iter.Seq2 instead of a single value to stream results incrementally. Each yielded element is delivered to the client as a separate wire message, so UIs can render rows as they arrive instead of waiting for the full response: The generator emits an AsyncIterable on the TypeScript side, so clients consume streams with a `for await` loop. Cancellation is bidirectional: breaking out of the loop (or calling the hook's cancel function) cancels the handler's context immediately, and the next `yield` returns false. Streaming is WebSocket/SSE only. Registry.RegisterREST and Registry.EnableREST panic at registration time for streaming handlers because REST cannot deliver multi-message responses over a single HTTP request. See OnStreamComplete in the Middleware section for observing the real end of a stream from logging / metrics middleware. Request struct fields can be normalized before handler dispatch using "transform" struct tags.
Transforms run after JSON decoding and before struct validation, so validator rules see the cleaned value: Supported ops (applied in the order listed in the tag): Ops apply to string, *string (nil-safe), and []string fields, and the walker recurses into nested structs, *struct, and []struct so nested tags are picked up automatically. There is no registry opt-in: a field is transformed if and only if it carries a "transform" tag. Every "transform" tag reachable from a handler's param types is statically checked at registration time via ValidateTransformTags. Unknown ops, "removeempty" on a non-[]string field, or a "transform" tag on an unsupported field type (int, bool, time.Time, ...) cause Registry.Register to panic when the server boots, rather than turning every request into a CodeInvalidParams response at runtime. ApplyTransforms is also exposed so the same walker can be invoked on ad-hoc values outside the handler flow. Request struct fields can declare validation rules via the "validate" struct tag, using the vocabulary from github.com/go-playground/validator. Validation is opt-in per registry: nothing happens until Registry.SetValidator is called with a StructValidator. The supplied NewPlaygroundValidator wraps the go-playground implementation and produces a structured error payload that flows through to the generated TypeScript client: Validation runs after ApplyTransforms inside HandlerInfo.Call, so rules like "required,min=1" observe the already-normalized value. Failures are returned as a ProtocolError with CodeValidationFailed and a []FieldError payload describing every rule that failed, which the generated TypeScript client exposes via its ApiError type. A Registry collects handler groups, push events, enums, and custom errors for both server dispatch and code generation: Each Registry.Register call creates a handler group with its own middleware chain and a corresponding TypeScript file. Middleware wraps handlers to add cross-cutting behavior.
It follows the standard func(next) -> func pattern: Server-level middleware applies to all handlers. Per-handler middleware applies only to the handlers registered in the same Registry.Register call. Execution order: server middleware (outer) -> handler middleware (inner) -> handler. Middleware sees a streaming handler "return" as soon as its iter.Seq value is constructed, before any items have been yielded, so a naive `time.Since(start)` measurement logs 0ms for every stream. Call OnStreamComplete from middleware to register a callback that fires when the stream has actually terminated (via exhaustion, handler panic, or client cancellation), with the terminal error and the number of items delivered: Calling OnStreamComplete on a unary-handler context is a no-op, so the same middleware can log both streaming and unary handlers without branching on handler kind. A Server handles WebSocket upgrades, SSE streams, and HTTP POST dispatch. Mount it directly for WebSocket, or use Server.HTTPTransport for SSE+HTTP: Both transports can run simultaneously and share connection tracking: Server.Broadcast, Server.PushToUser, and Server.ConnectionCount work across all connections regardless of transport. Handlers can additionally be exposed over REST/HTTP alongside (or instead of) WebSocket. Use Registry.RegisterREST for REST-only handlers, or Registry.EnableREST to mark an existing WebSocket handler for REST as well. NewRESTAdapter returns an http.Handler that serves every REST-exposed handler in the registry: The HTTP method and path are derived from the handler method name by convention (e.g. CreateUser -> POST /users/create-user), and path parameters are mapped from the Go parameter list. Streaming handlers cannot be exposed via REST and will panic at registration; use WebSocket or SSE for those. Use ServerOptions to configure client reconnection behavior. The server sends this configuration to clients on connect; TypeScript clients apply it automatically.
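The func(next) -> func pattern in general terms, as a stdlib sketch with stand-in types (aprot's real handler and middleware signatures differ):

```go
package main

import (
	"fmt"
	"time"
)

// Handler and Middleware are illustrative types for the standard
// func(next) -> func wrapping pattern.
type Handler func(req string) (string, error)
type Middleware func(next Handler) Handler

// Logging wraps a handler with timing. For a streaming handler, a
// duration measured like this would only cover constructing the
// iterator, which is why a completion callback (such as
// OnStreamComplete above) is needed to observe the real end.
func Logging(next Handler) Handler {
	return func(req string) (string, error) {
		start := time.Now()
		resp, err := next(req)
		fmt.Printf("handled %q in %s (err=%v)\n", req, time.Since(start), err)
		return resp, err
	}
}

// Chain applies middleware so the first one listed is outermost.
func Chain(h Handler, mw ...Middleware) Handler {
	for i := len(mw) - 1; i >= 0; i-- {
		h = mw[i](h)
	}
	return h
}

func main() {
	echo := func(req string) (string, error) { return "echo:" + req, nil }
	h := Chain(echo, Logging)
	resp, _ := h("ping")
	fmt.Println(resp) // echo:ping
}
```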
Server.OnConnect and Server.OnDisconnect hooks react to connection events. OnConnect hooks can reject connections by returning an error: Each Conn has a unique ID, HTTP request info captured at connection time (via Conn.Info), and key-value storage (via Conn.Set, Conn.Get, Conn.Load) for caching authentication state or other per-connection data. Conn.SetUserID / Conn.UserID is a routing identity used for push targeting (Server.PushToUser). It is not a security boundary; use the stored principal for authorization decisions. Push events are server-to-client messages broadcast to all connected clients or targeted to specific users: Push event types must be registered with Registry.RegisterPushEventFor. The event name on the wire is derived from the Go type name. Subscription refresh automatically pushes updated query results to clients when related data changes. Query handlers declare trigger keys with RegisterRefreshTrigger, and mutation handlers fire them with TriggerRefresh: Multiple TriggerRefresh calls within a single request are batched and deduplicated. TriggerRefreshNow flushes the queue immediately; use it in long-running handlers that make observable state transitions over time. From background goroutines, cron jobs, webhook fan-in, or any other code path that runs outside of a request handler, use the Server.TriggerRefresh method instead; it flushes immediately and does not require a request context: RegisterRefreshTrigger takes variadic strings that form a composite key. It is a no-op when called from a non-subscribe request. The package-level TriggerRefresh is a no-op outside a request context. Subscriptions are cleaned up automatically on client disconnect. Return ProtocolError values from handlers to send structured errors to clients. Built-in helpers cover common cases: Register Go errors with Registry.RegisterError for automatic conversion.
The generated TypeScript client includes typed error checking: Register Go enum types with Registry.RegisterEnumFor or Registry.RegisterEnum to generate TypeScript const objects with full type safety. String-based enums derive names by capitalizing values; int-based enums use the String() method: Struct fields with enum types generate TypeScript fields typed as the enum union (not raw string/number). Generator reads a Registry and emits TypeScript client code. It supports two output modes: OutputVanilla (standalone functions + subscribe helpers) and OutputReact (adds React hooks with auto-refetch and mutation state): The generator creates split files: client.ts (base client), one file per handler group, and optional shared type files for types used across groups. Use NamingPlugin to customize TypeScript name conventions. Setting [GeneratorOptions.Zod] emits a companion `.schema.ts` file for every handler group whose request types carry "validate" tags. The resulting Zod schemas mirror the server-side validation rules field for field, so the TypeScript client can reject bad input before it hits the wire, using the same constraints the server will enforce on arrival: For REST-exposed handlers, NewOpenAPIGenerator produces an OpenAPI 3.0 document describing every REST endpoint in the registry. Go doc comments on handler methods become `summary` / `description`, struct and field doc comments flow into JSON Schema descriptions, and "validate" tags become JSON Schema constraints: Use OpenAPIGenerator.WithBasePath when the API is mounted behind a proxy or at a non-root path. Several functions extract request-scoped values from context: Server.Stop rejects new connections (503), sends close frames, waits for in-flight requests to complete, and runs disconnect hooks. It is safe to call multiple times: Go types are mapped to TypeScript during generation: Messages are JSON objects with a "type" field. Client-to-server: request, cancel, subscribe, unsubscribe.
Server-to-client: response, error, progress, push, config, connected (SSE only). This example shows how to set up a server with both WebSocket and SSE transports running simultaneously.
Package filiq provides a lightweight, dependency-free, high-performance in-memory worker pool. It supports both FIFO (Queue) and LIFO (Stack) processing modes with optional bounded buffering, utilizing sync.Cond for efficient signaling and optimization strategies like head-index tracking and lazy compaction to minimize memory allocations.
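A minimal sketch of the sync.Cond-based bounded FIFO with head-index tracking, in the spirit of the design described above (lazy compaction omitted; types and names here are illustrative, not filiq's API):

```go
package main

import (
	"fmt"
	"sync"
)

// queue is a bounded FIFO using sync.Cond for signaling. Instead of
// reslicing on every pop, it advances a head index.
type queue struct {
	mu    sync.Mutex
	cond  *sync.Cond
	items []int
	head  int
	cap   int
}

func newQueue(capacity int) *queue {
	q := &queue{cap: capacity}
	q.cond = sync.NewCond(&q.mu)
	return q
}

func (q *queue) Push(v int) {
	q.mu.Lock()
	for len(q.items)-q.head >= q.cap {
		q.cond.Wait() // buffer full: block until a Pop makes room
	}
	q.items = append(q.items, v)
	q.cond.Broadcast()
	q.mu.Unlock()
}

func (q *queue) Pop() int {
	q.mu.Lock()
	for q.head == len(q.items) {
		q.cond.Wait() // empty: block until a Push arrives
	}
	v := q.items[q.head]
	q.head++ // advance head index instead of reslicing
	q.cond.Broadcast()
	q.mu.Unlock()
	return v
}

func main() {
	q := newQueue(2)
	go func() {
		for i := 1; i <= 3; i++ {
			q.Push(i)
		}
	}()
	fmt.Println(q.Pop(), q.Pop(), q.Pop()) // 1 2 3
}
```

A production version would periodically compact the backing slice once the head index grows large; that is the lazy-compaction optimization the package mentions.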
Package pgmq provides a Go client for Postgres Message Queue (PGMQ) v1.10.0+. It supports PostgreSQL 16, 17, and 18 with full coverage of the PGMQ SQL API including queue management, message sending/reading, metrics, and LISTEN/NOTIFY. The client works with any pgx-compatible connection type: *pgxpool.Pool, *pgx.Conn, or pgx.Tx.
Package lfq provides bounded FIFO queue implementations. The package offers multiple queue variants optimized for different producer/consumer patterns: Direct constructors (recommended for most cases): Builder API auto-selects algorithm based on constraints: All queues share the same interface for enqueueing and dequeueing: Pipeline Stage (SPSC): Event Aggregation (MPSC): Work Distribution (SPMC): Worker Pool (MPMC): Three queue flavors are available for different use cases: When to use Indirect: When to use Ptr: The builder selects algorithms based on constraints and the Compact() hint: Default (FAA-based, 2n slots for capacity n): With Compact() (CAS-based, n slots for capacity n): FAA (Fetch-And-Add) scales better under high contention but requires 2n physical slots. Use Compact() when memory efficiency is critical. Type-safe builder functions enforce constraints at compile time: Compact() selects CAS-based algorithms with n physical slots (vs 2n for the FAA-based default). Use when memory efficiency is more important than contention scalability: SPSC variants already use n slots (Lamport ring buffer) and ignore Compact(). Queues return ErrWouldBlock when operations cannot proceed. This error is sourced from code.hybscloud.com/iox for ecosystem consistency. For semantic error classification (delegates to iox): Capacity rounds up to the next power of 2: Minimum capacity is 2 (already a power of 2). Constructors panic if capacity < 2. FAA-based queues (MPSC, SPMC, MPMC) use a best-effort pre-check before the atomic slot claim. Under high concurrency, up to P-1 additional items (P = number of concurrent producers) may be transiently enqueued beyond Cap(). The 2n physical slot buffer accommodates this safely. Compact (CAS-based) variants enforce a strict capacity bound. Length is intentionally not provided because accurate counts in lock-free algorithms require expensive cross-core synchronization. Track counts in application logic when needed.
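The capacity rounding described above can be sketched as follows; roundCapacity is an illustrative name, not a function this package exports:

```go
package main

import (
	"fmt"
	"math/bits"
)

// roundCapacity rounds up to the next power of 2. Callers must pass
// at least 2, matching the documented minimum capacity.
func roundCapacity(n int) int {
	if n < 2 {
		panic("capacity must be >= 2")
	}
	if n&(n-1) == 0 {
		return n // already a power of 2
	}
	return 1 << bits.Len(uint(n))
}

func main() {
	fmt.Println(roundCapacity(2), roundCapacity(5), roundCapacity(64)) // 2 8 64
}
```

Power-of-2 capacities let the ring buffer replace modulo arithmetic with a cheap bitmask when mapping sequence numbers to slots.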
All queue operations are thread-safe within their access pattern constraints: Violating these constraints (e.g., multiple producers on SPSC) causes undefined behavior including data corruption and races. FAA-based queues (MPMC, SPMC, MPSC) include a threshold mechanism to prevent livelock. This mechanism may cause Dequeue to return ErrWouldBlock even when items remain, waiting for producer activity to reset the threshold. For graceful shutdown scenarios where producers have finished but consumers need to drain remaining items, use the Drainer interface: After Drain is called, Dequeue skips threshold checks, allowing consumers to fully drain the queue. Drain is a hint; the caller must ensure no further Enqueue calls will be made. SPSC queues do not implement Drainer as they have no threshold mechanism. The type assertion naturally handles this case. Go's race detector is not designed for lock-free algorithm verification. The race detector tracks explicit synchronization primitives (mutex, channels, WaitGroup) but cannot observe happens-before relationships established through atomic memory orderings (acquire-release semantics). Lock-free queues use sequence numbers with acquire-release semantics to protect non-atomic data fields. These algorithms are correct, but the race detector may report false positives because it cannot track synchronization provided by atomic operations on separate variables. For lock-free algorithm correctness verification, use: Tests incompatible with race detection are excluded via //go:build !race. This package uses code.hybscloud.com/iox for semantic errors, code.hybscloud.com/atomix for atomic primitives with explicit memory ordering, and code.hybscloud.com/spin for CPU pause instructions. Example_backpressure demonstrates handling backpressure with a full queue. Example_batchProcessing demonstrates collecting items into batches. Example_bufferPool demonstrates using Indirect queue for buffer pool management.
Example_compactMode demonstrates compact mode for memory-constrained scenarios. Example_pipeline demonstrates a multi-stage pipeline using SPSC queues. Example_workerPool demonstrates a worker pool pattern using MPMC.
Package gokyu provides a cloud-agnostic message queue abstraction using AMQP 1.0 protocol for communication with various cloud providers. gokyu allows you to write messaging code once and run it against different cloud message brokers (Azure Service Bus, Amazon MQ) without changing your application code. This reduces vendor lock-in and makes it easy to switch providers or run in multi-cloud environments. Import the package and at least one provider: Create a client and start publishing/subscribing: To switch from Azure to Amazon MQ, simply change the Provider and ConnectionString in your configuration. Your business logic remains unchanged.
Package peer provides a common base for creating and managing Decred network peers. This package builds upon the wire package, which provides the fundamental primitives necessary to speak the Decred wire protocol, in order to simplify the process of creating fully functional peers. In essence, it provides a common base for creating concurrent safe fully validating nodes, Simplified Payment Verification (SPV) nodes, proxies, etc. A quick overview of the major features peer provides is as follows: All peer configuration is handled with the Config struct. This allows the caller to specify things such as the user agent name and version, the decred network to use, which services it supports, and callbacks to invoke when decred messages are received. See the documentation for each field of the Config struct for more details. A peer can either be inbound or outbound. The caller is responsible for establishing the connection to remote peers and listening for incoming peers. This provides high flexibility for things such as connecting via proxies, acting as a proxy, creating bridge peers, choosing whether to listen for inbound peers, etc. The NewOutboundPeer and NewInboundPeer functions must be followed by calling Connect with a net.Conn instance to the peer. This will start all async I/O goroutines and initiate the protocol negotiation process. Once finished with the peer, call Disconnect to disconnect from the peer and clean up all resources. WaitForDisconnect can be used to block until peer disconnection and resource cleanup has completed. In order to do anything useful with a peer, it is necessary to react to decred messages. This is accomplished by creating an instance of the MessageListeners struct with the desired callbacks specified and assigning it to the Listeners field of the Config struct used when creating the peer.
For convenience, a callback hook for all of the currently supported decred messages is exposed which receives the peer instance and the concrete message type. In addition, a hook for OnRead is provided, so even custom message types for which this package does not directly provide a hook can be used, as long as they implement the wire.Message interface. Finally, the OnWrite hook is provided, which in conjunction with OnRead, can be used to track server-wide byte counts. It is often useful to use closures which encapsulate state when specifying the callback handlers. This provides a clean method for accessing that state when callbacks are invoked. The QueueMessage function provides the fundamental means to send messages to the remote peer. As the name implies, this employs a non-blocking queue. A done channel which will be notified when the message is actually sent can optionally be specified. There are certain message types which are better sent using other functions which provide additional functionality. Of special interest are inventory messages. Rather than manually sending MsgInv messages via QueueMessage, the inventory vectors should be queued using the QueueInventory function. It employs batching and trickling along with intelligent known remote peer inventory detection and avoidance through the use of a most-recently used algorithm. In addition to the bare QueueMessage function previously described, the PushAddrMsg, PushGetBlocksMsg, and PushGetHeadersMsg functions are provided as a convenience. While it is of course possible to create and send these messages manually via QueueMessage, these helper functions provide additional useful functionality that is typically desired. For example, the PushAddrMsg function automatically limits the addresses to the maximum number allowed by the message and randomizes the chosen addresses when there are too many.
This allows the caller to simply provide a slice of known addresses, such as that returned by the addrmgr package, without having to worry about the details. Finally, the PushGetBlocksMsg and PushGetHeadersMsg functions will construct proper messages using a block locator and ignore back-to-back duplicate requests. A snapshot of the current peer statistics can be obtained with the StatsSnapshot function. This includes statistics such as the total number of bytes read and written, the remote address, user agent, and negotiated protocol version. This package provides extensive logging capabilities through the UseLogger function which allows a slog.Logger to be specified. For example, logging at the debug level provides summaries of every message sent and received, and logging at the trace level provides full dumps of parsed messages as well as the raw message bytes using a format similar to hexdump -C. This package supports all improvement proposals supported by the wire package. This example demonstrates the basic process for initializing and creating an outbound peer. Peers negotiate by exchanging version and verack messages. For demonstration, a simple handler for the version message is attached to the peer.
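The limit-and-randomize behavior described for PushAddrMsg can be sketched as follows. limitAddrs is an illustrative helper, not this package's code, and the address strings are placeholders:

```go
package main

import (
	"fmt"
	"math/rand"
)

// limitAddrs returns the input unchanged when it fits, and otherwise
// a random subset of at most max addresses, so no fixed prefix of a
// large address list is favored.
func limitAddrs(addrs []string, max int) []string {
	if len(addrs) <= max {
		return addrs
	}
	shuffled := append([]string(nil), addrs...) // don't mutate the caller's slice
	rand.Shuffle(len(shuffled), func(i, j int) {
		shuffled[i], shuffled[j] = shuffled[j], shuffled[i]
	})
	return shuffled[:max]
}

func main() {
	addrs := []string{"a:9108", "b:9108", "c:9108", "d:9108"}
	fmt.Println(len(limitAddrs(addrs, 2))) // 2
}
```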
Package dlq provides dead-letter queue management and reprocessing capabilities. Dead-letter queues store messages that failed processing after all retries. The DLQ pattern is essential for handling message processing failures. The package provides:
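The DLQ pattern described above, retrying a failed message until its attempts are exhausted and then dead-lettering it, can be sketched as follows. The message type, in-memory queues, and exhausted helper are illustrative, not this package's API:

```go
package main

import "fmt"

// message carries a delivery attempt count.
type message struct {
	Body     string
	Attempts int
}

const maxRetries = 3

// exhausted records a failed attempt and reports whether the message
// should now be dead-lettered instead of requeued.
func exhausted(m *message) bool {
	m.Attempts++
	return m.Attempts >= maxRetries
}

func main() {
	inbox := []message{{Body: "order-1"}}
	var dlq []message
	for len(inbox) > 0 {
		m := inbox[0]
		inbox = inbox[1:]
		// Simulate a processing failure on every attempt.
		if exhausted(&m) {
			dlq = append(dlq, m) // retries used up: dead-letter it
		} else {
			inbox = append(inbox, m) // requeue for another try
		}
	}
	fmt.Println(len(dlq), dlq[0].Attempts) // 1 3
}
```

Keeping the attempt count on the message itself is what makes later reprocessing from the DLQ possible without losing failure history.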
Package gpucontext provides shared GPU infrastructure for the gogpu ecosystem. This package defines interfaces and utilities used across multiple gogpu projects to enable GPU resource sharing without circular dependencies: This package follows the wgpu ecosystem pattern where shared types are separated from implementation (cf. wgpu-types in Rust). The key insight is that GPU context (device + queue + related state) is a universal concept across Vulkan, CUDA, OpenGL, and WebGPU. By defining a minimal interface here, different packages can share GPU resources without depending on each other. Reference: https://github.com/gogpu/gpucontext
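The shared-interface idea described above can be sketched as follows: consumers depend on a small interface rather than on each other's concrete GPU types. The Context method set and fakeContext shown here are assumptions for illustration, not this package's actual definitions:

```go
package main

import "fmt"

// Context is a minimal sketch of a shared GPU context interface:
// a device plus a queue, independent of the backing API.
type Context interface {
	Device() any
	Queue() any
}

// describe consumes any Context without importing its provider,
// which is how circular dependencies are avoided.
func describe(ctx Context) string {
	return fmt.Sprintf("device=%v queue=%v", ctx.Device(), ctx.Queue())
}

// fakeContext stands in for a real backend (Vulkan, WebGPU, ...).
type fakeContext struct{ dev, q string }

func (c fakeContext) Device() any { return c.dev }
func (c fakeContext) Queue() any  { return c.q }

func main() {
	fmt.Println(describe(fakeContext{dev: "gpu0", q: "q0"}))
}
```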
Package qwr provides serialised writes and concurrent reads for SQLite databases. qwr (Query Write Reader) uses a worker pool pattern with a single writer to sequentially queue writes to SQLite while allowing concurrent read operations through a configurable connection pool. qwr provides several write modes: Example writes: Reads use the connection pool and can be executed concurrently: qwr supports SQLite's ATTACH DATABASE for working with multiple database files through a single manager. Attached databases share the main connection pool and write serialiser, enabling cross-database queries and atomic transactions across databases. Attach databases at construction time via the builder: Or at runtime via the manager: Queries reference attached databases using the schema-qualified syntax: Always use schema-qualified table names for attached databases (e.g. analytics.events, not just events). Unqualified names resolve to the main database. Bare :memory: paths are rejected because each pooled connection would get its own isolated in-memory database. Use file::memory:?cache=shared for a shared in-memory attached database. Do not use [Query.Prepared] for schema-qualified queries before the schema is attached - the preparation will fail on every call until the schema exists. For parallel writes to independent databases, use separate Manager instances rather than ATTACH. Attached databases share a single serialised writer, which is correct for cross-database transactions but does not offer write parallelism. Attach is not supported with NewSQL because qwr cannot control connection creation for user-provided database handles. qwr emits structured events for all significant operations. Subscribe to receive events for logging, metrics, tracing, or alerting: Use filters to receive only specific event types: Database profiles configure connection pools and SQLite PRAGMA settings. 
Pre-configured profiles are available in the profile subpackage: Async operations capture errors in an error queue with automatic retry support for transient failures. Errors are classified by type to determine appropriate retry strategies. qwr uses modernc.org/sqlite by default. Use NewSQL to provide your own database connections with a different driver. Note that NewSQL does not support Attach - manage ATTACH statements on your own connections.
Package lasr implements a persistent message queue backed by BoltDB. This queue is useful when the producers and consumers can live in the same process. Goals: * Data integrity over performance. * Simplicity over complexity. * Ease of use. * Minimal feature set. lasr is designed to never lose information. When the Send method completes, messages have been safely written to disk. On Receive, messages are not deleted until Ack is called. Users should make sure they always respond to messages with Ack or Nack. Dead-lettering is supported, but disabled by default.
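The Receive/Ack/Nack contract described above can be sketched with an in-memory queue: a received message stays pending until Ack removes it, and Nack requeues it for redelivery. This is an illustration of the contract, not lasr's actual API, and it omits persistence entirely:

```go
package main

import "fmt"

type msg struct {
	ID   int
	Body string
}

// inMemQueue holds ready messages plus a pending set of messages that
// have been received but not yet acknowledged.
type inMemQueue struct {
	ready   []msg
	pending map[int]msg
}

func newInMemQueue() *inMemQueue {
	return &inMemQueue{pending: map[int]msg{}}
}

func (q *inMemQueue) Send(m msg) { q.ready = append(q.ready, m) }

func (q *inMemQueue) Receive() (msg, bool) {
	if len(q.ready) == 0 {
		return msg{}, false
	}
	m := q.ready[0]
	q.ready = q.ready[1:]
	q.pending[m.ID] = m // not deleted until Ack
	return m, true
}

func (q *inMemQueue) Ack(id int) { delete(q.pending, id) }

func (q *inMemQueue) Nack(id int) {
	if m, ok := q.pending[id]; ok {
		delete(q.pending, id)
		q.ready = append(q.ready, m) // redeliver later
	}
}

func main() {
	q := newInMemQueue()
	q.Send(msg{ID: 1, Body: "hello"})
	m, _ := q.Receive()
	q.Nack(m.ID) // back to the ready queue
	m, _ = q.Receive()
	q.Ack(m.ID) // now gone for good
	fmt.Println(len(q.ready), len(q.pending)) // 0 0
}
```

A message that is never acknowledged stays pending, which is why the doc stresses always responding with Ack or Nack.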