Package gocent is a Go language client for Centrifugo real-time messaging server HTTP API.
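As a quick illustration, here is a minimal publish call. This is a hedged sketch assuming the v3 module path (github.com/centrifugal/gocent/v3) and a Publish method of this shape; check the package docs before relying on it.

    package main

    import (
        "context"
        "log"

        gocent "github.com/centrifugal/gocent/v3"
    )

    func main() {
        // Point the client at Centrifugo's server HTTP API (address and key are placeholders).
        c := gocent.New(gocent.Config{
            Addr: "http://localhost:8000/api",
            Key:  "<CENTRIFUGO_API_KEY>",
        })

        // Publish a JSON payload into a channel.
        if _, err := c.Publish(context.Background(), "chat", []byte(`{"text":"hello"}`)); err != nil {
            log.Fatalf("publish failed: %v", err)
        }
    }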
RCat is a lightweight tool for real-time file concatenation. Usage:
Package unifrost is a Go module for relaying pubsub messages to the web using SSE (EventSource). It is based on Twitter's implementation of real-time event streaming in their new web app. For supported brokers and examples, check https://github.com/unifrost/unifrost/tree/master/examples/
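unifrost's own streamer API is not quoted above, so the following is only a generic sketch of the SSE relay pattern it implements, using nothing but the standard library; the channel stands in for a broker subscription.

    package main

    import (
        "fmt"
        "net/http"
        "time"
    )

    // sseHandler relays messages from a subscription channel to the browser
    // over Server-Sent Events.
    func sseHandler(events <-chan string) http.HandlerFunc {
        return func(w http.ResponseWriter, r *http.Request) {
            w.Header().Set("Content-Type", "text/event-stream")
            w.Header().Set("Cache-Control", "no-cache")
            flusher, ok := w.(http.Flusher)
            if !ok {
                http.Error(w, "streaming unsupported", http.StatusInternalServerError)
                return
            }
            for {
                select {
                case <-r.Context().Done():
                    return // client disconnected
                case msg := <-events:
                    fmt.Fprintf(w, "data: %s\n\n", msg)
                    flusher.Flush()
                }
            }
        }
    }

    func main() {
        events := make(chan string)
        go func() {
            for {
                events <- time.Now().Format(time.RFC3339) // demo publisher
                time.Sleep(time.Second)
            }
        }()
        http.ListenAndServe(":8080", sseHandler(events))
    }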
Package socket provides a Socket.IO client implementation in Go. It enables real-time, bidirectional event-based communication between web clients and servers. Example usage:
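The concrete API is not shown above, so this is a purely hypothetical sketch of event-based usage; Dial, On, Emit, and Close are placeholder names, not confirmed exports of this package.

    // Hypothetical sketch; all method names are placeholders.
    client, err := socket.Dial("http://localhost:3000")
    if err != nil {
        log.Fatal(err)
    }
    defer client.Close()

    // React to events pushed by the server.
    client.On("message", func(data []byte) {
        log.Printf("got: %s", data)
    })

    // Send an event to the server.
    client.Emit("message", []byte("hello"))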
Package engine implements a client-side Engine.IO transport layer. It provides real-time bidirectional communication between clients and servers using various transport mechanisms including WebSocket, HTTP long-polling, and WebTransport. The package supports automatic transport upgrade, binary data transmission, and reconnection handling. It is designed to be the foundation for higher-level protocols like Socket.IO.
Package serial provides a minimal, Linux-only serial port reader designed for high-frequency unbuffered communication with embedded devices. This package is optimized for real-time use cases such as scientific instrumentation (e.g., seismometers), where data arrives at high frequency (e.g., 200 Hz) and must be read as soon as newline-delimited lines are available. Note that this package does **not** support Windows. Example usage:
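The package's exported API is not quoted above, so as a generic stand-in, this is roughly how newline-delimited serial data can be read on Linux with the standard library alone; the device path is a placeholder, and port settings (baud rate, etc.) are assumed to have been configured beforehand (e.g., with stty).

    package main

    import (
        "bufio"
        "fmt"
        "log"
        "os"
    )

    func main() {
        // /dev/ttyUSB0 is a placeholder device node.
        port, err := os.OpenFile("/dev/ttyUSB0", os.O_RDONLY, 0)
        if err != nil {
            log.Fatal(err)
        }
        defer port.Close()

        scanner := bufio.NewScanner(port)
        for scanner.Scan() {
            fmt.Println(scanner.Text()) // one newline-delimited sample per iteration
        }
        if err := scanner.Err(); err != nil {
            log.Fatal(err)
        }
    }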
Package exchange-connector provides a unified interface for interacting with cryptocurrency exchanges. The library offers a consistent API that abstracts away exchange-specific implementation details, allowing applications to work with multiple exchange platforms through a standardized interface.

Core Features: the library is built around the ExchangeConnector interface, which defines the methods for interacting with exchanges, including a REST API for historical data and WebSocket connections for real-time streaming.

The library defines standardized errors to provide consistent error handling across different exchange implementations:

  - ErrNotConnected: returned when an operation is attempted on a connector that hasn't been connected yet or has lost its connection
  - ErrInvalidSymbol: returned when an invalid trading pair symbol is provided
  - ErrInvalidInterval: returned when an unsupported time interval is provided
  - ErrInvalidTimeRange: returned when an invalid time range is provided (e.g., end time before start time)
  - ErrRateLimitExceeded: returned when the exchange rate limit is exceeded
  - ErrAuthenticationRequired: returned when attempting an operation that requires authentication without providing credentials
  - ErrInvalidCredentials: returned when the provided API credentials are invalid
  - ErrSubscriptionFailed: returned when a WebSocket subscription cannot be established
  - ErrSubscriptionNotFound: returned when trying to unsubscribe from a non-existent subscription
  - ErrExchangeUnavailable: returned when the exchange API is unavailable

Additionally, the library provides a MarketError type for market-specific error conditions, which can be created using NewMarketError(symbol, message, err).

Basic usage covers credential setup (either via method chaining or directly), fetching historical candle data (including daily candles for longer-term analysis), and subscribing to real-time candle updates and order book changes; a hedged error-handling sketch follows this entry. The library provides concrete implementations for various exchanges while maintaining a consistent interface, enabling applications to easily switch between or support multiple exchanges simultaneously.
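The sentinel errors above compose naturally with errors.Is. In the sketch below, conn and GetCandles are hypothetical names; only the error identifiers come from the documentation.

    // Hypothetical call shape; only the sentinel errors are documented.
    candles, err := conn.GetCandles(ctx, "BTC/USDT", "1h", start, end)
    if err != nil {
        switch {
        case errors.Is(err, ErrNotConnected):
            // reconnect and retry
        case errors.Is(err, ErrInvalidSymbol):
            log.Println("unknown trading pair")
        case errors.Is(err, ErrRateLimitExceeded):
            time.Sleep(time.Second) // back off before retrying
        default:
            log.Printf("exchange error: %v", err)
        }
    }
    _ = candles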
Package hnapi provides a Go SDK for interacting with the Hacker News API. This package offers a complete interface to the Hacker News API (powered by Firebase) including support for stories, comments, jobs, Ask HNs, Show HNs, polls, and user profiles. It also includes a real-time update mechanism that streams changes using Go channels. The package supports retrieving items (stories, comments, etc.), user profiles, various lists (top stories, new stories, etc.), and provides helper functions for batch retrieval and real-time updates. ExampleClient demonstrates the basic usage of the Hacker News API client. This example doesn't run automatically because it makes real API calls.
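In the spirit of that example, a fetch of the current top stories might look like the following; every identifier here (NewClient, TopStories, GetItem, Title) is a guess at the client shape, not confirmed API.

    // Hypothetical sketch; constructor and method names are placeholders.
    client := hnapi.NewClient()
    ids, err := client.TopStories(context.Background())
    if err != nil {
        log.Fatal(err)
    }
    for _, id := range ids[:10] {
        item, err := client.GetItem(context.Background(), id)
        if err != nil {
            continue
        }
        fmt.Println(item.Title)
    }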
Package api provides a Go client for interacting with the Todoist API v1.

Package todoist implements a user-friendly client for the Todoist API, offering a simplified interface for interacting with various resources. It supports using a custom handler to manage sync tokens and handle resource operations, such as storing API responses.

Package ws implements a WebSocket client for receiving real-time sync notifications from the Todoist server. It is useful for enabling automatic background synchronization of tasks and updates.
Package sns provides the client and types for making API requests to Amazon Simple Notification Service.

Amazon Simple Notification Service (Amazon SNS) is a web service that enables you to build distributed web-enabled applications. Applications can use Amazon SNS to easily push real-time notification messages to interested subscribers over multiple delivery protocols. For more information about this product, see http://aws.amazon.com/sns/. For detailed information about Amazon SNS features and their associated API calls, see the Amazon SNS Developer Guide (http://docs.aws.amazon.com/sns/latest/dg/).

We also provide SDKs that enable you to access Amazon SNS from your preferred programming language. The SDKs contain functionality that automatically takes care of tasks such as cryptographically signing your service requests, retrying requests, and handling error responses. For a list of available SDKs, go to Tools for Amazon Web Services (http://aws.amazon.com/tools/).

See https://docs.aws.amazon.com/goto/WebAPI/sns-2010-03-31 for more information on this service. See the sns package documentation for more information: https://docs.aws.amazon.com/sdk-for-go/api/service/sns/

To use Amazon Simple Notification Service with the SDK, call the New function to create a new service client; with that client you can make API requests to the service. These clients are safe to use concurrently. See the SDK's documentation for more information on how to use the SDK: https://docs.aws.amazon.com/sdk-for-go/api/ See the aws.Config documentation for more information on configuring SDK clients: https://docs.aws.amazon.com/sdk-for-go/api/aws/#Config See the Amazon Simple Notification Service client SNS for more information on creating a client for this service: https://docs.aws.amazon.com/sdk-for-go/api/service/sns/#New
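For instance, publishing a notification with the v1 SDK looks like this; the region and topic ARN are placeholders.

    package main

    import (
        "fmt"
        "log"

        "github.com/aws/aws-sdk-go/aws"
        "github.com/aws/aws-sdk-go/aws/session"
        "github.com/aws/aws-sdk-go/service/sns"
    )

    func main() {
        sess := session.Must(session.NewSession(&aws.Config{
            Region: aws.String("us-east-1"), // placeholder region
        }))
        svc := sns.New(sess)

        out, err := svc.Publish(&sns.PublishInput{
            TopicArn: aws.String("arn:aws:sns:us-east-1:123456789012:my-topic"), // placeholder ARN
            Message:  aws.String("deployment finished"),
        })
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println("message id:", aws.StringValue(out.MessageId))
    }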
Package dspy is a Go implementation of the DSPy framework for using language models to solve complex tasks through composable steps and prompting techniques. DSPy-Go provides a collection of modules, optimizers, and tools for building reliable LLM-powered applications.

Key Components:

Core: fundamental abstractions such as Module, Signature, LLM, and Program for defining and executing LLM-based workflows.

Modules: building blocks for composing LLM workflows:

  - Predict: basic prediction module for simple LLM interactions
  - ChainOfThought: implements step-by-step reasoning with rationale tracking
  - ReAct: implements Reasoning and Acting with tool integration
  - Refine: quality improvement through multiple attempts with reward functions and temperature variation
  - Parallel: concurrent execution wrapper for batch processing with any module
  - MultiChainComparison: compares multiple reasoning attempts and synthesizes a holistic evaluation

Optimizers: tools for improving prompt effectiveness:

  - BootstrapFewShot: automatically selects high-quality examples for few-shot learning
  - MIPRO: multi-step interactive prompt optimization
  - Copro: collaborative prompt optimization
  - SIMBA: Stochastic Introspective Mini-Batch Ascent with self-analysis
  - GEPA: Generative Evolutionary Prompt Adaptation with multi-objective Pareto optimization, LLM-based self-reflection, semantic diversity metrics, and elite archive management
  - TPE: Tree-structured Parzen Estimator for Bayesian optimization

Agents: advanced patterns for building sophisticated AI systems:

  - Memory: different memory implementations for tracking conversation history
  - Tools: integration with external tools and APIs, including a Smart Tool Registry (intelligent tool selection using Bayesian inference), performance tracking (real-time metrics and reliability scoring), auto-discovery (dynamic tool registration from MCP servers), MCP (Model Context Protocol) support for seamless integrations, tool chaining (sequential execution of tools in pipelines with data transformation), tool composition (combining multiple tools into reusable composite units), parallel execution (advanced parallel tool execution with intelligent scheduling), and dependency resolution (automatic execution planning based on tool dependencies)

Workflows:

  - Chain: sequential execution of steps
  - Parallel: concurrent execution of multiple workflow steps
  - Router: dynamic routing based on classification
  - Advanced patterns: ForEach, While, and Until loops with conditional execution
  - Orchestrator: flexible task decomposition and execution

Integration with multiple LLM providers: Anthropic Claude, Google Gemini (with multimodal support), OpenAI (with flexible configuration for compatible APIs), Ollama, and LlamaCPP.

Multimodal capabilities:

  - Image Analysis: analyze and describe images with natural language
  - Vision Question Answering: ask specific questions about visual content
  - Multimodal Chat: interactive conversations with images
  - Streaming Multimodal: real-time processing of multimodal content
  - Multiple Image Analysis: compare and analyze multiple images simultaneously
  - Content Block System: flexible handling of text, image, and future audio content

The repository ships Simple, OpenAI-Compatible API, Multimodal, and GEPA Optimizer examples; a hedged sketch follows this entry.

Advanced Features:

  - Tracing and Logging: detailed tracing and structured logging for debugging and optimization; execution context is tracked and passed through the pipeline for debugging and analysis
  - Error Handling: comprehensive error management with custom error types and centralized handling
  - Metric-Based Optimization: improve module performance based on custom evaluation metrics
  - Smart Tool Management: intelligent tool selection, performance tracking, auto-discovery, chaining, and composition for building complex tool workflows
  - Custom Tool Integration: extend ReAct modules with domain-specific tools
  - Workflow Retry Logic: resilient execution with configurable retry mechanisms and backoff strategies
  - Streaming Support: process LLM outputs incrementally as they're generated, with text streaming for regular LLM interactions and multimodal streaming for image analysis and vision tasks
  - Data Storage: integration with various storage backends for persistence of examples and results
  - Dataset Management: built-in support for downloading and managing datasets like GSM8K and HotPotQA
  - Arrow Support: integration with Apache Arrow for efficient data handling and processing

Further doc examples cover working with the Smart Tool Registry, tool chaining and composition, multimodal streaming, and workflows. For more examples and detailed documentation, visit https://github.com/XiaoConstantine/dspy-go. DSPy-Go is released under the MIT License.
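Since the doc's own examples are not reproduced above, here is a heavily hedged sketch of what a Predict-based flow could look like; every identifier (core.NewSignature, modules.NewPredict, Process, and their arguments) is an assumption about the API, so consult the repository's examples for the real shapes.

    // Hypothetical sketch; all names below are assumptions.
    sig := core.NewSignature("question -> answer") // declare inputs/outputs
    predict := modules.NewPredict(sig)             // basic prediction module

    out, err := predict.Process(ctx, map[string]any{
        "question": "What is 2 + 2?",
    })
    if err != nil {
        log.Fatal(err)
    }
    log.Println(out["answer"])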
Package godnsbl lets you perform RBL (Real-time Blackhole List, https://en.wikipedia.org/wiki/DNSBL) lookups using Go. JSON annotations on the types are provided as a convenience.
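A lookup is a single call; this sketch assumes a Lookup(rblHost, ip) signature returning a struct of per-list results, which may differ from the actual package.

    // Assumed signature and result fields.
    results := godnsbl.Lookup("zen.spamhaus.org", "127.0.0.2")
    for _, r := range results.Results {
        if r.Listed {
            fmt.Printf("%s is listed: %s\n", r.Address, r.Text)
        }
    }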
Package partnercentralselling provides the API client, operations, and parameter types for the Partner Central Selling API.

This Amazon Web Services (AWS) Partner Central API reference is designed to help AWS Partners integrate Customer Relationship Management (CRM) systems with AWS Partner Central. Partners can automate interactions with AWS Partner Central, which helps to ensure effective engagements in joint business activities. The API provides standard AWS API functionality. Access it by either using API Actions or by using an AWS SDK that's tailored to your programming language or platform. For more information, see Getting Started with AWS and Tools to Build on AWS.

Features offered by the AWS Partner Central API:

  - Opportunity management: manages coselling opportunities through API actions such as CreateOpportunity, UpdateOpportunity, ListOpportunities, GetOpportunity, and AssignOpportunity.
  - AWS referral management: manages referrals shared by AWS using actions such as ListEngagementInvitations, GetEngagementInvitation, StartEngagementByAcceptingInvitation, and RejectEngagementInvitation.
  - Entity association: associates related entities such as AWS Products, Partner Solutions, and AWS Marketplace Private Offers with opportunities using the actions AssociateOpportunity and DisassociateOpportunity.
  - View AWS opportunity details: retrieves real-time summaries of AWS opportunities using the GetAWSOpportunitySummary action.
  - List solutions: provides list APIs for listing partner offers using ListSolutions.
  - Event subscription: subscribe to real-time opportunity updates through AWS EventBridge via events such as Opportunity Created, Opportunity Updated, Engagement Invitation Accepted, Engagement Invitation Rejected, and Engagement Invitation Created.
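With the AWS SDK for Go v2, listing opportunities would follow the usual NewFromConfig pattern. The catalog value is a placeholder, and the input/output field names below are assumptions to be checked against the generated types.

    cfg, err := config.LoadDefaultConfig(context.TODO())
    if err != nil {
        log.Fatal(err)
    }
    client := partnercentralselling.NewFromConfig(cfg)

    // "AWS" is a placeholder catalog; field names are assumptions.
    out, err := client.ListOpportunities(context.TODO(), &partnercentralselling.ListOpportunitiesInput{
        Catalog: aws.String("AWS"),
    })
    if err != nil {
        log.Fatal(err)
    }
    for _, s := range out.OpportunitySummaries {
        fmt.Println(aws.ToString(s.Id))
    }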
Package bugsnag captures errors in real-time and reports them to BugSnag (http://bugsnag.com). Using bugsnag-go is a three-step process.

1. As early as possible in your program, configure the notifier with your APIKey. This sets up handling of panics that would otherwise crash your app.

2. Add bugsnag to places that already catch panics. For example, you should add it to the HTTP server when you call ListenAndServe. If that's not possible, you can also wrap each HTTP handler manually.

3. To notify BugSnag of an error that is not a panic, pass it to bugsnag.Notify. This will also log the error message using the configured Logger.

For detailed integration instructions see https://docs.bugsnag.com/platforms/go.

The only required configuration is the BugSnag API key, which can be obtained by clicking "Project Settings" at the top of your BugSnag dashboard after signing up. We also recommend you set the ReleaseStage, AppType, and AppVersion if these make sense for your deployment workflow.

If you need to attach extra data to BugSnag events, you can do that using the rawData mechanism. Most of the functions that send errors to BugSnag allow you to pass in any number of interface{} values as rawData. The rawData can consist of the Severity, Context, User or MetaData types listed below, and there is also builtin support for *http.Requests. If you want to add custom tabs to your BugSnag dashboard you can pass any value in as rawData, and then process it into the event's metadata using a bugsnag.OnBeforeNotify() hook. If necessary you can pass Configuration in as rawData, or modify the Configuration object passed into OnBeforeNotify hooks. Configuration passed in this way only affects the current notification.
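Concretely, the three steps look roughly like this; the API key is a placeholder, and bugsnag.Handler(nil) wraps http.DefaultServeMux.

    package main

    import (
        "fmt"
        "net/http"

        "github.com/bugsnag/bugsnag-go/v2"
    )

    func main() {
        // Step 1: configure as early as possible.
        bugsnag.Configure(bugsnag.Configuration{
            APIKey:       "YOUR_API_KEY", // placeholder
            ReleaseStage: "production",
        })

        http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
            fmt.Fprintln(w, "ok")
        })

        // Step 2: wrap the server so panics in handlers are reported.
        http.ListenAndServe(":8080", bugsnag.Handler(nil))
    }

    // Step 3, anywhere an error surfaces that is not a panic:
    //
    //	if err := doWork(); err != nil {
    //		bugsnag.Notify(err)
    //	}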
Package srtp implements the Secure Real-time Transport Protocol (SRTP).
Package influxdb is the root package of InfluxDB, the scalable datastore for metrics, events, and real-time analytics. If you're looking for the Go HTTP client for InfluxDB, see package github.com/influxdata/influxdb/client/v2.
Package esalert is a simple framework for real-time alerts on data in Elasticsearch.
Package lunk provides a set of tools for structured logging in the style of Google's Dapper or Twitter's Zipkin.

When we consider a complex event in a distributed system, we're actually considering a partially-ordered tree of events from various services, libraries, and modules. Consider a user-initiated web request. Their browser sends an HTTP request to an edge server, which extracts the credentials (e.g., an OAuth token) and authenticates the request by communicating with an internal authentication service, which returns a signed set of internal credentials (e.g., a signed user ID). The edge web server then proxies the request to a cluster of web servers, each running a PHP application. The PHP application loads some data from several databases, places the user in a number of treatment groups for running A/B experiments, writes some data to a Dynamo-style distributed database, and returns an HTML response. The edge server receives this response and proxies it to the user's browser.

This scenario involves a number of infrastructure-specific events, as well as a number of events which have little to do with the infrastructure but are still critical information for the business the system supports. There are a number of different teams all trying to monitor and improve aspects of this system. Operational staff need to know if a particular host or service is experiencing a latency spike or drop in throughput. Development staff need to know if their application's response times have gone down as a result of a recent deploy. Customer support staff need to know if the system is operating nominally as a whole, and for customers in particular. Product designers and managers need to know the effect of an A/B test on user behavior. The fact that these teams will be consuming the data in different ways for different purposes does mean that they will be working with different systems; in order to instrument the various components of the system, then, we need a common data model.

We adopt Dapper's notion of a tree to mean a partially-ordered tree of events from a distributed system. A tree in Lunk is identified by its root ID, which is the unique ID of its root event. All events in a common tree share a root ID. In our example, we would assign a unique root ID as soon as the edge server received the request. Events inside a tree are causally ordered: each event has a unique ID and an optional parent ID. By passing the IDs across systems, we establish causal ordering between events. In our example, the two database queries from the app would share the same parent ID: the ID of the event corresponding to the app handling the request which caused those queries. Each event has a schema of properties, which allow us to record specific pieces of information about each event. For HTTP requests, we can record the method, the request URI, the elapsed time to handle the request, etc.

Lunk is agnostic in terms of aggregation technologies, but two use cases seem clear: real-time process monitoring and offline causational analysis. For real-time process monitoring, events can be streamed to an aggregation service like Riemann (http://riemann.io) or Storm (http://storm.incubator.apache.org), which can calculate process statistics (e.g., the 95th percentile latency for the edge server responses) in real time.
This allows for adaptive monitoring of all services, with the option of including example root IDs in the alerts (e.g., 95th percentile latency is over 300ms, mostly as a result of requests like those in tree XXXXX). For offline causational analysis, events can be written in batches to batch processing systems like Hadoop or OLAP databases like Vertica. These aggregates can be queried to answer questions traditionally reserved for A/B testing systems: "Did users who were shown the new navbar view more photos?" "Did the new image optimization algorithm we enabled for 1% of views run faster? Did it produce smaller images? Did it have any effect on user engagement?" "Did any services have increased exception rates after any recent deploys?" And so on.

By capturing the root ID of a particular web request, we can assemble a partially-ordered tree of events which were involved in the handling of that request. All events with a common root ID are in a common tree, which allows for O(M) retrieval for a tree of M events.

To send a request with a root ID and a parent ID, use the Event-ID HTTP header. The header value is simply the root ID and event ID, hex-encoded and separated with a slash; if the event has a parent ID, that may be included as an optional third parameter (see the example after this entry). A server that receives a request with this header can use this to properly parent its own events.

Each event has a set of named properties, the keys and values of which are strings. This allows aggregation layers to take advantage of simplifying assumptions and either store events in normalized form (with event data separate from property data) or in denormalized form (essentially pre-materializing an outer join of the normalized relations). Durations are always recorded as fractional milliseconds.

Lunk currently provides two formats for log entries: text and JSON. Text-based logs encode each entry as a single line of text, using key="value" formatting for all properties. Event property keys are scoped to avoid collisions. JSON logs encode each entry as a single JSON object.
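To make the Event-ID header concrete: a request carrying a root ID and event ID, and optionally a parent ID as well, would look like the following (the hex values are made up for illustration).

    Event-ID: 6f846195e9e67e0a/e67e0a6f846195e9
    Event-ID: 6f846195e9e67e0a/e67e0a6f846195e9/95e9e67e0a6f8461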
shortinette is the core framework for managing and automating the process of grading coding bootcamps (Shorts). It provides a comprehensive set of tools for running and testing student submissions across various programming languages. The shortinette package is composed of several sub-packages, each responsible for a specific aspect of the grading pipeline:

  - `logger`: handles logging for the framework, including general informational messages, error reporting, and trace logging for feedback on individual submissions. This package ensures that all important events and errors are captured for debugging and auditing purposes.
  - `requirements`: validates the necessary environment variables and dependencies required by the framework. This includes checking for essential configuration values in a `.env` file and ensuring that all necessary tools (e.g., Docker images) are available before grading begins.
  - `testutils`: provides utility functions for compiling and running code submissions. This includes functions for compiling Rust code, running executables with various options (such as timeouts and real-time output), and manipulating files. The utility functions are designed to handle the intricacies of running untrusted student code safely and efficiently.
  - `git`: manages interactions with GitHub, including cloning repositories, managing collaborators, and uploading files. This package abstracts the GitHub API to simplify common tasks such as adding collaborators to repositories, creating branches, and pushing code or data to specific locations in a repository.
  - `exercise`: defines the structure and behavior of individual coding exercises. This includes specifying the files that students are allowed to submit, the expected output, and the functions to be tested. The `exercise` package provides the framework for setting up exercises, running tests, and reporting results.
  - `module`: organizes exercises into modules, allowing for the grouping of related exercises into a coherent curriculum. The `module` package handles the execution of all exercises within a module, aggregates results, and manages the overall grading process.
  - `webhook`: enables automatic grading triggered by GitHub webhooks. This allows for a fully automated workflow where student submissions are graded as soon as they are pushed to a specific branch in a GitHub repository.
  - `short`: the central orchestrator of the grading process, integrating all sub-packages into a cohesive system. The `short` package handles the setup and teardown of grading environments, manages the execution of modules and exercises, and ensures that all results are properly recorded and reported.
Gor is a simple HTTP traffic replication tool written in Go. Its main goal is to replay traffic from production servers to staging and dev environments. Now you can test your code on real user sessions in an automated and repeatable fashion. Gor consists of two parts: listener and replay servers. The listener captures HTTP traffic on a given port in real time and sends it to the replay server via UDP. The replay server forwards traffic to a given address.