Package scheduler provides job scheduling and task execution capabilities for the modular framework. This module implements a flexible job scheduler that supports both immediate and scheduled job execution, configurable worker pools, job persistence, and comprehensive job lifecycle management. It's designed for reliable background task processing in web applications and services. The module registers a scheduler service for dependency injection, supporting basic job scheduling as well as jobs with custom options; a conceptual sketch follows.
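As a minimal illustration of the pattern the module implements (a configurable worker pool draining a job queue, with delayed jobs), here is a self-contained plain-Go sketch; it is conceptual only and does not use the module's actual API:

    package main

    import (
        "fmt"
        "sync"
        "time"
    )

    type Job func()

    func main() {
        jobs := make(chan Job, 16)
        var wg sync.WaitGroup
        for i := 0; i < 4; i++ { // configurable worker pool size
            wg.Add(1)
            go func() {
                defer wg.Done()
                for job := range jobs {
                    job() // execute queued work
                }
            }()
        }
        jobs <- func() { fmt.Println("immediate job") } // immediate execution
        time.AfterFunc(100*time.Millisecond, func() {   // scheduled execution
            jobs <- func() { fmt.Println("delayed job") }
        })
        time.Sleep(200 * time.Millisecond)
        close(jobs)
        wg.Wait()
    }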
Package eventbus provides a flexible event-driven messaging system for the modular framework. This module enables decoupled communication between application components through an event bus pattern. It supports both synchronous and asynchronous event processing, multiple event bus engines, and configurable event handling strategies. The module can be configured through the EventBusConfig structure and registers itself as a service for dependency injection, covering event publishing, event subscription patterns, and subscription management. The module supports different event processing patterns: **Synchronous Processing**: Events are processed immediately in the same goroutine that published them. Best for lightweight operations and when ordering is important. **Asynchronous Processing**: Events are queued and processed by worker goroutines. Best for heavy operations, external API calls, or when you don't want to block the publisher. Multiple engines are currently supported; a conceptual publish/subscribe sketch follows.
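A minimal plain-Go sketch of the topic-based publish/subscribe pattern described above; the module's real EventBus API may differ, and a production bus would add locking and worker goroutines for asynchronous dispatch:

    package main

    import "fmt"

    type Event struct {
        Topic   string
        Payload any
    }

    type Bus struct {
        subs map[string][]chan Event
    }

    func NewBus() *Bus { return &Bus{subs: map[string][]chan Event{}} }

    func (b *Bus) Subscribe(topic string) <-chan Event {
        ch := make(chan Event, 8) // buffered: async-style delivery
        b.subs[topic] = append(b.subs[topic], ch)
        return ch
    }

    func (b *Bus) Publish(e Event) {
        for _, ch := range b.subs[e.Topic] {
            ch <- e // deliver to every subscriber of the topic
        }
    }

    func main() {
        bus := NewBus()
        ch := bus.Subscribe("user.created")
        bus.Publish(Event{Topic: "user.created", Payload: "id=42"})
        fmt.Println(<-ch)
    }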
Package siris is a fully-featured HTTP/2 backend web framework written entirely in Google's Go Language. Source code and other details for the project are available at GitHub. The only requirement is the Go Programming Language, at least version 1.8. Example code: All HTTP methods are supported, and developers can also register handlers for the same path under different methods. The first parameter is the HTTP method, the second parameter is the request path of the route, and the third variadic parameter should contain one or more context.Handler values, executed in registered order when a user requests that specific resource path from the server. Example code: In order to make things easier for the user, Siris provides functions for all HTTP methods. The first parameter is the request path of the route, and the second variadic parameter should contain one or more context.Handler values, executed in registered order when a user requests that specific resource path from the server. Example code: A set of routes grouped by path prefix can (optionally) share the same middleware handlers and template layout. A group can have a nested group too. `.Party` is used to group routes; developers can declare an unlimited number of (nested) groups. Example code: Siris developers are able to register their own handlers for HTTP statuses like 404 not found, 500 internal server error, and so on. Example code: With the help of Siris's expressionist router you can build any form of API you desire, with safety. Example code: In the previous example, we saw static routes, groups of routes, subdomains, wildcard subdomains, a small example of a parameterized path with a single known parameter, and custom HTTP errors; now it's time to see wildcard parameters and macros. Siris, like the net/http std package, registers a route's handlers by a Handler; the Siris type of handler is just a func(ctx context.Context) where context comes from github.com/go-siris/siris/context. Until Go 1.9 you will have to import that package too; after Go 1.9 this will no longer be necessary. Siris has the easiest and the most powerful routing process you have ever met. At the same time, Siris has its own interpreter (yes, like a programming language) for route path syntax and dynamic path parameter parsing and evaluation; we call these "macros" for short. How? It calculates its needs and, if no special regexp is needed, it just registers the route with the low-level path syntax; otherwise it pre-compiles the regexp and adds the necessary middleware(s). Standard macro types for parameters: if the type is missing, then the parameter's type defaults to string, so {param} == {param:string}. If a function is not found for that type, then the string type's functions are used. i.e: Besides the fact that Siris provides the basic types and some default "macro funcs", you are able to register your own too. Register a named path parameter function: in the func(argument ...) you can have any standard type; it will be validated before the server starts, so don't worry about performance here. The only thing that runs at serve time is the returned func(paramValue string) bool. Example code: A path parameter name should contain only alphabetical letters; symbols such as '_' and numbers are NOT allowed. If a route fails to be registered, the app will panic without any warning if you didn't catch the second return value (error) on .Handle/.Get.... Last, do not confuse ctx.Values() with ctx.Params(). 
Path parameter values go to ctx.Params(), while the context's local storage, which can be used to communicate between handlers and middleware(s), goes to ctx.Values(); path parameters and the rest of any custom values are kept separate for your own good. Run Static Files Example code: More examples can be found here: https://github.com/go-siris/siris/tree/master/_examples/beginner/file-server Middleware is just a concept of an ordered chain of handlers. Middleware can be registered globally, per-party, per-subdomain and per-route. Example code: Siris is able to wrap and convert any external, third-party handler you used to use in your web application. Let's convert the https://github.com/rs/cors net/http external middleware, which returns a `next form` handler. Example code: Siris supports 5 template engines out of the box, and developers can still use any external Go template engine, as `context.ResponseWriter()` is an `io.Writer`. All five of these template engines have common features with a common API, like layout, template funcs, party-specific layout, partial rendering and more. Example code: The view engine supports bundled (https://github.com/jteeuwen/go-bindata) template files too. go-bindata gives you two functions, asset and assetNames; these can be set for each of the template engines using the `.Binary` func. Example code: A real example can be found here: https://github.com/go-siris/siris/tree/master/_examples/intermediate/view/embedding-templates-into-app. Enable auto-reloading of templates on each request; this is useful while developers are in dev mode as they don't need to restart their app on every template edit. Example code: Each one of these template engines has different options, located here: https://github.com/go-siris/siris/tree/master/view . This example will show how to store and access data from a session. You don't need any third-party library, but if you want you can use any session manager, compatible or not. In this example we will only allow authenticated users to view our secret message on the /secret page. To get access to it, they will first have to visit /login to get a valid session cookie, which logs them in. Additionally they can visit /logout to revoke their access to our secret message. Example code: Running the example: By now you should have a basic idea of the framework; we just scratched the surface. If you enjoy what you just saw and want to learn more, please follow the links below: Examples: Built-in Middleware: Community Middleware: Home Page:
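To make the routing description above concrete, here is a minimal sketch assuming the iris-style API this documentation describes (Get, Params, Run with siris.Addr); consult the _examples directory for the canonical form:

    package main

    import (
        "github.com/go-siris/siris"
        "github.com/go-siris/siris/context"
    )

    func main() {
        app := siris.New()
        // {name:string} is a typed path parameter (macro); :string is the default type.
        app.Get("/hello/{name:string}", func(ctx context.Context) {
            ctx.Writef("Hello %s", ctx.Params().Get("name"))
        })
        app.Run(siris.Addr(":8080"))
    }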
Package rollbar is a Golang Rollbar client that makes it easy to report errors to Rollbar with full stacktraces. Basic Usage This package is designed to be used via the functions exposed at the root of the `rollbar` package. These work by managing a single instance of the `Client` type that is configurable via the setter functions at the root of the package. If you wish for more fine grained control over the client, or you wish to have multiple independent clients, then you can create and manage your own instances of the `Client` type. We provide two implementations of the `Transport` interface, `AsyncTransport` and `SyncTransport`. These manage the communication with the network layer. The Async version uses a buffered channel to communicate with the Rollbar API in a separate goroutine. The Sync version is fully synchronous. It is possible to create your own `Transport` and configure a Client to use your preferred implementation. Go does not provide a mechanism for handling all panics automatically, therefore we provide two functions, `Wrap` and `WrapAndWait`, to make working with panics easier. They both take a function and then report to Rollbar if that function panics. They use the recover mechanism to capture the panic, and therefore if you wish your process to have the normal behaviour on panic (i.e. to crash), you will need to re-panic the result of calling `Wrap` (a sketch follows at the end of this overview). The pattern of calling `Wrap(...)` and then `Wait(...)` can be combined via `WrapAndWait(...)`. When `WrapAndWait(...)` returns, any panic that occurred has already been sent to the Rollbar API. The error is still returned by this function if there is one. Due to the nature of the `error` type in Go, it can be difficult to attribute errors to their origin without doing some extra work. To account for this, we define the interface `CauseStacker`: One can implement this interface for custom Error types to be able to build up a chain of stack traces. In order to get the correct stacks, callers must call BuildStack on their own at the time that the cause is wrapped. This is the least intrusive mechanism for gathering this information due to the decisions made by the Go runtime to not track this information.
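A sketch of the Wrap/Wait pattern described above: `Wrap` runs the function, reports any panic to Rollbar, and returns the recovered value so the caller can re-panic to preserve normal crash semantics. Token and function names here are placeholders:

    package main

    import "github.com/rollbar/rollbar-go"

    func doWork() {
        panic("boom")
    }

    func main() {
        rollbar.SetToken("MY_TOKEN") // placeholder token
        err := rollbar.Wrap(doWork)  // reports the panic, returns the recovered value
        rollbar.Wait()               // flush queued items before exiting
        if err != nil {
            panic(err) // re-panic to restore the normal crash behaviour
        }
    }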
Package fm provides a pure Go wrapper around the macOS Foundation Models framework. Foundation Models is Apple's on-device large language model framework introduced in macOS 26 Tahoe, providing privacy-focused AI capabilities without requiring internet connectivity.

Features:

• Streaming-first text generation with LanguageModelSession
• Simulated real-time response streaming with word/sentence chunks
• Dynamic tool calling with custom Go tools and input validation
• Structured output generation with JSON formatting
• Context window management (4096 token limit)
• Context cancellation and timeout support
• Session lifecycle management with proper memory handling
• System instructions support
• Generation options for temperature, max tokens, and other parameters
• Structured logging with Go slog integration for comprehensive debugging

Requirements:

• macOS 26 Tahoe or later
• Apple Intelligence enabled
• Compatible Apple Silicon device

Typical usage: create a session and generate text; control output with GenerationOptions; create a session with specific behavior via system instructions. Foundation Models has a strict 4096 token context window, so monitor usage. You can define custom tools that the model can call, add validation to your tools for better error handling, and then register and use them. You can also generate structured JSON responses, cancel long-running requests with context support, and generate responses with simulated real-time streaming output. Note: the current streaming implementation is simulated (it breaks the complete response into chunks); native streaming will be implemented when Foundation Models provides streaming APIs. Check if Foundation Models is available before use. The package provides comprehensive error handling. Always release sessions to prevent memory leaks.

Performance notes:

• Foundation Models runs entirely on-device
• No internet connection required
• Processing time depends on prompt complexity and device capabilities
• Context window is limited to 4096 tokens
• Token estimation is approximate (4 chars per token)
• Use context cancellation for long-running requests
• Input validation prevents runtime errors and improves performance

The package is not thread-safe. Use appropriate synchronization when accessing sessions from multiple goroutines. Context cancellation is goroutine-safe and can be used from any goroutine. This package automatically manages the Swift shim library (libFMShim.dylib) that bridges Foundation Models APIs to C functions callable from Go via purego. The library search strategy:

1. Look for an existing libFMShim.dylib in the current directory and common paths
2. If not found, automatically extract the embedded library to a temp directory
3. Load the library and initialize the Foundation Models interface

No manual setup is required; the package is fully self-contained!

Known limitations:

• Foundation Models API is still evolving
• Some advanced GenerationOptions may not be fully supported yet
• Foundation Models tool invocation can be inconsistent due to safety restrictions
• Context cancellation cannot interrupt actual model computation
• Streaming is currently simulated (post-processing); native streaming is pending Apple API support
• macOS 26 Tahoe only

✅ **What Works:**

• Tool registration and parameter definition
• Swift ↔ Go callback mechanism
• Real data fetching (weather, calculations, etc.)
• Error handling and validation
• Structured logging with Go slog integration

⚠️ **Foundation Models Behavior:**

• Tool calling works but can be inconsistent
• Some queries may be blocked by safety guardrails
• Success rate varies by tool complexity and phrasing

The package provides comprehensive debug logging through Go's slog package. Debug logs include:

• Session creation and configuration details
• Tool registration and parameter validation
• Request/response processing with timing
• Context usage and memory management
• Swift shim layer interaction details

See LICENSE file for details.

Package fm provides a pure Go wrapper around the macOS Foundation Models framework using purego to call a Swift shim library that exports C functions. Foundation Models (macOS 26 Tahoe) provides on-device LLM capabilities including:

- Text generation with LanguageModelSession
- Streaming responses via delegates or async sequences
- Tool calling with requestToolInvocation:with:
- Structured outputs with LanguageModelRequestOptions

IMPORTANT: Foundation Models has a strict 4096 token context window limit. This package automatically tracks context usage and validates requests to prevent exceeding the limit. Use GetContextSize(), IsContextNearLimit(), and RefreshSession() to manage long conversations. This implementation uses a Swift shim (libFMShim.dylib) that exports C functions using @_cdecl to bridge Swift async methods to synchronous C calls.
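A hypothetical sketch of the session lifecycle described above (create, prompt, release). The identifiers NewSession, Respond, and Release are assumptions inferred from the feature list, not the package's confirmed API:

    // Hypothetical sketch; NewSession, Respond, and Release are assumed
    // names, not the confirmed API of package fm.
    session := fm.NewSession()    // create a session (assumed constructor)
    defer session.Release()       // always release to avoid leaking memory
    resp, err := session.Respond(ctx, "Give me three rhymes for 'Go'")
    if err != nil {
        log.Fatal(err)
    }
    fmt.Println(resp)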
Package numpool provides a distributed resource pool implementation backed by PostgreSQL. It allows multiple processes to share a finite set of resources, with automatic blocking when resources are unavailable and fair distribution using a wait queue. The pool uses PostgreSQL's transactional guarantees and LISTEN/NOTIFY mechanism to ensure safe concurrent access and efficient resource allocation across multiple application instances. Before using numpool, you need to set up the required database table and obtain a manager; basic usage and lifecycle management are sketched below.
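A hypothetical usage sketch; the identifiers (Setup, NewManager, Acquire, Release) are assumptions inferred from the description above, not the package's confirmed API:

    // Hypothetical sketch; names are assumed, not the confirmed API.
    if err := numpool.Setup(ctx, dbPool); err != nil { // create the required table
        log.Fatal(err)
    }
    manager, err := numpool.NewManager(dbPool) // obtain a manager
    if err != nil {
        log.Fatal(err)
    }
    res, err := pool.Acquire(ctx) // blocks fairly until a resource is free
    if err != nil {
        log.Fatal(err)
    }
    defer res.Release(ctx) // return the resource so waiters are notified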
Package golangsdk provides a multi-vendor interface to OpenStack-compatible clouds. The library has a three-level hierarchy: providers, services, and resources. Provider structs represent the cloud providers that offer and manage a collection of services. You will generally want to create one Provider client per OpenStack cloud. Use your OpenStack credentials to create a Provider client. The IdentityEndpoint is typically referred to as "auth_url" or "OS_AUTH_URL" in information provided by the cloud operator. Additionally, the cloud may refer to TenantID or TenantName as project_id and project_name. Credentials are specified like so: You may also use the openstack.AuthOptionsFromEnv() helper function. This function reads in standard environment variables frequently found in an OpenStack `openrc` file. Again note that Gophercloud currently uses "tenant" instead of "project". Service structs are specific to a provider and handle all of the logic and operations for a particular OpenStack service. Examples of services include: Compute, Object Storage, Block Storage. In order to define one, you need to pass in the parent provider, like so: Resource structs are the domain models that services make use of in order to work with and represent the state of API resources: Intermediate Result structs are returned for API operations, which allow generic access to the HTTP headers, response body, and any errors associated with the network transaction. To turn a result into a usable resource struct, you must call the Extract method which is chained to the response, or an Extract function from an applicable extension: All requests that enumerate a collection return a Pager struct that is used to iterate through the results one page at a time. Use the EachPage method on that Pager to handle each successive Page in a closure, then use the appropriate extraction method from that request's package to interpret that Page as a slice of results: If you want to obtain the entire collection of pages without doing any intermediary processing on each page, you can use the AllPages method: This top-level package contains utility functions and data types that are used throughout the provider and service packages. Of particular note for end users are the AuthOptions and EndpointOpts structs.
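A sketch of the credential and service-client setup described above, following the Gophercloud-style API this package derives from; field names and placeholders ({username} etc.) may vary slightly in your deployment:

    opts := golangsdk.AuthOptions{
        IdentityEndpoint: "https://openstack.example.com:5000/v3",
        Username:         "{username}",
        Password:         "{password}",
        TenantID:         "{tenant_id}",
        DomainName:       "{domain_name}",
    }
    provider, err := openstack.AuthenticatedClient(opts)
    if err != nil {
        log.Fatal(err)
    }
    // Service clients are created from the parent provider, e.g. Compute:
    compute, err := openstack.NewComputeV2(provider, golangsdk.EndpointOpts{
        Region: "{region_name}",
    })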
Package skipper provides an HTTP routing library with flexible configuration as well as a runtime update of the routing rules.

Skipper works as an HTTP reverse proxy that is responsible for mapping incoming requests to multiple HTTP backend services, based on routes that are selected by the request attributes. At the same time, both the requests and the responses can be augmented by a filter chain that is specifically defined for each route. Optionally, it can provide circuit breaker mechanism individually for each backend host.

Skipper can load and update the route definitions from multiple data sources without being restarted. It provides a default executable command with a few built-in filters, however, its primary use case is to be extended with custom filters, predicates or data sources. For further information read 'Extending Skipper'. Skipper took the core design and inspiration from Vulcand: https://github.com/mailgun/vulcand.

Skipper is 'go get' compatible. If needed, create a 'go workspace' first: Get the Skipper packages: Create a file with a route: Optionally, verify the syntax of the file: Start Skipper and make an HTTP request:

The core of Skipper's request processing is implemented by a reverse proxy in the 'proxy' package. The proxy receives the incoming request, forwards it to the routing engine in order to receive the most specific matching route. When a route matches, the request is forwarded to all filters defined by it. The filters can modify the request or execute any kind of program logic. Once the request has been processed by all the filters, it is forwarded to the backend endpoint of the route. The response from the backend goes once again through all the filters in reverse order. Finally, it is mapped as the response of the original incoming request.

Besides the default proxying mechanism, it is possible to define routes without a real network backend endpoint. One of these cases is called a 'shunt' backend, in which case one of the filters needs to handle the request providing its own response (e.g. the 'static' filter). Actually, filters themselves can instruct the request flow to shunt by calling the Serve(*http.Response) method of the filter context.

Another case of a route without a network backend is the 'loopback'. A loopback route can be used to match a request, modified by filters, against the lookup tree with different conditions and then execute a different route. One example scenario can be to use a single route as an entry point to execute some calculation to get an A/B testing decision and then matching the updated request metadata for the actual destination route. This way the calculation can be executed for only those requests that don't contain information about a previously calculated decision. For further details, see the 'proxy' and 'filters' package documentation.

Finding a request's route happens by matching the request attributes to the conditions in the route's definitions. Such definitions may have the following conditions:

- method
- path (optionally with wildcards)
- path regular expressions
- host regular expressions
- headers
- header regular expressions

It is also possible to create custom predicates with any other matching criteria. The relation between the conditions in a route definition is 'and', meaning, that a request must fulfill each condition to match a route. For further details, see the 'routing' package documentation.

Filters are applied in order of definition to the request and in reverse order to the response. 
They are used to modify request and response attributes, such as headers, or execute background tasks, like logging. Some filters may handle the requests without proxying them to service backends. Filters, depending on their implementation, may accept/require parameters that are set specifically to the route. For further details, see the 'filters' package documentation.

Each route has one of the following backends: HTTP endpoint, shunt or loopback.

Backend endpoints can be any HTTP service. They are specified by their network address, including the protocol scheme, the domain name or the IP address, and optionally the port number: e.g. "https://www.example.org:4242". (The path and query are sent from the original request, or set by filters.)

A shunt route means that Skipper handles the request alone and doesn't make requests to a backend service. In this case, it is the responsibility of one of the filters to generate the response.

A loopback route executes the routing mechanism on the current state of the request from the start, including the route lookup. This way it serves as a form of an internal redirect.

Route definitions consist of the following:

- request matching conditions (predicates)
- filter chain (optional)
- backend (either an HTTP endpoint or a shunt)

A minimal route in eskip syntax is shown after this list. The eskip package implements the in-memory and text representations of route definitions, including a parser. (Note to contributors: in order to stay compatible with 'go get', the generated part of the parser is stored in the repository. When changing the grammar, 'go generate' needs to be executed explicitly to update the parser.) For further details, see the 'eskip' package documentation.

Skipper has filter implementations of basic auth and OAuth2. It can be integrated with tokeninfo based OAuth2 providers. For details, see: https://godoc.org/github.com/zalando/skipper/filters/auth.

Skipper's route definitions are loaded from one or more data sources. It can receive incremental updates from those data sources at runtime. It provides the following data clients:

- Kubernetes: Skipper can be used as part of a Kubernetes Ingress Controller implementation together with https://github.com/zalando-incubator/kube-ingress-aws-controller . In this scenario, Skipper uses the Kubernetes API's Ingress extensions as a source for routing. For a complete deployment example, see more details in: https://github.com/zalando-incubator/kubernetes-on-aws/ .

- Innkeeper: the Innkeeper service implements a storage for large sets of Skipper routes, with an HTTP+JSON API, OAuth2 authentication and role management. See the 'innkeeper' package and https://github.com/zalando/innkeeper.

- etcd: Skipper can load routes and receive updates from etcd clusters (https://github.com/coreos/etcd). See the 'etcd' package.

- static file: package eskipfile implements a simple data client, which can load route definitions from a static file in eskip format. Currently, it loads the routes on startup. It doesn't support runtime updates.

Skipper can use additional data sources, provided by extensions. Sources must implement the DataClient interface in the routing package.

Skipper provides circuit breakers, configured either globally, based on backend hosts or based on individual routes. It supports two types of circuit breaker behavior: open on N consecutive failures, or open on N failures out of M requests. For details, see: https://godoc.org/github.com/zalando/skipper/circuit. 
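To make the route definition format concrete, here is a minimal eskip route combining predicates, a filter chain (using Skipper's built-in setResponseHeader filter), and an HTTP backend:

    hello: Path("/hello") && Method("GET")
      -> setResponseHeader("X-Hello", "World")
      -> "https://backend.example.org";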
Skipper can be started with the default executable command 'skipper', or as a library built into an application. The easiest way to start Skipper as a library is to execute the 'Run' function of the current, root package. Each option accepted by the 'Run' function is wired in the default executable as well, as a command line flag. E.g. EtcdUrls becomes -etcd-urls as a comma separated list. For command line help, enter: An additional utility, eskip, can be used to verify, print, update and delete routes from/to files or etcd (Innkeeper on the roadmap). See the cmd/eskip command package, and/or enter in the command line:

Skipper doesn't use dynamically loaded plugins, however, it can be used as a library, and it can be extended with custom predicates, filters and/or custom data sources.

To create a custom predicate, one needs to implement the PredicateSpec interface in the routing package. Instances of the PredicateSpec are used internally by the routing package to create the actual Predicate objects as referenced in eskip routes, with concrete arguments. Example, randompredicate.go: In the above example, a custom predicate is created, that can be referenced in eskip definitions with the name 'Random':

To create a custom filter we need to implement the Spec interface of the filters package. 'Spec' is the specification of a filter, and it is used to create concrete filter instances, while the raw route definitions are processed. Example, hellofilter.go: The above example creates a filter specification, and in the routes where they are included, the filter instances will set the 'X-Hello' header for each and every response. The name of the filter is 'hello', and in a route definition it is referenced as:

The easiest way to create a custom Skipper variant is to implement the required filters (as in the example above) by importing the Skipper package, and starting it with the 'Run' command. Example, hello.go: A file containing the routes, routes.eskip: Start the custom router: The 'Run' function in the root Skipper package starts its own listener but it doesn't provide the best composability. The proxy package, however, provides a standard http.Handler, so it is possible to use it in a more complex solution as a building block for routing.

Skipper provides detailed logging of failures, and access logs in Apache log format. Skipper also collects detailed performance metrics, and exposes them on a separate listener endpoint for pulling snapshots. For details, see the 'logging' and 'metrics' packages documentation.

The router's performance depends on the environment and on the used filters. Under ideal circumstances, and without filters, the biggest time factor is the route lookup. Skipper is able to scale to thousands of routes with logarithmic performance degradation. However, this comes at the cost of increased memory consumption, due to storing the whole lookup tree in a single structure. Benchmarks for the tree lookup can be run by: In case more aggressive scale is needed, it is possible to set up Skipper in a cascade model, with multiple Skipper instances for specific route segments.
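A sketch in the spirit of the hellofilter.go example referenced above, implementing the filters.Spec and filters.Filter interfaces and wiring the custom filter into Run; the route file name is a placeholder:

    package main

    import (
        "log"

        "github.com/zalando/skipper"
        "github.com/zalando/skipper/filters"
    )

    type helloSpec struct{}

    type helloFilter struct{ who string }

    func (s *helloSpec) Name() string { return "hello" }

    // CreateFilter builds a concrete filter instance from the route's arguments.
    func (s *helloSpec) CreateFilter(args []interface{}) (filters.Filter, error) {
        if len(args) != 1 {
            return nil, filters.ErrInvalidFilterParameters
        }
        who, ok := args[0].(string)
        if !ok {
            return nil, filters.ErrInvalidFilterParameters
        }
        return &helloFilter{who: who}, nil
    }

    func (f *helloFilter) Request(ctx filters.FilterContext) {}

    func (f *helloFilter) Response(ctx filters.FilterContext) {
        // set the X-Hello header on every response of the route
        ctx.Response().Header.Set("X-Hello", "Hello, "+f.who+"!")
    }

    func main() {
        log.Fatal(skipper.Run(skipper.Options{
            Address:       ":9090",
            RoutesFile:    "routes.eskip", // placeholder path
            CustomFilters: []filters.Spec{&helloSpec{}},
        }))
    }

In a route definition the filter is then referenced as hello("world").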
Package virtcontainers manages hardware virtualized containers. Each container belongs to a set of containers sharing the same networking namespace and storage, also known as a pod. Virtcontainers pods are hardware virtualized, i.e. they run on virtual machines. Virtcontainers will create one VM per pod, and containers will be created as processes within the pod VM. The virtcontainers package manages both pods and containers lifecycles. This example creates and starts a single container pod, using qemu as the hypervisor and hyperstart as the VM agent.
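A hypothetical sketch of the example described above; the exact virtcontainers type and function names (PodConfig, ContainerConfig, RunPod) are assumptions inferred from the package description, not a verified API:

    // Hypothetical sketch; identifiers are assumed from the description above.
    cmd := vc.Cmd{Args: []string{"/bin/sh"}, WorkDir: "/"}
    container := vc.ContainerConfig{ID: "c0", RootFs: "/var/lib/rootfs", Cmd: cmd}
    podConfig := vc.PodConfig{
        HypervisorType: vc.QemuHypervisor,  // qemu as the hypervisor
        AgentType:      vc.HyperstartAgent, // hyperstart as the VM agent
        Containers:     []vc.ContainerConfig{container},
    }
    pod, err := vc.RunPod(podConfig) // create and start the pod VM
    if err != nil {
        log.Fatal(err)
    }
    _ = pod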
Package paymentcryptographydata provides the API client, operations, and parameter types for Payment Cryptography Data Plane. You use the Amazon Web Services Payment Cryptography Data Plane to manage how encryption keys are used for payment-related transaction processing and associated cryptographic operations. You can encrypt, decrypt, generate, verify, and translate payment-related cryptographic operations in Amazon Web Services Payment Cryptography. For more information, see Data operations in the Amazon Web Services Payment Cryptography User Guide. To manage your encryption keys, you use the Amazon Web Services Payment Cryptography Control Plane. You can create, import, export, share, manage, and delete keys. You can also manage Identity and Access Management (IAM) policies for keys.
Package paymentcryptography provides the API client, operations, and parameter types for Payment Cryptography Control Plane. Amazon Web Services Payment Cryptography Control Plane APIs manage encryption keys for use during payment-related cryptographic operations. You can create, import, export, share, manage, and delete keys. You can also manage Identity and Access Management (IAM) policies for keys. For more information, see Identity and access management in the Amazon Web Services Payment Cryptography User Guide. To use encryption keys for payment-related transaction processing and associated cryptographic operations, you use the Amazon Web Services Payment Cryptography Data Plane. You can perform actions like encrypt, decrypt, generate, and verify payment-related data. All Amazon Web Services Payment Cryptography API calls must be signed and transmitted using Transport Layer Security (TLS). We recommend you always use the latest supported TLS version. For logging API requests, Amazon Web Services Payment Cryptography supports CloudTrail for control plane operations, a service that logs Amazon Web Services API calls and related events for your Amazon Web Services account and delivers them to an Amazon S3 bucket you specify. By using the information collected by CloudTrail, you can determine what requests were made to Amazon Web Services Payment Cryptography, who made the request, when it was made, and so on. If you don't configure a trail, you can still view the most recent events in the CloudTrail console. For more information, see the CloudTrail User Guide.
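A minimal sketch of creating a Control Plane client and listing keys, following the standard AWS SDK for Go v2 patterns (LoadDefaultConfig, NewFromConfig); region and credentials come from your environment:

    package main

    import (
        "context"
        "fmt"
        "log"

        "github.com/aws/aws-sdk-go-v2/config"
        "github.com/aws/aws-sdk-go-v2/service/paymentcryptography"
    )

    func main() {
        cfg, err := config.LoadDefaultConfig(context.TODO())
        if err != nil {
            log.Fatal(err)
        }
        client := paymentcryptography.NewFromConfig(cfg)
        out, err := client.ListKeys(context.TODO(), &paymentcryptography.ListKeysInput{})
        if err != nil {
            log.Fatal(err)
        }
        for _, k := range out.Keys {
            fmt.Println(*k.KeyArn) // each entry summarizes one key
        }
    }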
Package session provides a convenient way to store session data (such as a user ID) securely in a web browser cookie or other authentication token. Cookie values generated by this package use modern authenticated encryption, so they can't be inspected or altered by client processes. Most users of this package will use functions Set and Get, which manage cookies directly. An analogous pair of functions, Encode and Decode, help when the session data will be stored somewhere other than a browser cookie; for example, an API token configured by hand in an API client process.
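A hypothetical sketch of the cookie round trip described above; the signatures of Set and Get are assumptions, not the package's confirmed API:

    // Hypothetical sketch; Set and Get signatures are assumed.
    func login(w http.ResponseWriter, r *http.Request) {
        // store the user ID in an encrypted, authenticated cookie
        session.Set(w, "user-id-42") // assumed signature
    }

    func whoami(w http.ResponseWriter, r *http.Request) {
        uid, err := session.Get(r) // assumed signature
        if err != nil {
            http.Error(w, "not logged in", http.StatusUnauthorized)
            return
        }
        fmt.Fprintln(w, uid)
    }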
Package snd provides methods and types for sound processing and synthesis. Audio hardware is accessed via package snd/al, which in turn manages the dispatching of sound synthesis via golang.org/x/mobile/audio/al. Start the dispatcher as follows: Once running, add a source for sound synthesis. For example: This results in a 440Hz tone being played back through the audio hardware. Synthesis types in package snd implement the Sound interface, and many type methods accept a Sound argument that can affect sampling. For example, one may modulate an oscillator by passing in a third argument to NewOscil. The above results in a lower frequency sound that may require decent speakers to hear properly. Note the sine argument in the previous example. There are two conceptual types of sounds, ContinuousFunc and Discrete. ContinuousFunc represents an indefinite series over time. Discrete is the sampling of a ContinuousFunc over an interval. Functions such as Sine, Triangle, and Square (non-exhaustive) return Discretes created by sampling a ContinuousFunc such as SineFunc, TriangleFunc, and SquareFunc. Discrete signals serve as a lookup table to efficiently synthesize sound. A Discrete is a []float64 and can sample any ContinuousFunc, whether defined within the package or user-defined; a ContinuousFunc is a func(t float64) float64. Discrete signals may be further modified with intent or arbitrarily. For example, Discrete.Add(Discrete, int) performs additive synthesis and is used by functions such as SquareSynthesis(int) to return an approximation of a square signal based on a sinusoidal. Functions that take a time.Duration argument approximate the value to the closest number of frames. For example, if the sample rate is 44.1kHz and the duration is 75ms, the argument represents 3307 frames, which is approximately 74.99ms.
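A sketch following the description above, assuming NewOscil's (signal, frequency, modulator) shape and an al dispatcher entry point; check the package for the exact signatures:

    // Sketch; the al dispatcher call is an assumption from the text above.
    sine := snd.Sine()
    osc := snd.NewOscil(sine, 440, nil) // a 440Hz tone
    // Modulate a second oscillator with a low-frequency oscillator,
    // producing the lower-frequency sound described above.
    mod := snd.NewOscil(sine, 2, nil)
    low := snd.NewOscil(sine, 220, mod)
    _ = low
    al.Start(osc) // hand the source to the dispatcher (assumed entry point)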
Package oops offers a straightforward and structured approach to error handling in Go applications. It enables you to create, categorize, and manage errors effectively using a system of labels and handlers.

Key Features:

- Categorize Errors with Label: Define custom error categories using the Label error type. This allows you to classify application errors consistently. Examples demonstrating how to define these custom categories can be found in the example package.

- Create Labeled Errors: Use the New function to create new errors and associate them with your predefined labels. For instance: err := oops.New("failed to process", oops.Tag(example.Duplicated.Error))

- Flexible Error Options: ErrorOption is a function that modifies an Error instance, allowing you to set options like tagging the error with a Label or adding a stack trace with Because.

- Stack Traces: Use Because in the New function to append stack traces to your errors, providing valuable context for debugging.

- Structured Error Handling:

-- errors.Is: Handle errors in higher layers of your application using errors.Is to check against specific labeled errors, as in the sketch after this list.

-- Map Type: The Map type provides a structured way to handle builtin errors. Just define a map of errors to their corresponding *Error instances, and use the Map.Handle method to process errors. The Handle method will append the original error to the stack of the returned *Error.

-- Custom Handlers: Define complex Handler functions in different application layers. The Handle function can then be used to process a given error by invoking a series of these custom handlers.
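A short sketch of checking a labeled error with errors.Is, built from the New/Tag call quoted above; the example.Duplicated label is assumed to be defined as in the package's example directory:

    err := oops.New("failed to process", oops.Tag(example.Duplicated.Error))
    if errors.Is(err, example.Duplicated.Error) {
        // handle the duplicated case specifically, e.g. return HTTP 409
    }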
Package gostage provides a workflow orchestration and state management system. gostage enables building multi-stage stateful workflows with runtime modification capabilities. It provides a framework for organizing complex processes into manageable stages and actions with rich metadata support. Core components include: Key features include sequential execution, dynamic modification, tag-based organization, type-safe state storage, conditional execution, rich metadata, and serializable state.
Package inframetadata handles host metadata and infrastructure list related features. It stores the host metadata and gohai payload definitions as well as the `Reporter` implementation. A `Reporter` keeps a `HostMap` (a map of hostnames to host metadata payloads) and periodically clears it out and reports the information using a `Pusher`. The `Reporter` has three public methods:

- The `Run() error` and `Stop()` methods manage its lifecycle.
- The `ConsumeResource(pcommon.Resource) (bool, error)` method ingests resources, updates host metadata payloads, and reports whether any changes or errors occurred during processing.

Internally, the `Reporter` manages a `HostMap`, which has two public methods:

- The `Update(host string, resource pcommon.Resource) (changed bool, err error)` method updates a host's information and reports whether any changes or errors occurred during processing.
- The `Extract() map[string]payloads.HostMetadata` method clears out the `HostMap` and returns a copy of its internal information.
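A sketch of the Reporter lifecycle using the methods documented above; the constructor name and its parameters are assumptions:

    // Sketch; NewReporter and its parameters are assumed, the methods
    // Run, Stop, and ConsumeResource are documented above.
    reporter, err := inframetadata.NewReporter(pusher, logger, 30*time.Minute)
    if err != nil {
        log.Fatal(err)
    }
    go reporter.Run()    // periodically Extract()s and pushes payloads
    defer reporter.Stop()

    changed, err := reporter.ConsumeResource(res) // res is a pcommon.Resource
    if err != nil {
        log.Printf("consume failed: %v (changed=%v)", err, changed)
    }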
Pipeline is a Go library that helps you build pipelines without worrying about channel management and concurrency. It contains common fan-in and fan-out operations as well as useful utility funcs for batch processing and scaling. If you have another common use case you would like to see covered by this package, please open a feature request: https://github.com/deliveryhero/pipeline/issues.

* How to run a pipeline until the container is killed: https://github.com/deliveryhero/pipeline#PipelineShutsDownWhenContainerIsKilled
* How to shut down a pipeline when there is an error: https://github.com/deliveryhero/pipeline#PipelineShutsDownOnError
* How to shut down a pipeline after it has finished processing a batch of data: https://github.com/deliveryhero/pipeline#PipelineShutsDownWhenInputChannelIsClosed

The following example shows how you can shut down a pipeline gracefully when it receives an error message. This example demonstrates a pipeline that runs until the OS / container the pipeline is running in kills it. The following example demonstrates a pipeline that naturally finishes its run when the input channel is closed.
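As a conceptual illustration of the "shut down on error" pattern in plain Go channels (the library wraps this wiring for you; this sketch does not use the library's API):

    package main

    import "fmt"

    func main() {
        in := make(chan int)
        done := make(chan error, 1)

        go func() { // consumer stage: stops the pipeline on error
            for v := range in {
                if v < 0 {
                    done <- fmt.Errorf("bad value: %d", v)
                    return
                }
                fmt.Println("processed", v)
            }
            done <- nil // input closed: natural finish
        }()

        for _, v := range []int{1, 2, -1, 3} {
            select {
            case in <- v: // normal flow
            case err := <-done: // stage aborted; stop feeding
                fmt.Println("shutdown:", err)
                return
            }
        }
        close(in)
        fmt.Println("shutdown:", <-done)
    }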
Package workpool implements a pool of goroutines that are dedicated to processing work that is posted into the pool. The following is a list of parameters for creating a TraceLog: Goroutines are used to manage and process all the work. A single Queue routine provides the safe queuing of work. The Queue routine keeps track of the amount of work in the queue and reports an error if the queue is full. The concurrencyLevel parameter defines the number of work routines to create. These work routines will process work submitted to the queue. The work routines keep track of the number of active work routines for reporting. The PostWork method is used to post work into the ThreadPool. This call will block until the Queue routine reports back success or failure that the work is in queue. The following shows a simple test application. The following shows some sample output.
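A sketch based on the description above; the import path, New's exact parameters, and the worker interface name are assumptions in the style of this package's docs:

    package main

    import (
        "fmt"
        "runtime"

        "github.com/goinggo/workpool" // import path assumed
    )

    type myWork struct{ name string }

    // DoWork is assumed to satisfy the pool's worker interface.
    func (w *myWork) DoWork(workRoutine int) {
        fmt.Printf("routine %d handling %s\n", workRoutine, w.name)
    }

    func main() {
        // concurrencyLevel work routines, queue capacity of 800
        pool := workpool.New(runtime.NumCPU(), 800)
        if err := pool.PostWork("example", &myWork{name: "job-1"}); err != nil {
            fmt.Println("queue full:", err) // PostWork reports a full queue
        }
    }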
Package pubsubmutex implements a thread-safe, in-memory, topic-based publish-subscribe system. It is designed for concurrent applications where different parts of the system need to communicate asynchronously without being directly coupled. **Thread Safety:** All operations on the PubSub system, such as subscribing, publishing, and unsubscribing, are safe for concurrent use by multiple goroutines. **Topic-Based Communication:** Clients subscribe to named topics and receive only the messages published to those specific topics. **Configurable Message Delivery:** Topic behavior can be configured using TopicConfig. This allows control over whether messages should be dropped if a subscriber's buffer is full (AllowDropping) or if publishing should block with a specific timeout (PublishTimeout). **Decoupled Architecture:** Each subscriber has an internal buffered channel that decouples the publisher from the consumer. A publisher can send a message without waiting for the subscriber to be ready to process it, improving system responsiveness. **Subscriber Self-Cleanup:** Subscribers can manage their own lifecycle. A client holding a Subscriber instance can call its Unsubscribe() method to cleanly remove itself from the PubSub system. **Automatic Resource Cleanup:** If a message's `Data` field implements the `Cleanable` interface (with a `Cleanup()` method), `Cleanup()` will be called automatically if the message is dropped. This occurs if a subscriber's buffer is full, a publish times out, the subscriber is closing, or if there are no subscribers for the topic at all. This prevents resource leaks. **Graceful Shutdown:** The entire PubSub system can be shut down gracefully using the Close() method, which ensures all active subscribers are unsubscribed and their resources are released. Here are some examples demonstrating how to use the package. ## Initialization and Subscribing First, create a new PubSub system instance and subscribe to a topic. The `Subscribe` method returns a `Subscriber` instance, which contains the channel you will use to receive messages. ## Publishing and Receiving Messages Publish messages to a topic using `ps.Publish()`. To receive them, read from the `Ch` channel on your `Subscriber` instance. It's common to do this in a separate goroutine. ## Self-Unsubscribing A subscriber can clean itself up by calling its `Unsubscribe()` method. This is often done based on some condition, like receiving a specific message. ## Automatic Cleanup of Dropped Messages If a message's payload needs to have resources freed (e.g., closing a file handle), you can implement the `Cleanable` interface. Its `Cleanup()` method will be called if the message is dropped.
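A sketch assembled from the names documented above (Subscribe, Publish, Unsubscribe, Ch, Close); the constructor name and exact signatures are assumptions:

    // Sketch; constructor and signatures are assumed, not confirmed.
    ps := pubsubmutex.NewPubSub() // assumed constructor
    defer ps.Close()              // graceful shutdown of all subscribers

    sub := ps.Subscribe("orders")
    go func() {
        for msg := range sub.Ch { // receive published messages
            fmt.Println("got:", msg.Data)
            sub.Unsubscribe() // self-cleanup after the first message
            return
        }
    }()
    ps.Publish("orders", "order-42") // assumed signature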
Package pipelines provides a set of utilities for creating and managing concurrent data processing pipelines in Go. The library uses channels under the hood to pass data between pipeline stages. Each stage runs in its own goroutine, ensuring concurrency and separation of concerns. Below is an example of an application utilizing pipelines for squaring an odd int and managing shared state counters. The package includes various functions to create, manipulate, and control the flow of data through channels, allowing for flexible and efficient data processing. The main components of this package are:

- Pipeline: A struct that defines a generic connection of data streams.
- DataStream (in subpackage "datastreams"): Provides the methods to build concurrency stages.

Pipelines work by connecting a "Source" (an upstream data producer) with an optional chain of transformations or filters before optionally "Sinking" (sending the output to a consumer). Under the hood, all data flows through Go channels with concurrency managed by goroutines. Each transformation or filter is effectively run in parallel, communicating via channels. For more in-depth usage, see the examples below and the doc.go file. Example_sourceSink constructs and starts the Pipeline.
Package gomcp provides a Go implementation of the Model Context Protocol (MCP). The Model Context Protocol (MCP) is a standardized communication protocol designed to facilitate interaction between applications and Large Language Models (LLMs). This library provides a complete Go implementation of the protocol with support for all specification versions (2024-11-05, 2025-03-26, and draft) with automatic version detection and negotiation. Starting with v1.5.0, all public APIs are locked and stable. This library is ready for production use with the following guarantees:

- Full MCP protocol implementation
- Client and server components
- Multiple transport options (stdio, HTTP, WebSocket, Server-Sent Events)
- Automatic protocol version negotiation
- Comprehensive type safety
- Support for all MCP operations: tools, resources, prompts, and sampling
- Flexible configuration options
- Process management for external MCP servers
- Server configuration file support

The library is organized into the following main packages:

## Client Example

## Server Configuration Example

## Server Example

GOMCP includes robust functionality for managing external MCP server processes. For more information, see the docs/examples/server-config.md documentation. This library implements the Model Context Protocol as defined at: https://github.com/microsoft/modelcontextprotocol For detailed documentation, examples, and specifications, see: https://modelcontextprotocol.github.io/ For more examples, see the examples directory in this repository. gomcp follows semantic versioning. Starting with v1.5.0, the APIs are locked and stable, making this library ready for production use. The current version is available through the Version constant.
Package casket implements the Casket server manager. To use this package, start your configured servers and then call Wait() on your instance to wait for all servers to quit before your process exits; a sketch follows.
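A minimal sketch, assuming casket mirrors the classic caddy.Start / Instance.Wait lifecycle (Casket is a Caddy fork); the type and function names are assumptions:

    // Sketch; casket.Start and casket.CaddyfileInput are assumed names.
    instance, err := casket.Start(casket.CaddyfileInput{
        Contents:       []byte("localhost:8080"),
        ServerTypeName: "http",
    })
    if err != nil {
        log.Fatal(err)
    }
    instance.Wait() // block until all servers quit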
Package peer provides a common base for creating and managing Bitcoin network peers. This package builds upon the wire package, which provides the fundamental primitives necessary to speak the bitcoin wire protocol, in order to simplify the process of creating fully functional peers. In essence, it provides a common base for creating concurrent safe fully validating nodes, Simplified Payment Verification (SPV) nodes, proxies, etc. A quick overview of the major features peer provides is as follows:

- Provides a basic concurrent safe bitcoin peer for handling bitcoin communications via the peer-to-peer protocol
- Full duplex reading and writing of bitcoin protocol messages
- Automatic handling of the initial handshake process including protocol version negotiation
- Asynchronous message queuing of outbound messages with optional channel for notification when the message is actually sent
- Flexible peer configuration
  - Caller is responsible for creating outgoing connections and listening for incoming connections so they have flexibility to establish connections as they see fit (proxies, etc)
  - User agent name and version
  - Bitcoin network
  - Service support signalling (full nodes, bloom filters, etc)
  - Maximum supported protocol version
  - Ability to register callbacks for handling bitcoin protocol messages
- Inventory message batching and send trickling with known inventory detection and avoidance
- Automatic periodic keep-alive pinging and pong responses
- Random nonce generation and self connection detection
- Proper handling of bloom filter related commands when the caller does not specify the related flag to signal support
  - Disconnects the peer when the protocol version is high enough
  - Does not invoke the related callbacks for older protocol versions
- Snapshottable peer statistics such as the total number of bytes read and written, the remote address, user agent, and negotiated protocol version
- Helper functions for pushing addresses, getblocks, getheaders, and reject messages (these could all be sent manually via the standard message output function, but the helpers provide additional nice functionality such as duplicate filtering and address randomization)
- Ability to wait for shutdown/disconnect
- Comprehensive test coverage

All peer configuration is handled with the Config struct. This allows the caller to specify things such as the user agent name and version, the bitcoin network to use, which services it supports, and callbacks to invoke when bitcoin messages are received. See the documentation for each field of the Config struct for more details.

A peer can either be inbound or outbound. The caller is responsible for establishing the connection to remote peers and listening for incoming peers. This provides high flexibility for things such as connecting via proxies, acting as a proxy, creating bridge peers, choosing whether to listen for inbound peers, etc. The NewOutboundPeer and NewInboundPeer functions must be followed by calling Connect with a net.Conn instance to the peer. This will start all async I/O goroutines and initiate the protocol negotiation process. Once finished with the peer, call Disconnect to disconnect from the peer and clean up all resources. WaitForDisconnect can be used to block until peer disconnection and resource cleanup has completed.

In order to do anything useful with a peer, it is necessary to react to bitcoin messages. 
This is accomplished by creating an instance of the MessageListeners struct with the callbacks to be invoked specified and setting the Listeners field of the Config struct specified when creating a peer to it. For convenience, a callback hook for all of the currently supported bitcoin messages is exposed which receives the peer instance and the concrete message type. In addition, a hook for OnRead is provided so even custom message types for which this package does not directly provide a hook, as long as they implement the wire.Message interface, can be used. Finally, the OnWrite hook is provided, which in conjunction with OnRead, can be used to track server-wide byte counts. It is often useful to use closures which encapsulate state when specifying the callback handlers. This provides a clean method for accessing that state when callbacks are invoked.

The QueueMessage function provides the fundamental means to send messages to the remote peer. As the name implies, this employs a non-blocking queue. A done channel which will be notified when the message is actually sent can optionally be specified. There are certain message types which are better sent using other functions which provide additional functionality. Of special interest are inventory messages. Rather than manually sending MsgInv messages via QueueMessage, the inventory vectors should be queued using the QueueInventory function. It employs batching and trickling along with intelligent known remote peer inventory detection and avoidance through the use of a most-recently used algorithm.

In addition to the bare QueueMessage function previously described, the PushAddrMsg, PushGetBlocksMsg, PushGetHeadersMsg, and PushRejectMsg functions are provided as a convenience. While it is of course possible to create and send these messages manually via QueueMessage, these helper functions provide additional useful functionality that is typically desired. For example, the PushAddrMsg function automatically limits the addresses to the maximum number allowed by the message and randomizes the chosen addresses when there are too many. This allows the caller to simply provide a slice of known addresses, such as that returned by the addrmgr package, without having to worry about the details. Next, the PushGetBlocksMsg and PushGetHeadersMsg functions will construct proper messages using a block locator and ignore back-to-back duplicate requests. Finally, the PushRejectMsg function can be used to easily create and send an appropriate reject message based on the provided parameters, and it optionally provides a flag to cause it to block until the message is actually sent.

A snapshot of the current peer statistics can be obtained with the StatsSnapshot function. This includes statistics such as the total number of bytes read and written, the remote address, user agent, and negotiated protocol version.

This package provides extensive logging capabilities through the UseLogger function which allows a btclog.Logger to be specified. For example, logging at the debug level provides summaries of every message sent and received, and logging at the trace level provides full dumps of parsed messages as well as the raw message bytes using a format similar to hexdump -C.

This package supports all BIPs supported by the wire package. (https://godoc.org/github.com/p9c/pod/wire#hdr-Bitcoin_Improvement_Proposals)

This example demonstrates the basic process for initializing and creating an outbound peer. Peers negotiate by exchanging version and verack messages. 
For demonstration, a simple handler for the version message is attached to the peer; a sketch follows.
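A sketch of the outbound-peer flow described above (configure listeners, create the peer, dial, Connect, wait). The import paths are assumed from the BIP link above, and the OnVersion listener signature may differ slightly across versions of this package:

    package main

    import (
        "fmt"
        "net"

        "github.com/p9c/pod/peer" // import paths assumed
        "github.com/p9c/pod/wire"
    )

    func main() {
        cfg := &peer.Config{
            UserAgentName:    "peerexample",
            UserAgentVersion: "1.0.0",
            Listeners: peer.MessageListeners{
                // react to the remote peer's version message
                OnVersion: func(p *peer.Peer, msg *wire.MsgVersion) {
                    fmt.Println("outbound peer version:", msg.ProtocolVersion)
                },
            },
        }
        p, err := peer.NewOutboundPeer(cfg, "127.0.0.1:8333")
        if err != nil {
            fmt.Println(err)
            return
        }
        // The caller establishes the connection, then hands it to the peer.
        conn, err := net.Dial("tcp", p.Addr())
        if err != nil {
            fmt.Println(err)
            return
        }
        p.Connect(conn)       // start I/O goroutines and protocol negotiation
        p.WaitForDisconnect() // block until disconnection and cleanup complete
    }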