Package sarama is a pure Go client library for dealing with Apache Kafka (versions 0.8 and later). It includes a high-level API for easily producing and consuming messages, and a low-level API for controlling bytes on the wire when the high-level API is insufficient. Usage examples for the high-level APIs are provided inline with their full documentation. To produce messages, use either the AsyncProducer or the SyncProducer. The AsyncProducer accepts messages on a channel and produces them asynchronously in the background as efficiently as possible; it is preferred in most cases. The SyncProducer provides a method which will block until Kafka acknowledges the message as produced. This can be useful but comes with two caveats: it will generally be less efficient, and the actual durability guarantees depend on the configured value of `Producer.RequiredAcks`. There are configurations where a message acknowledged by the SyncProducer can still sometimes be lost. To consume messages, use the Consumer. Note that Sarama's Consumer implementation does not currently support automatic consumer-group rebalancing and offset tracking. For Zookeeper-based tracking (Kafka 0.8.2 and earlier), the https://github.com/wvanbergen/kafka library builds on Sarama to add this support. For Kafka-based tracking (Kafka 0.9 and later), the https://github.com/bsm/sarama-cluster library builds on Sarama to add this support. For lower-level needs, the Broker and Request/Response objects permit precise control over each connection and message sent on the wire; the Client provides higher-level metadata management that is shared between the producers and the consumer. The Request/Response objects and properties are mostly undocumented, as they line up exactly with the protocol fields documented by Kafka at https://cwiki.apache.org/confluence/display/KAFKA/A+Guide+To+The+Kafka+Protocol Metrics are exposed through https://github.com/rcrowley/go-metrics library in a local registry. Broker related metrics: Note that we do not gather specific metrics for seed brokers but they are part of the "all brokers" metrics. Producer related metrics:
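As a brief sketch of the SyncProducer flow described above (broker address and topic are placeholders; Producer.Return.Successes must be true for the SyncProducer):

    config := sarama.NewConfig()
    config.Producer.RequiredAcks = sarama.WaitForAll // strongest durability setting
    config.Producer.Return.Successes = true          // required by the SyncProducer

    producer, err := sarama.NewSyncProducer([]string{"localhost:9092"}, config)
    if err != nil {
        log.Fatalln(err)
    }
    defer producer.Close()

    partition, offset, err := producer.SendMessage(&sarama.ProducerMessage{
        Topic: "my_topic",
        Value: sarama.StringEncoder("testing 123"),
    })
    if err != nil {
        log.Println("FAILED to send message:", err)
    } else {
        log.Printf("message sent to partition %d at offset %d", partition, offset)
    }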
Package libvirt-go-xml defines structs for parsing libvirt XML schemas. The libvirt API uses XML schemas/documents to describe the configuration of many of its managed objects. Thus when using the libvirt-go package, it is often necessary to either parse or format XML documents. This package defines a set of Go structs which have been annotated for use with the encoding/xml API to manage libvirt XML documents. Example creating a domain XML document from configuration: Example parsing a domain XML document, in combination with libvirt-go
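A minimal sketch of both directions, assuming the current import path libvirt.org/go/libvirtxml (older releases used libvirt.org/libvirt-go-xml):

    domcfg := &libvirtxml.Domain{
        Type:   "kvm",
        Name:   "demo",
        Memory: &libvirtxml.DomainMemory{Value: 512, Unit: "MiB"},
    }
    xmldoc, err := domcfg.Marshal() // format a domain XML document
    if err != nil {
        log.Fatal(err)
    }

    parsed := &libvirtxml.Domain{}
    if err := parsed.Unmarshal(xmldoc); err != nil { // parse it back
        log.Fatal(err)
    }
    fmt.Println(parsed.Name)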
Package irma contains generic IRMA structs and logic of use to all IRMA participants. It parses irma_configuration folders into scheme managers, issuers, credential types and public keys; it contains various messages from the IRMA protocol; it parses IRMA metadata attributes; and it contains attribute and credential verification logic.
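A loosely hedged sketch of parsing an irma_configuration folder; the exact NewConfiguration signature has varied between irmago versions, so check it against the release you use:

    // Assumed signature; some versions take an options struct, others a path pair.
    conf, err := irma.NewConfiguration("irma_configuration", irma.ConfigurationOptions{})
    if err != nil {
        log.Fatal(err)
    }
    // Parse scheme managers, issuers, credential types and public keys.
    if err := conf.ParseFolder(); err != nil {
        log.Fatal(err)
    }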
Package letsencrypt provides a module for automatic SSL certificate generation via Let's Encrypt for the modular framework. This module integrates Let's Encrypt ACME protocol support into the modular framework, enabling automatic SSL/TLS certificate provisioning, renewal, and management. It supports multiple challenge types and DNS providers for flexible certificate acquisition. The letsencrypt module provides the following capabilities: The module supports two ACME challenge types: When using DNS-01 challenges, the following providers are supported: The module can be configured through the LetsEncryptConfig structure: The module registers itself as a certificate service: Basic HTTP server with automatic HTTPS: DNS challenge with Cloudflare: The module automatically handles: - Use staging environment for testing to avoid rate limits - Store API credentials securely (environment variables, secrets) - Ensure proper file permissions for certificate storage - Monitor certificate expiration and renewal logs - Use strong private keys (RSA 2048+ or ECDSA P-256+)
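As a purely illustrative sketch (these field names are assumptions, not the verified LetsEncryptConfig schema):

    cfg := &letsencrypt.LetsEncryptConfig{ // hypothetical field names
        Email:       "ops@example.com",
        Domains:     []string{"example.com", "www.example.com"},
        UseStaging:  true, // staging avoids production rate limits during testing
        DNSProvider: "cloudflare",
    }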
Package scheduler provides job scheduling and task execution capabilities for the modular framework. This module implements a flexible job scheduler that supports both immediate and scheduled job execution, configurable worker pools, job persistence, and comprehensive job lifecycle management. It's designed for reliable background task processing in web applications and services. The scheduler module provides the following capabilities: The module registers a scheduler service for dependency injection: Basic job scheduling: Job with custom options:
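A hypothetical sketch of basic job scheduling (the service name and Job fields are assumptions about the module's API):

    var sched scheduler.Scheduler // hypothetical service interface
    if err := app.GetService("scheduler.provider", &sched); err != nil {
        log.Fatal(err)
    }
    sched.ScheduleJob(scheduler.Job{ // hypothetical Job shape
        Name:  "cleanup",
        RunAt: time.Now().Add(10 * time.Minute),
        Execute: func(ctx context.Context) error {
            return purgeExpired(ctx)
        },
    })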
Package httpclient provides a configurable HTTP client module for the modular framework. This module offers a production-ready HTTP client with comprehensive configuration options, request/response logging, connection pooling, timeout management, and request modification capabilities. It's designed for reliable HTTP communication in microservices and web applications. The httpclient module provides the following capabilities: The module can be configured through the Config structure: The module registers itself as a service for dependency injection: Basic HTTP requests: Request modification for authentication: Custom timeout scenarios: When verbose logging is enabled, the module logs detailed request and response information including headers, bodies, and timing data. This is invaluable for debugging API integrations and monitoring HTTP performance. Log output includes: The module is optimized for production use with:
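Sketching basic usage under the assumption that the module exposes a standard *http.Client through the service registry (service name assumed):

    var client *http.Client
    if err := app.GetService("httpclient", &client); err != nil { // name assumed
        log.Fatal(err)
    }
    resp, err := client.Get("https://api.example.com/health")
    if err != nil {
        log.Fatal(err)
    }
    defer resp.Body.Close()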
Package cache provides a flexible caching module for the modular framework. This module supports multiple cache backends including in-memory and Redis, with configurable TTL, cleanup intervals, and connection management. It provides a unified interface for caching operations across different storage engines. The cache module supports the following engines: The module can be configured through the CacheConfig structure: The module registers itself as a service that can be injected into other modules: Basic caching operations:
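A hypothetical sketch of the unified cache operations (method signatures are assumptions to check against the module's interface):

    var c cache.CacheService // hypothetical interface name
    _ = app.GetService("cache.provider", &c)

    _ = c.Set(ctx, "user:42", profile, 5*time.Minute) // value with TTL
    if v, found := c.Get(ctx, "user:42"); found {
        fmt.Println(v)
    }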
Package ssm provides the client and types for making API requests to Amazon Simple Systems Manager (SSM). AWS Systems Manager is a collection of capabilities that helps you automate management tasks such as collecting system inventory, applying operating system (OS) patches, automating the creation of Amazon Machine Images (AMIs), and configuring operating systems (OSs) and applications at scale. Systems Manager lets you remotely and securely manage the configuration of your managed instances. A managed instance is any Amazon EC2 instance or on-premises machine in your hybrid environment that has been configured for Systems Manager. This reference is intended to be used with the AWS Systems Manager User Guide (http://docs.aws.amazon.com/systems-manager/latest/userguide/). To get started, verify prerequisites and configure managed instances. For more information, see Systems Manager Prerequisites (http://docs.aws.amazon.com/systems-manager/latest/userguide/systems-manager-setting-up.html). For information about other API actions you can perform on Amazon EC2 instances, see the Amazon EC2 API Reference (http://docs.aws.amazon.com/AWSEC2/latest/APIReference/). For information about how to use a Query API, see Making API Requests (http://docs.aws.amazon.com/AWSEC2/latest/APIReference/making-api-requests.html). See https://docs.aws.amazon.com/goto/WebAPI/ssm-2014-11-06 for more information on this service. See ssm package documentation for more information. https://docs.aws.amazon.com/sdk-for-go/api/service/ssm/ To use Amazon Simple Systems Manager (SSM) with the SDK, use the New function to create a new service client. With that client you can make API requests to the service. These clients are safe to use concurrently. See the SDK's documentation for more information on how to use the SDK. https://docs.aws.amazon.com/sdk-for-go/api/ See aws.Config documentation for more information on configuring SDK clients. https://docs.aws.amazon.com/sdk-for-go/api/aws/#Config See the Amazon Simple Systems Manager (SSM) client SSM for more information on creating a client for this service. https://docs.aws.amazon.com/sdk-for-go/api/service/ssm/#New
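For example, reading a (possibly encrypted) parameter with the v1 SDK:

    sess := session.Must(session.NewSession())
    svc := ssm.New(sess)

    out, err := svc.GetParameter(&ssm.GetParameterInput{
        Name:           aws.String("/prod/db/password"),
        WithDecryption: aws.Bool(true),
    })
    if err != nil {
        log.Fatal(err)
    }
    fmt.Println(aws.StringValue(out.Parameter.Value))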
Package auth provides authentication and authorization functionality for modular applications. This module supports JWT tokens, session management, and OAuth2 flows. The auth module provides: Usage: The module registers an "auth" service that implements the AuthService interface, providing methods for user login, token validation, and session management. Configuration:
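A hypothetical sketch of the login and validation flow (method names are assumptions; the AuthService interface defines the real ones):

    var authSvc auth.AuthService
    _ = app.GetService("auth", &authSvc)

    token, err := authSvc.Login(ctx, "user@example.com", "s3cret") // assumed method
    if err != nil {
        log.Fatal(err)
    }
    claims, err := authSvc.ValidateToken(ctx, token) // assumed method
    if err != nil {
        log.Fatal(err)
    }
    fmt.Println(claims)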
Package eventbus provides a flexible event-driven messaging system for the modular framework. This module enables decoupled communication between application components through an event bus pattern. It supports both synchronous and asynchronous event processing, multiple event bus engines, and configurable event handling strategies. The eventbus module offers the following capabilities: The module can be configured through the EventBusConfig structure: The module registers itself as a service for dependency injection: Basic event publishing: Event subscription patterns: Subscription management: The module supports different event processing patterns: **Synchronous Processing**: Events are processed immediately in the same goroutine that published them. Best for lightweight operations and when ordering is important. **Asynchronous Processing**: Events are queued and processed by worker goroutines. Best for heavy operations, external API calls, or when you don't want to block the publisher. Currently supported engines:
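A hypothetical sketch of publish and subscribe (names and signatures are assumptions about the module's API):

    var bus eventbus.EventBus
    _ = app.GetService("eventbus.provider", &bus)

    sub, _ := bus.Subscribe(ctx, "user.created", func(ctx context.Context, e eventbus.Event) error {
        log.Printf("got event: %v", e.Payload) // runs sync or async per config
        return nil
    })
    defer sub.Cancel()

    _ = bus.Publish(ctx, eventbus.Event{Topic: "user.created", Payload: map[string]any{"id": 42}})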
Package webgo is a lightweight framework for building web apps. It has its own multiplexer, middleware plugging mechanism and context management. The primary goal of webgo is to stay out of the developer's way as much as possible, i.e. it does not force you to build your app in any particular pattern; instead it just helps you get all the trivial things done faster and easier, e.g. 1. getting named URI parameters, 2. regex matching of URIs in the multiplexer, 3. injecting app-level configurations or other such objects into the request context as required.
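A small sketch of points 1 and 2 (check your webgo version for the exact Context/Params accessors):

    func hello(w http.ResponseWriter, r *http.Request) {
        params := webgo.Context(r).Params() // named URI parameters
        fmt.Fprintf(w, "hello, %s", params["name"])
    }

    func main() {
        router := webgo.NewRouter(
            &webgo.Config{Host: "", Port: "8080"},
            &webgo.Route{
                Name:     "hello",
                Method:   http.MethodGet,
                Pattern:  "/hello/:name",
                Handlers: []http.HandlerFunc{hello},
            },
        )
        router.Start()
    }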
Package rollbar is a Golang Rollbar client that makes it easy to report errors to Rollbar with full stacktraces. Basic Usage This package is designed to be used via the functions exposed at the root of the `rollbar` package. These work by managing a single instance of the `Client` type that is configurable via the setter functions at the root of the package. If you wish for more fine-grained control over the client or you wish to have multiple independent clients then you can create and manage your own instances of the `Client` type. We provide two implementations of the `Transport` interface, `AsyncTransport` and `SyncTransport`. These manage the communication with the network layer. The Async version uses a buffered channel to communicate with the Rollbar API in a separate goroutine. The Sync version is fully synchronous. It is possible to create your own `Transport` and configure a Client to use your preferred implementation. Go does not provide a mechanism for handling all panics automatically, therefore we provide two functions `Wrap` and `WrapAndWait` to make working with panics easier. They both take a function and then report to Rollbar if that function panics. They use the recover mechanism to capture the panic, and therefore if you wish your process to have the normal behaviour on panic (i.e. to crash), you will need to re-panic the result of calling `Wrap`. For example, The above pattern of calling `Wrap(...)` and then `Wait(...)` can be combined via `WrapAndWait(...)`. When `WrapAndWait(...)` returns if there was a panic it has already been sent to the Rollbar API. The error is still returned by this function if there is one. Due to the nature of the `error` type in Go, it can be difficult to attribute errors to their original origin without doing some extra work. To account for this, we define the interface `CauseStacker`: One can implement this interface for custom Error types to be able to build up a chain of stack traces. In order to get the correct stacks, callers must call BuildStack on their own at the time that the cause is wrapped. This is the least intrusive mechanism for gathering this information due to the decisions made by the Go runtime to not track this information.
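The Wrap pattern mentioned above looks roughly like this (Wrap returns the recovered panic value, so re-panic it to keep normal crash behaviour):

    rollbar.SetToken("MY_TOKEN")

    p := rollbar.Wrap(func() {
        // work that may panic
    })
    if p != nil {
        rollbar.Wait() // flush the queued report before crashing
        panic(p)
    }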
Package config provides a powerful, flexible, and extensible configuration management library for Go applications. It supports multiple configuration sources with a clean, type-safe API and Go 1.18+ generics. The library follows a provider-based architecture where each configuration source (environment variables, files, command-line flags) implements the Provider interface. Providers can be combined using the Sequential provider to create layered configuration with priority ordering. The simplest way to get started is with the Manager: The library includes several built-in providers: Use the configutil package for type-safe configuration access: For applications that need global configuration access: Create custom providers by implementing the Provider interface: The library provides specific error types for different scenarios: For frequently accessed configurations, use the memoized provider: See the individual package documentation for detailed usage examples. This library was created and is maintained by Ganesh Nemade. GitHub: https://github.com/gnemade360 For support, questions, or contributions, please visit the GitHub repository.
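A hypothetical sketch of layered providers feeding a Manager (constructor names are assumptions; see the package for the real ones):

    // Hypothetical constructors: later providers override earlier ones.
    mgr := config.NewManager(
        file.NewProvider("config.yaml"),
        env.NewProvider("APP_"),
    )
    port, err := mgr.GetInt("server.port")
    if err != nil {
        log.Fatal(err)
    }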
Package fm provides a pure Go wrapper around macOS Foundation Models framework. Foundation Models is Apple's on-device large language model framework introduced in macOS 26 Tahoe, providing privacy-focused AI capabilities without requiring internet connectivity. • Streaming-first text generation with LanguageModelSession • Simulated real-time response streaming with word/sentence chunks • Dynamic tool calling with custom Go tools and input validation • Structured output generation with JSON formatting • Context window management (4096 token limit) • Context cancellation and timeout support • Session lifecycle management with proper memory handling • System instructions support • Generation options for temperature, max tokens, and other parameters • Structured logging with Go slog integration for comprehensive debugging • macOS 26 Tahoe or later • Apple Intelligence enabled • Compatible Apple Silicon device Create a session and generate text: Control output with GenerationOptions: Create a session with specific behavior: Foundation Models has a strict 4096 token context window. Monitor usage: Define custom tools that the model can call: Add validation to your tools for better error handling: Register and use tools: Generate structured JSON responses: Cancel long-running requests with context support: Generate responses with simulated real-time streaming output: Note: Current streaming implementation is simulated (breaks complete response into chunks). Native streaming will be implemented when Foundation Models provides streaming APIs. Check if Foundation Models is available: The package provides comprehensive error handling: Always release sessions to prevent memory leaks: • Foundation Models runs entirely on-device • No internet connection required • Processing time depends on prompt complexity and device capabilities • Context window is limited to 4096 tokens • Token estimation is approximate (4 chars per token) • Use context cancellation for long-running requests • Input validation prevents runtime errors and improves performance The package is not thread-safe. Use appropriate synchronization when accessing sessions from multiple goroutines. Context cancellation is goroutine-safe and can be used from any goroutine. This package automatically manages the Swift shim library (libFMShim.dylib) that bridges Foundation Models APIs to C functions callable from Go via purego. The library search strategy: 1. Look for existing libFMShim.dylib in current directory and common paths 2. If not found, automatically extract embedded library to temp directory 3. Load the library and initialize the Foundation Models interface No manual setup required - the package is fully self-contained! • Foundation Models API is still evolving • Some advanced GenerationOptions may not be fully supported yet • Foundation Models tool invocation can be inconsistent due to safety restrictions • Context cancellation cannot interrupt actual model computation • Streaming is currently simulated (post-processing) - native streaming pending Apple API support • macOS 26 Tahoe only ✅ **What Works:** • Tool registration and parameter definition • Swift ↔ Go callback mechanism • Real data fetching (weather, calculations, etc.) 
• Error handling and validation • Structured logging with Go slog integration ⚠️ **Foundation Models Behavior:** • Tool calling works but can be inconsistent • Some queries may be blocked by safety guardrails • Success rate varies by tool complexity and phrasing The package provides comprehensive debug logging through Go's slog package: Debug logs include: • Session creation and configuration details • Tool registration and parameter validation • Request/response processing with timing • Context usage and memory management • Swift shim layer interaction details See LICENSE file for details. Package fm provides a pure Go wrapper around macOS Foundation Models framework using purego to call a Swift shim library that exports C functions. Foundation Models (macOS 26 Tahoe) provides on-device LLM capabilities including: - Text generation with LanguageModelSession - Streaming responses via delegates or async sequences - Tool calling with requestToolInvocation:with: - Structured outputs with LanguageModelRequestOptions IMPORTANT: Foundation Models has a strict 4096 token context window limit. This package automatically tracks context usage and validates requests to prevent exceeding the limit. Use GetContextSize(), IsContextNearLimit(), and RefreshSession() to manage long conversations. This implementation uses a Swift shim (libFMShim.dylib) that exports C functions using @_cdecl to bridge Swift async methods to synchronous C calls.
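A loosely hedged sketch of the session lifecycle (exported names here follow the feature list but are assumptions to verify against the package):

    // Assumed names: NewSession, WithInstructions, Respond, Release.
    session, err := fm.NewSession(fm.WithInstructions("You are a helpful assistant."))
    if err != nil {
        log.Fatal(err) // e.g. Foundation Models unavailable on this device
    }
    defer session.Release() // release to avoid leaking the Swift-side session

    ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
    defer cancel()

    reply, err := session.Respond(ctx, "Summarize this package in one sentence.")
    if err != nil {
        log.Fatal(err)
    }
    fmt.Println(reply)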
Package modular provides a flexible, modular application framework for Go. It supports configuration management, dependency injection, service registration, and multi-tenant functionality. The modular framework allows you to build applications composed of independent modules that can declare dependencies, provide services, and be configured individually. Each module implements the Module interface and can optionally implement additional interfaces like Configurable, ServiceAware, Startable, etc. Basic usage: Package modular provides Observer pattern interfaces for event-driven communication. These interfaces use CloudEvents specification for standardized event format and better interoperability with external systems. Package modular provides CloudEvents integration for the Observer pattern. This file provides CloudEvents utility functions and validation for standardized event format and better interoperability. Package modular provides tenant functionality for multi-tenant applications. This file contains tenant-related types and interfaces. The tenant functionality enables a single application instance to serve multiple isolated tenants, each with their own configuration, data, and potentially customized behavior. Key concepts: Example multi-tenant application setup: Package modular provides tenant-aware functionality for multi-tenant applications. This file contains the core tenant service implementation.
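A minimal bootstrap sketch, assuming the standard application and config-provider constructors (module constructors are illustrative):

    // Assumed entry points: NewStdApplication, NewStdConfigProvider.
    app := modular.NewStdApplication(
        modular.NewStdConfigProvider(&AppConfig{}),
        logger,
    )
    app.RegisterModule(database.NewModule())   // illustrative module
    app.RegisterModule(httpserver.NewModule()) // illustrative module
    if err := app.Run(); err != nil {
        logger.Error("application failed", "error", err)
    }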
Package database provides database connectivity and management for modular applications. This module supports multiple database connections with different drivers and provides a unified interface for database operations. The database module features: Usage: The module registers database services that provide access to sql.DB instances and higher-level database operations. Other modules can depend on these services for database access. Configuration:
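Sketching service access, assuming the module exposes a *sql.DB under a registered service name:

    var db *sql.DB
    if err := app.GetService("database.service", &db); err != nil { // name assumed
        log.Fatal(err)
    }
    var n int
    if err := db.QueryRowContext(ctx, "SELECT COUNT(*) FROM users").Scan(&n); err != nil {
        log.Fatal(err)
    }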
Package httpserver provides an HTTP server module for the modular framework. This module offers a complete HTTP server implementation with support for TLS, automatic certificate management, graceful shutdown, and middleware integration. The httpserver module features: Usage: The module registers an HTTP server service that can be used by other modules to register handlers, middleware, or access the underlying server instance. Configuration:
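As an illustrative configuration sketch only (field names are assumptions about the module's config struct):

    cfg := &httpserver.HTTPServerConfig{ // hypothetical field names
        Host:            "0.0.0.0",
        Port:            8443,
        ShutdownTimeout: 30 * time.Second, // graceful shutdown window
    }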
Package chimux provides a Chi-based HTTP router module for the modular framework. This module wraps the popular Go Chi router and integrates it with the modular framework's service system, providing HTTP routing, middleware management, CORS support, and tenant-aware configuration. The chimux module offers the following capabilities: The chimux module requires a TenantApplication to operate. It will return an error if initialized with a regular Application instance. The module can be configured through the ChiMuxConfig structure: The module registers multiple services for different use cases: Basic routing: Advanced routing with Chi features: Middleware integration: The module supports tenant-specific configurations:
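Since chimux wraps Chi, handlers register with chi-style patterns; the service name below is an assumption:

    var router chimux.Router // wraps chi.Router
    _ = app.GetService("chimux.router", &router)

    router.Get("/users/{id}", func(w http.ResponseWriter, r *http.Request) {
        id := chi.URLParam(r, "id") // named parameter, exactly as in plain chi
        fmt.Fprintf(w, "user %s", id)
    })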
Package skipper provides an HTTP routing library with flexible configuration as well as a runtime update of the routing rules. Skipper works as an HTTP reverse proxy that is responsible for mapping incoming requests to multiple HTTP backend services, based on routes that are selected by the request attributes. At the same time, both the requests and the responses can be augmented by a filter chain that is specifically defined for each route. Optionally, it can provide a circuit breaker mechanism individually for each backend host. Skipper can load and update the route definitions from multiple data sources without being restarted. It provides a default executable command with a few built-in filters, however, its primary use case is to be extended with custom filters, predicates or data sources. For further information read 'Extending Skipper'. Skipper took the core design and inspiration from Vulcand: https://github.com/mailgun/vulcand. Skipper is 'go get' compatible. If needed, create a 'go workspace' first: Get the Skipper packages: Create a file with a route: Optionally, verify the syntax of the file: Start Skipper and make an HTTP request: The core of Skipper's request processing is implemented by a reverse proxy in the 'proxy' package. The proxy receives the incoming request, forwards it to the routing engine in order to receive the most specific matching route. When a route matches, the request is forwarded to all filters defined by it. The filters can modify the request or execute any kind of program logic. Once the request has been processed by all the filters, it is forwarded to the backend endpoint of the route. The response from the backend goes once again through all the filters in reverse order. Finally, it is mapped as the response of the original incoming request. Besides the default proxying mechanism, it is possible to define routes without a real network backend endpoint. One of these cases is called a 'shunt' backend, in which case one of the filters needs to handle the request providing its own response (e.g. the 'static' filter). Actually, filters themselves can instruct the request flow to shunt by calling the Serve(*http.Response) method of the filter context. Another case of a route without a network backend is the 'loopback'. A loopback route can be used to match a request, modified by filters, against the lookup tree with different conditions and then execute a different route. One example scenario can be to use a single route as an entry point to execute some calculation to get an A/B testing decision and then match the updated request metadata for the actual destination route. This way the calculation can be executed for only those requests that don't contain information about a previously calculated decision. For further details, see the 'proxy' and 'filters' package documentation. Finding a request's route happens by matching the request attributes to the conditions in the route's definitions. Such definitions may have the following conditions: - method - path (optionally with wildcards) - path regular expressions - host regular expressions - headers - header regular expressions It is also possible to create custom predicates with any other matching criteria. The relation between the conditions in a route definition is 'and', meaning that a request must fulfill each condition to match a route. For further details, see the 'routing' package documentation. Filters are applied in order of definition to the request and in reverse order to the response. 
They are used to modify request and response attributes, such as headers, or execute background tasks, like logging. Some filters may handle the requests without proxying them to service backends. Filters, depending on their implementation, may accept/require parameters that are set specifically for the route. For further details, see the 'filters' package documentation. Each route has one of the following backends: HTTP endpoint, shunt or loopback. Backend endpoints can be any HTTP service. They are specified by their network address, including the protocol scheme, the domain name or the IP address, and optionally the port number: e.g. "https://www.example.org:4242". (The path and query are sent from the original request, or set by filters.) A shunt route means that Skipper handles the request alone and doesn't make requests to a backend service. In this case, it is the responsibility of one of the filters to generate the response. A loopback route executes the routing mechanism on the current state of the request from the start, including the route lookup. This way it serves as a form of an internal redirect. Route definitions consist of the following: - request matching conditions (predicates) - filter chain (optional) - backend (either an HTTP endpoint or a shunt) The eskip package implements the in-memory and text representations of route definitions, including a parser. (Note to contributors: in order to stay compatible with 'go get', the generated part of the parser is stored in the repository. When changing the grammar, 'go generate' needs to be executed explicitly to update the parser.) For further details, see the 'eskip' package documentation. Skipper has filter implementations of basic auth and OAuth2. It can be integrated with tokeninfo-based OAuth2 providers. For details, see: https://godoc.org/github.com/zalando/skipper/filters/auth. Skipper's route definitions are loaded from one or more data sources. It can receive incremental updates from those data sources at runtime. It provides the following data clients: - Kubernetes: Skipper can be used as part of a Kubernetes Ingress Controller implementation together with https://github.com/zalando-incubator/kube-ingress-aws-controller . In this scenario, Skipper uses the Kubernetes API's Ingress extensions as a source for routing. For a complete deployment example, see more details in: https://github.com/zalando-incubator/kubernetes-on-aws/ . - Innkeeper: the Innkeeper service implements a storage for large sets of Skipper routes, with an HTTP+JSON API, OAuth2 authentication and role management. See the 'innkeeper' package and https://github.com/zalando/innkeeper. - etcd: Skipper can load routes and receive updates from etcd clusters (https://github.com/coreos/etcd). See the 'etcd' package. - static file: package eskipfile implements a simple data client, which can load route definitions from a static file in eskip format. Currently, it loads the routes on startup. It doesn't support runtime updates. Skipper can use additional data sources, provided by extensions. Sources must implement the DataClient interface in the routing package. Skipper provides circuit breakers, configured either globally, based on backend hosts or based on individual routes. It supports two types of circuit breaker behavior: open on N consecutive failures, or open on N failures out of M requests. For details, see: https://godoc.org/github.com/zalando/skipper/circuit. 
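Putting the pieces together, a route definition in eskip syntax with a path predicate, one filter and an HTTP backend looks like:

    hello: Path("/hello") -> setResponseHeader("X-Hello", "world") -> "https://www.example.org";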
Skipper can be started with the default executable command 'skipper', or as a library built into an application. The easiest way to start Skipper as a library is to execute the 'Run' function of the current, root package. Each option accepted by the 'Run' function is wired in the default executable as well, as a command line flag. E.g. EtcdUrls becomes -etcd-urls as a comma separated list. For command line help, enter: An additional utility, eskip, can be used to verify, print, update and delete routes from/to files or etcd (Innkeeper on the roadmap). See the cmd/eskip command package, and/or enter in the command line: Skipper doesn't use dynamically loaded plugins, however, it can be used as a library, and it can be extended with custom predicates, filters and/or custom data sources. To create a custom predicate, one needs to implement the PredicateSpec interface in the routing package. Instances of the PredicateSpec are used internally by the routing package to create the actual Predicate objects as referenced in eskip routes, with concrete arguments. Example, randompredicate.go: In the above example, a custom predicate is created, that can be referenced in eskip definitions with the name 'Random': To create a custom filter we need to implement the Spec interface of the filters package. 'Spec' is the specification of a filter, and it is used to create concrete filter instances, while the raw route definitions are processed. Example, hellofilter.go: The above example creates a filter specification, and in the routes where they are included, the filter instances will set the 'X-Hello' header for each and every response. The name of the filter is 'hello', and in a route definition it is referenced as: The easiest way to create a custom Skipper variant is to implement the required filters (as in the example above) by importing the Skipper package, and starting it with the 'Run' command. Example, hello.go: A file containing the routes, routes.eskip: Start the custom router: The 'Run' function in the root Skipper package starts its own listener but it doesn't provide the best composability. The proxy package, however, provides a standard http.Handler, so it is possible to use it in a more complex solution as a building block for routing. Skipper provides detailed logging of failures, and access logs in Apache log format. Skipper also collects detailed performance metrics, and exposes them on a separate listener endpoint for pulling snapshots. For details, see the 'logging' and 'metrics' packages documentation. The router's performance depends on the environment and on the used filters. Under ideal circumstances, and without filters, the biggest time factor is the route lookup. Skipper is able to scale to thousands of routes with logarithmic performance degradation. However, this comes at the cost of increased memory consumption, due to storing the whole lookup tree in a single structure. Benchmarks for the tree lookup can be run by: In case more aggressive scale is needed, it is possible to setup Skipper in a cascade model, with multiple Skipper instances for specific route segments.
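A compact version of the custom filter idea sketched above (verify the filters.Spec and filters.Filter method sets against your Skipper version):

    type helloSpec struct{}

    func (s *helloSpec) Name() string { return "hello" }

    func (s *helloSpec) CreateFilter(args []interface{}) (filters.Filter, error) {
        return &helloFilter{}, nil
    }

    type helloFilter struct{}

    func (f *helloFilter) Request(ctx filters.FilterContext) {}

    func (f *helloFilter) Response(ctx filters.FilterContext) {
        ctx.Response().Header.Set("X-Hello", "world") // set on every response
    }

    func main() {
        log.Fatal(skipper.Run(skipper.Options{
            Address:       ":9090",
            RoutesFile:    "routes.eskip",
            CustomFilters: []filters.Spec{&helloSpec{}},
        }))
    }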
Package template provides a robust starter template for building new Go libraries. This package implements foundational patterns and utilities for Go library development and is designed to help developers quickly scaffold, test, and maintain secure, idiomatic Go code. Key features include: - Built-in support for code quality, testing, and CI/CD workflows - Example functions and best-practice patterns for Go libraries The package is structured for modularity and ease of extension, following Go community conventions. It relies on the Go standard library and popular tools for testing and linting. Usage examples: Important notes: - Assumes Go modules are used for dependency management - No external configuration is required for basic usage - Designed for use as a starting point for new Go projects This package is part of the go-template project and is intended to be copied or forked for new Go library development.
Package pkcs11 provides a Go wrapper for PKCS#11 (Cryptoki) operations with HSM (Hardware Security Module) support. This package simplifies interaction with PKCS#11 compliant devices by providing high-level abstractions for common cryptographic operations including key management, digital signing, encryption/decryption, and key wrapping/unwrapping. To use this package, first create a configuration and establish a connection to the PKCS#11 device: You can also configure the client using environment variables: The following environment variables are supported: Generate an RSA key pair: Generate an ECDSA key pair: Find an existing key pair: Get a signer for digital signatures: Get a decrypter for RSA operations: Generate an AES key: Encrypt and decrypt data: Wrap a key with another key: The package provides comprehensive error handling with typed errors: This package is designed to be thread-safe. The Client maintains proper synchronization for session management and can be used concurrently from multiple goroutines. Asymmetric Key Types: Hash Algorithms (for signing): RSA Padding Schemes: Symmetric Key Types: This package is designed for use with Hardware Security Modules (HSMs) and follows security best practices: For complete API documentation, see the individual type and function documentation.
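A hypothetical connection sketch (constructor and field names are assumptions; see the configuration docs above for the real ones):

    cfg := &pkcs11.Config{ // hypothetical fields
        ModulePath: "/usr/lib/softhsm/libsofthsm2.so",
        TokenLabel: "my-token",
        PIN:        os.Getenv("PKCS11_PIN"),
    }
    client, err := pkcs11.NewClient(cfg) // hypothetical constructor
    if err != nil {
        log.Fatal(err)
    }
    defer client.Close()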
Package workspacesinstances provides the API client, operations, and parameter types for Amazon Workspaces Instances. Amazon WorkSpaces Instances provides an API framework for managing virtual workspace environments across multiple AWS regions, enabling programmatic creation and configuration of desktop infrastructure.
Package entityresolution provides the API client, operations, and parameter types for AWS EntityResolution. Welcome to the Entity Resolution API Reference. Entity Resolution is an Amazon Web Services service that provides pre-configured entity resolution capabilities that enable developers and analysts at advertising and marketing companies to build an accurate and complete view of their consumers. With Entity Resolution, you can match source records containing consumer identifiers, such as name, email address, and phone number. This is true even when these records have incomplete or conflicting identifiers. For example, Entity Resolution can effectively match a source record from a customer relationship management (CRM) system with a source record from a marketing system containing campaign information. To learn more about Entity Resolution concepts, procedures, and best practices, see the Entity Resolution User Guide.
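For example, with the AWS SDK for Go v2 (ListMatchingWorkflows is one of the service's operations):

    cfg, err := config.LoadDefaultConfig(context.TODO())
    if err != nil {
        log.Fatal(err)
    }
    client := entityresolution.NewFromConfig(cfg)

    out, err := client.ListMatchingWorkflows(context.TODO(), &entityresolution.ListMatchingWorkflowsInput{})
    if err != nil {
        log.Fatal(err)
    }
    for _, wf := range out.WorkflowSummaries {
        fmt.Println(aws.ToString(wf.WorkflowName))
    }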
Package elasticache provides the client and types for making API requests to Amazon ElastiCache. Amazon ElastiCache is a web service that makes it easier to set up, operate, and scale a distributed cache in the cloud. With ElastiCache, customers get all of the benefits of a high-performance, in-memory cache with less of the administrative burden involved in launching and managing a distributed cache. The service makes setup, scaling, and cluster failure handling much simpler than in a self-managed cache deployment. In addition, through integration with Amazon CloudWatch, customers get enhanced visibility into the key performance statistics associated with their cache and can receive alarms if a part of their cache runs hot. See https://docs.aws.amazon.com/goto/WebAPI/elasticache-2015-02-02 for more information on this service. See elasticache package documentation for more information. https://docs.aws.amazon.com/sdk-for-go/api/service/elasticache/ To use Amazon ElastiCache with the SDK, use the New function to create a new service client. With that client you can make API requests to the service. These clients are safe to use concurrently. See the SDK's documentation for more information on how to use the SDK. https://docs.aws.amazon.com/sdk-for-go/api/ See aws.Config documentation for more information on configuring SDK clients. https://docs.aws.amazon.com/sdk-for-go/api/aws/#Config See the Amazon ElastiCache client ElastiCache for more information on creating a client for this service. https://docs.aws.amazon.com/sdk-for-go/api/service/elasticache/#New
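For example, listing cache clusters with the v1 SDK:

    sess := session.Must(session.NewSession(&aws.Config{Region: aws.String("us-east-1")}))
    svc := elasticache.New(sess)

    out, err := svc.DescribeCacheClusters(&elasticache.DescribeCacheClustersInput{})
    if err != nil {
        log.Fatal(err)
    }
    for _, c := range out.CacheClusters {
        fmt.Println(aws.StringValue(c.CacheClusterId))
    }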
Package session provides a convenient way to store session data (such as a user ID) securely in a web browser cookie or other authentication token. Cookie values generated by this package use modern authenticated encryption, so they can't be inspected or altered by client processes. Most users of this package will use functions Set and Get, which manage cookies directly. An analogous pair of functions, Encode and Decode, help when the session data will be stored somewhere other than a browser cookie; for example, an API token configured by hand in an API client process.
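Sketching the Set/Get pair in a handler (the Config shape is an assumption; see the package for its actual fields):

    type mySession struct {
        UserID int
    }

    var cfg *session.Config // holds the encryption key(s); fields assumed

    func handler(w http.ResponseWriter, r *http.Request) {
        var s mySession
        if err := session.Get(r, &s, cfg); err != nil {
            s = mySession{UserID: 42} // no valid cookie yet; start fresh
        }
        if err := session.Set(w, s, cfg); err != nil {
            http.Error(w, "session error", http.StatusInternalServerError)
        }
    }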
Package cloudtrail provides the client and types for making API requests to AWS CloudTrail. This is the CloudTrail API Reference. It provides descriptions of actions, data types, common parameters, and common errors for CloudTrail. CloudTrail is a web service that records AWS API calls for your AWS account and delivers log files to an Amazon S3 bucket. The recorded information includes the identity of the user, the start time of the AWS API call, the source IP address, the request parameters, and the response elements returned by the service. As an alternative to the API, you can use one of the AWS SDKs, which consist of libraries and sample code for various programming languages and platforms (Java, Ruby, .NET, iOS, Android, etc.). The SDKs provide a convenient way to create programmatic access to AWS CloudTrail. For example, the SDKs take care of cryptographically signing requests, managing errors, and retrying requests automatically. For information about the AWS SDKs, including how to download and install them, see the Tools for Amazon Web Services page (http://aws.amazon.com/tools/). See the AWS CloudTrail User Guide (http://docs.aws.amazon.com/awscloudtrail/latest/userguide/cloudtrail-user-guide.html) for information about the data that is included with each AWS API call listed in the log files. See https://docs.aws.amazon.com/goto/WebAPI/cloudtrail-2013-11-01 for more information on this service. See cloudtrail package documentation for more information. https://docs.aws.amazon.com/sdk-for-go/api/service/cloudtrail/ To use AWS CloudTrail with the SDK, use the New function to create a new service client. With that client you can make API requests to the service. These clients are safe to use concurrently. See the SDK's documentation for more information on how to use the SDK. https://docs.aws.amazon.com/sdk-for-go/api/ See aws.Config documentation for more information on configuring SDK clients. https://docs.aws.amazon.com/sdk-for-go/api/aws/#Config See the AWS CloudTrail client CloudTrail for more information on creating a client for this service. https://docs.aws.amazon.com/sdk-for-go/api/service/cloudtrail/#New
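For example, looking up recent events with the v1 SDK:

    sess := session.Must(session.NewSession())
    svc := cloudtrail.New(sess)

    out, err := svc.LookupEvents(&cloudtrail.LookupEventsInput{
        MaxResults: aws.Int64(10),
    })
    if err != nil {
        log.Fatal(err)
    }
    for _, e := range out.Events {
        fmt.Println(aws.StringValue(e.EventName))
    }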
Package mresult provides utilities for simplifying mock return values in Go tests. This package helps manage mock results in a clean and type-safe way, reducing boilerplate code when writing table-driven tests with mocks. It supports various return value configurations including single values, multiple values, and error cases. The main types are MResult, MResult0, and MResult2 for handling different numbers of return values, and their corresponding generator functions Generator, Generator0, and Generator2. Example usage:
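A purely hypothetical sketch of the table-driven pattern this enables (the Generator signature is an assumption inferred from the description, not the verified API):

    tests := []struct {
        name   string
        result mresult.MResult[string] // assumed: one value plus an error
    }{
        {name: "ok", result: mresult.Generator("hello", nil)},
        {name: "fails", result: mresult.Generator("", errors.New("boom"))},
    }
    for _, tt := range tests {
        // feed tt.result into the mock's Return(...) setup
        _ = tt
    }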
Package vast_client provides a typed and convenient interface to interact with the VAST Data REST API. It wraps raw HTTP operations in a structured API, exposing high-level methods to manage VAST resources like views, volumes, quotas, snapshots, and more. Each resource is available as a sub-client that supports common CRUD operations (List, Get, GetById, Create, Update, Delete, etc.). The main entry point is the VMSRest client, which is initialized using a VMSConfig configuration struct. This configuration allows customization of connection parameters, credentials (username/password or token), SSL behavior, request timeouts, and request/response hooks.
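A hedged construction sketch (field and sub-client names follow the description above but should be verified against the package):

    client, err := vast_client.NewVMSRest(&vast_client.VMSConfig{ // names assumed
        Host:     "vms.example.com",
        Username: "admin",
        Password: os.Getenv("VMS_PASSWORD"),
    })
    if err != nil {
        log.Fatal(err)
    }
    views, err := client.Views.List(ctx, nil) // sub-client CRUD: List, Get, Create, ...
    if err != nil {
        log.Fatal(err)
    }
    fmt.Println(len(views))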