Package gcs provides an API for building and using a Golomb-coded set filter. A Golomb-Coded Set (GCS) is a space-efficient probabilistic data structure that is used to test set membership with a tunable false positive rate while simultaneously preventing false negatives. In other words, items that are in the set will always match, but items that are not in the set will sometimes match at the chosen false positive rate. This package currently implements two different versions for backwards compatibility. Version 1 is deprecated and therefore should no longer be used. Version 2 is the GCS variation that follows the specification details in DCP0005: https://github.com/decred/dcps/blob/master/dcp-0005/dcp-0005.mediawiki#golomb-coded-sets. Version 2 sets do not permit empty items (data of zero length) to be added and are parameterized by the following: * A parameter `B` that defines the remainder code bit size * A parameter `M` that defines the false positive rate as `1/M` * A key for the SipHash-2-4 function * The items to include in the set Errors returned by this package are of type gcs.Error. This allows the caller to programmatically determine the specific error by examining the ErrorCode field of the type-asserted gcs.Error while still providing rich error messages with contextual information. A convenience function named IsErrorCode is also provided to allow callers to easily check for a specific error code. See ErrorCode in the package documentation for a full list. GCS is used as a mechanism for storing, transmitting, and committing to per-block filters. Consensus-validating full nodes commit to a single filter for every block and serve the filter to SPV clients that match against the filter locally to determine if the block is potentially relevant. The required parameters for Decred are defined by the blockcf2 package. For more details, see the Block Filters section of DCP0005: https://github.com/decred/dcps/blob/master/dcp-0005/dcp-0005.mediawiki#block-filters
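A minimal sketch of the error-handling pattern described above. The buildFilter helper and the ErrMisserialize code used here are illustrative placeholders only; consult the ErrorCode list in this package for the real values.

	filter, err := buildFilter() // hypothetical helper that builds a version 2 filter
	if err != nil {
		// Type assert to gcs.Error to examine the ErrorCode field directly.
		if gcsErr, ok := err.(gcs.Error); ok {
			log.Printf("gcs error code %v: %v", gcsErr.ErrorCode, gcsErr)
		}
		// Or use the IsErrorCode convenience function to check a specific code.
		if gcs.IsErrorCode(err, gcs.ErrMisserialize) {
			// handle this specific condition
		}
		return
	}
	_ = filter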
Package podcast generates a fully compliant iTunes and RSS 2.0 podcast feed for GoLang using a simple API. Full documentation with detailed examples is located at https://godoc.org/github.com/eduncan911/podcast To use, `go get` and `import` the package like your typical GoLang library. The API exposes a number of method receivers on structs that implement the logic required to comply with the specifications and ensure a compliant feed. A number of overrides occur to help with iTunes visibility of your episodes. Notably, the `Podcast.AddItem` function performs most of the heavy lifting by taking the `Item` input and performing validation, overrides and duplicate setters through the feed. Full detailed Examples of the API are at https://godoc.org/github.com/eduncan911/podcast. In no way are you restricted in having full control over your feeds. You may choose to skip the API methods and instead use the structs directly. The fields have been grouped by RSS 2.0 and iTunes fields. iTunes specific fields are all prefixed with the letter `I`. RSS 2.0: https://cyber.harvard.edu/rss/rss.html Podcasts: https://help.apple.com/itc/podcasts_connect/#/itca5b22233 The 1.x branch is now mostly in maintenance mode, open to PRs. This means no further features are planned for the 1.x branch. With the success of 6 iTunes-accepted podcasts I have published with this library, and with the feedback from the community, the 1.x releases are now considered stable. The 2.x branch's primary focus is to allow for bi-directional marshalling. Currently, the 1.x branch only supports marshalling a Podcast out to a serialized feed. An attempt to unmarshal a serialized feed back into Podcast form will error or not work correctly. Note that while the 2.x branch is targeted to remain backwards compatible, this only holds if you use the public API funcs to set parameters. Several of the underlying public fields are being removed in order to accommodate the marshalling of serialized data. Therefore, a version 2.x is denoted for this release. We use the SemVer versioning scheme. You can rest assured that pulling 1.x branches will remain backwards compatible now and into the future. However, the new 2.x branch, while keeping the same API, is expected to break code that bypasses the API methods and uses the underlying public properties instead.
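A minimal feed might be assembled roughly as follows; the field and helper names are recalled from the API and should be confirmed against the godoc link above.

	now := time.Now()
	p := podcast.New("Sample Show", "https://example.com", "A demo feed", &now, &now)

	item := podcast.Item{
		Title:       "Episode 1",
		Description: "The first episode",
		PubDate:     &now,
	}
	item.AddEnclosure("https://example.com/ep1.mp3", podcast.MP3, 55*1024*1024)

	// AddItem performs the validation, overrides and duplicate setters
	// mentioned above before appending the episode to the feed.
	if _, err := p.AddItem(item); err != nil {
		log.Fatal(err)
	}

	// Write the RSS XML to any io.Writer, e.g. an http.ResponseWriter or os.Stdout.
	if err := p.Encode(os.Stdout); err != nil {
		log.Fatal(err)
	}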
Package binding is a middleware that provides request data binding and validation for Chi.
Package scylla implements an efficient shard-aware driver for ScyllaDB. Pass a keyspace and a list of initial node IP addresses to DefaultSessionConfig to create a new cluster configuration: Port can be specified as part of the address; the above is equivalent to: It is recommended to use the value set in the Scylla config for broadcast_address or listen_address, an IP address, not a domain name. This is because events from Scylla will use the configured IP address, which is used to index connected hosts. Then you can customize more options (see SessionConfig): When ready, create a session from the configuration and a context.Context; once the context is done the session will close automatically, stopping requests from being sent and new connections from being made. Don't forget to Close the session once you are done with it if you are not sure the context will be done: CQL protocol uses a SASL-based authentication mechanism and so consists of an exchange of server challenges and client response pairs. The details of the exchanged messages depend on the authenticator used. Currently the driver supports only the default password authenticator, which can be used like this: It is possible to secure traffic between the client and server with TLS; to do so just pass your tls.Config to the session config. For example: The driver by default will route prepared queries to nodes that hold data replicas based on partition key, and non-prepared queries in a round-robin fashion. To route queries to the local DC first, use TokenAwareDCAwarePolicy. For example, if the datacenter you want to primarily connect to is called dc1 (as configured in the database): The driver can only use token-aware routing for queries where all partition key columns are query parameters. For example, instead of use Create queries with Session.Query. Query values can be reused between different executions of the query but must not be modified while a query is executing. To execute a query use Query.Exec: Result rows can be read like this: See Example for a complete example. The driver can prepare DML queries (SELECT/INSERT/UPDATE/DELETE/BATCH statements). CQL protocol does not support preparing other query types. Session is safe to use from multiple goroutines, so to execute multiple concurrent queries, just execute them from several worker goroutines. The driver provides a synchronous-looking API (as recommended for Go APIs) and the queries are executed asynchronously at the protocol level. The driver supports paging of results with automatic prefetch of 1 page, see Query.PageSize and Query.Iter. It is also possible to control the paging manually with Query.PageState. Manual paging is useful if you want to store the page state externally, for example in a URL to allow users to browse pages in a result. You might want to sign/encrypt the paging state when exposing it externally since it contains data from primary keys. Paging state is specific to the CQL protocol version and the exact query used. It is meant as opaque state that should not be modified. If you send paging state from a different query or protocol version, then the behaviour is not defined (you might get unexpected results or an error from the server). For example, do not send paging state returned by a node using protocol version 3 to a node using protocol version 4. Also, when using protocol version 4, paging state between Cassandra 2.2 and 3.0 is incompatible (https://issues.apache.org/jira/browse/CASSANDRA-10880). The driver does not check whether the paging state is from the same protocol version/statement.
You might want to validate this yourself, as it could be a problem if you store paging state externally. For example, if you store paging state in a URL, the URLs might become broken when you upgrade your cluster. Call Query.PageState(nil) to fetch just the first page of the query results. Pass the page state returned in Result.PageState by Query.Exec to Query.PageState of a subsequent query to get the next page. If the length of the slice in Result.PageState is zero, there are no more pages available (or an error occurred). Using too low a value for PageSize will negatively affect performance; a value below 100 is probably too low. While Scylla returns exactly PageSize items (except for the last page) in a page currently, the protocol authors explicitly reserved the right to return a smaller or larger number of items in a page for performance reasons, so don't rely on the page having the exact count of items. Queries can be marked as idempotent. Marking the query as idempotent tells the driver that the query can be executed multiple times without affecting its result. Non-idempotent queries are not eligible for retrying or speculative execution. Idempotent queries are retried in case of errors based on the configured RetryPolicy. If you need to use a custom RetryPolicy or HostSelectionPolicy, please see the transport package documentation.
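Putting the pieces above together, a rough sketch of session setup and manual paging might look like the following; DefaultSessionConfig, Query.Exec, Query.PageState and Result.PageState are the names used in this documentation, while the NewSession constructor and the exact Result fields are assumptions to verify against the package index.

	cfg := scylla.DefaultSessionConfig("mykeyspace", "192.168.1.1:9042", "192.168.1.2:9042")

	ctx := context.Background()
	session, err := scylla.NewSession(ctx, cfg) // constructor name assumed
	if err != nil {
		log.Fatal(err)
	}
	defer session.Close()

	q := session.Query("SELECT id, value FROM mykeyspace.mytable")
	var pageState []byte
	for {
		q.PageState(pageState) // nil fetches the first page
		res, err := q.Exec(ctx)
		if err != nil {
			log.Fatal(err)
		}
		// ... read the rows of this page from res here ...
		if len(res.PageState) == 0 {
			break // no more pages (or an error occurred)
		}
		pageState = res.PageState
	}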
Package toml provides facilities for decoding and encoding TOML configuration files via reflection. There is also support for delaying decoding with the Primitive type, and querying the set of keys in a TOML document with the MetaData type. The specification implemented: https://github.com/toml-lang/toml The sub-command github.com/BurntSushi/toml/cmd/tomlv can be used to verify whether a file is a valid TOML document. It can also be used to print the type of each key in a TOML document. There are two important types of tests used for this package. The first is contained inside '*_test.go' files and uses the standard Go unit testing framework. These tests are primarily devoted to holistically testing the decoder and encoder. The second type of testing is used to verify the implementation's adherence to the TOML specification. These tests have been factored into their own project: https://github.com/BurntSushi/toml-test The reason the tests are in a separate project is so that they can be used by any implementation of TOML. Namely, it is language agnostic. Example StrictDecoding shows how to detect whether there are keys in the TOML document that weren't decoded into the value given. This is useful for returning an error to the user if they've included extraneous fields in their configuration. Example UnmarshalTOML shows how to implement a struct type that knows how to unmarshal itself. The struct must take full responsibility for mapping the values passed into the struct. The method may be used with interfaces in a struct in cases where the actual type is not known until the data is examined. Example Unmarshaler shows how to decode TOML strings into your own custom data type.
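A condensed sketch of decoding plus the strict-decoding check described above:

	type config struct {
		Title string
		Owner struct {
			Name string
		}
	}

	doc := `
	title = "example"
	[owner]
	name = "Anonymous"
	extra = "not in the struct"
	`

	var conf config
	md, err := toml.Decode(doc, &conf)
	if err != nil {
		log.Fatal(err)
	}
	// MetaData.Undecoded reports keys present in the document that were not
	// mapped onto conf, which is how extraneous fields can be rejected.
	if undecoded := md.Undecoded(); len(undecoded) > 0 {
		log.Printf("undecoded keys: %q", undecoded)
	}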
Package arigo is a library to communicate with the aria2 RPC interface. aria2 is a utility for downloading files. The supported protocols are HTTP(S), FTP, SFTP, BitTorrent, and Metalink. aria2 can download a file from multiple sources/protocols and tries to utilize your maximum download bandwidth. It supports downloading a file from HTTP(S)/FTP /SFTP and BitTorrent at the same time, while the data downloaded from HTTP(S)/FTP/SFTP is uploaded to the BitTorrent swarm. Using Metalink chunk checksums, aria2 automatically validates chunks of data while downloading a file. You can read more about aria2 here: https://aria2.github.io/
Package jsonschema provides JSON Schema validation capabilities for the modular framework. This module integrates JSON Schema validation into the modular framework, allowing applications to validate JSON data against predefined schemas. It supports schema compilation from files or URLs and provides multiple validation methods for different data sources. The jsonschema module provides the following capabilities: Schemas can be loaded from various sources: The module registers a JSON schema service for dependency injection: Schema compilation and basic validation: Validating different data sources: HTTP API validation example: Configuration validation: User schema example (user.json): The module provides detailed error information for validation failures, including the specific path and reason for each validation error. This helps in providing meaningful feedback to users and debugging schema issues.
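Because the concrete service API is not reproduced here, the following is only an illustrative sketch; the service variable and the CompileSchema/ValidateBytes methods are hypothetical placeholders for whatever the registered JSON schema service actually exposes.

	// Hypothetical usage of an injected schema service - method names are
	// placeholders, not the module's actual API.
	schema, err := schemaService.CompileSchema("schemas/user.json")
	if err != nil {
		log.Fatal(err)
	}
	if err := schemaService.ValidateBytes(schema, []byte(`{"name": "Ada"}`)); err != nil {
		// The error is expected to carry the path and reason for each failure.
		log.Printf("invalid payload: %v", err)
	}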
Package siris is a fully-featured HTTP/2 backend web framework written entirely in Google's Go Language. Source code and other details for the project are available at GitHub: The only requirement is the Go Programming Language, at least version 1.8. Example code: All HTTP methods are supported, and developers can also register handlers for the same paths with different methods. The first parameter is the HTTP Method, the second parameter is the request path of the route, and the third variadic parameter should contain one or more context.Handler executed in the registered order when a user requests that specific resource path from the server. Example code: In order to make things easier for the user, Siris provides functions for all HTTP Methods. The first parameter is the request path of the route, and the second variadic parameter should contain one or more context.Handler executed in the registered order when a user requests that specific resource path from the server. Example code: A set of routes that are grouped by path prefix can (optionally) share the same middleware handlers and template layout. A group can have a nested group too. `.Party` is used to group routes, and developers can declare an unlimited number of (nested) groups. Example code: Siris developers are able to register their own handlers for HTTP statuses like 404 not found, 500 internal server error and so on. Example code: With the help of Siris's expressionist router you can build any form of API you desire, with safety. Example code: In the previous example, we've seen static routes, groups of routes, subdomains, wildcard subdomains, a small example of a parameterized path with a single known parameter and custom HTTP errors; now it's time to see wildcard parameters and macros. Siris, like the net/http std package, registers a route's handlers by a Handler; the Siris type of handler is just a func(ctx context.Context) where context comes from github.com/go-siris/siris/context. Until go 1.9 you will have to import that package too; after go 1.9 this will not be necessary. Siris has the easiest and the most powerful routing process you have ever met. At the same time, Siris has its own interpreter (yes, like a programming language) for the route's path syntax and its dynamic path parameter parsing and evaluation; I am calling them "macros" for short. How? It calculates its needs and, if no special regexp is needed, it just registers the route with the low-level path syntax; otherwise it pre-compiles the regexp and adds the necessary middleware(s). Standard macro types for parameters: if the type is missing then the parameter's type defaults to string, so {param} == {param:string}. If a function is not found on that type then the "string" type's functions are used. i.e: Besides the fact that Siris provides the basic types and some default "macro funcs", you are able to register your own too. Register a named path parameter function: at the func(argument ...) you can have any standard type; it will be validated before the server starts, so don't worry about performance here, the only thing it runs at serve time is the returning func(paramValue string) bool. Example code: A path parameter name should contain only alphabetical letters; symbols (including '_') and numbers are NOT allowed. If a route fails to be registered, the app will panic without any warnings if you didn't catch the second return value (error) on .Handle/.Get.... Last, do not confuse ctx.Values() with ctx.Params().
Path parameters' values go to ctx.Params(), while the context's local storage, which can be used to communicate between handlers and middleware(s), goes to ctx.Values(); path parameters and the rest of any custom values are separated for your own good. Run Static Files Example code: More examples can be found here: https://github.com/go-siris/siris/tree/master/_examples/beginner/file-server Middleware is just a concept of an ordered chain of handlers. Middleware can be registered globally, per-party, per-subdomain and per-route. Example code: Siris is able to wrap and convert any external, third-party Handler you used to use in your web application. Let's convert the https://github.com/rs/cors net/http external middleware which returns a `next form` handler. Example code: Siris supports 5 template engines out-of-the-box; developers can still use any external golang template engine, as `context.ResponseWriter()` is an `io.Writer`. All of these five template engines have common features with a common API, like Layout, Template Funcs, Party-specific layout, partial rendering and more. Example code: The view engine supports bundled (https://github.com/jteeuwen/go-bindata) template files too. go-bindata gives you two functions, asset and assetNames; these can be set on each of the template engines using the `.Binary` func. Example code: A real example can be found here: https://github.com/go-siris/siris/tree/master/_examples/intermediate/view/embedding-templates-into-app. Enable auto-reloading of templates on each request. Useful while developers are in dev mode as they don't need to restart their app on every template edit. Example code: Each one of these template engines has different options, located here: https://github.com/go-siris/siris/tree/master/view . This example will show how to store and access data from a session. You don't need any third-party library, but if you want you can use any session manager, compatible or not. In this example we will only allow authenticated users to view our secret message on the /secret page. To get access to it, they will first have to visit /login to get a valid session cookie, which logs them in. Additionally they can visit /logout to revoke their access to our secret message. Example code: Running the example: You should have a basic idea of the framework by now; we just scratched the surface. If you enjoy what you just saw and want to learn more, please follow the below links: Examples: Built-in Middleware: Community Middleware: Home Page:
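A tiny, hedged example of the routing described above; the exact signatures are recalled from the iris lineage this framework forked from, so verify them against the linked _examples.

	package main

	import (
		"github.com/go-siris/siris"
		"github.com/go-siris/siris/context"
	)

	func main() {
		app := siris.New()

		// {name} is a string macro parameter; its value is read from ctx.Params().
		app.Get("/hello/{name}", func(ctx context.Context) {
			ctx.Writef("Hello %s", ctx.Params().Get("name"))
		})

		// Serve static files from ./assets under the /static request path.
		app.StaticWeb("/static", "./assets")

		app.Run(siris.Addr(":8080"))
	}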
Package fm provides a pure Go wrapper around macOS Foundation Models framework. Foundation Models is Apple's on-device large language model framework introduced in macOS 26 Tahoe, providing privacy-focused AI capabilities without requiring internet connectivity. • Streaming-first text generation with LanguageModelSession • Simulated real-time response streaming with word/sentence chunks • Dynamic tool calling with custom Go tools and input validation • Structured output generation with JSON formatting • Context window management (4096 token limit) • Context cancellation and timeout support • Session lifecycle management with proper memory handling • System instructions support • Generation options for temperature, max tokens, and other parameters • Structured logging with Go slog integration for comprehensive debugging • macOS 26 Tahoe or later • Apple Intelligence enabled • Compatible Apple Silicon device Create a session and generate text: Control output with GenerationOptions: Create a session with specific behavior: Foundation Models has a strict 4096 token context window. Monitor usage: Define custom tools that the model can call: Add validation to your tools for better error handling: Register and use tools: Generate structured JSON responses: Cancel long-running requests with context support: Generate responses with simulated real-time streaming output: Note: Current streaming implementation is simulated (breaks complete response into chunks). Native streaming will be implemented when Foundation Models provides streaming APIs. Check if Foundation Models is available: The package provides comprehensive error handling: Always release sessions to prevent memory leaks: • Foundation Models runs entirely on-device • No internet connection required • Processing time depends on prompt complexity and device capabilities • Context window is limited to 4096 tokens • Token estimation is approximate (4 chars per token) • Use context cancellation for long-running requests • Input validation prevents runtime errors and improves performance The package is not thread-safe. Use appropriate synchronization when accessing sessions from multiple goroutines. Context cancellation is goroutine-safe and can be used from any goroutine. This package automatically manages the Swift shim library (libFMShim.dylib) that bridges Foundation Models APIs to C functions callable from Go via purego. The library search strategy: 1. Look for existing libFMShim.dylib in current directory and common paths 2. If not found, automatically extract embedded library to temp directory 3. Load the library and initialize the Foundation Models interface No manual setup required - the package is fully self-contained! • Foundation Models API is still evolving • Some advanced GenerationOptions may not be fully supported yet • Foundation Models tool invocation can be inconsistent due to safety restrictions • Context cancellation cannot interrupt actual model computation • Streaming is currently simulated (post-processing) - native streaming pending Apple API support • macOS 26 Tahoe only ✅ **What Works:** • Tool registration and parameter definition • Swift ↔ Go callback mechanism • Real data fetching (weather, calculations, etc.) 
• Error handling and validation • Structured logging with Go slog integration ⚠️ **Foundation Models Behavior:** • Tool calling works but can be inconsistent • Some queries may be blocked by safety guardrails • Success rate varies by tool complexity and phrasing The package provides comprehensive debug logging through Go's slog package: Debug logs include: • Session creation and configuration details • Tool registration and parameter validation • Request/response processing with timing • Context usage and memory management • Swift shim layer interaction details See LICENSE file for details. Package fm provides a pure Go wrapper around macOS Foundation Models framework using purego to call a Swift shim library that exports C functions. Foundation Models (macOS 26 Tahoe) provides on-device LLM capabilities including: - Text generation with LanguageModelSession - Streaming responses via delegates or async sequences - Tool calling with requestToolInvocation:with: - Structured outputs with LanguageModelRequestOptions IMPORTANT: Foundation Models has a strict 4096 token context window limit. This package automatically tracks context usage and validates requests to prevent exceeding the limit. Use GetContextSize(), IsContextNearLimit(), and RefreshSession() to manage long conversations. This implementation uses a Swift shim (libFMShim.dylib) that exports C functions using @_cdecl to bridge Swift async methods to synchronous C calls.
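An illustrative sketch only: the NewSession, Respond and Release names below are assumptions standing in for the flow described above (create a session, generate text, always release), not confirmed symbols of this package.

	// Hypothetical names - check the package index for the real constructor
	// and method names.
	session := fm.NewSession()
	defer session.Release() // the docs stress releasing sessions to avoid leaks

	ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
	defer cancel()

	reply, err := session.Respond(ctx, "Summarize this package in one sentence.")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(reply)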
Package modular provides a flexible, modular application framework for Go. It supports configuration management, dependency injection, service registration, and multi-tenant functionality. The modular framework allows you to build applications composed of independent modules that can declare dependencies, provide services, and be configured individually. Each module implements the Module interface and can optionally implement additional interfaces like Configurable, ServiceAware, Startable, etc. Basic usage: Package modular provides Observer pattern interfaces for event-driven communication. These interfaces use CloudEvents specification for standardized event format and better interoperability with external systems. Package modular provides CloudEvents integration for the Observer pattern. This file provides CloudEvents utility functions and validation for standardized event format and better interoperability. Package modular provides tenant functionality for multi-tenant applications. This file contains tenant-related types and interfaces. The tenant functionality enables a single application instance to serve multiple isolated tenants, each with their own configuration, data, and potentially customized behavior. Key concepts: Example multi-tenant application setup: Package modular provides tenant-aware functionality for multi-tenant applications. This file contains the core tenant service implementation.
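A rough sketch of the basic flow; the constructor and method names below (NewStdApplication, NewStdConfigProvider, RegisterModule, Run) are recollections that should be verified against the framework's reference, and AppConfig and MyModule are placeholder types defined by the application.

	app := modular.NewStdApplication(
		modular.NewStdConfigProvider(&AppConfig{}), // application-defined config struct
		logger,                                     // an application-provided logger
	)

	// MyModule implements modular.Module (and optionally Configurable,
	// ServiceAware, Startable, ...).
	app.RegisterModule(&MyModule{})

	if err := app.Run(); err != nil {
		log.Fatal(err)
	}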
Package nject is a general purpose dependency injection framework. It provides wrapping, pruning, and indirect variable passing. It is type safe and using it requires no type assertions. There are two main injection APIs: Run and Bind. Bind is designed to be used at program initialization and does as much work as possible up front rather than during main execution. The API for nject is a list of providers (injectors) that are run in order. The final function in the list must be called. The other functions are called if their value is consumed by a later function that must be called. Here is a simple example: In this example, context.Background and log.Default are not invoked because their outputs are not used by the final function (http.ListenAndServe). The basic idea of nject is to assemble a Collection of providers and then use that collection to supply inputs for functions that may use some or all of the provided types. One big win from dependency injection with nject is the ability to reshape various different functions into a single signature. For example, having a bunch of functions with different APIs all bound as http.HandlerFunc is easy. Providers produce or consume data. The data is distinguished by its type. If you want three different strings, then define three different types: Then you can have a function that does things with the three types: The above function would be a valid injector or final function in a provider Collection. For example: This creates a sequence and executes it. Run injects a myFirst value and the sequence of providers runs: genSecond() injects a mySecond and myStringFunc() combines the myFirst and mySecond to create a myThird. Then the function given to Run saves that final value. The expected output is Providers are grouped into linear sequences. When building an injection chain, the providers are grouped into several sets: LITERAL, STATIC, RUN. The LITERAL and STATIC sets run once per initialization. The RUN set runs once per invocation. Providers within a set are executed in the order that they were originally specified. Providers whose outputs are not consumed are omitted unless they are marked Required(). Collections are bound with Bind(&invocationFunction, &initializationFunction). The invocationFunction is expected to be used over and over, but the initializationFunction is expected to be used less frequently. The STATIC set is re-invoked each time the initialization function is run. The LITERAL set is just the literal values in the collection. The STATIC set is composed of the cacheable injectors. The RUN set is everything else. All injectors have the following type signature: None of the input or output parameters may be anonymously-typed functions. An anonymously-typed function is a function without a named type. Injectors whose output values are not used by a downstream handler are dropped from the handler chain. They are not invoked. Injectors that have no output values are a special case and they are always retained in the handler chain. An injector that is annotated as Cacheable() may be promoted to the STATIC set. An injector that is annotated as MustCache() must be promoted to the STATIC set: if it cannot be promoted then the collection is deemed invalid. An injector may not be promoted to the STATIC set if it takes as input data that comes from a provider that is not in the STATIC or LITERAL sets.
For example, arguments to the invocation function: if the invoke function takes an int as one of its inputs, then no injector that takes an int as an argument may be promoted to the STATIC set. Injectors in the STATIC set will be run exactly once per set of input values. If the inputs are consistent, then the output will be a singleton. This is true across injection chains. If the following provider is used in multiple chains, as long as the same integer is injected, all chains will share the same pointer. Injectors in the STATIC set are only run for initialization. For some things, like opening a database, that may still be too often. Injectors that are marked Memoized must be promoted to the STATIC set. Memoized injectors are only run once per combination of inputs. Their outputs are remembered. If called enough times with different arguments, memory will be exhausted. Memoized injectors may not have more than 90 inputs. Memoized injectors may not have any inputs that are go maps, slices, or functions. Arrays, structs, and interfaces are okay. This requirement is recursive so a struct that has a slice in it is not okay. Fallible injectors are special injectors that change the behavior of the injection chain if they return error. Fallible injectors in the RUN set that return error will terminate execution of the injection chain. A non-wrapper function that returns nject.TerminalError is a fallible injector. The TerminalError does not have to be the last return value. The nject package converts TerminalError objects into error objects so only the fallible injector should use TerminalError. Anything that consumes the TerminalError should do so by consuming error instead. Fallible injectors can be in both the STATIC set and the RUN set. Their behavior is a bit different. If a non-nil value is returned as the TerminalError from a fallible injector in the RUN set, none of the downstream providers will be called. The provider chain returns from that point with the TerminalError as a return value. Since all return values must be consumed by a middleware provider or the bound invoke function, fallible injectors must come downstream from a middleware handler that takes error as a returned value if the invoke function (function that runs a bound injection chain) does not return error. If a fallible injector returns nil for the TerminalError, the other output values are made available for downstream handlers to consume. The other output values are not considered return values and are not available to be consumed by upstream middleware handlers. The error returned by a fallible injector is not available downstream. If a non-nil value is returned as the TerminalError from a fallible injector in the STATIC set, the rest of the STATIC set will be skipped. If there is an init function and it returns error, then the value returned by the fallible injector will be returned via the init function. Unlike fallible injectors in the RUN set, the error output by a fallible injector in the STATIC set is available downstream (but only in the RUN set -- nothing else in the STATIC set will execute). Some examples: A wrap function interrupts the linear sequence of providers. It may or may not invoke the remainder of the sequence that comes after it. The remainder of the sequence is provided to the wrap function as a function that it may call. The type signature of a wrap function is a function that receives a function as its first parameter.
That function must be of an anonymous type: For example: When this wrapper function runs, it is responsible for invoking the rest of the provider chain. It does this by calling inner(). The parameters to inner are available as inputs to downstream providers. The value(s) returned by inner come from the return values of other wrapper functions and from the return value(s) of the final function. Wrap functions can call inner() zero or more times. The values returned by wrap functions must be consumed by another upstream wrap function or by the init function (if using Bind()). Wrap functions have a small amount of runtime overhead compared to other kinds of functions: one call to reflect.MakeFunc(). Wrap functions serve the same role as middleware, but are usually easier to write. Wrap functions that invoke inner() multiple times in parallel are not well supported at this time and such invocations must have the wrap function decorated with Parallel(). Final functions are simply the last provider in the chain. They look like regular Go functions. Their input parameters come from other providers. Their return values (if any) must be consumed by an upstream wrapper function or by the init function (if using Bind()). Wrap functions that return error should take error as a returned value so that they do not mask a downstream error. Wrap functions should not return TerminalError because they internally control if the downstream chain is called. Literal values are values in the provider chain that are not functions. Provider chains can be invalid for many reasons: inputs of a type not provided earlier in the chain; annotations that cannot be honored (e.g. MustCache & Memoize); return values that are not consumed; functions that take or return functions with an anonymous type other than wrapper functions; a chain that does not terminate with a function; etc. Bind() and Run() will return error when presented with an invalid provider chain. Bind() and Run() will return error rather than panic. After Bind()ing an init and invoke function, calling them will not panic unless a provider panic()s. A wrapper function can be used to catch panics and turn them into errors. When doing that, it is important to propagate any errors that are coming up the chain. If there is no guaranteed function that will return error, one can be added with Shun(). Bind() uses a complex and somewhat expensive O(n^2) set of rules to evaluate which providers should be included in a chain and which can be dropped. The goal is to keep the ones you want and remove the ones you don't want. Bind() tries to figure this out based on the dependencies and the annotations. MustConsume, not Desired: Only include if at least one output is transitively consumed by a Required or Desired chain element and all outputs are consumed by some other provider. Not MustConsume, not Desired: only include if at least one output is transitively consumed by a Required or Desired provider. Not MustConsume, Desired: Include if all inputs are available. MustConsume, Desired: Only include if all outputs are transitively consumed by a Required or Desired chain element. When there are multiple providers of a type, Bind() tries to get it from the closest provider. Providers that have unmet dependencies will be eliminated from the chain unless they're Required. The remainder of this document consists of suggestions for how to use nject. Contributions to this section would be welcome, as would links to blogs or other discussions of using nject in practice.
The best practice for using nject inside a large project is to have a few common chains that everyone imports. Most of the time, these common chains will be early in the sequence of providers. Customization of the import chains happens in many places. This is true for services, libraries, and tests. For tests, a wrapper that includes the standard chain makes it easier to write tests. See github.com/memsql/ntest for helper functions and more examples. If nject cannot bind or run a chain, it will return an error. The returned error is generally very good, but it does not contain the full debugging output. The full debugging output can be obtained with the DetailedError function. If the detailed error shows that nject has a bug, note that part of the debug output includes a regression test that can be turned into an nject issue. Remove the comments to hide the original type names. The Reorder() decorator allows injection chains to be fully or partially reordered. Reorder is currently limited to a single pass and does not know which injectors are ultimately going to be included in the final chain. It is likely that if you mark your entire chain with Reorder, you'll have unexpected results. On the other hand, Reorder provides a safe and easy way to solve some common problems. For example: providing optional options to an injected dependency. Because the default options are marked as Shun, they'll only be included if they have to be included. If a user of thingChain wants to override the options, they simply need to mark their override as Reorder. To make this extra friendly, a helper function to do the override can be provided and used. Recommended best practice is to have injectors shut down the things they themselves start. They should do their own cleanup. Inside tests, an injector can use t.Cleanup() for this. For services, something like t.Cleanup can easily be built: Alternatively, any wrapper function can do its own cleanup in a defer that it defines. Wrapper functions have a small runtime performance penalty, so if you have more than a couple of providers that need cleanup, it makes sense to include something like CleaningService. The normal direction of forced inclusion is that an upstream provider is required because a downstream provider uses a type produced by the upstream provider. There are times when the relationship needs to be reversed. For example, a type gets modified by a downstream injector. The simplest option is to combine the providers into one function. Another possibility is to mark the upstream provider with MustConsume and have it produce a type that is only consumed by the downstream provider. Lastly, the providers can be grouped with Cluster so that they'll be included or excluded as a group. Example shows what gets included and what does not for several injection chains. These examples are meant to show the subtlety of what gets included and why. This example explores injecting a database handle or transaction only when they're used.
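A compact version of the myFirst/mySecond/myThird example described earlier, written against the Run API; the provider bodies are the obvious ones implied by that description.

	type myFirst string
	type mySecond string
	type myThird string

	genSecond := func(f myFirst) mySecond { return mySecond(f + "-second") }
	myStringFunc := func(f myFirst, s mySecond) myThird {
		return myThird(string(f) + "/" + string(s))
	}

	var result myThird
	err := nject.Run("example",
		myFirst("start"), // a literal value injected into the chain
		genSecond,
		myStringFunc,
		func(t myThird) { result = t }, // final function: always invoked
	)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(result) // start/start-second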
Package bolt implements a low-level key/value store in pure Go. It supports fully serializable transactions, ACID semantics, and lock-free MVCC with multiple readers and a single writer. Bolt can be used for projects that want a simple data store without the need to add large dependencies such as Postgres or MySQL. Bolt is a single-level, zero-copy, B+tree data store. This means that Bolt is optimized for fast read access and does not require recovery in the event of a system crash. Transactions which have not finished committing will simply be rolled back in the event of a crash. The design of Bolt is based on Howard Chu's LMDB database project. Bolt currently works on Windows, Mac OS X, and Linux. There are only a few types in Bolt: DB, Bucket, Tx, and Cursor. The DB is a collection of buckets and is represented by a single file on disk. A bucket is a collection of unique keys that are associated with values. Transactions provide either read-only or read-write access to the database. Read-only transactions can retrieve key/value pairs and can use Cursors to iterate over the dataset sequentially. Read-write transactions can create and delete buckets and can insert and remove keys. Only one read-write transaction is allowed at a time. The database uses a read-only, memory-mapped data file to ensure that applications cannot corrupt the database, however, this means that keys and values returned from Bolt cannot be changed. Writing to a read-only byte slice will cause Go to panic. Keys and values retrieved from the database are only valid for the life of the transaction. When used outside the transaction, these byte slices can point to different data or can point to invalid memory which will cause a panic.
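A minimal usage sketch of the types described above:

	db, err := bolt.Open("my.db", 0600, nil)
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	// Read-write transaction: create a bucket and store a key.
	err = db.Update(func(tx *bolt.Tx) error {
		b, err := tx.CreateBucketIfNotExists([]byte("widgets"))
		if err != nil {
			return err
		}
		return b.Put([]byte("foo"), []byte("bar"))
	})
	if err != nil {
		log.Fatal(err)
	}

	// Read-only transaction: the returned slice is only valid for the life of
	// the transaction, so copy it if it must outlive the View call.
	err = db.View(func(tx *bolt.Tx) error {
		v := tx.Bucket([]byte("widgets")).Get([]byte("foo"))
		fmt.Printf("foo=%s\n", v)
		return nil
	})
	if err != nil {
		log.Fatal(err)
	}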
Package autofiber provides a FastAPI-like wrapper for the Fiber web framework. It enables automatic request parsing, validation, and OpenAPI/Swagger documentation generation. Package autofiber provides OpenAPI 3.0 specification generation for automatic API documentation. Package autofiber provides OpenAPI/Swagger documentation configuration and serving utilities. Package autofiber provides route group functionality with automatic request parsing, validation, and documentation generation. Package autofiber provides handler creation utilities for automatic request parsing, validation, and response handling. Package autofiber provides map and interface parsing utilities for converting data structures to Go structs. Package autofiber provides middleware functions for automatic request parsing, validation, and response handling. Package autofiber provides route configuration options for building APIs with automatic parsing, validation, and documentation. Package autofiber provides request parsing utilities for extracting and validating data from multiple sources. Package autofiber provides HTTP route registration methods with automatic request parsing, validation, and documentation generation. Package autofiber provides core types and configuration for the AutoFiber web framework. Package autofiber provides response validation utilities for ensuring API responses match expected schemas.
Package gocql implements a fast and robust Cassandra driver for the Go programming language. Pass a list of initial node IP addresses to NewCluster to create a new cluster configuration: Port can be specified as part of the address, the above is equivalent to: It is recommended to use the value set in the Cassandra config for broadcast_address or listen_address, an IP address not a domain name. This is because events from Cassandra will use the configured IP address, which is used to index connected hosts. If the domain name specified resolves to more than 1 IP address then the driver may connect multiple times to the same host, and will not mark the node being down or up from events. Then you can customize more options (see ClusterConfig): The driver tries to automatically detect the protocol version to use if not set, but you might want to set the protocol version explicitly, as it's not defined which version will be used in certain situations (for example during upgrade of the cluster when some of the nodes support different set of protocol versions than other nodes). The driver advertises the module name and version in the STARTUP message, so servers are able to detect the version. If you use replace directive in go.mod, the driver will send information about the replacement module instead. When ready, create a session from the configuration. Don't forget to Close the session once you are done with it: CQL protocol uses a SASL-based authentication mechanism and so consists of an exchange of server challenges and client response pairs. The details of the exchanged messages depend on the authenticator used. To use authentication, set ClusterConfig.Authenticator or ClusterConfig.AuthProvider. PasswordAuthenticator is provided to use for username/password authentication: By default, PasswordAuthenticator will attempt to authenticate regardless of what implementation the server returns in its AUTHENTICATE message as its authenticator, (e.g. org.apache.cassandra.auth.PasswordAuthenticator). If you wish to restrict this you may use PasswordAuthenticator.AllowedAuthenticators: It is possible to secure traffic between the client and server with TLS. To use TLS, set the ClusterConfig.SslOpts field. SslOptions embeds *tls.Config so you can set that directly. There are also helpers to load keys/certificates from files. Warning: Due to historical reasons, the SslOptions is insecure by default, so you need to set EnableHostVerification to true if no Config is set. Most users should set SslOptions.Config to a *tls.Config. SslOptions and Config.InsecureSkipVerify interact as follows: For example: To route queries to local DC first, use DCAwareRoundRobinPolicy. For example, if the datacenter you want to primarily connect is called dc1 (as configured in the database): The driver can route queries to nodes that hold data replicas based on partition key (preferring local DC). Note that TokenAwareHostPolicy can take options such as gocql.ShuffleReplicas and gocql.NonLocalReplicasFallback. We recommend running with a token aware host policy in production for maximum performance. The driver can only use token-aware routing for queries where all partition key columns are query parameters. For example, instead of use The DCAwareRoundRobinPolicy can be replaced with RackAwareRoundRobinPolicy, which takes two parameters, datacenter and rack. 
Instead of dividing hosts into two tiers (the local datacenter and remote datacenters) it divides hosts into three (the local rack, the rest of the local datacenter, and everything else). RackAwareRoundRobinPolicy can be combined with TokenAwareHostPolicy in the same way as DCAwareRoundRobinPolicy. Create queries with Session.Query. Query values must not be reused between different executions and must not be modified after starting execution of the query. To execute a query without reading results, use Query.Exec: A single row can be read by calling Query.Scan: Multiple rows can be read using Iter.Scanner: See Example for a complete example. The driver automatically prepares DML queries (SELECT/INSERT/UPDATE/DELETE/BATCH statements) and maintains a cache of prepared statements. CQL protocol does not support preparing other query types. When using CQL protocol >= 4, it is possible to use gocql.UnsetValue as the bound value of a column. This will cause the database to ignore writing the column. The main advantage is the ability to keep the same prepared statement even when you don't want to update some fields, where before you needed to make another prepared statement. Session is safe to use from multiple goroutines, so to execute multiple concurrent queries, just execute them from several worker goroutines. Gocql provides a synchronous-looking API (as recommended for Go APIs) and the queries are executed asynchronously at the protocol level. Null values are unmarshalled as the zero value of the type. If you need to distinguish, for example, between a text column being null and an empty string, you can unmarshal into a *string variable instead of string. See Example_nulls for full example. The driver reuses backing memory of slices when unmarshalling. This is an optimization so that a buffer does not need to be allocated for every processed row. However, you need to be careful when storing the slices in other memory structures. When you want to save the data for later use, pass a new slice every time. A common pattern is to declare the slice variable within the scanner loop: The driver supports paging of results with automatic prefetch, see ClusterConfig.PageSize, Query.PageSize, and Query.Prefetch. It is also possible to control the paging manually with Query.PageState (this disables automatic prefetch). Manual paging is useful if you want to store the page state externally, for example in a URL to allow users to browse pages in a result. You might want to sign/encrypt the paging state when exposing it externally since it contains data from primary keys. Paging state is specific to the CQL protocol version and the exact query used. It is meant as opaque state that should not be modified. If you send paging state from a different query or protocol version, then the behaviour is not defined (you might get unexpected results or an error from the server). For example, do not send paging state returned by a node using protocol version 3 to a node using protocol version 4. Also, when using protocol version 4, paging state between Cassandra 2.2 and 3.0 is incompatible (https://issues.apache.org/jira/browse/CASSANDRA-10880). The driver does not check whether the paging state is from the same protocol version/statement. You might want to validate this yourself, as it could be a problem if you store paging state externally. For example, if you store paging state in a URL, the URLs might become broken when you upgrade your cluster. Call Query.PageState(nil) to fetch just the first page of the query results.
Pass the page state returned by Iter.PageState to Query.PageState of a subsequent query to get the next page. If the length of the slice returned by Iter.PageState is zero, there are no more pages available (or an error occurred). Using too low a value for PageSize will negatively affect performance; a value below 100 is probably too low. While Cassandra returns exactly PageSize items (except for the last page) in a page currently, the protocol authors explicitly reserved the right to return a smaller or larger number of items in a page for performance reasons, so don't rely on the page having the exact count of items. See Example_paging for an example of manual paging. There are certain situations when you don't know the list of columns in advance, mainly when the query is supplied by the user. Iter.Columns, Iter.RowData, Iter.MapScan and Iter.SliceMap can be used to handle this case. See Example_dynamicColumns. The CQL protocol supports sending batches of DML statements (INSERT/UPDATE/DELETE) and so does gocql. Use Session.Batch to create a new batch and then fill in the details of the individual queries. Then execute the batch with Batch.Exec. Logged batches ensure atomicity: either all or none of the operations in the batch will succeed, but they have overhead to ensure this property. Unlogged batches don't have the overhead of logged batches, but don't guarantee atomicity. Updates of counters are handled specially by Cassandra so batches of counter updates have to use the CounterBatch type. A counter batch can only contain statements to update counters. For unlogged batches it is recommended to send only single-partition batches (i.e. all statements in the batch should involve only a single partition). A multi-partition batch needs to be split by the coordinator node and re-sent to the correct nodes. With single-partition batches you can send the batch directly to the node for the partition without incurring the additional network hop. It is also possible to pass an entire BEGIN BATCH .. APPLY BATCH statement to Query.Exec. There are differences in how those are executed. A BEGIN BATCH statement passed to Query.Exec is prepared as a whole in a single statement. Batch.Exec prepares individual statements in the batch. If you have variable-length batches using the same statement, using Batch.Exec is more efficient. See Example_batch for an example. Query.ScanCAS or Query.MapScanCAS can be used to execute a single-statement lightweight transaction (an INSERT/UPDATE .. IF statement) and read its result. See example for Query.MapScanCAS. Multiple-statement lightweight transactions can be executed as a logged batch that contains at least one conditional statement. All the conditions must return true for the batch to be applied. You can use Batch.ExecCAS and Batch.MapExecCAS when executing the batch to learn about the result of the LWT. See example for Batch.MapExecCAS. Queries can be marked as idempotent. Marking the query as idempotent tells the driver that the query can be executed multiple times without affecting its result. Non-idempotent queries are not eligible for retrying or speculative execution. Idempotent queries are retried in case of errors based on the configured RetryPolicy. Queries can be retried even before they fail by setting a SpeculativeExecutionPolicy. The policy can cause the driver to retry on a different node if the query is taking longer than a specified delay even before the driver receives an error or timeout from the server.
When a query is speculatively executed, the original execution is still executing. The two parallel executions of the query race to return a result, and the first result received will be returned. UDTs can be marshaled to and unmarshaled from a map[string]interface{}, a Go struct, or a type implementing the UDTUnmarshaler, UDTMarshaler, Unmarshaler or Marshaler interfaces. For structs, the cql tag can be used to specify the CQL field name to be mapped to a struct field: See Example_userDefinedTypesMap, Example_userDefinedTypesStruct, ExampleUDTMarshaler, ExampleUDTUnmarshaler. It is possible to provide observer implementations that could be used to gather metrics: The CQL protocol also supports tracing of queries. When enabled, the database will write information about internal events that happened during execution of the query. You can use Query.Trace to request tracing and receive the session ID that the database used to store the trace information in the system_traces.sessions and system_traces.events tables. NewTraceWriter returns an implementation of Tracer that writes the events to a writer. Gathering trace information might be essential for debugging and optimizing queries, but writing traces has overhead, so this feature should not be used on production systems with very high load unless you know what you are doing. Example_batch demonstrates how to execute a batch of statements. Example_dynamicColumns demonstrates how to handle a dynamic column list. Example_marshalerUnmarshaler demonstrates how to implement a Marshaler and Unmarshaler. Example_nulls demonstrates how to distinguish between null and zero value when needed. Null values are unmarshalled as the zero value of the type. If you need to distinguish, for example, between a text column being null and an empty string, you can unmarshal into a *string field. Example_paging demonstrates how to manually fetch pages and use page state. See also the package documentation about paging. Example_set demonstrates how to use sets. Example_userDefinedTypesMap demonstrates how to work with user-defined types as maps. See also Example_userDefinedTypesStruct and the examples for UDTMarshaler and UDTUnmarshaler if you want to map to structs. Example_userDefinedTypesStruct demonstrates how to work with user-defined types as structs. See also the examples for UDTMarshaler and UDTUnmarshaler if you need more control/better performance.
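As a concrete illustration of the query API and the manual paging described above, here is a rough sketch; the contact points, keyspace, and table/column names are assumptions for the example only, and the gocql and log packages are assumed to be imported:

    cluster := gocql.NewCluster("192.168.1.1", "192.168.1.2")
    cluster.Keyspace = "example"
    session, err := cluster.CreateSession()
    if err != nil {
        log.Fatal(err)
    }
    defer session.Close()

    // Execute a query without reading results.
    if err := session.Query(`INSERT INTO users (id, name) VALUES (?, ?)`, 42, "alice").Exec(); err != nil {
        log.Fatal(err)
    }

    // Read a single row.
    var name string
    if err := session.Query(`SELECT name FROM users WHERE id = ?`, 42).Scan(&name); err != nil {
        log.Fatal(err)
    }

    // Read multiple rows. The tags slice is declared inside the loop so that
    // each row gets fresh backing memory (see the note on slice reuse above).
    scanner := session.Query(`SELECT id, tags FROM posts`).Iter().Scanner()
    for scanner.Next() {
        var id int64
        var tags []string
        if err := scanner.Scan(&id, &tags); err != nil {
            log.Fatal(err)
        }
        _ = tags // process the row here
    }
    if err := scanner.Err(); err != nil {
        log.Fatal(err)
    }

    // Manual paging: run one page of a query, resuming from a saved page state
    // (nil fetches the first page). An empty returned state means no more pages.
    fetchPage := func(state []byte) ([]byte, error) {
        iter := session.Query(`SELECT id, name FROM users`).PageSize(100).PageState(state).Iter()
        var id int64
        var n string
        for iter.Scan(&id, &n) {
            // process the row here
        }
        next := iter.PageState()
        return next, iter.Close()
    }
    _ = fetchPage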
Package validator implements value validations for structs and individual fields based on tags. It can also handle Cross-Field and Cross-Struct validation for nested structs and has the ability to dive into arrays and maps of any type. See more examples at https://github.com/go-playground/validator/tree/v9/_examples Doing things this way is actually the way the standard library does it; see the file.Open method here: The authors return type "error" to avoid the issue discussed in the following, where err is always != nil: Validator returns only InvalidValidationError for bad validation input, nil, or ValidationErrors as type error; so, in your code, all you need to do is check whether the returned error is not nil, and if it isn't, check whether the error is an InvalidValidationError (if necessary; most of the time it isn't) and type assert it to ValidationErrors like so: err.(validator.ValidationErrors). Custom Validation functions can be added. Example: Cross-Field Validation can be done via the following tags: If, however, some custom cross-field validation is required, it can be done using a custom validation. Why not just have cross-fields validation tags (i.e. only eqcsfield and not eqfield)? The reason is efficiency. If you want to check a field within the same struct, "eqfield" only has to find the field on the same struct (1 level). But, if we used "eqcsfield" it could be multiple levels down. Example: Multiple validators on a field will be processed in the order defined. Example: Bad Validator definitions are not handled by the library. Example: Baked In Cross-Field validation only compares fields on the same struct. If Cross-Field + Cross-Struct validation is needed you should implement your own custom validator. Comma (",") is the default separator of validation tags. If you wish to have a comma included within the parameter (i.e. excludesall=,) you will need to use the UTF-8 hex representation 0x2C, which is replaced in the code as a comma, so the above will become excludesall=0x2C. Pipe ("|") is the 'or' validation tag separator. If you wish to have a pipe included within the parameter, i.e. excludesall=|, you will need to use the UTF-8 hex representation 0x7C, which is replaced in the code as a pipe, so the above will become excludesall=0x7C. Here is a list of the current built-in validators: Tells the validation to skip this struct field; this is particularly handy in ignoring embedded structs from being validated. (Usage: -) This is the 'or' operator allowing multiple validators to be used and accepted. (Usage: rgb|rgba) <-- this would allow either rgb or rgba colors to be accepted. This can also be combined with 'and', for example (Usage: omitempty,rgb|rgba). When a field that is a nested struct is encountered and contains this flag, any validation on the nested struct will be run, but none of the nested struct fields will be validated. This is useful if, inside your program, you know the struct will be valid but need to verify it has been assigned. NOTE: only "required" and "omitempty" can be used on a struct itself. Same as the structonly tag except that any struct level validations will not run. Allows conditional validation: for example, if a field is not set with a value (determined by the "required" validator) then other validation such as min or max won't run, but if a value is set, validation will run. This tells the validator to dive into a slice, array or map and validate that level of the slice, array or map with the validation tags that follow.
Multidimensional nesting is also supported; each level you wish to dive into will require another dive tag. dive has some sub-tags, 'keys' & 'endkeys'; please see the Keys & EndKeys section just below. Example #1 Example #2 Keys & EndKeys These are to be used together directly after the dive tag and tell the validator that anything between 'keys' and 'endkeys' applies to the keys of a map and not the values; think of it like the 'dive' tag, but for map keys instead of values. Multidimensional nesting is also supported; each level you wish to validate will require another 'keys' and 'endkeys' tag. These tags are only valid for maps. Example #1 Example #2 This validates that the value is not the data type's default zero value. For numbers it ensures the value is not zero. For strings it ensures the value is not "". For slices, maps, pointers, interfaces, channels and functions it ensures the value is not nil. The field under validation must be present and not empty only if any of the other specified fields are present. For strings this ensures the value is not "". For slices, maps, pointers, interfaces, channels and functions this ensures the value is not nil. Examples: The field under validation must be present and not empty only if all of the other specified fields are present. For strings this ensures the value is not "". For slices, maps, pointers, interfaces, channels and functions this ensures the value is not nil. Example: The field under validation must be present and not empty only when any of the other specified fields are not present. For strings this ensures the value is not "". For slices, maps, pointers, interfaces, channels and functions this ensures the value is not nil. Examples: The field under validation must be present and not empty only when all of the other specified fields are not present. For strings this ensures the value is not "". For slices, maps, pointers, interfaces, channels and functions this ensures the value is not nil. Example: This validates that the value is the default value and is almost the opposite of required. For numbers, length will ensure that the value is equal to the parameter given. For strings, it checks that the string length is exactly that number of characters. For slices, arrays, and maps, it validates the number of items. For numbers, max will ensure that the value is less than or equal to the parameter given. For strings, it checks that the string length is at most that number of characters. For slices, arrays, and maps, it validates the number of items. For numbers, min will ensure that the value is greater than or equal to the parameter given. For strings, it checks that the string length is at least that number of characters. For slices, arrays, and maps, it validates the number of items. For strings & numbers, eq will ensure that the value is equal to the parameter given. For slices, arrays, and maps, it validates the number of items. For strings & numbers, ne will ensure that the value is not equal to the parameter given. For slices, arrays, and maps, it validates the number of items. For strings, ints, and uints, oneof will ensure that the value is one of the values in the parameter. The parameter should be a list of values separated by whitespace. Values may be strings or numbers. For numbers, this will ensure that the value is greater than the parameter given. For strings, it checks that the string length is greater than that number of characters. For slices, arrays and maps it validates the number of items. Example #1 Example #2 (time.Time) For time.Time this ensures the time value is greater than time.Now.UTC().
Same as 'min' above. Kept both to make terminology with 'len' easier. Example #1 Example #2 (time.Time) For time.Time this ensures the time value is greater than or equal to time.Now.UTC(). For numbers, this will ensure that the value is less than the parameter given. For strings, it checks that the string length is less than that number of characters. For slices, arrays, and maps it validates the number of items. Example #1 Example #2 (time.Time) For time.Time this ensures the time value is less than time.Now.UTC(). Same as 'max' above. Kept both to make terminology with 'len' easier. Example #1 Example #2 (time.Time) For time.Time this ensures the time value is less than or equal to time.Now.UTC(). This will validate the field value against another field's value either within a struct or a passed-in field. Example #1: Example #2: Field Equals Another Field (relative) This does the same as eqfield except that it validates the field provided relative to the top level struct. This will validate the field value against another field's value either within a struct or a passed-in field. Examples: Field Does Not Equal Another Field (relative) This does the same as nefield except that it validates the field provided relative to the top level struct. Only valid for Numbers and time.Time types, this will validate the field value against another field's value either within a struct or a passed-in field. Usage examples are for validation of a Start and End date: Example #1: Example #2: This does the same as gtfield except that it validates the field provided relative to the top level struct. Only valid for Numbers and time.Time types, this will validate the field value against another field's value either within a struct or a passed-in field. Usage examples are for validation of a Start and End date: Example #1: Example #2: This does the same as gtefield except that it validates the field provided relative to the top level struct. Only valid for Numbers and time.Time types, this will validate the field value against another field's value either within a struct or a passed-in field. Usage examples are for validation of a Start and End date: Example #1: Example #2: This does the same as ltfield except that it validates the field provided relative to the top level struct. Only valid for Numbers and time.Time types, this will validate the field value against another field's value either within a struct or a passed-in field. Usage examples are for validation of a Start and End date: Example #1: Example #2: This does the same as ltefield except that it validates the field provided relative to the top level struct. This does the same as contains except for struct fields. It should only be used with string types. See the behavior of reflect.Value.String() for behavior on other types. This does the same as excludes except for struct fields. It should only be used with string types. See the behavior of reflect.Value.String() for behavior on other types. For arrays & slices, unique will ensure that there are no duplicates. For maps, unique will ensure that there are no duplicate values. For slices of structs, unique will ensure that there are no duplicate values in a field of the struct specified via a parameter.
This validates that a string value contains ASCII alpha characters only. This validates that a string value contains ASCII alphanumeric characters only. This validates that a string value contains unicode alpha characters only. This validates that a string value contains unicode alphanumeric characters only. This validates that a string value contains a basic numeric value. Basic excludes exponents etc.; for integers or floats it returns true. This validates that a string value contains a valid hexadecimal value. This validates that a string value contains a valid hex color including the hash sign (#). This validates that a string value contains a valid rgb color. This validates that a string value contains a valid rgba color. This validates that a string value contains a valid hsl color. This validates that a string value contains a valid hsla color. This validates that a string value contains a valid email. This may not conform to all possibilities of any RFC standard, but neither does any email provider accept all possibilities. This validates that a string value contains a valid file path and that the file exists on the machine. This is done using os.Stat, which is a platform-independent function. This validates that a string value contains a valid url. This will accept any url that the golang request uri accepts, but it must contain a scheme, for example http:// or rtmp://. This validates that a string value contains a valid uri. This will accept any uri that the golang request uri accepts. This validates that a string value contains a valid URN according to the RFC 2141 spec. This validates that a string value contains a valid base64 value. Although an empty string is valid base64, this will report an empty string as an error; if you wish to accept an empty string as valid you can use this with the omitempty tag. This validates that a string value contains a valid base64 URL safe value according to the RFC 4648 spec. Although an empty string is a valid base64 URL safe value, this will report an empty string as an error; if you wish to accept an empty string as valid you can use this with the omitempty tag. This validates that a string value contains a valid bitcoin address. The format of the string is checked to ensure it matches one of the accepted formats, P2PKH or P2SH, and performs checksum validation. Bitcoin Bech32 Address (segwit) This validates that a string value contains a valid bitcoin Bech32 address as defined by bip-0173 (https://github.com/bitcoin/bips/blob/master/bip-0173.mediawiki) Special thanks to Pieter Wuille for providing reference implementations. This validates that a string value contains a valid ethereum address. The format of the string is checked to ensure it matches the standard Ethereum address format. Full validation is blocked by https://github.com/golang/crypto/pull/28 This validates that a string value contains the substring value. This validates that a string value contains any Unicode code points in the substring value. This validates that a string value contains the supplied rune value. This validates that a string value does not contain the substring value. This validates that a string value does not contain any Unicode code points in the substring value. This validates that a string value does not contain the supplied rune value. This validates that a string value starts with the supplied string value. This validates that a string value ends with the supplied string value. This validates that a string value contains a valid isbn10 or isbn13 value.
This validates that a string value contains a valid isbn10 value. This validates that a string value contains a valid isbn13 value. This validates that a string value contains a valid UUID. Uppercase UUID values will not pass - use `uuid_rfc4122` instead. This validates that a string value contains a valid version 3 UUID. Uppercase UUID values will not pass - use `uuid3_rfc4122` instead. This validates that a string value contains a valid version 4 UUID. Uppercase UUID values will not pass - use `uuid4_rfc4122` instead. This validates that a string value contains a valid version 5 UUID. Uppercase UUID values will not pass - use `uuid5_rfc4122` instead. This validates that a string value contains only ASCII characters. NOTE: if the string is blank, this validates as true. This validates that a string value contains only printable ASCII characters. NOTE: if the string is blank, this validates as true. This validates that a string value contains one or more multibyte characters. NOTE: if the string is blank, this validates as true. This validates that a string value contains a valid DataURI. NOTE: this will also validate that the data portion is valid base64. This validates that a string value contains a valid latitude. This validates that a string value contains a valid longitude. This validates that a string value contains a valid U.S. Social Security Number. This validates that a string value contains a valid IP Address. This validates that a string value contains a valid v4 IP Address. This validates that a string value contains a valid v6 IP Address. This validates that a string value contains a valid CIDR Address. This validates that a string value contains a valid v4 CIDR Address. This validates that a string value contains a valid v6 CIDR Address. This validates that a string value contains a valid resolvable TCP Address. This validates that a string value contains a valid resolvable v4 TCP Address. This validates that a string value contains a valid resolvable v6 TCP Address. This validates that a string value contains a valid resolvable UDP Address. This validates that a string value contains a valid resolvable v4 UDP Address. This validates that a string value contains a valid resolvable v6 UDP Address. This validates that a string value contains a valid resolvable IP Address. This validates that a string value contains a valid resolvable v4 IP Address. This validates that a string value contains a valid resolvable v6 IP Address. This validates that a string value contains a valid Unix Address. This validates that a string value contains a valid MAC Address. Note: See Go's ParseMAC for accepted formats and types: This validates that a string value is a valid Hostname according to RFC 952 https://tools.ietf.org/html/rfc952 This validates that a string value is a valid Hostname according to RFC 1123 https://tools.ietf.org/html/rfc1123 Fully Qualified Domain Name (FQDN) This validates that a string value contains a valid FQDN. This validates that a string value appears to be an HTML element tag, including those described at https://developer.mozilla.org/en-US/docs/Web/HTML/Element This validates that a string value is a proper character reference in decimal or hexadecimal format. This validates that a string value is percent-encoded (URL encoded) according to https://tools.ietf.org/html/rfc3986#section-2.1 This validates that a string value contains a valid directory and that it exists on the machine. This is done using os.Stat, which is a platform-independent function.
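Many of the string validators listed above can also be applied to a single variable through the Var method; a small sketch follows (the values and tag expressions are only examples, not requirements of the package):

    validate := validator.New()

    // Validate standalone values against tag expressions.
    if err := validate.Var("not-an-email", "required,email"); err != nil {
        fmt.Println(err) // fails the "email" validator
    }
    if err := validate.Var("10.0.0.1", "ipv4"); err != nil {
        fmt.Println(err)
    }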
NOTE: When returning an error, the tag returned in "FieldError" will be the alias tag unless the dive tag is part of the alias. Everything after the dive tag is not reported as the alias tag. Also, the "ActualTag" in the before case will be the actual tag within the alias that failed. Here is a list of the current built-in alias tags: Validator notes: A collection of validation rules that are frequently needed but are more complex than the ones found in the baked-in validators. A non-standard validator must be registered manually like you would with your own custom validation functions. Example of registration and use: Here is a list of the current non-standard validators: This package panics when bad input is provided; this is by design: bad code like that should not make it to production.
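Putting the pieces above together, here is a hedged sketch of struct validation, error handling, and registering a custom validation function; the struct definition, tag choices, and the v9 import path are assumptions for illustration:

    import (
        "fmt"

        validator "gopkg.in/go-playground/validator.v9"
    )

    type User struct {
        Email string   `validate:"required,email"`
        Age   uint8    `validate:"gte=18,lte=130"`
        Tags  []string `validate:"dive,required"`
    }

    func main() {
        validate := validator.New()

        // Register a custom validation function under a new tag name.
        _ = validate.RegisterValidation("notblank", func(fl validator.FieldLevel) bool {
            return len(fl.Field().String()) > 0
        })

        err := validate.Struct(User{Email: "bad", Age: 10, Tags: []string{""}})
        if err != nil {
            // InvalidValidationError is returned only for bad input to Struct itself.
            if _, ok := err.(*validator.InvalidValidationError); ok {
                fmt.Println(err)
                return
            }
            // Otherwise the error is a ValidationErrors value.
            for _, fe := range err.(validator.ValidationErrors) {
                fmt.Println(fe.Namespace(), "failed on the", fe.Tag(), "tag")
            }
        }
    }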
Package websocket implements the WebSocket protocol defined in RFC 6455. The Conn type represents a WebSocket connection. A server application calls the Upgrader.Upgrade method from an HTTP request handler to get a *Conn: Call the connection's WriteMessage and ReadMessage methods to send and receive messages as a slice of bytes. This snippet of code shows how to echo messages using these methods: In the above snippet, p is a []byte and messageType is an int with the value websocket.BinaryMessage or websocket.TextMessage. An application can also send and receive messages using the io.WriteCloser and io.Reader interfaces. To send a message, call the connection's NextWriter method to get an io.WriteCloser, write the message to the writer and close the writer when done. To receive a message, call the connection's NextReader method to get an io.Reader and read until io.EOF is returned. This snippet shows how to echo messages using the NextWriter and NextReader methods: The WebSocket protocol distinguishes between text and binary data messages. Text messages are interpreted as UTF-8 encoded text. The interpretation of binary messages is left to the application. This package uses the TextMessage and BinaryMessage integer constants to identify the two data message types. The ReadMessage and NextReader methods return the type of the received message. The messageType argument to the WriteMessage and NextWriter methods specifies the type of a sent message. It is the application's responsibility to ensure that text messages are valid UTF-8 encoded text. The WebSocket protocol defines three types of control messages: close, ping and pong. Call the connection's WriteControl, WriteMessage or NextWriter methods to send a control message to the peer. Connections handle received close messages by calling the handler function set with the SetCloseHandler method and by returning a *CloseError from the NextReader, ReadMessage or the message Read method. The default close handler sends a close message to the peer. Connections handle received ping messages by calling the handler function set with the SetPingHandler method. The default ping handler sends a pong message to the peer. Connections handle received pong messages by calling the handler function set with the SetPongHandler method. The default pong handler does nothing. If an application sends ping messages, then the application should set a pong handler to receive the corresponding pong. The control message handler functions are called from the NextReader, ReadMessage and message reader Read methods. The default close and ping handlers can block these methods for a short time when the handler writes to the connection. The application must read the connection to process close, ping and pong messages sent from the peer. If the application is not otherwise interested in messages from the peer, then the application should start a goroutine to read and discard messages from the peer. A simple example is: Connections support one concurrent reader and one concurrent writer. Applications are responsible for ensuring that no more than one goroutine calls the write methods (NextWriter, SetWriteDeadline, WriteMessage, WriteJSON, EnableWriteCompression, SetCompressionLevel) concurrently and that no more than one goroutine calls the read methods (NextReader, SetReadDeadline, ReadMessage, ReadJSON, SetPongHandler, SetPingHandler) concurrently. The Close and WriteControl methods can be called concurrently with all other methods.
Web browsers allow Javascript applications to open a WebSocket connection to any host. It's up to the server to enforce an origin policy using the Origin request header sent by the browser. The Upgrader calls the function specified in the CheckOrigin field to check the origin. If the CheckOrigin function returns false, then the Upgrade method fails the WebSocket handshake with HTTP status 403. If the CheckOrigin field is nil, then the Upgrader uses a safe default: fail the handshake if the Origin request header is present and the Origin host is not equal to the Host request header. The deprecated package-level Upgrade function does not perform origin checking. The application is responsible for checking the Origin header before calling the Upgrade function. Connections buffer network input and output to reduce the number of system calls when reading or writing messages. Write buffers are also used for constructing WebSocket frames. See RFC 6455, Section 5 for a discussion of message framing. A WebSocket frame header is written to the network each time a write buffer is flushed to the network. Decreasing the size of the write buffer can increase the amount of framing overhead on the connection. The buffer sizes in bytes are specified by the ReadBufferSize and WriteBufferSize fields in the Dialer and Upgrader. The Dialer uses a default size of 4096 when a buffer size field is set to zero. The Upgrader reuses buffers created by the HTTP server when a buffer size field is set to zero. The HTTP server buffers have a size of 4096 at the time of this writing. The buffer sizes do not limit the size of a message that can be read or written by a connection. Buffers are held for the lifetime of the connection by default. If the Dialer or Upgrader WriteBufferPool field is set, then a connection holds the write buffer only when writing a message. Applications should tune the buffer sizes to balance memory use and performance. Increasing the buffer size uses more memory, but can reduce the number of system calls to read or write the network. In the case of writing, increasing the buffer size can reduce the number of frame headers written to the network. Some guidelines for setting buffer parameters are: Limit the buffer sizes to the maximum expected message size. Buffers larger than the largest message do not provide any benefit. Depending on the distribution of message sizes, setting the buffer size to a value less than the maximum expected message size can greatly reduce memory use with a small impact on performance. Here's an example: If 99% of the messages are smaller than 256 bytes and the maximum message size is 512 bytes, then a buffer size of 256 bytes will result in 1.01 times as many system calls as a buffer size of 512 bytes. The memory savings is 50%. A write buffer pool is useful when the application has a modest number of writes over a large number of connections. When buffers are pooled, a larger buffer size has a reduced impact on total memory use and has the benefit of reducing system calls and frame overhead. Per message compression extensions (RFC 7692) are experimentally supported by this package in a limited capacity. Setting the EnableCompression option to true in Dialer or Upgrader will attempt to negotiate per message deflate support. If compression was successfully negotiated with the connection's peer, any message received in compressed form will be automatically decompressed. All Read methods will return uncompressed bytes.
Per message compression of messages written to a connection can be enabled or disabled by calling the corresponding Conn method: Currently this package does not support compression with "context takeover". This means that messages must be compressed and decompressed in isolation, without retaining sliding window or dictionary state across messages. For more details refer to RFC 7692. Use of compression is experimental and may result in decreased performance.
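Tying together the Upgrade, ReadMessage and WriteMessage flow described above, a minimal echo handler might look like the following sketch (buffer sizes and error handling are illustrative, and the net/http and log packages are assumed to be imported):

    var upgrader = websocket.Upgrader{
        ReadBufferSize:  1024,
        WriteBufferSize: 1024,
    }

    func echo(w http.ResponseWriter, r *http.Request) {
        conn, err := upgrader.Upgrade(w, r, nil)
        if err != nil {
            log.Println("upgrade:", err)
            return
        }
        defer conn.Close()
        for {
            // ReadMessage returns the message type along with the payload.
            messageType, p, err := conn.ReadMessage()
            if err != nil {
                log.Println("read:", err)
                return
            }
            // Echo the payload back with the same message type.
            if err := conn.WriteMessage(messageType, p); err != nil {
                log.Println("write:", err)
                return
            }
        }
    }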
Package temporalbuffer provides a thread-safe, fixed-size buffer for time-stamped data, designed to create a perfectly smooth, continuous stream from an irregular or bursty input. The package is built with Go Generics, providing compile-time type safety. You create a buffer for your specific type, and all operations like Add() and GetOldest() work directly with that type, eliminating the need for type assertions. Key Features: Type-Safe Generics: Create a buffer for any type that satisfies the DataItem interface (i.e., has a CreatedTime() method). Intelligent Smoothing: The default "ResampleTimeline" strategy ensures the buffer's contents are always the smoothest possible representation of the real data points. Alternative strategies like "FillLargestGap" and "PadWithNewest" are also available. Guaranteed Read Continuity: By default, a read from the buffer is guaranteed to return a valid item (either a real one or the last-read item), preventing the need for consumer-side timeouts or logic to handle an empty buffer. Highly Configurable: The fill strategy, drop strategy (DropOldest vs. DropClosest), and read continuity can all be easily configured via options. Example: Basic Usage Example: Using a Different Fill Strategy Example: Simplified Streaming Consumer
Package gollm provides a high-level interface for interacting with various Language Learning Models (LLMs). This file re-exports configuration types and functions from the config package to provide a clean, centralized API for configuring LLM interactions. Package gollm provides a high-level interface for interacting with various Language Learning Models (LLMs). It supports multiple providers including OpenAI, Anthropic, Ollama, and others, with features like prompt optimization, caching, and structured output handling. Package gollm provides a high-level interface for interacting with Language Learning Models (LLMs). Package gollm provides prompt handling and manipulation functionality for Language Learning Models. This file contains type definitions, re-exports, and utility functions for working with prompts and their associated components like caching, templates, and message handling. Package gollm provides streaming functionality for Language Learning Models. This file contains type definitions and re-exports for working with streaming responses. Package gollm provides validation functionality for Language Learning Model interactions. This file contains utilities for validating structured data and generating JSON schemas, which are essential for ensuring proper data formats in LLM communications.
Package nzgo is a pure Go language driver for the database/sql package to work with IBM PDA (aka Netezza). In most cases clients will use the database/sql package instead of using this package directly. For example: nzgo defines a simple logger interface. Set logLevel to control logging verbosity and logPath to specify the log file path. By default logging will be enabled with logLevel=Info and the current directory as logPath. You can configure logLevel and logPath (i.e. the log file directory) as per your requirement. There is one more configuration parameter with the logger, "additionalLogFile". This parameter can be used to set an additional logger file. additionalLogFile can be used to enable writing logs to stdout; this can be achieved by simply setting "additionalLogFile=stdout". Valid values for 'logLevel' are: "OFF", "DEBUG", "INFO" and "FATAL". logLevel=OFF can be used to turn off logging. It will turn off both internal and additionalLogFile logs. These logger configuration parameters should be mentioned in the connection string. The level of security (SSL/TLS) that the driver uses for the connection to the data store. onlyUnSecured: The driver does not use SSL. preferredUnSecured: If the server provides a choice, the driver does not use SSL. preferredSecured: If the server provides a choice, the driver uses SSL. onlySecured: The driver does not connect unless an SSL connection is available. Similarly, the Netezza server has the above securityLevel modes. Cases which would fail: Client tries to connect with 'Only secured' or 'Preferred secured' mode while the server is in 'Only Unsecured' mode. Client tries to connect with 'Only secured' or 'Preferred secured' mode while the server is in 'Preferred Unsecured' mode. Client tries to connect with 'Only Unsecured' or 'Preferred Unsecured' mode while the server is in 'Only Secured' mode. Client tries to connect with 'Only Unsecured' or 'Preferred Unsecured' mode while the server is in 'Preferred Secured' mode. Below are the securityLevel values you can pass in the connection string: Use Open to create a database handle with connection parameters: The Go Netezza Driver supports the following connection syntaxes (or data source name formats): In this case, the application is running on the NPS server itself, so it uses 'localhost'. The Go driver should connect on port 5480 (the postgres port). The user is admin, the password is password, the database is db1, sslmode is require, and the location of the root certificate file is C:/Users/root31.crt with securityLevel as 'Only Secured session'. When establishing a connection using nzgo you are expected to supply a connection string containing zero or more parameters. Below is a subset of the connection parameters supported by nzgo. The following special connection parameters are supported: Valid values for sslmode are: Use single quotes for values that contain whitespace: A backslash will escape the next character in values: Note that the connection parameter client_encoding (which sets the text encoding for the connection) may be set but must be "UTF8", matching with the same rules as Postgres. It is an error to provide any other value. database/sql does not dictate any specific format for parameter markers in query strings, but nzgo uses the Netezza-specific parameter marker, i.e. '?', as shown below. The first parameter marker in the query is replaced by the first argument, the second parameter marker by the second argument, and so on. nzgo supports the RowsAffected() method of the Result type in database/sql.
For additional instructions on querying see the documentation for the database/sql package. nzgo also supports transaction queries as specified in the database/sql package https://github.com/golang/go/wiki/SQLInterface. Transactions are started by calling Begin. This package returns the following types for values from the Netezza backend: You can unload data from an IBM Netezza database table on a Netezza host system to a remote client. This unload does not remove rows from the database but instead stores the unloaded data in a flat file (external table) that is suitable for loading back into a Netezza database. The query below would create a file 'et1.txt' on the remote system from the Netezza table t2 with data delimited by '|'. See https://www.ibm.com/support/knowledgecenter/en/SSULQD_7.2.1/com.ibm.nz.load.doc/t_load_unloading_data_remote_client_sys.html for more information about external tables.
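As a rough sketch of the connection and query flow described above; the driver registration name "nzgo", the import path, and the table/column names here are assumptions for illustration rather than guarantees:

    import (
        "database/sql"
        "log"

        _ "github.com/IBM/nzgo"
    )

    func main() {
        // Connection string parameters mirror the description above (logLevel,
        // sslmode, securityLevel, etc. are all passed in the same string).
        dsn := "host=localhost port=5480 user=admin password=password dbname=db1 sslmode=require logLevel=Info"
        db, err := sql.Open("nzgo", dsn)
        if err != nil {
            log.Fatal(err)
        }
        defer db.Close()

        // nzgo uses '?' parameter markers; the first marker is replaced by the
        // first argument, the second by the second, and so on.
        rows, err := db.Query("SELECT id, name FROM t2 WHERE id > ? AND name <> ?", 10, "skip")
        if err != nil {
            log.Fatal(err)
        }
        defer rows.Close()
        for rows.Next() {
            var id int
            var name string
            if err := rows.Scan(&id, &name); err != nil {
                log.Fatal(err)
            }
        }
    }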
Package gosnowflake is a pure Go Snowflake driver for the database/sql package. Clients can use the database/sql package directly. For example: Use Open to create a database handle with connection parameters: The Go Snowflake Driver supports the following connection syntaxes (or data source name formats): where all parameters must be escaped or use `Config` and `DSN` to construct a DSN string. The following example opens a database handle with the Snowflake account myaccount where the username is jsmith, password is mypassword, database is mydb, schema is testschema, and warehouse is mywh: The following connection parameters are supported: account <string>: Specifies the name of your Snowflake account, where string is the name assigned to your account by Snowflake. In the URL you received from Snowflake, your account name is the first segment in the domain (e.g. abc123 in https://abc123.snowflakecomputing.com). This parameter is optional if your account is specified after the @ character. If you are not in the us-west-2 region or on an AWS deployment, then append the region after the account name, e.g. “<account>.<region>”. If you are not on an AWS deployment, then append not only the region, but also the platform, e.g., “<account>.<region>.<platform>”. Account, region, and platform should be separated by a period (“.”), as shown above. If you are using a global URL, then append the connection group and "global", e.g., "account-<connection_group>.global". Account and connection group are separated by a dash ("-"), as shown above. region <string>: DEPRECATED. You may specify a region, such as “eu-central-1”, with this parameter. However, since this parameter is deprecated, it is best to specify the region as part of the account parameter. For details, see the description of the account parameter. database: Specifies the database to use by default in the client session (can be changed after login). schema: Specifies the database schema to use by default in the client session (can be changed after login). warehouse: Specifies the virtual warehouse to use by default for queries, loading, etc. in the client session (can be changed after login). role: Specifies the role to use by default for accessing Snowflake objects in the client session (can be changed after login). passcode: Specifies the passcode provided by Duo when using MFA for login. passcodeInPassword: false by default. Set to true if the MFA passcode is embedded in the login password. Appends the MFA passcode to the end of the password. loginTimeout: Specifies the timeout, in seconds, for login. The default is 60 seconds. The login request gives up after the timeout length if the HTTP response is successful. authenticator: Specifies the authenticator to use for authenticating user credentials: To use the internal Snowflake authenticator, specify snowflake (Default). To authenticate through Okta, specify https://<okta_account_name>.okta.com (URL prefix for Okta). To authenticate using your IDP via a browser, specify externalbrowser. To authenticate via OAuth, specify oauth and provide an OAuth Access Token (see the token parameter below). application: Identifies your application to Snowflake Support. insecureMode: false by default. Set to true to bypass the Online Certificate Status Protocol (OCSP) certificate revocation check. IMPORTANT: Change the default value for testing or emergency situations only. token: a token that can be used to authenticate. Should be used in conjunction with the "oauth" authenticator.
client_session_keep_alive: Set to true to have a heartbeat in the background every hour to keep the connection alive, such that the connection session will never expire. Care should be taken in using this option as it keeps access open for as long as the process is alive. ocspFailOpen: true by default. Set to false to make the OCSP check operate in fail-closed mode. validateDefaultParameters: true by default. Set to false to disable the existence and privilege checks for the Database, Schema, Warehouse and Role when setting up the connection. All other parameters are taken as session parameters. For example, the TIMESTAMP_OUTPUT_FORMAT session parameter can be set by adding: The Go Snowflake Driver honors the environment variables HTTP_PROXY, HTTPS_PROXY and NO_PROXY for the forward proxy setting. NO_PROXY specifies which hostname endings should be allowed to bypass the proxy server, e.g. no_proxy=.amazonaws.com means that AWS S3 access does not need to go through the proxy. NO_PROXY does not support wildcards. Each value specified should be one of the following: The end of a hostname (or a complete hostname), for example: ".amazonaws.com" or "xy12345.snowflakecomputing.com". An IP address, for example "192.196.1.15". If more than one value is specified, values should be separated by commas, for example: By default, the driver's builtin logger is NOP; no output is generated. This is intentional so that applications that use the same set of logger parameters do not conflict with glog, which is incorporated into the driver logging framework. In order to enable debug logging for the driver, add the build tag sfdebug to the go tool command lines, for example: For tests, run the test command with the tag along with glog parameters. For example, the following command will generate all activity logs on standard error. Likewise, if you build your application with the tag, you may specify the same set of glog parameters. To get the logs for a specific module, use the -vmodule option. For example, to retrieve the driver.go and connection.go module logs: Note: If your request retrieves no logs, call db.Close() or glog.flush() to flush the glog buffer. Note: The logger may be changed in the future for better logging. Currently, if the applications use the same parameters as glog, you cannot collect both application and driver logs at the same time. As of 0.5.0, signal handling responsibility has moved to the applications. If you want to cancel a query/command with Ctrl+C, add an os.Interrupt trap in the context to execute methods that can take the context parameter, e.g., QueryContext, ExecContext. See cmd/selectmany.go for the full example. Queries return SQL column type information in the ColumnType type. The DatabaseTypeName method returns the following strings representing Snowflake data types: Go's database/sql package limits Go's data types to the following for binding and fetching: Fetching data isn't an issue since the database data type is provided along with the data, so the Go Snowflake Driver can translate Snowflake data types to Go native data types. When the client binds data to send to the server, however, the driver cannot determine the date/timestamp data types to associate with binding parameters. For example: To resolve this issue, a binding parameter flag is introduced that associates any subsequent time.Time type to the DATE, TIME, TIMESTAMP_LTZ, TIMESTAMP_NTZ or BINARY data type.
The above example could be rewritten as follows: The driver fetches TIMESTAMP_TZ (timestamp with time zone) data using the offset-based Location types, which represent a collection of time offsets in use in a geographical area, such as CET (Central European Time) or UTC (Coordinated Universal Time). The offset-based Location data is generated and cached when a Go Snowflake Driver application starts, and if the given offset is not in the cache, it is generated dynamically. Currently, Snowflake doesn't support the name-based Location types, e.g., America/Los_Angeles. For more information about Location types, see the Go documentation for https://golang.org/pkg/time/#Location. Internally, this feature leverages the []byte data type. As a result, BINARY data cannot be bound without the binding parameter flag. In the following example, sf is an alias for the gosnowflake package: The driver directly downloads a result set from the cloud storage if the size is large. It is required to shift workloads from the Snowflake database to the clients for scale. The download takes place asynchronously in a goroutine named "Chunk Downloader" so that the driver can fetch the next result set while the application consumes the current result set. The application may change the number of result set chunk downloaders if required. Note this doesn't help reduce the memory footprint by itself. Consider the Custom JSON Decoder. Experimental: Custom JSON Decoder for parsing Result Set The application may have the driver use a custom JSON decoder that incrementally parses the result set as follows. This option will reduce the memory footprint to half or even a quarter, but it can significantly degrade performance depending on the environment. The test cases running on a Travis Ubuntu box show five times less memory footprint while running four times slower. Be cautious when using the option. (Private Preview) JWT authentication ** Not recommended for production use until GA. JWT tokens are now supported when compiling with Go 1.10 or higher. A binary compiled with a lower version of Go would return an error at runtime when users try to use the JWT authentication feature. To enable this feature, one can construct a DSN with the fields "authenticator=SNOWFLAKE_JWT&privateKey=<your_private_key>", or use the Config structure specifying: The <your_private_key> should be a base64 URL encoded PKCS8 rsa private key string. One way to encode a byte slice to base64 URL format is through the base64.URLEncoding.EncodeToString() function. On the server side, one can alter the public key with the SQL command: The <your_public_key> should be a base64 Standard encoded PKI public key string. One way to encode a byte slice to base64 Standard format is through the base64.StdEncoding.EncodeToString() function. To generate a valid key pair, one can run the following command in a shell: GET and PUT operations are unsupported.
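To illustrate the binding parameter flag discussed above, here is a hedged sketch; the DSN mirrors the account description earlier in this section, the table and column names are placeholders, and sf refers to the gosnowflake package:

    import (
        "database/sql"
        "log"
        "time"

        sf "github.com/snowflakedb/gosnowflake"
    )

    func main() {
        db, err := sql.Open("snowflake", "jsmith:mypassword@myaccount/mydb/testschema?warehouse=mywh")
        if err != nil {
            log.Fatal(err)
        }
        defer db.Close()

        tm := time.Now()
        // Without a flag the driver cannot tell whether tm should be bound as
        // DATE, TIME or one of the TIMESTAMP variants. The exported flag value
        // associates the following time.Time argument with TIMESTAMP_NTZ.
        if _, err := db.Exec(
            "INSERT INTO ts_tbl(id, ntz) VALUES(?, ?)",
            1, sf.DataTypeTimestampNtz, tm,
        ); err != nil {
            log.Fatal(err)
        }
    }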
Package transform is the SDK for Redpanda's inline Data Transforms, based on WebAssembly. This library provides a framework for transforming records written within Redpanda from an input to an output topic. This version of the SDK is compatible with Redpanda 24.1 or greater. This example shows the basic usage of the package: This is a "transform" that does nothing but copy the same data to a new topic. This example shows a filter that uses a regexp to filter records from one topic into another. The filter can be determined when the transform is deployed by using environment variables to specify the pattern. This example shows a transform that converts CSV into JSON. This example shows the basic usage of the package: This is a transform that validates that the data is valid JSON, and outputs invalid JSON to a dead letter queue.
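A hedged sketch of the identity transform described above, assuming the OnRecordWritten callback shape used by recent versions of the Go transform SDK; treat the exact import path and signatures as assumptions to verify against your SDK version:

    package main

    import (
        "github.com/redpanda-data/redpanda/src/transform-sdk/go/transform"
    )

    func main() {
        // Register the callback that runs for every record written to the
        // input topic; records written via the writer go to the output topic.
        transform.OnRecordWritten(identityTransform)
    }

    // identityTransform copies the incoming record to the output topic unchanged.
    func identityTransform(e transform.WriteEvent, w transform.RecordWriter) error {
        return w.Write(e.Record())
    }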
Package address is a library that validates and formats addresses using data generated from Google's Address Data Service. Code generated by address. DO NOT EDIT.
Package validation provides configurable and extensible rules for validating data of various types.
Package gosnowflake is a pure Go Snowflake driver for the database/sql package. Clients can use the database/sql package directly. For example: Use Open to create a database handle with connection parameters: The Go Snowflake Driver supports the following connection syntaxes (or data source name formats): where all parameters must be escaped or use `Config` and `DSN` to construct a DSN string. The following example opens a database handle with the Snowflake account myaccount where the username is jsmith, password is mypassword, database is mydb, schema is testschema, and warehouse is mywh: The following connection parameters are supported: account <string>: Specifies the name of your Snowflake account, where string is the name assigned to your account by Snowflake. In the URL you received from Snowflake, your account name is the first segment in the domain (e.g. abc123 in https://abc123.snowflakecomputing.com). This parameter is optional if your account is specified after the @ character. If you are not on us-west-2 region or AWS deployment, then append the region after the account name, e.g. “<account>.<region>”. If you are not on AWS deployment, then append not only the region, but also the platform, e.g., “<account>.<region>.<platform>”. Account, region, and platform should be separated by a period (“.”), as shown above. If you are using a global url, then append connection group and "global", e.g., "account-<connection_group>.global". Account and connection group are separated by a dash ("-"), as shown above. region <string>: DEPRECATED. You may specify a region, such as “eu-central-1”, with this parameter. However, since this parameter is deprecated, it is best to specify the region as part of the account parameter. For details, see the description of the account parameter. database: Specifies the database to use by default in the client session (can be changed after login). schema: Specifies the database schema to use by default in the client session (can be changed after login). warehouse: Specifies the virtual warehouse to use by default for queries, loading, etc. in the client session (can be changed after login). role: Specifies the role to use by default for accessing Snowflake objects in the client session (can be changed after login). passcode: Specifies the passcode provided by Duo when using MFA for login. passcodeInPassword: false by default. Set to true if the MFA passcode is embedded in the login password. Appends the MFA passcode to the end of the password. loginTimeout: Specifies the timeout, in seconds, for login. The default is 60 seconds. The login request gives up after the timeout length if the HTTP response is success. authenticator: Specifies the authenticator to use for authenticating user credentials: To use the internal Snowflake authenticator, specify snowflake (Default). To authenticate through Okta, specify https://<okta_account_name>.okta.com (URL prefix for Okta). To authenticate using your IDP via a browser, specify externalbrowser. To authenticate via OAuth, specify oauth and provide an OAuth Access Token (see the token parameter below). application: Identifies your application to Snowflake Support. insecureMode: false by default. Set to true to bypass the Online Certificate Status Protocol (OCSP) certificate revocation check. IMPORTANT: Change the default value for testing or emergency situations only. token: a token that can be used to authenticate. Should be used in conjunction with the "oauth" authenticator. 
client_session_keep_alive: Set to true have a heartbeat in the background every hour to keep the connection alive such that the connection session will never expire. Care should be taken in using this option as it opens up the access forever as long as the process is alive. ocspFailOpen: true by default. Set to false to make OCSP check fail closed mode. validateDefaultParameters: true by default. Set to false to disable checks on existence and privileges check for Database, Schema, Warehouse and Role when setting up the connection All other parameters are taken as session parameters. For example, TIMESTAMP_OUTPUT_FORMAT session parameter can be set by adding: The Go Snowflake Driver honors the environment variables HTTP_PROXY, HTTPS_PROXY and NO_PROXY for the forward proxy setting. NO_PROXY specifies which hostname endings should be allowed to bypass the proxy server, e.g. :code:`no_proxy=.amazonaws.com` means that AWS S3 access does not need to go through the proxy. NO_PROXY does not support wildcards. Each value specified should be one of the following: The end of a hostname (or a complete hostname), for example: ".amazonaws.com" or "xy12345.snowflakecomputing.com". An IP address, for example "192.196.1.15". If more than one value is specified, values should be separated by commas, for example: By default, the driver's builtin logger is NOP; no output is generated. This is intentional for those applications that use the same set of logger parameters not to conflict with glog, which is incorporated in the driver logging framework. In order to enable debug logging for the driver, add a build tag sfdebug to the go tool command lines, for example: For tests, run the test command with the tag along with glog parameters. For example, the following command will generate all acitivty logs in the standard error. Likewise, if you build your application with the tag, you may specify the same set of glog parameters. To get the logs for a specific module, use the -vmodule option. For example, to retrieve the driver.go and connection.go module logs: Note: If your request retrieves no logs, call db.Close() or glog.flush() to flush the glog buffer. Note: The logger may be changed in the future for better logging. Currently if the applications use the same parameters as glog, you cannot collect both application and driver logs at the same time. From 0.5.0, a signal handling responsibility has moved to the applications. If you want to cancel a query/command by Ctrl+C, add a os.Interrupt trap in context to execute methods that can take the context parameter, e.g., QueryContext, ExecContext. See cmd/selectmany.go for the full example. Queries return SQL column type information in the ColumnType type. The DatabaseTypeName method returns the following strings representing Snowflake data types: Go's database/sql package limits Go's data types to the following for binding and fetching: Fetching data isn't an issue since the database data type is provided along with the data so the Go Snowflake Driver can translate Snowflake data types to Go native data types. When the client binds data to send to the server, however, the driver cannot determine the date/timestamp data types to associate with binding parameters. For example: To resolve this issue, a binding parameter flag is introduced that associates any subsequent time.Time type to the DATE, TIME, TIMESTAMP_LTZ, TIMESTAMP_NTZ or BINARY data type. 
The earlier binding example can be rewritten with these flags, as sketched above. The driver fetches TIMESTAMP_TZ (timestamp with time zone) data using the offset-based Location types, which represent a collection of time offsets in use in a geographical area, such as CET (Central European Time) or UTC (Coordinated Universal Time). The offset-based Location data is generated and cached when a Go Snowflake Driver application starts, and if the given offset is not in the cache, it is generated dynamically. Currently, Snowflake doesn't support the name-based Location types, e.g., America/Los_Angeles. For more information about Location types, see the Go documentation for https://golang.org/pkg/time/#Location. Internally, this feature leverages the []byte data type. As a result, BINARY data cannot be bound without the binding parameter flag. In the following example, sf is an alias for the gosnowflake package: The driver directly downloads a result set from the cloud storage if the size is large. This is required to shift workloads from the Snowflake database to the clients for scale. The download is performed asynchronously by a goroutine named "Chunk Downloader" so that the driver can fetch the next result set while the application consumes the current result set. The application may change the number of result set chunk downloaders if required. Note that this doesn't help reduce the memory footprint by itself. Consider the custom JSON decoder described next. Experimental: Custom JSON decoder for parsing the result set. The application may have the driver use a custom JSON decoder that incrementally parses the result set as follows. This option will reduce the memory footprint to half or even a quarter, but it can significantly degrade performance depending on the environment. Test cases running on a Travis Ubuntu box show a five times smaller memory footprint while running four times slower. Be cautious when using this option. (Private Preview) JWT authentication. Not recommended for production use until GA. JWT tokens are supported when compiling with Go version 1.10 or higher. Binaries compiled with a lower version of Go return an error at runtime when users try to use the JWT authentication feature. To enable this feature, one can construct a DSN with the fields "authenticator=SNOWFLAKE_JWT&privateKey=<your_private_key>", or use the Config structure, specifying: The <your_private_key> should be a base64 URL-encoded PKCS8 RSA private key string. One way to encode a byte slice to base64 URL format is through the base64.URLEncoding.EncodeToString() function. On the server side, one can alter the public key with the SQL command: The <your_public_key> should be a base64 standard-encoded PKI public key string. One way to encode a byte slice to base64 standard format is through the base64.StdEncoding.EncodeToString() function. To generate a valid key pair, one can run the following commands in a shell script: GET and PUT operations are unsupported.
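To make the JWT key encoding above concrete, here is a hedged, standard-library-only sketch of producing a base64 URL-encoded PKCS8 private key suitable for the privateKey field; the 2048-bit key size is an illustrative choice:

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "encoding/base64"
    )

    // encodedPrivateKey generates an RSA key and returns it as a base64
    // URL-encoded PKCS8 string, the format expected by the privateKey field.
    func encodedPrivateKey() (string, error) {
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            return "", err
        }
        der, err := x509.MarshalPKCS8PrivateKey(key)
        if err != nil {
            return "", err
        }
        return base64.URLEncoding.EncodeToString(der), nil
    }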
Package toml provides facilities for decoding and encoding TOML configuration files via reflection. There is also support for delaying decoding with the Primitive type, and querying the set of keys in a TOML document with the MetaData type. The specification implemented: https://github.com/toml-lang/toml The sub-command github.com/BurntSushi/toml/cmd/tomlv can be used to verify whether a file is a valid TOML document. It can also be used to print the type of each key in a TOML document. There are two important types of tests used for this package. The first is contained inside '*_test.go' files and uses the standard Go unit testing framework. These tests are primarily devoted to holistically testing the decoder and encoder. The second type of testing is used to verify the implementation's adherence to the TOML specification. These tests have been factored into their own project: https://github.com/BurntSushi/toml-test The reason the tests are in a separate project is so that they can be used by any implementation of TOML. Namely, it is language agnostic. Example StrictDecoding shows how to detect whether there are keys in the TOML document that weren't decoded into the value given. This is useful for returning an error to the user if they've included extraneous fields in their configuration. Example UnmarshalTOML shows how to implement a struct type that knows how to unmarshal itself. The struct must take full responsibility for mapping the values passed into the struct. The method may be used with interfaces in a struct in cases where the actual type is not known until the data is examined. Example Unmarshaler shows how to decode TOML strings into your own custom data type.
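The strict-decoding idea can be sketched as follows (the config struct and its fields are hypothetical): toml.Decode returns a MetaData value whose Undecoded method reports keys present in the document but not decoded into the target:

    import (
        "fmt"

        "github.com/BurntSushi/toml"
    )

    type config struct {
        Name string
        Port int
    }

    // decodeStrict decodes blob into a config and rejects any keys that were
    // not mapped onto the struct.
    func decodeStrict(blob string) (config, error) {
        var conf config
        md, err := toml.Decode(blob, &conf)
        if err != nil {
            return conf, err
        }
        if undecoded := md.Undecoded(); len(undecoded) > 0 {
            return conf, fmt.Errorf("unknown configuration keys: %q", undecoded)
        }
        return conf, nil
    }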
Package jsonrpc2 is a complete and strictly conforming implementation of the JSON-RPC 2.0 protocol for both clients and servers. https://www.jsonrpc.org. Clients use the provided types, optionally along with their own custom data types for making Requests and parsing Responses. The Request and Response types are defined so that they can accept any valid types for "id", "params", and "result". Clients can use the Request, Response, and Error types with the json and http packages to make HTTP JSON-RPC 2.0 calls and parse their responses. Servers define their own MethodFuncs and associate them with a method name in a MethodMap. Passing the MethodMap to HTTPRequestHandler() will return a corresponding http.Handler which can be used with an http.Server. The http.Handler handles both batch and single requests, catches all protocol errors, and recovers from any panics or invalid return values from the user provided MethodFunc. MethodFuncs only need to catch errors related to their function such as Invalid Params or any user defined errors for the RPC method. This example makes all of the calls from the examples in the JSON-RPC 2.0 specification and prints them in a similar format.
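To show the wire format this package implements, here is a hedged sketch of a single JSON-RPC 2.0 call over HTTP using only the standard library; the request and response structs below are local stand-ins rather than the package's own Request and Response types, and the endpoint URL and method name are placeholders:

    import (
        "bytes"
        "encoding/json"
        "net/http"
    )

    // rpcRequest and rpcResponse mirror the JSON-RPC 2.0 wire format.
    type rpcRequest struct {
        JSONRPC string      `json:"jsonrpc"`
        Method  string      `json:"method"`
        Params  interface{} `json:"params,omitempty"`
        ID      int         `json:"id"`
    }

    type rpcError struct {
        Code    int    `json:"code"`
        Message string `json:"message"`
    }

    type rpcResponse struct {
        JSONRPC string          `json:"jsonrpc"`
        Result  json.RawMessage `json:"result,omitempty"`
        Error   *rpcError       `json:"error,omitempty"`
        ID      int             `json:"id"`
    }

    func call(url string) (*rpcResponse, error) {
        body, err := json.Marshal(rpcRequest{
            JSONRPC: "2.0",
            Method:  "subtract",
            Params:  []int{42, 23},
            ID:      1,
        })
        if err != nil {
            return nil, err
        }
        resp, err := http.Post(url, "application/json", bytes.NewReader(body))
        if err != nil {
            return nil, err
        }
        defer resp.Body.Close()
        var res rpcResponse
        if err := json.NewDecoder(resp.Body).Decode(&res); err != nil {
            return nil, err
        }
        return &res, nil
    }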
Package uplink is the main entrypoint to interacting with Storj Labs' decentralized storage network. Sign up for an account on a Satellite today! https://storj.io/ The fundamental unit of access in the Storj Labs storage network is the Access Grant. An access grant is a serialized structure that is internally comprised of an API Key, a set of encryption key information, and information about which Storj Labs or Tardigrade network Satellite is responsible for the metadata. An access grant is always associated with exactly one Project on one Satellite. If you don't already have an access grant, you will need to make an account on a Satellite, generate an API Key, and encapsulate that API Key with encryption information into an access grant. If you don't already have an account on a Satellite, first make one at https://storj.io/ and note the Satellite you choose (such as us1.storj.io, eu1.storj.io, etc.). Then, make an API Key in the web interface. The first step to any project is to generate a restricted access grant with the minimal permissions that are needed. Access grants contain all encryption information and should be restricted as much as possible. To make an access grant, you can create one using our Uplink CLI tool's 'share' subcommand (after setting up the Uplink CLI tool), or you can make one as follows: In the above example, 'serializedAccess' is a human-readable string that represents read-only access to just the "logs" bucket, and is only able to decrypt that one bucket thanks to hierarchical deterministic key derivation. Note: RequestAccessWithPassphrase is CPU-intensive, and your application's normal lifecycle should avoid it and use ParseAccess where possible instead. To revoke an access grant see the Project.RevokeAccess method. A common architecture for building applications is to have a single bucket for the entire application to store the objects of all users. In such an architecture, it is of utmost importance to guarantee that users can access only their objects but not the objects of other users. This can be achieved by implementing an app-specific authentication service that generates an access grant for each user by restricting the main access grant of the application. This user-specific access grant is restricted to access the objects only within a specific key prefix defined for the user. When initialized, the authentication server creates the main application access grant with an empty passphrase as follows. The authentication service does not hold any encryption information about users, so the passphrase used to request the main application access grant does not matter. The encryption keys related to user objects will be overridden in a later step on the client side. It is important that once set to a specific value, this passphrase never changes in the future. Therefore, the best practice is to use an empty passphrase. Whenever a user is authenticated, the authentication service generates the user-specific access grant as follows: The userID is something that uniquely identifies the users in the application and must never change. Along with the user access grant, the authentication service should return a user-specific salt. The salt must always be the same for this user. The salt size is 16 or 32 bytes.
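A hedged sketch of how the authentication service might generate that user-specific access grant and salt (the bucket name, prefix layout, and salt derivation here are illustrative assumptions, not requirements of the package):

    import (
        "crypto/sha256"

        "storj.io/uplink"
    )

    // userGrantAndSalt restricts the application's main access grant to a single
    // user's key prefix and derives a stable 32-byte salt for that user.
    func userGrantAndSalt(appAccess *uplink.Access, userID string) (string, []byte, error) {
        userAccess, err := appAccess.Share(
            uplink.FullPermission(),
            uplink.SharePrefix{Bucket: "app-data", Prefix: userID + "/"},
        )
        if err != nil {
            return "", nil, err
        }
        serialized, err := userAccess.Serialize()
        if err != nil {
            return "", nil, err
        }
        salt := sha256.Sum256([]byte(userID))
        return serialized, salt[:], nil
    }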
Once the application receives the user-specific access grant and the user-specific salt from the authentication service, it has to override the encryption key in the access grant, so users can encrypt and decrypt their files with encryption keys derived from their passphrase. The user-specific access grant is now ready to use by the application. Once you have a valid access grant, you can open a Project with the access that access grant allows for. Projects allow you to manage buckets and objects within buckets. A bucket represents a collection of objects. You can upload, download, list, and delete objects of any size or shape. Objects within buckets are represented by keys, where keys can optionally be listed using the "/" delimiter. Note: Objects and object keys within buckets are end-to-end encrypted, but bucket names themselves are not encrypted, so the billing interface on the Satellite can show you bucket line items. Objects support a couple of kilobytes of arbitrary key/value metadata, and arbitrary-size primary data streams with the ability to read at arbitrary offsets. If you want to access only a small subrange of the data you uploaded, you can use `uplink.DownloadOptions` to specify the download range. Listing objects returns an iterator that allows you to walk through all the items:
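A minimal sketch of such a listing loop, assuming an already opened *uplink.Project and a bucket named "logs" (both placeholders):

    import (
        "context"
        "fmt"

        "storj.io/uplink"
    )

    // listKeys walks every object key in the "logs" bucket.
    func listKeys(ctx context.Context, project *uplink.Project) error {
        objects := project.ListObjects(ctx, "logs", nil)
        for objects.Next() {
            fmt.Println(objects.Item().Key)
        }
        return objects.Err()
    }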
Package validate provides methods to validate a swagger specification, as well as tools to validate data against their schema. This package follows the Swagger 2.0 specification (aka OpenAPI 2.0). The reference can be found here: https://github.com/OAI/OpenAPI-Specification/blob/master/versions/2.0.md. It validates a spec document (from JSON or YAML) against the JSON schema for swagger, then checks a number of extra rules that can't be expressed in JSON schema. Entry points: Reported as errors: Reported as warnings: The schema validation toolkit validates data against JSON-schema draft 04 schemas. It is tested against the full json-schema-testing-suite (https://github.com/json-schema-org/JSON-Schema-Test-Suite), except for the optional part (bignum, ECMA regexp, ...). It supports the complete JSON-schema vocabulary, including keywords not supported by Swagger (e.g. additionalItems, ...). Entry points: With the current version of this package, the following aspects of swagger are not yet supported:
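As a rough illustration of the spec validation entry point, here is a hedged sketch using the companion go-openapi packages; the file path is a placeholder:

    import (
        "log"

        "github.com/go-openapi/loads"
        "github.com/go-openapi/strfmt"
        "github.com/go-openapi/validate"
    )

    func checkSpec() {
        // Load the swagger document (JSON or YAML).
        doc, err := loads.Spec("./swagger.yaml")
        if err != nil {
            log.Fatal(err)
        }
        // Validate it against the swagger JSON schema plus the extra rules.
        if err := validate.Spec(doc, strfmt.Default); err != nil {
            log.Fatalf("spec is invalid: %v", err)
        }
        log.Println("spec is valid")
    }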