Sequence-based Go-native audio mixer for music apps.

Mix seeks to solve the problem of audio mixing for the playback of sequences where audio files and their playback timing are known in advance. Game audio mixers are designed to play audio spontaneously; when the timing is known in advance (e.g. in sequence-based music apps), there is a demand for much greater accuracy in playback timing. Read the API documentation at https://godoc.org/gopkg.in/mix.v0

By Charney Kaye (https://charneykaye.com) of XJ Music Inc. (https://xj.io).

Even after selecting a hardware interface library such as http://www.portaudio.com/ or https://www.libsdl.org/, there remains a critical design problem to be solved. This design is a music application mixer; most available options are geared towards game development. Game audio mixers offer playback timing accuracy of +/- 2 milliseconds, which is totally unacceptable for music, specifically sequence-based sample playback. The design pattern particular to game design is that the timing of the audio is not known in advance: the timing that really matters is that which is assembled in near-real-time in response to user interaction. In the field of music development, the timing is often known in advance, e.g. a sequencer: the composition of music by specifying exactly how, when, and which audio files will be played relative to the beginning of playback. Ergo, Mix targets the playback of sequences whose audio files and timing are known in advance, with the absolute minimum of logical overhead on top of the audio interface.

Mix takes maximum advantage of Go by storing and mixing audio in native Go `[]float64` and natively implementing Paul Vögler's "Loudness Normalization by Logarithmic Dynamic Range Compression" (see The Mixing Algorithm below).

To the Mix API, time is specified as a time.Duration-since-epoch, where the epoch is the moment that mix.Start() was called. Internally, time is tracked as samples-since-epoch at the master-out playback frequency (e.g. 48000 Hz). This is most efficient because source audio is pre-converted to the master-out playback frequency, and all audio math is performed in terms of samples.

The mixing algorithm is inspired by the theory paper "Mixing two digital audio streams with on the fly Loudness Normalization by Logarithmic Dynamic Range Compression" by Paul Vögler, 2012-04-20, published at http://www.voegler.eu/pub/audio/digital-audio-mixing-and-normalization.html.

There's a demo implementation of **mix** in the `demo/` folder of this repository; see `demo/demo.go`. Run it from the root of the project using the defaults (with no actual audio playback), specify options such as exporting WAV bytes via stdout (e.g. `> demo/output.wav`, or piped to the system-native `aplay` for playback), or pass the help flag to show the help screen.

Best efforts will be made to preserve each API version in a release tag that can be parsed, e.g. http://gopkg.in/mix.v0. Mix in good health!
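To make the time bookkeeping concrete, here is a minimal sketch of the samples-since-epoch arithmetic described above. The samplesSinceEpoch helper is hypothetical, written for illustration; it is not part of the package's API:

	package main

	import (
		"fmt"
		"time"
	)

	// samplesSinceEpoch converts a time.Duration since the mix epoch (the
	// moment mix.Start() was called) into a sample offset at the master-out
	// playback frequency, mirroring how Mix tracks time internally.
	func samplesSinceEpoch(d time.Duration, freqHz int) int64 {
		return int64(d.Seconds() * float64(freqHz))
	}

	func main() {
		// A sound scheduled 1.5s after the epoch at 48000 Hz starts at sample 72000.
		fmt.Println(samplesSinceEpoch(1500*time.Millisecond, 48000))
	}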
Package relax is a framework of pluggable components to build RESTful APIs. It provides a thin layer over `net/http` to serve resources, without imposing a rigid structure. It is meant to be used along `http.ServeMux`, but will work as a replacement as it implements `http.Handler`. The framework is divided into components: Encoding, Filters, Routing, Hypermedia, and Resources. These are the parts of a complete REST service. All the components are designed to be pluggable (replaceable) through interfaces by external packages. Relax provides enough built-in functionality to assemble a complete REST API. The system is based on Resource Oriented Architecture (ROA), and draws some inspiration from Heroku's REST API.
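Because a Relax service satisfies http.Handler, it can be mounted on a standard http.ServeMux next to ordinary handlers. The sketch below assumes a NewService constructor and the go-relax import path; treat both as illustrative rather than a verbatim API reference:

	package main

	import (
		"log"
		"net/http"

		"github.com/codehack/go-relax" // assumed import path
	)

	func main() {
		svc := relax.NewService("/api/v1") // assumed constructor name/signature
		mux := http.NewServeMux()
		mux.Handle("/api/v1/", svc) // possible because the service implements http.Handler
		log.Fatal(http.ListenAndServe(":8080", mux))
	}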
Package log15 provides an opinionated, simple toolkit for best-practice logging that is both human and machine readable. It is modeled after the standard library's io and net/http packages. This package requires you to log only key/value pairs. Keys must be strings. Values may be any type that you like. The default output format is logfmt, but you may also choose to use JSON instead if that suits you. Here's how you log: This will output a line that looks like: To get started, you'll want to import the library: Now you're ready to start logging: Because recording a human-meaningful message is common and good practice, the first argument to every logging method is the value to the *implicit* key 'msg'. Additionally, the level you choose for a message will be automatically added with the key 'lvl', and so will the current timestamp with key 't'. You may supply any additional context as a set of key/value pairs to the logging function. log15 allows you to favor terseness, ordering, and speed over safety. This is a reasonable tradeoff for logging functions. You don't need to explicitly state keys/values; log15 understands that they alternate in the variadic argument list: If you really do favor your type-safety, you may choose to pass a log.Ctx instead: Frequently, you want to add context to a logger so that you can track actions associated with it. An http request is a good example. You can easily create new loggers that have context that is automatically included with each log line: This will output a log line that includes the path context that is attached to the logger: The Handler interface defines where log lines are printed to and how they are formatted. Handler is a single interface that is inspired by net/http's handler interface: Handlers can filter records, format them, or dispatch to multiple other Handlers. This package implements a number of Handlers for common logging patterns that are easily composed to create flexible, custom logging structures. Here's an example handler that prints logfmt output to Stdout: Here's an example handler that defers to two other handlers. One handler only prints records from the rpc package in logfmt to standard out. The other prints records at Error level or above in JSON formatted output to the file /var/log/service.json This package implements three Handlers that add debugging information to the context, CallerFileHandler, CallerFuncHandler and CallerStackHandler. Here's an example that adds the source file and line number of each logging call to the context. This will output a line that looks like: Here's an example that logs the call stack rather than just the call site. This will output a line that looks like: The "%+v" format instructs the handler to include the path of the source file relative to the compile time GOPATH. The github.com/go-stack/stack package documents the full list of formatting verbs and modifiers available. The Handler interface is so simple that it's also trivial to write your own. Let's create an example handler which tries to write to one handler, but if that fails it falls back to writing to another handler and includes the error that it encountered when trying to write to the primary. This might be useful when trying to log over a network socket, but if that fails you want to log those records to a file on disk. This pattern is so useful that a generic version that handles an arbitrary number of Handlers is included as part of this library called FailoverHandler.
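Since the inline code samples are omitted above, here is a consolidated sketch of the basics just described - message-first logging, contextual loggers, and handler composition - assuming the common log15 import path:

	package main

	import (
		"os"

		log "github.com/inconshreveable/log15" // assumed import path
	)

	func main() {
		// The first argument is the implicit 'msg' value; the rest is
		// alternating key/value context.
		log.Info("page accessed", "path", "/org", "user_id", 9)

		// A contextual logger: every record it emits includes path=/org.
		l := log.New("path", "/org")
		l.Warn("slow request", "elapsed_ms", 421)

		// Handler composition: logfmt to stdout, plus Error-and-above as
		// JSON to a file. Must.FileHandler panics instead of returning an error.
		log.Root().SetHandler(log.MultiHandler(
			log.StreamHandler(os.Stdout, log.LogfmtFormat()),
			log.LvlFilterHandler(log.LvlError,
				log.Must.FileHandler("/var/log/service.json", log.JsonFormat())),
		))
	}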
Sometimes, you want to log values that are extremely expensive to compute, but you don't want to pay the price of computing them if you haven't turned up your logging level to a high level of detail. This package provides a simple type to annotate a logging operation that you want to be evaluated lazily, just when it is about to be logged, so that it would not be evaluated if an upstream Handler filters it out. Just wrap any function which takes no arguments with the log.Lazy type. For example: If this message is not logged for any reason (like logging at the Error level), then factorRSAKey is never evaluated. The same log.Lazy mechanism can be used to attach context to a logger which you want to be evaluated when the message is logged, but not when the logger is created. For example, let's imagine a game where you have Player objects: You always want to log a player's name and whether they're alive or dead, so when you create the player object, you might do: Only now, even after a player has died, the logger will still report they are alive because the logging context is evaluated when the logger was created. By using the Lazy wrapper, we can defer the evaluation of whether the player is alive or not to each log message, so that the log records will reflect the player's current state no matter when the log message is written: If log15 detects that stdout is a terminal, it will configure the default handler for it (which is log.StdoutHandler) to use TerminalFormat. This format logs records nicely for your terminal, including color-coded output based on log level. Because log15 allows you to step around the type system, there are a few ways you can specify invalid arguments to the logging functions. You could, for example, wrap something that is not a zero-argument function with log.Lazy or pass a context key that is not a string. Since logging libraries are typically the mechanism by which errors are reported, it would be onerous for the logging functions to return errors. Instead, log15 handles errors by making these guarantees to you: - Any log record containing an error will still be printed with the error explained to you as part of the log record. - Any log record containing an error will include the context key LOG15_ERROR, enabling you to easily (and if you like, automatically) detect if any of your logging calls are passing bad values. Understanding this, you might wonder why the Handler interface can return an error value in its Log method. Handlers are encouraged to return errors only if they fail to write their log records out to an external source, e.g. if the syslog daemon is not responding. This allows the construction of useful handlers which cope with those failures like the FailoverHandler. log15 is intended to be useful for library authors as a way to provide configurable logging to users of their library. Best practice for use in a library is to always disable all output for your logger by default and to provide a public Logger instance that consumers of your library can configure. Like so: Users of your library may then enable it if they like: The ability to attach context to a logger is a powerful one. Where should you do it and why? I favor embedding a Logger directly into any persistent object in my application and adding unique, tracing context keys to it. For instance, imagine I am writing a web browser: When a new tab is created, I assign a logger to it with the url of the tab as context so it can easily be traced through the logs.
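A short sketch of the Player example described above, showing eager versus lazy context (import path assumed as before):

	package main

	import (
		log "github.com/inconshreveable/log15"
	)

	type Player struct {
		Name  string
		Alive bool
		log.Logger
	}

	func main() {
		p := &Player{Name: "Eve", Alive: true}

		// Eager context: alive is captured once, at logger creation, and
		// will keep reporting true even after the player dies.
		p.Logger = log.New("name", p.Name, "alive", p.Alive)

		// Lazy context: the closure runs at each log call, so records
		// always reflect the player's current state.
		p.Logger = log.New("name", p.Name,
			"alive", log.Lazy{Fn: func() bool { return p.Alive }})

		p.Alive = false
		p.Info("player state") // logs alive=false
	}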
Now, whenever we perform any operation with the tab, we'll log with its embedded logger and it will include the tab title automatically: There's only one problem. What if the tab url changes? We could use log.Lazy to make sure the current url is always written, but that would mean that we couldn't trace a tab's full lifetime through our logs after the user navigates to a new URL. Instead, think about what values to attach to your loggers the same way you think about what to use as a key in a SQL database schema. If it's possible to use a natural key that is unique for the lifetime of the object, do so. But otherwise, log15's ext package has a handy RandId function to let you generate what you might call "surrogate keys". They're just random hex identifiers to use for tracing. Back to our Tab example, we would prefer to set up our Logger like so: Now we'll have a unique traceable identifier even across loading new urls, but we'll still be able to see the tab's current url in the log messages. For all Handler functions which can return an error, there is a version of that function which will return no error but panics on failure. They are all available on the Must object. For example: All of the following excellent projects inspired the design of this library: code.google.com/p/log4go github.com/op/go-logging github.com/technoweenie/grohl github.com/Sirupsen/logrus github.com/kr/logfmt github.com/spacemonkeygo/spacelog golang's stdlib, notably io and net/http https://xkcd.com/927/
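Putting the surrogate-key advice together with lazy context, a Tab might be set up like this (import paths assumed):

	package main

	import (
		log "github.com/inconshreveable/log15"
		"github.com/inconshreveable/log15/ext"
	)

	type Tab struct {
		url string
		log.Logger
	}

	func NewTab(url string) *Tab {
		t := &Tab{url: url}
		// A random surrogate key keeps the tab traceable across
		// navigations, while the lazily evaluated url shows where it is now.
		t.Logger = log.New(
			"id", ext.RandId(8),
			"url", log.Lazy{Fn: func() string { return t.url }},
		)
		return t
	}

	func main() {
		t := NewTab("https://example.com")
		t.Info("tab opened")
		t.url = "https://example.org"
		t.Info("tab navigated") // same id, new url
	}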
Package bolt implements a low-level key/value store in pure Go. It supports fully serializable transactions, ACID semantics, and lock-free MVCC with multiple readers and a single writer. Bolt can be used for projects that want a simple data store without the need to add large dependencies such as Postgres or MySQL. Bolt is a single-level, zero-copy, B+tree data store. This means that Bolt is optimized for fast read access and does not require recovery in the event of a system crash. Transactions which have not finished committing will simply be rolled back in the event of a crash. The design of Bolt is based on Howard Chu's LMDB database project. Bolt currently works on Windows, Mac OS X, and Linux. There are only a few types in Bolt: DB, Bucket, Tx, and Cursor. The DB is a collection of buckets and is represented by a single file on disk. A bucket is a collection of unique keys that are associated with values. Transactions provide either read-only or read-write access to the database. Read-only transactions can retrieve key/value pairs and can use Cursors to iterate over the dataset sequentially. Read-write transactions can create and delete buckets and can insert and remove keys. Only one read-write transaction is allowed at a time. The database uses a read-only, memory-mapped data file to ensure that applications cannot corrupt the database, however, this means that keys and values returned from Bolt cannot be changed. Writing to a read-only byte slice will cause Go to panic. Keys and values retrieved from the database are only valid for the life of the transaction. When used outside the transaction, these byte slices can point to different data or can point to invalid memory which will cause a panic.
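A minimal sketch of the Bucket/Tx workflow described above (the import path is assumed; remember that values read inside View are only valid for the life of the transaction):

	package main

	import (
		"fmt"
		"log"

		"github.com/boltdb/bolt" // assumed import path
	)

	func main() {
		db, err := bolt.Open("my.db", 0600, nil)
		if err != nil {
			log.Fatal(err)
		}
		defer db.Close()

		// Read-write transaction: create a bucket and insert a key.
		err = db.Update(func(tx *bolt.Tx) error {
			b, err := tx.CreateBucketIfNotExists([]byte("widgets"))
			if err != nil {
				return err
			}
			return b.Put([]byte("answer"), []byte("42"))
		})
		if err != nil {
			log.Fatal(err)
		}

		// Read-only transaction: copy values out if you need them after
		// the transaction ends.
		db.View(func(tx *bolt.Tx) error {
			v := tx.Bucket([]byte("widgets")).Get([]byte("answer"))
			fmt.Printf("answer=%s\n", v)
			return nil
		})
	}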
Package log is a drop-in replacement for the standard Go logging library "log": it is fully source-code compatible, supporting the entire standard library API, while at the same time offering advanced logging features through an extended API. The design goals of gonelog were: Out of the box the default logger with package level methods works like the standard library *log.Logger with all the standard flags and methods: Under the hood the default *log.Logger is however a log context object which can have key/value data and which generates log events with a syslog level and hands them off to a log Handler for formatting and output. The default Logger supports all this as well, using log level constants source code compatible with the "log/syslog" package through the github.com/One-com/gone/log/syslog package: Logging with key/value data is (in its most simple form) done by calling level specific functions. The first argument is the message, subsequent arguments the key/value data: Each *log.Logger object has a current "log level" which determines the maximum log level for which events are actually generated. Logging above that level will be ignored. This log level can be controlled: Calling Fatal*() and Panic*() will, in addition to exiting/panicking, log at level ALERT. The Print*() methods will log events with a configurable "default" log level - which defaults to INFO. By default the Logger *will* generate log events for Print*() calls even though the log level is lower. The Logger can be set to respect the actual log level also for Print*() statements by the second argument to SetPrintLevel(). A new custom Logger with its own behavior and formatting handler can be created: A custom Logger will not by default spend time timestamping events or registering file/line information. You have to enable that explicitly (it's not enabled by setting the flags on a formatting handler). When you have key/value data which you need logged in all log events, but don't want to remember to put into every log statement, you can create a "child" Logger: To simply set the standard logger in a minimal mode where it only outputs <level>message to STDOUT and let an external daemon supervisor/log system do the rest (including timestamping), just do: Having many log statements can be expensive, especially if the arguments to be logged are resource intensive to compute and no log events are generated anyway. There are two ways to get around that. The first is to do lazy evaluation of arguments: The other is to pick a comma-ok style log function: Sometimes it can be repetitive to make a lot of log statements logging many attributes of the same kind of object by explicitly accessing every attribute. To make that simpler, every object can implement the Logable interface by creating a LogValues() function returning the attributes to be logged (with keys). The object can then be logged by directly providing it as an argument to a log function: Loggers can have names, placing them in a global "/" separated hierarchy. It's recommended to create a Logger by mentioning it by name using GetLogger("logger/name") - instead of creating unnamed Loggers with NewLogger(). If such a logger exists you will get it returned, so you can configure it and set the formatter/output. Otherwise a new logger by that name is created. Libraries are encouraged to publish the names of their Loggers and to name Loggers after their Go package.
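A hedged sketch of the flow just described; the level-method and setter names below are assumptions based on the text, not a verbatim API reference:

	package main

	import (
		"github.com/One-com/gone/log"
		"github.com/One-com/gone/log/syslog"
	)

	func main() {
		// Standard library compatible usage still works out of the box.
		log.Println("hello")

		// Level-controlled key/value logging: message first, then
		// alternating key/value data (function names assumed).
		log.SetLevel(syslog.LOG_WARNING)
		l := log.GetLogger("myapp/subsystem")
		l.ERROR("db down", "retry_in", "5s")
	}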
This works exactly like the Python "logging" library - with one exception: When logging an event at a Logger, the tree of Loggers is traversed by name towards the root only until the first Logger with a Handler attached is found. The log event is then sent to that handler. If that handler returns an error, the parent Logger and its Handler are tried. This allows you to construct a "last resort" parent for errors in the default log Handler. The Python behaviour is to send the event to all Handlers found in the Logger tree. That is not the way it's done here: only one Handler will be given the event to log. If you want more Handlers getting the event, use a MultiHandler. Happy logging.
Package bubo provides a framework for building and orchestrating AI agents with a focus on reliability, extensibility, and maintainable workflows. It supports multi-agent conversations, structured output, and flexible execution contexts. The package implements a robust foundation for creating AI-powered applications through several key abstractions: A typical workflow involves creating agents, defining their capabilities, and orchestrating their interactions. The package is built around several core concepts: 1. Execution Engine (execution.go) 2. Hooks (hook.go) 3. Promises (promise.go) 4. Tasks (task.go) 5. Knots (knot.go) The execution package is a core component of the framework. Bubo integrates with several backend systems. The examples directory contains various usage patterns; see the examples directory for complete implementations. Best practices cover: 1. Agent Design 2. Workflow Management 3. Tool Implementation 4. Error Handling The package is designed to be thread-safe when used correctly. For more information about specific components, see their respective documentation.
Package skogul is a framework for receiving, processing and forwarding data, typically metric data or event-oriented data, at high throughput. It is designed to be as agnostic as possible with regards to how it transmits data and how it receives it, and the processors in between need not worry about how the data got there or how it will be treated in the next chain. This means you can use Skogul to receive data on an influxdb-like line-based TCP interface and send it on to postgres - or influxdb - without having to write explicit support; just set up the chain. The guiding principles of Skogul are: - Make as few assumptions as possible about how data is received - Be stupid fast End users should only need to worry about the cmd/skogul tool, which comes fully equipped with self-contained documentation. Adding new logic to Skogul should also be fairly easy. New developers should focus on understanding two things: 1. The skogul.Container data structure - which is the heart of Skogul. 2. The relationship from receiver to handler to sender. The Container is documented in this very package. Receivers are where data originates within Skogul. The typical Receiver will receive data from the outside world, e.g. by other tools posting data to an HTTP endpoint. Receivers can also be used to "create" data, either test data or, for example, log data. When skogul starts, it will start all receivers that are configured. Handlers determine what is done with the data once received. They are responsible for parsing raw data and optionally transforming it. This is the only place where it is allowed to _modify_ data. Today, the only transformer is the "templater", which allows a collection of metrics which share certain attributes (e.g. all collected at the same time and from the same machine) to provide these shared attributes in a template which the "templater" transformer then applies to all metrics. Other examples of transformations that make sense are: - Adding a metadata field - Removing a metadata field - Removing all but a specific set of fields - Converting nested metrics to multiple metrics or flattening them Once a handler has done its deed, it sends the Container to the sender, and this is where "the fun begins", so to speak. Senders consist of just a data structure that implements the Send() interface (a sketch follows below). They are not allowed to change the container, but besides that, they can do "whatever". The most obvious example is to send the container to a suitable storage system - e.g., a time series database. So if you want to add support for a new time series database in Skogul, you will write a sender. In addition to that, many senders serve only to add internal logic and pass data on to other senders. Each sender should only do one specific thing. For example, if you want to write data both to InfluxDB and MySQL, you need three senders: the "MySQL" and "InfluxDB" senders, and the "dupe" sender, which just takes a list of other senders and sends whatever it receives on to all of them. Today, Senders and Receivers both have an identical "Auto"-system, found in auto.go of the relevant directories. This is how the individual implementations are made discoverable to the configuration system, and how documentation is provided. Documentation for the settings of a sender/receiver is handled as struct tags. Once more parsers/transformers are added, they will likely also use a similar system.
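Here is a hedged sketch of a trivial sender; the interface shape and import path are inferred from the description above rather than quoted from the code:

	package printer

	import (
		"fmt"

		"github.com/telenornms/skogul" // assumed import path
	)

	// Printer is a sender: a data structure implementing Send(). It must
	// not modify the container it receives.
	type Printer struct{}

	func (p Printer) Send(c *skogul.Container) error {
		for _, m := range c.Metrics {
			fmt.Printf("time=%v metadata=%v data=%v\n", m.Time, m.Metadata, m.Data)
		}
		return nil
	}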
Package hotkey provides the basic facility to register a system-level global hotkey shortcut so that an application can be notified if a user triggers the desired hotkey. A hotkey must be a combination of modifiers and a single key. Note platform specific details: On macOS, due to an OS restriction (other platforms do not have this restriction), hotkey events must be handled on the "main thread". Therefore, in order to use this package properly, one must start an OS main event loop on the main thread. For self-contained applications, using the mainthread package is possible. It is unnecessary for applications based on other GUI frameworks, such as fyne, ebiten, or Gio. See the "examples" for more examples. On Linux (X11), when AutoRepeat is enabled in the X server, the Keyup is triggered automatically and continuously as Keydown continues. On Linux (X11), some keys may be mapped to multiple Mod keys. To correctly register the key combination, one must use the correct underlying keycode combination. For example, a regular Ctrl+Alt+S might be registered as: Ctrl+Mod2+Mod4+S. If this package does not include a desired key, one can always provide the keycode to the API. For example, if a key code is 0x15, then the corresponding key is `hotkey.Key(0x15)`. The following is a minimal example:
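Reconstructing that minimal example (the golang.design import paths are assumed): register Ctrl+Shift+S, then wait for key-down and key-up events. The mainthread.Init call satisfies the macOS main-thread requirement noted above:

	package main

	import (
		"log"

		"golang.design/x/hotkey"
		"golang.design/x/hotkey/mainthread"
	)

	func main() { mainthread.Init(fn) } // required on macOS, see notes above

	func fn() {
		hk := hotkey.New([]hotkey.Modifier{hotkey.ModCtrl, hotkey.ModShift}, hotkey.KeyS)
		if err := hk.Register(); err != nil {
			log.Fatalf("hotkey registration failed: %v", err)
		}
		<-hk.Keydown()
		log.Println("hotkey down")
		<-hk.Keyup()
		log.Println("hotkey up")
		hk.Unregister()
	}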
nxadm/tail provides a Go library that emulates the features of the BSD `tail` program. The library comes with full support for truncation/move detection as it is designed to work with log rotation tools. The library works on all operating systems supported by Go, including POSIX systems like Linux and *BSD, and MS Windows. Go 1.9 is the oldest compiler release supported.
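Typical usage mirrors `tail -f`; this sketch follows the library's documented pattern, with ReOpen covering the truncation/move detection mentioned above:

	package main

	import (
		"fmt"

		"github.com/nxadm/tail"
	)

	func main() {
		// Follow the file like `tail -f`, reopening it when a log
		// rotation tool moves or truncates it.
		t, err := tail.TailFile("/var/log/app.log",
			tail.Config{Follow: true, ReOpen: true})
		if err != nil {
			panic(err)
		}
		for line := range t.Lines {
			fmt.Println(line.Text)
		}
	}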
Package conl implements CONL parsing and serializing. CONL is a post-modern, human-centric configuration language. It is designed to be easy to read, easy to edit, and easy to parse, in that order. It uses a JSON-like structure of scalars, maps, and lists; an INI-like type system to defer scalar parsing until runtime; and a (simplified) YAML-like syntax for indentation. Like the builtin json package, CONL can automatically convert between Go types and CONL values. For example, you could parse the above document into a struct defined in Go as: If your type implements the encoding.TextMarshaler and encoding.TextUnmarshaler interfaces then CONL will use them to convert between a scalar and your type; otherwise scalars are parsed using the strconv package. Package conl supports a very similar set of Go types to encoding/json. In particular, any string, number, or boolean value can be serialized, as can any struct, map, array, or slice of such values. On the flip side, channels and functions cannot be serialized. Unlike json, conl allows map keys to be numbers, bools, or arrays or structs of those types, in addition to strings.
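A sketch of the json-style round trip described above. The document literal, the import path, and the assumption that field matching mirrors encoding/json are all illustrative, not verbatim from the package:

	package main

	import (
		"fmt"

		conl "github.com/ConradIrwin/conl-go" // assumed import path
	)

	type Config struct {
		Name  string
		Ports []int
	}

	func main() {
		// Illustrative CONL document: a scalar and a list.
		doc := []byte("name = demo\nports\n  = 8080\n  = 8081\n")

		var c Config
		if err := conl.Unmarshal(doc, &c); err != nil { // assumed json-like signature
			fmt.Println("parse error:", err)
			return
		}
		fmt.Printf("%+v\n", c)
	}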
Package validator implements value validations for structs and individual fields based on tags. It can also handle Cross-Field and Cross-Struct validation for nested structs and has the ability to dive into arrays and maps of any type. See more examples at https://github.com/go-playground/validator/tree/master/_examples Validator is designed to be thread-safe and used as a singleton instance. It caches information about your struct and validations, in essence only parsing your validation tags once per struct type. Using multiple instances neglects the benefit of caching. The non-thread-safe functions are explicitly marked as such in the documentation. Doing things this way is actually how the standard library does it; see the os.Open function here: The authors return type "error" to avoid the issue discussed in the following, where err is always != nil: Validator returns only InvalidValidationError for bad validation input, and nil or ValidationErrors as type error; so, in your code all you need to do is check if the error returned is not nil, and if it's not, check if the error is InvalidValidationError (if necessary; most of the time it isn't), then type-cast it to type ValidationErrors like so: err.(validator.ValidationErrors). Custom Validation functions can be added. Example: Cross-Field Validation can be done via the following tags: If, however, some custom cross-field validation is required, it can be done using a custom validation. Why not just have cross-fields validation tags (i.e. only eqcsfield and not eqfield)? The reason is efficiency. If you want to check a field within the same struct, "eqfield" only has to find the field on the same struct (1 level). But, if we used "eqcsfield" it could be multiple levels down. Example: Multiple validators on a field will process in the order defined. Example: Bad Validator definitions are not handled by the library. Example: Baked In Cross-Field validation only compares fields on the same struct. If Cross-Field + Cross-Struct validation is needed you should implement your own custom validator. Comma (",") is the default separator of validation tags. If you wish to have a comma included within the parameter (i.e. excludesall=,) you will need to use the UTF-8 hex representation 0x2C, which is replaced in the code as a comma, so the above will become excludesall=0x2C. Pipe ("|") is the 'or' validation tag separator. If you wish to have a pipe included within the parameter i.e. excludesall=| you will need to use the UTF-8 hex representation 0x7C, which is replaced in the code as a pipe, so the above will become excludesall=0x7C. Here is a list of the current built in validators: Tells the validation to skip this struct field; this is particularly handy in ignoring embedded structs from being validated. (Usage: -) This is the 'or' operator allowing multiple validators to be used and accepted. (Usage: rgb|rgba) <-- this would allow either rgb or rgba colors to be accepted. This can also be combined with 'and' for example ( Usage: omitempty,rgb|rgba) When a field that is a nested struct is encountered, and contains this flag, any validation on the nested struct will be run, but none of the nested struct fields will be validated. This is useful if inside of your program you know the struct will be valid, but need to verify it has been assigned. NOTE: only "required" and "omitempty" can be used on a struct itself. Same as structonly tag except that any struct level validations will not run.
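A consolidated sketch of singleton usage, custom validation registration, and the error-handling pattern just described, using the v10 import path:

	package main

	import (
		"fmt"

		"github.com/go-playground/validator/v10"
	)

	type User struct {
		Email string `validate:"required,email"`
		Name  string `validate:"required,is-awesome"`
	}

	func main() {
		validate := validator.New()

		// Register a custom validation function for the "is-awesome" tag.
		_ = validate.RegisterValidation("is-awesome", func(fl validator.FieldLevel) bool {
			return fl.Field().String() == "awesome"
		})

		err := validate.Struct(User{Email: "not-an-email", Name: "meh"})
		if err != nil {
			// InvalidValidationError only occurs for bad input to Struct itself.
			if _, ok := err.(*validator.InvalidValidationError); ok {
				fmt.Println(err)
				return
			}
			for _, fe := range err.(validator.ValidationErrors) {
				fmt.Println(fe.Namespace(), fe.Tag())
			}
		}
	}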
Allows conditional validation; for example, if a field is not set with a value (determined by the "required" validator) then other validation such as min or max won't run, but if a value is set validation will run. Allows skipping the validation if the value is nil (same as omitempty, but only for nil values). This tells the validator to dive into a slice, array or map and validate that level of the slice, array or map with the validation tags that follow. Multidimensional nesting is also supported; each level you wish to dive into will require another dive tag. dive has some sub-tags, 'keys' & 'endkeys'; please see the Keys & EndKeys section just below. Example #1 Example #2 Keys & EndKeys These are to be used together directly after the dive tag and tell the validator that anything between 'keys' and 'endkeys' applies to the keys of a map and not the values; think of it like the 'dive' tag, but for map keys instead of values. Multidimensional nesting is also supported; each level you wish to validate will require another 'keys' and 'endkeys' tag. These tags are only valid for maps. Example #1 Example #2 This validates that the value is not the data type's default zero value. For numbers ensures value is not zero. For strings ensures value is not "". For booleans ensures value is not false. For slices, maps, pointers, interfaces, channels and functions ensures the value is not nil. For structs ensures value is not the zero value when using WithRequiredStructEnabled. The field under validation must be present and not empty only if all the other specified fields are equal to the value following the specified field. For strings ensures value is not "". For slices, maps, pointers, interfaces, channels and functions ensures the value is not nil. For structs ensures value is not the zero value. Examples: The field under validation must be present and not empty unless all the other specified fields are equal to the value following the specified field. For strings ensures value is not "". For slices, maps, pointers, interfaces, channels and functions ensures the value is not nil. For structs ensures value is not the zero value. Examples: The field under validation must be present and not empty only if any of the other specified fields are present. For strings ensures value is not "". For slices, maps, pointers, interfaces, channels and functions ensures the value is not nil. For structs ensures value is not the zero value. Examples: The field under validation must be present and not empty only if all of the other specified fields are present. For strings ensures value is not "". For slices, maps, pointers, interfaces, channels and functions ensures the value is not nil. For structs ensures value is not the zero value. Example: The field under validation must be present and not empty only when any of the other specified fields are not present. For strings ensures value is not "". For slices, maps, pointers, interfaces, channels and functions ensures the value is not nil. For structs ensures value is not the zero value. Examples: The field under validation must be present and not empty only when all of the other specified fields are not present. For strings ensures value is not "". For slices, maps, pointers, interfaces, channels and functions ensures the value is not nil. For structs ensures value is not the zero value. Example: The field under validation must not be present or not empty only if all the other specified fields are equal to the value following the specified field.
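Tag sketches for dive and keys/endkeys as described above; these plug into the validate.Struct call shown in the earlier sketch:

	type Account struct {
		// Validate each element of the slice as an email address.
		Emails []string `validate:"required,dive,email"`

		// Between 'keys' and 'endkeys' the tags apply to the map keys;
		// after 'endkeys' they apply to the values.
		Theme map[string]string `validate:"dive,keys,required,endkeys,hexcolor"`
	}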
For strings ensures value is not "". For slices, maps, pointers, interfaces, channels and functions ensures the value is not nil. For structs ensures value is not the zero value. Examples: The field under validation must not be present or empty unless all the other specified fields are equal to the value following the specified field. For strings ensures value is not "". For slices, maps, pointers, interfaces, channels and functions ensures the value is not nil. For structs ensures value is not the zero value. Examples: This validates that the value is the default value and is almost the opposite of required. For numbers, length will ensure that the value is equal to the parameter given. For strings, it checks that the string length is exactly that number of characters. For slices, arrays, and maps, validates the number of items. Example #1 Example #2 (time.Duration) For time.Duration, len will ensure that the value is equal to the duration given in the parameter. For numbers, max will ensure that the value is less than or equal to the parameter given. For strings, it checks that the string length is at most that number of characters. For slices, arrays, and maps, validates the number of items. Example #1 Example #2 (time.Duration) For time.Duration, max will ensure that the value is less than or equal to the duration given in the parameter. For numbers, min will ensure that the value is greater or equal to the parameter given. For strings, it checks that the string length is at least that number of characters. For slices, arrays, and maps, validates the number of items. Example #1 Example #2 (time.Duration) For time.Duration, min will ensure that the value is greater than or equal to the duration given in the parameter. For strings & numbers, eq will ensure that the value is equal to the parameter given. For slices, arrays, and maps, validates the number of items. Example #1 Example #2 (time.Duration) For time.Duration, eq will ensure that the value is equal to the duration given in the parameter. For strings & numbers, ne will ensure that the value is not equal to the parameter given. For slices, arrays, and maps, validates the number of items. Example #1 Example #2 (time.Duration) For time.Duration, ne will ensure that the value is not equal to the duration given in the parameter. For strings, ints, and uints, oneof will ensure that the value is one of the values in the parameter. The parameter should be a list of values separated by whitespace. Values may be strings or numbers. To match strings with spaces in them, include the target string between single quotes. Kind of like an 'enum'. Works the same as oneof but is case insensitive and therefore only accepts strings. For numbers, this will ensure that the value is greater than the parameter given. For strings, it checks that the string length is greater than that number of characters. For slices, arrays and maps it validates the number of items. Example #1 Example #2 (time.Time) For time.Time ensures the time value is greater than time.Now.UTC(). Example #3 (time.Duration) For time.Duration, gt will ensure that the value is greater than the duration given in the parameter. Same as 'min' above. Kept both to make terminology with 'len' easier. Example #1 Example #2 (time.Time) For time.Time ensures the time value is greater than or equal to time.Now.UTC(). Example #3 (time.Duration) For time.Duration, gte will ensure that the value is greater than or equal to the duration given in the parameter. 
For numbers, this will ensure that the value is less than the parameter given. For strings, it checks that the string length is less than that number of characters. For slices, arrays, and maps it validates the number of items. Example #1 Example #2 (time.Time) For time.Time ensures the time value is less than time.Now.UTC(). Example #3 (time.Duration) For time.Duration, lt will ensure that the value is less than the duration given in the parameter. Same as 'max' above. Kept both to make terminology with 'len' easier. Example #1 Example #2 (time.Time) For time.Time ensures the time value is less than or equal to time.Now.UTC(). Example #3 (time.Duration) For time.Duration, lte will ensure that the value is less than or equal to the duration given in the parameter. This will validate the field value against another field's value either within a struct or passed in field. Example #1: Example #2: Field Equals Another Field (relative) This does the same as eqfield except that it validates the field provided relative to the top level struct. This will validate the field value against another field's value either within a struct or passed in field. Examples: Field Does Not Equal Another Field (relative) This does the same as nefield except that it validates the field provided relative to the top level struct. Only valid for Numbers, time.Duration and time.Time types, this will validate the field value against another field's value either within a struct or passed in field. Usage examples are for validation of a Start and End date: Example #1: Example #2: This does the same as gtfield except that it validates the field provided relative to the top level struct. Only valid for Numbers, time.Duration and time.Time types, this will validate the field value against another field's value either within a struct or passed in field. Usage examples are for validation of a Start and End date: Example #1: Example #2: This does the same as gtefield except that it validates the field provided relative to the top level struct. Only valid for Numbers, time.Duration and time.Time types, this will validate the field value against another field's value either within a struct or passed in field. Usage examples are for validation of a Start and End date: Example #1: Example #2: This does the same as ltfield except that it validates the field provided relative to the top level struct. Only valid for Numbers, time.Duration and time.Time types, this will validate the field value against another field's value either within a struct or passed in field. Usage examples are for validation of a Start and End date: Example #1: Example #2: This does the same as ltefield except that it validates the field provided relative to the top level struct. This does the same as contains except for struct fields. It should only be used with string types. See the behavior of reflect.Value.String() for behavior on other types. This does the same as excludes except for struct fields. It should only be used with string types. See the behavior of reflect.Value.String() for behavior on other types. For arrays & slices, unique will ensure that there are no duplicates. For maps, unique will ensure that there are no duplicate values. For slices of struct, unique will ensure that there are no duplicate values in a field of the struct specified via a parameter.
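A cross-field tag sketch for the Start/End date case mentioned above (assumes a time import):

	type Booking struct {
		Email        string    `validate:"required,email"`
		ConfirmEmail string    `validate:"required,eqfield=Email"`
		Start        time.Time `validate:"required"`
		End          time.Time `validate:"required,gtfield=Start"` // End must be after Start
	}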
This validates that a string value contains ASCII alpha characters only This validates that a string value contains ASCII alphanumeric characters only This validates that a string value contains unicode alpha characters only This validates that a string value contains unicode alphanumeric characters only This validates that a string value can successfully be parsed into a boolean with strconv.ParseBool This validates that a string value contains number values only. For integers or floats it returns true. This validates that a string value contains a basic numeric value; "basic" excludes exponents etc. For integers or floats it returns true. This validates that a string value contains a valid hexadecimal. This validates that a string value contains a valid hex color including the hashtag (#) This validates that a string value contains only lowercase characters. An empty string is not a valid lowercase string. This validates that a string value contains only uppercase characters. An empty string is not a valid uppercase string. This validates that a string value contains a valid rgb color This validates that a string value contains a valid rgba color This validates that a string value contains a valid hsl color This validates that a string value contains a valid hsla color This validates that a string value contains a valid E.164 Phone number https://en.wikipedia.org/wiki/E.164 (ex. +1123456789) This validates that a string value contains a valid email. This may not conform to all possibilities of any RFC standard, but neither does any email provider accept all possibilities. This validates that a string value is valid JSON This validates that a string value is a valid JWT This validates that a string value contains a valid file path and that the file exists on the machine. This is done using os.Stat, which is a platform independent function. This validates that a string value contains a valid file path and that the file exists on the machine and is an image. This is done using os.Stat and github.com/gabriel-vasile/mimetype This validates that a string value contains a valid file path but does not validate the existence of that file. This is done using os.Stat, which is a platform independent function. This validates that a string value contains a valid url This will accept any url the golang request uri accepts but must contain a schema for example http:// or rtmp:// This validates that a string value contains a valid uri This will accept any uri the golang request uri accepts This validates that a string value contains a valid URN according to the RFC 2141 spec. This validates that a string value contains a valid base32 value. Although an empty string is valid base32 this will report an empty string as an error; if you wish to accept an empty string as valid you can use this with the omitempty tag. This validates that a string value contains a valid base64 value. Although an empty string is valid base64 this will report an empty string as an error; if you wish to accept an empty string as valid you can use this with the omitempty tag. This validates that a string value contains a valid base64 URL safe value according to the RFC4648 spec. Although an empty string is a valid base64 URL safe value, this will report an empty string as an error; if you wish to accept an empty string as valid you can use this with the omitempty tag. This validates that a string value contains a valid base64 URL safe value, but without = padding, according to the RFC4648 spec, section 3.2.
Although an empty string is a valid base64 URL safe value, this will report an empty string as an error; if you wish to accept an empty string as valid you can use this with the omitempty tag. This validates that a string value contains a valid bitcoin address. The format of the string is checked to ensure it matches one of the supported formats (P2PKH, P2SH) and checksum validation is performed. Bitcoin Bech32 Address (segwit) This validates that a string value contains a valid bitcoin Bech32 address as defined by bip-0173 (https://github.com/bitcoin/bips/blob/master/bip-0173.mediawiki) Special thanks to Pieter Wuille for providing reference implementations. This validates that a string value contains a valid ethereum address. The format of the string is checked to ensure it matches the standard Ethereum address format. This validates that a string value contains the substring value. This validates that a string value contains any Unicode code points in the substring value. This validates that a string value contains the supplied rune value. This validates that a string value does not contain the substring value. This validates that a string value does not contain any Unicode code points in the substring value. This validates that a string value does not contain the supplied rune value. This validates that a string value starts with the supplied string value. This validates that a string value ends with the supplied string value. This validates that a string value does not start with the supplied string value. This validates that a string value does not end with the supplied string value. This validates that a string value contains a valid isbn10 or isbn13 value. This validates that a string value contains a valid isbn10 value. This validates that a string value contains a valid isbn13 value. This validates that a string value contains a valid UUID. Uppercase UUID values will not pass - use `uuid_rfc4122` instead. This validates that a string value contains a valid version 3 UUID. Uppercase UUID values will not pass - use `uuid3_rfc4122` instead. This validates that a string value contains a valid version 4 UUID. Uppercase UUID values will not pass - use `uuid4_rfc4122` instead. This validates that a string value contains a valid version 5 UUID. Uppercase UUID values will not pass - use `uuid5_rfc4122` instead. This validates that a string value contains a valid ULID value. This validates that a string value contains only ASCII characters. NOTE: if the string is blank, this validates as true. This validates that a string value contains only printable ASCII characters. NOTE: if the string is blank, this validates as true. This validates that a string value contains one or more multibyte characters. NOTE: if the string is blank, this validates as true. This validates that a string value contains a valid DataURI. NOTE: this will also validate that the data portion is valid base64 This validates that a string value contains a valid latitude. This validates that a string value contains a valid longitude. This validates that a string value contains a valid U.S. Social Security Number. This validates that a string value contains a valid IP Address. This validates that a string value contains a valid v4 IP Address. This validates that a string value contains a valid v6 IP Address. This validates that a string value contains a valid CIDR Address. This validates that a string value contains a valid v4 CIDR Address. This validates that a string value contains a valid v6 CIDR Address.
This validates that a string value contains a valid resolvable TCP Address. This validates that a string value contains a valid resolvable v4 TCP Address. This validates that a string value contains a valid resolvable v6 TCP Address. This validates that a string value contains a valid resolvable UDP Address. This validates that a string value contains a valid resolvable v4 UDP Address. This validates that a string value contains a valid resolvable v6 UDP Address. This validates that a string value contains a valid resolvable IP Address. This validates that a string value contains a valid resolvable v4 IP Address. This validates that a string value contains a valid resolvable v6 IP Address. This validates that a string value contains a valid Unix Address. This validates that a string value contains a valid MAC Address. Note: See Go's ParseMAC for accepted formats and types: This validates that a string value is a valid Hostname according to RFC 952 https://tools.ietf.org/html/rfc952 This validates that a string value is a valid Hostname according to RFC 1123 https://tools.ietf.org/html/rfc1123 Fully Qualified Domain Name (FQDN) This validates that a string value contains a valid FQDN. This validates that a string value appears to be an HTML element tag, including those described at https://developer.mozilla.org/en-US/docs/Web/HTML/Element This validates that a string value is a proper character reference in decimal or hexadecimal format. This validates that a string value is percent-encoded (URL encoded) according to https://tools.ietf.org/html/rfc3986#section-2.1 This validates that a string value contains a valid directory and that it exists on the machine. This is done using os.Stat, which is a platform independent function. This validates that a string value contains a valid directory but does not validate the existence of that directory. This is done using os.Stat, which is a platform independent function. It is safest to suffix the string with os.PathSeparator if the directory may not exist at the time of validation. This validates that a string value contains a valid DNS hostname and port that can be used to validate fields typically passed to sockets and connections. This validates that a string value is a valid datetime based on the supplied datetime format. The supplied format must match the official Go time format layout as documented in https://golang.org/pkg/time/ This validates that a string value is a valid country code based on the iso3166-1 alpha-2 standard. see: https://www.iso.org/iso-3166-country-codes.html This validates that a string value is a valid country code based on the iso3166-1 alpha-3 standard. see: https://www.iso.org/iso-3166-country-codes.html This validates that a string value is a valid country code based on the iso3166-1 alpha-numeric standard. see: https://www.iso.org/iso-3166-country-codes.html This validates that a string value is a valid BCP 47 language tag, as parsed by language.Parse. More information on https://pkg.go.dev/golang.org/x/text/language BIC (SWIFT code) This validates that a string value is a valid Business Identifier Code (SWIFT code), defined in ISO 9362. More information on https://www.iso.org/standard/60390.html This validates that a string value is a valid dns RFC 1035 label, defined in RFC 1035. More information on https://datatracker.ietf.org/doc/html/rfc1035 This validates that a string value is a valid time zone based on the time zone database present on the system.
Although the empty string and "Local" are allowed by Go's time.LoadLocation function, they are not allowed by this validator. More information at https://golang.org/pkg/time/#LoadLocation. This validates that a string value is a valid semver version, as defined by Semantic Versioning 2.0.0. More information at https://semver.org/. This validates that a string value is a valid CVE identifier, as defined by CVE MITRE. More information at https://cve.mitre.org/. This validates that a string value contains a valid credit card number using the Luhn algorithm. This validates that a string or (u)int value contains a valid checksum using the Luhn algorithm. This validates that a string is a valid 24-character hexadecimal string or valid connection string. Example: This validates that a string value contains a valid cron expression. This validates that a string is valid for use with SpiceDb for the indicated purpose. If no purpose is given, a purpose of 'id' is assumed. Alias Validators and Tags: NOTE: when returning an error, the tag returned in "FieldError" will be the alias tag unless the dive tag is part of the alias; everything after the dive tag is not reported as the alias tag. Also, the "ActualTag" in the case above will be the actual tag within the alias that failed. Here is a list of the current built-in alias tags: Validator notes: A collection of validation rules that are frequently needed but are more complex than the ones found in the baked-in validators. A non-standard validator must be registered manually, just as you would register your own custom validation functions. Example of registration and use: Here is a list of the current non-standard validators: This package panics when bad input is provided; this is by design, as bad code like that should not make it to production.
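To ground the tag names above in code, here is a brief sketch in the struct-tag style these validators are built around (this documentation matches the go-playground/validator package; the struct, its field names, and the particular tags chosen are illustrative):

    import "github.com/go-playground/validator/v10"

    type Payment struct {
        Wallet  string `validate:"btc_addr"`            // P2PKH/P2SH address with checksum validation
        Card    string `validate:"credit_card"`         // Luhn-checked card number
        ID      string `validate:"uuid4"`               // lowercase version 4 UUID
        Country string `validate:"iso3166_1_alpha2"`    // e.g. "US", "JP"
        Token   string `validate:"omitempty,base64url"` // empty string accepted via omitempty
    }

    func check(p Payment) error {
        v := validator.New()
        return v.Struct(p) // returns validator.ValidationErrors describing each failed tag
    }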
Package ho provides automated hyperparameter optimization using Bayesian optimization with Gaussian Processes. It offers efficient, thread-safe optimization capabilities for tuning system parameters with minimal manual intervention. The package includes the following key features: To install the package, use: The library provides four acquisition functions for different optimization strategies:

1. Upper Confidence Bound (UCB): balances exploration and exploitation. Controlled by the Beta parameter (higher = more exploration). Default choice, works well in most cases.

    config := DefaultConfig() // Uses UCB by default
    config.AcqParams.Beta = 2.0 // Adjust exploration-exploitation trade-off

2. Probability of Improvement (PI): conservative exploration strategy that focuses on small, reliable improvements. Good for noise-sensitive applications.

    config := DefaultConfig()
    config.AcquisitionFunc = ProbabilityOfImprovement
    config.AcqParams.Xi = 0.01 // Minimum improvement threshold

3. Expected Improvement (EI): balances improvement probability and magnitude. Most commonly used in practice; good for general optimization tasks.

    config := DefaultConfig()
    config.AcquisitionFunc = ExpectedImprovement
    config.AcqParams.Xi = 0.01 // Minimum improvement threshold

4. Thompson Sampling: simple but effective random sampling approach. Great for parallel optimization; no parameter tuning required.

    config := DefaultConfig()
    config.AcquisitionFunc = ThompsonSampling
    config.AcqParams.RandomState = rand.New(rand.NewSource(time.Now().UnixNano()))

The OptimizationConfig struct allows customization of the optimization process: Recommended settings: All components are designed to be thread-safe: To contribute to the project:
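As a rough end-to-end sketch: only DefaultConfig, AcquisitionFunc, AcqParams, and the four acquisition functions above come from this documentation; the Optimize entry point and the objective signature below are assumptions for illustration only.

    // Hypothetical usage; the real entry point and its signature may differ.
    config := ho.DefaultConfig()                    // UCB acquisition by default
    config.AcquisitionFunc = ho.ExpectedImprovement // switch strategy
    config.AcqParams.Xi = 0.01                      // minimum improvement threshold

    // objective scores one candidate parameter set (assumed to be supplied
    // by the caller as a plain function).
    // best, err := ho.Optimize(config, objective)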
Package hord provides a simple and extensible interface for interacting with various database systems in a uniform way. Hord is designed to be a database-agnostic library that provides a common interface for interacting with different database systems. It allows developers to write code that is decoupled from the underlying database technology, making it easier to switch between databases or support multiple databases in the same application. To use Hord, import it as follows: To create a database client, you need to import and use the appropriate driver package along with the `hord` package. For example, to use the Redis driver: Each driver provides its own `Dial` function to establish a connection to the database. Refer to the specific driver documentation for more details. Once you have a database client, you can use it to perform various database operations. The API is consistent across different drivers. Refer to the `hord.Database` interface documentation for a complete list of available methods. Hord provides common error types and constants for consistent error handling across drivers. Refer to the `hord` package documentation for more information on error handling. Contributions to Hord are welcome! If you want to add support for a new database driver or improve the existing codebase, please refer to the contribution guidelines in the project's repository.
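A hedged sketch of the Redis flow described above; the driver import paths, Config fields, and the Setup step follow the common shape of hord drivers and should be checked against the driver's own documentation:

    import (
        "github.com/madflojo/hord"                // assumed module path
        "github.com/madflojo/hord/drivers/redis"  // assumed driver path
    )

    func open() (hord.Database, error) {
        // Each driver exposes its own Dial; Config fields are driver-specific.
        db, err := redis.Dial(redis.Config{Server: "127.0.0.1:6379"})
        if err != nil {
            return nil, err
        }
        if err := db.Setup(); err != nil { // one-time keyspace preparation, if needed
            return nil, err
        }
        return db, nil
    }

    // The hord.Database interface is then uniform across drivers, e.g.:
    //   err := db.Set("key", []byte("value"))
    //   data, err := db.Get("key")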
Package KSbus is a minimalistic event bus demonstrating how useful it is to design event-based systems in Go, allowing you to synchronise your backends and frontends. This package uses [Kmux], which is the same router as Korm.
Package raggo provides a high-level interface for text chunking and token management, designed for use in retrieval-augmented generation (RAG) applications. Package raggo provides utilities for concurrent document loading and processing. Package raggo provides advanced Retrieval-Augmented Generation (RAG) capabilities with contextual awareness and memory management. Package raggo provides a high-level interface for text embedding and retrieval operations in RAG (Retrieval-Augmented Generation) systems. It simplifies the process of converting text into vector embeddings using various providers. Package raggo provides a high-level interface for document loading and processing in RAG (Retrieval-Augmented Generation) systems. The loader component handles various input sources with support for concurrent operations and configurable behaviors. Package raggo provides a high-level logging interface for the Raggo framework, built on top of the core rag package logging system. It offers: Package raggo provides advanced context-aware retrieval and memory management capabilities for RAG (Retrieval-Augmented Generation) systems. Package raggo provides a flexible and extensible document parsing system for RAG (Retrieval-Augmented Generation) applications. The system supports multiple file formats and can be extended with custom parsers. Package raggo implements a comprehensive Retrieval-Augmented Generation (RAG) system that enhances language models with the ability to access and reason over external knowledge. The system seamlessly integrates vector similarity search with natural language processing to provide accurate and contextually relevant responses. The package offers two main interfaces: The RAG system works by: 1. Processing documents into semantic chunks 2. Storing document vectors in a configurable database 3. Finding relevant context through similarity search 4. Generating responses that combine retrieved context with queries Key Features: Example Usage: Package raggo provides a comprehensive registration system for vector database implementations in RAG (Retrieval-Augmented Generation) applications. This package enables dynamic registration and management of vector databases with support for concurrent operations, configurable processing, and extensible architecture. Package raggo implements a sophisticated document retrieval system that combines vector similarity search with optional reranking strategies. The retriever component serves as the core engine for finding and ranking relevant documents based on semantic similarity and other configurable criteria. Key features: SimpleRAG provides a minimal, easy-to-use interface for RAG operations. It simplifies the configuration and usage of the RAG system while maintaining core functionality. This implementation is ideal for: Example usage: Package raggo provides a high-level abstraction over various vector database implementations. This file defines the VectorDB type, which wraps the lower-level rag.VectorDB interface with additional functionality and type safety.
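A hedged sketch of the SimpleRAG flow: the four-step pipeline is as described above, but the constructor, config fields, and method names below are assumptions for illustration rather than confirmed API.

    // Hypothetical usage of the SimpleRAG interface described above.
    rag, err := raggo.NewSimpleRAG(raggo.SimpleRAGConfig{
        Collection: "docs", // assumed: target vector-database collection
    })
    if err != nil {
        log.Fatal(err)
    }
    ctx := context.Background()

    // Steps 1-2: chunk documents, embed them, and store the vectors.
    if err := rag.AddDocuments(ctx, "./knowledge-base"); err != nil {
        log.Fatal(err)
    }
    // Steps 3-4: retrieve relevant context and generate a grounded answer.
    answer, err := rag.Search(ctx, "How is caching configured?")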
Package partnercentralselling provides the API client, operations, and parameter types for the Partner Central Selling API. This Amazon Web Services (AWS) Partner Central API reference is designed to help AWS Partners integrate Customer Relationship Management (CRM) systems with AWS Partner Central. Partners can automate interactions with AWS Partner Central, which helps to ensure effective engagements in joint business activities. The API provides standard AWS API functionality. Access it by either using API Actions or by using an AWS SDK that's tailored to your programming language or platform. For more information, see Getting Started with AWS and Tools to Build on AWS. Features offered by the AWS Partner Central API: Opportunity management: manages co-selling opportunities through API actions such as CreateOpportunity, UpdateOpportunity, ListOpportunities, GetOpportunity, and AssignOpportunity. AWS referral management: manages referrals shared by AWS using actions such as ListEngagementInvitations, GetEngagementInvitation, StartEngagementByAcceptingInvitation, and RejectEngagementInvitation. Entity association: associates related entities such as AWS Products, Partner Solutions, and AWS Marketplace Private Offers with opportunities using the AssociateOpportunity and DisassociateOpportunity actions. View AWS opportunity details: retrieves real-time summaries of AWS opportunities using the GetAWSOpportunitySummary action. List solutions: provides list APIs for listing partner offers using ListSolutions. Event subscription: subscribe to real-time opportunity updates through AWS EventBridge via events such as Opportunity Created, Opportunity Updated, Engagement Invitation Accepted, Engagement Invitation Rejected, and Engagement Invitation Created.
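A minimal client sketch using the AWS SDK for Go v2; NewFromConfig is the SDK's standard constructor, while the Catalog value and exact input fields shown here are illustrative and should be checked against the API reference:

    import (
        "context"

        "github.com/aws/aws-sdk-go-v2/aws"
        "github.com/aws/aws-sdk-go-v2/config"
        "github.com/aws/aws-sdk-go-v2/service/partnercentralselling"
    )

    func listOpportunities(ctx context.Context) error {
        cfg, err := config.LoadDefaultConfig(ctx)
        if err != nil {
            return err
        }
        client := partnercentralselling.NewFromConfig(cfg)
        out, err := client.ListOpportunities(ctx, &partnercentralselling.ListOpportunitiesInput{
            Catalog: aws.String("AWS"), // illustrative; consult the API reference
        })
        if err != nil {
            return err
        }
        _ = out // iterate the returned opportunity summaries as needed
        return nil
    }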
Package radir implements a large subset of "The OID Directory" -- an EXPERIMENTAL Internet-Draft (I-D) series by Jesse Coretta. The series (henceforth referred to as "the I-D series") is made up of the following individual drafts: These drafts can be viewed on the IETF Data Tracker site, or via the official OID Directory site and GitHub repositories. At present, the current revisions are set to expire on February 23, 2025. The I-D series, and by necessity this package, is thoroughly EXPERIMENTAL. It is not yet approved by the IETF, and thus should NEVER be used in any capacity beyond proof-of-concept or work-in-progress efforts. This package is an abstract, general-use framework supplement. It will aid in the marshaling and unmarshaling of OID registration and registrant (contact) constructs, whether using a proper go-ldap/v3 Entry instance, or through manual assembly. The package can aid in the bidirectional conversion of certain values, such as "dotNotation" and "dn" values, and offers many other useful features in service to the I-D series mentioned above. Implementations which use this package may be of a server-side or client-side nature, or neither. There is no singular use-case for this package. TLDR; it's a nifty toolbox; what you build is what the package serves. As the terms are defined throughout the OID Directory I-D series, this package is absolutely not a complete DUA, DIT or DSA. While it can serve as a valuable component in such constructs, its current state does not allow drop-in functionality of that nature, nor was this intended. For instance, those designing a compliant RA DUA, per the RADUA I-D, are expected to install and utilize the go-ldap/v3 package on their own terms and in service to their particular environment or infrastructure. This is done to maximize compatibility across the many potential use-cases and directory products, as well as to limit potential security vulnerabilities relating to this package itself. This approach also has the secondary effect of making potential integration efforts much simpler and far less disruptive. TLDR; this package works with go-ldap/v3, but it does NOT import it directly. Do it yourself. Thanks to the GetOrSetFunc closure type, this package is supremely extensible. Virtually all Registration and Registrant methods -- such as Registration.SetDN or FirstAuthority.POBoxGetFunc -- allow for closure-based behavioral overrides. This allows limitless control over how values manifest during presentation, as well as how they are written to instances of the aforementioned types. For additional information, see the GetOrSetFunc type documentation, as well as the package examples for all methods which allow input of instances of this type. See also the next section regarding storage space considerations with regards to especially -- and unnecessarily -- large values. TLDR; Control value I/O using a closure signature of "func(...any) (any, error)" (GetOrSetFunc) for any "Set<X>" or "<X>GetFunc" methods. Per Section 2.2.3.4 of the RADUA I-D, this package provides a thread-safe, memory-based Cache facility for use by a client. The primary purpose of this facility is to cache, or store temporarily, all *Registration and *Registrant instances that have either been crafted manually, or marshaled by way of a go-ldap/v3 entry instance. While crude, it can help provide considerable I/O savings in terms of LDAP search requests, which may or may not be transmitted over-the-wire.
The lifespans of cached entries are directed by manual specification (e.g.: by the end user), or by way of a literal or collectively-inherited TTL obtained within the RA DIT or via the appropriate *DITProfile instance as a global fallback. See the aforementioned RA DUA section for details regarding TTL precedence and other mechanics. Use of this facility is not required to comply with the aforementioned specification. Adopters may freely supplant the package-provided Cache with a caching system of their own choosing or design. TLDR; Caching eligible instances reduces network (LDAP) I/O at the expense of memory. You can use the Cache type, or a third-party one, or abstain from caching entirely. The I-D series offers two (2) directory models in terms of Registration structure and layout, each of which is implemented in this package. The two dimensional model is discussed in Section 3.1.2 of the RADIT I-D. The three dimensional model is discussed in Section 3.1.3 of the RADIT I-D. In most scenarios, use of the three dimensional model is the preferred strategy. TLDR; Use the ThreeDimensional directory model. The I-D series offers two (2) registrant entry policies, each of which is implemented in this package. Dedicated registrants are covered in Section 3.2.1.1.1 of the RADIT I-D. Combined registrants are briefly covered in Section 3.2.1.1.2 of the RADIT I-D. In most scenarios, use of dedicated registrants is the preferred strategy. TLDR; Use *Registrant instances instead of "combining" registrant content with *Registration instances (in-line). As stated in Section 3.2.1.1.1 of the RADIT I-D, it is possible to forego use of the draft-based authority types, such as "sponsorCommonName", in favor of the traditional "cn" type. This logic may extend to either "Combined" or "Dedicated" Registrant Policies. See the DITProfile.UseAltAuthorityTypes method for a means of enabling this behavior. Note there are caveats with either standpoint, and thus the reader is advised to review the aforementioned section of the draft to ensure they understand the ramifications of their decision. Please also note it is inadvisable to change this value without a good reason, and inappropriate alteration will result in degraded client behavior and likely a deviation from the established content policies enforced within the directory. You have been warned. See the FirstAuthorityAltAttributeTypes, CurrentAuthorityAltAttributeTypes and SponsorAltAttributeTypes map variables for a complete list of the types that are -- and are not -- subject to the influence of the aforementioned method. TLDR; You may use, for example, "cn" instead of "sponsorCommonName" ... but there are caveats. This package makes conversion (in either direction) between go-ldap/v3 Entry and *Registration or *Registrant instances a breeze! When unmarshaling FROM an instance of go-ldap/v3 Entry TO an instance of *Registration, rather than using the go-ldap/v3 Entry.Unmarshal method directly, simply feed the method to *Registration.Marshal to achieve the same effect: This is necessary because the go-ldap/v3 Entry.Unmarshal method only supports a limited number of struct field value types. To get around this issue, radir simply performs independent marshaling upon any individual struct components within the destination instance (*Registration). In other words, if there are four fields that contain struct values, each of these fields is marshaled independently.
This ensures that all of the needed attribute values are collected from the source go-ldap/v3 Entry instance. When unmarshaling FROM an instance of *Registration (or *Registrant) TO an instance of go-ldap/v3 Entry, simply use the Registration.Unmarshal (or Registrant.Unmarshal) method. Feed the output to the go-ldap/v3 NewEntry function: TLDR; Excellent marshal and unmarshal features. And while go-ldap/v3 Entry.Unmarshal is very limited, we have a most suitable workaround: don't "use" it, let us handle it for you. OIDs are virtually infinite in size. Large pools of sibling registrations can be exceedingly difficult to navigate manually; the sequence of number forms may not be contiguous, and there is no guarantee the entries which bear these values will be ordered correctly in directory search results. To that end, the "spatialContext" AUXILIARY class defined within the I-D series is implemented within this package as the *Spatial struct type. Use of this type can help mitigate some of this tedium by allowing any given registration entry to bear direct DN-based references to other spatially-relevant registrations. Specifically, this produces an abstraction of directional movement in the following contexts: Non-collective *Spatial attribute types may be set manually, or they may be present within entries marshaled into Registration instances as literal or collective values. Collective values are not meant for manual assignment, thus no related "set" methods exist in that regard. Like virtually all other methods in this package, the relevant *Spatial methods allow for GetOrSetFunc closure use, thereby letting the user enhance the behavior of instances of this type in a variety of ways: TLDR; RA DIT navigation with a "🕹️" duct-taped on to it.
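In code, the round-trip described above reduces to two moves. The feed-the-method pattern is exactly as described; the DN accessor on the Registration instance is an assumption for illustration:

    // go-ldap/v3 Entry -> *Registration: feed Entry.Unmarshal to Marshal
    // instead of calling Entry.Unmarshal directly.
    if err := reg.Marshal(entry.Unmarshal); err != nil {
        // handle marshaling error
    }

    // *Registration -> go-ldap/v3 Entry: Unmarshal, then hand the output
    // to the go-ldap/v3 NewEntry function.
    entry2 := ldap.NewEntry(reg.DN(), reg.Unmarshal())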
Idefix-Go is the official client library for accessing Idefix, the backend of Nayar Systems' products. This library is written in Go and facilitates seamless interaction with Idefix using the MQTT protocol. While Idefix can also be accessed via the HTTP Bridge, MQTT is the preferred method due to its advantages in speed and efficiency. Key features of the Idefix-Go library include:
Package udecimal provides a high-performance, high-precision (up to 19 digits after the decimal point) decimal arithmetic library. It includes functions for parsing and performing arithmetic operations such as addition, subtraction, multiplication, and division on decimal numbers. The package is designed to handle decimal numbers with a high degree of precision and efficiency, making it suitable for high-traffic financial applications where both precision and performance are critical. Maximum and default precision is 19 digits after the decimal point. The default precision can be changed globally to any value between 1 and 19 to suit your use case and make sure that the precision is consistent across the entire application. See SetDefaultPrecision for more details. The udecimal package supports various encoding and decoding mechanisms to facilitate easy integration with different data storage and transmission systems. For more details, see the documentation for each method.
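A short sketch of typical use; SetDefaultPrecision appears in the description above, while the constructor and method names follow the package's parse-and-operate style and should be verified against its documentation:

    func example() error {
        udecimal.SetDefaultPrecision(10) // global default; must be between 1 and 19

        a, err := udecimal.Parse("1.25") // assumed constructor name
        if err != nil {
            return err
        }
        b := udecimal.MustParse("0.75") // assumed panicking variant
        fmt.Println(a.Add(b))           // 2

        q, err := a.Div(b) // division reports errors such as divide-by-zero
        if err != nil {
            return err
        }
        fmt.Println(q) // 1.6666666667 at precision 10
        return nil
    }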
Package cryptipass provides a flexible and secure password generation system that creates pronounceable passphrases based on a probabilistic model. It is designed for security-conscious developers who need to generate strong, memorable passwords. The package supports custom word lists and pattern-based password generation, allowing users to tailor the output to their needs.
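As a sketch only: the documentation above does not spell out the API, so the constructor and generator method below are hypothetical placeholders for the call flow such a generator typically exposes.

    // Hypothetical API, for illustration only.
    gen := cryptipass.NewInstance()         // assumed constructor
    phrase, entropy := gen.GenPassphrase(4) // assumed: four pronounceable words
    fmt.Printf("%s (%.1f bits of entropy)\n", phrase, entropy)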
Package cron implements a cron spec parser and job runner. To download the specific tagged release, run: Import it in your program as: It requires Go 1.11 or later due to usage of Go Modules. Callers may register Funcs to be invoked on a given schedule. Cron will run them in their own goroutines. A cron expression represents a set of times, using 5 space-separated fields. Month and Day-of-week field values are case insensitive. "SUN", "Sun", and "sun" are equally accepted. The specific interpretation of the format is based on the Cron Wikipedia page: https://en.wikipedia.org/wiki/Cron Alternative Cron expression formats support other fields like seconds. You can implement that by creating a custom Parser as follows. Since adding Seconds is the most common modification to the standard cron spec, cron provides a builtin function to do that, which is equivalent to the custom parser you saw earlier, except that its seconds field is REQUIRED: That emulates Quartz, the most popular alternative Cron schedule format: http://www.quartz-scheduler.org/documentation/quartz-2.x/tutorials/crontrigger.html Asterisk ( * ) The asterisk indicates that the cron expression will match for all values of the field; e.g., using an asterisk in the 4th field (month) would indicate every month. Slash ( / ) Slashes are used to describe increments of ranges. For example 3-59/15 in the 1st field (minutes) would indicate the 3rd minute of the hour and every 15 minutes thereafter. The form "*/..." is equivalent to the form "first-last/...", that is, an increment over the largest possible range of the field. The form "N/..." is accepted as meaning "N-MAX/...", that is, starting at N, use the increment until the end of that specific range. It does not wrap around. Comma ( , ) Commas are used to separate items of a list. For example, using "MON,WED,FRI" in the 5th field (day of week) would mean Mondays, Wednesdays and Fridays. Hyphen ( - ) Hyphens are used to define ranges. For example, 9-17 would indicate every hour between 9am and 5pm inclusive. Question mark ( ? ) A question mark may be used instead of '*' for leaving either day-of-month or day-of-week blank. You may use one of several pre-defined schedules in place of a cron expression. You may also schedule a job to execute at fixed intervals, starting at the time it's added or cron is run. This is supported by formatting the cron spec like this: where "duration" is a string accepted by time.ParseDuration (http://golang.org/pkg/time/#ParseDuration). For example, "@every 1h30m10s" would indicate a schedule that activates after 1 hour, 30 minutes, 10 seconds, and then every interval after that. Note: The interval does not take the job runtime into account. For example, if a job takes 3 minutes to run, and it is scheduled to run every 5 minutes, it will have only 2 minutes of idle time between each run. By default, all interpretation and scheduling is done in the machine's local time zone (time.Local). You can specify a different time zone on construction: Individual cron schedules may also override the time zone they are to be interpreted in by providing an additional space-separated field at the beginning of the cron spec, of the form "CRON_TZ=Asia/Tokyo". For example: The prefix "TZ=(TIME ZONE)" is also supported for legacy compatibility. Be aware that jobs scheduled during daylight-savings leap-ahead transitions will not be run! A Cron runner may be configured with a chain of job wrappers to add cross-cutting functionality to all submitted jobs.
For example, they may be used to achieve the following effects: Install wrappers for all jobs added to a cron using the `cron.WithChain` option: Install wrappers for individual jobs by explicitly wrapping them: Since the Cron service runs concurrently with the calling code, some amount of care must be taken to ensure proper synchronization. All cron methods are designed to be correctly synchronized as long as the caller ensures that invocations have a clear happens-before ordering between them. Cron defines a Logger interface that is a subset of the one defined in github.com/go-logr/logr. It has two logging levels (Info and Error), and parameters are key/value pairs. This makes it possible for cron logging to plug into structured logging systems. An adapter, [Verbose]PrintfLogger, is provided to wrap the standard library *log.Logger. For additional insight into Cron operations, verbose logging may be activated which will record job runs, scheduling decisions, and added or removed jobs. Activate it with a one-off logger as follows: Cron entries are stored in an array, sorted by their next activation time. Cron sleeps until the next job is due to be run. Upon waking:
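Tying the schedule formats, time zone overrides, and job wrappers described above together, here is a brief sketch against the v3 API this documentation describes (the schedule strings themselves are illustrative):

    c := cron.New() // standard 5-field parser, machine-local time zone
    c.AddFunc("30 * * * *", func() { fmt.Println("every hour on the half hour") })
    c.AddFunc("CRON_TZ=Asia/Tokyo 30 04 * * *", func() { fmt.Println("04:30 in Tokyo") })
    c.AddFunc("@every 1h30m10s", func() { fmt.Println("fixed interval, start-relative") })
    c.Start()

    // Seconds-field (Quartz-style) parsing plus a job wrapper chain:
    c2 := cron.New(
        cron.WithSeconds(),
        cron.WithChain(cron.SkipIfStillRunning(cron.DefaultLogger)),
    )
    c2.AddFunc("*/5 * * * * *", func() { fmt.Println("every 5 seconds") })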
Package log15 provides an opinionated, simple toolkit for best-practice logging that is both human and machine readable. It is modeled after the standard library's io and net/http packages. This package forces you to log only key/value pairs. Keys must be strings. Values may be any type that you like. The default output format is logfmt, but you may also choose to use JSON instead if that suits you. Here's how you log: This will output a line that looks like: To get started, you'll want to import the library: Now you're ready to start logging: Because recording a human-meaningful message is common and good practice, the first argument to every logging method is the value to the *implicit* key 'msg'. Additionally, the level you choose for a message will be automatically added with the key 'lvl', and so will the current timestamp with key 't'. You may supply any additional context as a set of key/value pairs to the logging function. log15 allows you to favor terseness, ordering, and speed over safety. This is a reasonable tradeoff for logging functions. You don't need to explicitly state keys/values, log15 understands that they alternate in the variadic argument list: If you really do favor your type-safety, you may choose to pass a log.Ctx instead: Frequently, you want to add context to a logger so that you can track actions associated with it. An http request is a good example. You can easily create new loggers that have context that is automatically included with each log line: This will output a log line that includes the path context that is attached to the logger: The Handler interface defines where log lines are printed to and how they are formatted. Handler is a single interface that is inspired by net/http's handler interface: Handlers can filter records, format them, or dispatch to multiple other Handlers. This package implements a number of Handlers for common logging patterns that are easily composed to create flexible, custom logging structures. Here's an example handler that prints logfmt output to Stdout: Here's an example handler that defers to two other handlers. One handler only prints records from the rpc package in logfmt to standard out. The other prints records at Error level or above in JSON formatted output to the file /var/log/service.json This package implements three Handlers that add debugging information to the context, CallerFileHandler, CallerFuncHandler and CallerStackHandler. Here's an example that adds the source file and line number of each logging call to the context. This will output a line that looks like: Here's an example that logs the call stack rather than just the call site. This will output a line that looks like: The "%+v" format instructs the handler to include the path of the source file relative to the compile time GOPATH. The github.com/go-stack/stack package documents the full list of formatting verbs and modifiers available. The Handler interface is so simple that it's also trivial to write your own. Let's create an example handler which tries to write to one handler, but if that fails it falls back to writing to another handler and includes the error that it encountered when trying to write to the primary. This might be useful when trying to log over a network socket, but if that fails you want to log those records to a file on disk. This pattern is so useful that a generic version that handles an arbitrary number of Handlers is included as part of this library called FailoverHandler.
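Concretely, the two-handler setup described earlier (rpc records in logfmt to stdout; errors in JSON to /var/log/service.json) can be assembled like so, using this package's handler constructors; the module value and file path are illustrative:

    log.Root().SetHandler(log.MultiHandler(
        // logfmt records from the rpc package only, to standard out
        log.MatchFilterHandler("module", "app/rpc",
            log.StreamHandler(os.Stdout, log.LogfmtFormat())),
        // Error level and above, JSON-formatted, to a file
        log.LvlFilterHandler(log.LvlError,
            log.Must.FileHandler("/var/log/service.json", log.JsonFormat())),
    ))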
Sometimes, you want to log values that are extremely expensive to compute, but you don't want to pay the price of computing them if you haven't turned up your logging level to a high level of detail. This package provides a simple type to annotate a logging operation that you want to be evaluated lazily, just when it is about to be logged, so that it would not be evaluated if an upstream Handler filters it out. Just wrap any function which takes no arguments with the log.Lazy type. For example: If this message is not logged for any reason (for example, because the record is filtered out at a higher logging level), then factorRSAKey is never evaluated. The same log.Lazy mechanism can be used to attach context to a logger which you want to be evaluated when the message is logged, but not when the logger is created. For example, let's imagine a game where you have Player objects: You always want to log a player's name and whether they're alive or dead, so when you create the player object, you might do: The problem is that even after a player has died, the logger will still report they are alive, because the logging context was evaluated when the logger was created. By using the Lazy wrapper, we can defer the evaluation of whether the player is alive or not to each log message, so that the log records will reflect the player's current state no matter when the log message is written: If log15 detects that stdout is a terminal, it will configure the default handler for it (which is log.StdoutHandler) to use TerminalFormat. This format logs records nicely for your terminal, including color-coded output based on log level. Because log15 allows you to step around the type system, there are a few ways you can specify invalid arguments to the logging functions. You could, for example, wrap something that is not a zero-argument function with log.Lazy or pass a context key that is not a string. Since logging libraries are typically the mechanism by which errors are reported, it would be onerous for the logging functions to return errors. Instead, log15 handles errors by making these guarantees to you: - Any log record containing an error will still be printed with the error explained to you as part of the log record. - Any log record containing an error will include the context key LOG15_ERROR, enabling you to easily (and if you like, automatically) detect if any of your logging calls are passing bad values. Understanding this, you might wonder why the Handler interface can return an error value in its Log method. Handlers are encouraged to return errors only if they fail to write their log records out to an external source like if the syslog daemon is not responding. This allows the construction of useful handlers which cope with those failures like the FailoverHandler. log15 is intended to be useful for library authors as a way to provide configurable logging to users of their library. Best practice for use in a library is to always disable all output for your logger by default and to provide a public Logger instance that consumers of your library can configure. Like so: Users of your library may then enable it if they like: The ability to attach context to a logger is a powerful one. Where should you do it and why? I favor embedding a Logger directly into any persistent object in my application and adding unique, tracing context keys to it. For instance, imagine I am writing a web browser: When a new tab is created, I assign a logger to it with the url of the tab as context so it can easily be traced through the logs.
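A sketch of that creation-time setup; log.New and the Logger type are log15's, while the Tab struct and its fields are illustrative:

    type Tab struct {
        url string
        log log.Logger
    }

    func NewTab(url string) *Tab {
        return &Tab{
            url: url,
            log: log.New("url", url), // context attached once, at creation time
        }
    }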
Now, whenever we perform any operation with the tab, we'll log with its embedded logger and it will include the tab's url automatically: There's only one problem. What if the tab url changes? We could use log.Lazy to make sure the current url is always written, but that would mean that we couldn't trace a tab's full lifetime through our logs after the user navigates to a new URL. Instead, think about what values to attach to your loggers the same way you think about what to use as a key in a SQL database schema. If it's possible to use a natural key that is unique for the lifetime of the object, do so. But otherwise, log15's ext package has a handy RandId function to let you generate what you might call "surrogate keys." They're just random hex identifiers to use for tracing. Back to our Tab example, we would prefer to set up our Logger like so: Now we'll have a unique traceable identifier even across loading new urls, but we'll still be able to see the tab's current url in the log messages. For all Handler functions which can return an error, there is a version of that function which will return no error but panics on failure. They are all available on the Must object. For example: All of the following excellent projects inspired the design of this library: code.google.com/p/log4go github.com/op/go-logging github.com/technoweenie/grohl github.com/Sirupsen/logrus github.com/kr/logfmt github.com/spacemonkeygo/spacelog golang's stdlib, notably io and net/http https://xkcd.com/927/
Package tusd provides ways to accept tus 1.0 calls using HTTP. tus is a protocol based on HTTP for resumable file uploads. Resumable means that an upload can be interrupted at any moment and can be resumed without re-uploading the previous data again. An interruption may happen willingly, if the user wants to pause, or by accident in case of a network issue or server outage (http://tus.io). tusd was designed in a way which allows flexible and customizable usage. We wanted to avoid binding this package to a specific storage system, particularly a proprietary third-party one. Therefore tusd is an abstract layer whose only job is to accept incoming HTTP requests, validate them according to the specification, and finally pass them to the data store. The data store is another important component in tusd's architecture whose purpose is to do the actual file handling. It has to write the incoming upload to a persistent storage system and retrieve information about an upload's current state. Therefore it is the only part of the system which communicates directly with the underlying storage system, whether it be the local disk, a remote FTP server, or a cloud provider such as AWS S3. The only hard requirements for a data store can be found in the DataStore interface. It contains methods for creating uploads (NewUpload), writing to them (WriteChunk) and retrieving their status (GetInfo). However, there are many more features which are not mandatory but may still be used. These are contained in their own interfaces which all share the *DataStore suffix. For example, GetReaderDataStore enables downloading uploads, and TerminaterDataStore allows uploads to be terminated. The store composer offers a way to combine the basic data store implementation - the core - and these additional extensions: The corresponding methods for adding an extension to the composer are prefixed with Use* followed by the name of the corresponding interface. However, most data stores provide multiple extensions, and adding all of them manually can be tedious and error-prone. Therefore, all data stores distributed with tusd provide a UseIn() method which does this job automatically. For example, this is the S3 store in action (see S3Store.UseIn): Finally, once you are done composing your data store, you can pass it inside the Config struct in order to create a new tusd HTTP handler: This handler can then be mounted to a specific path, e.g. /files:
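A hedged sketch of that composition and mounting, using the filestore shipped with tusd as the core; the import paths follow the pre-1.0 tusd layout implied by this documentation (newer versions moved these packages under pkg/), and the upload directory is illustrative:

    import (
        "log"
        "net/http"

        "github.com/tus/tusd"
        "github.com/tus/tusd/filestore"
    )

    func main() {
        store := filestore.FileStore{Path: "./uploads"} // core data store on local disk

        composer := tusd.NewStoreComposer()
        store.UseIn(composer) // registers the core plus every extension the store supports

        handler, err := tusd.NewHandler(tusd.Config{
            BasePath:      "/files/",
            StoreComposer: composer,
        })
        if err != nil {
            log.Fatalf("unable to create handler: %s", err)
        }

        http.Handle("/files/", http.StripPrefix("/files/", handler))
        log.Fatal(http.ListenAndServe(":8080", nil))
    }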