Package goa implements a Go framework for writing microservices that promotes best practices by providing a single source of truth from which server code, client code, and documentation are derived. The code generated by goa follows the clean architecture pattern where composable modules are generated for the transport, endpoint, and business logic layers. The goa package contains middleware, plugins, and other complementary functionality that can be leveraged in tandem with the generated code to implement complete microservices efficiently. By using goa for developing microservices, implementers don't have to worry about the documentation getting out of sync with the implementation, as goa takes care of generating OpenAPI specifications for HTTP-based services and gRPC protocol buffer files for gRPC-based services (or both if the service supports both transports). Reviewers can also be assured that the implementation follows the documentation as the code is generated from the same source. Visit https://goa.design for more information.
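For illustration only, a minimal design sketch in the spirit of goa's calc example, assuming the goa v3 DSL (goa.design/goa/v3/dsl); the service name, method, and route are made up here:

    package design

    import . "goa.design/goa/v3/dsl"

    // A single design file is the source of truth for server code,
    // client code, and the OpenAPI specification.
    var _ = Service("calc", func() {
        Description("The calc service performs simple arithmetic.")
        Method("add", func() {
            Payload(func() {
                Attribute("a", Int, "Left operand")
                Attribute("b", Int, "Right operand")
                Required("a", "b")
            })
            Result(Int)
            HTTP(func() {
                GET("/add/{a}/{b}") // transport mapping for the HTTP layer
            })
        })
    })

Running `goa gen <module>/design` then derives the transport, endpoint, and documentation artifacts from this design.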
Package keyring provides a uniform API over a range of desktop credential storage engines.
Package clipboard provides cross-platform clipboard access and supports the macOS/Linux/Windows/Android/iOS platforms. Before interacting with the clipboard, one must call Init to check whether it is possible to use this package: The most common operations are `Read` and `Write`. To use them: Note that reads and writes of the image format assume that the bytes are PNG encoded, since PNG supports the alpha blending that other graphical software may rely on. In addition, `clipboard.Write` returns a channel that can receive an empty struct as a signal indicating that the corresponding write to the clipboard is outdated, meaning the clipboard has been overwritten by others and the previously written data is lost. For instance: You can ignore the returned channel if you don't need this type of notification. Furthermore, when you need more than just knowing whether the clipboard data has changed, use the watcher API:
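A short usage sketch, assuming the golang.design/x/clipboard API (Init, Read, Write, Watch, and the FmtText format); error handling is kept minimal:

    package main

    import (
        "context"
        "fmt"

        "golang.design/x/clipboard"
    )

    func main() {
        // Init reports whether the clipboard is available on this platform.
        if err := clipboard.Init(); err != nil {
            panic(err)
        }

        // Write returns a channel that is signaled once this write is outdated,
        // i.e. another program has overwritten the clipboard.
        changed := clipboard.Write(clipboard.FmtText, []byte("hello, clipboard"))

        // Read the current text content back.
        fmt.Println(string(clipboard.Read(clipboard.FmtText)))

        go func() {
            <-changed
            fmt.Println("previous write has been overwritten")
        }()

        // Watch delivers every subsequent change until the context is canceled.
        ctx, cancel := context.WithCancel(context.Background())
        defer cancel()
        for data := range clipboard.Watch(ctx, clipboard.FmtText) {
            fmt.Println("clipboard now holds:", string(data))
        }
    }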
Packages in the Plugin directory contain plugins for [goa v2](https://godoc.org/goa.design/goa/v3). Plugins can extend the goa DSL, generate new artifacts, and modify the output of existing generators. There are currently two plugins in the directory: The [goakit](https://godoc.org/goa.design/plugins/goakit) plugin generates code that integrates with the [go-kit](https://github.com/go-kit/kit) library. The [cors](https://godoc.org/goa.design/plugins/cors) plugin adds new DSL to define CORS policies. The plugin generates HTTP server code that responds to CORS requests in compliance with the policies defined in the design. Writing a plugin consists of two steps: 1. Writing the functions that get called by the goa tool during code generation. 2. Registering the functions with the goa tool. A plugin may implement the "gen" goa command, the "example" goa command, or both. In each case a plugin may register one or two functions: the first function, called "Prepare", gets called prior to any code generation actually happening. The function can modify the design data structures before goa uses them to generate code. The second function, called "Generate", does the actual code generation. The signature of the Generate function is: where: The function must return the entire set of generated files (even the files that the plugin does not modify). The functions must then be registered with the goa code generator tool using the "RegisterPlugin" function of the "codegen" package. This is typically done in a package "init" function, for example: The first argument of RegisterPlugin must be one of "gen" or "example" and specifies the "goa" command that triggers the call to the plugin functions. The second argument is the Prepare function if any, nil otherwise. The last argument is the code generator function if any, nil otherwise. A plugin may introduce new DSL "keywords" (typically Go package functions). The DSL functions initialize the content of a design root object (an object that implements the "eval.Root" interface), for example: The [eval](https://godoc.org/goa.design/goa/v3/codegen/eval) package contains a number of functions that can be leveraged to implement the DSL, such as error reporting functions. The plugin design root object is then given to the plugin Generate function (if there is one), which may take advantage of it to generate the code.
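A sketch of the registration described above. The exact signatures of RegisterPlugin, Prepare, and Generate have varied between goa releases, so treat the ones below as illustrative rather than authoritative; this sketch follows the three-argument registration described in the text:

    package myplugin

    import (
        "goa.design/goa/v3/codegen"
        "goa.design/goa/v3/eval"
    )

    // Register both functions for the "gen" command; pass nil for a
    // function the plugin does not implement.
    func init() {
        codegen.RegisterPlugin("gen", Prepare, Generate)
    }

    // Prepare may mutate the design roots before any code is generated.
    func Prepare(genpkg string, roots []eval.Root) error {
        return nil
    }

    // Generate receives the generated files and must return the entire set,
    // possibly with files added or modified.
    func Generate(genpkg string, roots []eval.Root, files []*codegen.File) ([]*codegen.File, error) {
        return files, nil
    }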
Package lockfree offers lock-free utilities.
httpsignatures is a golang implementation of the http-signatures spec found at https://tools.ietf.org/html/draft-cavage-http-signatures
Package irma contains generic IRMA structs and logic of use to all IRMA participants. It parses irma_configuration folders to scheme managers, issuers, credential types and public keys; it contains various messages from the IRMA protocol; it parses IRMA metadata attributes; and it contains attribute and credential verification logic.
Package cli provides a minimal framework for creating and organizing command line Go applications. cli is designed to be easy to understand and write; the simplest cli application can be written as follows: Of course this application does not do much, so let's make this an actual application:
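A sketch of the two programs alluded to above, assuming the github.com/urfave/cli import path; the "greet" name, usage text, and message are illustrative:

    package main

    import (
        "fmt"
        "log"
        "os"

        "github.com/urfave/cli"
    )

    func main() {
        // The simplest possible application would just be:
        //   err := cli.NewApp().Run(os.Args)

        // A slightly more useful application with a name, usage text, and action.
        app := cli.NewApp()
        app.Name = "greet"
        app.Usage = "fight the loneliness!"
        app.Action = func(c *cli.Context) error {
            fmt.Println("Hello friend!")
            return nil
        }

        if err := app.Run(os.Args); err != nil {
            log.Fatal(err)
        }
    }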
Package fasthttp provides a fast HTTP server and client API. Fasthttp provides the following features: Optimized for speed. Easily handles more than 100K qps and more than 1M concurrent keep-alive connections on modern hardware. Optimized for low memory usage. Easy 'Connection: Upgrade' support via RequestCtx.Hijack. The server provides the following anti-DoS limits: - The number of concurrent connections. - The number of concurrent connections per client IP. - The number of requests per connection. - Request read timeout. - Response write timeout. - Maximum request header size. - Maximum request body size. - Maximum request execution time. - Maximum keep-alive connection lifetime. - Early filtering out of non-GET requests. A lot of additional useful info is exposed to the request handler: - Server and client address. - Per-request logger. - Unique request id. - Request start time. - Connection start time. - Request sequence number for the current connection. The client supports automatic retry on idempotent request failure. The fasthttp API is designed with the ability to extend existing client and server implementations or to write custom client and server implementations from scratch.
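A minimal server sketch; the handler signature and the Server fields shown (Handler, Concurrency, MaxRequestBodySize) are part of the fasthttp API, while the limit values themselves are arbitrary:

    package main

    import (
        "fmt"
        "log"

        "github.com/valyala/fasthttp"
    )

    func main() {
        // A request handler operates on a reused *fasthttp.RequestCtx.
        handler := func(ctx *fasthttp.RequestCtx) {
            fmt.Fprintf(ctx, "Hello from %s, request #%d on this connection\n",
                ctx.RemoteAddr(), ctx.ConnRequestNum())
        }

        // Anti-DoS limits are configured on the Server value.
        srv := &fasthttp.Server{
            Handler:            handler,
            Concurrency:        256 * 1024, // illustrative limit on concurrent connections
            MaxRequestBodySize: 4 << 20,    // illustrative 4 MiB body limit
        }
        log.Fatal(srv.ListenAndServe(":8080"))
    }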
Package zap provides fast, structured, leveled logging. For applications that log in the hot path, reflection-based serialization and string formatting are prohibitively expensive - they're CPU-intensive and make many small allocations. Put differently, using json.Marshal and fmt.Fprintf to log tons of interface{} makes your application slow. Zap takes a different approach. It includes a reflection-free, zero-allocation JSON encoder, and the base Logger strives to avoid serialization overhead and allocations wherever possible. By building the high-level SugaredLogger on that foundation, zap lets users choose when they need to count every allocation and when they'd prefer a more familiar, loosely typed API. In contexts where performance is nice, but not critical, use the SugaredLogger. It's 4-10x faster than other structured logging packages and supports both structured and printf-style logging. Like log15 and go-kit, the SugaredLogger's structured logging APIs are loosely typed and accept a variadic number of key-value pairs. (For more advanced use cases, they also accept strongly typed fields - see the SugaredLogger.With documentation for details.) By default, loggers are unbuffered. However, since zap's low-level APIs allow buffering, calling Sync before letting your process exit is a good habit. In the rare contexts where every microsecond and every allocation matter, use the Logger. It's even faster than the SugaredLogger and allocates far less, but it only supports strongly-typed, structured logging. Choosing between the Logger and SugaredLogger doesn't need to be an application-wide decision: converting between the two is simple and inexpensive. The simplest way to build a Logger is to use zap's opinionated presets: NewExample, NewProduction, and NewDevelopment. These presets build a logger with a single function call: Presets are fine for small projects, but larger projects and organizations naturally require a bit more customization. For most users, zap's Config struct strikes the right balance between flexibility and convenience. See the package-level BasicConfiguration example for sample code. More unusual configurations (splitting output between files, sending logs to a message queue, etc.) are possible, but require direct use of go.uber.org/zap/zapcore. See the package-level AdvancedConfiguration example for sample code. The zap package itself is a relatively thin wrapper around the interfaces in go.uber.org/zap/zapcore. Extending zap to support a new encoding (e.g., BSON), a new log sink (e.g., Kafka), or something more exotic (perhaps an exception aggregation service, like Sentry or Rollbar) typically requires implementing the zapcore.Encoder, zapcore.WriteSyncer, or zapcore.Core interfaces. See the zapcore documentation for details. Similarly, package authors can use the high-performance Encoder and Core implementations in the zapcore package to build their own loggers. An FAQ covering everything from installation errors to design decisions is available at https://github.com/uber-go/zap/blob/master/FAQ.md.
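A short sketch using the NewProduction preset, the strongly typed Logger, and the SugaredLogger; the field names and values are illustrative:

    package main

    import "go.uber.org/zap"

    func main() {
        // NewProduction is one of the opinionated presets; it returns a JSON logger.
        logger, err := zap.NewProduction()
        if err != nil {
            panic(err)
        }
        defer logger.Sync() // flush any buffered entries before exiting

        // The base Logger takes strongly typed fields and avoids allocations.
        logger.Info("request handled",
            zap.String("method", "GET"),
            zap.Int("status", 200),
        )

        // The SugaredLogger trades a little performance for a looser API.
        sugar := logger.Sugar()
        sugar.Infow("request handled", "method", "GET", "status", 200)
        sugar.Infof("handled %s in %d ms", "/health", 3)
    }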
Package bolt implements a low-level key/value store in pure Go. It supports fully serializable transactions, ACID semantics, and lock-free MVCC with multiple readers and a single writer. Bolt can be used for projects that want a simple data store without the need to add large dependencies such as Postgres or MySQL. Bolt is a single-level, zero-copy, B+tree data store. This means that Bolt is optimized for fast read access and does not require recovery in the event of a system crash. Transactions which have not finished committing will simply be rolled back in the event of a crash. The design of Bolt is based on Howard Chu's LMDB database project. Bolt currently works on Windows, Mac OS X, and Linux. There are only a few types in Bolt: DB, Bucket, Tx, and Cursor. The DB is a collection of buckets and is represented by a single file on disk. A bucket is a collection of unique keys that are associated with values. Transactions provide either read-only or read-write access to the database. Read-only transactions can retrieve key/value pairs and can use Cursors to iterate over the dataset sequentially. Read-write transactions can create and delete buckets and can insert and remove keys. Only one read-write transaction is allowed at a time. The database uses a read-only, memory-mapped data file to ensure that applications cannot corrupt the database, however, this means that keys and values returned from Bolt cannot be changed. Writing to a read-only byte slice will cause Go to panic. Keys and values retrieved from the database are only valid for the life of the transaction. When used outside the transaction, these byte slices can point to different data or can point to invalid memory which will cause a panic.
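A small read/write sketch. The import path below assumes the bbolt fork (go.etcd.io/bbolt); the original github.com/boltdb/bolt package exposes the same Open/Update/View API. The file name, bucket, and key are illustrative:

    package main

    import (
        "fmt"
        "log"

        bolt "go.etcd.io/bbolt"
    )

    func main() {
        // Open creates the single data file if it does not exist.
        db, err := bolt.Open("my.db", 0600, nil)
        if err != nil {
            log.Fatal(err)
        }
        defer db.Close()

        // Read-write access happens inside an Update transaction.
        err = db.Update(func(tx *bolt.Tx) error {
            b, err := tx.CreateBucketIfNotExists([]byte("answers"))
            if err != nil {
                return err
            }
            return b.Put([]byte("life"), []byte("42"))
        })
        if err != nil {
            log.Fatal(err)
        }

        // Read-only access happens inside a View transaction; the returned slice
        // is only valid until the transaction ends, so copy it if kept around.
        err = db.View(func(tx *bolt.Tx) error {
            v := tx.Bucket([]byte("answers")).Get([]byte("life"))
            fmt.Printf("life = %s\n", v)
            return nil
        })
        if err != nil {
            log.Fatal(err)
        }
    }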
Package validator implements value validations for structs and individual fields based on tags. It can also handle Cross-Field and Cross-Struct validation for nested structs and has the ability to dive into arrays and maps of any type. See more examples at https://github.com/go-playground/validator/tree/master/_examples Validator is designed to be thread-safe and used as a singleton instance. It caches information about your struct and validations, in essence only parsing your validation tags once per struct type. Using multiple instances neglects the benefit of caching. Functions that are not thread-safe are explicitly marked as such in the documentation. Doing things this way is actually how the standard library does it; see the file.Open method here: The authors return type "error" to avoid the issue discussed in the following, where err is always != nil: Validator only returns InvalidValidationError for bad validation input, nil, or ValidationErrors as type error; so, in your code all you need to do is check if the error returned is not nil, and if it's not, check if the error is InvalidValidationError (if necessary, most of the time it isn't) and type cast it to type ValidationErrors like so: err.(validator.ValidationErrors). Custom Validation functions can be added. Example: Cross-Field Validation can be done via the following tags: If, however, some custom cross-field validation is required, it can be done using a custom validation. Why not just have cross-field validation tags (i.e. only eqcsfield and not eqfield)? The reason is efficiency. If you want to check a field within the same struct, "eqfield" only has to find the field on the same struct (1 level). But if we used "eqcsfield" it could be multiple levels down. Example: Multiple validators on a field will process in the order defined. Example: Bad Validator definitions are not handled by the library. Example: Baked In Cross-Field validation only compares fields on the same struct. If Cross-Field + Cross-Struct validation is needed you should implement your own custom validator. Comma (",") is the default separator of validation tags. If you wish to have a comma included within the parameter (i.e. excludesall=,) you will need to use the UTF-8 hex representation 0x2C, which is replaced in the code as a comma, so the above will become excludesall=0x2C. Pipe ("|") is the 'or' validation tag separator. If you wish to have a pipe included within the parameter (i.e. excludesall=|) you will need to use the UTF-8 hex representation 0x7C, which is replaced in the code as a pipe, so the above will become excludesall=0x7C. Here is a list of the current built-in validators: Tells the validation to skip this struct field; this is particularly handy in ignoring embedded structs from being validated. (Usage: -) This is the 'or' operator allowing multiple validators to be used and accepted. (Usage: rgb|rgba) <-- this would allow either rgb or rgba colors to be accepted. This can also be combined with 'and', for example (Usage: omitempty,rgb|rgba) When a field that is a nested struct is encountered and contains this flag, any validation on the nested struct will be run, but none of the nested struct fields will be validated. This is useful if inside of your program you know the struct will be valid, but need to verify it has been assigned. NOTE: only "required" and "omitempty" can be used on a struct itself. Same as structonly tag except that any struct level validations will not run.
Allows conditional validation, for example if a field is not set with a value (Determined by the "required" validator) then other validation such as min or max won't run, but if a value is set validation will run. Allows to skip the validation if the value is nil (same as omitempty, but only for the nil-values). This tells the validator to dive into a slice, array or map and validate that level of the slice, array or map with the validation tags that follow. Multidimensional nesting is also supported, each level you wish to dive will require another dive tag. dive has some sub-tags, 'keys' & 'endkeys', please see the Keys & EndKeys section just below. Example #1 Example #2 Keys & EndKeys These are to be used together directly after the dive tag and tells the validator that anything between 'keys' and 'endkeys' applies to the keys of a map and not the values; think of it like the 'dive' tag, but for map keys instead of values. Multidimensional nesting is also supported, each level you wish to validate will require another 'keys' and 'endkeys' tag. These tags are only valid for maps. Example #1 Example #2 This validates that the value is not the data types default zero value. For numbers ensures value is not zero. For strings ensures value is not "". For booleans ensures value is not false. For slices, maps, pointers, interfaces, channels and functions ensures the value is not nil. For structs ensures value is not the zero value when using WithRequiredStructEnabled. The field under validation must be present and not empty only if all the other specified fields are equal to the value following the specified field. For strings ensures value is not "". For slices, maps, pointers, interfaces, channels and functions ensures the value is not nil. For structs ensures value is not the zero value. Examples: The field under validation must be present and not empty unless all the other specified fields are equal to the value following the specified field. For strings ensures value is not "". For slices, maps, pointers, interfaces, channels and functions ensures the value is not nil. For structs ensures value is not the zero value. Examples: The field under validation must be present and not empty only if any of the other specified fields are present. For strings ensures value is not "". For slices, maps, pointers, interfaces, channels and functions ensures the value is not nil. For structs ensures value is not the zero value. Examples: The field under validation must be present and not empty only if all of the other specified fields are present. For strings ensures value is not "". For slices, maps, pointers, interfaces, channels and functions ensures the value is not nil. For structs ensures value is not the zero value. Example: The field under validation must be present and not empty only when any of the other specified fields are not present. For strings ensures value is not "". For slices, maps, pointers, interfaces, channels and functions ensures the value is not nil. For structs ensures value is not the zero value. Examples: The field under validation must be present and not empty only when all of the other specified fields are not present. For strings ensures value is not "". For slices, maps, pointers, interfaces, channels and functions ensures the value is not nil. For structs ensures value is not the zero value. Example: The field under validation must not be present or not empty only if all the other specified fields are equal to the value following the specified field. 
For strings ensures value is not "". For slices, maps, pointers, interfaces, channels and functions ensures the value is not nil. For structs ensures value is not the zero value. Examples: The field under validation must not be present or empty unless all the other specified fields are equal to the value following the specified field. For strings ensures value is not "". For slices, maps, pointers, interfaces, channels and functions ensures the value is not nil. For structs ensures value is not the zero value. Examples: This validates that the value is the default value and is almost the opposite of required. For numbers, length will ensure that the value is equal to the parameter given. For strings, it checks that the string length is exactly that number of characters. For slices, arrays, and maps, validates the number of items. Example #1 Example #2 (time.Duration) For time.Duration, len will ensure that the value is equal to the duration given in the parameter. For numbers, max will ensure that the value is less than or equal to the parameter given. For strings, it checks that the string length is at most that number of characters. For slices, arrays, and maps, validates the number of items. Example #1 Example #2 (time.Duration) For time.Duration, max will ensure that the value is less than or equal to the duration given in the parameter. For numbers, min will ensure that the value is greater or equal to the parameter given. For strings, it checks that the string length is at least that number of characters. For slices, arrays, and maps, validates the number of items. Example #1 Example #2 (time.Duration) For time.Duration, min will ensure that the value is greater than or equal to the duration given in the parameter. For strings & numbers, eq will ensure that the value is equal to the parameter given. For slices, arrays, and maps, validates the number of items. Example #1 Example #2 (time.Duration) For time.Duration, eq will ensure that the value is equal to the duration given in the parameter. For strings & numbers, ne will ensure that the value is not equal to the parameter given. For slices, arrays, and maps, validates the number of items. Example #1 Example #2 (time.Duration) For time.Duration, ne will ensure that the value is not equal to the duration given in the parameter. For strings, ints, and uints, oneof will ensure that the value is one of the values in the parameter. The parameter should be a list of values separated by whitespace. Values may be strings or numbers. To match strings with spaces in them, include the target string between single quotes. For numbers, this will ensure that the value is greater than the parameter given. For strings, it checks that the string length is greater than that number of characters. For slices, arrays and maps it validates the number of items. Example #1 Example #2 (time.Time) For time.Time ensures the time value is greater than time.Now.UTC(). Example #3 (time.Duration) For time.Duration, gt will ensure that the value is greater than the duration given in the parameter. Same as 'min' above. Kept both to make terminology with 'len' easier. Example #1 Example #2 (time.Time) For time.Time ensures the time value is greater than or equal to time.Now.UTC(). Example #3 (time.Duration) For time.Duration, gte will ensure that the value is greater than or equal to the duration given in the parameter. For numbers, this will ensure that the value is less than the parameter given. 
For strings, it checks that the string length is less than that number of characters. For slices, arrays, and maps it validates the number of items. Example #1 Example #2 (time.Time) For time.Time ensures the time value is less than time.Now.UTC(). Example #3 (time.Duration) For time.Duration, lt will ensure that the value is less than the duration given in the parameter. Same as 'max' above. Kept both to make terminology with 'len' easier. Example #1 Example #2 (time.Time) For time.Time ensures the time value is less than or equal to time.Now.UTC(). Example #3 (time.Duration) For time.Duration, lte will ensure that the value is less than or equal to the duration given in the parameter. This will validate the field value against another fields value either within a struct or passed in field. Example #1: Example #2: Field Equals Another Field (relative) This does the same as eqfield except that it validates the field provided relative to the top level struct. This will validate the field value against another fields value either within a struct or passed in field. Examples: Field Does Not Equal Another Field (relative) This does the same as nefield except that it validates the field provided relative to the top level struct. Only valid for Numbers, time.Duration and time.Time types, this will validate the field value against another fields value either within a struct or passed in field. usage examples are for validation of a Start and End date: Example #1: Example #2: This does the same as gtfield except that it validates the field provided relative to the top level struct. Only valid for Numbers, time.Duration and time.Time types, this will validate the field value against another fields value either within a struct or passed in field. usage examples are for validation of a Start and End date: Example #1: Example #2: This does the same as gtefield except that it validates the field provided relative to the top level struct. Only valid for Numbers, time.Duration and time.Time types, this will validate the field value against another fields value either within a struct or passed in field. usage examples are for validation of a Start and End date: Example #1: Example #2: This does the same as ltfield except that it validates the field provided relative to the top level struct. Only valid for Numbers, time.Duration and time.Time types, this will validate the field value against another fields value either within a struct or passed in field. usage examples are for validation of a Start and End date: Example #1: Example #2: This does the same as ltefield except that it validates the field provided relative to the top level struct. This does the same as contains except for struct fields. It should only be used with string types. See the behavior of reflect.Value.String() for behavior on other types. This does the same as excludes except for struct fields. It should only be used with string types. See the behavior of reflect.Value.String() for behavior on other types. For arrays & slices, unique will ensure that there are no duplicates. For maps, unique will ensure that there are no duplicate values. For slices of struct, unique will ensure that there are no duplicate values in a field of the struct specified via a parameter. 
This validates that a string value contains ASCII alpha characters only This validates that a string value contains ASCII alphanumeric characters only This validates that a string value contains unicode alpha characters only This validates that a string value contains unicode alphanumeric characters only This validates that a string value can successfully be parsed into a boolean with strconv.ParseBool This validates that a string value contains number values only. For integers or float it returns true. This validates that a string value contains a basic numeric value. basic excludes exponents etc... for integers or float it returns true. This validates that a string value contains a valid hexadecimal. This validates that a string value contains a valid hex color including hashtag (#) This validates that a string value contains only lowercase characters. An empty string is not a valid lowercase string. This validates that a string value contains only uppercase characters. An empty string is not a valid uppercase string. This validates that a string value contains a valid rgb color This validates that a string value contains a valid rgba color This validates that a string value contains a valid hsl color This validates that a string value contains a valid hsla color This validates that a string value contains a valid E.164 Phone number https://en.wikipedia.org/wiki/E.164 (ex. +1123456789) This validates that a string value contains a valid email This may not conform to all possibilities of any rfc standard, but neither does any email provider accept all possibilities. This validates that a string value is valid JSON This validates that a string value is a valid JWT This validates that a string value contains a valid file path and that the file exists on the machine. This is done using os.Stat, which is a platform independent function. This validates that a string value contains a valid file path and that the file exists on the machine and is an image. This is done using os.Stat and github.com/gabriel-vasile/mimetype This validates that a string value contains a valid file path but does not validate the existence of that file. This is done using os.Stat, which is a platform independent function. This validates that a string value contains a valid url This will accept any url the golang request uri accepts but must contain a schema for example http:// or rtmp:// This validates that a string value contains a valid uri This will accept any uri the golang request uri accepts This validates that a string value contains a valid URN according to the RFC 2141 spec. This validates that a string value contains a valid base32 value. Although an empty string is valid base32 this will report an empty string as an error, if you wish to accept an empty string as valid you can use this with the omitempty tag. This validates that a string value contains a valid base64 value. Although an empty string is valid base64 this will report an empty string as an error, if you wish to accept an empty string as valid you can use this with the omitempty tag. This validates that a string value contains a valid base64 URL safe value according to the RFC4648 spec. Although an empty string is a valid base64 URL safe value, this will report an empty string as an error, if you wish to accept an empty string as valid you can use this with the omitempty tag. This validates that a string value contains a valid base64 URL safe value, but without = padding, according to the RFC4648 spec, section 3.2. 
Although an empty string is a valid base64 URL safe value, this will report an empty string as an error, if you wish to accept an empty string as valid you can use this with the omitempty tag. This validates that a string value contains a valid bitcoin address. The format of the string is checked to ensure it matches one of the three formats P2PKH, P2SH and performs checksum validation. Bitcoin Bech32 Address (segwit) This validates that a string value contains a valid bitcoin Bech32 address as defined by bip-0173 (https://github.com/bitcoin/bips/blob/master/bip-0173.mediawiki) Special thanks to Pieter Wuille for providing reference implementations. This validates that a string value contains a valid ethereum address. The format of the string is checked to ensure it matches the standard Ethereum address format. This validates that a string value contains the substring value. This validates that a string value contains any Unicode code points in the substring value. This validates that a string value contains the supplied rune value. This validates that a string value does not contain the substring value. This validates that a string value does not contain any Unicode code points in the substring value. This validates that a string value does not contain the supplied rune value. This validates that a string value starts with the supplied string value This validates that a string value ends with the supplied string value This validates that a string value does not start with the supplied string value This validates that a string value does not end with the supplied string value This validates that a string value contains a valid isbn10 or isbn13 value. This validates that a string value contains a valid isbn10 value. This validates that a string value contains a valid isbn13 value. This validates that a string value contains a valid UUID. Uppercase UUID values will not pass - use `uuid_rfc4122` instead. This validates that a string value contains a valid version 3 UUID. Uppercase UUID values will not pass - use `uuid3_rfc4122` instead. This validates that a string value contains a valid version 4 UUID. Uppercase UUID values will not pass - use `uuid4_rfc4122` instead. This validates that a string value contains a valid version 5 UUID. Uppercase UUID values will not pass - use `uuid5_rfc4122` instead. This validates that a string value contains a valid ULID value. This validates that a string value contains only ASCII characters. NOTE: if the string is blank, this validates as true. This validates that a string value contains only printable ASCII characters. NOTE: if the string is blank, this validates as true. This validates that a string value contains one or more multibyte characters. NOTE: if the string is blank, this validates as true. This validates that a string value contains a valid DataURI. NOTE: this will also validate that the data portion is valid base64 This validates that a string value contains a valid latitude. This validates that a string value contains a valid longitude. This validates that a string value contains a valid U.S. Social Security Number. This validates that a string value contains a valid IP Address. This validates that a string value contains a valid v4 IP Address. This validates that a string value contains a valid v6 IP Address. This validates that a string value contains a valid CIDR Address. This validates that a string value contains a valid v4 CIDR Address. This validates that a string value contains a valid v6 CIDR Address. 
This validates that a string value contains a valid resolvable TCP Address. This validates that a string value contains a valid resolvable v4 TCP Address. This validates that a string value contains a valid resolvable v6 TCP Address. This validates that a string value contains a valid resolvable UDP Address. This validates that a string value contains a valid resolvable v4 UDP Address. This validates that a string value contains a valid resolvable v6 UDP Address. This validates that a string value contains a valid resolvable IP Address. This validates that a string value contains a valid resolvable v4 IP Address. This validates that a string value contains a valid resolvable v6 IP Address. This validates that a string value contains a valid Unix Address. This validates that a string value contains a valid MAC Address. Note: See Go's ParseMAC for accepted formats and types: This validates that a string value is a valid Hostname according to RFC 952 https://tools.ietf.org/html/rfc952 This validates that a string value is a valid Hostname according to RFC 1123 https://tools.ietf.org/html/rfc1123 Fully Qualified Domain Name (FQDN) This validates that a string value contains a valid FQDN. This validates that a string value appears to be an HTML element tag including those described at https://developer.mozilla.org/en-US/docs/Web/HTML/Element This validates that a string value is a proper character reference in decimal or hexadecimal format This validates that a string value is percent-encoded (URL encoded) according to https://tools.ietf.org/html/rfc3986#section-2.1 This validates that a string value contains a valid directory and that it exists on the machine. This is done using os.Stat, which is a platform independent function. This validates that a string value contains a valid directory but does not validate the existence of that directory. This is done using os.Stat, which is a platform independent function. It is safest to suffix the string with os.PathSeparator if the directory may not exist at the time of validation. This validates that a string value contains a valid DNS hostname and port that can be used to validate fields typically passed to sockets and connections. This validates that a string value is a valid datetime based on the supplied datetime format. Supplied format must match the official Go time format layout as documented in https://golang.org/pkg/time/ This validates that a string value is a valid country code based on iso3166-1 alpha-2 standard. see: https://www.iso.org/iso-3166-country-codes.html This validates that a string value is a valid country code based on iso3166-1 alpha-3 standard. see: https://www.iso.org/iso-3166-country-codes.html This validates that a string value is a valid country code based on iso3166-1 alpha-numeric standard. see: https://www.iso.org/iso-3166-country-codes.html This validates that a string value is a valid BCP 47 language tag, as parsed by language.Parse. More information on https://pkg.go.dev/golang.org/x/text/language BIC (SWIFT code) This validates that a string value is a valid Business Identifier Code (SWIFT code), defined in ISO 9362. More information on https://www.iso.org/standard/60390.html This validates that a string value is a valid dns RFC 1035 label, defined in RFC 1035. More information on https://datatracker.ietf.org/doc/html/rfc1035 This validates that a string value is a valid time zone based on the time zone database present on the system. 
Although the empty value and the Local value are allowed by the Go time.LoadLocation function, they are not allowed by this validator. More information on https://golang.org/pkg/time/#LoadLocation This validates that a string value is a valid semver version, defined in Semantic Versioning 2.0.0. More information on https://semver.org/ This validates that a string value is a valid CVE id, as defined by MITRE. More information on https://cve.mitre.org/ This validates that a string value contains a valid credit card number using the Luhn algorithm. This validates that a string or (u)int value contains a valid checksum using the Luhn algorithm. This validates that a string is a valid 24 character hexadecimal string or valid connection string. Example: This validates that a string value contains a valid cron expression. This validates that a string is valid for use with SpiceDb for the indicated purpose. If no purpose is given, a purpose of 'id' is assumed. Alias Validators and Tags NOTE: When returning an error, the tag returned in "FieldError" will be the alias tag unless the dive tag is part of the alias. Everything after the dive tag is not reported as the alias tag. Also, the "ActualTag" in the before case will be the actual tag within the alias that failed. Here is a list of the current built-in alias tags: Validator notes: A collection of validation rules that are frequently needed but are more complex than the ones found in the baked in validators. A non-standard validator must be registered manually like you would with your own custom validation functions. Example of registration and use: Here is a list of the current non-standard validators: This package panics when bad input is provided; this is by design, as bad code like that should not make it to production.
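A basic usage sketch for the tag-based validation and error handling described above, assuming the github.com/go-playground/validator/v10 module path; the struct and tags are illustrative:

    package main

    import (
        "fmt"

        "github.com/go-playground/validator/v10"
    )

    // User exercises a few of the baked-in tags described above.
    type User struct {
        Email string `validate:"required,email"`
        Age   uint8  `validate:"gte=18,lte=130"`
        Color string `validate:"omitempty,rgb|rgba"`
    }

    func main() {
        // A single instance caches parsed tags and is safe for concurrent use.
        validate := validator.New(validator.WithRequiredStructEnabled())

        err := validate.Struct(User{Email: "not-an-email", Age: 12})
        if err != nil {
            // InvalidValidationError only occurs for bad input such as a nil value.
            if _, ok := err.(*validator.InvalidValidationError); ok {
                fmt.Println(err)
                return
            }
            // Otherwise the error is a ValidationErrors slice of FieldError values.
            for _, fe := range err.(validator.ValidationErrors) {
                fmt.Println(fe.Namespace(), "failed on the", fe.Tag(), "tag")
            }
        }
    }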
Package cron implements a cron spec parser and job runner. To download the specific tagged release, run: Import it in your program as: It requires Go 1.11 or later due to usage of Go Modules. Callers may register Funcs to be invoked on a given schedule. Cron will run them in their own goroutines. A cron expression represents a set of times, using 5 space-separated fields. Month and Day-of-week field values are case insensitive. "SUN", "Sun", and "sun" are equally accepted. The specific interpretation of the format is based on the Cron Wikipedia page: https://en.wikipedia.org/wiki/Cron Alternative Cron expression formats support other fields like seconds. You can implement that by creating a custom Parser as follows. Since adding Seconds is the most common modification to the standard cron spec, cron provides a builtin function to do that, which is equivalent to the custom parser you saw earlier, except that its seconds field is REQUIRED: That emulates Quartz, the most popular alternative Cron schedule format: http://www.quartz-scheduler.org/documentation/quartz-2.x/tutorials/crontrigger.html Asterisk ( * ) The asterisk indicates that the cron expression will match for all values of the field; e.g., using an asterisk in the 5th field (month) would indicate every month. Slash ( / ) Slashes are used to describe increments of ranges. For example 3-59/15 in the 1st field (minutes) would indicate the 3rd minute of the hour and every 15 minutes thereafter. The form "*\/..." is equivalent to the form "first-last/...", that is, an increment over the largest possible range of the field. The form "N/..." is accepted as meaning "N-MAX/...", that is, starting at N, use the increment until the end of that specific range. It does not wrap around. Comma ( , ) Commas are used to separate items of a list. For example, using "MON,WED,FRI" in the 5th field (day of week) would mean Mondays, Wednesdays and Fridays. Hyphen ( - ) Hyphens are used to define ranges. For example, 9-17 would indicate every hour between 9am and 5pm inclusive. Question mark ( ? ) Question mark may be used instead of '*' for leaving either day-of-month or day-of-week blank. You may use one of several pre-defined schedules in place of a cron expression. You may also schedule a job to execute at fixed intervals, starting at the time it's added or cron is run. This is supported by formatting the cron spec like this: where "duration" is a string accepted by time.ParseDuration (http://golang.org/pkg/time/#ParseDuration). For example, "@every 1h30m10s" would indicate a schedule that activates after 1 hour, 30 minutes, 10 seconds, and then every interval after that. Note: The interval does not take the job runtime into account. For example, if a job takes 3 minutes to run, and it is scheduled to run every 5 minutes, it will have only 2 minutes of idle time between each run. By default, all interpretation and scheduling is done in the machine's local time zone (time.Local). You can specify a different time zone on construction: Individual cron schedules may also override the time zone they are to be interpreted in by providing an additional space-separated field at the beginning of the cron spec, of the form "CRON_TZ=Asia/Tokyo". For example: The prefix "TZ=(TIME ZONE)" is also supported for legacy compatibility. Be aware that jobs scheduled during daylight-savings leap-ahead transitions will not be run! A Cron runner may be configured with a chain of job wrappers to add cross-cutting functionality to all submitted jobs. 
For example, they may be used to achieve the following effects: Install wrappers for all jobs added to a cron using the `cron.WithChain` option: Install wrappers for individual jobs by explicitly wrapping them: Since the Cron service runs concurrently with the calling code, some amount of care must be taken to ensure proper synchronization. All cron methods are designed to be correctly synchronized as long as the caller ensures that invocations have a clear happens-before ordering between them. Cron defines a Logger interface that is a subset of the one defined in github.com/go-logr/logr. It has two logging levels (Info and Error), and parameters are key/value pairs. This makes it possible for cron logging to plug into structured logging systems. An adapter, [Verbose]PrintfLogger, is provided to wrap the standard library *log.Logger. For additional insight into Cron operations, verbose logging may be activated which will record job runs, scheduling decisions, and added or removed jobs. Activate it with a one-off logger as follows: Cron entries are stored in an array, sorted by their next activation time. Cron sleeps until the next job is due to be run. Upon waking:
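A sketch pulling together the spec formats, time zone handling, job wrappers, and the seconds-enabled parser mentioned above, assuming the github.com/robfig/cron/v3 module path; the schedules and messages are illustrative:

    package main

    import (
        "fmt"
        "time"

        "github.com/robfig/cron/v3"
    )

    func main() {
        tokyo, _ := time.LoadLocation("Asia/Tokyo")

        // Interpret schedules in Tokyo time and recover panics in all jobs.
        c := cron.New(
            cron.WithLocation(tokyo),
            cron.WithChain(cron.Recover(cron.DefaultLogger)),
        )

        // Standard 5-field spec: minute, hour, day-of-month, month, day-of-week.
        c.AddFunc("30 * * * *", func() { fmt.Println("every hour on the half hour") })
        c.AddFunc("@every 1h30m10s", func() { fmt.Println("fixed interval") })
        c.AddFunc("CRON_TZ=UTC 0 6 * * *", func() { fmt.Println("6am UTC, overriding the cron's zone") })

        // Equivalent to a custom parser with a required Seconds field (Quartz style).
        sec := cron.New(cron.WithSeconds())
        sec.AddFunc("0 30 * * * *", func() { fmt.Println("seconds-resolution spec") })

        c.Start()
        sec.Start()
        defer c.Stop()
        defer sec.Stop()

        time.Sleep(2 * time.Hour) // keep the process alive; jobs run in their own goroutines
    }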
Package cron implements a cron spec parser and job runner. Callers may register Funcs to be invoked on a given schedule. Cron will run them in their own goroutines. A cron expression represents a set of times, using 6 space-separated fields. Note: Month and Day-of-week field values are case insensitive. "SUN", "Sun", and "sun" are equally accepted. Asterisk ( * ) The asterisk indicates that the cron expression will match for all values of the field; e.g., using an asterisk in the 5th field (month) would indicate every month. Slash ( / ) Slashes are used to describe increments of ranges. For example 3-59/15 in the 1st field (minutes) would indicate the 3rd minute of the hour and every 15 minutes thereafter. The form "*\/..." is equivalent to the form "first-last/...", that is, an increment over the largest possible range of the field. The form "N/..." is accepted as meaning "N-MAX/...", that is, starting at N, use the increment until the end of that specific range. It does not wrap around. Comma ( , ) Commas are used to separate items of a list. For example, using "MON,WED,FRI" in the 5th field (day of week) would mean Mondays, Wednesdays and Fridays. Hyphen ( - ) Hyphens are used to define ranges. For example, 9-17 would indicate every hour between 9am and 5pm inclusive. Question mark ( ? ) Question mark may be used instead of '*' for leaving either day-of-month or day-of-week blank. You may use one of several pre-defined schedules in place of a cron expression. You may also schedule a job to execute at fixed intervals, starting at the time it's added or cron is run. This is supported by formatting the cron spec like this: where "duration" is a string accepted by time.ParseDuration (http://golang.org/pkg/time/#ParseDuration). For example, "@every 1h30m10s" would indicate a schedule that activates after 1 hour, 30 minutes, 10 seconds, and then every interval after that. Note: The interval does not take the job runtime into account. For example, if a job takes 3 minutes to run, and it is scheduled to run every 5 minutes, it will have only 2 minutes of idle time between each run. All interpretation and scheduling is done in the machine's local time zone (as provided by the Go time package (http://www.golang.org/pkg/time). Be aware that jobs scheduled during daylight-savings leap-ahead transitions will not be run! Since the Cron service runs concurrently with the calling code, some amount of care must be taken to ensure proper synchronization. All cron methods are designed to be correctly synchronized as long as the caller ensures that invocations have a clear happens-before ordering between them. Cron entries are stored in an array, sorted by their next activation time. Cron sleeps until the next job is due to be run. Upon waking:
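For this older release the spec has six fields with seconds first; a minimal sketch, assuming the pre-v3 github.com/robfig/cron import path:

    package main

    import (
        "fmt"
        "time"

        "github.com/robfig/cron"
    )

    func main() {
        c := cron.New()
        // 6-field spec: second, minute, hour, day-of-month, month, day-of-week.
        c.AddFunc("0 30 * * * *", func() { fmt.Println("every hour on the half hour") })
        c.AddFunc("@every 1h30m10s", func() { fmt.Println("fixed interval") })
        c.Start()
        defer c.Stop()

        time.Sleep(2 * time.Hour) // jobs run in their own goroutines
    }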
Package fiber is an Express inspired web framework built on top of Fasthttp, the fastest HTTP engine for Go. Designed to ease things up for fast development with zero memory allocation and performance in mind.
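A minimal sketch, assuming fiber v2 (github.com/gofiber/fiber/v2); the route and message are illustrative:

    package main

    import (
        "log"

        "github.com/gofiber/fiber/v2"
    )

    func main() {
        app := fiber.New()

        // Handlers receive a reused *fiber.Ctx, in the spirit of Express.
        app.Get("/", func(c *fiber.Ctx) error {
            return c.SendString("Hello, World!")
        })

        log.Fatal(app.Listen(":3000"))
    }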
Package validator implements value validations for structs and individual fields based on tags. It can also handle Cross-Field and Cross-Struct validation for nested structs and has the ability to dive into arrays and maps of any type. See more examples at https://github.com/go-playground/validator/tree/v9/_examples Doing things this way is actually how the standard library does it; see the file.Open method here: The authors return type "error" to avoid the issue discussed in the following, where err is always != nil: Validator only returns InvalidValidationError for bad validation input, nil, or ValidationErrors as type error; so, in your code all you need to do is check if the error returned is not nil, and if it's not, check if the error is InvalidValidationError (if necessary, most of the time it isn't) and type cast it to type ValidationErrors like so: err.(validator.ValidationErrors). Custom Validation functions can be added. Example: Cross-Field Validation can be done via the following tags: If, however, some custom cross-field validation is required, it can be done using a custom validation. Why not just have cross-field validation tags (i.e. only eqcsfield and not eqfield)? The reason is efficiency. If you want to check a field within the same struct, "eqfield" only has to find the field on the same struct (1 level). But if we used "eqcsfield" it could be multiple levels down. Example: Multiple validators on a field will process in the order defined. Example: Bad Validator definitions are not handled by the library. Example: Baked In Cross-Field validation only compares fields on the same struct. If Cross-Field + Cross-Struct validation is needed you should implement your own custom validator. Comma (",") is the default separator of validation tags. If you wish to have a comma included within the parameter (i.e. excludesall=,) you will need to use the UTF-8 hex representation 0x2C, which is replaced in the code as a comma, so the above will become excludesall=0x2C. Pipe ("|") is the 'or' validation tag separator. If you wish to have a pipe included within the parameter (i.e. excludesall=|) you will need to use the UTF-8 hex representation 0x7C, which is replaced in the code as a pipe, so the above will become excludesall=0x7C. Here is a list of the current built-in validators: Tells the validation to skip this struct field; this is particularly handy in ignoring embedded structs from being validated. (Usage: -) This is the 'or' operator allowing multiple validators to be used and accepted. (Usage: rgb|rgba) <-- this would allow either rgb or rgba colors to be accepted. This can also be combined with 'and', for example (Usage: omitempty,rgb|rgba) When a field that is a nested struct is encountered and contains this flag, any validation on the nested struct will be run, but none of the nested struct fields will be validated. This is useful if inside of your program you know the struct will be valid, but need to verify it has been assigned. NOTE: only "required" and "omitempty" can be used on a struct itself. Same as structonly tag except that any struct level validations will not run. Allows conditional validation, for example if a field is not set with a value (Determined by the "required" validator) then other validation such as min or max won't run, but if a value is set validation will run. This tells the validator to dive into a slice, array or map and validate that level of the slice, array or map with the validation tags that follow. 
Multidimensional nesting is also supported, each level you wish to dive will require another dive tag. dive has some sub-tags, 'keys' & 'endkeys', please see the Keys & EndKeys section just below. Example #1 Example #2 Keys & EndKeys These are to be used together directly after the dive tag and tells the validator that anything between 'keys' and 'endkeys' applies to the keys of a map and not the values; think of it like the 'dive' tag, but for map keys instead of values. Multidimensional nesting is also supported, each level you wish to validate will require another 'keys' and 'endkeys' tag. These tags are only valid for maps. Example #1 Example #2 This validates that the value is not the data types default zero value. For numbers ensures value is not zero. For strings ensures value is not "". For slices, maps, pointers, interfaces, channels and functions ensures the value is not nil. The field under validation must be present and not empty only if any of the other specified fields are present. For strings ensures value is not "". For slices, maps, pointers, interfaces, channels and functions ensures the value is not nil. Examples: The field under validation must be present and not empty only if all of the other specified fields are present. For strings ensures value is not "". For slices, maps, pointers, interfaces, channels and functions ensures the value is not nil. Example: The field under validation must be present and not empty only when any of the other specified fields are not present. For strings ensures value is not "". For slices, maps, pointers, interfaces, channels and functions ensures the value is not nil. Examples: The field under validation must be present and not empty only when all of the other specified fields are not present. For strings ensures value is not "". For slices, maps, pointers, interfaces, channels and functions ensures the value is not nil. Example: This validates that the value is the default value and is almost the opposite of required. For numbers, length will ensure that the value is equal to the parameter given. For strings, it checks that the string length is exactly that number of characters. For slices, arrays, and maps, validates the number of items. For numbers, max will ensure that the value is less than or equal to the parameter given. For strings, it checks that the string length is at most that number of characters. For slices, arrays, and maps, validates the number of items. For numbers, min will ensure that the value is greater or equal to the parameter given. For strings, it checks that the string length is at least that number of characters. For slices, arrays, and maps, validates the number of items. For strings & numbers, eq will ensure that the value is equal to the parameter given. For slices, arrays, and maps, validates the number of items. For strings & numbers, ne will ensure that the value is not equal to the parameter given. For slices, arrays, and maps, validates the number of items. For strings, ints, and uints, oneof will ensure that the value is one of the values in the parameter. The parameter should be a list of values separated by whitespace. Values may be strings or numbers. For numbers, this will ensure that the value is greater than the parameter given. For strings, it checks that the string length is greater than that number of characters. For slices, arrays and maps it validates the number of items. Example #1 Example #2 (time.Time) For time.Time ensures the time value is greater than time.Now.UTC(). 
Same as 'min' above. Kept both to make terminology with 'len' easier. Example #1 Example #2 (time.Time) For time.Time ensures the time value is greater than or equal to time.Now.UTC(). For numbers, this will ensure that the value is less than the parameter given. For strings, it checks that the string length is less than that number of characters. For slices, arrays, and maps it validates the number of items. Example #1 Example #2 (time.Time) For time.Time ensures the time value is less than time.Now.UTC(). Same as 'max' above. Kept both to make terminology with 'len' easier. Example #1 Example #2 (time.Time) For time.Time ensures the time value is less than or equal to time.Now.UTC(). This will validate the field value against another fields value either within a struct or passed in field. Example #1: Example #2: Field Equals Another Field (relative) This does the same as eqfield except that it validates the field provided relative to the top level struct. This will validate the field value against another fields value either within a struct or passed in field. Examples: Field Does Not Equal Another Field (relative) This does the same as nefield except that it validates the field provided relative to the top level struct. Only valid for Numbers and time.Time types, this will validate the field value against another fields value either within a struct or passed in field. usage examples are for validation of a Start and End date: Example #1: Example #2: This does the same as gtfield except that it validates the field provided relative to the top level struct. Only valid for Numbers and time.Time types, this will validate the field value against another fields value either within a struct or passed in field. usage examples are for validation of a Start and End date: Example #1: Example #2: This does the same as gtefield except that it validates the field provided relative to the top level struct. Only valid for Numbers and time.Time types, this will validate the field value against another fields value either within a struct or passed in field. usage examples are for validation of a Start and End date: Example #1: Example #2: This does the same as ltfield except that it validates the field provided relative to the top level struct. Only valid for Numbers and time.Time types, this will validate the field value against another fields value either within a struct or passed in field. usage examples are for validation of a Start and End date: Example #1: Example #2: This does the same as ltefield except that it validates the field provided relative to the top level struct. This does the same as contains except for struct fields. It should only be used with string types. See the behavior of reflect.Value.String() for behavior on other types. This does the same as excludes except for struct fields. It should only be used with string types. See the behavior of reflect.Value.String() for behavior on other types. For arrays & slices, unique will ensure that there are no duplicates. For maps, unique will ensure that there are no duplicate values. For slices of struct, unique will ensure that there are no duplicate values in a field of the struct specified via a parameter. 
This validates that a string value contains ASCII alpha characters only This validates that a string value contains ASCII alphanumeric characters only This validates that a string value contains unicode alpha characters only This validates that a string value contains unicode alphanumeric characters only This validates that a string value contains a basic numeric value. basic excludes exponents etc...; for integers or floats it returns true. This validates that a string value contains a valid hexadecimal. This validates that a string value contains a valid hex color including hashtag (#) This validates that a string value contains a valid rgb color This validates that a string value contains a valid rgba color This validates that a string value contains a valid hsl color This validates that a string value contains a valid hsla color This validates that a string value contains a valid email This may not conform to all possibilities of any rfc standard, but neither does any email provider accept all possibilities. This validates that a string value contains a valid file path and that the file exists on the machine. This is done using os.Stat, which is a platform independent function. This validates that a string value contains a valid url This will accept any url the golang request uri accepts but must contain a scheme, for example http:// or rtmp:// This validates that a string value contains a valid uri This will accept any uri the golang request uri accepts This validates that a string value contains a valid URN according to the RFC 2141 spec. This validates that a string value contains a valid base64 value. Although an empty string is valid base64 this will report an empty string as an error; if you wish to accept an empty string as valid you can use this with the omitempty tag. This validates that a string value contains a valid base64 URL safe value according to the RFC4648 spec. Although an empty string is a valid base64 URL safe value, this will report an empty string as an error; if you wish to accept an empty string as valid you can use this with the omitempty tag. This validates that a string value contains a valid bitcoin address. The format of the string is checked to ensure it matches one of the supported formats (P2PKH, P2SH) and performs checksum validation. Bitcoin Bech32 Address (segwit) This validates that a string value contains a valid bitcoin Bech32 address as defined by bip-0173 (https://github.com/bitcoin/bips/blob/master/bip-0173.mediawiki) Special thanks to Pieter Wuille for providing reference implementations. This validates that a string value contains a valid ethereum address. The format of the string is checked to ensure it matches the standard Ethereum address format. Full validation is blocked by https://github.com/golang/crypto/pull/28 This validates that a string value contains the substring value. This validates that a string value contains any Unicode code points in the substring value. This validates that a string value contains the supplied rune value. This validates that a string value does not contain the substring value. This validates that a string value does not contain any Unicode code points in the substring value. This validates that a string value does not contain the supplied rune value. This validates that a string value starts with the supplied string value This validates that a string value ends with the supplied string value This validates that a string value contains a valid isbn10 or isbn13 value.
This validates that a string value contains a valid isbn10 value. This validates that a string value contains a valid isbn13 value. This validates that a string value contains a valid UUID. Uppercase UUID values will not pass - use `uuid_rfc4122` instead. This validates that a string value contains a valid version 3 UUID. Uppercase UUID values will not pass - use `uuid3_rfc4122` instead. This validates that a string value contains a valid version 4 UUID. Uppercase UUID values will not pass - use `uuid4_rfc4122` instead. This validates that a string value contains a valid version 5 UUID. Uppercase UUID values will not pass - use `uuid5_rfc4122` instead. This validates that a string value contains only ASCII characters. NOTE: if the string is blank, this validates as true. This validates that a string value contains only printable ASCII characters. NOTE: if the string is blank, this validates as true. This validates that a string value contains one or more multibyte characters. NOTE: if the string is blank, this validates as true. This validates that a string value contains a valid DataURI. NOTE: this will also validate that the data portion is valid base64 This validates that a string value contains a valid latitude. This validates that a string value contains a valid longitude. This validates that a string value contains a valid U.S. Social Security Number. This validates that a string value contains a valid IP Address. This validates that a string value contains a valid v4 IP Address. This validates that a string value contains a valid v6 IP Address. This validates that a string value contains a valid CIDR Address. This validates that a string value contains a valid v4 CIDR Address. This validates that a string value contains a valid v6 CIDR Address. This validates that a string value contains a valid resolvable TCP Address. This validates that a string value contains a valid resolvable v4 TCP Address. This validates that a string value contains a valid resolvable v6 TCP Address. This validates that a string value contains a valid resolvable UDP Address. This validates that a string value contains a valid resolvable v4 UDP Address. This validates that a string value contains a valid resolvable v6 UDP Address. This validates that a string value contains a valid resolvable IP Address. This validates that a string value contains a valid resolvable v4 IP Address. This validates that a string value contains a valid resolvable v6 IP Address. This validates that a string value contains a valid Unix Address. This validates that a string value contains a valid MAC Address. Note: See Go's ParseMAC for accepted formats and types: This validates that a string value is a valid Hostname according to RFC 952 https://tools.ietf.org/html/rfc952 This validates that a string value is a valid Hostname according to RFC 1123 https://tools.ietf.org/html/rfc1123 Fully Qualified Domain Name (FQDN) This validates that a string value contains a valid FQDN. This validates that a string value appears to be an HTML element tag including those described at https://developer.mozilla.org/en-US/docs/Web/HTML/Element This validates that a string value is a proper character reference in decimal or hexadecimal format This validates that a string value is percent-encoded (URL encoded) according to https://tools.ietf.org/html/rfc3986#section-2.1 This validates that a string value contains a valid directory and that it exists on the machine. This is done using os.Stat, which is a platform independent function.
NOTE: When returning an error, the tag returned in "FieldError" will be the alias tag unless the dive tag is part of the alias. Everything after the dive tag is not reported as the alias tag. Also, the "ActualTag" in the before case will be the actual tag within the alias that failed. Here is a list of the current built in alias tags: Validator notes: A collection of validation rules that are frequently needed but are more complex than the ones found in the baked in validators. A non standard validator must be registered manually like you would with your own custom validation functions. Example of registration and use: Here is a list of the current non standard validators: This package panics when bad input is provided, this is by design, bad code like that should not make it to production.
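The registration example referenced above was not preserved here; as a rough sketch under the v9-style API, registering a custom validation function (the same mechanism used for the non standard validators) might look like the following. The 'trimmed-nonempty' tag name and its logic are illustrative assumptions, not part of the library.

    package main

    import (
        "fmt"
        "strings"

        validator "gopkg.in/go-playground/validator.v9" // assumed v9 import path
    )

    type Comment struct {
        // "trimmed-nonempty" is a made-up tag name registered below.
        Body string `validate:"trimmed-nonempty"`
    }

    func main() {
        v := validator.New()

        // Register a custom validation function under a tag name of our choosing.
        err := v.RegisterValidation("trimmed-nonempty", func(fl validator.FieldLevel) bool {
            return strings.TrimSpace(fl.Field().String()) != ""
        })
        if err != nil {
            panic(err)
        }

        fmt.Println(v.Struct(Comment{Body: "   "})) // fails: only whitespace
        fmt.Println(v.Struct(Comment{Body: "ok"}))  // <nil>
    }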
Package badger implements an embeddable, simple and fast key-value database, written in pure Go. It is designed to be highly performant for both reads and writes simultaneously. Badger uses Multi-Version Concurrency Control (MVCC), and supports transactions. It runs transactions concurrently, with serializable snapshot isolation guarantees. Badger uses an LSM tree along with a value log to separate keys from values, hence reducing both write amplification and the size of the LSM tree. This allows the LSM tree to be served entirely from RAM, while the values are served from SSD. Badger has the following main types: DB, Txn, Item and Iterator. DB contains keys that are associated with values. It must be opened with the appropriate options before it can be accessed. All operations happen inside a Txn. Txn represents a transaction, which can be read-only or read-write. Read-only transactions can read values for a given key (which are returned inside an Item), or iterate over a set of key-value pairs using an Iterator (which are returned as Item type values as well). Read-write transactions can also update and delete keys from the DB. See the examples for more usage details.
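As a brief sketch of the Txn-based API described above (assuming a DefaultOptions signature that takes a directory path, as in recent Badger versions; the key, value, and path are made up):

    package main

    import (
        "fmt"
        "log"

        badger "github.com/dgraph-io/badger" // import path varies by major version
    )

    func main() {
        // Open the database; all state lives under the given directory.
        db, err := badger.Open(badger.DefaultOptions("/tmp/badger-example"))
        if err != nil {
            log.Fatal(err)
        }
        defer db.Close()

        // Read-write transaction: set a key.
        if err := db.Update(func(txn *badger.Txn) error {
            return txn.Set([]byte("answer"), []byte("42"))
        }); err != nil {
            log.Fatal(err)
        }

        // Read-only transaction: fetch the key back as an Item and copy its value.
        if err := db.View(func(txn *badger.Txn) error {
            item, err := txn.Get([]byte("answer"))
            if err != nil {
                return err
            }
            val, err := item.ValueCopy(nil)
            if err != nil {
                return err
            }
            fmt.Printf("answer = %s\n", val)
            return nil
        }); err != nil {
            log.Fatal(err)
        }
    }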
Package trace provides an implementation of the tracing part of the OpenTelemetry API. To participate in distributed traces a Span needs to be created for the operation being performed as part of a traced workflow. In its simplest form: A Tracer is unique to the instrumentation and is used to create Spans. Instrumentation should be designed to accept a TracerProvider from which it can create its own unique Tracer. Alternatively, the registered global TracerProvider from the go.opentelemetry.io/otel package can be used as a default. This package does not conform to the standard Go versioning policy; all of its interfaces may have methods added to them without a package major version bump. This non-standard API evolution could surprise an uninformed implementation author. They could unknowingly build their implementation in a way that would result in a runtime panic for their users that update to the new API. The API is designed to help inform an instrumentation author about this non-standard API evolution. It requires them to choose a default behavior for unimplemented interface methods. There are three behavior choices they can make: All interfaces in this API embed a corresponding interface from go.opentelemetry.io/otel/trace/embedded. If an author wants the default behavior of their implementations to be a compilation failure, signaling to their users they need to update to the latest version of that implementation, they need to embed the corresponding interface from go.opentelemetry.io/otel/trace/embedded in their implementation. For example, If an author wants the default behavior of their implementations to panic, they can embed the API interface directly. This option is not recommended. It will lead to publishing packages that contain runtime panics when users update to newer versions of go.opentelemetry.io/otel/trace, which may be done with a transitive dependency. Finally, an author can embed another implementation in theirs. The embedded implementation will be used for methods not defined by the author. For example, an author who wants to default to silently dropping the call can use go.opentelemetry.io/otel/trace/noop: It is strongly recommended that authors only embed go.opentelemetry.io/otel/trace/noop if they choose this default behavior. That implementation is the only one OpenTelemetry authors can guarantee will fully implement all the API interfaces when a user updates their API.
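To ground the span-creation flow described above, a minimal sketch using the registered global TracerProvider (the tracer name, span name, and attribute are arbitrary placeholders):

    package main

    import (
        "context"

        "go.opentelemetry.io/otel"
        "go.opentelemetry.io/otel/attribute"
    )

    func doWork(ctx context.Context) {
        // Instrumentation should normally accept a TracerProvider; here the
        // registered global provider is used as a default.
        tracer := otel.Tracer("example.com/hypothetical/instrumentation")

        ctx, span := tracer.Start(ctx, "doWork")
        defer span.End()

        span.SetAttributes(attribute.String("example.key", "example value"))

        // ... perform the traced operation using ctx ...
        _ = ctx
    }

    func main() {
        doWork(context.Background())
    }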
Package metric provides the OpenTelemetry API used to measure metrics about source code operation. This API is separate from its implementation so the instrumentation built from it is reusable. See go.opentelemetry.io/otel/sdk/metric for the official OpenTelemetry implementation of this API. All measurements made with this package are made via instruments. These instruments are created by a Meter which itself is created by a MeterProvider. Applications need to accept a MeterProvider implementation as a starting point when instrumenting. This can be done directly, or by using the OpenTelemetry global MeterProvider via GetMeterProvider. Using an appropriately named Meter from the accepted MeterProvider, instrumentation can then be built from the Meter's instruments. Each instrument is designed to make measurements of a particular type. Broadly, all instruments fall into two overlapping logical categories: asynchronous or synchronous, and int64 or float64. All synchronous instruments (Int64Counter, Int64UpDownCounter, Int64Histogram, Float64Counter, Float64UpDownCounter, and Float64Histogram) are used to measure the operation and performance of source code during the source code execution. These instruments only make measurements when the source code they instrument is run. All asynchronous instruments (Int64ObservableCounter, Int64ObservableUpDownCounter, Int64ObservableGauge, Float64ObservableCounter, Float64ObservableUpDownCounter, and Float64ObservableGauge) are used to measure metrics outside of the execution of source code. They are said to make "observations" via a callback function called once every measurement collection cycle. Each instrument is also grouped by the value type it measures. Either int64 or float64. The value being measured will dictate which instrument in these categories to use. Outside of these two broad categories, instruments are described by the function they are designed to serve. All Counters (Int64Counter, Float64Counter, Int64ObservableCounter, and Float64ObservableCounter) are designed to measure values that never decrease in value, but instead only incrementally increase in value. UpDownCounters (Int64UpDownCounter, Float64UpDownCounter, Int64ObservableUpDownCounter, and Float64ObservableUpDownCounter) on the other hand, are designed to measure values that can increase and decrease. When more information needs to be conveyed about all the synchronous measurements made during a collection cycle, a Histogram (Int64Histogram and Float64Histogram) should be used. Finally, when just the most recent measurement needs to be conveyed about an asynchronous measurement, a Gauge (Int64ObservableGauge and Float64ObservableGauge) should be used. See the OpenTelemetry documentation for more information about instruments and their intended use. OpenTelemetry defines an instrument name syntax that restricts what instrument names are allowed. Instrument names should ... To ensure compatibility with observability platforms, all instruments created need to conform to this syntax. Not all implementations of the API will validate these names; it is the caller's responsibility to ensure compliance. Measurements are made by recording values and information about the values with an instrument. How these measurements are recorded depends on the instrument. Measurements for synchronous instruments (Int64Counter, Int64UpDownCounter, Int64Histogram, Float64Counter, Float64UpDownCounter, and Float64Histogram) are recorded using the instrument methods directly.
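As a small sketch of synchronous instrument usage (using the global MeterProvider via the otel package; the meter name, instrument name, and attribute are illustrative):

    package main

    import (
        "context"
        "log"

        "go.opentelemetry.io/otel"
        "go.opentelemetry.io/otel/attribute"
        "go.opentelemetry.io/otel/metric"
    )

    func main() {
        ctx := context.Background()

        // Obtain a Meter from the globally registered MeterProvider.
        meter := otel.Meter("example.com/hypothetical/instrumentation")

        // A synchronous instrument: measurements are made inline as the code runs.
        requests, err := meter.Int64Counter("app.requests.handled")
        if err != nil {
            log.Fatal(err)
        }

        // Record an increment, optionally with attributes describing the measurement.
        requests.Add(ctx, 1, metric.WithAttributes(attribute.String("route", "/hello")))
    }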
All counter instruments have an Add method that is used to measure an increment value, and all histogram instruments have a Record method to measure a data point. Asynchronous instruments (Int64ObservableCounter, Int64ObservableUpDownCounter, Int64ObservableGauge, Float64ObservableCounter, Float64ObservableUpDownCounter, and Float64ObservableGauge) record measurements within a callback function. The callback is registered with the Meter, which ensures the callback is called once per collection cycle. A callback can be registered two ways: during the instrument's creation using an option, or later using the RegisterCallback method of the Meter that created the instrument. If the following criteria are met, an option (WithInt64Callback or WithFloat64Callback) can be used during the asynchronous instrument's creation to register a callback (Int64Callback or Float64Callback, respectively): If the criteria are not met, use the RegisterCallback method of the Meter that created the instrument to register a Callback. This package does not conform to the standard Go versioning policy; all of its interfaces may have methods added to them without a package major version bump. This non-standard API evolution could surprise an uninformed implementation author. They could unknowingly build their implementation in a way that would result in a runtime panic for their users that update to the new API. The API is designed to help inform an instrumentation author about this non-standard API evolution. It requires them to choose a default behavior for unimplemented interface methods. There are three behavior choices they can make: All interfaces in this API embed a corresponding interface from go.opentelemetry.io/otel/metric/embedded. If an author wants the default behavior of their implementations to be a compilation failure, signaling to their users they need to update to the latest version of that implementation, they need to embed the corresponding interface from go.opentelemetry.io/otel/metric/embedded in their implementation. For example, If an author wants the default behavior of their implementations to panic, they need to embed the API interface directly. This is not a recommended behavior as it could lead to publishing packages that contain runtime panics when users update other packages that use newer versions of go.opentelemetry.io/otel/metric. Finally, an author can embed another implementation in theirs. The embedded implementation will be used for methods not defined by the author. For example, an author who wants to default to silently dropping the call can use go.opentelemetry.io/otel/metric/noop: It is strongly recommended that authors only embed go.opentelemetry.io/otel/metric/noop if they choose this default behavior. That implementation is the only one OpenTelemetry authors can guarantee will fully implement all the API interfaces when a user updates their API.
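To ground the callback-registration option described above, a sketch of an asynchronous instrument whose callback is attached at creation time (the instrument name and the measured quantity are assumptions):

    package main

    import (
        "context"
        "log"
        "runtime"

        "go.opentelemetry.io/otel"
        "go.opentelemetry.io/otel/metric"
    )

    func main() {
        meter := otel.Meter("example.com/hypothetical/instrumentation")

        // An asynchronous instrument: the callback below is invoked once per
        // collection cycle and observes the current number of goroutines.
        _, err := meter.Int64ObservableGauge(
            "process.goroutines",
            metric.WithInt64Callback(func(_ context.Context, o metric.Int64Observer) error {
                o.Observe(int64(runtime.NumGoroutine()))
                return nil
            }),
        )
        if err != nil {
            log.Fatal(err)
        }

        // Nothing is exported with the default no-op global provider; register an
        // SDK MeterProvider (go.opentelemetry.io/otel/sdk/metric) to collect data.
    }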
Package cli provides a minimal framework for creating and organizing command line Go applications. cli is designed to be easy to understand and write; the simplest cli application can be written as follows: Of course this application does not do much, so let's make this an actual application:
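The code samples referenced above were not preserved; as a rough sketch assuming the urfave/cli v2-style API, a minimal application might look like this (the app name and greeting are made up):

    package main

    import (
        "fmt"
        "log"
        "os"

        "github.com/urfave/cli/v2" // assumed v2 import path
    )

    func main() {
        app := &cli.App{
            Name:  "greet",
            Usage: "say a greeting",
            Action: func(c *cli.Context) error {
                fmt.Println("Hello friend!")
                return nil
            },
        }

        if err := app.Run(os.Args); err != nil {
            log.Fatal(err)
        }
    }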
`grpc_middleware` is a collection of gRPC middleware packages: interceptors, helpers and tools. gRPC is a fantastic RPC middleware, which sees a lot of adoption in the Golang world. However, the upstream gRPC codebase is relatively bare bones. This package, and most of its child packages, provide commonly needed middleware for gRPC: client-side interceptors for retries, server-side interceptors for input validation and auth, functions for chaining said interceptors, metadata convenience methods and more. By default, gRPC doesn't allow one to have more than one interceptor on either the client or the server side. `grpc_middleware` provides convenient chaining methods: a simple way of turning multiple interceptors into a single interceptor. Here's an example for server chaining: These interceptors will be executed from left to right: logging, monitoring and auth. Here's an example for client side chaining: These interceptors will be executed from left to right: monitoring and then retry logic. The retry interceptor will call every interceptor that follows it whenever a retry happens. Implementing your own interceptor is pretty trivial: there are interfaces for that. But the interesting bit is exposing common data to handlers (and other middleware), similar to HTTP middleware design. For example, you may want to pass the identity of the caller from the auth interceptor all the way to the handling function. For example, a client-side interceptor for auth looks like: Unfortunately, it's not as easy for streaming RPCs. These have the `context.Context` embedded within the `grpc.ServerStream` object. To pass values through context, a wrapper (`WrappedServerStream`) is needed. For example:
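As a sketch of the server-side chaining described above (the logging and auth interceptors are hypothetical placeholders; grpc_middleware.ChainUnaryServer is the chaining helper the package exposes):

    package main

    import (
        "context"
        "log"
        "net"

        grpc_middleware "github.com/grpc-ecosystem/go-grpc-middleware"
        "google.golang.org/grpc"
    )

    // Two hypothetical unary interceptors; real ones would do logging, auth, etc.
    func loggingInterceptor(ctx context.Context, req interface{}, info *grpc.UnaryServerInfo, handler grpc.UnaryHandler) (interface{}, error) {
        log.Printf("call: %s", info.FullMethod)
        return handler(ctx, req)
    }

    func authInterceptor(ctx context.Context, req interface{}, info *grpc.UnaryServerInfo, handler grpc.UnaryHandler) (interface{}, error) {
        // ... verify credentials from metadata, possibly enriching ctx ...
        return handler(ctx, req)
    }

    func main() {
        lis, err := net.Listen("tcp", ":50051")
        if err != nil {
            log.Fatal(err)
        }

        // Chain the interceptors; they execute left to right: logging, then auth.
        srv := grpc.NewServer(
            grpc.UnaryInterceptor(grpc_middleware.ChainUnaryServer(
                loggingInterceptor,
                authInterceptor,
            )),
        )

        // ... register services on srv before serving ...
        if err := srv.Serve(lis); err != nil {
            log.Fatal(err)
        }
    }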
Package validator implements value validations for structs and individual fields based on tags. It can also handle Cross-Field and Cross-Struct validation for nested structs and has the ability to dive into arrays and maps of any type. Why not a better error message? Because this library intends for you to handle your own error messages. Why should I handle my own errors? Many reasons. We built an internationalized application and needed to know the field, and what validation failed so we could provide a localized error. Doing things this way is actually how the standard library does it; see the file.Open method here: The authors return type "error" to avoid the issue discussed in the following, where err is always != nil: Validator only returns nil or ValidationErrors as type error; so, in your code all you need to do is check if the error returned is not nil, and if it's not, type cast it to type ValidationErrors like so: err.(validator.ValidationErrors). Custom functions can be added. Example: Cross-Field Validation can be done via the following tags: If, however, some custom cross-field validation is required, it can be done using a custom validation. Why not just have cross-fields validation tags (i.e. only eqcsfield and not eqfield)? The reason is efficiency. If you want to check a field within the same struct, "eqfield" only has to find the field on the same struct (1 level). But, if we used "eqcsfield" it could be multiple levels down. Example: Multiple validators on a field will process in the order defined. Example: Bad Validator definitions are not handled by the library. Example: Baked In Cross-Field validation only compares fields on the same struct. If Cross-Field + Cross-Struct validation is needed you should implement your own custom validator. Comma (",") is the default separator of validation tags. If you wish to have a comma included within the parameter (i.e. excludesall=,) you will need to use the UTF-8 hex representation 0x2C, which is replaced in the code as a comma, so the above will become excludesall=0x2C. Pipe ("|") is the 'or' separator of validation tags. If you wish to have a pipe included within the parameter i.e. excludesall=| you will need to use the UTF-8 hex representation 0x7C, which is replaced in the code as a pipe, so the above will become excludesall=0x7C. Here is a list of the current built in validators: Tells the validation to skip this struct field; this is particularly handy in ignoring embedded structs from being validated. (Usage: -) This is the 'or' operator allowing multiple validators to be used and accepted. (Usage: rgb|rgba) <-- this would allow either rgb or rgba colors to be accepted. This can also be combined with 'and' for example ( Usage: omitempty,rgb|rgba) When a field that is a nested struct is encountered, and contains this flag, any validation on the nested struct will be run, but none of the nested struct fields will be validated. This is useful if inside of your program you know the struct will be valid, but need to verify that it has been assigned. NOTE: only "required" and "omitempty" can be used on a struct itself. Same as structonly tag except that any struct level validations will not run. Is a special tag without a validation function attached. It is used when a field is a Pointer, Interface or Invalid and you wish to validate that it exists. Example: if you want to ensure a bool exists, define the bool as a pointer and use exists; it will ensure there is a value. You couldn't use required, as it would fail when the bool was false.
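To make the separator escaping described above concrete, a small sketch of struct tags using the 0x2C hex representation and the 'or' combination (the struct and field names are made up; a v9-style constructor is assumed for brevity):

    package main

    import (
        "fmt"

        validator "gopkg.in/go-playground/validator.v9" // assumed v9 import path; the tags themselves are version-agnostic
    )

    // Demo shows escaping a separator inside a tag parameter and the 'or' operator.
    type Demo struct {
        // Disallow commas in the name; 0x2C is replaced by the library with ','.
        Name string `validate:"required,excludesall=0x2C"`
        // omitempty combined with an 'or' of two color formats.
        Color string `validate:"omitempty,rgb|rgba"`
    }

    func main() {
        v := validator.New()
        fmt.Println(v.Struct(Demo{Name: "a,b"}))           // fails excludesall
        fmt.Println(v.Struct(Demo{Name: "ok", Color: ""})) // <nil>: Color omitted when empty
    }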
exists will fail if the value is a Pointer, Interface or Invalid and is nil. Allows conditional validation, for example if a field is not set with a value (determined by the "required" validator) then other validation such as min or max won't run, but if a value is set validation will run. This tells the validator to dive into a slice, array or map and validate that level of the slice, array or map with the validation tags that follow. Multidimensional nesting is also supported; each level you wish to dive will require another dive tag. Example #1 Example #2 This validates that the value is not the data type's default zero value. For numbers ensures value is not zero. For strings ensures value is not "". For slices, maps, pointers, interfaces, channels and functions ensures the value is not nil. For numbers, len will ensure that the value is equal to the parameter given. For strings, it checks that the string length is exactly that number of characters. For slices, arrays, and maps, validates the number of items. For numbers, max will ensure that the value is less than or equal to the parameter given. For strings, it checks that the string length is at most that number of characters. For slices, arrays, and maps, validates the number of items. For numbers, min will ensure that the value is greater than or equal to the parameter given. For strings, it checks that the string length is at least that number of characters. For slices, arrays, and maps, validates the number of items. For strings & numbers, eq will ensure that the value is equal to the parameter given. For slices, arrays, and maps, validates the number of items. For strings & numbers, ne will ensure that the value is not equal to the parameter given. For slices, arrays, and maps, validates the number of items. For numbers, this will ensure that the value is greater than the parameter given. For strings, it checks that the string length is greater than that number of characters. For slices, arrays and maps it validates the number of items. Example #1 Example #2 (time.Time) For time.Time ensures the time value is greater than time.Now.UTC(). Same as 'min' above. Kept both to make terminology with 'len' easier. Example #1 Example #2 (time.Time) For time.Time ensures the time value is greater than or equal to time.Now.UTC(). For numbers, this will ensure that the value is less than the parameter given. For strings, it checks that the string length is less than that number of characters. For slices, arrays, and maps it validates the number of items. Example #1 Example #2 (time.Time) For time.Time ensures the time value is less than time.Now.UTC(). Same as 'max' above. Kept both to make terminology with 'len' easier. Example #1 Example #2 (time.Time) For time.Time ensures the time value is less than or equal to time.Now.UTC(). This will validate the field value against another field's value either within a struct or a passed-in field. Example #1: Example #2: Field Equals Another Field (relative) This does the same as eqfield except that it validates the field provided relative to the top level struct. This will validate the field value against another field's value either within a struct or a passed-in field. Examples: Field Does Not Equal Another Field (relative) This does the same as nefield except that it validates the field provided relative to the top level struct. Only valid for Numbers and time.Time types, this will validate the field value against another field's value either within a struct or a passed-in field.
Usage examples are for validation of a Start and End date: Example #1: Example #2: This does the same as gtfield except that it validates the field provided relative to the top level struct. Only valid for Numbers and time.Time types, this will validate the field value against another field's value either within a struct or a passed-in field. Usage examples are for validation of a Start and End date: Example #1: Example #2: This does the same as gtefield except that it validates the field provided relative to the top level struct. Only valid for Numbers and time.Time types, this will validate the field value against another field's value either within a struct or a passed-in field. Usage examples are for validation of a Start and End date: Example #1: Example #2: This does the same as ltfield except that it validates the field provided relative to the top level struct. Only valid for Numbers and time.Time types, this will validate the field value against another field's value either within a struct or a passed-in field. Usage examples are for validation of a Start and End date: Example #1: Example #2: This does the same as ltefield except that it validates the field provided relative to the top level struct. This validates that a string value contains alpha characters only This validates that a string value contains alphanumeric characters only This validates that a string value contains a basic numeric value. basic excludes exponents etc... This validates that a string value contains a valid hexadecimal. This validates that a string value contains a valid hex color including hashtag (#) This validates that a string value contains a valid rgb color This validates that a string value contains a valid rgba color This validates that a string value contains a valid hsl color This validates that a string value contains a valid hsla color This validates that a string value contains a valid email This may not conform to all possibilities of any rfc standard, but neither does any email provider accept all possibilities. This validates that a string value contains a valid url This will accept any url the golang request uri accepts but must contain a scheme, for example http:// or rtmp:// This validates that a string value contains a valid uri This will accept any uri the golang request uri accepts This validates that a string value contains a valid base64 value. Although an empty string is valid base64 this will report an empty string as an error; if you wish to accept an empty string as valid you can use this with the omitempty tag. This validates that a string value contains the substring value. This validates that a string value contains any Unicode code points in the substring value. This validates that a string value contains the supplied rune value. This validates that a string value does not contain the substring value. This validates that a string value does not contain any Unicode code points in the substring value. This validates that a string value does not contain the supplied rune value. This validates that a string value contains a valid isbn10 or isbn13 value. This validates that a string value contains a valid isbn10 value. This validates that a string value contains a valid isbn13 value. This validates that a string value contains a valid UUID. This validates that a string value contains a valid version 3 UUID. This validates that a string value contains a valid version 4 UUID. This validates that a string value contains a valid version 5 UUID.
This validates that a string value contains only ASCII characters. NOTE: if the string is blank, this validates as true. This validates that a string value contains only printable ASCII characters. NOTE: if the string is blank, this validates as true. This validates that a string value contains one or more multibyte characters. NOTE: if the string is blank, this validates as true. This validates that a string value contains a valid DataURI. NOTE: this will also validate that the data portion is valid base64 This validates that a string value contains a valid latitude. This validates that a string value contains a valid longitude. This validates that a string value contains a valid U.S. Social Security Number. This validates that a string value contains a valid IP Address. This validates that a string value contains a valid v4 IP Address. This validates that a string value contains a valid v6 IP Address. This validates that a string value contains a valid CIDR Address. This validates that a string value contains a valid v4 CIDR Address. This validates that a string value contains a valid v6 CIDR Address. This validates that a string value contains a valid resolvable TCP Address. This validates that a string value contains a valid resolvable v4 TCP Address. This validates that a string value contains a valid resolvable v6 TCP Address. This validates that a string value contains a valid resolvable UDP Address. This validates that a string value contains a valid resolvable v4 UDP Address. This validates that a string value contains a valid resolvable v6 UDP Address. This validates that a string value contains a valid resolvable IP Address. This validates that a string value contains a valid resolvable v4 IP Address. This validates that a string value contains a valid resolvable v6 IP Address. This validates that a string value contains a valid Unix Address. This validates that a string value contains a valid MAC Address. Note: See Go's ParseMAC for accepted formats and types: NOTE: When returning an error, the tag returned in "FieldError" will be the alias tag unless the dive tag is part of the alias. Everything after the dive tag is not reported as the alias tag. Also, the "ActualTag" in the before case will be the actual tag within the alias that failed. Here is a list of the current built in alias tags: Validator notes: This package panics when bad input is provided, this is by design, bad code like that should not make it to production.
Gonum is a set of packages designed to make writing numerical and scientific algorithms productive, performant, and scalable. Gonum contains libraries for matrices and linear algebra; statistics, probability distributions, and sampling; tools for function differentiation, integration, and optimization; network creation and analysis; and more.
Package bbolt implements a low-level key/value store in pure Go. It supports fully serializable transactions, ACID semantics, and lock-free MVCC with multiple readers and a single writer. Bolt can be used for projects that want a simple data store without the need to add large dependencies such as Postgres or MySQL. Bolt is a single-level, zero-copy, B+tree data store. This means that Bolt is optimized for fast read access and does not require recovery in the event of a system crash. Transactions which have not finished committing will simply be rolled back in the event of a crash. The design of Bolt is based on Howard Chu's LMDB database project. Bolt currently works on Windows, Mac OS X, and Linux. There are only a few types in Bolt: DB, Bucket, Tx, and Cursor. The DB is a collection of buckets and is represented by a single file on disk. A bucket is a collection of unique keys that are associated with values. Transactions provide either read-only or read-write access to the database. Read-only transactions can retrieve key/value pairs and can use Cursors to iterate over the dataset sequentially. Read-write transactions can create and delete buckets and can insert and remove keys. Only one read-write transaction is allowed at a time. The database uses a read-only, memory-mapped data file to ensure that applications cannot corrupt the database, however, this means that keys and values returned from Bolt cannot be changed. Writing to a read-only byte slice will cause Go to panic. Keys and values retrieved from the database are only valid for the life of the transaction. When used outside the transaction, these byte slices can point to different data or can point to invalid memory which will cause a panic.
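A short sketch of the Bucket/Tx API described above (the bucket name, key, and file path are made up; go.etcd.io/bbolt is assumed as the import path):

    package main

    import (
        "fmt"
        "log"

        bolt "go.etcd.io/bbolt"
    )

    func main() {
        // The DB is a single file on disk.
        db, err := bolt.Open("example.db", 0600, nil)
        if err != nil {
            log.Fatal(err)
        }
        defer db.Close()

        // Read-write transaction: create a bucket and store a key/value pair.
        if err := db.Update(func(tx *bolt.Tx) error {
            b, err := tx.CreateBucketIfNotExists([]byte("answers"))
            if err != nil {
                return err
            }
            return b.Put([]byte("life"), []byte("42"))
        }); err != nil {
            log.Fatal(err)
        }

        // Read-only transaction: values are only valid for the life of the
        // transaction, so copy or use them before it ends.
        if err := db.View(func(tx *bolt.Tx) error {
            v := tx.Bucket([]byte("answers")).Get([]byte("life"))
            fmt.Printf("life = %s\n", v)
            return nil
        }); err != nil {
            log.Fatal(err)
        }
    }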
Package validator implements value validations for structs and individual fields based on tags. It can also handle Cross-Field and Cross-Struct validation for nested structs and has the ability to dive into arrays and maps of any type. See more examples at https://github.com/go-playground/validator/tree/v9/_examples Doing things this way is actually how the standard library does it; see the file.Open method here: The authors return type "error" to avoid the issue discussed in the following, where err is always != nil: Validator returns only InvalidValidationError for bad validation input, nil, or ValidationErrors as type error; so, in your code all you need to do is check if the error returned is not nil, and if it's not, check if the error is InvalidValidationError (if necessary, most of the time it isn't) and type cast it to type ValidationErrors like so: err.(validator.ValidationErrors). Custom Validation functions can be added. Example: Cross-Field Validation can be done via the following tags: If, however, some custom cross-field validation is required, it can be done using a custom validation. Why not just have cross-fields validation tags (i.e. only eqcsfield and not eqfield)? The reason is efficiency. If you want to check a field within the same struct, "eqfield" only has to find the field on the same struct (1 level). But, if we used "eqcsfield" it could be multiple levels down. Example: Multiple validators on a field will process in the order defined. Example: Bad Validator definitions are not handled by the library. Example: Baked In Cross-Field validation only compares fields on the same struct. If Cross-Field + Cross-Struct validation is needed you should implement your own custom validator. Comma (",") is the default separator of validation tags. If you wish to have a comma included within the parameter (i.e. excludesall=,) you will need to use the UTF-8 hex representation 0x2C, which is replaced in the code as a comma, so the above will become excludesall=0x2C. Pipe ("|") is the 'or' validation tag separator. If you wish to have a pipe included within the parameter i.e. excludesall=| you will need to use the UTF-8 hex representation 0x7C, which is replaced in the code as a pipe, so the above will become excludesall=0x7C. Here is a list of the current built in validators: Tells the validation to skip this struct field; this is particularly handy in ignoring embedded structs from being validated. (Usage: -) This is the 'or' operator allowing multiple validators to be used and accepted. (Usage: rgb|rgba) <-- this would allow either rgb or rgba colors to be accepted. This can also be combined with 'and' for example ( Usage: omitempty,rgb|rgba) When a field that is a nested struct is encountered, and contains this flag, any validation on the nested struct will be run, but none of the nested struct fields will be validated. This is useful if inside of your program you know the struct will be valid, but need to verify that it has been assigned. NOTE: only "required" and "omitempty" can be used on a struct itself. Same as structonly tag except that any struct level validations will not run. Allows conditional validation, for example if a field is not set with a value (determined by the "required" validator) then other validation such as min or max won't run, but if a value is set validation will run. This tells the validator to dive into a slice, array or map and validate that level of the slice, array or map with the validation tags that follow.
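A short sketch of the error-handling pattern described above, using the v9-style API (the struct and field names are made up):

    package main

    import (
        "fmt"

        validator "gopkg.in/go-playground/validator.v9" // assumed v9 import path
    )

    type User struct {
        Name  string `validate:"required"`
        Email string `validate:"required,email"`
    }

    func main() {
        validate := validator.New()

        err := validate.Struct(User{Email: "not-an-email"})
        if err != nil {
            // Bad input to the validator itself (e.g. a nil pointer) is reported
            // as *validator.InvalidValidationError.
            if _, ok := err.(*validator.InvalidValidationError); ok {
                fmt.Println("invalid argument passed to Struct:", err)
                return
            }

            // Otherwise the error is a ValidationErrors slice of FieldError values.
            for _, fe := range err.(validator.ValidationErrors) {
                fmt.Printf("field %s failed on the '%s' tag\n", fe.Namespace(), fe.Tag())
            }
        }
    }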
Multidimensional nesting is also supported; each level you wish to dive will require another dive tag. dive has some sub-tags, 'keys' & 'endkeys'; please see the Keys & EndKeys section just below. Example #1 Example #2 Keys & EndKeys These are to be used together directly after the dive tag and tell the validator that anything between 'keys' and 'endkeys' applies to the keys of a map and not the values; think of it like the 'dive' tag, but for map keys instead of values. Multidimensional nesting is also supported; each level you wish to validate will require another 'keys' and 'endkeys' tag. These tags are only valid for maps. Example #1 Example #2 This validates that the value is not the data type's default zero value. For numbers ensures value is not zero. For strings ensures value is not "". For slices, maps, pointers, interfaces, channels and functions ensures the value is not nil. The field under validation must be present and not empty only if any of the other specified fields are present. For strings ensures value is not "". For slices, maps, pointers, interfaces, channels and functions ensures the value is not nil. Examples: The field under validation must be present and not empty only if all of the other specified fields are present. For strings ensures value is not "". For slices, maps, pointers, interfaces, channels and functions ensures the value is not nil. Example: The field under validation must be present and not empty only when any of the other specified fields are not present. For strings ensures value is not "". For slices, maps, pointers, interfaces, channels and functions ensures the value is not nil. Examples: The field under validation must be present and not empty only when all of the other specified fields are not present. For strings ensures value is not "". For slices, maps, pointers, interfaces, channels and functions ensures the value is not nil. Example: This validates that the value is the default value and is almost the opposite of required. For numbers, length will ensure that the value is equal to the parameter given. For strings, it checks that the string length is exactly that number of characters. For slices, arrays, and maps, validates the number of items. For numbers, max will ensure that the value is less than or equal to the parameter given. For strings, it checks that the string length is at most that number of characters. For slices, arrays, and maps, validates the number of items. For numbers, min will ensure that the value is greater than or equal to the parameter given. For strings, it checks that the string length is at least that number of characters. For slices, arrays, and maps, validates the number of items. For strings & numbers, eq will ensure that the value is equal to the parameter given. For slices, arrays, and maps, validates the number of items. For strings & numbers, ne will ensure that the value is not equal to the parameter given. For slices, arrays, and maps, validates the number of items. For strings, ints, and uints, oneof will ensure that the value is one of the values in the parameter. The parameter should be a list of values separated by whitespace. Values may be strings or numbers. For numbers, this will ensure that the value is greater than the parameter given. For strings, it checks that the string length is greater than that number of characters. For slices, arrays and maps it validates the number of items. Example #1 Example #2 (time.Time) For time.Time ensures the time value is greater than time.Now.UTC().
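To illustrate the 'required with/without' family described above, a small sketch with made-up fields (assuming a validator version that includes these tags, such as the github.com/go-playground/validator/v10 module):

    package main

    import (
        "fmt"

        "github.com/go-playground/validator/v10" // assumed import path
    )

    // ContactPreferences is a made-up struct: if Email is set, EmailVerified must
    // also be set, and Phone is required only when Email is absent.
    type ContactPreferences struct {
        Email         string `validate:"omitempty,email"`
        EmailVerified *bool  `validate:"required_with=Email"`
        Phone         string `validate:"required_without=Email"`
    }

    func main() {
        v := validator.New()

        // Email present but EmailVerified missing: required_with fails.
        fmt.Println(v.Struct(ContactPreferences{Email: "user@example.com"}))

        // No Email: Phone becomes required and is set, EmailVerified is not required.
        fmt.Println(v.Struct(ContactPreferences{Phone: "+1-555-0100"}))
    }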
Same as 'min' above. Kept both to make terminology with 'len' easier. Example #1 Example #2 (time.Time) For time.Time ensures the time value is greater than or equal to time.Now.UTC(). For numbers, this will ensure that the value is less than the parameter given. For strings, it checks that the string length is less than that number of characters. For slices, arrays, and maps it validates the number of items. Example #1 Example #2 (time.Time) For time.Time ensures the time value is less than time.Now.UTC(). Same as 'max' above. Kept both to make terminology with 'len' easier. Example #1 Example #2 (time.Time) For time.Time ensures the time value is less than or equal to time.Now.UTC(). This will validate the field value against another field's value either within a struct or a passed-in field. Example #1: Example #2: Field Equals Another Field (relative) This does the same as eqfield except that it validates the field provided relative to the top level struct. This will validate the field value against another field's value either within a struct or a passed-in field. Examples: Field Does Not Equal Another Field (relative) This does the same as nefield except that it validates the field provided relative to the top level struct. Only valid for Numbers and time.Time types, this will validate the field value against another field's value either within a struct or a passed-in field. Usage examples are for validation of a Start and End date: Example #1: Example #2: This does the same as gtfield except that it validates the field provided relative to the top level struct. Only valid for Numbers and time.Time types, this will validate the field value against another field's value either within a struct or a passed-in field. Usage examples are for validation of a Start and End date: Example #1: Example #2: This does the same as gtefield except that it validates the field provided relative to the top level struct. Only valid for Numbers and time.Time types, this will validate the field value against another field's value either within a struct or a passed-in field. Usage examples are for validation of a Start and End date: Example #1: Example #2: This does the same as ltfield except that it validates the field provided relative to the top level struct. Only valid for Numbers and time.Time types, this will validate the field value against another field's value either within a struct or a passed-in field. Usage examples are for validation of a Start and End date: Example #1: Example #2: This does the same as ltefield except that it validates the field provided relative to the top level struct. This does the same as contains except for struct fields. It should only be used with string types. See the behavior of reflect.Value.String() for behavior on other types. This does the same as excludes except for struct fields. It should only be used with string types. See the behavior of reflect.Value.String() for behavior on other types. For arrays & slices, unique will ensure that there are no duplicates. For maps, unique will ensure that there are no duplicate values. For slices of structs, unique will ensure that there are no duplicate values in a field of the struct specified via a parameter.
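A small sketch of the 'unique' variants described above (the types and field names are assumptions):

    package main

    import (
        "fmt"

        "github.com/go-playground/validator/v10" // assumed import path
    )

    type Member struct {
        Email string
    }

    type Team struct {
        // No duplicate tag strings allowed in the slice.
        Tags []string `validate:"unique"`
        // For a slice of structs, the parameter names the field that must be unique.
        Members []Member `validate:"unique=Email"`
    }

    func main() {
        v := validator.New()
        err := v.Struct(Team{
            Tags:    []string{"go", "go"},
            Members: []Member{{Email: "a@example.com"}, {Email: "a@example.com"}},
        })
        fmt.Println(err) // both fields fail the 'unique' tag
    }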
This validates that a string value contains ASCII alpha characters only This validates that a string value contains ASCII alphanumeric characters only This validates that a string value contains unicode alpha characters only This validates that a string value contains unicode alphanumeric characters only This validates that a string value contains a basic numeric value. basic excludes exponents etc...; for integers or floats it returns true. This validates that a string value contains a valid hexadecimal. This validates that a string value contains a valid hex color including hashtag (#) This validates that a string value contains a valid rgb color This validates that a string value contains a valid rgba color This validates that a string value contains a valid hsl color This validates that a string value contains a valid hsla color This validates that a string value contains a valid email This may not conform to all possibilities of any rfc standard, but neither does any email provider accept all possibilities. This validates that a string value contains a valid file path and that the file exists on the machine. This is done using os.Stat, which is a platform independent function. This validates that a string value contains a valid url This will accept any url the golang request uri accepts but must contain a scheme, for example http:// or rtmp:// This validates that a string value contains a valid uri This will accept any uri the golang request uri accepts This validates that a string value contains a valid URN according to the RFC 2141 spec. This validates that a string value contains a valid base64 value. Although an empty string is valid base64 this will report an empty string as an error; if you wish to accept an empty string as valid you can use this with the omitempty tag. This validates that a string value contains a valid base64 URL safe value according to the RFC4648 spec. Although an empty string is a valid base64 URL safe value, this will report an empty string as an error; if you wish to accept an empty string as valid you can use this with the omitempty tag. This validates that a string value contains a valid bitcoin address. The format of the string is checked to ensure it matches one of the supported formats (P2PKH, P2SH) and performs checksum validation. Bitcoin Bech32 Address (segwit) This validates that a string value contains a valid bitcoin Bech32 address as defined by bip-0173 (https://github.com/bitcoin/bips/blob/master/bip-0173.mediawiki) Special thanks to Pieter Wuille for providing reference implementations. This validates that a string value contains a valid ethereum address. The format of the string is checked to ensure it matches the standard Ethereum address format. Full validation is blocked by https://github.com/golang/crypto/pull/28 This validates that a string value contains the substring value. This validates that a string value contains any Unicode code points in the substring value. This validates that a string value contains the supplied rune value. This validates that a string value does not contain the substring value. This validates that a string value does not contain any Unicode code points in the substring value. This validates that a string value does not contain the supplied rune value. This validates that a string value starts with the supplied string value This validates that a string value ends with the supplied string value This validates that a string value contains a valid isbn10 or isbn13 value.
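Single values can be checked against the string-format validators listed above with Var; a brief sketch (the values are arbitrary, and the v9 import path is assumed):

    package main

    import (
        "fmt"

        validator "gopkg.in/go-playground/validator.v9" // assumed v9 import path
    )

    func main() {
        v := validator.New()

        // Validate standalone values against format tags.
        fmt.Println(v.Var("user@example.com", "required,email")) // <nil>
        fmt.Println(v.Var("not a url", "url"))                   // fails the 'url' tag
        fmt.Println(v.Var("#1fb9de", "hexcolor"))                // <nil>
        fmt.Println(v.Var("", "omitempty,base64"))               // <nil>: empty value skipped
    }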
This validates that a string value contains a valid isbn10 value. This validates that a string value contains a valid isbn13 value. This validates that a string value contains a valid UUID. Uppercase UUID values will not pass - use `uuid_rfc4122` instead. This validates that a string value contains a valid version 3 UUID. Uppercase UUID values will not pass - use `uuid3_rfc4122` instead. This validates that a string value contains a valid version 4 UUID. Uppercase UUID values will not pass - use `uuid4_rfc4122` instead. This validates that a string value contains a valid version 5 UUID. Uppercase UUID values will not pass - use `uuid5_rfc4122` instead. This validates that a string value contains only ASCII characters. NOTE: if the string is blank, this validates as true. This validates that a string value contains only printable ASCII characters. NOTE: if the string is blank, this validates as true. This validates that a string value contains one or more multibyte characters. NOTE: if the string is blank, this validates as true. This validates that a string value contains a valid DataURI. NOTE: this will also validate that the data portion is valid base64 This validates that a string value contains a valid latitude. This validates that a string value contains a valid longitude. This validates that a string value contains a valid U.S. Social Security Number. This validates that a string value contains a valid IP Address. This validates that a string value contains a valid v4 IP Address. This validates that a string value contains a valid v6 IP Address. This validates that a string value contains a valid CIDR Address. This validates that a string value contains a valid v4 CIDR Address. This validates that a string value contains a valid v6 CIDR Address. This validates that a string value contains a valid resolvable TCP Address. This validates that a string value contains a valid resolvable v4 TCP Address. This validates that a string value contains a valid resolvable v6 TCP Address. This validates that a string value contains a valid resolvable UDP Address. This validates that a string value contains a valid resolvable v4 UDP Address. This validates that a string value contains a valid resolvable v6 UDP Address. This validates that a string value contains a valid resolvable IP Address. This validates that a string value contains a valid resolvable v4 IP Address. This validates that a string value contains a valid resolvable v6 IP Address. This validates that a string value contains a valid Unix Address. This validates that a string value contains a valid MAC Address. Note: See Go's ParseMAC for accepted formats and types: This validates that a string value is a valid Hostname according to RFC 952 https://tools.ietf.org/html/rfc952 This validates that a string value is a valid Hostname according to RFC 1123 https://tools.ietf.org/html/rfc1123 Fully Qualified Domain Name (FQDN) This validates that a string value contains a valid FQDN. This validates that a string value appears to be an HTML element tag including those described at https://developer.mozilla.org/en-US/docs/Web/HTML/Element This validates that a string value is a proper character reference in decimal or hexadecimal format This validates that a string value is percent-encoded (URL encoded) according to https://tools.ietf.org/html/rfc3986#section-2.1 This validates that a string value contains a valid directory and that it exists on the machine. This is done using os.Stat, which is a platform independent function.
NOTE: When returning an error, the tag returned in "FieldError" will be the alias tag unless the dive tag is part of the alias. Everything after the dive tag is not reported as the alias tag. Also, the "ActualTag" in the before case will be the actual tag within the alias that failed. Here is a list of the current built in alias tags: Validator notes: A collection of validation rules that are frequently needed but are more complex than the ones found in the baked in validators. A non standard validator must be registered manually like you would with your own custom validation functions. Example of registration and use: Here is a list of the current non standard validators: This package panics when bad input is provided, this is by design, bad code like that should not make it to production.
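The alias registration example referenced above was not preserved; a rough sketch of defining an alias tag with the v9-style API follows (the 'handle' alias name and its rules are made up):

    package main

    import (
        "fmt"

        validator "gopkg.in/go-playground/validator.v9" // assumed v9 import path
    )

    type Account struct {
        // 'handle' is a made-up alias expanded to the rules registered below.
        Username string `validate:"handle"`
    }

    func main() {
        v := validator.New()

        // An alias bundles several tags under one name. When it fails, the
        // FieldError's Tag() reports the alias while ActualTag() reports the
        // underlying tag that actually failed.
        v.RegisterAlias("handle", "required,alphanum,min=3,max=16")

        err := v.Struct(Account{Username: "no spaces allowed"})
        if err != nil {
            for _, fe := range err.(validator.ValidationErrors) {
                fmt.Printf("tag=%s actual=%s\n", fe.Tag(), fe.ActualTag())
            }
        }
    }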
Package btcec implements support for the elliptic curves needed for bitcoin. Bitcoin uses elliptic curve cryptography using Koblitz curves (specifically secp256k1) for cryptographic functions. See http://www.secg.org/collateral/sec2_final.pdf for details on the standard. This package provides the data structures and functions implementing the crypto/elliptic Curve interface in order to permit using these curves with the standard crypto/ecdsa package provided with Go. Helper functionality is provided to parse signatures and public keys from standard formats. It was designed for use with btcd, but should be general enough for other uses of elliptic curve crypto. It was originally based on some initial work by ThePiachu, but has significantly diverged since then.
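As a rough sketch of the signing helpers, assuming the original btcec v1-style API (the private key bytes and message below are fabricated placeholders, for illustration only):

    package main

    import (
        "crypto/sha256"
        "encoding/hex"
        "fmt"
        "log"

        "github.com/btcsuite/btcd/btcec" // assumed v1-style import path and API
    )

    func main() {
        // A fabricated private key for illustration only; never hard-code real keys.
        pkBytes, err := hex.DecodeString("22a47fa09a223f2aa079edf85a7c2d4f87" +
            "20ee63e502ee2869afab7de234b80c")
        if err != nil {
            log.Fatal(err)
        }
        privKey, pubKey := btcec.PrivKeyFromBytes(btcec.S256(), pkBytes)

        // Hash the message and sign the digest.
        hash := sha256.Sum256([]byte("test message"))
        sig, err := privKey.Sign(hash[:])
        if err != nil {
            log.Fatal(err)
        }

        // Verify the signature against the public key.
        fmt.Println("verified:", sig.Verify(hash[:], pubKey))
    }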
A highly extensible git implementation in pure Go. go-git aims to reach the completeness of libgit2 or jgit; it currently covers the majority of the plumbing read operations and some of the main write operations, but lacks the main porcelain operations such as merges. It is highly extensible: we have been following the open/closed principle in its design to facilitate extensions, focusing our efforts mainly on the persistence of objects.
Ginkgo is a testing framework for Go designed to help you write expressive tests. https://github.com/onsi/ginkgo MIT-Licensed The godoc documentation outlines Ginkgo's API. Since Ginkgo is a Domain-Specific Language it is important to build a mental model for Ginkgo - the narrative documentation at https://onsi.github.io/ginkgo/ is designed to help you do that. You should start there - even a brief skim will be helpful. At minimum you should skim through the https://onsi.github.io/ginkgo/#getting-started chapter. Ginkgo's is best paired with the Gomega matcher library: https://github.com/onsi/gomega You can run Ginkgo specs with go test - however we recommend using the ginkgo cli. It enables functionality that go test does not (especially running suites in parallel). You can learn more at https://onsi.github.io/ginkgo/#ginkgo-cli-overview or by running 'ginkgo help'.
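A minimal spec sketch pairing Ginkgo with Gomega, assuming the v2 import path and a `go test` entry point (the suite and spec names are hypothetical):

```go
package cart_test

import (
	"testing"

	. "github.com/onsi/ginkgo/v2" // assumed v2 import path
	. "github.com/onsi/gomega"
)

// TestCart wires Ginkgo into `go test`; the ginkgo CLI can also run it.
func TestCart(t *testing.T) {
	RegisterFailHandler(Fail)
	RunSpecs(t, "Cart Suite")
}

var _ = Describe("Cart", func() {
	var items []string

	BeforeEach(func() {
		items = []string{"apple"}
	})

	It("adds items", func() {
		items = append(items, "pear")
		Expect(items).To(HaveLen(2))
		Expect(items).To(ContainElement("pear"))
	})
})
```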
Package controllerruntime provides tools to construct Kubernetes-style controllers that manipulate both Kubernetes CRDs and aggregated/built-in Kubernetes APIs. It defines easy helpers for the common use cases when building CRDs, built on top of customizable layers of abstraction. Common cases should be easy, and uncommon cases should be possible. In general, controller-runtime tries to guide users towards Kubernetes controller best-practices. The main entrypoint for controller-runtime is this root package, which contains all of the common types needed to get started building controllers: The examples in this package walk through a basic controller setup. The kubebuilder book (https://book.kubebuilder.io) has some more in-depth walkthroughs. controller-runtime favors structs with sane defaults over constructors, so it's fairly common to see structs being used directly in controller-runtime. A brief-ish walkthrough of the layout of this library can be found below. Each package contains more information about how to use it. Frequently asked questions about using controller-runtime and designing controllers can be found at https://github.com/kubernetes-sigs/controller-runtime/blob/main/FAQ.md. Every controller and webhook is ultimately run by a Manager (pkg/manager). A manager is responsible for running controllers and webhooks, and setting up common dependencies, like shared caches and clients, as well as managing leader election (pkg/leaderelection). Managers are generally configured to gracefully shut down controllers on pod termination by wiring up a signal handler (pkg/manager/signals). Controllers (pkg/controller) use events (pkg/event) to eventually trigger reconcile requests. They may be constructed manually, but are often constructed with a Builder (pkg/builder), which eases the wiring of event sources (pkg/source), like Kubernetes API object changes, to event handlers (pkg/handler), like "enqueue a reconcile request for the object owner". Predicates (pkg/predicate) can be used to filter which events actually trigger reconciles. There are pre-written utilities for the common cases, and interfaces and helpers for advanced cases. Controller logic is implemented in terms of Reconcilers (pkg/reconcile). A Reconciler implements a function which takes a reconcile Request containing the name and namespace of the object to reconcile, reconciles the object, and returns a Response or an error indicating whether to requeue for a second round of processing. Reconcilers use Clients (pkg/client) to access API objects. The default client provided by the manager reads from a local shared cache (pkg/cache) and writes directly to the API server, but clients can be constructed that only talk to the API server, without a cache. The Cache will auto-populate with watched objects, as well as when other structured objects are requested. The default split client does not promise to invalidate the cache during writes (nor does it promise sequential create/get coherence), and code should not assume a get immediately following a create/update will return the updated resource. Caches may also have indexes, which can be created via a FieldIndexer (pkg/client) obtained from the manager. Indexes can be used to quickly and easily look up all objects with certain fields set. Reconcilers may retrieve event recorders (pkg/recorder) to emit events using the manager.
Clients, Caches, and many other things in Kubernetes use Schemes (pkg/scheme) to associate Go types to Kubernetes API Kinds (Group-Version-Kinds, to be specific). Similarly, webhooks (pkg/webhook/admission) may be implemented directly, but are often constructed using a builder (pkg/webhook/admission/builder). They are run via a server (pkg/webhook) which is managed by a Manager. Logging (pkg/log) in controller-runtime is done via structured logs, using a set of logging interfaces called logr (https://pkg.go.dev/github.com/go-logr/logr). While controller-runtime provides easy setup for using Zap (https://go.uber.org/zap, pkg/log/zap), you can provide any implementation of logr as the base logger for controller-runtime. Metrics (pkg/metrics) provided by controller-runtime are registered into a controller-runtime-specific Prometheus metrics registry. The manager can serve these via an HTTP endpoint, and additional metrics may be registered to this Registry as normal. You can easily build integration and unit tests for your controllers and webhooks using the test Environment (pkg/envtest). This will automatically stand up a copy of etcd and kube-apiserver, and provide the correct options to connect to the API server. It's designed to work well with the Ginkgo testing framework, but should work with any testing setup. This example creates a simple application Controller that is configured for ReplicaSets and Pods. * Create a new application for ReplicaSets that manages Pods owned by the ReplicaSet and calls into ReplicaSetReconciler. * Start the application. This example creates a simple application Controller that is configured for ExampleCRDWithConfigMapRef CRD. Any change in the configMap referenced in this Custom Resource will cause the re-reconcile of the parent ExampleCRDWithConfigMapRef due to the implementation of the .Watches method of "sigs.k8s.io/controller-runtime/pkg/builder".Builder. This example creates a simple application Controller that is configured for ReplicaSets and Pods. This application controller will be running leader election with the provided configuration in the manager options. If leader election configuration is not provided, the controller runs leader election with default values. Default values taken from: https://github.com/kubernetes/component-base/blob/master/config/v1alpha1/defaults.go * defaultLeaseDuration = 15 * time.Second * defaultRenewDeadline = 10 * time.Second * defaultRetryPeriod = 2 * time.Second * Create a new application for ReplicaSets that manages Pods owned by the ReplicaSet and calls into ReplicaSetReconciler. * Start the application.
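A hedged sketch of the ReplicaSet/Pod controller described in the example above, assuming recent controller-runtime and k8s.io/api module versions (the reconcile body is a placeholder):

```go
package main

import (
	"context"
	"os"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	ctrl "sigs.k8s.io/controller-runtime"
	"sigs.k8s.io/controller-runtime/pkg/client"
)

// ReplicaSetReconciler reconciles ReplicaSets and the Pods they own.
type ReplicaSetReconciler struct {
	client.Client
}

func (r *ReplicaSetReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
	var rs appsv1.ReplicaSet
	if err := r.Get(ctx, req.NamespacedName, &rs); err != nil {
		// Ignore not-found errors for simplicity in this sketch.
		return ctrl.Result{}, client.IgnoreNotFound(err)
	}
	// ... reconcile the ReplicaSet here ...
	return ctrl.Result{}, nil
}

func main() {
	// The manager sets up shared caches, clients, and leader election.
	mgr, err := ctrl.NewManager(ctrl.GetConfigOrDie(), ctrl.Options{})
	if err != nil {
		os.Exit(1)
	}

	// The builder wires watches on ReplicaSets and their owned Pods to the reconciler.
	if err := ctrl.NewControllerManagedBy(mgr).
		For(&appsv1.ReplicaSet{}).
		Owns(&corev1.Pod{}).
		Complete(&ReplicaSetReconciler{Client: mgr.GetClient()}); err != nil {
		os.Exit(1)
	}

	// Start the application with graceful shutdown on pod termination signals.
	if err := mgr.Start(ctrl.SetupSignalHandler()); err != nil {
		os.Exit(1)
	}
}
```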
Package amqp is an AMQP 0.9.1 client with RabbitMQ extensions. Understand the AMQP 0.9.1 messaging model by reviewing these links first. Much of the terminology in this library directly relates to AMQP concepts. Most other broker clients publish to queues, but in AMQP, clients publish to Exchanges instead. AMQP is programmable, meaning that both the producers and consumers agree on the configuration of the broker, instead of requiring an operator or system configuration that declares the logical topology in the broker. The routing between producers and consumer queues is via Bindings. These bindings form the logical topology of the broker. In this library, a message sent from a publisher is called a "Publishing" and a message received by a consumer is called a "Delivery". The fields of Publishings and Deliveries are close but not exact mappings to the underlying wire format to maintain stronger types. Many other libraries will combine message properties with message headers. In this library, the message's well-known properties are strongly typed fields on the Publishings and Deliveries, whereas the user-defined headers are in the Headers field. The method naming closely matches the protocol's method name with positional parameters mapping to named protocol message fields. The motivation here is to present a comprehensive view over all possible interactions with the server. Generally, methods that map to protocol methods of the "basic" class will be elided in this interface, and "select" methods of various channel mode selectors will be elided, for example Channel.Confirm and Channel.Tx. The library is intentionally designed to be synchronous, where responses for each protocol message are required to be received in an RPC manner. Some methods have a noWait parameter like Channel.QueueDeclare, and some methods are asynchronous like Channel.Publish. The error values should still be checked for these methods as they will indicate IO failures like when the underlying connection closes. Clients of this library may be interested in receiving some of the protocol messages other than Deliveries, like basic.ack methods while a channel is in confirm mode. The Notify* methods with Connection and Channel receivers model the pattern of asynchronous events like closes due to exceptions, or messages that are sent out of band from an RPC call like basic.ack or basic.flow. Any asynchronous events, including Deliveries and Publishings, must always have a receiver until the corresponding chans are closed. Without asynchronous receivers, the synchronous methods will block. It's important as a client to an AMQP topology to ensure the state of the broker matches your expectations. For both publish and consume use cases, make sure you declare the queues, exchanges and bindings you expect to exist prior to calling Channel.Publish or Channel.Consume. SSL/TLS - Secure connections: When Dial encounters an amqps:// scheme, it will use the zero value of a tls.Config. This will only perform server certificate and host verification. Use DialTLS when you wish to provide a client certificate (recommended), include a private certificate authority's certificate in the cert chain for server validity, or run insecure by not verifying the server certificate. For anything more custom, dial your own connection. DialTLS will use the provided tls.Config when it encounters an amqps:// scheme and will dial a plain connection when it encounters an amqp:// scheme.
SSL/TLS in RabbitMQ is documented here: http://www.rabbitmq.com/ssl.html This exports a Session object that wraps this library. It automatically reconnects when the connection fails, and blocks all pushes until the connection succeeds. It also confirms every outgoing message, so none are lost. It doesn't automatically ack each message, but leaves that to the parent process, since it is usage-dependent. Try running this in one terminal, and `rabbitmq-server` in another. Stop & restart RabbitMQ to see how the queue reacts.
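A minimal publish/consume sketch of the synchronous declare-then-use flow described above. The import path is an assumption; github.com/rabbitmq/amqp091-go and streadway/amqp expose the same API, and the queue name and URL here are placeholders:

```go
package main

import (
	"log"

	amqp "github.com/rabbitmq/amqp091-go" // assumed import path
)

func main() {
	// Dial, open a channel, and declare the topology you expect to exist
	// before publishing or consuming.
	conn, err := amqp.Dial("amqp://guest:guest@localhost:5672/")
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	ch, err := conn.Channel()
	if err != nil {
		log.Fatal(err)
	}
	defer ch.Close()

	q, err := ch.QueueDeclare("tasks", true, false, false, false, nil)
	if err != nil {
		log.Fatal(err)
	}

	// Publish to the default exchange; the routing key is the queue name.
	err = ch.Publish("", q.Name, false, false, amqp.Publishing{
		ContentType: "text/plain",
		Body:        []byte("hello"),
	})
	if err != nil {
		log.Fatal(err)
	}

	// Consume Deliveries; always drain the channel so the library does not block.
	deliveries, err := ch.Consume(q.Name, "", true, false, false, false, nil)
	if err != nil {
		log.Fatal(err)
	}
	for d := range deliveries {
		log.Printf("received %s", d.Body)
	}
}
```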
Package ebpf is a toolkit for working with eBPF programs. eBPF programs are small snippets of code which are executed directly in a VM in the Linux kernel, which makes them very fast and flexible. Many Linux subsystems now accept eBPF programs. This makes it possible to implement highly application specific logic inside the kernel, without having to modify the actual kernel itself. This package is designed for long-running processes which want to use eBPF to implement part of their application logic. It has no run-time dependencies outside of the library and the Linux kernel itself. eBPF code should be compiled ahead of time using clang, and shipped with your application as any other resource. Use the link subpackage to attach a loaded program to a hook in the kernel. Note that losing all references to Map and Program resources will cause their underlying file descriptors to be closed, potentially removing those objects from the kernel. Always retain a reference by e.g. deferring a Close() of a Collection or LoadAndAssign object until application exit. Special care needs to be taken when handling maps of type ProgramArray, as the kernel erases its contents when the last userspace or bpffs reference disappears, regardless of the map being in active use. ExampleMarshaler shows how to use custom encoding with map methods. ExampleExtractDistance shows how to attach an eBPF socket filter to extract the network distance of an IP host. ExampleSocketELF demonstrates how to load an eBPF program from an ELF, and attach it to a raw socket.
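A hedged sketch of loading a pre-compiled ELF and instantiating its objects in the kernel, assuming the github.com/cilium/ebpf import path; the object file and program names are hypothetical:

```go
package main

import (
	"log"

	"github.com/cilium/ebpf" // assumed import path for the package described above
)

func main() {
	// Load a CollectionSpec from an ELF produced ahead of time by clang
	// (the object file name here is hypothetical).
	spec, err := ebpf.LoadCollectionSpec("bpf_prog.o")
	if err != nil {
		log.Fatal(err)
	}

	// Instantiate the maps and programs in the kernel.
	coll, err := ebpf.NewCollection(spec)
	if err != nil {
		log.Fatal(err)
	}
	// Keep a reference until exit so the kernel objects are not released early.
	defer coll.Close()

	prog := coll.Programs["filter"] // hypothetical program name from the ELF
	if prog == nil {
		log.Fatal("program not found in ELF")
	}
	log.Println("loaded program with fd", prog.FD())
}
```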
Package btree implements in-memory B-Trees of arbitrary degree. btree implements an in-memory B-Tree for use as an ordered data structure. It is not meant for persistent storage solutions. It has a flatter structure than an equivalent red-black or other binary tree, which in some cases yields better memory usage and/or performance. See some discussion on the matter here: Note, though, that this project is in no way related to the C++ B-Tree implementation written about there. Within this tree, each node contains a slice of items and a (possibly nil) slice of children. For basic numeric values or raw structs, this can cause efficiency differences when compared to equivalent C++ template code that stores values in arrays within the node: These issues don't tend to matter, though, when working with strings or other heap-allocated structures, since C++-equivalent structures also must store pointers and also distribute their values across the heap. This implementation is designed to be a drop-in replacement to gollrb.LLRB trees, (http://github.com/petar/gollrb), an excellent and probably the most widely used ordered tree implementation in the Go ecosystem currently. Its functions, therefore, exactly mirror those of llrb.LLRB where possible. Unlike gollrb, though, we currently don't support storing multiple equivalent values. There are two implementations; those suffixed with 'G' are generics, usable for any type, and require a passed-in "less" function to define their ordering. Those without this suffix are specific to the 'Item' interface, and use its 'Less' function for ordering.
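A minimal sketch of the Item-based API, assuming the github.com/google/btree import path and its btree.Int helper:

```go
package main

import (
	"fmt"

	"github.com/google/btree" // assumed import path for the package described above
)

func main() {
	// Create a tree of degree 32 using the Item-based API; btree.Int
	// implements Item's Less method for ints.
	tr := btree.New(32)
	for i := 0; i < 10; i++ {
		tr.ReplaceOrInsert(btree.Int(i))
	}

	fmt.Println("len:", tr.Len())               // 10
	fmt.Println("has 3:", tr.Has(btree.Int(3))) // true

	// Iterate in ascending order over a sub-range [2, 6).
	tr.AscendRange(btree.Int(2), btree.Int(6), func(it btree.Item) bool {
		fmt.Println(int(it.(btree.Int)))
		return true // keep iterating
	})
}
```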
Package errors implements functions to manipulate errors. This package implements the Go 2 draft designs for error inspection and printing: This is an EXPERIMENTAL package, and may change in arbitrary ways without notice.
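The draft designs referenced above were later folded into the standard library as errors.Is, errors.As, errors.Unwrap, and fmt's %w verb; a minimal sketch of the equivalent inspection calls using the standard errors and fmt packages:

```go
package main

import (
	"errors"
	"fmt"
	"os"
)

func main() {
	// Wrap an underlying error with %w so it can be inspected later.
	_, err := os.Open("missing.txt")
	wrapped := fmt.Errorf("loading config: %w", err)

	// Is walks the wrap chain looking for a sentinel error.
	fmt.Println(errors.Is(wrapped, os.ErrNotExist)) // true

	// As walks the chain looking for a specific error type.
	var pathErr *os.PathError
	if errors.As(wrapped, &pathErr) {
		fmt.Println("failed path:", pathErr.Path)
	}
}
```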