Package bolt implements a low-level key/value store in pure Go. It supports fully serializable transactions, ACID semantics, and lock-free MVCC with multiple readers and a single writer. Bolt can be used for projects that want a simple data store without the need to add large dependencies such as Postgres or MySQL. Bolt is a single-level, zero-copy, B+tree data store. This means that Bolt is optimized for fast read access and does not require recovery in the event of a system crash. Transactions which have not finished committing will simply be rolled back in the event of a crash. The design of Bolt is based on Howard Chu's LMDB database project. Bolt currently works on Windows, Mac OS X, and Linux. There are only a few types in Bolt: DB, Bucket, Tx, and Cursor. The DB is a collection of buckets and is represented by a single file on disk. A bucket is a collection of unique keys that are associated with values. Transactions provide either read-only or read-write access to the database. Read-only transactions can retrieve key/value pairs and can use Cursors to iterate over the dataset sequentially. Read-write transactions can create and delete buckets and can insert and remove keys. Only one read-write transaction is allowed at a time. The database uses a read-only, memory-mapped data file to ensure that applications cannot corrupt the database, however, this means that keys and values returned from Bolt cannot be changed. Writing to a read-only byte slice will cause Go to panic. Keys and values retrieved from the database are only valid for the life of the transaction. When used outside the transaction, these byte slices can point to different data or can point to invalid memory which will cause a panic.
Package validator implements value validations for structs and individual fields based on tags. It can also handle Cross-Field and Cross-Struct validation for nested structs and has the ability to dive into arrays and maps of any type. See more examples: https://github.com/go-playground/validator/tree/master/_examples Validator is designed to be thread-safe and used as a singleton instance. It caches information about your struct and validations, in essence only parsing your validation tags once per struct type. Using multiple instances neglects the benefit of caching. Functions that are not thread-safe are explicitly marked as such in the documentation. Doing things this way is actually the way the standard library does, see the file.Open method here: The authors return type "error" to avoid the issue discussed in the following, where err is always != nil: Validator only returns InvalidValidationError for bad validation input, nil or ValidationErrors as type error; so, in your code all you need to do is check if the error returned is not nil, and if it's not, check if the error is InvalidValidationError (if necessary, most of the time it isn't) and type cast it to type ValidationErrors like so: err.(validator.ValidationErrors). Custom Validation functions can be added. Example: Cross-Field Validation can be done via the following tags: If, however, some custom cross-field validation is required, it can be done using a custom validation. Why not just have cross-fields validation tags (i.e. only eqcsfield and not eqfield)? The reason is efficiency. If you want to check a field within the same struct "eqfield" only has to find the field on the same struct (1 level). But, if we used "eqcsfield" it could be multiple levels down. Example: Multiple validators on a field will process in the order defined. Example: Bad Validator definitions are not handled by the library. Example: Baked In Cross-Field validation only compares fields on the same struct. 
If Cross-Field + Cross-Struct validation is needed you should implement your own custom validator. Comma (",") is the default separator of validation tags. If you wish to have a comma included within the parameter (i.e. excludesall=,) you will need to use the UTF-8 hex representation 0x2C, which is replaced in the code as a comma, so the above will become excludesall=0x2C. Pipe ("|") is the 'or' validation tags separator. If you wish to have a pipe included within the parameter (i.e. excludesall=|) you will need to use the UTF-8 hex representation 0x7C, which is replaced in the code as a pipe, so the above will become excludesall=0x7C. Here is a list of the current built in validators: Tells the validation to skip this struct field; this is particularly handy in ignoring embedded structs from being validated. (Usage: -) This is the 'or' operator allowing multiple validators to be used and accepted. (Usage: rgb|rgba) <-- this would allow either rgb or rgba colors to be accepted. This can also be combined with 'and' for example (Usage: omitempty,rgb|rgba) When a field that is a nested struct is encountered, and contains this flag any validation on the nested struct will be run, but none of the nested struct fields will be validated. This is useful if inside of your program you know the struct will be valid, but need to verify it has been assigned. NOTE: only "required" and "omitempty" can be used on a struct itself. Same as structonly tag except that any struct level validations will not run. Allows conditional validation, for example if a field is not set with a value (Determined by the "required" validator) then other validation such as min or max won't run, but if a value is set validation will run. Allows skipping the validation if the value is nil (same as omitempty, but only for the nil-values). This tells the validator to dive into a slice, array or map and validate that level of the slice, array or map with the validation tags that follow. 
Multidimensional nesting is also supported, each level you wish to dive will require another dive tag. dive has some sub-tags, 'keys' & 'endkeys', please see the Keys & EndKeys section just below. Example #1 Example #2 Keys & EndKeys These are to be used together directly after the dive tag and tells the validator that anything between 'keys' and 'endkeys' applies to the keys of a map and not the values; think of it like the 'dive' tag, but for map keys instead of values. Multidimensional nesting is also supported, each level you wish to validate will require another 'keys' and 'endkeys' tag. These tags are only valid for maps. Example #1 Example #2 This validates that the value is not the data types default zero value. For numbers ensures value is not zero. For strings ensures value is not "". For booleans ensures value is not false. For slices, maps, pointers, interfaces, channels and functions ensures the value is not nil. For structs ensures value is not the zero value when using WithRequiredStructEnabled. The field under validation must be present and not empty only if all the other specified fields are equal to the value following the specified field. For strings ensures value is not "". For slices, maps, pointers, interfaces, channels and functions ensures the value is not nil. For structs ensures value is not the zero value. Examples: The field under validation must be present and not empty unless all the other specified fields are equal to the value following the specified field. For strings ensures value is not "". For slices, maps, pointers, interfaces, channels and functions ensures the value is not nil. For structs ensures value is not the zero value. Examples: The field under validation must be present and not empty only if any of the other specified fields are present. For strings ensures value is not "". For slices, maps, pointers, interfaces, channels and functions ensures the value is not nil. For structs ensures value is not the zero value. 
Examples: The field under validation must be present and not empty only if all of the other specified fields are present. For strings ensures value is not "". For slices, maps, pointers, interfaces, channels and functions ensures the value is not nil. For structs ensures value is not the zero value. Example: The field under validation must be present and not empty only when any of the other specified fields are not present. For strings ensures value is not "". For slices, maps, pointers, interfaces, channels and functions ensures the value is not nil. For structs ensures value is not the zero value. Examples: The field under validation must be present and not empty only when all of the other specified fields are not present. For strings ensures value is not "". For slices, maps, pointers, interfaces, channels and functions ensures the value is not nil. For structs ensures value is not the zero value. Example: The field under validation must not be present or not empty only if all the other specified fields are equal to the value following the specified field. For strings ensures value is not "". For slices, maps, pointers, interfaces, channels and functions ensures the value is not nil. For structs ensures value is not the zero value. Examples: The field under validation must not be present or empty unless all the other specified fields are equal to the value following the specified field. For strings ensures value is not "". For slices, maps, pointers, interfaces, channels and functions ensures the value is not nil. For structs ensures value is not the zero value. Examples: This validates that the value is the default value and is almost the opposite of required. For numbers, length will ensure that the value is equal to the parameter given. For strings, it checks that the string length is exactly that number of characters. For slices, arrays, and maps, validates the number of items. 
Example #1 Example #2 (time.Duration) For time.Duration, len will ensure that the value is equal to the duration given in the parameter. For numbers, max will ensure that the value is less than or equal to the parameter given. For strings, it checks that the string length is at most that number of characters. For slices, arrays, and maps, validates the number of items. Example #1 Example #2 (time.Duration) For time.Duration, max will ensure that the value is less than or equal to the duration given in the parameter. For numbers, min will ensure that the value is greater or equal to the parameter given. For strings, it checks that the string length is at least that number of characters. For slices, arrays, and maps, validates the number of items. Example #1 Example #2 (time.Duration) For time.Duration, min will ensure that the value is greater than or equal to the duration given in the parameter. For strings & numbers, eq will ensure that the value is equal to the parameter given. For slices, arrays, and maps, validates the number of items. Example #1 Example #2 (time.Duration) For time.Duration, eq will ensure that the value is equal to the duration given in the parameter. For strings & numbers, ne will ensure that the value is not equal to the parameter given. For slices, arrays, and maps, validates the number of items. Example #1 Example #2 (time.Duration) For time.Duration, ne will ensure that the value is not equal to the duration given in the parameter. For strings, ints, and uints, oneof will ensure that the value is one of the values in the parameter. The parameter should be a list of values separated by whitespace. Values may be strings or numbers. To match strings with spaces in them, include the target string between single quotes. For numbers, this will ensure that the value is greater than the parameter given. For strings, it checks that the string length is greater than that number of characters. 
For slices, arrays and maps it validates the number of items. Example #1 Example #2 (time.Time) For time.Time ensures the time value is greater than time.Now.UTC(). Example #3 (time.Duration) For time.Duration, gt will ensure that the value is greater than the duration given in the parameter. Same as 'min' above. Kept both to make terminology with 'len' easier. Example #1 Example #2 (time.Time) For time.Time ensures the time value is greater than or equal to time.Now.UTC(). Example #3 (time.Duration) For time.Duration, gte will ensure that the value is greater than or equal to the duration given in the parameter. For numbers, this will ensure that the value is less than the parameter given. For strings, it checks that the string length is less than that number of characters. For slices, arrays, and maps it validates the number of items. Example #1 Example #2 (time.Time) For time.Time ensures the time value is less than time.Now.UTC(). Example #3 (time.Duration) For time.Duration, lt will ensure that the value is less than the duration given in the parameter. Same as 'max' above. Kept both to make terminology with 'len' easier. Example #1 Example #2 (time.Time) For time.Time ensures the time value is less than or equal to time.Now.UTC(). Example #3 (time.Duration) For time.Duration, lte will ensure that the value is less than or equal to the duration given in the parameter. This will validate the field value against another fields value either within a struct or passed in field. Example #1: Example #2: Field Equals Another Field (relative) This does the same as eqfield except that it validates the field provided relative to the top level struct. This will validate the field value against another fields value either within a struct or passed in field. Examples: Field Does Not Equal Another Field (relative) This does the same as nefield except that it validates the field provided relative to the top level struct. 
Only valid for Numbers, time.Duration and time.Time types, this will validate the field value against another fields value either within a struct or passed in field. usage examples are for validation of a Start and End date: Example #1: Example #2: This does the same as gtfield except that it validates the field provided relative to the top level struct. Only valid for Numbers, time.Duration and time.Time types, this will validate the field value against another fields value either within a struct or passed in field. usage examples are for validation of a Start and End date: Example #1: Example #2: This does the same as gtefield except that it validates the field provided relative to the top level struct. Only valid for Numbers, time.Duration and time.Time types, this will validate the field value against another fields value either within a struct or passed in field. usage examples are for validation of a Start and End date: Example #1: Example #2: This does the same as ltfield except that it validates the field provided relative to the top level struct. Only valid for Numbers, time.Duration and time.Time types, this will validate the field value against another fields value either within a struct or passed in field. usage examples are for validation of a Start and End date: Example #1: Example #2: This does the same as ltefield except that it validates the field provided relative to the top level struct. This does the same as contains except for struct fields. It should only be used with string types. See the behavior of reflect.Value.String() for behavior on other types. This does the same as excludes except for struct fields. It should only be used with string types. See the behavior of reflect.Value.String() for behavior on other types. For arrays & slices, unique will ensure that there are no duplicates. For maps, unique will ensure that there are no duplicate values. 
For slices of struct, unique will ensure that there are no duplicate values in a field of the struct specified via a parameter. This validates that a string value contains ASCII alpha characters only This validates that a string value contains ASCII alphanumeric characters only This validates that a string value contains unicode alpha characters only This validates that a string value contains unicode alphanumeric characters only This validates that a string value can successfully be parsed into a boolean with strconv.ParseBool This validates that a string value contains number values only. For integers or float it returns true. This validates that a string value contains a basic numeric value. basic excludes exponents etc... for integers or float it returns true. This validates that a string value contains a valid hexadecimal. This validates that a string value contains a valid hex color including hashtag (#) This validates that a string value contains only lowercase characters. An empty string is not a valid lowercase string. This validates that a string value contains only uppercase characters. An empty string is not a valid uppercase string. This validates that a string value contains a valid rgb color This validates that a string value contains a valid rgba color This validates that a string value contains a valid hsl color This validates that a string value contains a valid hsla color This validates that a string value contains a valid E.164 Phone number https://en.wikipedia.org/wiki/E.164 (ex. +1123456789) This validates that a string value contains a valid email This may not conform to all possibilities of any rfc standard, but neither does any email provider accept all possibilities. This validates that a string value is valid JSON This validates that a string value is a valid JWT This validates that a string value contains a valid file path and that the file exists on the machine. This is done using os.Stat, which is a platform independent function. 
This validates that a string value contains a valid file path and that the file exists on the machine and is an image. This is done using os.Stat and github.com/gabriel-vasile/mimetype. This validates that a string value contains a valid file path but does not validate the existence of that file. This is done using os.Stat, which is a platform independent function. This validates that a string value contains a valid url. This will accept any url the golang request uri accepts but must contain a schema, for example http:// or rtmp://. This validates that a string value contains a valid uri. This will accept any uri the golang request uri accepts. This validates that a string value contains a valid URN according to the RFC 2141 spec. This validates that a string value contains a valid base32 value. Although an empty string is valid base32 this will report an empty string as an error, if you wish to accept an empty string as valid you can use this with the omitempty tag. This validates that a string value contains a valid base64 value. Although an empty string is valid base64 this will report an empty string as an error, if you wish to accept an empty string as valid you can use this with the omitempty tag. This validates that a string value contains a valid base64 URL safe value according to the RFC4648 spec. Although an empty string is a valid base64 URL safe value, this will report an empty string as an error, if you wish to accept an empty string as valid you can use this with the omitempty tag. This validates that a string value contains a valid base64 URL safe value, but without = padding, according to the RFC4648 spec, section 3.2. Although an empty string is a valid base64 URL safe value, this will report an empty string as an error, if you wish to accept an empty string as valid you can use this with the omitempty tag. This validates that a string value contains a valid bitcoin address. 
The format of the string is checked to ensure it matches one of the three formats P2PKH, P2SH and performs checksum validation. Bitcoin Bech32 Address (segwit) This validates that a string value contains a valid bitcoin Bech32 address as defined by bip-0173 (https://github.com/bitcoin/bips/blob/master/bip-0173.mediawiki) Special thanks to Pieter Wuille for providing reference implementations. This validates that a string value contains a valid ethereum address. The format of the string is checked to ensure it matches the standard Ethereum address format. This validates that a string value contains the substring value. This validates that a string value contains any Unicode code points in the substring value. This validates that a string value contains the supplied rune value. This validates that a string value does not contain the substring value. This validates that a string value does not contain any Unicode code points in the substring value. This validates that a string value does not contain the supplied rune value. This validates that a string value starts with the supplied string value This validates that a string value ends with the supplied string value This validates that a string value does not start with the supplied string value This validates that a string value does not end with the supplied string value This validates that a string value contains a valid isbn10 or isbn13 value. This validates that a string value contains a valid isbn10 value. This validates that a string value contains a valid isbn13 value. This validates that a string value contains a valid UUID. Uppercase UUID values will not pass - use `uuid_rfc4122` instead. This validates that a string value contains a valid version 3 UUID. Uppercase UUID values will not pass - use `uuid3_rfc4122` instead. This validates that a string value contains a valid version 4 UUID. Uppercase UUID values will not pass - use `uuid4_rfc4122` instead. 
This validates that a string value contains a valid version 5 UUID. Uppercase UUID values will not pass - use `uuid5_rfc4122` instead. This validates that a string value contains a valid ULID value. This validates that a string value contains only ASCII characters. NOTE: if the string is blank, this validates as true. This validates that a string value contains only printable ASCII characters. NOTE: if the string is blank, this validates as true. This validates that a string value contains one or more multibyte characters. NOTE: if the string is blank, this validates as true. This validates that a string value contains a valid DataURI. NOTE: this will also validate that the data portion is valid base64 This validates that a string value contains a valid latitude. This validates that a string value contains a valid longitude. This validates that a string value contains a valid U.S. Social Security Number. This validates that a string value contains a valid IP Address. This validates that a string value contains a valid v4 IP Address. This validates that a string value contains a valid v6 IP Address. This validates that a string value contains a valid CIDR Address. This validates that a string value contains a valid v4 CIDR Address. This validates that a string value contains a valid v6 CIDR Address. This validates that a string value contains a valid resolvable TCP Address. This validates that a string value contains a valid resolvable v4 TCP Address. This validates that a string value contains a valid resolvable v6 TCP Address. This validates that a string value contains a valid resolvable UDP Address. This validates that a string value contains a valid resolvable v4 UDP Address. This validates that a string value contains a valid resolvable v6 UDP Address. This validates that a string value contains a valid resolvable IP Address. This validates that a string value contains a valid resolvable v4 IP Address. 
This validates that a string value contains a valid resolvable v6 IP Address. This validates that a string value contains a valid Unix Address. This validates that a string value contains a valid MAC Address. Note: See Go's ParseMAC for accepted formats and types: This validates that a string value is a valid Hostname according to RFC 952 https://tools.ietf.org/html/rfc952 This validates that a string value is a valid Hostname according to RFC 1123 https://tools.ietf.org/html/rfc1123 Full Qualified Domain Name (FQDN) This validates that a string value contains a valid FQDN. This validates that a string value appears to be an HTML element tag including those described at https://developer.mozilla.org/en-US/docs/Web/HTML/Element This validates that a string value is a proper character reference in decimal or hexadecimal format This validates that a string value is percent-encoded (URL encoded) according to https://tools.ietf.org/html/rfc3986#section-2.1 This validates that a string value contains a valid directory and that it exists on the machine. This is done using os.Stat, which is a platform independent function. This validates that a string value contains a valid directory but does not validate the existence of that directory. This is done using os.Stat, which is a platform independent function. It is safest to suffix the string with os.PathSeparator if the directory may not exist at the time of validation. This validates that a string value contains a valid DNS hostname and port that can be used to validate fields typically passed to sockets and connections. This validates that a string value is a valid datetime based on the supplied datetime format. Supplied format must match the official Go time format layout as documented in https://golang.org/pkg/time/ This validates that a string value is a valid country code based on iso3166-1 alpha-2 standard. 
see: https://www.iso.org/iso-3166-country-codes.html This validates that a string value is a valid country code based on iso3166-1 alpha-3 standard. see: https://www.iso.org/iso-3166-country-codes.html This validates that a string value is a valid country code based on iso3166-1 alpha-numeric standard. see: https://www.iso.org/iso-3166-country-codes.html This validates that a string value is a valid BCP 47 language tag, as parsed by language.Parse. More information on https://pkg.go.dev/golang.org/x/text/language BIC (SWIFT code) This validates that a string value is a valid Business Identifier Code (SWIFT code), defined in ISO 9362. More information on https://www.iso.org/standard/60390.html This validates that a string value is a valid dns RFC 1035 label, defined in RFC 1035. More information on https://datatracker.ietf.org/doc/html/rfc1035 This validates that a string value is a valid time zone based on the time zone database present on the system. Although empty value and Local value are allowed by time.LoadLocation golang function, they are not allowed by this validator. More information on https://golang.org/pkg/time/#LoadLocation This validates that a string value is a valid semver version, defined in Semantic Versioning 2.0.0. More information on https://semver.org/ This validates that a string value is a valid cve id, defined in cve mitre. More information on https://cve.mitre.org/ This validates that a string value contains a valid credit card number using Luhn algorithm. This validates that a string or (u)int value contains a valid checksum using the Luhn algorithm. This validates that a string is a valid 24 character hexadecimal string or valid connection string. Example: This validates that a string value contains a valid cron expression. This validates that a string is valid for use with SpiceDb for the indicated purpose. If no purpose is given, a purpose of 'id' is assumed. 
Alias Validators and Tags NOTE: When returning an error, the tag returned in "FieldError" will be the alias tag unless the dive tag is part of the alias. Everything after the dive tag is not reported as the alias tag. Also, the "ActualTag" in the before case will be the actual tag within the alias that failed. Here is a list of the current built in alias tags: Validator notes: A collection of validation rules that are frequently needed but are more complex than the ones found in the baked in validators. A non standard validator must be registered manually like you would with your own custom validation functions. Example of registration and use: Here is a list of the current non standard validators: This package panics when bad input is provided, this is by design, bad code like that should not make it to production.
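The "err is always != nil" issue mentioned in the validator notes above, and the caller-side check that follows from it, shown as an added example that is not code from the validator package. InvalidInputError, FieldFailures, ValidateBroken, ValidateFixed and Classify are hypothetical local stand-ins, and the rule (a name that is required and at most 8 characters) is constructed for the demonstration.

```go
package main

import (
	"errors"
	"fmt"
	"strings"
)

// InvalidInputError and FieldFailures are local stand-ins for the two error
// kinds the text describes; they are NOT the validator package's own types.
type InvalidInputError struct{ Reason string }

func (e *InvalidInputError) Error() string { return "invalid validation input: " + e.Reason }

type FieldFailure struct{ Field, Tag string }

type FieldFailures []FieldFailure

func (f FieldFailures) Error() string {
	parts := make([]string, 0, len(f))
	for _, ff := range f {
		parts = append(parts, ff.Field+" failed on '"+ff.Tag+"'")
	}
	return strings.Join(parts, "; ")
}

// collect returns the concrete slice type; a nil slice means "no failures".
func collect(name string) FieldFailures {
	var out FieldFailures
	if name == "" {
		out = append(out, FieldFailure{"Name", "required"})
	}
	if len(name) > 8 {
		out = append(out, FieldFailure{"Name", "max"})
	}
	return out
}

// ValidateBroken hands the concrete value straight to the error interface.
// A nil FieldFailures stored in an interface is a non-nil interface value,
// so the caller's "err != nil" check is true even for valid input.
func ValidateBroken(name *string) error {
	if name == nil {
		return &InvalidInputError{"nil pointer"}
	}
	return collect(*name)
}

// ValidateFixed returns an untyped nil on success, so "err == nil" means valid.
func ValidateFixed(name *string) error {
	if name == nil {
		return &InvalidInputError{"nil pointer"}
	}
	if f := collect(*name); len(f) > 0 {
		return f
	}
	return nil
}

// Classify is the check order the text recommends: nil first, then the
// bad-input error, then a type assertion to the failure list.
func Classify(err error) string {
	if err == nil {
		return "ok"
	}
	var bad *InvalidInputError
	if errors.As(err, &bad) {
		return "bad input"
	}
	if ff, ok := err.(FieldFailures); ok {
		return fmt.Sprintf("%d field failure(s)", len(ff))
	}
	return "unexpected error"
}

func main() {
	good, long := "alice", "averylongname"
	fmt.Println("broken, valid input:", Classify(ValidateBroken(&good)))
	fmt.Println("fixed,  valid input:", Classify(ValidateFixed(&good)))
	fmt.Println("fixed,  long input: ", Classify(ValidateFixed(&long)))
	fmt.Println("fixed,  nil input:  ", Classify(ValidateFixed(nil)))
}
```
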
Package cron implements a cron spec parser and job runner. To download the specific tagged release, run: Import it in your program as: It requires Go 1.11 or later due to usage of Go Modules. Callers may register Funcs to be invoked on a given schedule. Cron will run them in their own goroutines. A cron expression represents a set of times, using 5 space-separated fields. Month and Day-of-week field values are case insensitive. "SUN", "Sun", and "sun" are equally accepted. The specific interpretation of the format is based on the Cron Wikipedia page: https://en.wikipedia.org/wiki/Cron Alternative Cron expression formats support other fields like seconds. You can implement that by creating a custom Parser as follows. Since adding Seconds is the most common modification to the standard cron spec, cron provides a builtin function to do that, which is equivalent to the custom parser you saw earlier, except that its seconds field is REQUIRED: That emulates Quartz, the most popular alternative Cron schedule format: http://www.quartz-scheduler.org/documentation/quartz-2.x/tutorials/crontrigger.html Asterisk ( * ) The asterisk indicates that the cron expression will match for all values of the field; e.g., using an asterisk in the 5th field (month) would indicate every month. Slash ( / ) Slashes are used to describe increments of ranges. For example 3-59/15 in the 1st field (minutes) would indicate the 3rd minute of the hour and every 15 minutes thereafter. The form "*/..." is equivalent to the form "first-last/...", that is, an increment over the largest possible range of the field. The form "N/..." is accepted as meaning "N-MAX/...", that is, starting at N, use the increment until the end of that specific range. It does not wrap around. Comma ( , ) Commas are used to separate items of a list. For example, using "MON,WED,FRI" in the 5th field (day of week) would mean Mondays, Wednesdays and Fridays. Hyphen ( - ) Hyphens are used to define ranges. 
For example, 9-17 would indicate every hour between 9am and 5pm inclusive. Question mark ( ? ) Question mark may be used instead of '*' for leaving either day-of-month or day-of-week blank. You may use one of several pre-defined schedules in place of a cron expression. You may also schedule a job to execute at fixed intervals, starting at the time it's added or cron is run. This is supported by formatting the cron spec like this: where "duration" is a string accepted by time.ParseDuration (http://golang.org/pkg/time/#ParseDuration). For example, "@every 1h30m10s" would indicate a schedule that activates after 1 hour, 30 minutes, 10 seconds, and then every interval after that. Note: The interval does not take the job runtime into account. For example, if a job takes 3 minutes to run, and it is scheduled to run every 5 minutes, it will have only 2 minutes of idle time between each run. By default, all interpretation and scheduling is done in the machine's local time zone (time.Local). You can specify a different time zone on construction: Individual cron schedules may also override the time zone they are to be interpreted in by providing an additional space-separated field at the beginning of the cron spec, of the form "CRON_TZ=Asia/Tokyo". For example: The prefix "TZ=(TIME ZONE)" is also supported for legacy compatibility. Be aware that jobs scheduled during daylight-savings leap-ahead transitions will not be run! A Cron runner may be configured with a chain of job wrappers to add cross-cutting functionality to all submitted jobs. For example, they may be used to achieve the following effects: Install wrappers for all jobs added to a cron using the `cron.WithChain` option: Install wrappers for individual jobs by explicitly wrapping them: Since the Cron service runs concurrently with the calling code, some amount of care must be taken to ensure proper synchronization. 
All cron methods are designed to be correctly synchronized as long as the caller ensures that invocations have a clear happens-before ordering between them. Cron defines a Logger interface that is a subset of the one defined in github.com/go-logr/logr. It has two logging levels (Info and Error), and parameters are key/value pairs. This makes it possible for cron logging to plug into structured logging systems. An adapter, [Verbose]PrintfLogger, is provided to wrap the standard library *log.Logger. For additional insight into Cron operations, verbose logging may be activated which will record job runs, scheduling decisions, and added or removed jobs. Activate it with a one-off logger as follows: Cron entries are stored in an array, sorted by their next activation time. Cron sleeps until the next job is due to be run. Upon waking:
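The asterisk, slash, comma, hyphen and question-mark rules from the cron section above, worked through as an added example that is not code from the cron package. expandField is a hypothetical helper that handles numeric values only (no month or weekday names) and lists the values a single field matches.

```go
package main

import (
	"fmt"
	"sort"
	"strconv"
	"strings"
)

// expandField lists the values one numeric cron field matches inside
// [lowest, highest], following the rules described in the text.
// Hypothetical helper for illustration; names such as MON are not handled.
func expandField(expr string, lowest, highest int) ([]int, error) {
	seen := map[int]bool{}
	for _, item := range strings.Split(expr, ",") { // comma: list of items
		rangePart, stepPart, hasStep := strings.Cut(item, "/")
		lo, hi := lowest, highest
		switch {
		case rangePart == "*" || rangePart == "?":
			// asterisk (or question mark): the whole range of the field
		case strings.Contains(rangePart, "-"): // hyphen: explicit range
			a, b, _ := strings.Cut(rangePart, "-")
			var err error
			if lo, err = strconv.Atoi(a); err != nil {
				return nil, fmt.Errorf("bad range start in %q", item)
			}
			if hi, err = strconv.Atoi(b); err != nil {
				return nil, fmt.Errorf("bad range end in %q", item)
			}
		default: // single value N
			n, err := strconv.Atoi(rangePart)
			if err != nil {
				return nil, fmt.Errorf("bad value in %q", item)
			}
			lo, hi = n, n
			if hasStep {
				hi = highest // "N/step" means "N-MAX/step"
			}
		}
		step := 1
		if hasStep { // slash: increment over the range
			s, err := strconv.Atoi(stepPart)
			if err != nil || s <= 0 {
				return nil, fmt.Errorf("bad step in %q", item)
			}
			step = s
		}
		if lo < lowest || hi > highest || lo > hi {
			return nil, fmt.Errorf("%q is outside %d-%d", item, lowest, highest)
		}
		// The loop stops at hi, so an increment never wraps around.
		for v := lo; v <= hi; v += step {
			seen[v] = true
		}
	}
	out := make([]int, 0, len(seen))
	for v := range seen {
		out = append(out, v)
	}
	sort.Ints(out)
	return out, nil
}

func main() {
	// Constructed sample fields; minutes use the range 0-59, hours 0-23.
	for _, expr := range []string{"3-59/15", "*/15", "10/20", "50/20", "1,15,30-32"} {
		vals, err := expandField(expr, 0, 59)
		fmt.Printf("minutes %-11s -> %v %v\n", expr, vals, err)
	}
	hours, err := expandField("9-17", 0, 23)
	fmt.Printf("hours   %-11s -> %v %v\n", "9-17", hours, err)
}
```
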
package bbolt implements a low-level key/value store in pure Go. It supports fully serializable transactions, ACID semantics, and lock-free MVCC with multiple readers and a single writer. Bolt can be used for projects that want a simple data store without the need to add large dependencies such as Postgres or MySQL. Bolt is a single-level, zero-copy, B+tree data store. This means that Bolt is optimized for fast read access and does not require recovery in the event of a system crash. Transactions which have not finished committing will simply be rolled back in the event of a crash. The design of Bolt is based on Howard Chu's LMDB database project. Bolt currently works on Windows, Mac OS X, and Linux. There are only a few types in Bolt: DB, Bucket, Tx, and Cursor. The DB is a collection of buckets and is represented by a single file on disk. A bucket is a collection of unique keys that are associated with values. Transactions provide either read-only or read-write access to the database. Read-only transactions can retrieve key/value pairs and can use Cursors to iterate over the dataset sequentially. Read-write transactions can create and delete buckets and can insert and remove keys. Only one read-write transaction is allowed at a time. The database uses a read-only, memory-mapped data file to ensure that applications cannot corrupt the database, however, this means that keys and values returned from Bolt cannot be changed. Writing to a read-only byte slice will cause Go to panic. Keys and values retrieved from the database are only valid for the life of the transaction. When used outside the transaction, these byte slices can point to different data or can point to invalid memory which will cause a panic.
Package amqp is an AMQP 0.9.1 client with RabbitMQ extensions. Understand the AMQP 0.9.1 messaging model by reviewing these links first. Much of the terminology in this library directly relates to AMQP concepts. Most other broker clients publish to queues, but in AMQP, clients publish to Exchanges instead. AMQP is programmable, meaning that both the producers and consumers agree on the configuration of the broker, instead of requiring an operator or system configuration that declares the logical topology in the broker. The routing between producers and consumer queues is via Bindings. These bindings form the logical topology of the broker. In this library, a message sent from a publisher is called a "Publishing" and a message received by a consumer is called a "Delivery". The fields of Publishings and Deliveries are close but not exact mappings to the underlying wire format to maintain stronger types. Many other libraries will combine message properties with message headers. In this library, the message's well-known properties are strongly typed fields on the Publishings and Deliveries, whereas the user-defined headers are in the Headers field. The method naming closely matches the protocol's method name with positional parameters mapping to named protocol message fields. The motivation here is to present a comprehensive view over all possible interactions with the server. Generally, methods that map to protocol methods of the "basic" class will be elided in this interface, and "select" methods of various channel mode selectors will be elided, for example Channel.Confirm and Channel.Tx. The library is intentionally designed to be synchronous, where responses for each protocol message are required to be received in an RPC manner. Some methods have a noWait parameter like Channel.QueueDeclare, and some methods are asynchronous like Channel.Publish. 
The error values should still be checked for these methods as they will indicate IO failures like when the underlying connection closes. Clients of this library may be interested in receiving some of the protocol messages other than Deliveries like basic.ack methods while a channel is in confirm mode. The Notify* methods with Connection and Channel receivers model the pattern of asynchronous events like closes due to exceptions, or messages that are sent out of band from an RPC call like basic.ack or basic.flow. Any asynchronous events, including Deliveries and Publishings, must always have a receiver until the corresponding chans are closed. Without asynchronous receivers, the synchronous methods will block. It's important as a client to an AMQP topology to ensure the state of the broker matches your expectations. For both publish and consume use cases, make sure you declare the queues, exchanges and bindings you expect to exist prior to calling Channel.Publish or Channel.Consume. SSL/TLS - Secure connections: When Dial encounters an amqps:// scheme, it will use the zero value of a tls.Config. This will only perform server certificate and host verification. Use DialTLS when you wish to provide a client certificate (recommended), include a private certificate authority's certificate in the cert chain for server validity, or run insecure by not verifying the server certificate. DialTLS will use the provided tls.Config when it encounters an amqps:// scheme and will dial a plain connection when it encounters an amqp:// scheme. SSL/TLS in RabbitMQ is documented here: http://www.rabbitmq.com/ssl.html This exports a Session object that wraps this library. It automatically reconnects when the connection fails, and blocks all pushes until the connection succeeds. It also confirms every outgoing message, so none are lost. It doesn't automatically ack each message, but leaves that to the parent process, since it is usage-dependent. 
Try running this in one terminal, and `rabbitmq-server` in another. Stop & restart RabbitMQ to see how the queue reacts.
Package buffalo is a Go web development eco-system, designed to make your life easier. Buffalo helps you to generate a web project that already has everything from front-end (JavaScript, SCSS, etc.) to back-end (database, routing, etc.) already hooked up and ready to run. From there it provides easy APIs to build your web application quickly in Go. Buffalo **isn't just a framework**, it's a holistic web development environment and project structure that **lets developers get straight to the business** of, well, building their business.
Package log implements logging for the datadog agent. It wraps seelog, and supports logging to multiple destinations, buffering messages logged before setup, and scrubbing secrets from log messages. This module is exported and can be used outside of the datadog-agent repository, but is not designed as a general-purpose logging system. Its API may change incompatibly.
Package log15 provides an opinionated, simple toolkit for best-practice logging that is both human and machine readable. It is modeled after the standard library's io and net/http packages. This package requires you to log only key/value pairs. Keys must be strings. Values may be any type that you like. The default output format is logfmt, but you may also choose to use JSON instead if that suits you. Here's how you log: This will output a line that looks like: To get started, you'll want to import the library: Now you're ready to start logging: Because recording a human-meaningful message is common and good practice, the first argument to every logging method is the value to the *implicit* key 'msg'. Additionally, the level you choose for a message will be automatically added with the key 'lvl', and so will the current timestamp with key 't'. You may supply any additional context as a set of key/value pairs to the logging function. log15 allows you to favor terseness, ordering, and speed over safety. This is a reasonable tradeoff for logging functions. You don't need to explicitly state keys/values, log15 understands that they alternate in the variadic argument list: If you really do favor your type-safety, you may choose to pass a log.Ctx instead: Frequently, you want to add context to a logger so that you can track actions associated with it. An HTTP request is a good example. You can easily create new loggers that have context that is automatically included with each log line: This will output a log line that includes the path context that is attached to the logger: The Handler interface defines where log lines are printed to and how they are formatted. Handler is a single interface that is inspired by net/http's handler interface: Handlers can filter records, format them, or dispatch to multiple other Handlers. This package implements a number of Handlers for common logging patterns that are easily composed to create flexible, custom logging structures. 
Here's an example handler that prints logfmt output to Stdout: Here's an example handler that defers to two other handlers. One handler only prints records from the rpc package in logfmt to standard out. The other prints records at Error level or above in JSON formatted output to the file /var/log/service.json This package implements three Handlers that add debugging information to the context, CallerFileHandler, CallerFuncHandler and CallerStackHandler. Here's an example that adds the source file and line number of each logging call to the context. This will output a line that looks like: Here's an example that logs the call stack rather than just the call site. This will output a line that looks like: The "%+v" format instructs the handler to include the path of the source file relative to the compile time GOPATH. The github.com/go-stack/stack package documents the full list of formatting verbs and modifiers available. The Handler interface is so simple that it's also trivial to write your own. Let's create an example handler which tries to write to one handler, but if that fails it falls back to writing to another handler and includes the error that it encountered when trying to write to the primary. This might be useful when trying to log over a network socket, but if that fails you want to log those records to a file on disk. This pattern is so useful that a generic version that handles an arbitrary number of Handlers is included as part of this library called FailoverHandler. Sometimes, you want to log values that are extremely expensive to compute, but you don't want to pay the price of computing them if you haven't turned up your logging level to a high level of detail. This package provides a simple type to annotate a logging operation that you want to be evaluated lazily, just when it is about to be logged, so that it would not be evaluated if an upstream Handler filters it out. Just wrap any function which takes no arguments with the log.Lazy type. 
For example: If this message is not logged for any reason (like logging at the Error level), then factorRSAKey is never evaluated. The same log.Lazy mechanism can be used to attach context to a logger which you want to be evaluated when the message is logged, but not when the logger is created. For example, let's imagine a game where you have Player objects: You always want to log a player's name and whether they're alive or dead, so when you create the player object, you might do: Only now, even after a player has died, the logger will still report they are alive because the logging context is evaluated when the logger is created. By using the Lazy wrapper, we can defer the evaluation of whether the player is alive or not to each log message, so that the log records will reflect the player's current state no matter when the log message is written: If log15 detects that stdout is a terminal, it will configure the default handler for it (which is log.StdoutHandler) to use TerminalFormat. This format logs records nicely for your terminal, including color-coded output based on log level. Because log15 allows you to step around the type system, there are a few ways you can specify invalid arguments to the logging functions. You could, for example, wrap something that is not a zero-argument function with log.Lazy or pass a context key that is not a string. Since logging libraries are typically the mechanism by which errors are reported, it would be onerous for the logging functions to return errors. Instead, log15 handles errors by making these guarantees to you: - Any log record containing an error will still be printed with the error explained to you as part of the log record. - Any log record containing an error will include the context key LOG15_ERROR, enabling you to easily (and if you like, automatically) detect if any of your logging calls are passing bad values. 
Understanding this, you might wonder why the Handler interface can return an error value in its Log method. Handlers are encouraged to return errors only if they fail to write their log records out to an external source, for example if the syslog daemon is not responding. This allows the construction of useful handlers which cope with those failures, like the FailoverHandler. log15 is intended to be useful for library authors as a way to provide configurable logging to users of their library. Best practice for use in a library is to always disable all output for your logger by default and to provide a public Logger instance that consumers of your library can configure. Like so: Users of your library may then enable it if they like: The ability to attach context to a logger is a powerful one. Where should you do it and why? I favor embedding a Logger directly into any persistent object in my application and adding unique, tracing context keys to it. For instance, imagine I am writing a web browser: When a new tab is created, I assign a logger to it with the url of the tab as context so it can easily be traced through the logs. Now, whenever we perform any operation with the tab, we'll log with its embedded logger and it will include the tab title automatically: There's only one problem. What if the tab url changes? We could use log.Lazy to make sure the current url is always written, but that would mean that we couldn't trace a tab's full lifetime through our logs after the user navigates to a new URL. Instead, think about what values to attach to your loggers the same way you think about what to use as a key in a SQL database schema. If it's possible to use a natural key that is unique for the lifetime of the object, do so. But otherwise, log15's ext package has a handy RandId function to let you generate what you might call "surrogate keys". They're just random hex identifiers to use for tracing. 
Back to our Tab example, we would prefer to set up our Logger like so: Now we'll have a unique traceable identifier even across loading new urls, but we'll still be able to see the tab's current url in the log messages. For all Handler functions which can return an error, there is a version of that function which will return no error but panics on failure. They are all available on the Must object. For example: All of the following excellent projects inspired the design of this library: code.google.com/p/log4go, github.com/op/go-logging, github.com/technoweenie/grohl, github.com/Sirupsen/logrus, github.com/kr/logfmt, github.com/spacemonkeygo/spacelog, golang's stdlib (notably io and net/http), https://xkcd.com/927/
Package errors provides an easy way to annotate errors without losing the original error context. The exported `New` and `Errorf` functions are designed to replace the `errors.New` and `fmt.Errorf` functions respectively. The same underlying error is there, but the package also records the location at which the error was created. A primary use case for this library is to add extra context any time an error is returned from a function. This instead becomes: which just records the file and line number of the Trace call, or which also adds an annotation to the error. When you want to check to see if an error is of a particular type, a helper function is normally exported by the package that returned the error, like the `os` package does. The underlying cause of the error is available using the `Cause` function. The result of the `Error()` call on an annotated error is the annotations joined with colons, then the result of the `Error()` method for the underlying error that was the cause. Obviously recording the file, line and functions is not very useful if you cannot get them back out again. will return something like: The first error was generated by an external system, so there was no location associated. The second, fourth, and last lines were generated with Trace calls, and the other two through Annotate. Sometimes when responding to an error you want to return a more specific error for the situation. This returns an error where the complete error stack is still available, and `errors.Cause()` will return the `NotFound` error.
Package amqp091 is an AMQP 0.9.1 client with RabbitMQ extensions. Understand the AMQP 0.9.1 messaging model by reviewing these links first. Much of the terminology in this library directly relates to AMQP concepts. Most other broker clients publish to queues, but in AMQP, clients publish to Exchanges instead. AMQP is programmable, meaning that both the producers and consumers agree on the configuration of the broker, instead of requiring an operator or system configuration that declares the logical topology in the broker. The routing between producers and consumer queues is via Bindings. These bindings form the logical topology of the broker. In this library, a message sent from a publisher is called a "Publishing" and a message received by a consumer is called a "Delivery". The fields of Publishings and Deliveries are close but not exact mappings to the underlying wire format to maintain stronger types. Many other libraries will combine message properties with message headers. In this library, the message's well-known properties are strongly typed fields on the Publishings and Deliveries, whereas the user-defined headers are in the Headers field. The method naming closely matches the protocol's method name with positional parameters mapping to named protocol message fields. The motivation here is to present a comprehensive view over all possible interactions with the server. Generally, methods that map to protocol methods of the "basic" class will be elided in this interface, and "select" methods of various channel mode selectors will be elided, for example Channel.Confirm and Channel.Tx. The library is intentionally designed to be synchronous, where responses for each protocol message are required to be received in an RPC manner. Some methods have a noWait parameter like Channel.QueueDeclare, and some methods are asynchronous like Channel.Publish. 
The error values should still be checked for these methods as they will indicate IO failures like when the underlying connection closes. Clients of this library may be interested in receiving some of the protocol messages other than Deliveries like basic.ack methods while a channel is in confirm mode. The Notify* methods with Connection and Channel receivers model the pattern of asynchronous events like closes due to exceptions, or messages that are sent out of band from an RPC call like basic.ack or basic.flow. Any asynchronous events, including Deliveries and Publishings must always have a receiver until the corresponding chans are closed. Without asynchronous receivers, the synchronous methods will block. It's important as a client to an AMQP topology to ensure the state of the broker matches your expectations. For both publish and consume use cases, make sure you declare the queues, exchanges and bindings you expect to exist prior to calling Channel.PublishWithContext or Channel.Consume. When Dial encounters an amqps:// scheme, it will use the zero value of a tls.Config. This will only perform server certificate and host verification. Use DialTLS when you wish to provide a client certificate (recommended), include a private certificate authority's certificate in the cert chain for server validity, or run insecure by not verifying the server certificate. DialTLS will use the provided tls.Config when it encounters an amqps:// scheme and will dial a plain connection when it encounters an amqp:// scheme. SSL/TLS in RabbitMQ is documented here: http://www.rabbitmq.com/ssl.html In order to be notified when a connection or channel gets closed, both structures offer the possibility to register channels using Channel.NotifyClose and Connection.NotifyClose functions: No errors will be sent in case of a graceful connection close. In case of a non-graceful closure due to e.g. 
network issue, or forced connection closure from the Management UI, the error will be notified synchronously by the library. The library sends to notification channels just once. After sending a notification to all channels, the library closes all registered notification channels. After receiving a notification, the application should create and register a new channel. To avoid deadlocks in the library, it is necessary to consume from the channels. This could be done inside a different goroutine with a select listening on the two channels inside a for loop like: It is strongly recommended to use buffered channels to avoid deadlocks inside the library. Using Channel.NotifyPublish allows the caller of the library to be notified, through a go channel, when a message has been received and confirmed by the broker. It's advisable to wait for all Confirmations to arrive before calling Channel.Close or Connection.Close. It is also necessary to consume from this channel until it gets closed. The library sends synchronously to the registered channel. It is advisable to use a buffered channel, with capacity set to the maximum acceptable number of unconfirmed messages. It is important to consume from the confirmation channel at all times, in order to avoid deadlocks in the library. This exports a Client object that wraps this library. It automatically reconnects when the connection fails, and blocks all pushes until the connection succeeds. It also confirms every outgoing message, so none are lost. It doesn't automatically ack each message, but leaves that to the parent process, since it is usage-dependent. Try running this in one terminal, and rabbitmq-server in another. Stop & restart RabbitMQ to see how the queue reacts.
Package kyber provides a toolbox of advanced cryptographic primitives, for applications that need more than straightforward signing and encryption. This top level package defines the interfaces to cryptographic primitives designed to be independent of specific cryptographic algorithms, to facilitate upgrading applications to new cryptographic algorithms or switching to alternative algorithms for experimentation purposes. This toolkit's public-key crypto API includes a kyber.Group interface supporting a broad class of group-based public-key primitives including DSA-style integer residue groups and elliptic curve groups. Users of this API can write higher-level crypto algorithms such as zero-knowledge proofs without knowing or caring exactly what kind of group, let alone which precise security parameters or elliptic curves, are being used. The kyber.Group interface supports the standard algebraic operations on group elements and scalars that nontrivial public-key algorithms tend to rely on. The interface uses additive group terminology typical for elliptic curves, such that point addition is homomorphically equivalent to adding their (potentially secret) scalar multipliers. But the API and its operations apply equally well to DSA-style integer groups. As a trivial example, generating a public/private keypair is as simple as: The first statement picks a private key (Scalar) from the suite's source of cryptographic random or pseudo-random bits, while the second performs elliptic curve scalar multiplication of the curve's standard base point (indicated by the 'nil' argument to Mul) by the scalar private key 'a'. 
Similarly, computing a Diffie-Hellman shared secret using Alice's private key 'a' and Bob's public key 'B' can be done via: Note that we use 'Mul' rather than 'Exp' here because the library uses the additive-group terminology common for elliptic curve crypto, rather than the multiplicative-group terminology of traditional integer groups - but the two are semantically equivalent and the interface itself works for both elliptic curve and integer groups.

Various sub-packages provide several specific implementations of these cryptographic interfaces. In particular, the 'group/mod' sub-package provides implementations of modular integer groups underlying conventional DSA-style algorithms. The `group/nist` package provides NIST-standardized elliptic curves built on the Go crypto library. The 'group/edwards25519' sub-package provides the kyber.Group interface using the popular Ed25519 curve.

Other sub-packages build more interesting high-level cryptographic tools atop these primitive interfaces, including:

- share: Polynomial commitment and verifiable Shamir secret splitting for implementing verifiable 't-of-n' threshold cryptographic schemes. This can be used to encrypt a message so that any 2 out of 3 receivers must work together to decrypt it, for example.
- proof: An implementation of the general Camenisch/Stadler framework for discrete logarithm knowledge proofs. This system supports both interactive and non-interactive proofs of a wide variety of statements such as, "I know the secret x associated with public key X or I know the secret y associated with public key Y", without revealing anything about either secret or even which branch of the "or" clause is true.
- sign: The sign directory contains different signature schemes.
  - sign/anon provides anonymous and pseudonymous public-key encryption and signing, where the sender of a signed message or the receiver of an encrypted message is defined as an explicit anonymity set containing several public keys rather than just one. For example, a member of an organization's board of trustees might prove to be a member of the board without revealing which member she is.
  - sign/cosi provides a collective signature algorithm, where a bunch of signers create a unique, compact and efficiently verifiable signature using the Schnorr signature as a basis.
  - sign/eddsa provides a kyber-native implementation of the EdDSA signature scheme.
  - sign/schnorr provides a basic vanilla Schnorr signature scheme implementation.
- shuffle: Verifiable cryptographic shuffles of ElGamal ciphertexts, which can be used to implement (for example) voting or auction schemes that keep the sources of individual votes or bids private without anyone having to trust more than one of the shuffler(s) to shuffle votes/bids honestly.

As should be obvious, this library is intended to be used by developers who are at least moderately knowledgeable about cryptography. If you want a crypto library that makes it easy to implement "basic crypto" functionality correctly - i.e., plain public-key encryption and signing - then [NaCl secretbox](https://godoc.org/golang.org/x/crypto/nacl/secretbox) may be a better choice. This toolkit's purpose is to make it possible - and preferably easy - to do slightly more interesting things that most current crypto libraries don't support effectively. The one existing crypto library that this toolkit is probably most comparable to is the Charm rapid prototyping library for Python (https://charm-crypto.com/category/charm).

This library incorporates and/or builds on existing code from a variety of sources, as documented in the relevant sub-packages. This library is offered as-is, and without a guarantee. 
It will need an independent security review before it should be considered ready for use in security-critical applications. If you integrate Kyber into your application it is YOUR RESPONSIBILITY to arrange for that audit. If you notice a possible security problem, please report it to dedis-security@epfl.ch.
Package mempool provides a policy-enforced pool of unmined Decred transactions. A key responsibility of the Decred network is mining transactions – regular transactions and stake transactions – into blocks. In order to facilitate this, the mining process relies on having a readily-available source of transactions to include in a block that is being solved. At a high level, this package satisfies that requirement by providing an in-memory pool of fully validated transactions that can also optionally be further filtered based upon a configurable policy. The Policy configuration options have flags that control whether or not "standard" transactions and old votes are accepted into the mempool. In essence, a "standard" transaction is one that satisfies a fairly strict set of requirements that are largely intended to help provide fair use of the system to all users. It is important to note that what is considered to be a "standard" transaction changes over time as policy and consensus rules evolve. For some insight, at the time of this writing, an example of _some_ of the criteria that are required for a transaction to be considered standard are that it is of the most-recently supported version, finalized, does not exceed a specific size, and only consists of specific script forms. Since this package does not deal with other Decred specifics such as network communication and transaction relay, it returns a list of transactions that were accepted which gives the caller a high level of flexibility in how they want to proceed. Typically, this will involve things such as relaying the transactions to other peers on the network and notifying the mining process that new transactions are available. 
This package has intentionally been designed so it can be used as a standalone package for any projects needing the ability to create an in-memory pool of Decred transactions that are not only valid by consensus rules, but also adhere to a configurable policy.

## Feature Overview

The following is a quick overview of the major features. It is not intended to be an exhaustive list.

- Maintain a pool of fully validated transactions
- Stake transaction support (ticket purchases, votes and revocations)
- Orphan transaction support (transactions that spend from unknown outputs)
- Configurable transaction acceptance policy
- Additional metadata tracking for each transaction
- Manual control of transaction removal

Errors returned by this package are either the raw errors provided by underlying calls or of type mempool.RuleError. Since there are two classes of rules (mempool acceptance rules and blockchain (consensus) acceptance rules), the mempool.RuleError type contains a single Err field which will, in turn, either be a mempool.TxRuleError or a blockchain.RuleError. The first indicates a violation of mempool acceptance rules while the latter indicates a violation of consensus acceptance rules. This allows the caller to easily differentiate between unexpected errors, such as database errors, versus errors due to rule violations through type assertions. In addition, callers can programmatically determine the specific rule violation by type asserting the Err field to one of the aforementioned types and examining their underlying ErrorCode field.
Package spf implements SPF (Sender Policy Framework) lookup and validation. Sender Policy Framework (SPF) is a simple email-validation system designed to detect email spoofing by providing a mechanism to allow receiving mail exchangers to check that incoming mail from a domain comes from a host authorized by that domain's administrators [Wikipedia]. This package is intended to be used by SMTP servers to implement SPF validation. All mechanisms and modifiers are supported: References:
Package gnark provides fast Zero Knowledge Proof (ZKP) systems and high-level APIs to design ZKP circuits. gnark supports the following ZKP schemes: gnark supports the following curves: User documentation https://docs.gnark.consensys.net
Package digest provides a generalized type to opaquely represent message digests and their operations within the registry. The Digest type is designed to serve as a flexible identifier in a content-addressable system. More importantly, it provides tools and wrappers to work with hash.Hash-based digests with little effort. The format of a digest is simply a string with two parts, dubbed the "algorithm" and the "digest", separated by a colon: An example of a sha256 digest representation follows: The "algorithm" portion defines both the hashing algorithm used to calculate the digest and the encoding of the resulting digest, which defaults to "hex" if not otherwise specified. Currently, all supported algorithms have their digests encoded in hex strings. In the example above, the string "sha256" is the algorithm and the hex bytes are the "digest". Because the Digest type is simply a string, once a valid Digest is obtained, comparisons are cheap, quick and simple to express with the standard equality operator. The main benefit of using the Digest type is simple verification against a given digest. The Verifier interface, modeled after the stdlib hash.Hash interface, provides a common write sink for digest verification. After writing is complete, calling the Verifier.Verified method will indicate whether or not the stream of bytes matches the target digest. In addition to the above, we intend to add the following features to this package: 1. A Digester type that supports write sink digest calculation. 2. Suspend and resume of ongoing digest calculations to support efficient digest verification in the registry.
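An added example (not code from the digest package itself): a standard-library sketch of the "algorithm:hex" format and of a Verifier-style write sink as described above. The names parseDigest, newVerifier, verifyStream and fromBytes, the two-entry algorithm table and the lowercase-hex rule are assumptions of this sketch.

```go
package main

import (
	"crypto/sha256"
	"crypto/sha512"
	"crypto/subtle"
	"encoding/hex"
	"errors"
	"fmt"
	"hash"
	"io"
	"strings"
)

// algorithms is an allow-list: anything not named here is rejected, so a
// caller cannot be talked into verifying against a weak or unknown hash.
var algorithms = map[string]func() hash.Hash{
	"sha256": sha256.New,
	"sha512": sha512.New,
}

var (
	errFormat      = errors.New("digest: invalid format")
	errUnsupported = errors.New("digest: unsupported algorithm")
)

// parseDigest splits "algorithm:hex" and validates both halves. It returns
// a fresh hash for the algorithm and the decoded target sum.
func parseDigest(s string) (hash.Hash, []byte, error) {
	i := strings.IndexByte(s, ':')
	if i <= 0 || i == len(s)-1 {
		return nil, nil, errFormat
	}
	alg, enc := s[:i], s[i+1:]
	newHash, ok := algorithms[alg]
	if !ok {
		return nil, nil, errUnsupported
	}
	h := newHash()
	want, err := hex.DecodeString(enc)
	// Length must match the algorithm; uppercase hex is refused so that two
	// digests of the same content always compare equal as plain strings.
	if err != nil || len(want) != h.Size() || enc != strings.ToLower(enc) {
		return nil, nil, errFormat
	}
	return h, want, nil
}

// verifier is a write sink: bytes go in, Verified reports the match.
type verifier struct {
	h    hash.Hash
	want []byte
}

func newVerifier(digest string) (*verifier, error) {
	h, want, err := parseDigest(digest)
	if err != nil {
		return nil, err
	}
	return &verifier{h: h, want: want}, nil
}

func (v *verifier) Write(p []byte) (int, error) { return v.h.Write(p) }

// Verified compares in constant time against the target sum.
func (v *verifier) Verified() bool {
	return subtle.ConstantTimeCompare(v.h.Sum(nil), v.want) == 1
}

// verifyStream copies r into a verifier and reports whether it matched.
func verifyStream(r io.Reader, digest string) (bool, error) {
	v, err := newVerifier(digest)
	if err != nil {
		return false, err
	}
	if _, err := io.Copy(v, r); err != nil {
		return false, err
	}
	return v.Verified(), nil
}

// fromBytes renders the digest string for data under the named algorithm.
func fromBytes(alg string, data []byte) (string, error) {
	newHash, ok := algorithms[alg]
	if !ok {
		return "", errUnsupported
	}
	h := newHash()
	h.Write(data)
	return alg + ":" + hex.EncodeToString(h.Sum(nil)), nil
}

func main() {
	blob := []byte("constructed layer contents") // constructed sample input
	d, _ := fromBytes("sha256", blob)
	ok, err := verifyStream(strings.NewReader(string(blob)), d)
	fmt.Println(d, ok, err)
	ok, err = verifyStream(strings.NewReader("tampered contents"), d)
	fmt.Println(ok, err)
	_, err = verifyStream(strings.NewReader(""), "md5:d41d8cd98f00b204e9800998ecf8427e")
	fmt.Println(err)
}
```

Treat a false result or any error from verifyStream as a failed fetch and discard the bytes already read.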
nxadm/tail provides a Go library that emulates the features of the BSD `tail` program. The library comes with full support for truncation/move detection as it is designed to work with log rotation tools. The library works on all operating systems supported by Go, including POSIX systems like Linux and *BSD, and MS Windows. Go 1.9 is the oldest compiler release supported.
Package log15 provides an opinionated, simple toolkit for best-practice logging that is both human and machine readable. It is modeled after the standard library's io and net/http packages. This package enforces that you only log key/value pairs. Keys must be strings. Values may be any type that you like. The default output format is logfmt, but you may also choose to use JSON instead if that suits you. Here's how you log: This will output a line that looks like: To get started, you'll want to import the library: Now you're ready to start logging: Because recording a human-meaningful message is common and good practice, the first argument to every logging method is the value to the *implicit* key 'msg'. Additionally, the level you choose for a message will be automatically added with the key 'lvl', and so will the current timestamp with key 't'. You may supply any additional context as a set of key/value pairs to the logging function. log15 allows you to favor terseness, ordering, and speed over safety. This is a reasonable tradeoff for logging functions. You don't need to explicitly state keys/values, log15 understands that they alternate in the variadic argument list: If you really do favor your type-safety, you may choose to pass a log.Ctx instead: Frequently, you want to add context to a logger so that you can track actions associated with it. An http request is a good example. You can easily create new loggers that have context that is automatically included with each log line: This will output a log line that includes the path context that is attached to the logger: The Handler interface defines where log lines are printed to and how they are formatted. Handler is a single interface that is inspired by net/http's handler interface: Handlers can filter records, format them, or dispatch to multiple other Handlers. This package implements a number of Handlers for common logging patterns that are easily composed to create flexible, custom logging structures. 
Here's an example handler that prints logfmt output to Stdout: Here's an example handler that defers to two other handlers. One handler only prints records from the rpc package in logfmt to standard out. The other prints records at Error level or above in JSON formatted output to the file /var/log/service.json This package implements three Handlers that add debugging information to the context, CallerFileHandler, CallerFuncHandler and CallerStackHandler. Here's an example that adds the source file and line number of each logging call to the context. This will output a line that looks like: Here's an example that logs the call stack rather than just the call site. This will output a line that looks like: The "%+v" format instructs the handler to include the path of the source file relative to the compile time GOPATH. The github.com/go-stack/stack package documents the full list of formatting verbs and modifiers available. The Handler interface is so simple that it's also trivial to write your own. Let's create an example handler which tries to write to one handler, but if that fails it falls back to writing to another handler and includes the error that it encountered when trying to write to the primary. This might be useful when trying to log over a network socket, but if that fails you want to log those records to a file on disk. This pattern is so useful that a generic version that handles an arbitrary number of Handlers is included as part of this library called FailoverHandler. Sometimes, you want to log values that are extremely expensive to compute, but you don't want to pay the price of computing them if you haven't turned up your logging level to a high level of detail. This package provides a simple type to annotate a logging operation that you want to be evaluated lazily, just when it is about to be logged, so that it would not be evaluated if an upstream Handler filters it out. Just wrap any function which takes no arguments with the log.Lazy type. 
For example: If this message is not logged for any reason (like logging at the Error level), then factorRSAKey is never evaluated. The same log.Lazy mechanism can be used to attach context to a logger which you want to be evaluated when the message is logged, but not when the logger is created. For example, let's imagine a game where you have Player objects: You always want to log a player's name and whether they're alive or dead, so when you create the player object, you might do: Only now, even after a player has died, the logger will still report they are alive because the logging context is evaluated when the logger was created. By using the Lazy wrapper, we can defer the evaluation of whether the player is alive or not to each log message, so that the log records will reflect the player's current state no matter when the log message is written: If log15 detects that stdout is a terminal, it will configure the default handler for it (which is log.StdoutHandler) to use TerminalFormat. This format logs records nicely for your terminal, including color-coded output based on log level. Because log15 allows you to step around the type system, there are a few ways you can specify invalid arguments to the logging functions. You could, for example, wrap something that is not a zero-argument function with log.Lazy or pass a context key that is not a string. Since logging libraries are typically the mechanism by which errors are reported, it would be onerous for the logging functions to return errors. Instead, log15 handles errors by making these guarantees to you: - Any log record containing an error will still be printed with the error explained to you as part of the log record. - Any log record containing an error will include the context key LOG15_ERROR, enabling you to easily (and if you like, automatically) detect if any of your logging calls are passing bad values. 
Understanding this, you might wonder why the Handler interface can return an error value in its Log method. Handlers are encouraged to return errors only if they fail to write their log records out to an external source, such as when the syslog daemon is not responding. This allows the construction of useful handlers which cope with those failures like the FailoverHandler. log15 is intended to be useful for library authors as a way to provide configurable logging to users of their library. Best practice for use in a library is to always disable all output for your logger by default and to provide a public Logger instance that consumers of your library can configure. Like so: Users of your library may then enable it if they like: The ability to attach context to a logger is a powerful one. Where should you do it and why? I favor embedding a Logger directly into any persistent object in my application and adding unique, tracing context keys to it. For instance, imagine I am writing a web browser: When a new tab is created, I assign a logger to it with the url of the tab as context so it can easily be traced through the logs. Now, whenever we perform any operation with the tab, we'll log with its embedded logger and it will include the tab title automatically: There's only one problem. What if the tab url changes? We could use log.Lazy to make sure the current url is always written, but that would mean that we couldn't trace a tab's full lifetime through our logs after the user navigates to a new URL. Instead, think about what values to attach to your loggers the same way you think about what to use as a key in a SQL database schema. If it's possible to use a natural key that is unique for the lifetime of the object, do so. But otherwise, log15's ext package has a handy RandId function to let you generate what you might call "surrogate keys". They're just random hex identifiers to use for tracing. 
Back to our Tab example, we would prefer to set up our Logger like so: Now we'll have a unique traceable identifier even across loading new urls, but we'll still be able to see the tab's current url in the log messages. For all Handler functions which can return an error, there is a version of that function which will return no error but panics on failure. They are all available on the Must object. For example:

All of the following excellent projects inspired the design of this library:

- code.google.com/p/log4go
- github.com/op/go-logging
- github.com/technoweenie/grohl
- github.com/Sirupsen/logrus
- github.com/kr/logfmt
- github.com/spacemonkeygo/spacelog
- golang's stdlib, notably io and net/http

https://xkcd.com/927/
Package pointer implements Andersen's analysis, an inclusion-based pointer analysis algorithm first described in (Andersen, 1994). A pointer analysis relates every pointer expression in a whole program to the set of memory locations to which it might point. This information can be used to construct a call graph of the program that precisely represents the destinations of dynamic function and method calls. It can also be used to determine, for example, which pairs of channel operations operate on the same channel. The package allows the client to request a set of expressions of interest for which the points-to information will be returned once the analysis is complete. In addition, the client may request that a callgraph is constructed. The example program in example_test.go demonstrates both of these features. Clients should not request more information than they need since it may increase the cost of the analysis significantly. Our algorithm is INCLUSION-BASED: the points-to sets for x and y will be related by pts(y) ⊇ pts(x) if the program contains the statement y = x. It is FLOW-INSENSITIVE: it ignores all control flow constructs and the order of statements in a program. It is therefore a "MAY ALIAS" analysis: its facts are of the form "P may/may not point to L", not "P must point to L". It is FIELD-SENSITIVE: it builds separate points-to sets for distinct fields, such as x and y in struct { x, y *int }. It is mostly CONTEXT-INSENSITIVE: most functions are analyzed once, so values can flow in at one call to the function and return out at another. Only some smaller functions are analyzed with consideration of their calling context. It has a CONTEXT-SENSITIVE HEAP: objects are named by both allocation site and context, so the objects returned by two distinct calls to f: are distinguished up to the limits of the calling context. It is a WHOLE PROGRAM analysis: it requires SSA-form IR for the complete Go program and summaries for native code. 
See the (Hind, PASTE'01) survey paper for an explanation of these terms. The analysis is fully sound when invoked on pure Go programs that do not use reflection or unsafe.Pointer conversions. In other words, if there is any possible execution of the program in which pointer P may point to object O, the analysis will report that fact. By default, the "reflect" library is ignored by the analysis, as if all its functions were no-ops, but if the client enables the Reflection flag, the analysis will make a reasonable attempt to model the effects of calls into this library. However, this comes at a significant performance cost, and not all features of that library are yet implemented. In addition, some simplifying approximations must be made to ensure that the analysis terminates; for example, reflection can be used to construct an infinite set of types and values of those types, but the analysis arbitrarily bounds the depth of such types. Most but not all reflection operations are supported. In particular, addressable reflect.Values are not yet implemented, so operations such as (reflect.Value).Set have no analytic effect. The pointer analysis makes no attempt to understand aliasing between the operand x and result y of an unsafe.Pointer conversion: It is as if the conversion allocated an entirely new object: The analysis cannot model the aliasing effects of functions written in languages other than Go, such as runtime intrinsics in C or assembly, or code accessed via cgo. The result is as if such functions are no-ops. However, various important intrinsics are understood by the analysis, along with built-ins such as append. The analysis currently provides no way for users to specify the aliasing effects of native code. ------------------------------------------------------------------------ The remaining documentation is intended for package maintainers and pointer analysis specialists. 
Maintainers should have a solid understanding of the referenced papers (especially those by H&L and PKH) before making significant changes. The implementation is similar to that described in (Pearce et al, PASTE'04). Unlike many algorithms which interleave constraint generation and solving, constructing the callgraph as they go, this implementation for the most part observes a phase ordering (generation before solving), with only simple (copy) constraints being generated during solving. (The exception is reflection, which creates various constraints during solving as new types flow to reflect.Value operations.) This improves the traction of presolver optimisations, but imposes certain restrictions, e.g. potential context sensitivity is limited since all variants must be created a priori. A type is said to be "pointer-like" if it is a reference to an object. Pointer-like types include pointers and also interfaces, maps, channels, functions and slices. We occasionally use C's x->f notation to distinguish the case where x is a struct pointer from x.f where x is a struct value. Pointer analysis literature (and our comments) often uses the notation dst=*src+offset to mean something different than what it means in Go. It means: for each node index p in pts(src), the node index p+offset is in pts(dst). Similarly *dst+offset=src is used for store constraints and dst=src+offset for offset-address constraints. Nodes are the key datastructure of the analysis, and have a dual role: they represent both constraint variables (equivalence classes of pointers) and members of points-to sets (things that can be pointed at, i.e. "labels"). Nodes are naturally numbered. The numbering enables compact representations of sets of nodes such as bitvectors (or BDDs); and the ordering enables a very cheap way to group related nodes together. For example, passing n parameters consists of generating n parallel constraints from caller+i to callee+i for 0<=i<n. 
The zero nodeid means "not a pointer". For simplicity, we generate flow constraints even for non-pointer types such as int. The pointer equivalence (PE) presolver optimization detects which variables cannot point to anything; this includes not only all variables of non-pointer types (such as int) but also variables of pointer-like types if they are always nil, or are parameters to a function that is never called. Each node represents a scalar part of a value or object. Aggregate types (structs, tuples, arrays) are recursively flattened out into a sequential list of scalar component types, and all the elements of an array are represented by a single node. (The flattening of a basic type is a list containing a single node.) Nodes are connected into a graph with various kinds of labelled edges: simple edges (or copy constraints) represent value flow. Complex edges (load, store, etc) trigger the creation of new simple edges during the solving phase. Conceptually, an "object" is a contiguous sequence of nodes denoting an addressable location: something that a pointer can point to. The first node of an object has a non-nil obj field containing information about the allocation: its size, context, and ssa.Value. Objects include: Many objects have no Go types. For example, the func, map and chan type kinds in Go are all varieties of pointers, but their respective objects are actual functions (executable code), maps (hash tables), and channels (synchronized queues). Given the way we model interfaces, they too are pointers to "tagged" objects with no Go type. And an *ssa.Global denotes the address of a global variable, but the object for a Global is the actual data. So, the type of an ssa.Value that creates an object is "off by one indirection": a pointer to the object. The individual nodes of an object are sometimes referred to as "labels". For uniformity, all objects have a non-zero number of fields, even those of the empty type struct{}. 
(All arrays are treated as if of length 1, so there are no empty arrays. The empty tuple is never address-taken, so is never an object.) A tagged object has the following layout: The T node's typ field is the dynamic type of the "payload": the value v which follows, flattened out. The T node's obj has the otTagged flag. Tagged objects are needed when generalizing across types: interfaces, reflect.Values, reflect.Types. Each of these three types is modelled as a pointer that exclusively points to tagged objects. Tagged objects may be indirect (obj.flags ⊇ {otIndirect}) meaning that the value v is not of type T but *T; this is used only for reflect.Values that represent lvalues. (These are not implemented yet.) Variables of the following "scalar" types may be represented by a single node: basic types, pointers, channels, maps, slices, 'func' pointers, interfaces. Pointers: Nothing to say here, oddly. Basic types (bool, string, numbers, unsafe.Pointer): Currently all fields in the flattening of a type, including non-pointer basic types such as int, are represented in objects and values. Though non-pointer nodes within values are uninteresting, non-pointer nodes in objects may be useful (if address-taken) because they permit the analysis to deduce, in this example, that p points to s.x. If we ignored such object fields, we could only say that p points somewhere within s. All other basic types are ignored. Expressions of these types have zero nodeid, and fields of these types within aggregate other types are omitted. unsafe.Pointers are not modelled as pointers, so a conversion of an unsafe.Pointer to *T is (unsoundly) treated equivalent to new(T). Channels: An expression of type 'chan T' is a kind of pointer that points exclusively to channel objects, i.e. objects created by MakeChan (or reflection). 'chan T' is treated like *T. *ssa.MakeChan is treated as equivalent to new(T). 
*ssa.Send and receive (*ssa.UnOp(ARROW)) and are equivalent to store Maps: An expression of type 'map[K]V' is a kind of pointer that points exclusively to map objects, i.e. objects created by MakeMap (or reflection). map[K]V is treated like *M where M = struct{k K; v V}. *ssa.MakeMap is equivalent to new(M). *ssa.MapUpdate is equivalent to *y=x where *y and x have type M. *ssa.Lookup is equivalent to y=x.v where x has type *M. Slices: A slice []T, which dynamically resembles a struct{array *T, len, cap int}, is treated as if it were just a *T pointer; the len and cap fields are ignored. *ssa.MakeSlice is treated like new([1]T): an allocation of a *ssa.Index on a slice is equivalent to a load. *ssa.IndexAddr on a slice returns the address of the sole element of the slice, i.e. the same address. *ssa.Slice is treated as a simple copy. Functions: An expression of type 'func...' is a kind of pointer that points exclusively to function objects. A function object has the following layout: There may be multiple function objects for the same *ssa.Function due to context-sensitive treatment of some functions. The first node is the function's identity node. Associated with every callsite is a special "targets" variable, whose pts() contains the identity node of each function to which the call may dispatch. Identity words are not otherwise used during the analysis, but we construct the call graph from the pts() solution for such nodes. The following block of contiguous nodes represents the flattened-out types of the parameters ("P-block") and results ("R-block") of the function object. The treatment of free variables of closures (*ssa.FreeVar) is like that of global variables; it is not context-sensitive. *ssa.MakeClosure instructions create copy edges to Captures. A Go value of type 'func' (i.e. a pointer to one or more functions) is a pointer whose pts() contains function objects. The valueNode() for an *ssa.Function returns a singleton for that function. 
Interfaces: An expression of type 'interface{...}' is a kind of pointer that points exclusively to tagged objects. All tagged objects pointed to by an interface are direct (the otIndirect flag is clear) and concrete (the tag type T is not itself an interface type). The associated ssa.Value for an interface's tagged objects may be an *ssa.MakeInterface instruction, or nil if the tagged object was created by an intrinsic (e.g. reflection). Constructing an interface value causes generation of constraints for all of the concrete type's methods; we can't tell a priori which ones may be called. TypeAssert y = x.(T) is implemented by a dynamic constraint triggered by each tagged object O added to pts(x): a typeFilter constraint if T is an interface type, or an untag constraint if T is a concrete type. A typeFilter tests whether O.typ implements T; if so, O is added to pts(y). An untagFilter tests whether O.typ is assignable to T, and if so, a copy edge O.v -> y is added. ChangeInterface is a simple copy because the representation of tagged objects is independent of the interface type (in contrast to the "method tables" approach used by the gc runtime). y := Invoke x.m(...) is implemented by allocating contiguous P/R blocks for the callsite and adding a dynamic rule triggered by each tagged object added to pts(x). The rule adds param/results copy edges to/from each discovered concrete method. (Q. Why do we model an interface as a pointer to a pair of type and value, rather than as a pair of a pointer to type and a pointer to value? A. Control-flow joins would merge interfaces ({T1}, {V1}) and ({T2}, {V2}) to make ({T1,T2}, {V1,V2}), leading to the infeasible and type-unsafe combination (T1,V2). Treating the value and its concrete type as inseparable makes the analysis type-safe.) Type parameters: Type parameters are not directly supported by the analysis. Calls to generic functions will be left as if they had empty bodies. 
Users of the package are expected to use the ssa.InstantiateGenerics builder mode when building code that uses or depends on code containing generics. reflect.Value: A reflect.Value is modelled very similarly to an interface{}, i.e. as a pointer exclusively to tagged objects, but with two generalizations. 1. A reflect.Value that represents an lvalue points to an indirect (obj.flags ⊇ {otIndirect}) tagged object, which has a similar layout to a tagged object except that the value is a pointer to the dynamic type. Indirect tagged objects preserve the correct aliasing so that mutations made by (reflect.Value).Set can be observed. Indirect objects only arise when an lvalue is derived from an rvalue by indirection, e.g. the following code: Whether indirect or not, the concrete type of the tagged object corresponds to the user-visible dynamic type, and the existence of a pointer is an implementation detail. (NB: indirect tagged objects are not yet implemented) 2. The dynamic type tag of a tagged object pointed to by a reflect.Value may be an interface type; it need not be concrete. This arises in code such as this: pts(eface) is a singleton containing an interface{}-tagged object. That tagged object's payload is an interface{} value, i.e. the pts of the payload contains only concrete-tagged objects, although in this example it's the zero interface{} value, so its pts is empty. reflect.Type: Just as in the real "reflect" library, we represent a reflect.Type as an interface whose sole implementation is the concrete type, *reflect.rtype. (This choice is forced on us by go/types: clients cannot fabricate types with arbitrary method sets.) rtype instances are canonical: there is at most one per dynamic type. (rtypes are in fact large structs but since identity is all that matters, we represent them by a single node.) The payload of each *rtype-tagged object is an *rtype pointer that points to exactly one such canonical rtype object. 
We exploit this by setting the node.typ of the payload to the dynamic type, not '*rtype'. This saves us an indirection in each resolution rule. As an optimisation, *rtype-tagged objects are canonicalized too. Aggregate types: Aggregate types are treated as if all directly contained aggregates are recursively flattened out. Structs: *ssa.Field y = x.f creates a simple edge to y from x's node at f's offset. *ssa.FieldAddr y = &x->f requires a dynamic closure rule to create The nodes of a struct consist of a special 'identity' node (whose type is that of the struct itself), followed by the nodes for all the struct's fields, recursively flattened out. A pointer to the struct is a pointer to its identity node. That node allows us to distinguish a pointer to a struct from a pointer to its first field. Field offsets are logical field offsets (plus one for the identity node), so the sizes of the fields can be ignored by the analysis. (The identity node is non-traditional but enables the distinction described above, which is valuable for code comprehension tools. Typical pointer analyses for C, whose purpose is compiler optimization, must soundly model unsafe.Pointer (void*) conversions, and this requires fidelity to the actual memory layout using physical field offsets.) Arrays: We model an array by an identity node (whose type is that of the array itself) followed by a node representing all the elements of the array; the analysis does not distinguish elements with different indices. Effectively, an array is treated like struct{elem T}, a load y=x[i] like y=x.elem, and a store x[i]=y like x.elem=y; the index i is ignored. A pointer to an array is a pointer to its identity node. (A slice is also a pointer to an array's identity node.) 
The identity node allows us to distinguish a pointer to an array from a pointer to one of its elements, but it is rather costly because it introduces more offset constraints into the system. Furthermore, sound treatment of unsafe.Pointer would require us to dispense with this node. Arrays may be allocated by Alloc, by make([]T), by calls to append, and via reflection. Tuples (T, ...): Tuples are treated like structs with naturally numbered fields. *ssa.Extract is analogous to *ssa.Field. However, tuples have no identity field since by construction, they cannot be address-taken. There are three kinds of function call: Cases 1 and 2 apply equally to methods and standalone functions. Static calls: A static call consists of three steps: A static function call is little more than two struct value copies between the P/R blocks of caller and callee: Context sensitivity: Static calls (alone) may be treated context sensitively, i.e. each callsite may cause a distinct re-analysis of the callee, improving precision. Our current context-sensitivity policy treats all intrinsics and getter/setter methods in this manner since such functions are small and seem like an obvious source of spurious confluences, though this has not yet been evaluated. Dynamic function calls: Dynamic calls work in a similar manner except that the creation of copy edges occurs dynamically, in a similar fashion to a pair of struct copies in which the callee is indirect: (Recall that the function object's P- and R-blocks are contiguous.) Interface method invocation: For invoke-mode calls, we create a params/results block for the callsite and attach a dynamic closure rule to the interface. For each new tagged object that flows to the interface, we look up the concrete method, find its function object, and connect its P/R blocks to the callsite's P/R blocks, adding copy edges to the graph during solving. 
Recording call targets: The analysis notifies its clients of each callsite it encounters, passing a CallSite interface. Among other things, the CallSite contains a synthetic constraint variable ("targets") whose points-to solution includes the set of all function objects to which the call may dispatch. It is via this mechanism that the callgraph is made available. Clients may also elect to be notified of callgraph edges directly; internally this just iterates all "targets" variables' pts(·)s. We implement Hash-Value Numbering (HVN), a pre-solver constraint optimization described in Hardekopf & Lin, SAS'07. This is documented in more detail in hvn.go. We intend to add its cousins HR and HU in future. The solver is currently a naive Andersen-style implementation; it does not perform online cycle detection, though we plan to add solver optimisations such as Hybrid- and Lazy- Cycle Detection from (Hardekopf & Lin, PLDI'07). It uses difference propagation (Pearce et al, SQC'04) to avoid redundant re-triggering of closure rules for values already seen. Points-to sets are represented using sparse bit vectors (similar to those used in LLVM and gcc), which are more space- and time-efficient than sets based on Go's built-in map type or dense bit vectors. Nodes are permuted prior to solving so that object nodes (which may appear in points-to sets) are lower numbered than non-object (var) nodes. This improves the density of the set over which the PTSs range, and thus the efficiency of the representation. Partly thanks to avoiding map iteration, the execution of the solver is 100% deterministic, a great help during debugging. Andersen, L. O. 1994. Program analysis and specialization for the C programming language. Ph.D. dissertation. DIKU, University of Copenhagen. David J. Pearce, Paul H. J. Kelly, and Chris Hankin. 2004. Efficient field-sensitive pointer analysis for C. 
In Proceedings of the 5th ACM SIGPLAN-SIGSOFT workshop on Program analysis for software tools and engineering (PASTE '04). ACM, New York, NY, USA, 37-42. http://doi.acm.org/10.1145/996821.996835 David J. Pearce, Paul H. J. Kelly, and Chris Hankin. 2004. Online Cycle Detection and Difference Propagation: Applications to Pointer Analysis. Software Quality Control 12, 4 (December 2004), 311-337. http://dx.doi.org/10.1023/B:SQJO.0000039791.93071.a2 David Grove and Craig Chambers. 2001. A framework for call graph construction algorithms. ACM Trans. Program. Lang. Syst. 23, 6 (November 2001), 685-746. http://doi.acm.org/10.1145/506315.506316 Ben Hardekopf and Calvin Lin. 2007. The ant and the grasshopper: fast and accurate pointer analysis for millions of lines of code. In Proceedings of the 2007 ACM SIGPLAN conference on Programming language design and implementation (PLDI '07). ACM, New York, NY, USA, 290-299. http://doi.acm.org/10.1145/1250734.1250767 Ben Hardekopf and Calvin Lin. 2007. Exploiting pointer and location equivalence to optimize pointer analysis. In Proceedings of the 14th international conference on Static Analysis (SAS'07), Hanne Riis Nielson and Gilberto Filé (Eds.). Springer-Verlag, Berlin, Heidelberg, 265-280. Atanas Rountev and Satish Chandra. 2000. Off-line variable substitution for scaling points-to analysis. In Proceedings of the ACM SIGPLAN 2000 conference on Programming language design and implementation (PLDI '00). ACM, New York, NY, USA, 47-56. DOI=10.1145/349299.349310 http://doi.acm.org/10.1145/349299.349310 This program demonstrates how to use the pointer analysis to obtain a conservative call-graph of a Go program. It also shows how to compute the points-to set of a variable, in this case, (C).f's ch parameter.
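The solver described above (Andersen-style inclusion constraints with difference propagation) can be sketched in a few dozen lines. This is an added example, not code from the pointer package: the constraint encoding, the node ids and the names `solve` and `chanExample` are assumptions, maps stand in for the package's sparse bit vectors, and the sample program is constructed.

```go
package main

import "fmt"

const (
	addrOf = iota // dst = &src : src ∈ pts(dst)
	copyTo        // dst = src  : pts(dst) ⊇ pts(src)
	load          // dst = *src : for o ∈ pts(src), pts(dst) ⊇ pts(o)
	store         // *dst = src : for o ∈ pts(dst), pts(o) ⊇ pts(src)
)

type constraint struct{ k, dst, src int }

type set map[int]bool

// solve runs a worklist fixpoint and returns pts() for every node.
func solve(numNodes int, cs []constraint) []set {
	// pts: solution; delta: part not yet propagated; edges[n]: supersets of pts(n).
	pts, delta, edges := make([]set, numNodes), make([]set, numNodes), make([]set, numNodes)
	for i := range pts {
		pts[i], delta[i], edges[i] = set{}, set{}, set{}
	}
	loads, stores := make([][]int, numNodes), make([][]int, numNodes)
	var work []int
	// add records o ∈ pts(n); only a genuinely new element re-queues n.
	add := func(n, o int) {
		if !pts[n][o] {
			pts[n][o], delta[n][o] = true, true
			work = append(work, n)
		}
	}
	// addEdge installs copy edge src -> dst. A new edge has carried nothing
	// yet, so it needs all of pts(src), not just the current delta.
	addEdge := func(src, dst int) {
		if !edges[src][dst] {
			edges[src][dst] = true
			for o := range pts[src] {
				add(dst, o)
			}
		}
	}
	for _, c := range cs {
		switch c.k {
		case addrOf:
			add(c.dst, c.src)
		case copyTo:
			edges[c.src][c.dst] = true
		case load: // complex constraints are indexed by the dereferenced node
			loads[c.src] = append(loads[c.src], c.dst)
		case store:
			stores[c.dst] = append(stores[c.dst], c.src)
		}
	}
	for len(work) > 0 {
		n := work[len(work)-1]
		work = work[:len(work)-1]
		d := delta[n]
		delta[n] = set{}
		for o := range d { // closure rules fire once per newly seen object
			for _, dst := range loads[n] {
				addEdge(o, dst)
			}
			for _, src := range stores[n] {
				addEdge(src, o)
			}
		}
		for dst := range edges[n] { // existing edges carry only the delta
			for o := range d {
				add(dst, o)
			}
		}
	}
	return pts
}

// Node ids for a constructed program. Channels follow the page's model (chan T
// like *T, MakeChan like new(T), send as a store); receive is modelled as a load.
const (
	ch, ch2, x, y, r    = 0, 1, 2, 3, 4 // variables
	chanObj, objX, objY = 5, 6, 7       // allocation sites
)

func chanExample() []set {
	return solve(8, []constraint{
		{addrOf, ch, chanObj}, // ch := make(chan *int)
		{addrOf, x, objX},     // x := new(int)
		{addrOf, y, objY},     // y := new(int)
		{store, ch, x},        // ch <- x
		{copyTo, ch2, ch},     // ch2 := ch
		{store, ch2, y},       // ch2 <- y
		{load, r, ch},         // r := <-ch
	})
}

func main() {
	fmt.Println("pts(r) =", chanExample()[r]) // map keys print in sorted order
}
```

Because ch and ch2 alias the same channel object, the value received into r may point to either allocation, which is the conservative answer a call-graph or taint client would rely on.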
Package ssmincidents provides the API client, operations, and parameter types for AWS Systems Manager Incident Manager. Systems Manager Incident Manager is an incident management console designed to help users mitigate and recover from incidents affecting their Amazon Web Services-hosted applications. An incident is any unplanned interruption or reduction in quality of services. Incident Manager increases incident resolution by notifying responders of impact, highlighting relevant troubleshooting data, and providing collaboration tools to get services back up and running. To achieve the primary goal of reducing the time-to-resolution of critical incidents, Incident Manager automates response plans and enables responder team escalation.
Package ssmcontacts provides the API client, operations, and parameter types for AWS Systems Manager Incident Manager Contacts. Systems Manager Incident Manager is an incident management console designed to help users mitigate and recover from incidents affecting their Amazon Web Services-hosted applications. An incident is any unplanned interruption or reduction in quality of services. Incident Manager increases incident resolution by notifying responders of impact, highlighting relevant troubleshooting data, and providing collaboration tools to get services back up and running. To achieve the primary goal of reducing the time-to-resolution of critical incidents, Incident Manager automates response plans and enables responder team escalation.
Package event provides low-cost tracing, metrics and structured logging. These are often grouped under the term "observability". This package is highly experimental and in a state of flux; it is public only to allow for collaboration on the design and implementation, and may be changed or removed at any time. It uses a common event system to provide a way for libraries to produce observability information in a way that does not tie the libraries to a specific API or applications to a specific export format. It is designed for minimal overhead when no exporter is used so that it is safe to leave calls in libraries.
Package spacelog is a collection of interface lego bricks designed to help you build a flexible logging system. spacelog is loosely inspired by the Python logging library. The basic interaction is between a Logger and a Handler. A Logger is what the programmer typically interacts with for creating log messages. A Logger will be at a given log level, and if log messages can clear that specific logger's log level filter, they will be passed off to the Handler. Loggers are instantiated from GetLogger and GetLoggerNamed. A Handler is a very generic interface for handling log events. You can provide your own Handler for doing structured JSON output or colorized output or countless other things. Provided are a simple TextHandler with a variety of log event templates and TextOutput sinks, such as io.Writer, Syslog, and so forth. Make sure to see the source of the setup subpackage for an example of easy and configurable logging setup at process start:
Package crypto provides a toolbox of advanced cryptographic primitives, for applications that need more than straightforward signing and encryption. The cornerstone of this toolbox is the 'abstract' sub-package, which defines abstract interfaces to cryptographic primitives designed to be independent of specific cryptographic algorithms, to facilitate upgrading applications to new cryptographic algorithms or switching to alternative algorithms for experimentation purposes. This toolkit's public-key crypto API includes an abstract.Group interface generically supporting a broad class of group-based public-key primitives including DSA-style integer residue groups and elliptic curve groups. Users of this API can thus write higher-level crypto algorithms such as zero-knowledge proofs without knowing or caring exactly what kind of group, let alone which precise security parameters or elliptic curves, are being used. The abstract group interface supports the standard algebraic operations on group elements and scalars that nontrivial public-key algorithms tend to rely on. The interface uses additive group terminology typical for elliptic curves, such that point addition is homomorphically equivalent to adding their (potentially secret) scalar multipliers. But the API and its operations apply equally well to DSA-style integer groups. The abstract.Suite interface builds further on the abstract.Group API to represent an abstraction of entire pluggable ciphersuites, which include a group (e.g., curve) suitable for advanced public-key crypto together with a suitably matched set of symmetric-key crypto algorithms. As a trivial example, generating a public/private keypair is as simple as: The first statement picks a private key (Scalar) from a specified source of cryptographic random or pseudo-random bits, while the second performs elliptic curve scalar multiplication of the curve's standard base point (indicated by the 'nil' argument to Mul) by the scalar private key 'a'. 
Similarly, computing a Diffie-Hellman shared secret using Alice's private key 'a' and Bob's public key 'B' can be done via: Note that we use 'Mul' rather than 'Exp' here because the library uses the additive-group terminology common for elliptic curve crypto, rather than the multiplicative-group terminology of traditional integer groups - but the two are semantically equivalent and the interface itself works for both elliptic curve and integer groups. See below for more complete examples. Various sub-packages provide several specific implementations of these abstract cryptographic interfaces. In particular, the 'nist' sub-package provides implementations of modular integer groups underlying conventional DSA-style algorithms, and of NIST-standardized elliptic curves built on the Go crypto library. The 'edwards' sub-package provides the abstract group interface using more recent Edwards curves, including the popular Ed25519 curve. The 'openssl' sub-package offers an alternative implementation of NIST-standardized elliptic curves and symmetric-key algorithms, built as wrappers around OpenSSL's crypto library. Other sub-packages build more interesting high-level cryptographic tools atop these abstract primitive interfaces, including: - poly: Polynomial commitment and verifiable Shamir secret splitting for implementing verifiable 't-of-n' threshold cryptographic schemes. This can be used to encrypt a message so that any 2 out of 3 receivers must work together to decrypt it, for example. - proof: An implementation of the general Camenisch/Stadler framework for discrete logarithm knowledge proofs. This system supports both interactive and non-interactive proofs of a wide variety of statements such as, "I know the secret x associated with public key X or I know the secret y associated with public key Y", without revealing anything about either secret or even which branch of the "or" clause is true. 
- anon: Anonymous and pseudonymous public-key encryption and signing, where the sender of a signed message or the receiver of an encrypted message is defined as an explicit anonymity set containing several public keys rather than just one. For example, a member of an organization's board of trustees might prove to be a member of the board without revealing which member she is. - shuffle: Verifiable cryptographic shuffles of ElGamal ciphertexts, which can be used to implement (for example) voting or auction schemes that keep the sources of individual votes or bids private without anyone having to trust the shuffler(s) to shuffle votes/bids honestly. For now this library should be considered experimental: it will definitely be changing in non-backward-compatible ways, and it will need independent security review before it should be considered ready for use in security-critical applications. However, we intend to bring the library closer to stability and real-world usability as quickly as development resources permit, and as interest and application demand dictate. As should be obvious, this library is intended for the use of developers who are at least moderately knowledgeable about crypto. If you want a crypto library that makes it easy to implement "basic crypto" functionality correctly - i.e., plain public-key encryption and signing - then NaCl/Sodium pursues this worthy goal (http://doc.libsodium.org). This toolkit's purpose is to make it possible - and preferably but not necessarily easy - to do slightly more interesting things that most current crypto libraries don't support effectively. The one existing crypto library that this toolkit is probably most comparable to is the Charm rapid prototyping library for Python (http://charm-crypto.com/). This library incorporates and/or builds on existing code from a variety of sources, as documented in the relevant sub-packages. 
This example illustrates how to use the crypto toolkit's abstract group API to perform basic Diffie-Hellman key exchange calculations, using the NIST-standard P256 elliptic curve in this case. Any other suitable elliptic curve or other cryptographic group may be used simply by changing the first line that picks the suite. This example illustrates how the crypto toolkit may be used to perform "pure" ElGamal encryption, in which the message to be encrypted is small enough to be embedded directly within a group element (e.g., in an elliptic curve point). For basic background on ElGamal encryption see for example http://en.wikipedia.org/wiki/ElGamal_encryption. Most public-key crypto libraries tend not to support embedding data in points, in part because for "vanilla" public-key encryption you don't need it: one would normally just generate an ephemeral Diffie-Hellman secret and use that to seed a symmetric-key crypto algorithm such as AES, which is much more efficient per bit and works for arbitrary-length messages. However, in many advanced public-key crypto algorithms it is often useful to be able to embed data directly into points and compute with them: as just one of many examples, the proactively verifiable anonymous messaging scheme prototyped in Verdict (see http://dedis.cs.yale.edu/dissent/papers/verdict-abs). For fancier versions of ElGamal encryption implemented in this toolkit see for example anon.Encrypt, which encrypts a message for one of several possible receivers forming an explicit anonymity set.
Package securejoin implements a set of helpers to make it easier to write Go code that is safe against symlink-related escape attacks. The primary idea is to let you resolve a path within a rootfs directory as if the rootfs was a chroot. securejoin has two APIs, a "legacy" API and a "modern" API. The legacy API is SecureJoin and SecureJoinVFS. These methods are **not** safe against race conditions where an attacker changes the filesystem after (or during) the SecureJoin operation. The new API is made up of OpenInRoot and MkdirAll (and derived functions). These are safe against racing attackers and have several other protections that are not provided by the legacy API. There are many more operations that most programs expect to be able to do safely, but we do not provide explicit support for them because we want to encourage users to switch to [libpathrs](https://github.com/openSUSE/libpathrs) which is a cross-language next-generation library that is entirely designed around operating on paths safely. securejoin has been used by several container runtimes (Docker, runc, Kubernetes, etc) for quite a few years as a de-facto standard for operating on container filesystem paths "safely". However, most users still use the legacy API which is unsafe against various attacks (there is a fairly long history of CVEs in dependent projects as a result). Users should switch to the modern API as soon as possible (or even better, switch to libpathrs). This project was initially intended to be included in the Go standard library, but [it was rejected](https://go.dev/issue/20126). There is now a [new Go proposal](https://go.dev/issue/67002) for a safe path resolution API that shares some of the goals of filepath-securejoin. However, that design is intended to work like `openat2(RESOLVE_BENEATH)` which does not fit the use case of container runtimes and most system tools.
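To make "resolve a path as if the rootfs was a chroot" concrete, the following added example (not securejoin's code) compares a purely lexical join with a simplified scoped resolver on a constructed directory tree. The names `naiveJoin`, `scopedJoin`, `opensOutside` and `demo` are hypothetical, Unix path separators are assumed, and because the resolver returns a path string it has the same race the page attributes to the legacy API.

```go
package main

import (
	"errors"
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

// naiveJoin cleans the path lexically: ".." is neutralised, symlinks are not.
func naiveJoin(root, unsafePath string) string {
	return filepath.Join(root, filepath.Clean("/"+unsafePath))
}

// scopedJoin resolves unsafePath as if root were "/": every symlink target is
// re-interpreted inside root rather than on the host filesystem.
func scopedJoin(root, unsafePath string) (string, error) {
	cur, remaining, links := "/", unsafePath, 0
	for remaining != "" {
		var part string
		part, remaining, _ = strings.Cut(remaining, "/")
		switch part {
		case "", ".":
			continue
		case "..":
			cur = filepath.Dir(cur) // Dir("/") is "/": cannot climb above root
			continue
		}
		next := filepath.Join(cur, part)
		fi, err := os.Lstat(filepath.Join(root, next))
		if err != nil || fi.Mode()&os.ModeSymlink == 0 {
			cur = next // ordinary entry, or a component that does not exist yet
			continue
		}
		if links++; links > 255 {
			return "", errors.New("too many levels of symbolic links")
		}
		target, err := os.Readlink(filepath.Join(root, next))
		if err != nil {
			return "", err
		}
		if filepath.IsAbs(target) {
			cur = "/" // an absolute target restarts at the scoped root
		}
		remaining = target + "/" + remaining
	}
	return filepath.Join(root, cur), nil
}

// opensOutside reports whether opening p (symlinks followed, as the kernel
// would) lands outside root.
func opensOutside(root, p string) bool {
	resolved, err := filepath.EvalSymlinks(p)
	if err != nil {
		return false // nothing exists at p
	}
	rel, err := filepath.Rel(root, resolved)
	return err != nil || rel == ".." || strings.HasPrefix(rel, "../")
}

// demo builds a constructed rootfs whose "etc" is a symlink to a directory
// outside it, then resolves "etc/passwd" with both joins.
func demo() (naiveEscapes, scopedEscapes bool, err error) {
	base, err := os.MkdirTemp("", "scoped-join-demo")
	if err != nil {
		return false, false, err
	}
	defer os.RemoveAll(base)
	if base, err = filepath.EvalSymlinks(base); err != nil {
		return false, false, err
	}
	root, host := filepath.Join(base, "rootfs"), filepath.Join(base, "host")
	for _, e := range []error{
		os.Mkdir(root, 0o755), os.Mkdir(host, 0o755),
		os.WriteFile(filepath.Join(host, "passwd"), []byte("constructed"), 0o600),
		os.Symlink(host, filepath.Join(root, "etc")),
	} {
		if e != nil {
			return false, false, e
		}
	}
	scoped, err := scopedJoin(root, "etc/passwd")
	if err != nil {
		return false, false, err
	}
	return opensOutside(root, naiveJoin(root, "etc/passwd")), opensOutside(root, scoped), nil
}

func main() {
	naive, scoped, err := demo()
	fmt.Println("naive escapes:", naive, "scoped escapes:", scoped, "err:", err)
}
```

The scoped result is only correct while the tree stays unchanged between resolution and use, which is why the page steers callers toward the handle-based OpenInRoot and MkdirAll.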
A process model for Go. Ifrit is a small set of interfaces for composing single-purpose units of work into larger programs. Users divide their program into single-purpose units of work, each of which implements the `Runner` interface. Each `Runner` can be invoked to create a `Process` which can be monitored and signaled to stop. The name Ifrit comes from a type of daemon in Arabic folklore. It's a play on the Unix term 'daemon' to indicate a process that is managed by the init system. Ifrit ships with a standard library which contains packages for common processes - http servers, integration test helpers - alongside packages which model process supervision and orchestration. These packages can be combined to form complex servers which start and shut down cleanly. The advantage of small, single-responsibility processes is that they are simple, and thus can be made reliable. Ifrit's interfaces are designed to be free of race conditions and edge cases, allowing larger orchestrated processes to also be made reliable. The overall effect is less code and more reliability as your system grows with grace.
Package mempool provides a policy-enforced pool of unmined Decred transactions. A key responsibility of the Decred network is mining transactions – regular transactions and stake transactions – into blocks. In order to facilitate this, the mining process relies on having a readily-available source of transactions to include in a block that is being solved. At a high level, this package satisfies that requirement by providing an in-memory pool of fully validated transactions that can also optionally be further filtered based upon a configurable policy. The Policy configuration options have flags that control whether or not "standard" transactions and old votes are accepted into the mempool. In essence, a "standard" transaction is one that satisfies a fairly strict set of requirements that are largely intended to help provide fair use of the system to all users. It is important to note that what is considered to be a "standard" transaction changes over time as policy and consensus rules evolve. For some insight, at the time of this writing, an example of _some_ of the criteria that are required for a transaction to be considered standard are that it is of the most-recently supported version, finalized, does not exceed a specific size, and only consists of specific script forms. Since this package does not deal with other Decred specifics such as network communication and transaction relay, it returns a list of transactions that were accepted which gives the caller a high level of flexibility in how they want to proceed. Typically, this will involve things such as relaying the transactions to other peers on the network and notifying the mining process that new transactions are available. 
This package has intentionally been designed so it can be used as a standalone package for any projects needing the ability to create an in-memory pool of Decred transactions that are not only valid by consensus rules, but also adhere to a configurable policy.

## Feature Overview

The following is a quick overview of the major features. It is not intended to be an exhaustive list.

- Maintain a pool of fully validated transactions
- Stake transaction support (ticket purchases, votes and revocations)
- Orphan transaction support (transactions that spend from unknown outputs)
- Configurable transaction acceptance policy
- Additional metadata tracking for each transaction
- Manual control of transaction removal

Errors returned by this package are either the raw errors provided by underlying calls or of type mempool.RuleError. Since there are two classes of rules (mempool acceptance rules and blockchain (consensus) acceptance rules), the mempool.RuleError type contains a single Err field which will, in turn, either be a mempool.TxRuleError or a blockchain.RuleError. The first indicates a violation of mempool acceptance rules while the latter indicates a violation of consensus acceptance rules. This allows the caller to easily differentiate between unexpected errors, such as database errors, versus errors due to rule violations through type assertions. In addition, callers can programmatically determine the specific rule violation by type asserting the Err field to one of the aforementioned types and examining their underlying ErrorCode field.
Package pbc provides structures for building pairing-based cryptosystems. It is a wrapper around the Pairing-Based Cryptography (PBC) Library authored by Ben Lynn (https://crypto.stanford.edu/pbc/). This wrapper provides access to all PBC functions. It supports generation of various types of elliptic curves and pairings, element initialization, I/O, and arithmetic. These features can be used to quickly build pairing-based or conventional cryptosystems. The PBC library is designed to be extremely fast. Internally, it uses GMP for arbitrary-precision arithmetic. It also includes a wide variety of optimizations that make pairing-based cryptography highly efficient. To improve performance, PBC does not perform type checking to ensure that operations actually make sense. The Go wrapper provides the ability to add compatibility checks to most operations, or to use unchecked elements to maximize performance. Since this library provides low-level access to pairing primitives, it is very easy to accidentally construct insecure systems. This library is intended to be used by cryptographers or to implement well-analyzed cryptosystems. Cryptographic pairings are defined over three mathematical groups: G1, G2, and GT, where each group is typically of the same order r. Additionally, a bilinear map e maps a pair of elements — one from G1 and another from G2 — to an element in GT. This map e has the following additional property: If G1 == G2, then a pairing is said to be symmetric. Otherwise, it is asymmetric. Pairings can be used to construct a variety of efficient cryptosystems. The PBC library currently supports 5 different types of pairings, each with configurable parameters. These types are designated alphabetically, roughly in chronological order of introduction. Type A, D, E, F, and G pairings are implemented in the library. Each type has different time and space requirements. 
For more information about the types, see the documentation for the corresponding generator calls, or the PBC manual page at https://crypto.stanford.edu/pbc/manual/ch05s01.html. This package must be compiled using cgo. It also requires the installation of GMP and PBC. During the build process, this package will attempt to include <gmp.h> and <pbc/pbc.h>, and then dynamically link to GMP and PBC. Most systems include a package for GMP. To install GMP in Debian / Ubuntu: For an RPM installation with YUM: For installation with Fink (http://www.finkproject.org/) on Mac OS X: For more information or to compile from source, visit https://gmplib.org/ To install the PBC library, download the appropriate files for your system from https://crypto.stanford.edu/pbc/download.html. PBC has three dependencies: the gcc compiler, flex (http://flex.sourceforge.net/), and bison (https://www.gnu.org/software/bison/). See the respective sites for installation instructions. Most distributions include packages for these libraries. For example, in Debian / Ubuntu: The PBC source can be compiled and installed using the usual GNU Build System: After installing, you may need to rebuild the search path for libraries: It is possible to install the package on Windows through the use of MinGW and MSYS. MSYS is required for installing PBC, while GMP can be installed through a package. Based on your MinGW installation, you may need to add "-I/usr/local/include" to CPPFLAGS and "-L/usr/local/lib" to LDFLAGS when building PBC. Likewise, you may need to add these options to CGO_CPPFLAGS and CGO_LDFLAGS when installing this package. This package is free software: you can redistribute it and/or modify it under the terms of the GNU Lesser General Public License as published by the Free Software Foundation, either version 3 of the License, or (at your option) any later version. For additional details, see the COPYING and COPYING.LESSER files. 
This example generates a pairing and some random group elements, then applies the pairing operation. This example computes and verifies a Boneh-Lynn-Shacham signature in a simulated conversation between Alice and Bob.
Package kyber provides a toolbox of advanced cryptographic primitives, for applications that need more than straightforward signing and encryption. This top level package defines the interfaces to cryptographic primitives designed to be independent of specific cryptographic algorithms, to facilitate upgrading applications to new cryptographic algorithms or switching to alternative algorithms for experimentation purposes. This toolkit's public-key crypto API includes a kyber.Group interface supporting a broad class of group-based public-key primitives including DSA-style integer residue groups and elliptic curve groups. Users of this API can write higher-level crypto algorithms such as zero-knowledge proofs without knowing or caring exactly what kind of group, let alone which precise security parameters or elliptic curves, are being used. The kyber.Group interface supports the standard algebraic operations on group elements and scalars that nontrivial public-key algorithms tend to rely on. The interface uses additive group terminology typical for elliptic curves, such that point addition is homomorphically equivalent to adding their (potentially secret) scalar multipliers. But the API and its operations apply equally well to DSA-style integer groups. As a trivial example, generating a public/private keypair is as simple as: The first statement picks a private key (Scalar) from the suite's source of cryptographic random or pseudo-random bits, while the second performs elliptic curve scalar multiplication of the curve's standard base point (indicated by the 'nil' argument to Mul) by the scalar private key 'a'. 
Similarly, computing a Diffie-Hellman shared secret using Alice's private key 'a' and Bob's public key 'B' can be done via: Note that we use 'Mul' rather than 'Exp' here because the library uses the additive-group terminology common for elliptic curve crypto, rather than the multiplicative-group terminology of traditional integer groups - but the two are semantically equivalent and the interface itself works for both elliptic curve and integer groups. Various sub-packages provide several specific implementations of these cryptographic interfaces. In particular, the 'group/mod' sub-package provides implementations of modular integer groups underlying conventional DSA-style algorithms. The `group/nist` package provides NIST-standardized elliptic curves built on the Go crypto library. The 'group/edwards25519' sub-package provides the kyber.Group interface using the popular Ed25519 curve. Other sub-packages build more interesting high-level cryptographic tools atop these primitive interfaces, including: - share: Polynomial commitment and verifiable Shamir secret splitting for implementing verifiable 't-of-n' threshold cryptographic schemes. This can be used to encrypt a message so that any 2 out of 3 receivers must work together to decrypt it, for example. - proof: An implementation of the general Camenisch/Stadler framework for discrete logarithm knowledge proofs. This system supports both interactive and non-interactive proofs of a wide variety of statements such as, "I know the secret x associated with public key X or I know the secret y associated with public key Y", without revealing anything about either secret or even which branch of the "or" clause is true. - sign: The sign directory contains different signature schemes. 
- sign/anon provides anonymous and pseudonymous public-key encryption and signing, where the sender of a signed message or the receiver of an encrypted message is defined as an explicit anonymity set containing several public keys rather than just one. For example, a member of an organization's board of trustees might prove to be a member of the board without revealing which member she is.
- sign/cosi provides a collective signature algorithm, where a bunch of signers create a unique, compact and efficiently verifiable signature using the Schnorr signature as a basis.
- sign/eddsa provides a kyber-native implementation of the EdDSA signature scheme.
- sign/schnorr provides a basic vanilla Schnorr signature scheme implementation.
- shuffle: Verifiable cryptographic shuffles of ElGamal ciphertexts, which can be used to implement (for example) voting or auction schemes that keep the sources of individual votes or bids private without anyone having to trust more than one of the shuffler(s) to shuffle votes/bids honestly.

As should be obvious, this library is intended to be used by developers who are at least moderately knowledgeable about cryptography. If you want a crypto library that makes it easy to implement "basic crypto" functionality correctly - i.e., plain public-key encryption and signing - then [NaCl secretbox](https://godoc.org/golang.org/x/crypto/nacl/secretbox) may be a better choice. This toolkit's purpose is to make it possible - and preferably easy - to do slightly more interesting things that most current crypto libraries don't support effectively. The one existing crypto library that this toolkit is probably most comparable to is the Charm rapid prototyping library for Python (https://charm-crypto.com/category/charm). This library incorporates and/or builds on existing code from a variety of sources, as documented in the relevant sub-packages. This library is offered as-is, and without a guarantee. 
It will need an independent security review before it should be considered ready for use in security-critical applications. If you integrate Kyber into your application it is YOUR RESPONSIBILITY to arrange for that audit. If you notice a possible security problem, please report it to dedis-security@epfl.ch.
Package workdocs provides the API client, operations, and parameter types for Amazon WorkDocs. The Amazon WorkDocs API is designed for the following use cases: File Migration: File migration applications are supported for users who want to migrate their files from an on-premises or off-premises file system or service. Users can insert files into a user directory structure, as well as allow for basic metadata changes, such as modifications to the permissions of files. Security: Security applications are supported for users who have additional security needs, such as antivirus or data loss prevention. The API actions, along with CloudTrail, allow these applications to detect when changes occur in Amazon WorkDocs. Then, the application can take the necessary actions and replace the target file. If the target file violates the policy, the application can also choose to email the user. eDiscovery/Analytics: General administrative applications are supported, such as eDiscovery and analytics. These applications can choose to mimic or record the actions in an Amazon WorkDocs site, along with CloudTrail, to replicate data for eDiscovery, backup, or analytical applications. All Amazon WorkDocs API actions are Amazon authenticated and certificate-signed. They not only require the use of the Amazon Web Services SDK, but also allow for the exclusive use of IAM users and roles to help facilitate access, trust, and permission policies. By creating a role and allowing an IAM user to access the Amazon WorkDocs site, the IAM user gains full administrative visibility into the entire Amazon WorkDocs site (or as set in the IAM policy). This includes, but is not limited to, the ability to modify file permissions and upload any file to any user. This allows developers to perform the three use cases above, as well as give users the ability to grant access on a selective basis using the IAM model. 
The pricing for Amazon WorkDocs APIs varies depending on the API call type for these actions:

- READ (Get*)
- WRITE (Activate*, Add*, Create*, Deactivate*, Initiate*, Update*)
- LIST (Describe*)
- DELETE*, CANCEL

For information about Amazon WorkDocs API pricing, see Amazon WorkDocs Pricing.
Package amqp is an AMQP 0.9.1 client with RabbitMQ extensions. Understand the AMQP 0.9.1 messaging model by reviewing these links first. Much of the terminology in this library directly relates to AMQP concepts. Most other broker clients publish to queues, but in AMQP, clients publish to Exchanges instead. AMQP is programmable, meaning that both the producers and consumers agree on the configuration of the broker, instead of requiring an operator or system configuration that declares the logical topology in the broker. The routing between producers and consumer queues is via Bindings. These bindings form the logical topology of the broker. In this library, a message sent from a publisher is called a "Publishing" and a message received by a consumer is called a "Delivery". The fields of Publishings and Deliveries are close but not exact mappings to the underlying wire format to maintain stronger types. Many other libraries will combine message properties with message headers. In this library, the message's well-known properties are strongly typed fields on the Publishings and Deliveries, whereas the user-defined headers are in the Headers field. The method naming closely matches the protocol's method name with positional parameters mapping to named protocol message fields. The motivation here is to present a comprehensive view over all possible interactions with the server. Generally, methods that map to protocol methods of the "basic" class will be elided in this interface, and "select" methods of various channel mode selectors will be elided, for example Channel.Confirm and Channel.Tx. The library is intentionally designed to be synchronous, where responses for each protocol message are required to be received in an RPC manner. Some methods have a noWait parameter like Channel.QueueDeclare, and some methods are asynchronous like Channel.Publish. 
The error values should still be checked for these methods as they will indicate IO failures like when the underlying connection closes. Clients of this library may be interested in receiving some of the protocol messages other than Deliveries like basic.ack methods while a channel is in confirm mode. The Notify* methods with Connection and Channel receivers model the pattern of asynchronous events like closes due to exceptions, or messages that are sent out of band from an RPC call like basic.ack or basic.flow. Any asynchronous events, including Deliveries and Publishings, must always have a receiver until the corresponding chans are closed. Without asynchronous receivers, the synchronous methods will block. It's important as a client to an AMQP topology to ensure the state of the broker matches your expectations. For both publish and consume use cases, make sure you declare the queues, exchanges and bindings you expect to exist prior to calling Channel.Publish or Channel.Consume.

SSL/TLS - Secure connections

When Dial encounters an amqps:// scheme, it will use the zero value of a tls.Config. This will only perform server certificate and host verification. Use DialTLS when you wish to provide a client certificate (recommended), include a private certificate authority's certificate in the cert chain for server validity, or run insecure by not verifying the server certificate dial your own connection. DialTLS will use the provided tls.Config when it encounters an amqps:// scheme and will dial a plain connection when it encounters an amqp:// scheme. SSL/TLS in RabbitMQ is documented here: http://www.rabbitmq.com/ssl.html

This exports a Session object that wraps this library. It automatically reconnects when the connection fails, and blocks all pushes until the connection succeeds. It also confirms every outgoing message, so none are lost. It doesn't automatically ack each message, but leaves that to the parent process, since it is usage-dependent. 
Try running this in one terminal, and `rabbitmq-server` in another. Stop & restart RabbitMQ to see how the queue reacts.
Package datastore has an abstract representation of (AppEngine | Cloud) Datastore. Repository: https://github.com/mercari/datastore Please read https://cloud.google.com/datastore/docs/ or https://cloud.google.com/appengine/docs/standard/go/datastore/ . You should also check https://godoc.org/cloud.google.com/go/datastore or https://godoc.org/google.golang.org/appengine/datastore for the original datastore libraries. Japanese version: https://github.com/mercari/datastore/blob/master/doc_ja.go Please see https://godoc.org/go.mercari.io/datastore/clouddatastore or https://godoc.org/go.mercari.io/datastore/aedatastore . Create a Client using the FromContext function of each package. Notes on migration from each package are summarized later in this document. This package is based on the newly designed Cloud Datastore API. Because we are introducing flatten tags that only exist in Cloud Datastore, care is needed when migrating from AE Datastore. Details will be described later. If you are worried, you may find a clue to the solution at https://godoc.org/go.mercari.io/datastore/clouddatastore . This package has three main objectives. For speed, stability and operations, we are often forced to write functions that are not directly related to the value of the application. Such functions can be abstracted and used as middleware. Consider this case: Put an Entity to Datastore and also set it in Memcache or Redis. Then, when getting from Datastore, Get from Memcache first, and Get from Datastore if that fails. It is very troublesome to provide these operations for every Kind and every Entity operation. However, if middleware intervenes in all Datastore RPCs, this can be processed transparently without affecting the application code. As another case, an RPC sometimes fails. When it fails, the process often succeeds simply by retrying. To make retrying easy for all RPCs, it is better to implement it as middleware. 
Please refer to https://godoc.org/go.mercari.io/datastore/dsmiddleware if you want to know the middleware already provided. The same interface is provided for AppEngine Datastore and Cloud Datastore. These two are compatible: you can run exactly the same code after creating the Client. For example, you can use AE Datastore in a production environment and Cloud Datastore Emulator in unit tests. If you can avoid goapp, tests may be faster and debugging in an IDE may be easier. You can also read data from the local environment via Cloud Datastore for systems running on AE Datastore. Caution: although the storage behind the RPCs of AE Datastore and Cloud Datastore is shared, there is a difference in expressiveness at the API level. Do not carelessly read data written by AE Datastore through Cloud Datastore and update it. It may become impossible to read from the API of the AE Datastore side. We have not strictly tested this. The operation of Datastore has very little latency with respect to RPC's network. When acquiring 10 entities it means that GetMulti one time is better than getting 10 times using loops. However, we are not good at putting together multiple processes at once. Suppose, for example, you want to query on Post Kind, use the list of Comment IDs of the resulting Posts, and get a list of Comments. You can query Post Kind and get a list of Posts, then use the CommentIDs of each Post to get a list of Comments. One Query + 1 GetMulti is enough if you write very clever code. However, after acquiring the data, it is necessary to link the Comment list with the appropriate Post. On the other hand, you can easily write code that throws a query once and then calls GetMulti for the Comments as many times as there are Posts. In summary, it is convenient to have Put or Get queued, with a mechanism to execute them collectively later. Batch() is it! 
You can find the example at https://godoc.org/go.mercari.io/datastore/#pkg-examples . I love goon. So I made https://godoc.org/go.mercari.io/datastore/boom which can be used in conjunction with this package. Here's an overview of what you need to do to migrate your existing code. from AE Datastore from Cloud Datastore from goon to boom
Package neuron is the cloud-native, distributed ORM implementation. Its design allows the use of a separate repository for each model, with the possibility of having different relationship types between them. Neuron-core consists of the following packages:

- neuron - (Neuron Core) the root package that gives easy access to all subpackages.
- controller - is the neuron's core, that registers and stores the models and contains configurations required by other packages.
- config - contains the configurations for all packages.
- query - used to create queries, filters, sort, pagination on the basis of mapped models.
- mapping - contains the information about the mapped models, their fields and settings.
- class - contains the error classification system for the neuron packages.
- log - is the logging interface for neuron-based applications.
- i18n - provides internationalization support for neuron-based applications.
- repository - is a package used to store and register the repositories. It is also used to get the repository/factory per model.

A modular design allows you to use and compile only the required repositories.
Package kyber provides a toolbox of advanced cryptographic primitives, for applications that need more than straightforward signing and encryption. This top level package defines the interfaces to cryptographic primitives designed to be independent of specific cryptographic algorithms, to facilitate upgrading applications to new cryptographic algorithms or switching to alternative algorithms for experimentation purposes. This toolkit's public-key crypto API includes a kyber.Group interface supporting a broad class of group-based public-key primitives including DSA-style integer residue groups and elliptic curve groups. Users of this API can write higher-level crypto algorithms such as zero-knowledge proofs without knowing or caring exactly what kind of group, let alone which precise security parameters or elliptic curves, are being used. The kyber.Group interface supports the standard algebraic operations on group elements and scalars that nontrivial public-key algorithms tend to rely on. The interface uses additive group terminology typical for elliptic curves, such that point addition is homomorphically equivalent to adding their (potentially secret) scalar multipliers. But the API and its operations apply equally well to DSA-style integer groups. As a trivial example, generating a public/private keypair is as simple as: The first statement picks a private key (Scalar) from the suite's source of cryptographic random or pseudo-random bits, while the second performs elliptic curve scalar multiplication of the curve's standard base point (indicated by the 'nil' argument to Mul) by the scalar private key 'a'. 
Similarly, computing a Diffie-Hellman shared secret using Alice's private key 'a' and Bob's public key 'B' can be done via: Note that we use 'Mul' rather than 'Exp' here because the library uses the additive-group terminology common for elliptic curve crypto, rather than the multiplicative-group terminology of traditional integer groups - but the two are semantically equivalent and the interface itself works for both elliptic curve and integer groups. Various sub-packages provide several specific implementations of these cryptographic interfaces. In particular, the 'group/mod' sub-package provides implementations of modular integer groups underlying conventional DSA-style algorithms. The `group/nist` package provides NIST-standardized elliptic curves built on the Go crypto library. The 'group/edwards25519' sub-package provides the kyber.Group interface using the popular Ed25519 curve. Other sub-packages build more interesting high-level cryptographic tools atop these primitive interfaces, including: - share: Polynomial commitment and verifiable Shamir secret splitting for implementing verifiable 't-of-n' threshold cryptographic schemes. This can be used to encrypt a message so that any 2 out of 3 receivers must work together to decrypt it, for example. - proof: An implementation of the general Camenisch/Stadler framework for discrete logarithm knowledge proofs. This system supports both interactive and non-interactive proofs of a wide variety of statements such as, "I know the secret x associated with public key X or I know the secret y associated with public key Y", without revealing anything about either secret or even which branch of the "or" clause is true. - sign: The sign directory contains different signature schemes. 
- sign/anon provides anonymous and pseudonymous public-key encryption and signing, where the sender of a signed message or the receiver of an encrypted message is defined as an explicit anonymity set containing several public keys rather than just one. For example, a member of an organization's board of trustees might prove to be a member of the board without revealing which member she is.
- sign/cosi provides a collective signature algorithm, where a bunch of signers create a unique, compact and efficiently verifiable signature using the Schnorr signature as a basis.
- sign/eddsa provides a kyber-native implementation of the EdDSA signature scheme.
- sign/schnorr provides a basic vanilla Schnorr signature scheme implementation.
- shuffle: Verifiable cryptographic shuffles of ElGamal ciphertexts, which can be used to implement (for example) voting or auction schemes that keep the sources of individual votes or bids private without anyone having to trust more than one of the shuffler(s) to shuffle votes/bids honestly.

For now this library should be considered experimental: it will definitely be changing in non-backward-compatible ways, and it will need independent security review before it should be considered ready for use in security-critical applications. However, we intend to bring the library closer to stability and real-world usability as quickly as development resources permit, and as interest and application demand dictate.

As should be obvious, this library is intended to be used by developers who are at least moderately knowledgeable about cryptography. If you want a crypto library that makes it easy to implement "basic crypto" functionality correctly - i.e., plain public-key encryption and signing - then [NaCl secretbox](https://godoc.org/golang.org/x/crypto/nacl/secretbox) may be a better choice. 
This toolkit's purpose is to make it possible - and preferably easy - to do slightly more interesting things that most current crypto libraries don't support effectively. The one existing crypto library that this toolkit is probably most comparable to is the Charm rapid prototyping library for Python (https://charm-crypto.com/category/charm). This library incorporates and/or builds on existing code from a variety of sources, as documented in the relevant sub-packages.
Package kyber provides a toolbox of advanced cryptographic primitives, for applications that need more than straightforward signing and encryption. This top-level package defines the interfaces to cryptographic primitives designed to be independent of specific cryptographic algorithms, to facilitate upgrading applications to new cryptographic algorithms or switching to alternative algorithms for experimentation purposes. This toolkit's public-key crypto API includes a kyber.Group interface supporting a broad class of group-based public-key primitives including DSA-style integer residue groups and elliptic curve groups. Users of this API can write higher-level crypto algorithms such as zero-knowledge proofs without knowing or caring exactly what kind of group, let alone which precise security parameters or elliptic curves, are being used. The kyber.Group interface supports the standard algebraic operations on group elements and scalars that nontrivial public-key algorithms tend to rely on. The interface uses additive group terminology typical for elliptic curves, such that point addition is homomorphically equivalent to adding their (potentially secret) scalar multipliers. But the API and its operations apply equally well to DSA-style integer groups. As a trivial example, generating a public/private keypair is as simple as: The first statement picks a private key (Scalar) from the suite's source of cryptographic random or pseudo-random bits, while the second performs elliptic curve scalar multiplication of the curve's standard base point (indicated by the 'nil' argument to Mul) by the scalar private key 'a'.
Similarly, computing a Diffie-Hellman shared secret using Alice's private key 'a' and Bob's public key 'B' can be done via: Note that we use 'Mul' rather than 'Exp' here because the library uses the additive-group terminology common for elliptic curve crypto, rather than the multiplicative-group terminology of traditional integer groups - but the two are semantically equivalent and the interface itself works for both elliptic curve and integer groups.
Various sub-packages provide several specific implementations of these cryptographic interfaces. In particular, the `group/mod` sub-package provides implementations of modular integer groups underlying conventional DSA-style algorithms. The `group/nist` package provides NIST-standardized elliptic curves built on the Go crypto library. The `group/edwards25519` sub-package provides the kyber.Group interface using the popular Ed25519 curve.
Other sub-packages build more interesting high-level cryptographic tools atop these primitive interfaces, including:
- share: Polynomial commitment and verifiable Shamir secret splitting for implementing verifiable 't-of-n' threshold cryptographic schemes. This can be used to encrypt a message so that any 2 out of 3 receivers must work together to decrypt it, for example.
- proof: An implementation of the general Camenisch/Stadler framework for discrete logarithm knowledge proofs. This system supports both interactive and non-interactive proofs of a wide variety of statements such as, "I know the secret x associated with public key X or I know the secret y associated with public key Y", without revealing anything about either secret or even which branch of the "or" clause is true.
- sign: The sign directory contains different signature schemes.
  - sign/anon provides anonymous and pseudonymous public-key encryption and signing, where the sender of a signed message or the receiver of an encrypted message is defined as an explicit anonymity set containing several public keys rather than just one. For example, a member of an organization's board of trustees might prove to be a member of the board without revealing which member she is.
  - sign/cosi provides a collective signature algorithm, where a group of signers create a unique, compact and efficiently verifiable signature using the Schnorr signature as a basis.
  - sign/eddsa provides a kyber-native implementation of the EdDSA signature scheme.
  - sign/schnorr provides a basic vanilla Schnorr signature scheme implementation.
- shuffle: Verifiable cryptographic shuffles of ElGamal ciphertexts, which can be used to implement (for example) voting or auction schemes that keep the sources of individual votes or bids private without anyone having to trust more than one of the shuffler(s) to shuffle votes/bids honestly.
For now this library should be considered experimental: it will definitely be changing in non-backward-compatible ways, and it will need independent security review before it should be considered ready for use in security-critical applications. However, we intend to bring the library closer to stability and real-world usability as quickly as development resources permit, and as interest and application demand dictate. As should be obvious, this library is intended to be used by developers who are at least moderately knowledgeable about cryptography. If you want a crypto library that makes it easy to implement "basic crypto" functionality correctly - i.e., plain public-key encryption and signing - then [NaCl secretbox](https://godoc.org/golang.org/x/crypto/nacl/secretbox) may be a better choice.
This toolkit's purpose is to make it possible - and preferably easy - to do slightly more interesting things that most current crypto libraries don't support effectively. The one existing crypto library that this toolkit is probably most comparable to is the Charm rapid prototyping library for Python (https://charm-crypto.com/category/charm). This library incorporates and/or builds on existing code from a variety of sources, as documented in the relevant sub-packages.
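The key generation and Diffie-Hellman steps described above, written against a DSA-style integer group in the additive notation the text explains. This is an added example, not kyber's code: `ModGroup`, `toyGroup`, `n` and `Shared` are hypothetical names, and the parameters (P=23, Q=11, G=4) are constructed toy values. It goes one step beyond the text by rejecting peer keys outside the prime-order subgroup before they are used.

```go
package main

import (
	"errors"
	"fmt"
	"math/big"
)

// ModGroup is a hypothetical additive-notation view of the order-Q subgroup
// of Z_P^*. The toy parameters below are constructed and give no security.
type ModGroup struct{ P, Q, G *big.Int }

func n(v int64) *big.Int { return big.NewInt(v) }
func toyGroup() ModGroup { return ModGroup{P: n(23), Q: n(11), G: n(4)} }

// Mul is "scalar multiplication": s*pt additively, pt^s mod P underneath.
// A nil point selects the standard base point G.
func (g ModGroup) Mul(s, pt *big.Int) *big.Int {
	if pt == nil {
		pt = g.G
	}
	return new(big.Int).Exp(pt, s, g.P)
}

// Add is "point addition", which is modular multiplication underneath.
func (g ModGroup) Add(x, y *big.Int) *big.Int {
	z := new(big.Int).Mul(x, y)
	return z.Mod(z, g.P)
}

// Shared computes the DH secret priv*peer, first rejecting the identity and
// any peer key that lies outside the order-Q subgroup.
func (g ModGroup) Shared(priv, peer *big.Int) (*big.Int, error) {
	inRange := peer.Cmp(n(1)) > 0 && peer.Cmp(g.P) < 0
	if !inRange || g.Mul(g.Q, peer).Cmp(n(1)) != 0 {
		return nil, errors.New("peer key not in prime-order subgroup")
	}
	return g.Mul(priv, peer), nil
}

func main() {
	g := toyGroup()
	a, b := n(3), n(7) // constructed private scalars; real ones come from a CSPRNG
	A, B := g.Mul(a, nil), g.Mul(b, nil) // public keys a*G and b*G
	sa, errA := g.Shared(a, B)
	sb, errB := g.Shared(b, A)
	fmt.Println(errA == nil && errB == nil && sa.Cmp(sb) == 0) // both sides agree
	fmt.Println(g.Mul(n(10), nil).Cmp(g.Add(A, B)) == 0)      // (a+b)*G == A+B
}
```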
Package hotkey provides the basic facility to register a system-level global hotkey shortcut so that an application can be notified if a user triggers the desired hotkey. A hotkey must be a combination of modifiers and a single key. Note platform specific details: On macOS, due to an OS restriction (other platforms do not have this restriction), hotkey events must be handled on the "main thread". Therefore, in order to use this package properly, one must start an OS main event loop on the main thread. For self-contained applications, using the mainthread package is possible. It is unnecessary for applications based on other GUI frameworks, such as fyne, ebiten, or Gio. See the "examples" for more examples. On Linux (X11), when AutoRepeat is enabled in the X server, the Keyup is triggered automatically and continuously as Keydown continues. On Linux (X11), some keys may be mapped to multiple Mod keys. To correctly register the key combination, one must use the correct underlying keycode combination. For example, a regular Ctrl+Alt+S might be registered as: Ctrl+Mod2+Mod4+S. If this package does not include a desired key, one can always provide the keycode to the API. For example, if a key code is 0x15, then the corresponding key is `hotkey.Key(0x15)`. The following is a minimal example:
Package elmobd provides communication with a car's OBD-II system using ELM327-based USB devices. Using this library and an ELM327-based USB device you can communicate with your car's on-board diagnostics system to read sensor data. Reading trouble codes and resetting them is not yet implemented. All assumptions this library makes are based on the official Elm Electronics datasheet of the ELM327 IC: https://www.elmelectronics.com/wp-content/uploads/2017/01/ELM327DS.pdf After that introduction - Welcome! I hope you'll find this library useful and its documentation easy to digest. If that's not the case, please create a GitHub issue: https://github.com/rzetterberg/elmobd/issues You'll note that this package has the majority of its types and functions exported. The reason for that is that there are many different commands you can send to an ELM327 device, and this library will never be able to define all commands, so most functionality is exported so that you can easily extend it. You'll also note that there are A LOT of types. The reason for this is that each command that can be sent to the ELM327 device is defined as its own type. The approach I wanted to take with this library was to get as much type safety as possible. With that said, as an end user of this library you'll only need to know about two kinds of types: the Device type and types that implement the OBDCommand interface. The Device type represents an active connection with an ELM327 device. You plug your device into your computer, get the path to the device and initialize a new device: This library is designed to be as high level as possible; you shouldn't need to handle baud rates, setting protocols or anything like that. After the Device has been initialized you can just start sending commands. The function you will be using the most is Device.RunOBDCommand, which accepts a command, sends the command to the device, waits for a response, parses the response and gives it back to you.
This means the call blocks execution until the command has finished. The default timeout is currently set to 5 seconds. RunOBDCommand accepts a single argument, a type that implements the OBDCommand interface. That interface has a couple of functions that need to exist in order to be able to generate the low-level request for the device and parse the low-level response. Basically you send in an empty OBDCommand and get back an OBDCommand with the processed value retrieved from the device. Suppose that after checking the device connection (in our example above) we would like to retrieve the vehicle's speed. There's a type called VehicleSpeed that implements the OBDCommand interface that we can use. We start by creating a new VehicleSpeed using its constructor NewVehicleSpeed that we then give to RunOBDCommand: At the moment there are around 15 sensor commands defined in this library. They are all defined in the file commands.go of the library (https://github.com/rzetterberg/elmobd/blob/master/commands.go). If you look at the end of that file you'll find an internal static variable with all the sensor commands called sensorCommands. That's the basics of this library - you create a device connection and then run commands. Documentation of more advanced use-cases is in the works, so don't worry! If there's something that you wish would be documented, or something that is not clear in the current documentation, please create a GitHub issue.