Package websocket implements the WebSocket protocol defined in RFC 6455.

The Conn type represents a WebSocket connection. A server application calls the Upgrader.Upgrade method from an HTTP request handler to get a *Conn. Call the connection's WriteMessage and ReadMessage methods to send and receive messages as a slice of bytes; the sketch following this overview shows how to echo messages using these methods. In that sketch, p is a []byte and messageType is an int with value websocket.BinaryMessage or websocket.TextMessage.

An application can also send and receive messages using the io.WriteCloser and io.Reader interfaces. To send a message, call the connection NextWriter method to get an io.WriteCloser, write the message to the writer and close the writer when done. To receive a message, call the connection NextReader method to get an io.Reader and read until io.EOF is returned. The sketch following this overview also includes a variant that echoes messages using the NextWriter and NextReader methods.

The WebSocket protocol distinguishes between text and binary data messages. Text messages are interpreted as UTF-8 encoded text. The interpretation of binary messages is left to the application. This package uses the TextMessage and BinaryMessage integer constants to identify the two data message types. The ReadMessage and NextReader methods return the type of the received message. The messageType argument to the WriteMessage and NextWriter methods specifies the type of a sent message. It is the application's responsibility to ensure that text messages are valid UTF-8 encoded text.

The WebSocket protocol defines three types of control messages: close, ping and pong. Call the connection WriteControl, WriteMessage or NextWriter methods to send a control message to the peer. Connections handle received close messages by calling the handler function set with the SetCloseHandler method and by returning a *CloseError from the NextReader, ReadMessage or the message Read method. The default close handler sends a close message to the peer. Connections handle received ping messages by calling the handler function set with the SetPingHandler method. The default ping handler sends a pong message to the peer. Connections handle received pong messages by calling the handler function set with the SetPongHandler method. The default pong handler does nothing. If an application sends ping messages, then the application should set a pong handler to receive the corresponding pong.

The control message handler functions are called from the NextReader, ReadMessage and message reader Read methods. The default close and ping handlers can block these methods for a short time when the handler writes to the connection. The application must read the connection to process close, ping and pong messages sent from the peer. If the application is not otherwise interested in messages from the peer, then the application should start a goroutine to read and discard messages from the peer; a simple example appears at the end of the sketch following this overview.

Connections support one concurrent reader and one concurrent writer. Applications are responsible for ensuring that no more than one goroutine calls the write methods (NextWriter, SetWriteDeadline, WriteMessage, WriteJSON, EnableWriteCompression, SetCompressionLevel) concurrently and that no more than one goroutine calls the read methods (NextReader, SetReadDeadline, ReadMessage, ReadJSON, SetPongHandler, SetPingHandler) concurrently. The Close and WriteControl methods can be called concurrently with all other methods.
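The original snippets are not reproduced above, so the following is a minimal, hedged sketch of an echo server using these methods; the route, listen address and logging choices are illustrative, not part of the package:

    package main

    import (
        "io"
        "log"
        "net/http"

        "github.com/gorilla/websocket"
    )

    var upgrader = websocket.Upgrader{} // use default options

    // echo copies each received message back to the peer using
    // ReadMessage and WriteMessage.
    func echo(w http.ResponseWriter, r *http.Request) {
        conn, err := upgrader.Upgrade(w, r, nil)
        if err != nil {
            log.Println("upgrade:", err)
            return
        }
        defer conn.Close()
        for {
            // messageType is websocket.TextMessage or websocket.BinaryMessage;
            // p is the message payload as a []byte.
            messageType, p, err := conn.ReadMessage()
            if err != nil {
                log.Println("read:", err)
                return
            }
            if err := conn.WriteMessage(messageType, p); err != nil {
                log.Println("write:", err)
                return
            }
        }
    }

    // echoStream is the same loop using the io.Reader / io.WriteCloser
    // interfaces returned by NextReader and NextWriter.
    func echoStream(conn *websocket.Conn) {
        for {
            messageType, r, err := conn.NextReader()
            if err != nil {
                return
            }
            w, err := conn.NextWriter(messageType)
            if err != nil {
                return
            }
            if _, err := io.Copy(w, r); err != nil {
                return
            }
            if err := w.Close(); err != nil {
                return
            }
        }
    }

    func main() {
        http.HandleFunc("/echo", echo)
        log.Fatal(http.ListenAndServe("localhost:8080", nil))
    }

And a read-and-discard goroutine, for applications that never read data messages but still need close, ping and pong processing to run:

    go func() {
        for {
            if _, _, err := conn.NextReader(); err != nil {
                conn.Close()
                break
            }
        }
    }()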
Web browsers allow JavaScript applications to open a WebSocket connection to any host. It's up to the server to enforce an origin policy using the Origin request header sent by the browser. The Upgrader calls the function specified in the CheckOrigin field to check the origin. If the CheckOrigin function returns false, then the Upgrade method fails the WebSocket handshake with HTTP status 403. If the CheckOrigin field is nil, then the Upgrader uses a safe default: fail the handshake if the Origin request header is present and the Origin host is not equal to the Host request header. The deprecated package-level Upgrade function does not perform origin checking. The application is responsible for checking the Origin header before calling the Upgrade function.

Connections buffer network input and output to reduce the number of system calls when reading or writing messages. Write buffers are also used for constructing WebSocket frames. See RFC 6455, Section 5 for a discussion of message framing. A WebSocket frame header is written to the network each time a write buffer is flushed to the network. Decreasing the size of the write buffer can increase the amount of framing overhead on the connection.

The buffer sizes in bytes are specified by the ReadBufferSize and WriteBufferSize fields in the Dialer and Upgrader. The Dialer uses a default size of 4096 when a buffer size field is set to zero. The Upgrader reuses buffers created by the HTTP server when a buffer size field is set to zero. The HTTP server buffers have a size of 4096 at the time of this writing. The buffer sizes do not limit the size of a message that can be read or written by a connection. Buffers are held for the lifetime of the connection by default. If the Dialer or Upgrader WriteBufferPool field is set, then a connection holds the write buffer only when writing a message.

Applications should tune the buffer sizes to balance memory use and performance. Increasing the buffer size uses more memory, but can reduce the number of system calls to read or write the network. In the case of writing, increasing the buffer size can reduce the number of frame headers written to the network. Some guidelines for setting buffer parameters are:

Limit the buffer sizes to the maximum expected message size. Buffers larger than the largest message do not provide any benefit.

Depending on the distribution of message sizes, setting the buffer size to a value less than the maximum expected message size can greatly reduce memory use with a small impact on performance. Here's an example: if 99% of the messages are smaller than 256 bytes and the maximum message size is 512 bytes, then a buffer size of 256 bytes will result in about 1.01 times as many system calls as a buffer size of 512 bytes. The memory savings is 50%.

A write buffer pool is useful when the application has a modest number of writes over a large number of connections. When buffers are pooled, a larger buffer size has a reduced impact on total memory use and has the benefit of reducing system calls and frame overhead. A hedged configuration sketch appears at the end of this section.

Per message compression extensions (RFC 7692) are experimentally supported by this package in a limited capacity. Setting the EnableCompression option to true in Dialer or Upgrader will attempt to negotiate per message deflate support. If compression was successfully negotiated with the connection's peer, any message received in compressed form will be automatically decompressed. All Read methods will return uncompressed bytes.
Per message compression of messages written to a connection can be enabled or disabled by calling the corresponding Conn method: Currently this package does not support compression with "context takeover". This means that messages must be compressed and decompressed in isolation, without retaining sliding window or dictionary state across messages. For more details refer to RFC 7692. Use of compression is experimental and may result in decreased performance.
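The following is a hedged sketch, not taken from the package documentation, of an Upgrader configured with explicit buffer sizes, a pooled write buffer and an origin check; the sizes and the allowed-origin logic are illustrative and should be adapted to your own policy (sync and net/http imports assumed):

    var upgrader = websocket.Upgrader{
        ReadBufferSize:  1024,
        WriteBufferSize: 1024,
        // Hold write buffers only while a message is being written.
        WriteBufferPool: &sync.Pool{},
        // Accept same-host browser clients only; adjust as required.
        CheckOrigin: func(r *http.Request) bool {
            origin := r.Header.Get("Origin")
            return origin == "" || origin == "https://"+r.Host
        },
    }

And a sketch of negotiating and tuning compression on a connection; the chosen level is illustrative and flate is the standard compress/flate package:

    // Offer permessage-deflate during the handshake.
    var upgrader = websocket.Upgrader{EnableCompression: true}

    // Per-connection control over compression of written messages.
    conn.EnableWriteCompression(true)
    if err := conn.SetCompressionLevel(flate.BestSpeed); err != nil {
        log.Println("compression level:", err)
    }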
Package validator implements value validations for structs and individual fields based on tags. It can also handle Cross-Field and Cross-Struct validation for nested structs and has the ability to dive into arrays and maps of any type. See more examples at https://github.com/go-playground/validator/tree/master/_examples

Validator is designed to be thread-safe and used as a singleton instance. It caches information about your struct and validations, in essence only parsing your validation tags once per struct type. Using multiple instances negates the benefit of caching. The not thread-safe functions are explicitly marked as such in the documentation.

Doing things this way is actually how the standard library does it; see the os.Open function. The authors return type "error" to avoid the issue discussed in the Go FAQ entry on nil errors, where err is always != nil.

Validator only returns InvalidValidationError for bad validation input, nil, or ValidationErrors as type error; so, in your code all you need to do is check if the error returned is not nil, and if it's not, check if the error is InvalidValidationError (if necessary, most of the time it isn't) and type cast it to type ValidationErrors like so: err.(validator.ValidationErrors).

Custom Validation functions can be added; a hedged sketch of registering one appears just below.

Cross-Field Validation can be done via the following tags: If, however, some custom cross-field validation is required, it can be done using a custom validation. Why not just have cross-fields validation tags (i.e. only eqcsfield and not eqfield)? The reason is efficiency. If you want to check a field within the same struct "eqfield" only has to find the field on the same struct (1 level). But, if we used "eqcsfield" it could be multiple levels down. Example:

Multiple validators on a field will process in the order defined. Example: Bad Validator definitions are not handled by the library. Example:

Baked In Cross-Field validation only compares fields on the same struct. If Cross-Field + Cross-Struct validation is needed you should implement your own custom validator.

Comma (",") is the default separator of validation tags. If you wish to have a comma included within the parameter (i.e. excludesall=,) you will need to use the UTF-8 hex representation 0x2C, which is replaced in the code as a comma, so the above will become excludesall=0x2C.

Pipe ("|") is the 'or' validation tags separator. If you wish to have a pipe included within the parameter i.e. excludesall=| you will need to use the UTF-8 hex representation 0x7C, which is replaced in the code as a pipe, so the above will become excludesall=0x7C

Here is a list of the current built in validators: Tells the validation to skip this struct field; this is particularly handy in ignoring embedded structs from being validated. (Usage: -) This is the 'or' operator allowing multiple validators to be used and accepted. (Usage: rgb|rgba) <-- this would allow either rgb or rgba colors to be accepted. This can also be combined with 'and' for example ( Usage: omitempty,rgb|rgba) When a field that is a nested struct is encountered, and contains this flag any validation on the nested struct will be run, but none of the nested struct fields will be validated. This is useful if inside of your program you know the struct will be valid, but need to verify it has been assigned. NOTE: only "required" and "omitempty" can be used on a struct itself. Same as structonly tag except that any struct level validations will not run.
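The inline examples originally referenced above are not reproduced here. The following is a hedged sketch of registering a custom validation function and handling the returned error; the struct, the tag name "is-awesome" and the values are made up for illustration:

    package main

    import (
        "errors"
        "fmt"
        "log"

        "github.com/go-playground/validator/v10"
    )

    type User struct {
        Email    string `validate:"required,email"`
        Nickname string `validate:"required,is-awesome"`
    }

    func main() {
        validate := validator.New()

        // Register a custom validation function for the hypothetical
        // "is-awesome" tag used above.
        _ = validate.RegisterValidation("is-awesome", func(fl validator.FieldLevel) bool {
            return fl.Field().String() == "awesome"
        })

        err := validate.Struct(User{Email: "not-an-email", Nickname: "meh"})
        if err != nil {
            var invalid *validator.InvalidValidationError
            if errors.As(err, &invalid) {
                log.Fatal(invalid) // bad input to the validator itself
            }
            for _, fe := range err.(validator.ValidationErrors) {
                fmt.Println(fe.Namespace(), fe.Tag(), fe.Param())
            }
        }
    }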
Allows conditional validation, for example if a field is not set with a value (determined by the "required" validator) then other validation such as min or max won't run, but if a value is set validation will run. Allows skipping the validation if the value is nil (same as omitempty, but only for the nil-values). This tells the validator to dive into a slice, array or map and validate that level of the slice, array or map with the validation tags that follow. Multidimensional nesting is also supported, each level you wish to dive will require another dive tag. dive has some sub-tags, 'keys' & 'endkeys', please see the Keys & EndKeys section just below. Example #1 Example #2 Keys & EndKeys These are to be used together directly after the dive tag and tell the validator that anything between 'keys' and 'endkeys' applies to the keys of a map and not the values; think of it like the 'dive' tag, but for map keys instead of values. Multidimensional nesting is also supported, each level you wish to validate will require another 'keys' and 'endkeys' tag. These tags are only valid for maps. Example #1 Example #2 This validates that the value is not the data type's default zero value. For numbers ensures value is not zero. For strings ensures value is not "". For booleans ensures value is not false. For slices, maps, pointers, interfaces, channels and functions ensures the value is not nil. For structs ensures value is not the zero value when using WithRequiredStructEnabled. The field under validation must be present and not empty only if all the other specified fields are equal to the value following the specified field. For strings ensures value is not "". For slices, maps, pointers, interfaces, channels and functions ensures the value is not nil. For structs ensures value is not the zero value. Examples: The field under validation must be present and not empty unless all the other specified fields are equal to the value following the specified field. For strings ensures value is not "". For slices, maps, pointers, interfaces, channels and functions ensures the value is not nil. For structs ensures value is not the zero value. Examples: The field under validation must be present and not empty only if any of the other specified fields are present. For strings ensures value is not "". For slices, maps, pointers, interfaces, channels and functions ensures the value is not nil. For structs ensures value is not the zero value. Examples: The field under validation must be present and not empty only if all of the other specified fields are present. For strings ensures value is not "". For slices, maps, pointers, interfaces, channels and functions ensures the value is not nil. For structs ensures value is not the zero value. Example: The field under validation must be present and not empty only when any of the other specified fields are not present. For strings ensures value is not "". For slices, maps, pointers, interfaces, channels and functions ensures the value is not nil. For structs ensures value is not the zero value. Examples: The field under validation must be present and not empty only when all of the other specified fields are not present. For strings ensures value is not "". For slices, maps, pointers, interfaces, channels and functions ensures the value is not nil. For structs ensures value is not the zero value. Example: The field under validation must not be present or not empty only if all the other specified fields are equal to the value following the specified field. 
For strings ensures value is not "". For slices, maps, pointers, interfaces, channels and functions ensures the value is not nil. For structs ensures value is not the zero value. Examples: The field under validation must not be present or empty unless all the other specified fields are equal to the value following the specified field. For strings ensures value is not "". For slices, maps, pointers, interfaces, channels and functions ensures the value is not nil. For structs ensures value is not the zero value. Examples: This validates that the value is the default value and is almost the opposite of required. For numbers, length will ensure that the value is equal to the parameter given. For strings, it checks that the string length is exactly that number of characters. For slices, arrays, and maps, validates the number of items. Example #1 Example #2 (time.Duration) For time.Duration, len will ensure that the value is equal to the duration given in the parameter. For numbers, max will ensure that the value is less than or equal to the parameter given. For strings, it checks that the string length is at most that number of characters. For slices, arrays, and maps, validates the number of items. Example #1 Example #2 (time.Duration) For time.Duration, max will ensure that the value is less than or equal to the duration given in the parameter. For numbers, min will ensure that the value is greater or equal to the parameter given. For strings, it checks that the string length is at least that number of characters. For slices, arrays, and maps, validates the number of items. Example #1 Example #2 (time.Duration) For time.Duration, min will ensure that the value is greater than or equal to the duration given in the parameter. For strings & numbers, eq will ensure that the value is equal to the parameter given. For slices, arrays, and maps, validates the number of items. Example #1 Example #2 (time.Duration) For time.Duration, eq will ensure that the value is equal to the duration given in the parameter. For strings & numbers, ne will ensure that the value is not equal to the parameter given. For slices, arrays, and maps, validates the number of items. Example #1 Example #2 (time.Duration) For time.Duration, ne will ensure that the value is not equal to the duration given in the parameter. For strings, ints, and uints, oneof will ensure that the value is one of the values in the parameter. The parameter should be a list of values separated by whitespace. Values may be strings or numbers. To match strings with spaces in them, include the target string between single quotes. For numbers, this will ensure that the value is greater than the parameter given. For strings, it checks that the string length is greater than that number of characters. For slices, arrays and maps it validates the number of items. Example #1 Example #2 (time.Time) For time.Time ensures the time value is greater than time.Now.UTC(). Example #3 (time.Duration) For time.Duration, gt will ensure that the value is greater than the duration given in the parameter. Same as 'min' above. Kept both to make terminology with 'len' easier. Example #1 Example #2 (time.Time) For time.Time ensures the time value is greater than or equal to time.Now.UTC(). Example #3 (time.Duration) For time.Duration, gte will ensure that the value is greater than or equal to the duration given in the parameter. For numbers, this will ensure that the value is less than the parameter given. 
For strings, it checks that the string length is less than that number of characters. For slices, arrays, and maps it validates the number of items. Example #1 Example #2 (time.Time) For time.Time ensures the time value is less than time.Now.UTC(). Example #3 (time.Duration) For time.Duration, lt will ensure that the value is less than the duration given in the parameter. Same as 'max' above. Kept both to make terminology with 'len' easier. Example #1 Example #2 (time.Time) For time.Time ensures the time value is less than or equal to time.Now.UTC(). Example #3 (time.Duration) For time.Duration, lte will ensure that the value is less than or equal to the duration given in the parameter. This will validate the field value against another fields value either within a struct or passed in field. Example #1: Example #2: Field Equals Another Field (relative) This does the same as eqfield except that it validates the field provided relative to the top level struct. This will validate the field value against another fields value either within a struct or passed in field. Examples: Field Does Not Equal Another Field (relative) This does the same as nefield except that it validates the field provided relative to the top level struct. Only valid for Numbers, time.Duration and time.Time types, this will validate the field value against another fields value either within a struct or passed in field. usage examples are for validation of a Start and End date: Example #1: Example #2: This does the same as gtfield except that it validates the field provided relative to the top level struct. Only valid for Numbers, time.Duration and time.Time types, this will validate the field value against another fields value either within a struct or passed in field. usage examples are for validation of a Start and End date: Example #1: Example #2: This does the same as gtefield except that it validates the field provided relative to the top level struct. Only valid for Numbers, time.Duration and time.Time types, this will validate the field value against another fields value either within a struct or passed in field. usage examples are for validation of a Start and End date: Example #1: Example #2: This does the same as ltfield except that it validates the field provided relative to the top level struct. Only valid for Numbers, time.Duration and time.Time types, this will validate the field value against another fields value either within a struct or passed in field. usage examples are for validation of a Start and End date: Example #1: Example #2: This does the same as ltefield except that it validates the field provided relative to the top level struct. This does the same as contains except for struct fields. It should only be used with string types. See the behavior of reflect.Value.String() for behavior on other types. This does the same as excludes except for struct fields. It should only be used with string types. See the behavior of reflect.Value.String() for behavior on other types. For arrays & slices, unique will ensure that there are no duplicates. For maps, unique will ensure that there are no duplicate values. For slices of struct, unique will ensure that there are no duplicate values in a field of the struct specified via a parameter. 
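As a hedged illustration of the cross-field tags and the dive/keys/endkeys tags described above (the structs and field names are made up, and the time package is assumed to be imported):

    type Booking struct {
        Start time.Time `validate:"required"`
        // End must be after Start; gtfield looks up Start on the same struct.
        End time.Time `validate:"required,gtfield=Start"`
    }

    type ChangePassword struct {
        Password string `validate:"required,min=8"`
        // Confirm must equal Password.
        Confirm string `validate:"required,eqfield=Password"`
    }

    type Env struct {
        // gt=0 applies to the map itself (it must not be empty),
        // the tags between keys and endkeys apply to each map key,
        // and the trailing required applies to each map value.
        Vars map[string]string `validate:"gt=0,dive,keys,required,endkeys,required"`

        // Multidimensional dive: the first dive descends into each inner
        // slice, the second into each string element.
        Matrix [][]string `validate:"dive,dive,required"`
    }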
This validates that a string value contains ASCII alpha characters only This validates that a string value contains ASCII alphanumeric characters only This validates that a string value contains unicode alpha characters only This validates that a string value contains unicode alphanumeric characters only This validates that a string value can successfully be parsed into a boolean with strconv.ParseBool This validates that a string value contains number values only. For integers or float it returns true. This validates that a string value contains a basic numeric value. Basic excludes exponents etc.; for integers or floats it returns true. This validates that a string value contains a valid hexadecimal. This validates that a string value contains a valid hex color including hashtag (#) This validates that a string value contains only lowercase characters. An empty string is not a valid lowercase string. This validates that a string value contains only uppercase characters. An empty string is not a valid uppercase string. This validates that a string value contains a valid rgb color This validates that a string value contains a valid rgba color This validates that a string value contains a valid hsl color This validates that a string value contains a valid hsla color This validates that a string value contains a valid E.164 Phone number https://en.wikipedia.org/wiki/E.164 (ex. +1123456789) This validates that a string value contains a valid email This may not conform to all possibilities of any rfc standard, but neither does any email provider accept all possibilities. This validates that a string value is valid JSON This validates that a string value is a valid JWT This validates that a string value contains a valid file path and that the file exists on the machine. This is done using os.Stat, which is a platform independent function. This validates that a string value contains a valid file path and that the file exists on the machine and is an image. This is done using os.Stat and github.com/gabriel-vasile/mimetype This validates that a string value contains a valid file path but does not validate the existence of that file. This is done using os.Stat, which is a platform independent function. This validates that a string value contains a valid url This will accept any url the golang request uri accepts but must contain a scheme, for example http:// or rtmp:// This validates that a string value contains a valid uri This will accept any uri the golang request uri accepts This validates that a string value contains a valid URN according to the RFC 2141 spec. This validates that a string value contains a valid base32 value. Although an empty string is valid base32 this will report an empty string as an error, if you wish to accept an empty string as valid you can use this with the omitempty tag. This validates that a string value contains a valid base64 value. Although an empty string is valid base64 this will report an empty string as an error, if you wish to accept an empty string as valid you can use this with the omitempty tag. This validates that a string value contains a valid base64 URL safe value according to the RFC 4648 spec. Although an empty string is a valid base64 URL safe value, this will report an empty string as an error, if you wish to accept an empty string as valid you can use this with the omitempty tag. This validates that a string value contains a valid base64 URL safe value, but without = padding, according to the RFC 4648 spec, section 3.2. 
Although an empty string is a valid base64 URL safe value, this will report an empty string as an error, if you wish to accept an empty string as valid you can use this with the omitempty tag. This validates that a string value contains a valid bitcoin address. The format of the string is checked to ensure it matches one of the three formats P2PKH, P2SH and performs checksum validation. Bitcoin Bech32 Address (segwit) This validates that a string value contains a valid bitcoin Bech32 address as defined by bip-0173 (https://github.com/bitcoin/bips/blob/master/bip-0173.mediawiki) Special thanks to Pieter Wuille for providing reference implementations. This validates that a string value contains a valid ethereum address. The format of the string is checked to ensure it matches the standard Ethereum address format. This validates that a string value contains the substring value. This validates that a string value contains any Unicode code points in the substring value. This validates that a string value contains the supplied rune value. This validates that a string value does not contain the substring value. This validates that a string value does not contain any Unicode code points in the substring value. This validates that a string value does not contain the supplied rune value. This validates that a string value starts with the supplied string value This validates that a string value ends with the supplied string value This validates that a string value does not start with the supplied string value This validates that a string value does not end with the supplied string value This validates that a string value contains a valid isbn10 or isbn13 value. This validates that a string value contains a valid isbn10 value. This validates that a string value contains a valid isbn13 value. This validates that a string value contains a valid UUID. Uppercase UUID values will not pass - use `uuid_rfc4122` instead. This validates that a string value contains a valid version 3 UUID. Uppercase UUID values will not pass - use `uuid3_rfc4122` instead. This validates that a string value contains a valid version 4 UUID. Uppercase UUID values will not pass - use `uuid4_rfc4122` instead. This validates that a string value contains a valid version 5 UUID. Uppercase UUID values will not pass - use `uuid5_rfc4122` instead. This validates that a string value contains a valid ULID value. This validates that a string value contains only ASCII characters. NOTE: if the string is blank, this validates as true. This validates that a string value contains only printable ASCII characters. NOTE: if the string is blank, this validates as true. This validates that a string value contains one or more multibyte characters. NOTE: if the string is blank, this validates as true. This validates that a string value contains a valid DataURI. NOTE: this will also validate that the data portion is valid base64 This validates that a string value contains a valid latitude. This validates that a string value contains a valid longitude. This validates that a string value contains a valid U.S. Social Security Number. This validates that a string value contains a valid IP Address. This validates that a string value contains a valid v4 IP Address. This validates that a string value contains a valid v6 IP Address. This validates that a string value contains a valid CIDR Address. This validates that a string value contains a valid v4 CIDR Address. This validates that a string value contains a valid v6 CIDR Address. 
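A hedged sketch combining several of the string-format validators listed above on one struct; the struct, field names and tag combinations are illustrative only:

    type Server struct {
        Hostname   string `validate:"required,hostname_rfc1123"`
        Addr       string `validate:"required,ip"`
        AdminEmail string `validate:"required,email"`
        ID         string `validate:"omitempty,uuid4"`
        Accent     string `validate:"omitempty,hexcolor"`
    }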
This validates that a string value contains a valid resolvable TCP Address. This validates that a string value contains a valid resolvable v4 TCP Address. This validates that a string value contains a valid resolvable v6 TCP Address. This validates that a string value contains a valid resolvable UDP Address. This validates that a string value contains a valid resolvable v4 UDP Address. This validates that a string value contains a valid resolvable v6 UDP Address. This validates that a string value contains a valid resolvable IP Address. This validates that a string value contains a valid resolvable v4 IP Address. This validates that a string value contains a valid resolvable v6 IP Address. This validates that a string value contains a valid Unix Address. This validates that a string value contains a valid MAC Address. Note: See Go's ParseMAC for accepted formats and types. This validates that a string value is a valid Hostname according to RFC 952 https://tools.ietf.org/html/rfc952 This validates that a string value is a valid Hostname according to RFC 1123 https://tools.ietf.org/html/rfc1123 Fully Qualified Domain Name (FQDN) This validates that a string value contains a valid FQDN. This validates that a string value appears to be an HTML element tag including those described at https://developer.mozilla.org/en-US/docs/Web/HTML/Element This validates that a string value is a proper character reference in decimal or hexadecimal format This validates that a string value is percent-encoded (URL encoded) according to https://tools.ietf.org/html/rfc3986#section-2.1 This validates that a string value contains a valid directory and that it exists on the machine. This is done using os.Stat, which is a platform independent function. This validates that a string value contains a valid directory but does not validate the existence of that directory. This is done using os.Stat, which is a platform independent function. It is safest to suffix the string with os.PathSeparator if the directory may not exist at the time of validation. This validates that a string value contains a valid DNS hostname and port that can be used to validate fields typically passed to sockets and connections. This validates that a string value is a valid datetime based on the supplied datetime format. Supplied format must match the official Go time format layout as documented in https://golang.org/pkg/time/ This validates that a string value is a valid country code based on iso3166-1 alpha-2 standard. see: https://www.iso.org/iso-3166-country-codes.html This validates that a string value is a valid country code based on iso3166-1 alpha-3 standard. see: https://www.iso.org/iso-3166-country-codes.html This validates that a string value is a valid country code based on iso3166-1 alpha-numeric standard. see: https://www.iso.org/iso-3166-country-codes.html This validates that a string value is a valid BCP 47 language tag, as parsed by language.Parse. More information on https://pkg.go.dev/golang.org/x/text/language BIC (SWIFT code) This validates that a string value is a valid Business Identifier Code (SWIFT code), defined in ISO 9362. More information on https://www.iso.org/standard/60390.html This validates that a string value is a valid DNS RFC 1035 label, defined in RFC 1035. More information on https://datatracker.ietf.org/doc/html/rfc1035 This validates that a string value is a valid time zone based on the time zone database present on the system. 
Although empty value and Local value are allowed by time.LoadLocation golang function, they are not allowed by this validator. More information on https://golang.org/pkg/time/#LoadLocation This validates that a string value is a valid semver version, defined in Semantic Versioning 2.0.0. More information on https://semver.org/ This validates that a string value is a valid cve id, defined in cve mitre. More information on https://cve.mitre.org/ This validates that a string value contains a valid credit card number using Luhn algorithm. This validates that a string or (u)int value contains a valid checksum using the Luhn algorithm. This validates that a string is a valid 24 character hexadecimal string or valid connection string. Example: This validates that a string value contains a valid cron expression. This validates that a string is valid for use with SpiceDb for the indicated purpose. If no purpose is given, a purpose of 'id' is assumed. Alias Validators and Tags NOTE: When returning an error, the tag returned in "FieldError" will be the alias tag unless the dive tag is part of the alias. Everything after the dive tag is not reported as the alias tag. Also, the "ActualTag" in the before case will be the actual tag within the alias that failed. Here is a list of the current built in alias tags: Validator notes: A collection of validation rules that are frequently needed but are more complex than the ones found in the baked in validators. A non standard validator must be registered manually like you would with your own custom validation functions. Example of registration and use: Here is a list of the current non standard validators: This package panics when bad input is provided, this is by design, bad code like that should not make it to production.
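Alias tags can be registered on the singleton instance. A hedged sketch, where "anycolor" is a made-up alias name and the struct is illustrative:

    validate := validator.New()

    // The alias expands to the tag list on the right wherever it is used.
    validate.RegisterAlias("anycolor", "hexcolor|rgb|rgba|hsl|hsla")

    type Theme struct {
        Accent string `validate:"required,anycolor"`
    }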
Package validator implements value validations for structs and individual fields based on tags. It can also handle Cross-Field and Cross-Struct validation for nested structs and has the ability to dive into arrays and maps of any type. See more examples at https://github.com/go-playground/validator/tree/v9/_examples

Doing things this way is actually how the standard library does it; see the os.Open function. The authors return type "error" to avoid the issue discussed in the Go FAQ entry on nil errors, where err is always != nil.

Validator only returns InvalidValidationError for bad validation input, nil, or ValidationErrors as type error; so, in your code all you need to do is check if the error returned is not nil, and if it's not, check if the error is InvalidValidationError (if necessary, most of the time it isn't) and type cast it to type ValidationErrors like so: err.(validator.ValidationErrors).

Custom Validation functions can be added. Example:

Cross-Field Validation can be done via the following tags: If, however, some custom cross-field validation is required, it can be done using a custom validation. Why not just have cross-fields validation tags (i.e. only eqcsfield and not eqfield)? The reason is efficiency. If you want to check a field within the same struct "eqfield" only has to find the field on the same struct (1 level). But, if we used "eqcsfield" it could be multiple levels down. Example:

Multiple validators on a field will process in the order defined. Example: Bad Validator definitions are not handled by the library. Example:

Baked In Cross-Field validation only compares fields on the same struct. If Cross-Field + Cross-Struct validation is needed you should implement your own custom validator.

Comma (",") is the default separator of validation tags. If you wish to have a comma included within the parameter (i.e. excludesall=,) you will need to use the UTF-8 hex representation 0x2C, which is replaced in the code as a comma, so the above will become excludesall=0x2C.

Pipe ("|") is the 'or' validation tags separator. If you wish to have a pipe included within the parameter i.e. excludesall=| you will need to use the UTF-8 hex representation 0x7C, which is replaced in the code as a pipe, so the above will become excludesall=0x7C

Here is a list of the current built in validators: Tells the validation to skip this struct field; this is particularly handy in ignoring embedded structs from being validated. (Usage: -) This is the 'or' operator allowing multiple validators to be used and accepted. (Usage: rgb|rgba) <-- this would allow either rgb or rgba colors to be accepted. This can also be combined with 'and' for example ( Usage: omitempty,rgb|rgba) When a field that is a nested struct is encountered, and contains this flag any validation on the nested struct will be run, but none of the nested struct fields will be validated. This is useful if inside of your program you know the struct will be valid, but need to verify it has been assigned. NOTE: only "required" and "omitempty" can be used on a struct itself. Same as structonly tag except that any struct level validations will not run. Allows conditional validation, for example if a field is not set with a value (determined by the "required" validator) then other validation such as min or max won't run, but if a value is set validation will run. This tells the validator to dive into a slice, array or map and validate that level of the slice, array or map with the validation tags that follow. 
Multidimensional nesting is also supported, each level you wish to dive will require another dive tag. dive has some sub-tags, 'keys' & 'endkeys', please see the Keys & EndKeys section just below. Example #1 Example #2 Keys & EndKeys These are to be used together directly after the dive tag and tells the validator that anything between 'keys' and 'endkeys' applies to the keys of a map and not the values; think of it like the 'dive' tag, but for map keys instead of values. Multidimensional nesting is also supported, each level you wish to validate will require another 'keys' and 'endkeys' tag. These tags are only valid for maps. Example #1 Example #2 This validates that the value is not the data types default zero value. For numbers ensures value is not zero. For strings ensures value is not "". For slices, maps, pointers, interfaces, channels and functions ensures the value is not nil. The field under validation must be present and not empty only if any of the other specified fields are present. For strings ensures value is not "". For slices, maps, pointers, interfaces, channels and functions ensures the value is not nil. Examples: The field under validation must be present and not empty only if all of the other specified fields are present. For strings ensures value is not "". For slices, maps, pointers, interfaces, channels and functions ensures the value is not nil. Example: The field under validation must be present and not empty only when any of the other specified fields are not present. For strings ensures value is not "". For slices, maps, pointers, interfaces, channels and functions ensures the value is not nil. Examples: The field under validation must be present and not empty only when all of the other specified fields are not present. For strings ensures value is not "". For slices, maps, pointers, interfaces, channels and functions ensures the value is not nil. Example: This validates that the value is the default value and is almost the opposite of required. For numbers, length will ensure that the value is equal to the parameter given. For strings, it checks that the string length is exactly that number of characters. For slices, arrays, and maps, validates the number of items. For numbers, max will ensure that the value is less than or equal to the parameter given. For strings, it checks that the string length is at most that number of characters. For slices, arrays, and maps, validates the number of items. For numbers, min will ensure that the value is greater or equal to the parameter given. For strings, it checks that the string length is at least that number of characters. For slices, arrays, and maps, validates the number of items. For strings & numbers, eq will ensure that the value is equal to the parameter given. For slices, arrays, and maps, validates the number of items. For strings & numbers, ne will ensure that the value is not equal to the parameter given. For slices, arrays, and maps, validates the number of items. For strings, ints, and uints, oneof will ensure that the value is one of the values in the parameter. The parameter should be a list of values separated by whitespace. Values may be strings or numbers. For numbers, this will ensure that the value is greater than the parameter given. For strings, it checks that the string length is greater than that number of characters. For slices, arrays and maps it validates the number of items. Example #1 Example #2 (time.Time) For time.Time ensures the time value is greater than time.Now.UTC(). 
Same as 'min' above. Kept both to make terminology with 'len' easier. Example #1 Example #2 (time.Time) For time.Time ensures the time value is greater than or equal to time.Now.UTC(). For numbers, this will ensure that the value is less than the parameter given. For strings, it checks that the string length is less than that number of characters. For slices, arrays, and maps it validates the number of items. Example #1 Example #2 (time.Time) For time.Time ensures the time value is less than time.Now.UTC(). Same as 'max' above. Kept both to make terminology with 'len' easier. Example #1 Example #2 (time.Time) For time.Time ensures the time value is less than or equal to time.Now.UTC(). This will validate the field value against another fields value either within a struct or passed in field. Example #1: Example #2: Field Equals Another Field (relative) This does the same as eqfield except that it validates the field provided relative to the top level struct. This will validate the field value against another fields value either within a struct or passed in field. Examples: Field Does Not Equal Another Field (relative) This does the same as nefield except that it validates the field provided relative to the top level struct. Only valid for Numbers and time.Time types, this will validate the field value against another fields value either within a struct or passed in field. usage examples are for validation of a Start and End date: Example #1: Example #2: This does the same as gtfield except that it validates the field provided relative to the top level struct. Only valid for Numbers and time.Time types, this will validate the field value against another fields value either within a struct or passed in field. usage examples are for validation of a Start and End date: Example #1: Example #2: This does the same as gtefield except that it validates the field provided relative to the top level struct. Only valid for Numbers and time.Time types, this will validate the field value against another fields value either within a struct or passed in field. usage examples are for validation of a Start and End date: Example #1: Example #2: This does the same as ltfield except that it validates the field provided relative to the top level struct. Only valid for Numbers and time.Time types, this will validate the field value against another fields value either within a struct or passed in field. usage examples are for validation of a Start and End date: Example #1: Example #2: This does the same as ltefield except that it validates the field provided relative to the top level struct. This does the same as contains except for struct fields. It should only be used with string types. See the behavior of reflect.Value.String() for behavior on other types. This does the same as excludes except for struct fields. It should only be used with string types. See the behavior of reflect.Value.String() for behavior on other types. For arrays & slices, unique will ensure that there are no duplicates. For maps, unique will ensure that there are no duplicate values. For slices of struct, unique will ensure that there are no duplicate values in a field of the struct specified via a parameter. 
This validates that a string value contains ASCII alpha characters only This validates that a string value contains ASCII alphanumeric characters only This validates that a string value contains unicode alpha characters only This validates that a string value contains unicode alphanumeric characters only This validates that a string value contains a basic numeric value. Basic excludes exponents etc.; for integers or floats it returns true. This validates that a string value contains a valid hexadecimal. This validates that a string value contains a valid hex color including hashtag (#) This validates that a string value contains a valid rgb color This validates that a string value contains a valid rgba color This validates that a string value contains a valid hsl color This validates that a string value contains a valid hsla color This validates that a string value contains a valid email This may not conform to all possibilities of any rfc standard, but neither does any email provider accept all possibilities. This validates that a string value contains a valid file path and that the file exists on the machine. This is done using os.Stat, which is a platform independent function. This validates that a string value contains a valid url This will accept any url the golang request uri accepts but must contain a scheme, for example http:// or rtmp:// This validates that a string value contains a valid uri This will accept any uri the golang request uri accepts This validates that a string value contains a valid URN according to the RFC 2141 spec. This validates that a string value contains a valid base64 value. Although an empty string is valid base64 this will report an empty string as an error, if you wish to accept an empty string as valid you can use this with the omitempty tag. This validates that a string value contains a valid base64 URL safe value according to the RFC 4648 spec. Although an empty string is a valid base64 URL safe value, this will report an empty string as an error, if you wish to accept an empty string as valid you can use this with the omitempty tag. This validates that a string value contains a valid bitcoin address. The format of the string is checked to ensure it matches one of the three formats P2PKH, P2SH and performs checksum validation. Bitcoin Bech32 Address (segwit) This validates that a string value contains a valid bitcoin Bech32 address as defined by bip-0173 (https://github.com/bitcoin/bips/blob/master/bip-0173.mediawiki) Special thanks to Pieter Wuille for providing reference implementations. This validates that a string value contains a valid ethereum address. The format of the string is checked to ensure it matches the standard Ethereum address format. Full validation is blocked by https://github.com/golang/crypto/pull/28 This validates that a string value contains the substring value. This validates that a string value contains any Unicode code points in the substring value. This validates that a string value contains the supplied rune value. This validates that a string value does not contain the substring value. This validates that a string value does not contain any Unicode code points in the substring value. This validates that a string value does not contain the supplied rune value. This validates that a string value starts with the supplied string value This validates that a string value ends with the supplied string value This validates that a string value contains a valid isbn10 or isbn13 value. 
This validates that a string value contains a valid isbn10 value. This validates that a string value contains a valid isbn13 value. This validates that a string value contains a valid UUID. Uppercase UUID values will not pass - use `uuid_rfc4122` instead. This validates that a string value contains a valid version 3 UUID. Uppercase UUID values will not pass - use `uuid3_rfc4122` instead. This validates that a string value contains a valid version 4 UUID. Uppercase UUID values will not pass - use `uuid4_rfc4122` instead. This validates that a string value contains a valid version 5 UUID. Uppercase UUID values will not pass - use `uuid5_rfc4122` instead. This validates that a string value contains only ASCII characters. NOTE: if the string is blank, this validates as true. This validates that a string value contains only printable ASCII characters. NOTE: if the string is blank, this validates as true. This validates that a string value contains one or more multibyte characters. NOTE: if the string is blank, this validates as true. This validates that a string value contains a valid DataURI. NOTE: this will also validate that the data portion is valid base64 This validates that a string value contains a valid latitude. This validates that a string value contains a valid longitude. This validates that a string value contains a valid U.S. Social Security Number. This validates that a string value contains a valid IP Address. This validates that a string value contains a valid v4 IP Address. This validates that a string value contains a valid v6 IP Address. This validates that a string value contains a valid CIDR Address. This validates that a string value contains a valid v4 CIDR Address. This validates that a string value contains a valid v6 CIDR Address. This validates that a string value contains a valid resolvable TCP Address. This validates that a string value contains a valid resolvable v4 TCP Address. This validates that a string value contains a valid resolvable v6 TCP Address. This validates that a string value contains a valid resolvable UDP Address. This validates that a string value contains a valid resolvable v4 UDP Address. This validates that a string value contains a valid resolvable v6 UDP Address. This validates that a string value contains a valid resolvable IP Address. This validates that a string value contains a valid resolvable v4 IP Address. This validates that a string value contains a valid resolvable v6 IP Address. This validates that a string value contains a valid Unix Address. This validates that a string value contains a valid MAC Address. Note: See Go's ParseMAC for accepted formats and types. This validates that a string value is a valid Hostname according to RFC 952 https://tools.ietf.org/html/rfc952 This validates that a string value is a valid Hostname according to RFC 1123 https://tools.ietf.org/html/rfc1123 Fully Qualified Domain Name (FQDN) This validates that a string value contains a valid FQDN. This validates that a string value appears to be an HTML element tag including those described at https://developer.mozilla.org/en-US/docs/Web/HTML/Element This validates that a string value is a proper character reference in decimal or hexadecimal format This validates that a string value is percent-encoded (URL encoded) according to https://tools.ietf.org/html/rfc3986#section-2.1 This validates that a string value contains a valid directory and that it exists on the machine. This is done using os.Stat, which is a platform independent function. 
NOTE: When returning an error, the tag returned in "FieldError" will be the alias tag unless the dive tag is part of the alias. Everything after the dive tag is not reported as the alias tag. Also, the "ActualTag" in the before case will be the actual tag within the alias that failed. Here is a list of the current built in alias tags: Validator notes: A collection of validation rules that are frequently needed but are more complex than the ones found in the baked in validators. A non standard validator must be registered manually like you would with your own custom validation functions. Example of registration and use: Here is a list of the current non standard validators: This package panics when bad input is provided, this is by design, bad code like that should not make it to production.
Package cloud contains a library and tools for open cloud development in Go. The Go Cloud Development Kit (Go CDK) allows application developers to seamlessly deploy cloud applications on any combination of cloud providers. It does this by providing stable, idiomatic interfaces for common uses like storage and databases. Think `database/sql` for cloud products. At the core of the Go CDK are common "portable types", implemented on top of service-specific drivers for supported cloud services. For example, objects of the blob.Bucket portable type can be created using gcsblob.OpenBucket, s3blob.OpenBucket, or any other Go CDK driver. Then, the blob.Bucket can be used throughout your application without worrying about the underlying implementation. The Go CDK works well with a code generator called Wire (https://github.com/google/wire/blob/master/README.md). It creates human-readable code that only imports the cloud SDKs for drivers you use. This allows the Go CDK to grow to support any number of cloud services, without increasing compile times or binary sizes, and avoiding any side effects from `init()` functions. For non-reference documentation, see https://gocloud.dev/ See https://gocloud.dev/concepts/urls/ for a discussion of URLs in the Go CDK. See https://gocloud.dev/concepts/as/ for a discussion of how to write service-specific code with the Go CDK.
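A hedged sketch of opening a bucket by URL and doing a round trip with the blob portable type; the fileblob driver and the local path are illustrative, and swapping the blank import for gcsblob or s3blob (with an appropriate URL) would leave the rest of the code unchanged:

    package main

    import (
        "context"
        "fmt"
        "log"

        "gocloud.dev/blob"
        // Registers the file:// URL scheme with blob.OpenBucket.
        _ "gocloud.dev/blob/fileblob"
    )

    func main() {
        ctx := context.Background()

        // "file:///tmp/mybucket" is an illustrative URL; the directory must exist.
        bucket, err := blob.OpenBucket(ctx, "file:///tmp/mybucket")
        if err != nil {
            log.Fatal(err)
        }
        defer bucket.Close()

        if err := bucket.WriteAll(ctx, "greeting.txt", []byte("hello"), nil); err != nil {
            log.Fatal(err)
        }
        data, err := bucket.ReadAll(ctx, "greeting.txt")
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println(string(data))
    }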
Package storage provides an easy way to work with Google Cloud Storage. Google Cloud Storage stores data in named objects, which are grouped into buckets. More information about Google Cloud Storage is available at https://cloud.google.com/storage/docs. See https://pkg.go.dev/cloud.google.com/go for authentication, timeouts, connection pooling and similar aspects of this package.

To start working with this package, create a Client. The client will use your default application credentials. Clients should be reused instead of created as needed. The methods of Client are safe for concurrent use by multiple goroutines. You may configure the client by passing in options from the google.golang.org/api/option package. You may also use options defined in this package, such as WithJSONReads. If you only wish to access public data, you can create an unauthenticated client with option.WithoutAuthentication().

To use an emulator with this library, you can set the STORAGE_EMULATOR_HOST environment variable to the address at which your emulator is running. This will send requests to that address instead of to Cloud Storage. You can then create and use a client as usual. Please note that there is no official emulator for Cloud Storage.

A Google Cloud Storage bucket is a collection of objects. To work with a bucket, make a bucket handle. A handle is a reference to a bucket. You can have a handle even if the bucket doesn't exist yet. To create a bucket in Google Cloud Storage, call BucketHandle.Create. Note that although buckets are associated with projects, bucket names are global across all projects. Each bucket has associated metadata, represented in this package by BucketAttrs. The third argument to BucketHandle.Create allows you to set the initial BucketAttrs of a bucket. To retrieve a bucket's attributes, use BucketHandle.Attrs.

An object holds arbitrary data as a sequence of bytes, like a file. You refer to objects using a handle, just as with buckets, but unlike buckets you don't explicitly create an object. Instead, the first time you write to an object it will be created. You can use the standard Go io.Reader and io.Writer interfaces to read and write object data; a hedged sketch appears at the end of this overview. Objects also have attributes, which you can fetch with ObjectHandle.Attrs.

Listing objects in a bucket is done with the BucketHandle.Objects method. Objects are listed lexicographically by name. To filter objects lexicographically, [Query.StartOffset] and/or [Query.EndOffset] can be used. If only a subset of object attributes is needed when listing, specifying this subset using Query.SetAttrSelection may speed up the listing process.

Both objects and buckets have ACLs (Access Control Lists). An ACL is a list of ACLRules, each of which specifies the role of a user, group or project. ACLs are suitable for fine-grained control, but you may prefer using IAM to control access at the project level (see the Cloud Storage IAM docs). To list the ACLs of a bucket or object, obtain an ACLHandle and call ACLHandle.List. You can also set and delete ACLs.

Every object has a generation and a metageneration. The generation changes whenever the content changes, and the metageneration changes whenever the metadata changes. Conditions let you check these values before an operation; the operation only executes if the conditions match. You can use conditions to prevent race conditions in read-modify-write operations. For example, say you've read an object's metadata into objAttrs. Now you want to write to that object, but only if its contents haven't changed since you read it. 
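A hedged sketch of creating a client, writing and reading an object, and then performing exactly that conditional write with a generation-match precondition; the bucket and object names are made up, error handling is abbreviated, and the cloud.google.com/go/storage, context, fmt, io and log imports are assumed:

    ctx := context.Background()
    client, err := storage.NewClient(ctx) // uses your default application credentials
    if err != nil {
        log.Fatal(err)
    }
    defer client.Close()

    obj := client.Bucket("my-bucket").Object("notes.txt")

    // Write: the object is created (or replaced) when Close returns nil.
    w := obj.NewWriter(ctx)
    if _, err := w.Write([]byte("hello world")); err != nil {
        log.Fatal(err)
    }
    if err := w.Close(); err != nil {
        log.Fatal(err)
    }

    // Read it back.
    r, err := obj.NewReader(ctx)
    if err != nil {
        log.Fatal(err)
    }
    defer r.Close()
    data, err := io.ReadAll(r)
    if err != nil {
        log.Fatal(err)
    }
    fmt.Println(string(data))

    // Read-modify-write: rewrite only if the object has not changed since
    // its attributes were read.
    objAttrs, err := obj.Attrs(ctx)
    if err != nil {
        log.Fatal(err)
    }
    cw := obj.If(storage.Conditions{GenerationMatch: objAttrs.Generation}).NewWriter(ctx)
    if _, err := cw.Write([]byte("updated contents")); err != nil {
        log.Fatal(err)
    }
    if err := cw.Close(); err != nil {
        // A 412 Precondition Failed error here means the object changed in the meantime.
        log.Fatal(err)
    }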
The generation-match precondition shown in the sketch above is one way to express that.

You can obtain a URL that lets anyone read or write an object for a limited time. Signing a URL requires credentials authorized to sign a URL. To use the same authentication that was used when instantiating the Storage client, use BucketHandle.SignedURL. You can also sign a URL without creating a client. See the documentation of SignedURL for details.

A signed POST policy is a type of signed request that allows uploads through HTML forms directly to Cloud Storage with temporary permission. Conditions can be applied to restrict how the HTML form is used and exercised by a user. For more information, please see the XML POST Object docs as well as the documentation of BucketHandle.GenerateSignedPostPolicyV4.

If the GoogleAccessID and PrivateKey option fields are not provided, they will be automatically detected by BucketHandle.SignedURL and BucketHandle.GenerateSignedPostPolicyV4 if any of the following are true: Detecting GoogleAccessID may not be possible if you are authenticated using a token source or using option.WithHTTPClient. In this case, you can provide a service account email for GoogleAccessID and the client will attempt to sign the URL or Post Policy using that service account. To generate the signature, you must have:

Errors returned by this client are often of the type googleapi.Error. These errors can be introspected for more information by using errors.As with the richer googleapi.Error type. For example:

Methods in this package may retry calls that fail with transient errors. Retrying continues indefinitely unless the controlling context is canceled, the client is closed, or a non-transient error is received. To stop retries from continuing, use context timeouts or cancellation. The retry strategy in this library follows best practices for Cloud Storage. By default, operations are retried only if they are idempotent, and exponential backoff with jitter is employed. In addition, errors are only retried if they are defined as transient by the service. See the Cloud Storage retry docs for more information. Users can configure non-default retry behavior for a single library call (using BucketHandle.Retryer and ObjectHandle.Retryer) or for all calls made by a client (using Client.SetRetry); a hedged sketch appears after this section.

You can add custom headers to any API call made by this package by using callctx.SetHeaders on the context which is passed to the method; the sketch after this section includes an illustrative audit logging header.

This package includes support for the Cloud Storage gRPC API. The implementation uses gRPC rather than the default JSON & XML APIs to make requests to Cloud Storage. The Go Storage gRPC client is generally available. The Notifications, Service Account HMAC and GetServiceAccount RPCs are not supported through the gRPC client. To create a client which will use gRPC, use the alternate constructor, storage.NewGRPCClient.

Using the gRPC API inside GCP with a bucket in the same region can allow for Direct Connectivity (enabling requests to skip some proxy steps and reducing response latency). A warning is emitted if gRPC is not used within GCP to warn that Direct Connectivity could not be initialized. Direct Connectivity is not required to access the gRPC API.

Dependencies for the gRPC API may slightly increase the size of binaries for applications depending on this package. If you are not using gRPC, you can use the build tag `disable_grpc_modules` to opt out of these dependencies and reduce the binary size. 
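Two further hedged sketches; the bucket and object names, backoff values and header are hypothetical, and the time, gax (github.com/googleapis/gax-go/v2) and callctx (github.com/googleapis/gax-go/v2/callctx) imports are assumed:

    // Custom retry behavior for a single object handle.
    o := client.Bucket("my-bucket").Object("notes.txt").Retryer(
        storage.WithBackoff(gax.Backoff{Initial: 2 * time.Second}),
        storage.WithPolicy(storage.RetryAlways),
    )
    _ = o

    // Add a custom header to a single call by attaching it to the context.
    ctx = callctx.SetHeaders(ctx, "x-goog-custom-audit-job", "my-job")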
The gRPC client emits metrics by default and will export the gRPC telemetry discussed in gRFC/66 and gRFC/78 to Google Cloud Monitoring. The metrics are accessible through the Cloud Monitoring API and you incur no additional cost for publishing the metrics. Google Cloud Support can use this information to more quickly diagnose problems related to GCS and gRPC. Sending this data does not incur any billing charges, and requires minimal CPU (a single RPC every minute) and memory (a few KiB to batch the telemetry). To access the metrics, you can view them through the Cloud Monitoring metric explorer with the prefix `storage.googleapis.com/client`. Metrics are emitted every minute. You can disable metrics when creating a new gRPC client using WithDisabledClientMetrics; a sketch appears at the end of this overview. The metrics exporter uses the Cloud Monitoring API, which determines the project ID and credentials as follows: * The project ID is determined using the OTel Resource Detector for the environment; otherwise it falls back to the project provided by google.FindCredentials. * Credentials are determined using Application Default Credentials. The principal must have the `roles/monitoring.metricWriter` role granted. If not, a warning will be logged; subsequent warnings are silenced to prevent noisy logs. Certain control plane and long-running operations for Cloud Storage (including Folder and Managed Folder operations) are supported via the autogenerated Storage Control client, which is available as a subpackage in this module. See package docs at cloud.google.com/go/storage/control/apiv2 or reference the Storage Control API docs.
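A minimal sketch of the gRPC constructor and the metrics opt-out mentioned above; whether you want metrics disabled depends on your monitoring setup.

    import (
        "context"
        "log"

        "cloud.google.com/go/storage"
    )

    func newGRPCClient(ctx context.Context) *storage.Client {
        // Create a client that uses the gRPC API, with client metrics disabled.
        client, err := storage.NewGRPCClient(ctx, storage.WithDisabledClientMetrics())
        if err != nil {
            log.Fatal(err)
        }
        return client
    }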
Package validator implements value validations for structs and individual fields based on tags. It can also handle Cross-Field and Cross-Struct validation for nested structs and has the ability to dive into arrays and maps of any type. Why not a better error message? Because this library intends for you to handle your own error messages. Why should I handle my own errors? Many reasons. We built an internationalized application and needed to know the field, and what validation failed so we could provide a localized error. Doing things this way is actually the way the standard library does, see the file.Open method here: The authors return type "error" to avoid the issue discussed in the following, where err is always != nil: Validator only returns nil or ValidationErrors as type error; so, in your code all you need to do is check if the error returned is not nil, and if it's not type cast it to type ValidationErrors like so err.(validator.ValidationErrors). Custom functions can be added. Example: Cross-Field Validation can be done via the following tags: If, however, some custom cross-field validation is required, it can be done using a custom validation. Why not just have cross-fields validation tags (i.e. only eqcsfield and not eqfield)? The reason is efficiency. If you want to check a field within the same struct "eqfield" only has to find the field on the same struct (1 level). But, if we used "eqcsfield" it could be multiple levels down. Example: Multiple validators on a field will process in the order defined. Example: Bad Validator definitions are not handled by the library. Example: Baked In Cross-Field validation only compares fields on the same struct. If Cross-Field + Cross-Struct validation is needed you should implement your own custom validator. Comma (",") is the default separator of validation tags. If you wish to have a comma included within the parameter (i.e. excludesall=,) you will need to use the UTF-8 hex representation 0x2C, which is replaced in the code as a comma, so the above will become excludesall=0x2C. Pipe ("|") is the default separator of validation tags. If you wish to have a pipe included within the parameter i.e. excludesall=| you will need to use the UTF-8 hex representation 0x7C, which is replaced in the code as a pipe, so the above will become excludesall=0x7C Here is a list of the current built in validators: Tells the validation to skip this struct field; this is particularly handy in ignoring embedded structs from being validated. (Usage: -) This is the 'or' operator allowing multiple validators to be used and accepted. (Usage: rbg|rgba) <-- this would allow either rgb or rgba colors to be accepted. This can also be combined with 'and' for example ( Usage: omitempty,rgb|rgba) When a field that is a nested struct is encountered, and contains this flag any validation on the nested struct will be run, but none of the nested struct fields will be validated. This is usefull if inside of you program you know the struct will be valid, but need to verify it has been assigned. NOTE: only "required" and "omitempty" can be used on a struct itself. Same as structonly tag except that any struct level validations will not run. Is a special tag without a validation function attached. It is used when a field is a Pointer, Interface or Invalid and you wish to validate that it exists. Example: want to ensure a bool exists if you define the bool as a pointer and use exists it will ensure there is a value; couldn't use required as it would fail when the bool was false. 
exists will fail is the value is a Pointer, Interface or Invalid and is nil. Allows conditional validation, for example if a field is not set with a value (Determined by the "required" validator) then other validation such as min or max won't run, but if a value is set validation will run. This tells the validator to dive into a slice, array or map and validate that level of the slice, array or map with the validation tags that follow. Multidimensional nesting is also supported, each level you wish to dive will require another dive tag. Example #1 Example #2 This validates that the value is not the data types default zero value. For numbers ensures value is not zero. For strings ensures value is not "". For slices, maps, pointers, interfaces, channels and functions ensures the value is not nil. For numbers, max will ensure that the value is equal to the parameter given. For strings, it checks that the string length is exactly that number of characters. For slices, arrays, and maps, validates the number of items. For numbers, max will ensure that the value is less than or equal to the parameter given. For strings, it checks that the string length is at most that number of characters. For slices, arrays, and maps, validates the number of items. For numbers, min will ensure that the value is greater or equal to the parameter given. For strings, it checks that the string length is at least that number of characters. For slices, arrays, and maps, validates the number of items. For strings & numbers, eq will ensure that the value is equal to the parameter given. For slices, arrays, and maps, validates the number of items. For strings & numbers, ne will ensure that the value is not equal to the parameter given. For slices, arrays, and maps, validates the number of items. For numbers, this will ensure that the value is greater than the parameter given. For strings, it checks that the string length is greater than that number of characters. For slices, arrays and maps it validates the number of items. Example #1 Example #2 (time.Time) For time.Time ensures the time value is greater than time.Now.UTC(). Same as 'min' above. Kept both to make terminology with 'len' easier. Example #1 Example #2 (time.Time) For time.Time ensures the time value is greater than or equal to time.Now.UTC(). For numbers, this will ensure that the value is less than the parameter given. For strings, it checks that the string length is less than that number of characters. For slices, arrays, and maps it validates the number of items. Example #1 Example #2 (time.Time) For time.Time ensures the time value is less than time.Now.UTC(). Same as 'max' above. Kept both to make terminology with 'len' easier. Example #1 Example #2 (time.Time) For time.Time ensures the time value is less than or equal to time.Now.UTC(). This will validate the field value against another fields value either within a struct or passed in field. Example #1: Example #2: Field Equals Another Field (relative) This does the same as eqfield except that it validates the field provided relative to the top level struct. This will validate the field value against another fields value either within a struct or passed in field. Examples: Field Does Not Equal Another Field (relative) This does the same as nefield except that it validates the field provided relative to the top level struct. Only valid for Numbers and time.Time types, this will validate the field value against another fields value either within a struct or passed in field. 
usage examples are for validation of a Start and End date: Example #1: Example #2: This does the same as gtfield except that it validates the field provided relative to the top level struct. Only valid for Numbers and time.Time types, this will validate the field value against another fields value either within a struct or passed in field. usage examples are for validation of a Start and End date: Example #1: Example #2: This does the same as gtefield except that it validates the field provided relative to the top level struct. Only valid for Numbers and time.Time types, this will validate the field value against another fields value either within a struct or passed in field. usage examples are for validation of a Start and End date: Example #1: Example #2: This does the same as ltfield except that it validates the field provided relative to the top level struct. Only valid for Numbers and time.Time types, this will validate the field value against another fields value either within a struct or passed in field. usage examples are for validation of a Start and End date: Example #1: Example #2: This does the same as ltefield except that it validates the field provided relative to the top level struct. This validates that a string value contains alpha characters only This validates that a string value contains alphanumeric characters only This validates that a string value contains a basic numeric value. basic excludes exponents etc... This validates that a string value contains a valid hexadecimal. This validates that a string value contains a valid hex color including hashtag (#) This validates that a string value contains a valid rgb color This validates that a string value contains a valid rgba color This validates that a string value contains a valid hsl color This validates that a string value contains a valid hsla color This validates that a string value contains a valid email This may not conform to all possibilities of any rfc standard, but neither does any email provider accept all posibilities. This validates that a string value contains a valid url This will accept any url the golang request uri accepts but must contain a schema for example http:// or rtmp:// This validates that a string value contains a valid uri This will accept any uri the golang request uri accepts This validates that a string value contains a valid base64 value. Although an empty string is valid base64 this will report an empty string as an error, if you wish to accept an empty string as valid you can use this with the omitempty tag. This validates that a string value contains the substring value. This validates that a string value contains any Unicode code points in the substring value. This validates that a string value contains the supplied rune value. This validates that a string value does not contain the substring value. This validates that a string value does not contain any Unicode code points in the substring value. This validates that a string value does not contain the supplied rune value. This validates that a string value contains a valid isbn10 or isbn13 value. This validates that a string value contains a valid isbn10 value. This validates that a string value contains a valid isbn13 value. This validates that a string value contains a valid UUID. This validates that a string value contains a valid version 3 UUID. This validates that a string value contains a valid version 4 UUID. This validates that a string value contains a valid version 5 UUID. 
This validates that a string value contains only ASCII characters. NOTE: if the string is blank, this validates as true. This validates that a string value contains only printable ASCII characters. NOTE: if the string is blank, this validates as true. This validates that a string value contains one or more multibyte characters. NOTE: if the string is blank, this validates as true. This validates that a string value contains a valid DataURI. NOTE: this will also validate that the data portion is valid base64. This validates that a string value contains a valid latitude. This validates that a string value contains a valid longitude. This validates that a string value contains a valid U.S. Social Security Number. This validates that a string value contains a valid IP Address. This validates that a string value contains a valid v4 IP Address. This validates that a string value contains a valid v6 IP Address. This validates that a string value contains a valid CIDR Address. This validates that a string value contains a valid v4 CIDR Address. This validates that a string value contains a valid v6 CIDR Address. This validates that a string value contains a valid resolvable TCP Address. This validates that a string value contains a valid resolvable v4 TCP Address. This validates that a string value contains a valid resolvable v6 TCP Address. This validates that a string value contains a valid resolvable UDP Address. This validates that a string value contains a valid resolvable v4 UDP Address. This validates that a string value contains a valid resolvable v6 UDP Address. This validates that a string value contains a valid resolvable IP Address. This validates that a string value contains a valid resolvable v4 IP Address. This validates that a string value contains a valid resolvable v6 IP Address. This validates that a string value contains a valid Unix Address. This validates that a string value contains a valid MAC Address. Note: See Go's ParseMAC for accepted formats and types: NOTE: When returning an error, the tag returned in "FieldError" will be the alias tag unless the dive tag is part of the alias. Everything after the dive tag is not reported as the alias tag. Also, the "ActualTag" in the before case will be the actual tag within the alias that failed. Here is a list of the current built in alias tags: Validator notes: This package panics when bad input is provided, this is by design, bad code like that should not make it to production.
Package validator implements value validations for structs and individual fields based on tags. It can also handle Cross-Field and Cross-Struct validation for nested structs and has the ability to dive into arrays and maps of any type. See more examples at https://github.com/go-playground/validator/tree/v9/_examples. Doing things this way is actually the way the standard library does, see the file.Open method here: The authors return type "error" to avoid the issue discussed in the following, where err is always != nil: Validator returns only InvalidValidationError for bad validation input, nil, or ValidationErrors as type error; so, in your code all you need to do is check if the error returned is not nil, and if it's not, check if the error is an InvalidValidationError (if necessary; most of the time it isn't) and type cast it to type ValidationErrors like so: err.(validator.ValidationErrors). A combined example is sketched at the end of this package overview. Custom Validation functions can be added. Example: Cross-Field Validation can be done via the following tags: If, however, some custom cross-field validation is required, it can be done using a custom validation. Why not just have cross-fields validation tags (i.e. only eqcsfield and not eqfield)? The reason is efficiency. If you want to check a field within the same struct "eqfield" only has to find the field on the same struct (1 level). But, if we used "eqcsfield" it could be multiple levels down. Example: Multiple validators on a field will process in the order defined. Example: Bad Validator definitions are not handled by the library. Example: Baked In Cross-Field validation only compares fields on the same struct. If Cross-Field + Cross-Struct validation is needed you should implement your own custom validator. Comma (",") is the default separator of validation tags. If you wish to have a comma included within the parameter (i.e. excludesall=,) you will need to use the UTF-8 hex representation 0x2C, which is replaced in the code as a comma, so the above will become excludesall=0x2C. Pipe ("|") is the 'or' validation tags separator. If you wish to have a pipe included within the parameter i.e. excludesall=| you will need to use the UTF-8 hex representation 0x7C, which is replaced in the code as a pipe, so the above will become excludesall=0x7C. Here is a list of the current built in validators: Tells the validation to skip this struct field; this is particularly handy in ignoring embedded structs from being validated. (Usage: -) This is the 'or' operator allowing multiple validators to be used and accepted. (Usage: rgb|rgba) <-- this would allow either rgb or rgba colors to be accepted. This can also be combined with 'and' for example ( Usage: omitempty,rgb|rgba) When a field that is a nested struct is encountered, and contains this flag any validation on the nested struct will be run, but none of the nested struct fields will be validated. This is useful if inside of your program you know the struct will be valid, but need to verify it has been assigned. NOTE: only "required" and "omitempty" can be used on a struct itself. Same as structonly tag except that any struct level validations will not run. Allows conditional validation, for example if a field is not set with a value (Determined by the "required" validator) then other validation such as min or max won't run, but if a value is set validation will run. This tells the validator to dive into a slice, array or map and validate that level of the slice, array or map with the validation tags that follow.
Multidimensional nesting is also supported, each level you wish to dive will require another dive tag. dive has some sub-tags, 'keys' & 'endkeys', please see the Keys & EndKeys section just below. Example #1 Example #2 Keys & EndKeys These are to be used together directly after the dive tag and tells the validator that anything between 'keys' and 'endkeys' applies to the keys of a map and not the values; think of it like the 'dive' tag, but for map keys instead of values. Multidimensional nesting is also supported, each level you wish to validate will require another 'keys' and 'endkeys' tag. These tags are only valid for maps. Example #1 Example #2 This validates that the value is not the data types default zero value. For numbers ensures value is not zero. For strings ensures value is not "". For slices, maps, pointers, interfaces, channels and functions ensures the value is not nil. The field under validation must be present and not empty only if any of the other specified fields are present. For strings ensures value is not "". For slices, maps, pointers, interfaces, channels and functions ensures the value is not nil. Examples: The field under validation must be present and not empty only if all of the other specified fields are present. For strings ensures value is not "". For slices, maps, pointers, interfaces, channels and functions ensures the value is not nil. Example: The field under validation must be present and not empty only when any of the other specified fields are not present. For strings ensures value is not "". For slices, maps, pointers, interfaces, channels and functions ensures the value is not nil. Examples: The field under validation must be present and not empty only when all of the other specified fields are not present. For strings ensures value is not "". For slices, maps, pointers, interfaces, channels and functions ensures the value is not nil. Example: This validates that the value is the default value and is almost the opposite of required. For numbers, length will ensure that the value is equal to the parameter given. For strings, it checks that the string length is exactly that number of characters. For slices, arrays, and maps, validates the number of items. For numbers, max will ensure that the value is less than or equal to the parameter given. For strings, it checks that the string length is at most that number of characters. For slices, arrays, and maps, validates the number of items. For numbers, min will ensure that the value is greater or equal to the parameter given. For strings, it checks that the string length is at least that number of characters. For slices, arrays, and maps, validates the number of items. For strings & numbers, eq will ensure that the value is equal to the parameter given. For slices, arrays, and maps, validates the number of items. For strings & numbers, ne will ensure that the value is not equal to the parameter given. For slices, arrays, and maps, validates the number of items. For strings, ints, and uints, oneof will ensure that the value is one of the values in the parameter. The parameter should be a list of values separated by whitespace. Values may be strings or numbers. For numbers, this will ensure that the value is greater than the parameter given. For strings, it checks that the string length is greater than that number of characters. For slices, arrays and maps it validates the number of items. Example #1 Example #2 (time.Time) For time.Time ensures the time value is greater than time.Now.UTC(). 
Same as 'min' above. Kept both to make terminology with 'len' easier. Example #1 Example #2 (time.Time) For time.Time ensures the time value is greater than or equal to time.Now.UTC(). For numbers, this will ensure that the value is less than the parameter given. For strings, it checks that the string length is less than that number of characters. For slices, arrays, and maps it validates the number of items. Example #1 Example #2 (time.Time) For time.Time ensures the time value is less than time.Now.UTC(). Same as 'max' above. Kept both to make terminology with 'len' easier. Example #1 Example #2 (time.Time) For time.Time ensures the time value is less than or equal to time.Now.UTC(). This will validate the field value against another fields value either within a struct or passed in field. Example #1: Example #2: Field Equals Another Field (relative) This does the same as eqfield except that it validates the field provided relative to the top level struct. This will validate the field value against another fields value either within a struct or passed in field. Examples: Field Does Not Equal Another Field (relative) This does the same as nefield except that it validates the field provided relative to the top level struct. Only valid for Numbers and time.Time types, this will validate the field value against another fields value either within a struct or passed in field. usage examples are for validation of a Start and End date: Example #1: Example #2: This does the same as gtfield except that it validates the field provided relative to the top level struct. Only valid for Numbers and time.Time types, this will validate the field value against another fields value either within a struct or passed in field. usage examples are for validation of a Start and End date: Example #1: Example #2: This does the same as gtefield except that it validates the field provided relative to the top level struct. Only valid for Numbers and time.Time types, this will validate the field value against another fields value either within a struct or passed in field. usage examples are for validation of a Start and End date: Example #1: Example #2: This does the same as ltfield except that it validates the field provided relative to the top level struct. Only valid for Numbers and time.Time types, this will validate the field value against another fields value either within a struct or passed in field. usage examples are for validation of a Start and End date: Example #1: Example #2: This does the same as ltefield except that it validates the field provided relative to the top level struct. This does the same as contains except for struct fields. It should only be used with string types. See the behavior of reflect.Value.String() for behavior on other types. This does the same as excludes except for struct fields. It should only be used with string types. See the behavior of reflect.Value.String() for behavior on other types. For arrays & slices, unique will ensure that there are no duplicates. For maps, unique will ensure that there are no duplicate values. For slices of struct, unique will ensure that there are no duplicate values in a field of the struct specified via a parameter. 
This validates that a string value contains ASCII alpha characters only This validates that a string value contains ASCII alphanumeric characters only This validates that a string value contains unicode alpha characters only This validates that a string value contains unicode alphanumeric characters only This validates that a string value contains a basic numeric value. basic excludes exponents etc... for integers or float it returns true. This validates that a string value contains a valid hexadecimal. This validates that a string value contains a valid hex color including hashtag (#) This validates that a string value contains a valid rgb color This validates that a string value contains a valid rgba color This validates that a string value contains a valid hsl color This validates that a string value contains a valid hsla color This validates that a string value contains a valid email This may not conform to all possibilities of any rfc standard, but neither does any email provider accept all possibilities. This validates that a string value contains a valid file path and that the file exists on the machine. This is done using os.Stat, which is a platform independent function. This validates that a string value contains a valid url This will accept any url the golang request uri accepts but must contain a schema for example http:// or rtmp:// This validates that a string value contains a valid uri This will accept any uri the golang request uri accepts This validataes that a string value contains a valid URN according to the RFC 2141 spec. This validates that a string value contains a valid base64 value. Although an empty string is valid base64 this will report an empty string as an error, if you wish to accept an empty string as valid you can use this with the omitempty tag. This validates that a string value contains a valid base64 URL safe value according the the RFC4648 spec. Although an empty string is a valid base64 URL safe value, this will report an empty string as an error, if you wish to accept an empty string as valid you can use this with the omitempty tag. This validates that a string value contains a valid bitcoin address. The format of the string is checked to ensure it matches one of the three formats P2PKH, P2SH and performs checksum validation. Bitcoin Bech32 Address (segwit) This validates that a string value contains a valid bitcoin Bech32 address as defined by bip-0173 (https://github.com/bitcoin/bips/blob/master/bip-0173.mediawiki) Special thanks to Pieter Wuille for providng reference implementations. This validates that a string value contains a valid ethereum address. The format of the string is checked to ensure it matches the standard Ethereum address format Full validation is blocked by https://github.com/golang/crypto/pull/28 This validates that a string value contains the substring value. This validates that a string value contains any Unicode code points in the substring value. This validates that a string value contains the supplied rune value. This validates that a string value does not contain the substring value. This validates that a string value does not contain any Unicode code points in the substring value. This validates that a string value does not contain the supplied rune value. This validates that a string value starts with the supplied string value This validates that a string value ends with the supplied string value This validates that a string value contains a valid isbn10 or isbn13 value. 
This validates that a string value contains a valid isbn10 value. This validates that a string value contains a valid isbn13 value. This validates that a string value contains a valid UUID. Uppercase UUID values will not pass - use `uuid_rfc4122` instead. This validates that a string value contains a valid version 3 UUID. Uppercase UUID values will not pass - use `uuid3_rfc4122` instead. This validates that a string value contains a valid version 4 UUID. Uppercase UUID values will not pass - use `uuid4_rfc4122` instead. This validates that a string value contains a valid version 5 UUID. Uppercase UUID values will not pass - use `uuid5_rfc4122` instead. This validates that a string value contains only ASCII characters. NOTE: if the string is blank, this validates as true. This validates that a string value contains only printable ASCII characters. NOTE: if the string is blank, this validates as true. This validates that a string value contains one or more multibyte characters. NOTE: if the string is blank, this validates as true. This validates that a string value contains a valid DataURI. NOTE: this will also validate that the data portion is valid base64 This validates that a string value contains a valid latitude. This validates that a string value contains a valid longitude. This validates that a string value contains a valid U.S. Social Security Number. This validates that a string value contains a valid IP Address. This validates that a string value contains a valid v4 IP Address. This validates that a string value contains a valid v6 IP Address. This validates that a string value contains a valid CIDR Address. This validates that a string value contains a valid v4 CIDR Address. This validates that a string value contains a valid v6 CIDR Address. This validates that a string value contains a valid resolvable TCP Address. This validates that a string value contains a valid resolvable v4 TCP Address. This validates that a string value contains a valid resolvable v6 TCP Address. This validates that a string value contains a valid resolvable UDP Address. This validates that a string value contains a valid resolvable v4 UDP Address. This validates that a string value contains a valid resolvable v6 UDP Address. This validates that a string value contains a valid resolvable IP Address. This validates that a string value contains a valid resolvable v4 IP Address. This validates that a string value contains a valid resolvable v6 IP Address. This validates that a string value contains a valid Unix Address. This validates that a string value contains a valid MAC Address. Note: See Go's ParseMAC for accepted formats and types: This validates that a string value is a valid Hostname according to RFC 952 https://tools.ietf.org/html/rfc952 This validates that a string value is a valid Hostname according to RFC 1123 https://tools.ietf.org/html/rfc1123 Full Qualified Domain Name (FQDN) This validates that a string value contains a valid FQDN. This validates that a string value appears to be an HTML element tag including those described at https://developer.mozilla.org/en-US/docs/Web/HTML/Element This validates that a string value is a proper character reference in decimal or hexadecimal format This validates that a string value is percent-encoded (URL encoded) according to https://tools.ietf.org/html/rfc3986#section-2.1 This validates that a string value contains a valid directory and that it exists on the machine. This is done using os.Stat, which is a platform independent function. 
NOTE: When returning an error, the tag returned in "FieldError" will be the alias tag unless the dive tag is part of the alias. Everything after the dive tag is not reported as the alias tag. Also, the "ActualTag" in the before case will be the actual tag within the alias that failed. Here is a list of the current built in alias tags: Validator notes: A collection of validation rules that are frequently needed but are more complex than the ones found in the baked in validators. A non standard validator must be registered manually like you would with your own custom validation functions. Example of registration and use: Here is a list of the current non standard validators: This package panics when bad input is provided, this is by design, bad code like that should not make it to production.
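As the combined example promised earlier in this overview, here is a rough sketch tying together struct validation, cross-field tags, map key validation with keys/endkeys, custom tag registration, and error handling. The struct, field names, and the "even-length" tag are hypothetical, and the import path shown is for v9; adjust to the version you use.

    package main

    import (
        "fmt"
        "time"

        "gopkg.in/go-playground/validator.v9"
    )

    type Booking struct {
        Email string            `validate:"required,email"`
        Start time.Time         `validate:"required"`
        End   time.Time         `validate:"required,gtfield=Start"`              // End must be after Start
        Tags  map[string]string `validate:"dive,keys,alphanum,endkeys,required"` // keys must be alphanumeric, values non-empty
        Code  string            `validate:"even-length"`                         // custom tag registered below
    }

    func main() {
        validate := validator.New()

        // Register a custom validation function for the hypothetical "even-length" tag.
        _ = validate.RegisterValidation("even-length", func(fl validator.FieldLevel) bool {
            return len(fl.Field().String())%2 == 0
        })

        b := Booking{
            Email: "not-an-email",
            Start: time.Now(),
            End:   time.Now().Add(-time.Hour),
            Tags:  map[string]string{"bad key!": ""},
            Code:  "abc",
        }

        if err := validate.Struct(b); err != nil {
            // InvalidValidationError is returned only for bad input to Struct.
            if _, ok := err.(*validator.InvalidValidationError); ok {
                fmt.Println(err)
                return
            }
            // Otherwise the error is a ValidationErrors value.
            for _, fe := range err.(validator.ValidationErrors) {
                fmt.Println(fe.Namespace(), "failed on the", fe.Tag(), "tag")
            }
        }
    }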
Package spanner provides a client for reading and writing to Cloud Spanner databases. See the packages under admin for clients that operate on databases and instances. See https://cloud.google.com/spanner/docs/getting-started/go/ for an introduction to Cloud Spanner and additional help on using this API. See https://godoc.org/cloud.google.com/go for authentication, timeouts, connection pooling and similar aspects of this package. To start working with this package, create a client that refers to the database of interest: Remember to close the client after use to free up the sessions in the session pool. To use an emulator with this library, you can set the SPANNER_EMULATOR_HOST environment variable to the address at which your emulator is running. This will send requests to that address instead of to Cloud Spanner. You can then create and use a client as usual: Two Client methods, Apply and Single, work well for simple reads and writes. As a quick introduction, here we write a new row to the database and read it back: All the methods used above are discussed in more detail below. Every Cloud Spanner row has a unique key, composed of one or more columns. Construct keys with a literal of type Key: The keys of a Cloud Spanner table are ordered. You can specify ranges of keys using the KeyRange type: By default, a KeyRange includes its start key but not its end key. Use the Kind field to specify other boundary conditions: A KeySet represents a set of keys. A single Key or KeyRange can act as a KeySet. Use the KeySets function to build the union of several KeySets: AllKeys returns a KeySet that refers to all the keys in a table: All Cloud Spanner reads and writes occur inside transactions. There are two types of transactions, read-only and read-write. Read-only transactions cannot change the database, do not acquire locks, and may access either the current database state or states in the past. Read-write transactions can read the database before writing to it, and always apply to the most recent database state. The simplest and fastest transaction is a ReadOnlyTransaction that supports a single read operation. Use Client.Single to create such a transaction. You can chain the call to Single with a call to a Read method. When you only want one row whose key you know, use ReadRow. Provide the table name, key, and the columns you want to read: Read multiple rows with the Read method. It takes a table name, KeySet, and list of columns: Read returns a RowIterator. You can call the Do method on the iterator and pass a callback: RowIterator also follows the standard pattern for the Google Cloud Client Libraries: Always call Stop when you finish using an iterator this way, whether or not you iterate to the end. (Failing to call Stop could lead you to exhaust the database's session quota.) To read rows with an index, use ReadUsingIndex. The most general form of reading uses SQL statements. Construct a Statement with NewStatement, setting any parameters using the Statement's Params map: You can also construct a Statement directly with a struct literal, providing your own map of parameters. Use the Query method to run the statement and obtain an iterator: Once you have a Row, via an iterator or a call to ReadRow, you can extract column values in several ways. Pass in a pointer to a Go variable of the appropriate type when you extract a value. 
You can extract by column position or name: You can extract all the columns at once: Or you can define a Go struct that corresponds to your columns, and extract into that: For Cloud Spanner columns that may contain NULL, use one of the NullXXX types, like NullString: To perform more than one read in a transaction, use ReadOnlyTransaction: You must call Close when you are done with the transaction. Cloud Spanner read-only transactions conceptually perform all their reads at a single moment in time, called the transaction's read timestamp. Once a read has started, you can call ReadOnlyTransaction's Timestamp method to obtain the read timestamp. By default, a transaction will pick the most recent time (a time where all previously committed transactions are visible) for its reads. This provides the freshest data, but may involve some delay. You can often get a quicker response if you are willing to tolerate "stale" data. You can control the read timestamp selected by a transaction by calling the WithTimestampBound method on the transaction before using it. For example, to perform a query on data that is at most one minute stale, use See the documentation of TimestampBound for more details. To write values to a Cloud Spanner database, construct a Mutation. The spanner package has functions for inserting, updating and deleting rows. Except for the Delete methods, which take a Key or KeyRange, each mutation-building function comes in three varieties. One takes lists of columns and values along with the table name: One takes a map from column names to values: And the third accepts a struct value, and determines the columns from the struct field names: To apply a list of mutations to the database, use Apply: If you need to read before writing in a single transaction, use a ReadWriteTransaction. ReadWriteTransactions may be aborted automatically by the backend and need to be retried. You pass in a function to ReadWriteTransaction, and the client will handle the retries automatically. Use the transaction's BufferWrite method to buffer mutations, which will all be executed at the end of the transaction: Cloud Spanner STRUCT (aka STRUCT) values (https://cloud.google.com/spanner/docs/data-types#struct-type) can be represented by a Go struct value. A proto StructType is built from the field types and field tag information of the Go struct. If a field in the struct type definition has a "spanner:<field_name>" tag, then the value of the "spanner" key in the tag is used as the name for that field in the built StructType, otherwise the field name in the struct definition is used. To specify a field with an empty field name in a Cloud Spanner STRUCT type, use the `spanner:""` tag annotation against the corresponding field in the Go struct's type definition. A STRUCT value can contain STRUCT-typed and Array-of-STRUCT typed fields and these can be specified using named struct-typed and []struct-typed fields inside a Go struct. However, embedded struct fields are not allowed. Unexported struct fields are ignored. NULL STRUCT values in Cloud Spanner are typed. A nil pointer to a Go struct value can be used to specify a NULL STRUCT value of the corresponding StructType. Nil and empty slices of a Go STRUCT type can be used to specify NULL and empty array values respectively of the corresponding StructType. A slice of pointers to a Go struct type can be used to specify an array of NULL-able STRUCT values. Spanner supports DML statements like INSERT, UPDATE and DELETE. 
Use ReadWriteTransaction.Update to run DML statements. It returns the number of rows affected. (You can also use ReadWriteTransaction.Query with a DML statement. The first call to Next on the resulting RowIterator will return iterator.Done, and the RowCount field of the iterator will hold the number of affected rows.) For large databases, it may be more efficient to partition the DML statement. Use Client.PartitionedUpdate to run a DML statement in this way. Not all DML statements can be partitioned. This client has been instrumented to use OpenCensus tracing (http://opencensus.io). To enable tracing, see "Enabling Tracing for a Program" at https://godoc.org/go.opencensus.io/trace. OpenCensus tracing requires Go 1.8 or higher.
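Pulling together the read, mutation, and DML pieces described above, here is a rough sketch; the database path, table name, columns, and SQL are placeholders, not the original documentation's examples.

    package main

    import (
        "context"
        "fmt"
        "log"

        "cloud.google.com/go/spanner"
    )

    func main() {
        ctx := context.Background()

        client, err := spanner.NewClient(ctx, "projects/my-project/instances/my-instance/databases/my-db")
        if err != nil {
            log.Fatal(err)
        }
        defer client.Close()

        // Blind write with a mutation.
        if _, err := client.Apply(ctx, []*spanner.Mutation{
            spanner.Insert("Accounts", []string{"user", "balance"}, []interface{}{"alice", int64(100)}),
        }); err != nil {
            log.Fatal(err)
        }

        // Single-use read-only transaction: read one row by key.
        row, err := client.Single().ReadRow(ctx, "Accounts", spanner.Key{"alice"}, []string{"balance"})
        if err != nil {
            log.Fatal(err)
        }
        var balance int64
        if err := row.Column(0, &balance); err != nil {
            log.Fatal(err)
        }
        fmt.Println("balance:", balance)

        // DML and buffered mutations inside a read-write transaction; the
        // callback may be retried, and buffered work commits when it returns nil.
        _, err = client.ReadWriteTransaction(ctx, func(ctx context.Context, txn *spanner.ReadWriteTransaction) error {
            if _, err := txn.Update(ctx, spanner.Statement{
                SQL: `UPDATE Accounts SET balance = balance + 10 WHERE user = "alice"`,
            }); err != nil {
                return err
            }
            return txn.BufferWrite([]*spanner.Mutation{
                spanner.InsertOrUpdate("Accounts", []string{"user", "balance"}, []interface{}{"bob", int64(50)}),
            })
        })
        if err != nil {
            log.Fatal(err)
        }
    }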
Package otto is a JavaScript parser and interpreter written natively in Go. http://godoc.org/github.com/robertkrimen/otto Run something in the VM Get a value out of the VM Set a number Set a string Get the value of an expression An error happens Set a Go function Set a Go function that returns something useful Use the functions in JavaScript A separate parser is available in the parser package if you're just interested in building an AST. http://godoc.org/github.com/robertkrimen/otto/parser Parse and return an AST otto You can run (Go) JavaScript from the commandline with: http://github.com/robertkrimen/otto/tree/master/otto Run JavaScript by entering some source on stdin or by giving otto a filename: underscore Optionally include the JavaScript utility-belt library, underscore, with this import: For more information: http://github.com/robertkrimen/otto/tree/master/underscore The following are some limitations with otto: Go translates JavaScript-style regular expressions into something that is "regexp" compatible via `parser.TransformRegExp`. Unfortunately, RegExp requires backtracking for some patterns, and backtracking is not supported by the standard Go engine: https://code.google.com/p/re2/wiki/Syntax Therefore, the following syntax is incompatible: A brief discussion of these limitations: "Regexp (?!re)" https://groups.google.com/forum/?fromgroups=#%21topic/golang-nuts/7qgSDWPIh_E More information about re2: https://code.google.com/p/re2/ In addition to the above, re2 (Go) has a different definition for \s: [\t\n\f\r ]. The JavaScript definition, on the other hand, also includes \v, Unicode "Separator, Space", etc. If you want to stop long-running executions (like third-party code), you can use the interrupt channel to do this: Where is setTimeout/setInterval? These timing functions are not actually part of the ECMA-262 specification. Typically, they belong to the `window` object (in the browser). It would not be difficult to provide something like these via Go, but you probably want to wrap otto in an event loop in that case. For an example of how this could be done in Go with otto, see natto: http://github.com/robertkrimen/natto Here is some more discussion of the issue: * http://book.mixu.net/node/ch2.html * http://en.wikipedia.org/wiki/Reentrancy_%28computing%29 * http://aaroncrane.co.uk/2009/02/perl_safe_signals/
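A short sketch of the basic flow described above (run some source, pull a value out, and expose a Go function to JavaScript); the variable and function names are arbitrary.

    package main

    import (
        "fmt"
        "log"

        "github.com/robertkrimen/otto"
    )

    func main() {
        vm := otto.New()

        // Run something in the VM.
        if _, err := vm.Run(`var abc = 2 + 2;`); err != nil {
            log.Fatal(err)
        }

        // Get a value out of the VM.
        value, err := vm.Get("abc")
        if err != nil {
            log.Fatal(err)
        }
        n, _ := value.ToInteger()
        fmt.Println(n) // 4

        // Set a Go function callable from JavaScript.
        vm.Set("sayHello", func(call otto.FunctionCall) otto.Value {
            fmt.Println("Hello,", call.Argument(0).String())
            return otto.Value{}
        })
        vm.Run(`sayHello("world");`)
    }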
Package btree implements in-memory B-Trees of arbitrary degree. btree implements an in-memory B-Tree for use as an ordered data structure. It is not meant for persistent storage solutions. It has a flatter structure than an equivalent red-black or other binary tree, which in some cases yields better memory usage and/or performance. See some discussion on the matter here: Note, though, that this project is in no way related to the C++ B-Tree implementation written about there. Within this tree, each node contains a slice of items and a (possibly nil) slice of children. For basic numeric values or raw structs, this can cause efficiency differences when compared to equivalent C++ template code that stores values in arrays within the node: These issues don't tend to matter, though, when working with strings or other heap-allocated structures, since C++-equivalent structures also must store pointers and also distribute their values across the heap. This implementation is designed to be a drop-in replacement for gollrb.LLRB trees (http://github.com/petar/gollrb), an excellent and probably the most widely used ordered tree implementation in the Go ecosystem currently. Its functions, therefore, exactly mirror those of llrb.LLRB where possible. Unlike gollrb, though, we currently don't support storing multiple equivalent values. There are two implementations; those suffixed with 'G' are generic, usable for any type, and require a passed-in "less" function to define their ordering. Those without this suffix are specific to the 'Item' interface, and use its 'Less' function for ordering.
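A minimal sketch using the Item-based API; degree 32 is an arbitrary choice, and the generic 'G' variants follow the same pattern but take a less function instead of relying on the Item interface.

    package main

    import (
        "fmt"

        "github.com/google/btree"
    )

    func main() {
        // Degree 32 is an arbitrary choice; tune it for your workload.
        tr := btree.New(32)

        for i := 0; i < 10; i++ {
            tr.ReplaceOrInsert(btree.Int(i))
        }

        fmt.Println(tr.Len())              // 10
        fmt.Println(tr.Get(btree.Int(3)))  // 3
        fmt.Println(tr.Has(btree.Int(42))) // false
    }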
Package kms provides the API client, operations, and parameter types for AWS Key Management Service. Key Management Service (KMS) is an encryption and key management web service. This guide describes the KMS operations that you can call programmatically. For general information about KMS, see the Key Management Service Developer Guide. KMS has replaced the term customer master key (CMK) with KMS key and KMS keys. The concept has not changed. To prevent breaking changes, KMS is keeping some variations of this term. Amazon Web Services provides SDKs that consist of libraries and sample code for various programming languages and platforms (Java, Ruby, .Net, macOS, Android, etc.). The SDKs provide a convenient way to create programmatic access to KMS and other Amazon Web Services services. For example, the SDKs take care of tasks such as signing requests (see below), managing errors, and retrying requests automatically. For more information about the Amazon Web Services SDKs, including how to download and install them, see Tools for Amazon Web Services. We recommend that you use the Amazon Web Services SDKs to make programmatic API calls to KMS. If you need to use FIPS 140-2 validated cryptographic modules when communicating with Amazon Web Services, use the FIPS endpoint in your preferred Amazon Web Services Region. For more information about the available FIPS endpoints, see Service endpoints in the Key Management Service topic of the Amazon Web Services General Reference. All KMS API calls must be signed and be transmitted using Transport Layer Security (TLS). KMS recommends you always use the latest supported TLS version. Clients must also support cipher suites with Perfect Forward Secrecy (PFS) such as Ephemeral Diffie-Hellman (DHE) or Elliptic Curve Ephemeral Diffie-Hellman (ECDHE). Most modern systems such as Java 7 and later support these modes. Requests must be signed using an access key ID and a secret access key. We strongly recommend that you do not use your Amazon Web Services account root access key ID and secret access key for everyday work. You can use the access key ID and secret access key for an IAM user or you can use the Security Token Service (STS) to generate temporary security credentials and use those to sign requests. All KMS requests must be signed with Signature Version 4. KMS supports CloudTrail, a service that logs Amazon Web Services API calls and related events for your Amazon Web Services account and delivers them to an Amazon S3 bucket that you specify. By using the information collected by CloudTrail, you can determine what requests were made to KMS, who made the request, when it was made, and so on. To learn more about CloudTrail, including how to turn it on and find your log files, see the CloudTrail User Guide. For more information about credentials and request signing, see the following: Amazon Web Services Security Credentials Temporary Security Credentials Signature Version 4 Signing Process Of the API operations discussed in this guide, the following will prove the most useful for most applications. You will likely perform operations other than these, such as creating keys and assigning policies, by using the console.
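As a rough sketch of calling the API through this client (request signing and retries are handled by the SDK, as described above); the key alias is a placeholder.

    package main

    import (
        "context"
        "fmt"
        "log"

        "github.com/aws/aws-sdk-go-v2/aws"
        "github.com/aws/aws-sdk-go-v2/config"
        "github.com/aws/aws-sdk-go-v2/service/kms"
    )

    func main() {
        ctx := context.Background()

        // Credentials and region come from the default credential/config chain.
        cfg, err := config.LoadDefaultConfig(ctx)
        if err != nil {
            log.Fatal(err)
        }
        client := kms.NewFromConfig(cfg)

        // Encrypt a small payload under an existing KMS key.
        out, err := client.Encrypt(ctx, &kms.EncryptInput{
            KeyId:     aws.String("alias/my-key"), // placeholder key alias
            Plaintext: []byte("hello"),
        })
        if err != nil {
            log.Fatal(err)
        }
        fmt.Printf("ciphertext is %d bytes\n", len(out.CiphertextBlob))
    }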
Package secretsmanager provides the API client, operations, and parameter types for AWS Secrets Manager. Amazon Web Services Secrets Manager provides a service to enable you to store, manage, and retrieve secrets. This guide provides descriptions of the Secrets Manager API. For more information about using this service, see the Amazon Web Services Secrets Manager User Guide. This version of the Secrets Manager API Reference documents the Secrets Manager API version 2017-10-17. For a list of endpoints, see Amazon Web Services Secrets Manager endpoints. We welcome your feedback. Send your comments to awssecretsmanager-feedback@amazon.com, or post your feedback and questions in the Amazon Web Services Secrets Manager Discussion Forum. For more information about the Amazon Web Services Discussion Forums, see Forums Help. Amazon Web Services Secrets Manager supports Amazon Web Services CloudTrail, a service that records Amazon Web Services API calls for your Amazon Web Services account and delivers log files to an Amazon S3 bucket. By using information that's collected by Amazon Web Services CloudTrail, you can determine the requests successfully made to Secrets Manager, who made the request, when it was made, and so on. For more about Amazon Web Services Secrets Manager and support for Amazon Web Services CloudTrail, see Logging Amazon Web Services Secrets Manager Events with Amazon Web Services CloudTrail in the Amazon Web Services Secrets Manager User Guide. To learn more about CloudTrail, including enabling it and finding your log files, see the Amazon Web Services CloudTrail User Guide.
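A minimal sketch of retrieving a secret with this client; the secret name is a placeholder.

    package main

    import (
        "context"
        "fmt"
        "log"

        "github.com/aws/aws-sdk-go-v2/aws"
        "github.com/aws/aws-sdk-go-v2/config"
        "github.com/aws/aws-sdk-go-v2/service/secretsmanager"
    )

    func main() {
        ctx := context.Background()

        cfg, err := config.LoadDefaultConfig(ctx)
        if err != nil {
            log.Fatal(err)
        }
        client := secretsmanager.NewFromConfig(cfg)

        // Fetch the current value of a secret (name is a placeholder).
        out, err := client.GetSecretValue(ctx, &secretsmanager.GetSecretValueInput{
            SecretId: aws.String("prod/db-password"),
        })
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println(aws.ToString(out.SecretString))
    }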
Package kafkapubsub provides an implementation of pubsub for Kafka. It requires a minimum Kafka version of 0.11.x for Header support. Some functionality may work with earlier versions of Kafka. See https://kafka.apache.org/documentation.html#semantics for a discussion of message semantics in Kafka. sarama.Config exposes many knobs that can affect performance and semantics, so review and set them carefully. kafkapubsub does not support Message.Nack; Message.Nackable will return false, and Message.Nack will panic if called. For pubsub.OpenTopic and pubsub.OpenSubscription, kafkapubsub registers for the scheme "kafka". The default URL opener will connect to a default set of Kafka brokers based on the environment variable "KAFKA_BROKERS", expected to be a comma-delimited set of server addresses. To customize the URL opener, or for more details on the URL format, see URLOpener. See https://gocloud.dev/concepts/urls/ for background information. Go CDK supports all UTF-8 strings. No escaping is required for Kafka. Message metadata is supported through Kafka Headers, which allow arbitrary []byte for both key and value. These are converted to string for use in Message.Metadata. kafkapubsub exposes the following types for As:
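As a rough sketch of the URL-opener flow described above; the topic name is a placeholder, and KAFKA_BROKERS must be set in the environment (for example "localhost:9092").

    package main

    import (
        "context"
        "log"

        "gocloud.dev/pubsub"
        _ "gocloud.dev/pubsub/kafkapubsub" // registers the "kafka" scheme
    )

    func main() {
        ctx := context.Background()

        topic, err := pubsub.OpenTopic(ctx, "kafka://my-topic")
        if err != nil {
            log.Fatal(err)
        }
        defer topic.Shutdown(ctx)

        if err := topic.Send(ctx, &pubsub.Message{
            Body:     []byte("hello"),
            Metadata: map[string]string{"trace": "abc123"}, // carried as Kafka headers
        }); err != nil {
            log.Fatal(err)
        }
    }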
Package walletdb provides a namespaced database interface for btcwallet. A wallet essentially consists of a multitude of stored data such as private and public keys, key derivation bits, pay-to-script-hash scripts, and various metadata. One of the issues with many wallets is they are tightly integrated. Designing a wallet with loosely coupled components that provide specific functionality is ideal, however it presents a challenge in regards to data storage since each component needs to store its own data without knowing the internals of other components or breaking atomicity. This package solves this issue by providing a pluggable driver, namespaced database interface that is intended to be used by the main wallet daemon. This allows the potential for any backend database type with a suitable driver. Each component, which will typically be a package, can then implement various functionality such as address management, voting pools, and colored coin metadata in their own namespace without having to worry about conflicts with other packages even though they are sharing the same database that is managed by the wallet. A quick overview of the features walletdb provides are as follows: The main entry point is the DB interface. It exposes functionality for creating, retrieving, and removing namespaces. It is obtained via the Create and Open functions which take a database type string that identifies the specific database driver (backend) to use as well as arguments specific to the specified driver. The Namespace interface is an abstraction that provides facilities for obtaining transactions (the Tx interface) that are the basis of all database reads and writes. Unlike some database interfaces that support reading and writing without transactions, this interface requires transactions even when only reading or writing a single key. The Begin function provides an unmanaged transaction while the View and Update functions provide a managed transaction. These are described in more detail below. The Tx interface provides facilities for rolling back or commiting changes that took place while the transaction was active. It also provides the root bucket under which all keys, values, and nested buckets are stored. A transaction can either be read-only or read-write and managed or unmanaged. A managed transaction is one where the caller provides a function to execute within the context of the transaction and the commit or rollback is handled automatically depending on whether or not the provided function returns an error. Attempting to manually call Rollback or Commit on the managed transaction will result in a panic. An unmanaged transaction, on the other hand, requires the caller to manually call Commit or Rollback when they are finished with it. Leaving transactions open for long periods of time can have several adverse effects, so it is recommended that managed transactions are used instead. The Bucket interface provides the ability to manipulate key/value pairs and nested buckets as well as iterate through them. The Get, Put, and Delete functions work with key/value pairs, while the Bucket, CreateBucket, CreateBucketIfNotExists, and DeleteBucket functions work with buckets. The ForEach function allows the caller to provide a function to be called with each key/value pair and nested bucket in the current bucket. As discussed above, all of the functions which are used to manipulate key/value pairs and nested buckets exist on the Bucket interface. 
The root bucket is the upper-most bucket in a namespace under which data is stored and is created at the same time as the namespace. Use the RootBucket function on the Tx interface to retrieve it. The CreateBucket and CreateBucketIfNotExists functions on the Bucket interface provide the ability to create an arbitrary number of nested buckets. It is a good idea to avoid a lot of buckets with little data in them as it could lead to poor page utilization depending on the specific driver in use. This example demonstrates creating a new database, getting a namespace from it, and using a managed read-write transaction against the namespace to store and retrieve data.
Package gorocksdb provides the ability to create and access RocksDB databases. gorocksdb.OpenDb opens and creates databases. The DB struct returned by OpenDb provides DB.Get, DB.Put, DB.Merge and DB.Delete to modify and query the database. For bulk reads, use an Iterator. If you want to avoid disturbing your live traffic while doing the bulk read, be sure to call SetFillCache(false) on the ReadOptions you use when creating the Iterator. Batched, atomic writes can be performed with a WriteBatch and DB.Write. If your working dataset does not fit in memory, you'll want to add a bloom filter to your database. NewBloomFilter and BlockBasedTableOptions.SetFilterPolicy are what you want. The argument to NewBloomFilter is the number of bits in the filter to use per key in your database. If you're using a custom comparator in your code, be aware you may have to make your own filter policy object. This documentation is not a complete discussion of RocksDB. Please read the RocksDB documentation <http://rocksdb.org/> for information on its operation. You'll find lots of goodies there.
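A hedged sketch, assuming the commonly used tecbot/gorocksdb bindings; the path and keys are placeholders, and error handling and Slice freeing are abbreviated.

    package main

    import (
        "fmt"
        "log"

        "github.com/tecbot/gorocksdb"
    )

    func main() {
        // Options with a bloom filter of 10 bits per key.
        bbto := gorocksdb.NewDefaultBlockBasedTableOptions()
        bbto.SetFilterPolicy(gorocksdb.NewBloomFilter(10))
        opts := gorocksdb.NewDefaultOptions()
        opts.SetBlockBasedTableFactory(bbto)
        opts.SetCreateIfMissing(true)

        db, err := gorocksdb.OpenDb(opts, "/tmp/example-rocksdb")
        if err != nil {
            log.Fatal(err)
        }
        defer db.Close()

        wo := gorocksdb.NewDefaultWriteOptions()
        if err := db.Put(wo, []byte("foo"), []byte("bar")); err != nil {
            log.Fatal(err)
        }

        // For bulk reads, avoid polluting the block cache.
        ro := gorocksdb.NewDefaultReadOptions()
        ro.SetFillCache(false)
        it := db.NewIterator(ro)
        defer it.Close()
        for it.SeekToFirst(); it.Valid(); it.Next() {
            fmt.Printf("%s = %s\n", it.Key().Data(), it.Value().Data())
        }
    }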
Package context stores values shared during a request lifetime. Note: gorilla/context, having been born well before `context.Context` existed, does not play well with the shallow copying of the request that http.Request.WithContext (https://golang.org/pkg/net/http/#Request.WithContext, added to net/http in Go 1.7) performs. You should either use just gorilla/context, or, moving forward, the new `http.Request.Context()`. For example, a router can set variables extracted from the URL and later application handlers can access those values, or it can be used to store session values to be saved at the end of a request. There are several other common uses. The idea was posted by Brad Fitzpatrick to the go-nuts mailing list: Here's the basic usage: first define the keys that you will need. The key type is interface{} so a key can be of any type that supports equality. Here we define a key using a custom int type to avoid name collisions: Then set a variable. Variables are bound to an http.Request object, so you need a request instance to set a value: The application can later access the variable using the same key you provided: And that's all about the basic usage. We discuss some other ideas below. Any type can be stored in the context. To enforce a given type, make the key private and wrap Get() and Set() to accept and return values of a specific type: Variables must be cleared at the end of a request, to remove all values that were stored. This can be done in an http.Handler, after a request was served. Just call Clear() passing the request: ...or use ClearHandler(), which conveniently wraps an http.Handler to clear variables at the end of a request lifetime. The Routers from the packages gorilla/mux and gorilla/pat call Clear(), so if you are using either of them you don't need to clear the context manually.
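A short sketch of the usage described above; the key and handler names are arbitrary.

    package main

    import (
        "fmt"
        "net/http"

        "github.com/gorilla/context"
    )

    // Use a private key type to avoid collisions with other packages.
    type key int

    const userKey key = 0

    func handler(w http.ResponseWriter, r *http.Request) {
        context.Set(r, userKey, "gopher")           // set a request-scoped value
        user, _ := context.Get(r, userKey).(string) // read it back later in the request
        fmt.Fprintln(w, "hello,", user)
    }

    func main() {
        // ClearHandler removes all values for the request when it finishes.
        http.ListenAndServe(":8080", context.ClearHandler(http.HandlerFunc(handler)))
    }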
Package wire implements the Decred wire protocol. For the complete details of the Decred protocol, see the official wiki entry at https://en.bitcoin.it/wiki/Protocol_specification. The following only serves as a quick overview to provide information on how to use the package. At a high level, this package provides support for marshalling and unmarshalling supported Decred messages to and from the wire. This package does not deal with the specifics of message handling such as what to do when a message is received. This provides the caller with a high level of flexibility. The Decred protocol consists of exchanging messages between peers. Each message is preceded by a header which identifies information about it such as which Decred network it is a part of, its type, how big it is, and a checksum to verify validity. All encoding and decoding of message headers is handled by this package. To accomplish this, there is a generic interface for Decred messages named Message which allows messages of any type to be read, written, or passed around through channels, functions, etc. In addition, concrete implementations of most of the currently supported Decred messages are provided. For these supported messages, all of the details of marshalling and unmarshalling to and from the wire using Decred encoding are handled so the caller doesn't have to concern themselves with the specifics. The following provides a quick summary of how the Decred messages are intended to interact with one another. As stated above, these interactions are not directly handled by this package. For more in-depth details about the appropriate interactions, see the official Decred protocol wiki entry at https://en.bitcoin.it/wiki/Protocol_specification. The initial handshake consists of two peers sending each other a version message (MsgVersion) followed by responding with a verack message (MsgVerAck). Both peers use the information in the version message (MsgVersion) to negotiate things such as protocol version and supported services with each other. Once the initial handshake is complete, the following chart indicates message interactions in no particular order. There are several common parameters that arise when using this package to read and write Decred messages. The following sections provide a quick overview of these parameters so the next sections can build on them. The protocol version should be negotiated with the remote peer at a higher level than this package via the version (MsgVersion) message exchange, however, this package provides the wire.ProtocolVersion constant which indicates the latest protocol version this package supports and is typically the value to use for all outbound connections before a potentially lower protocol version is negotiated. The Decred network is a magic number which is used to identify the start of a message and which Decred network the message applies to. This package provides the following constants: As discussed in the Decred message overview section, this package reads and writes Decred messages using a generic interface named Message. In order to determine the actual concrete type of the message, use a type switch or type assertion. An example of a type switch follows: In order to unmarshall Decred messages from the wire, use the ReadMessage function. It accepts any io.Reader, but typically this will be a net.Conn to a remote node running a Decred peer. Example syntax is: In order to marshall Decred messages to the wire, use the WriteMessage function. 
It accepts any io.Writer, but typically this will be a net.Conn to a remote node running a Decred peer. Example syntax to request addresses from a remote peer is: The errors returned by this package are either the raw errors provided by underlying calls to read/write from streams such as io.EOF, io.ErrUnexpectedEOF, and io.ErrShortWrite, or of type wire.MessageError. This allows the caller to differentiate between general IO errors and malformed messages through type assertions. This package includes spec changes outlined by the following BIPs:
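The elided examples above looked roughly like the following sketch. The exact identifiers (wire.ProtocolVersion, wire.MainNet, NewMsgGetAddr, and the MsgVersion and MsgBlock fields) are assumed from the btcd/dcrd style of this package and should be checked against the actual API; the peer address and import path are made up for illustration.

    package main

    import (
        "fmt"
        "log"
        "net"

        "github.com/decred/dcrd/wire" // import path assumed
    )

    func main() {
        // Address of a remote peer; made up for illustration.
        conn, err := net.Dial("tcp", "127.0.0.1:9108")
        if err != nil {
            log.Fatal(err)
        }
        defer conn.Close()

        pver := wire.ProtocolVersion // latest protocol version this package supports
        dcrnet := wire.MainNet       // which Decred network the messages belong to

        // Read and validate the next message from the peer. The second return
        // value holds the raw payload bytes.
        msg, rawPayload, err := wire.ReadMessage(conn, pver, dcrnet)
        if err != nil {
            log.Fatal(err)
        }
        _ = rawPayload

        // React to the concrete message type with a type switch.
        switch m := msg.(type) {
        case *wire.MsgVersion:
            fmt.Println("peer protocol version:", m.ProtocolVersion)
        case *wire.MsgBlock:
            fmt.Println("transactions in block:", len(m.Transactions))
        }

        // Request known addresses from the remote peer.
        if err := wire.WriteMessage(conn, wire.NewMsgGetAddr(), pver, dcrnet); err != nil {
            log.Fatal(err)
        }
    }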
Package servicecatalog provides the API client, operations, and parameter types for AWS Service Catalog. Service Catalog enables organizations to create and manage catalogs of IT services that are approved for use on Amazon Web Services. To get the most out of this documentation, you should be familiar with the terminology discussed in Service Catalog Concepts.
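As a quick orientation, a client is typically constructed from a shared AWS configuration and then used to call individual operations. The sketch below assumes the aws-sdk-go-v2 layout and uses ListPortfolios as an arbitrary example operation; treat the output field names as assumptions based on the generated API types.

    package main

    import (
        "context"
        "fmt"
        "log"

        "github.com/aws/aws-sdk-go-v2/aws"
        "github.com/aws/aws-sdk-go-v2/config"
        "github.com/aws/aws-sdk-go-v2/service/servicecatalog"
    )

    func main() {
        // Region and credentials come from the environment / shared config files.
        cfg, err := config.LoadDefaultConfig(context.TODO())
        if err != nil {
            log.Fatal(err)
        }

        client := servicecatalog.NewFromConfig(cfg)

        // List the portfolios visible to the caller.
        out, err := client.ListPortfolios(context.TODO(), &servicecatalog.ListPortfoliosInput{})
        if err != nil {
            log.Fatal(err)
        }
        for _, p := range out.PortfolioDetails {
            fmt.Println(aws.ToString(p.DisplayName), aws.ToString(p.Id))
        }
    }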
Package validator implements value validations for structs and individual fields based on tags. It can also handle Cross-Field and Cross-Struct validation for nested structs and has the ability to dive into arrays and maps of any type. See more examples at https://github.com/go-playground/validator/tree/v9/_examples Doing things this way is actually the way the standard library does it; see the file.Open method here: The authors return type "error" to avoid the issue discussed in the following, where err is always != nil: Validator returns only InvalidValidationError for bad validation input, nil, or ValidationErrors as type error; so, in your code all you need to do is check if the error returned is not nil, and if it's not, check if the error is InvalidValidationError (if necessary, most of the time it isn't) and type cast it to type ValidationErrors like so: err.(validator.ValidationErrors). Custom Validation functions can be added. Example: Cross-Field Validation can be done via the following tags: If, however, some custom cross-field validation is required, it can be done using a custom validation. Why not just have cross-fields validation tags (i.e. only eqcsfield and not eqfield)? The reason is efficiency. If you want to check a field within the same struct, "eqfield" only has to find the field on the same struct (1 level). But, if we used "eqcsfield" it could be multiple levels down. Example: Multiple validators on a field will process in the order defined. Example: Bad Validator definitions are not handled by the library. Example: Baked In Cross-Field validation only compares fields on the same struct. If Cross-Field + Cross-Struct validation is needed you should implement your own custom validator. Comma (",") is the default separator of validation tags. If you wish to have a comma included within the parameter (i.e. excludesall=,) you will need to use the UTF-8 hex representation 0x2C, which is replaced in the code as a comma, so the above will become excludesall=0x2C. Pipe ("|") is the 'or' validation tags separator. If you wish to have a pipe included within the parameter i.e. excludesall=| you will need to use the UTF-8 hex representation 0x7C, which is replaced in the code as a pipe, so the above will become excludesall=0x7C. Here is a list of the current built in validators: Tells the validation to skip this struct field; this is particularly handy in ignoring embedded structs from being validated. (Usage: -) This is the 'or' operator allowing multiple validators to be used and accepted. (Usage: rgb|rgba) <-- this would allow either rgb or rgba colors to be accepted. This can also be combined with 'and', for example (Usage: omitempty,rgb|rgba). When a field that is a nested struct is encountered, and contains this flag, any validation on the nested struct will be run, but none of the nested struct fields will be validated. This is useful if inside of your program you know the struct will be valid, but need to verify it has been assigned. NOTE: only "required" and "omitempty" can be used on a struct itself. Same as structonly tag except that any struct level validations will not run. Allows conditional validation, for example if a field is not set with a value (determined by the "required" validator) then other validation such as min or max won't run, but if a value is set validation will run. This tells the validator to dive into a slice, array or map and validate that level of the slice, array or map with the validation tags that follow.
Multidimensional nesting is also supported, each level you wish to dive will require another dive tag. dive has some sub-tags, 'keys' & 'endkeys', please see the Keys & EndKeys section just below. Example #1 Example #2 Keys & EndKeys These are to be used together directly after the dive tag and tells the validator that anything between 'keys' and 'endkeys' applies to the keys of a map and not the values; think of it like the 'dive' tag, but for map keys instead of values. Multidimensional nesting is also supported, each level you wish to validate will require another 'keys' and 'endkeys' tag. These tags are only valid for maps. Example #1 Example #2 This validates that the value is not the data types default zero value. For numbers ensures value is not zero. For strings ensures value is not "". For slices, maps, pointers, interfaces, channels and functions ensures the value is not nil. The field under validation must be present and not empty only if any of the other specified fields are present. For strings ensures value is not "". For slices, maps, pointers, interfaces, channels and functions ensures the value is not nil. Examples: The field under validation must be present and not empty only if all of the other specified fields are present. For strings ensures value is not "". For slices, maps, pointers, interfaces, channels and functions ensures the value is not nil. Example: The field under validation must be present and not empty only when any of the other specified fields are not present. For strings ensures value is not "". For slices, maps, pointers, interfaces, channels and functions ensures the value is not nil. Examples: The field under validation must be present and not empty only when all of the other specified fields are not present. For strings ensures value is not "". For slices, maps, pointers, interfaces, channels and functions ensures the value is not nil. Example: This validates that the value is the default value and is almost the opposite of required. For numbers, length will ensure that the value is equal to the parameter given. For strings, it checks that the string length is exactly that number of characters. For slices, arrays, and maps, validates the number of items. For numbers, max will ensure that the value is less than or equal to the parameter given. For strings, it checks that the string length is at most that number of characters. For slices, arrays, and maps, validates the number of items. For numbers, min will ensure that the value is greater or equal to the parameter given. For strings, it checks that the string length is at least that number of characters. For slices, arrays, and maps, validates the number of items. For strings & numbers, eq will ensure that the value is equal to the parameter given. For slices, arrays, and maps, validates the number of items. For strings & numbers, ne will ensure that the value is not equal to the parameter given. For slices, arrays, and maps, validates the number of items. For strings, ints, and uints, oneof will ensure that the value is one of the values in the parameter. The parameter should be a list of values separated by whitespace. Values may be strings or numbers. For numbers, this will ensure that the value is greater than the parameter given. For strings, it checks that the string length is greater than that number of characters. For slices, arrays and maps it validates the number of items. Example #1 Example #2 (time.Time) For time.Time ensures the time value is greater than time.Now.UTC(). 
Same as 'min' above. Kept both to make terminology with 'len' easier. Example #1 Example #2 (time.Time) For time.Time ensures the time value is greater than or equal to time.Now.UTC(). For numbers, this will ensure that the value is less than the parameter given. For strings, it checks that the string length is less than that number of characters. For slices, arrays, and maps it validates the number of items. Example #1 Example #2 (time.Time) For time.Time ensures the time value is less than time.Now.UTC(). Same as 'max' above. Kept both to make terminology with 'len' easier. Example #1 Example #2 (time.Time) For time.Time ensures the time value is less than or equal to time.Now.UTC(). This will validate the field value against another field's value either within a struct or a passed in field. Example #1: Example #2: Field Equals Another Field (relative) This does the same as eqfield except that it validates the field provided relative to the top level struct. This will validate the field value against another field's value either within a struct or a passed in field. Examples: Field Does Not Equal Another Field (relative) This does the same as nefield except that it validates the field provided relative to the top level struct. Only valid for Numbers and time.Time types, this will validate the field value against another field's value either within a struct or a passed in field. Usage examples are for validation of a Start and End date: Example #1: Example #2: This does the same as gtfield except that it validates the field provided relative to the top level struct. Only valid for Numbers and time.Time types, this will validate the field value against another field's value either within a struct or a passed in field. Usage examples are for validation of a Start and End date: Example #1: Example #2: This does the same as gtefield except that it validates the field provided relative to the top level struct. Only valid for Numbers and time.Time types, this will validate the field value against another field's value either within a struct or a passed in field. Usage examples are for validation of a Start and End date: Example #1: Example #2: This does the same as ltfield except that it validates the field provided relative to the top level struct. Only valid for Numbers and time.Time types, this will validate the field value against another field's value either within a struct or a passed in field. Usage examples are for validation of a Start and End date: Example #1: Example #2: This does the same as ltefield except that it validates the field provided relative to the top level struct. This does the same as contains except for struct fields. It should only be used with string types. See the behavior of reflect.Value.String() for behavior on other types. This does the same as excludes except for struct fields. It should only be used with string types. See the behavior of reflect.Value.String() for behavior on other types. For arrays & slices, unique will ensure that there are no duplicates. For maps, unique will ensure that there are no duplicate values. For slices of struct, unique will ensure that there are no duplicate values in a field of the struct specified via a parameter.
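For instance, the cross-field tags above can be combined like this; a sketch using the v9 import path, with made-up struct and field names.

    package main

    import (
        "fmt"
        "time"

        validator "gopkg.in/go-playground/validator.v9"
    )

    type Booking struct {
        // End must be after Start; gtfield compares against another field of the
        // same struct, while gtcsfield would resolve the field relative to the
        // top level struct.
        Start time.Time `validate:"required"`
        End   time.Time `validate:"required,gtfield=Start"`

        // ConfirmPassword must equal Password.
        Password        string `validate:"required,min=8"`
        ConfirmPassword string `validate:"required,eqfield=Password"`
    }

    func main() {
        validate := validator.New()
        b := Booking{
            Start:           time.Now(),
            End:             time.Now().Add(-time.Hour), // invalid: before Start
            Password:        "supersecret",
            ConfirmPassword: "supersecret",
        }
        if err := validate.Struct(b); err != nil {
            fmt.Println(err) // reports the failing field and tag, e.g. End / gtfield
        }
    }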
This validates that a string value contains ASCII alpha characters only. This validates that a string value contains ASCII alphanumeric characters only. This validates that a string value contains unicode alpha characters only. This validates that a string value contains unicode alphanumeric characters only. This validates that a string value contains a basic numeric value. Basic excludes exponents etc...; for integers or floats it returns true. This validates that a string value contains a valid hexadecimal. This validates that a string value contains a valid hex color including the hashtag (#). This validates that a string value contains a valid rgb color. This validates that a string value contains a valid rgba color. This validates that a string value contains a valid hsl color. This validates that a string value contains a valid hsla color. This validates that a string value contains a valid email. This may not conform to all possibilities of any RFC standard, but neither does any email provider accept all possibilities. This validates that a string value contains a valid file path and that the file exists on the machine. This is done using os.Stat, which is a platform independent function. This validates that a string value contains a valid url. This will accept any url the golang request uri accepts but it must contain a scheme, for example http:// or rtmp://. This validates that a string value contains a valid uri. This will accept any uri the golang request uri accepts. This validates that a string value contains a valid URN according to the RFC 2141 spec. This validates that a string value contains a valid base64 value. Although an empty string is valid base64, this will report an empty string as an error; if you wish to accept an empty string as valid you can use this with the omitempty tag. This validates that a string value contains a valid base64 URL safe value according to the RFC4648 spec. Although an empty string is a valid base64 URL safe value, this will report an empty string as an error; if you wish to accept an empty string as valid you can use this with the omitempty tag. This validates that a string value contains a valid bitcoin address. The format of the string is checked to ensure it matches one of the formats P2PKH or P2SH and performs checksum validation. Bitcoin Bech32 Address (segwit) This validates that a string value contains a valid bitcoin Bech32 address as defined by bip-0173 (https://github.com/bitcoin/bips/blob/master/bip-0173.mediawiki). Special thanks to Pieter Wuille for providing reference implementations. This validates that a string value contains a valid ethereum address. The format of the string is checked to ensure it matches the standard Ethereum address format. Full validation is blocked by https://github.com/golang/crypto/pull/28. This validates that a string value contains the substring value. This validates that a string value contains any Unicode code points in the substring value. This validates that a string value contains the supplied rune value. This validates that a string value does not contain the substring value. This validates that a string value does not contain any Unicode code points in the substring value. This validates that a string value does not contain the supplied rune value. This validates that a string value starts with the supplied string value. This validates that a string value ends with the supplied string value. This validates that a string value contains a valid isbn10 or isbn13 value.
This validates that a string value contains a valid isbn10 value. This validates that a string value contains a valid isbn13 value. This validates that a string value contains a valid UUID. Uppercase UUID values will not pass - use `uuid_rfc4122` instead. This validates that a string value contains a valid version 3 UUID. Uppercase UUID values will not pass - use `uuid3_rfc4122` instead. This validates that a string value contains a valid version 4 UUID. Uppercase UUID values will not pass - use `uuid4_rfc4122` instead. This validates that a string value contains a valid version 5 UUID. Uppercase UUID values will not pass - use `uuid5_rfc4122` instead. This validates that a string value contains only ASCII characters. NOTE: if the string is blank, this validates as true. This validates that a string value contains only printable ASCII characters. NOTE: if the string is blank, this validates as true. This validates that a string value contains one or more multibyte characters. NOTE: if the string is blank, this validates as true. This validates that a string value contains a valid DataURI. NOTE: this will also validate that the data portion is valid base64 This validates that a string value contains a valid latitude. This validates that a string value contains a valid longitude. This validates that a string value contains a valid U.S. Social Security Number. This validates that a string value contains a valid IP Address. This validates that a string value contains a valid v4 IP Address. This validates that a string value contains a valid v6 IP Address. This validates that a string value contains a valid CIDR Address. This validates that a string value contains a valid v4 CIDR Address. This validates that a string value contains a valid v6 CIDR Address. This validates that a string value contains a valid resolvable TCP Address. This validates that a string value contains a valid resolvable v4 TCP Address. This validates that a string value contains a valid resolvable v6 TCP Address. This validates that a string value contains a valid resolvable UDP Address. This validates that a string value contains a valid resolvable v4 UDP Address. This validates that a string value contains a valid resolvable v6 UDP Address. This validates that a string value contains a valid resolvable IP Address. This validates that a string value contains a valid resolvable v4 IP Address. This validates that a string value contains a valid resolvable v6 IP Address. This validates that a string value contains a valid Unix Address. This validates that a string value contains a valid MAC Address. Note: See Go's ParseMAC for accepted formats and types: This validates that a string value is a valid Hostname according to RFC 952 https://tools.ietf.org/html/rfc952 This validates that a string value is a valid Hostname according to RFC 1123 https://tools.ietf.org/html/rfc1123 Full Qualified Domain Name (FQDN) This validates that a string value contains a valid FQDN. This validates that a string value appears to be an HTML element tag including those described at https://developer.mozilla.org/en-US/docs/Web/HTML/Element This validates that a string value is a proper character reference in decimal or hexadecimal format This validates that a string value is percent-encoded (URL encoded) according to https://tools.ietf.org/html/rfc3986#section-2.1 This validates that a string value contains a valid directory and that it exists on the machine. This is done using os.Stat, which is a platform independent function. 
NOTE: When returning an error, the tag returned in "FieldError" will be the alias tag unless the dive tag is part of the alias. Everything after the dive tag is not reported as the alias tag. Also, the "ActualTag" in the before case will be the actual tag within the alias that failed. Here is a list of the current built in alias tags: Validator notes: A collection of validation rules that are frequently needed but are more complex than the ones found in the baked in validators. A non standard validator must be registered manually like you would with your own custom validation functions. Example of registration and use: Here is a list of the current non standard validators: This package panics when bad input is provided, this is by design, bad code like that should not make it to production.
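For reference, registering a custom validation function (the same mechanism used for the non standard validators) and handling the returned error might look like the sketch below; the "is-awesome" tag name and the Form struct are made up for illustration.

    package main

    import (
        "fmt"
        "log"

        validator "gopkg.in/go-playground/validator.v9"
    )

    type Form struct {
        Tag string `validate:"required,is-awesome"`
    }

    func main() {
        validate := validator.New()

        // The function receives a FieldLevel giving access to the field being validated.
        err := validate.RegisterValidation("is-awesome", func(fl validator.FieldLevel) bool {
            return fl.Field().String() == "awesome"
        })
        if err != nil {
            log.Fatal(err)
        }

        err = validate.Struct(Form{Tag: "not awesome"})
        if err != nil {
            // InvalidValidationError is only returned for bad input to Struct itself.
            if _, ok := err.(*validator.InvalidValidationError); ok {
                log.Fatal(err)
            }
            for _, fe := range err.(validator.ValidationErrors) {
                fmt.Println(fe.Namespace(), fe.Tag()) // e.g. Form.Tag is-awesome
            }
        }
    }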
Package ql implements a pure Go embedded SQL database engine. QL is a member of the SQL family of languages. It is less complex and less powerful than SQL (whichever specification SQL is considered to be). 2018-08-02: Release v1.2.0 adds initial support for Go modules. 2017-01-10: Release v1.1.0 fixes some bugs and adds a configurable WAL headroom. 2016-07-29: Release v1.0.6 enables alternatively using = instead of == for the equality operation. 2016-07-11: Release v1.0.5 undoes vendoring of lldb. QL now uses stable lldb (github.com/cznic/lldb). 2016-07-06: Release v1.0.4 fixes a panic when closing the WAL file. 2016-04-03: Release v1.0.3 fixes a data race. 2016-03-23: Release v1.0.2 vendors github.com/cznic/exp/lldb and github.com/camlistore/go4/lock. 2016-03-17: Release v1.0.1 adjusts for latest goyacc. Parser error messages are improved and changed, but their exact form is not considered an API change. 2016-03-05: The current version has been tagged v1.0.0. 2015-06-15: To improve compatibility with other SQL implementations, the count built-in aggregate function now accepts * as its argument. 2015-05-29: The execution planner was rewritten from scratch. It should use indices in all places where they were used before plus in some additional situations. It is possible to investigate the plan using the newly added EXPLAIN statement. The QL tool is handy for such analysis. If the planner would have used an index, but no such index exists, the plan includes hints in the form of copy/paste ready CREATE INDEX statements. The planner is still quite simple and a lot of work on it is yet ahead. You can help this process by filing an issue with a schema and query which fails to use an index or indices when it should, in your opinion. Bonus points for including output of `ql 'explain <query>'`. 2015-05-09: The grammar of the CREATE INDEX statement now accepts an expression list instead of a single expression, which was further limited to just a column name or the built-in id(). As a side effect, composite indices are now functional. However, the values in the expression-list style index are not yet used by other statements or the statement/query planner. The composite index is useful when it has a UNIQUE clause to check for semantically duplicate rows before they get added to the table or when such a row is mutated using the UPDATE statement and the expression-list style index tuple of the row is thus recomputed. 2015-05-02: The Schema field of table __Table now correctly reflects any column constraints and/or defaults. Also, the (*DB).Info method now has that information provided in new ColumnInfo fields NotNull, Constraint and Default. 2015-04-20: Added support for {LEFT,RIGHT,FULL} [OUTER] JOIN. 2015-04-18: Column definitions can now have constraints and defaults. Details are discussed in the "Constraints and defaults" chapter below the CREATE TABLE statement documentation. 2015-03-06: New built-in functions formatFloat and formatInt. Thanks urandom! (https://github.com/urandom) 2015-02-16: The IN predicate now accepts a SELECT statement. See the updated "Predicates" section. 2015-01-17: Logical operators || and && now have alternative spellings: OR and AND (case insensitive). AND was a keyword before, but OR is a new one. This can possibly break existing queries. For the record, it's a good idea to not use any name appearing in, for example, [7] in your queries as the list of QL's keywords may expand for gaining better compatibility with existing SQL "standards".
2015-01-12: ACID guarantees were tightened at the cost of performance in some cases. The write collecting window mechanism, a formerly used implementation detail, was removed. Inserting rows one by one in a transaction is now slow. I mean very slow. Try to avoid inserting single rows in a transaction. Instead, whenever possible, perform batch updates of tens to, say, thousands of rows in a single transaction. See also: http://www.sqlite.org/faq.html#q19; the synchronization principles discussed there are the same as for QL, modulo minor details. Note: A side effect is that closing a DB before exiting an application, both for the Go API and through the database/sql driver, is no longer required, strictly speaking. Beware that exiting an application while there is an open (uncommitted) transaction in progress means losing the transaction data. However, the DB will not become corrupted because of not closing it. Nor was that the case before, but formerly failing to close a DB could have resulted in losing the data of the last transaction. 2014-09-21: id() now optionally accepts a single argument - a table name. 2014-09-01: Added the DB.Flush() method and the LIKE pattern matching predicate. 2014-08-08: The built in functions max and min now also accept time values. Thanks opennota! (https://github.com/opennota) 2014-06-05: The RecordSet interface was extended by new methods FirstRow and Rows. 2014-06-02: Indices on id() are now used by SELECT statements. 2014-05-07: Introduction of Marshal, Schema, Unmarshal. 2014-04-15: Added optional IF NOT EXISTS clause to CREATE INDEX and optional IF EXISTS clause to DROP INDEX. 2014-04-12: The column Unique in the virtual table __Index was renamed to IsUnique because the old name is a keyword. Unfortunately, this is a breaking change, sorry. 2014-04-11: Introduction of LIMIT, OFFSET. 2014-04-10: Introduction of query rewriting. 2014-04-07: Introduction of indices. QL imports zappy[8], a block-based compressor, which speeds up its performance by using a C version of the compression/decompression algorithms. If a CGO-free (pure Go) version of QL, or an app using QL, is required, please include 'purego' in the -tags option of go {build,get,install}. For example: If zappy was installed before installing QL, it might be necessary to rebuild zappy first (or rebuild QL with all its dependencies using the -a option): The syntax is specified using Extended Backus-Naur Form (EBNF). Lower-case production names are used to identify lexical tokens. Non-terminals are in CamelCase. Lexical tokens are enclosed in double quotes "" or back quotes ``. The form a … b represents the set of characters from a through b as alternatives. The horizontal ellipsis … is also used elsewhere in the spec to informally denote various enumerations or code snippets that are not further specified. QL source code is Unicode text encoded in UTF-8. The text is not canonicalized, so a single accented code point is distinct from the same character constructed from combining an accent and a letter; those are treated as two code points. For simplicity, this document will use the unqualified term character to refer to a Unicode code point in the source text. Each code point is distinct; for instance, upper and lower case letters are different characters. Implementation restriction: For compatibility with other tools, the parser may disallow the NUL character (U+0000) in the statement. Implementation restriction: A byte order mark is disallowed anywhere in QL statements.
The following terms are used to denote specific character classes The underscore character _ (U+005F) is considered a letter. Lexical elements are comments, tokens, identifiers, keywords, operators and delimiters, integer, floating-point, imaginary, rune and string literals and QL parameters. Line comments start with the character sequence // or -- and stop at the end of the line. A line comment acts like a space. General comments start with the character sequence /* and continue through the character sequence */. A general comment acts like a space. Comments do not nest. Tokens form the vocabulary of QL. There are four classes: identifiers, keywords, operators and delimiters, and literals. White space, formed from spaces (U+0020), horizontal tabs (U+0009), carriage returns (U+000D), and newlines (U+000A), is ignored except as it separates tokens that would otherwise combine into a single token. The formal grammar uses semicolons ";" as separators of QL statements. A single QL statement or the last QL statement in a list of statements can have an optional semicolon terminator. (Actually a separator from the following empty statement.) Identifiers name entities such as tables or record set columns. An identifier is a sequence of one or more letters and digits. The first character in an identifier must be a letter. For example No identifiers are predeclared, however note that no keyword can be used as an identifier. Identifiers starting with two underscores are used for meta data virtual tables names. For forward compatibility, users should generally avoid using any identifiers starting with two underscores. For example The following keywords are reserved and may not be used as identifiers. Keywords are not case sensitive. The following character sequences represent operators, delimiters, and other special tokens Operators consisting of more than one character are referred to by names in the rest of the documentation An integer literal is a sequence of digits representing an integer constant. An optional prefix sets a non-decimal base: 0 for octal, 0x or 0X for hexadecimal. In hexadecimal literals, letters a-f and A-F represent values 10 through 15. For example A floating-point literal is a decimal representation of a floating-point constant. It has an integer part, a decimal point, a fractional part, and an exponent part. The integer and fractional part comprise decimal digits; the exponent part is an e or E followed by an optionally signed decimal exponent. One of the integer part or the fractional part may be elided; one of the decimal point or the exponent may be elided. For example An imaginary literal is a decimal representation of the imaginary part of a complex constant. It consists of a floating-point literal or decimal integer followed by the lower-case letter i. For example A rune literal represents a rune constant, an integer value identifying a Unicode code point. A rune literal is expressed as one or more characters enclosed in single quotes. Within the quotes, any character may appear except single quote and newline. A single quoted character represents the Unicode value of the character itself, while multi-character sequences beginning with a backslash encode values in various formats. The simplest form represents the single character within the quotes; since QL statements are Unicode characters encoded in UTF-8, multiple UTF-8-encoded bytes may represent a single integer value. 
For instance, the literal 'a' holds a single byte representing a literal a, Unicode U+0061, value 0x61, while 'ä' holds two bytes (0xc3 0xa4) representing a literal a-dieresis, U+00E4, value 0xe4. Several backslash escapes allow arbitrary values to be encoded as ASCII text. There are four ways to represent the integer value as a numeric constant: \x followed by exactly two hexadecimal digits; \u followed by exactly four hexadecimal digits; \U followed by exactly eight hexadecimal digits, and a plain backslash \ followed by exactly three octal digits. In each case the value of the literal is the value represented by the digits in the corresponding base. Although these representations all result in an integer, they have different valid ranges. Octal escapes must represent a value between 0 and 255 inclusive. Hexadecimal escapes satisfy this condition by construction. The escapes \u and \U represent Unicode code points so within them some values are illegal, in particular those above 0x10FFFF and surrogate halves. After a backslash, certain single-character escapes represent special values All other sequences starting with a backslash are illegal inside rune literals. For example A string literal represents a string constant obtained from concatenating a sequence of characters. There are two forms: raw string literals and interpreted string literals. Raw string literals are character sequences between back quotes “. Within the quotes, any character is legal except back quote. The value of a raw string literal is the string composed of the uninterpreted (implicitly UTF-8-encoded) characters between the quotes; in particular, backslashes have no special meaning and the string may contain newlines. Carriage returns inside raw string literals are discarded from the raw string value. Interpreted string literals are character sequences between double quotes "". The text between the quotes, which may not contain newlines, forms the value of the literal, with backslash escapes interpreted as they are in rune literals (except that \' is illegal and \" is legal), with the same restrictions. The three-digit octal (\nnn) and two-digit hexadecimal (\xnn) escapes represent individual bytes of the resulting string; all other escapes represent the (possibly multi-byte) UTF-8 encoding of individual characters. Thus inside a string literal \377 and \xFF represent a single byte of value 0xFF=255, while ÿ, \u00FF, \U000000FF and \xc3\xbf represent the two bytes 0xc3 0xbf of the UTF-8 encoding of character U+00FF. For example These examples all represent the same string If the statement source represents a character as two code points, such as a combining form involving an accent and a letter, the result will be an error if placed in a rune literal (it is not a single code point), and will appear as two code points if placed in a string literal. Literals are assigned their values from the respective text representation at "compile" (parse) time. QL parameters provide the same functionality as literals, but their value is assigned at execution time from an expression list passed to DB.Run or DB.Execute. Using '?' or '$' is completely equivalent. For example Keywords 'false' and 'true' (not case sensitive) represent the two possible constant values of type bool (also not case sensitive). Keyword 'NULL' (not case sensitive) represents an untyped constant which is assignable to any type. NULL is distinct from any other value of any type. 
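A sketch of parameter substitution through the Go API follows; OpenMem, NewRWCtx, the Run signature and the $1/$2 placeholder style are recalled from the package's API and should be treated as assumptions, as should the import path.

    package main

    import (
        "fmt"
        "log"

        "github.com/cznic/ql" // import path assumed; adjust to the module you use
    )

    func main() {
        db, err := ql.OpenMem() // an in-memory DB; ql.OpenFile works the same way
        if err != nil {
            log.Fatal(err)
        }

        ctx := ql.NewRWCtx()
        // $1 and $2 are QL parameters filled from the trailing arguments at execution time.
        if _, _, err := db.Run(ctx, `
            BEGIN TRANSACTION;
                CREATE TABLE t (Name string, Age int64);
                INSERT INTO t VALUES ($1, $2);
            COMMIT;`, "Alice", int64(42)); err != nil {
            log.Fatal(err)
        }

        // Reads do not require a transaction, so a nil context is passed here.
        rss, _, err := db.Run(nil, "SELECT Name, Age FROM t WHERE Age > $1;", int64(18))
        if err != nil {
            log.Fatal(err)
        }
        if err := rss[0].Do(false, func(row []interface{}) (bool, error) {
            fmt.Println(row) // [Alice 42]
            return true, nil
        }); err != nil {
            log.Fatal(err)
        }
    }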
A type determines the set of values and operations specific to values of that type. A type is specified by a type name. Named instances of the boolean, numeric, and string types are keywords. The names are not case sensitive. Note: The blob type is exchanged between the back end and the API as []byte. On 32 bit platforms this limits the size which the implementation can handle to 2G. A boolean type represents the set of Boolean truth values denoted by the predeclared constants true and false. The predeclared boolean type is bool. A duration type represents the elapsed time between two instants as an int64 nanosecond count. The representation limits the largest representable duration to approximately 290 years. A numeric type represents sets of integer or floating-point values. The predeclared architecture-independent numeric types are The value of an n-bit integer is n bits wide and represented using two's complement arithmetic. Conversions are required when different numeric types are mixed in an expression or assignment. A string type represents the set of string values. A string value is a (possibly empty) sequence of bytes. The case insensitive keyword for the string type is 'string'. The length of a string (its size in bytes) can be discovered using the built-in function len. A time type represents an instant in time with nanosecond precision. Each time has associated with it a location, consulted when computing the presentation form of the time. The following functions are implicitly declared An expression specifies the computation of a value by applying operators and functions to operands. Operands denote the elementary values in an expression. An operand may be a literal, a (possibly qualified) identifier denoting a constant or a function or a table/record set column, or a parenthesized expression. A qualified identifier is an identifier qualified with a table/record set name prefix. For example Primary expression are the operands for unary and binary expressions. For example A primary expression of the form denotes the element of a string indexed by x. Its type is byte. The value x is called the index. The following rules apply - The index x must be of integer type except bigint or duration; it is in range if 0 <= x < len(s), otherwise it is out of range. - A constant index must be non-negative and representable by a value of type int. - A constant index must be in range if the string a is a literal. - If x is out of range at run time, a run-time error occurs. - s[x] is the byte at index x and the type of s[x] is byte. If s is NULL or x is NULL then the result is NULL. Otherwise s[x] is illegal. For a string, the primary expression constructs a substring. The indices low and high select which elements appear in the result. The result has indices starting at 0 and length equal to high - low. For convenience, any of the indices may be omitted. A missing low index defaults to zero; a missing high index defaults to the length of the sliced operand The indices low and high are in range if 0 <= low <= high <= len(a), otherwise they are out of range. A constant index must be non-negative and representable by a value of type int. If both indices are constant, they must satisfy low <= high. If the indices are out of range at run time, a run-time error occurs. Integer values of type bigint or duration cannot be used as indices. If s is NULL the result is NULL. If low or high is not omitted and is NULL then the result is NULL. 
Given an identifier f denoting a predeclared function, calls f with arguments a1, a2, … an. Arguments are evaluated before the function is called. The type of the expression is the result type of f. In a function call, the function value and arguments are evaluated in the usual order. After they are evaluated, the parameters of the call are passed by value to the function and the called function begins execution. The return value of the function is passed by value when the function returns. Calling an undefined function causes a compile-time error. Operators combine operands into expressions. Comparisons are discussed elsewhere. For other binary operators, the operand types must be identical unless the operation involves shifts or untyped constants. For operations involving constants only, see the section on constant expressions. Except for shift operations, if one operand is an untyped constant and the other operand is not, the constant is converted to the type of the other operand. The right operand in a shift expression must have unsigned integer type or be an untyped constant that can be converted to unsigned integer type. If the left operand of a non-constant shift expression is an untyped constant, the type of the constant is what it would be if the shift expression were replaced by its left operand alone. Expressions of the form yield a boolean value true if expr2, a regular expression, matches expr1 (see also [6]). Both expression must be of type string. If any one of the expressions is NULL the result is NULL. Predicates are special form expressions having a boolean result type. Expressions of the form are equivalent, including NULL handling, to The types of involved expressions must be comparable as defined in "Comparison operators". Another form of the IN predicate creates the expression list from a result of a SelectStmt. The SelectStmt must select only one column. The produced expression list is resource limited by the memory available to the process. NULL values produced by the SelectStmt are ignored, but if all records of the SelectStmt are NULL the predicate yields NULL. The select statement is evaluated only once. If the type of expr is not the same as the type of the field returned by the SelectStmt then the set operation yields false. The type of the column returned by the SelectStmt must be one of the simple (non blob-like) types: Expressions of the form are equivalent, including NULL handling, to The types of involved expressions must be ordered as defined in "Comparison operators". Expressions of the form yield a boolean value true if expr does not have a specific type (case A) or if expr has a specific type (case B). In other cases the result is a boolean value false. Unary operators have the highest precedence. There are five precedence levels for binary operators. Multiplication operators bind strongest, followed by addition operators, comparison operators, && (logical AND), and finally || (logical OR) Binary operators of the same precedence associate from left to right. For instance, x / y * z is the same as (x / y) * z. Note that the operator precedence is reflected explicitly by the grammar. Arithmetic operators apply to numeric values and yield a result of the same type as the first operand. The four standard arithmetic operators (+, -, *, /) apply to integer, rational, floating-point, and complex types; + also applies to strings; +,- also applies to times. All other arithmetic operators apply to integers only. 
	+    sum                    integers, rationals, floats, complex values, strings
	-    difference             integers, rationals, floats, complex values, times
	*    product                integers, rationals, floats, complex values
	/    quotient               integers, rationals, floats, complex values
	%    remainder              integers

	&    bitwise AND            integers
	|    bitwise OR             integers
	^    bitwise XOR            integers
	&^   bit clear (AND NOT)    integers

	<<   left shift             integer << unsigned integer
	>>   right shift            integer >> unsigned integer

Strings can be concatenated using the + operator. String addition creates a new string by concatenating the operands. A value of type duration can be added to or subtracted from a value of type time. Times can be subtracted from each other, producing a value of type duration. For two integer values x and y, the integer quotient q = x / y and remainder r = x % y satisfy the following relationships with x / y truncated towards zero ("truncated division"). As an exception to this rule, if the dividend x is the most negative value for the int type of x, the quotient q = x / -1 is equal to x (and r = 0). If the divisor is a constant expression, it must not be zero. If the divisor is zero at run time, a run-time error occurs. If the dividend is non-negative and the divisor is a constant power of 2, the division may be replaced by a right shift, and computing the remainder may be replaced by a bitwise AND operation. The shift operators shift the left operand by the shift count specified by the right operand. They implement arithmetic shifts if the left operand is a signed integer and logical shifts if it is an unsigned integer. There is no upper limit on the shift count. Shifts behave as if the left operand is shifted n times by 1 for a shift count of n. As a result, x << 1 is the same as x*2 and x >> 1 is the same as x/2 but truncated towards negative infinity. For integer operands, the unary operators +, -, and ^ are defined as follows For floating-point and complex numbers, +x is the same as x, while -x is the negation of x. The result of a floating-point or complex division by zero is not specified beyond the IEEE-754 standard; whether a run-time error occurs is implementation-specific. Whenever any operand of any arithmetic operation, unary or binary, is NULL, as well as in the case of the string concatenating operation, the result is NULL. For unsigned integer values, the operations +, -, *, and << are computed modulo 2^n, where n is the bit width of the unsigned integer's type. Loosely speaking, these unsigned integer operations discard high bits upon overflow, and expressions may rely on “wrap around”. For signed integers with a finite bit width, the operations +, -, *, and << may legally overflow and the resulting value exists and is deterministically defined by the signed integer representation, the operation, and its operands. No exception is raised as a result of overflow. An evaluator may not optimize an expression under the assumption that overflow does not occur. For instance, it may not assume that x < x + 1 is always true. Integers of type bigint and rationals do not overflow but their handling is limited by the memory resources available to the program. Comparison operators compare two operands and yield a boolean value. In any comparison, the first operand must be of the same type as the second operand, or vice versa. The equality operators == and != apply to operands that are comparable. The ordering operators <, <=, >, and >= apply to operands that are ordered.
These terms and the result of the comparisons are defined as follows - Boolean values are comparable. Two boolean values are equal if they are either both true or both false. - Complex values are comparable. Two complex values u and v are equal if both real(u) == real(v) and imag(u) == imag(v). - Integer values are comparable and ordered, in the usual way. Note that durations are integers. - Floating point values are comparable and ordered, as defined by the IEEE-754 standard. - Rational values are comparable and ordered, in the usual way. - String and Blob values are comparable and ordered, lexically byte-wise. - Time values are comparable and ordered. Whenever any operand of any comparison operation is NULL, the result is NULL. Note that slices are always of type string. Logical operators apply to boolean values and yield a boolean result. The right operand is evaluated conditionally. The truth tables for logical operations with NULL values Conversions are expressions of the form T(x) where T is a type and x is an expression that can be converted to type T. A constant value x can be converted to type T in any of these cases: - x is representable by a value of type T. - x is a floating-point constant, T is a floating-point type, and x is representable by a value of type T after rounding using IEEE 754 round-to-even rules. The constant T(x) is the rounded value. - x is an integer constant and T is a string type. The same rule as for non-constant x applies in this case. Converting a constant yields a typed constant as result. A non-constant value x can be converted to type T in any of these cases: - x has type T. - x's type and T are both integer or floating point types. - x's type and T are both complex types. - x is an integer, except bigint or duration, and T is a string type. Specific rules apply to (non-constant) conversions between numeric types or to and from a string type. These conversions may change the representation of x and incur a run-time cost. All other conversions only change the type but not the representation of x. A conversion of NULL to any type yields NULL. For the conversion of non-constant numeric values, the following rules apply 1. When converting between integer types, if the value is a signed integer, it is sign extended to implicit infinite precision; otherwise it is zero extended. It is then truncated to fit in the result type's size. For example, if v == uint16(0x10F0), then uint32(int8(v)) == 0xFFFFFFF0. The conversion always yields a valid value; there is no indication of overflow. 2. When converting a floating-point number to an integer, the fraction is discarded (truncation towards zero). 3. When converting an integer or floating-point number to a floating-point type, or a complex number to another complex type, the result value is rounded to the precision specified by the destination type. For instance, the value of a variable x of type float32 may be stored using additional precision beyond that of an IEEE-754 32-bit number, but float32(x) represents the result of rounding x's value to 32-bit precision. Similarly, x + 0.1 may use more than 32 bits of precision, but float32(x + 0.1) does not. In all non-constant conversions involving floating-point or complex values, if the result type cannot represent the value the conversion succeeds but the result value is implementation-dependent. 1. Converting a signed or unsigned integer value to a string type yields a string containing the UTF-8 representation of the integer. 
Values outside the range of valid Unicode code points are converted to "\uFFFD". 2. Converting a blob to a string type yields a string whose successive bytes are the elements of the blob. 3. Converting a value of a string type to a blob yields a blob whose successive elements are the bytes of the string. 4. Converting a value of a bigint type to a string yields a string containing the decimal representation of the integer. 5. Converting a value of a string type to a bigint yields a bigint value containing the integer represented by the string value. A prefix of “0x” or “0X” selects base 16; the “0” prefix selects base 8, and a “0b” or “0B” prefix selects base 2. Otherwise the value is interpreted in base 10. An error occurs if the string value is not in any valid format. 6. Converting a value of a rational type to a string yields a string containing the decimal representation of the rational in the form "a/b" (even if b == 1). 7. Converting a value of a string type to a bigrat yields a bigrat value containing the rational represented by the string value. The string can be given as a fraction "a/b" or as a floating-point number optionally followed by an exponent. An error occurs if the string value is not in any valid format. 8. Converting a value of a duration type to a string returns a string representing the duration in the form "72h3m0.5s". Leading zero units are omitted. As a special case, durations less than one second format using a smaller unit (milli-, micro-, or nanoseconds) to ensure that the leading digit is non-zero. The zero duration formats as 0, with no unit. 9. Converting a string value to a duration yields a duration represented by the string. A duration string is a possibly signed sequence of decimal numbers, each with optional fraction and a unit suffix, such as "300ms", "-1.5h" or "2h45m". Valid time units are "ns", "us" (or "µs"), "ms", "s", "m", "h". 10. Converting a time value to a string returns the time formatted using the format string. When evaluating the operands of an expression or of function calls, operations are evaluated in lexical left-to-right order. For example, in the evaluation of the function calls and evaluation of c happen in the order h(), i(), j(), c. Floating-point operations within a single expression are evaluated according to the associativity of the operators. Explicit parentheses affect the evaluation by overriding the default associativity. In the expression x + (y + z) the addition y + z is performed before adding x. Statements control execution. The empty statement does nothing. Alter table statements modify existing tables. With the ADD clause it adds a new column to the table. The column must not exist. With the DROP clause it removes an existing column from a table. The column must exist and it must not be the only (last) column of the table. IOW, there cannot be a table with no columns. For example When adding a column to a table with existing data, the constraint clause of the ColumnDef cannot be used. Adding a constrained column to an empty table is fine. Begin transaction statements introduce a new transaction level. Every transaction level must be eventually balanced by exactly one of COMMIT or ROLLBACK statements. Note that when a transaction is rolled back because of a statement failure then no explicit balancing of the respective BEGIN TRANSACTION statement is required nor permitted.
Failure to properly balance any opened transaction level may cause deadlocks and/or loss of data updated in the uppermost opened but never properly closed transaction level. For example A database cannot be updated (mutated) outside of a transaction. Statements requiring a transaction A database is effectively read only outside of a transaction. Statements not requiring a transaction The commit statement closes the innermost transaction nesting level. If that's the outermost level then the updates to the DB made by the transaction are atomically made persistent. For example Create index statements create new indices. An index is a named projection of ordered values of a table column to the respective records. As a special case the id() of the record can be indexed. The index name must not be the same as any of the existing tables and it also cannot be the same as any column name of the table the index is on. For example Now certain SELECT statements may use the indices to speed up joins and/or to speed up record set filtering when the WHERE clause is used; or the indices might be used to improve the performance when the ORDER BY clause is present. The UNIQUE modifier requires the indexed values tuple to be index-wise unique or have all values NULL. The optional IF NOT EXISTS clause makes the statement a no operation if the index already exists. A simple index consists of only one expression which must be either a column name or the built-in id(). A more complex and more general index is one that consists of more than one expression or whose single expression does not qualify as a simple index. In this case the type of all expressions in the list must be one of the non blob-like types. Note: Blob-like types are blob, bigint, bigrat, time and duration. Create table statements create new tables. A column definition declares the column name and type. Table names and column names are case sensitive. Neither a table nor an index of the same name may exist in the DB. For example The optional IF NOT EXISTS clause makes the statement a no operation if the table already exists. The optional constraint clause has two forms. The first one is found in many SQL dialects. This form prevents the data in column DepartmentName from being NULL. The second form allows an arbitrary boolean expression to be used to validate the column. If the value of the expression is true then the validation succeeds. If the value of the expression is false or NULL then the validation fails. If the value of the expression is not of type bool an error occurs. The optional DEFAULT clause is an expression which, if present, is substituted instead of a NULL value when the column is assigned a value. Note that the constraint and/or default expressions may refer to other columns by name: When a table row is inserted by the INSERT INTO statement or when a table row is updated by the UPDATE statement, the order of operations is as follows: 1. The new values of the affected columns are set and the values of all the row columns become the named values which can be referred to in default expressions evaluated in step 2. 2. If any row column value is NULL and the DEFAULT clause is present in the column's definition, the default expression is evaluated and its value is set as the respective column value. 3. The values, potentially updated, of row columns become the named values which can be referred to in constraint expressions evaluated during step 4. 4.
4. All row columns whose definition has the constraint clause present will have that constraint checked. If any constraint violation is detected, the overall operation fails and no changes to the table are made. Delete from statements remove rows from a table, which must exist. For example If the WHERE clause is not present then all rows are removed and the statement is equivalent to the TRUNCATE TABLE statement. Drop index statements remove indices from the DB. The index must exist. For example The optional IF EXISTS clause makes the statement a no operation if the index does not exist. Drop table statements remove tables from the DB. The table must exist. For example The optional IF EXISTS clause makes the statement a no operation if the table does not exist. Insert into statements insert new rows into tables. New rows come from literal data, if using the VALUES clause, or are the result of a select statement. In the latter case the select statement is fully evaluated before the insertion of any rows is performed, allowing insertion of values calculated from the same table the rows are being inserted into. If the ColumnNameList part is omitted then the number of values inserted in the row must be the same as the number of columns in the table. If the ColumnNameList part is present then the number of values per row must be the same as the number of column names. All other columns of the record are set to NULL. The type of the value assigned to a column must be the same as is the column's type or the value must be NULL. For example If any of the columns of the table were defined using the optional constraints clause or the optional defaults clause then those are processed on a per row basis. The details are discussed in the "Constraints and defaults" chapter below the CREATE TABLE statement documentation. The explain statement produces a recordset consisting of lines of text which describe the execution plan of a statement, if any. For example, the QL tool treats the explain statement specially and outputs the joined lines: The explanation may aid in understanding how a statement/query would be executed and if indices are used as expected - or which indices may possibly improve the statement performance. The create index statements above were directly copy/pasted into the terminal from the suggestions provided by the filter recordset pipeline part returned by the explain statement. If the statement has nothing special in its plan, the result is the original statement. To get an explanation of the select statement of the IN predicate, use the EXPLAIN statement with that particular select statement. The rollback statement closes the innermost transaction nesting level, discarding any updates to the DB made by it. If that's the outermost level then the effects on the DB are as if the transaction never happened. For example The (temporary) record set from the last statement is returned and can be processed by the client. In this case the rollback is the same as 'DROP TABLE tmp;' but it can be a more complex operation. Select from statements produce recordsets. The optional DISTINCT modifier ensures all rows in the result recordset are unique. Either all of the resulting fields are returned ('*') or only those named in FieldList. RecordSetList is a list of table names or parenthesized select statements, optionally (re)named using the AS clause. The result can be filtered using a WhereClause and ordered by the OrderBy clause.
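To make the statement forms described above concrete, here is a minimal, self-contained sketch that creates a table, inserts a few rows and runs a filtered, ordered SELECT through the ql package's Go API. It assumes the modernc.org/ql import path and the OpenMem/NewRWCtx/Run/Rows names; treat the exact signatures as assumptions to check against the package, not as authoritative usage.

    package main

    import (
        "fmt"
        "log"

        "modernc.org/ql" // assumed import path
    )

    func main() {
        // An in-memory database keeps the sketch self-contained.
        db, err := ql.OpenMem()
        if err != nil {
            log.Fatal(err)
        }

        // Mutating statements must run inside a transaction (see BEGIN
        // TRANSACTION above), so a read-write context is passed to Run.
        if _, _, err = db.Run(ql.NewRWCtx(), `
            BEGIN TRANSACTION;
                CREATE TABLE dept (Name string, Budget int64);
                INSERT INTO dept VALUES ("R&D", 100), ("Sales", 50), ("HR", 20);
            COMMIT;
        `); err != nil {
            log.Fatal(err)
        }

        // A SELECT with a WHERE filter and an ORDER BY clause; QL uses ==
        // for equality and read-only statements need no transaction.
        rs, _, err := db.Run(nil, "SELECT Name, Budget FROM dept WHERE Budget > 30 ORDER BY Name;")
        if err != nil {
            log.Fatal(err)
        }
        rows, err := rs[0].Rows(-1, 0) // negative limit: all rows (assumed semantics)
        if err != nil {
            log.Fatal(err)
        }
        for _, row := range rows {
            fmt.Println(row)
        }
    }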
For example If Recordset is a nested, parenthesized SelectStmt then it must be given a name using the AS clause if its fields are to be accessible in expressions. A field is a named expression. Identifiers, not used as a type in conversion or a function name in the Call clause, denote names of (other) fields, values of which should be used in the expression. The expression can be named using the AS clause. If the AS clause is not present and the expression consists solely of a field name, then that field name is used as the name of the resulting field. Otherwise the field is unnamed. For example The SELECT statement can optionally enumerate the desired/resulting fields in a list. No two identical field names can appear in the list. When more than one record set is used in the FROM clause record set list, the result record set field names are rewritten to be qualified using the record set names. If a particular record set doesn't have a name, its respective fields become unnamed. The optional JOIN clause, for example is mostly equal to except that the rows from a which, when they appear in the cross join, never made expr evaluate to true, are combined with a virtual row from b, containing all nulls, and added to the result set. For the RIGHT JOIN variant the discussed rules are used for rows from b not satisfying expr == true and the virtual, all-null row "comes" from a. The FULL JOIN adds the respective rows which would be otherwise provided by the separate executions of the LEFT JOIN and RIGHT JOIN variants. For a more thorough OUTER JOIN discussion please see the Wikipedia article at [10]. Resulting rows of a SELECT statement can be optionally ordered by the ORDER BY clause. Collating proceeds by considering the expressions in the expression list left to right until a collating order is determined. Any possibly remaining expressions are not evaluated. All of the expression values must yield an ordered type or NULL. Ordered types are defined in "Comparison operators". Collating of elements having a NULL value is different compared to what the comparison operators yield in expression evaluation (NULL result instead of a boolean value). Below, T denotes a non NULL value of any QL type. NULL collates before any non NULL value (is considered smaller than T). Two NULLs have no collating order (are considered equal). The WHERE clause restricts records considered by some statements, like SELECT FROM, DELETE FROM, or UPDATE. It is an error if the expression evaluates to a non null value of non bool type. Another form of the WHERE clause is an existence predicate of a parenthesized select statement. The EXISTS form evaluates to true if the parenthesized SELECT statement produces a non empty record set. The NOT EXISTS form evaluates to true if the parenthesized SELECT statement produces an empty record set. The parenthesized SELECT statement is evaluated only once (TODO issue #159). The GROUP BY clause is used to project rows having common values into a smaller set of rows. For example Using the GROUP BY without any aggregate functions in the selected fields is in certain cases equal to using the DISTINCT modifier. The last two examples above produce the same resultsets. The optional OFFSET clause allows ignoring the first N records. For example The above will produce only rows 11, 12, ... of the record set, if they exist. The value of the expression must be a non negative integer, but not bigint or duration. The optional LIMIT clause allows ignoring all but the first N records.
For example The above will return at most the first 10 records of the record set. The value of the expression must be a non negative integer, but not bigint or duration. The LIMIT and OFFSET clauses can be combined. For example Considering table t has, say, 10 records, the above will produce only records 4 - 8. After returning record #8, no more result rows/records are computed. 1. The FROM clause is evaluated, producing a Cartesian product of its source record sets (tables or nested SELECT statements). 2. If present, the JOIN clause is evaluated on the result set of the previous evaluation and the recordset specified by the JOIN clause. (... JOIN Recordset ON ...) 3. If present, the WHERE clause is evaluated on the result set of the previous evaluation. 4. If present, the GROUP BY clause is evaluated on the result set of the previous evaluation(s). 5. The SELECT field expressions are evaluated on the result set of the previous evaluation(s). 6. If present, the DISTINCT modifier is evaluated on the result set of the previous evaluation(s). 7. If present, the ORDER BY clause is evaluated on the result set of the previous evaluation(s). 8. If present, the OFFSET clause is evaluated on the result set of the previous evaluation(s). The offset expression is evaluated once for the first record produced by the previous evaluations. 9. If present, the LIMIT clause is evaluated on the result set of the previous evaluation(s). The limit expression is evaluated once for the first record produced by the previous evaluations. Truncate table statements remove all records from a table. The table must exist. For example Update statements change values of fields in rows of a table. For example Note: The SET clause is optional. If any of the columns of the table were defined using the optional constraints clause or the optional defaults clause then those are processed on a per row basis. The details are discussed in the "Constraints and defaults" chapter below the CREATE TABLE statement documentation. To allow querying for DB meta data, there exist specially named tables, some of them being virtual. Note: Virtual system tables may have fake table-wise unique but meaningless and unstable record IDs. Do not apply the built-in id() to any system table. The table __Table lists all tables in the DB. The schema is The Schema column returns the statement to (re)create table Name. This table is virtual. The table __Column lists all columns of all tables in the DB. The schema is The Ordinal column defines the 1-based index of the column in the record. This table is virtual. The table __Column2 lists all columns of all tables in the DB which have the constraint NOT NULL or which have a constraint expression defined or which have a default expression defined. The schema is It's possible to obtain a consolidated recordset for all properties of all DB columns using The Name column is the column name in TableName. The table __Index lists all indices in the DB. The schema is The IsUnique column reflects whether the index was created using the optional UNIQUE clause. This table is virtual. Built-in functions are predeclared. The built-in aggregate function avg returns the average of values of an expression. Avg ignores NULL values, but returns NULL if all values of a column are NULL or if avg is applied to an empty record set. The column values must be of a numeric type. The built-in function contains returns true if substr is within s. If any argument to contains is NULL the result is NULL.
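As a hedged illustration of the LIMIT clause and the virtual __Table system table described above (again assuming the modernc.org/ql import path and API sketched earlier):

    package main

    import (
        "fmt"
        "log"

        "modernc.org/ql" // assumed import path
    )

    func main() {
        db, err := ql.OpenMem()
        if err != nil {
            log.Fatal(err)
        }

        if _, _, err = db.Run(ql.NewRWCtx(), `
            BEGIN TRANSACTION;
                CREATE TABLE a (x int64);
                CREATE TABLE b (y string);
                CREATE TABLE c (z float64);
            COMMIT;
        `); err != nil {
            log.Fatal(err)
        }

        // List at most two of the tables known to the DB via the virtual
        // __Table system table.
        rs, _, err := db.Run(nil, "SELECT Name FROM __Table ORDER BY Name LIMIT 2;")
        if err != nil {
            log.Fatal(err)
        }
        rows, err := rs[0].Rows(-1, 0)
        if err != nil {
            log.Fatal(err)
        }
        for _, row := range rows {
            fmt.Println(row[0])
        }
    }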
The built-in aggregate function count returns how many times an expression has a non NULL value or the number of rows in a record set. Note: count() returns 0 for an empty record set. For example Date returns the time corresponding to the given date and time values in the appropriate zone for that time in the given location. The month, day, hour, min, sec, and nsec values may be outside their usual ranges and will be normalized during the conversion. For example, October 32 converts to November 1. A daylight savings time transition skips or repeats times. For example, in the United States, March 13, 2011 2:15am never occurred, while November 6, 2011 1:15am occurred twice. In such cases, the choice of time zone, and therefore the time, is not well-defined. Date returns a time that is correct in one of the two zones involved in the transition, but it does not guarantee which. A location maps time instants to the zone in use at that time. Typically, the location represents the collection of time offsets in use in a geographical area, such as "CEST" and "CET" for central Europe. "local" represents the system's local time zone. "UTC" represents Universal Coordinated Time (UTC). The month specifies a month of the year (January = 1, ...). If any argument to date is NULL the result is NULL. The built-in function day returns the day of the month specified by t. If the argument to day is NULL the result is NULL. The built-in function formatTime returns a textual representation of the time value formatted according to layout, which defines the format by showing how the reference time would be displayed if it were the value; it serves as an example of the desired output. The same display rules will then be applied to the time value. If any argument to formatTime is NULL the result is NULL. NOTE: The string value of the time zone, like "CET" or "ACDT", is dependent on the time zone of the machine the function is run on. For example, if the t value is in "CET", but the machine is in "ACDT", instead of "CET" the result is "+0100". This is the same as what Go's (time.Time).String() returns and in fact formatTime directly calls t.String(). The example returns one value on a machine in the CET time zone, but may return another on a machine in the ACDT zone. The time value is in both cases the same so its ordering and comparing is correct. Only the display value can differ. The built-in functions formatFloat and formatInt format numbers to strings using Go's number format functions in the `strconv` package. For these functions, only the first argument is mandatory. The default values of the rest are shown in the examples. If the first argument is NULL, the result is NULL. Unlike the `strconv` equivalent, the formatInt function handles all integer types, both signed and unsigned. The built-in function hasPrefix tests whether the string s begins with prefix. If any argument to hasPrefix is NULL the result is NULL. The built-in function hasSuffix tests whether the string s ends with suffix. If any argument to hasSuffix is NULL the result is NULL. The built-in function hour returns the hour within the day specified by t, in the range [0, 23]. If the argument to hour is NULL the result is NULL. The built-in function hours returns the duration as a floating point number of hours. If the argument to hours is NULL the result is NULL. The built-in function id takes zero or one arguments. If no argument is provided, id() returns a table-unique automatically assigned numeric identifier of type int.
Ids of deleted records are not reused unless the DB becomes completely empty (has no tables). For example If id() without arguments is called for a row which is not a table record then the result value is NULL. For example If id() has one argument it must be a table name of a table in a cross join. For example The built-in function len takes a string argument and returns the length of the string in bytes. The expression len(s) is constant if s is a string constant. If the argument to len is NULL the result is NULL. The built-in aggregate function max returns the largest value of an expression in a record set. Max ignores NULL values, but returns NULL if all values of a column are NULL or if max is applied to an empty record set. The expression values must be of an ordered type. For example The built-in aggregate function min returns the smallest value of an expression in a record set. Min ignores NULL values, but returns NULL if all values of a column are NULL or if min is applied to an empty record set. For example The column values must be of an ordered type. The built-in function minute returns the minute offset within the hour specified by t, in the range [0, 59]. If the argument to minute is NULL the result is NULL. The built-in function minutes returns the duration as a floating point number of minutes. If the argument to minutes is NULL the result is NULL. The built-in function month returns the month of the year specified by t (January = 1, ...). If the argument to month is NULL the result is NULL. The built-in function nanosecond returns the nanosecond offset within the second specified by t, in the range [0, 999999999]. If the argument to nanosecond is NULL the result is NULL. The built-in function nanoseconds returns the duration as an integer nanosecond count. If the argument to nanoseconds is NULL the result is NULL. The built-in function now returns the current local time. The built-in function parseTime parses a formatted string and returns the time value it represents. The layout defines the format by showing how the reference time would be interpreted if it were the value; it serves as an example of the input format. The same interpretation will then be made to the input string. Elements omitted from the value are assumed to be zero or, when zero is impossible, one, so parsing "3:04pm" returns the time corresponding to Jan 1, year 0, 15:04:00 UTC (note that because the year is 0, this time is before the zero Time). Years must be in the range 0000..9999. The day of the week is checked for syntax but it is otherwise ignored. In the absence of a time zone indicator, parseTime returns a time in UTC. When parsing a time with a zone offset like -0700, if the offset corresponds to a time zone used by the current location, then parseTime uses that location and zone in the returned time. Otherwise it records the time as being in a fabricated location with time fixed at the given zone offset. When parsing a time with a zone abbreviation like MST, if the zone abbreviation has a defined offset in the current location, then that offset is used. The zone abbreviation "UTC" is recognized as UTC regardless of location. If the zone abbreviation is unknown, parseTime records the time as being in a fabricated location with the given zone abbreviation and a zero offset. This choice means that such a time can be parsed and reformatted with the same layout losslessly, but the exact instant used in the representation will differ by the actual zone offset.
To avoid such problems, prefer time layouts that use a numeric zone offset. If any argument to parseTime is NULL the result is NULL. The built-in function second returns the second offset within the minute specified by t, in the range [0, 59]. If the argument to second is NULL the result is NULL. The built-in function seconds returns the duration as a floating point number of seconds. If the argument to seconds is NULL the result is NULL. The built-in function since returns the time elapsed since t. It is shorthand for now()-t. If the argument to since is NULL the result is NULL. The built-in aggregate function sum returns the sum of values of an expression for all rows of a record set. Sum ignores NULL values, but returns NULL if all values of a column are NULL or if sum is applied to an empty record set. The column values must be of a numeric type. The built-in function timeIn returns t with the location information set to loc. For discussion of the loc argument please see date(). If any argument to timeIn is NULL the result is NULL. The built-in function weekday returns the day of the week specified by t. Sunday == 0, Monday == 1, ... If the argument to weekday is NULL the result is NULL. The built-in function year returns the year in which t occurs. If the argument to year is NULL the result is NULL. The built-in function yearDay returns the day of the year specified by t, in the range [1,365] for non-leap years, and [1,366] in leap years. If the argument to yearDay is NULL the result is NULL. Three functions assemble and disassemble complex numbers. The built-in function complex constructs a complex value from a floating-point real and imaginary part, while real and imag extract the real and imaginary parts of a complex value. The type of the arguments and return value correspond. For complex, the two arguments must be of the same floating-point type and the return type is the complex type with the corresponding floating-point constituents: complex64 for float32, complex128 for float64. The real and imag functions together form the inverse, so for a complex value z, z == complex(real(z), imag(z)). If the operands of these functions are all constants, the return value is a constant. If any argument to any of complex, real, imag functions is NULL the result is NULL. For the numeric types, the following sizes are guaranteed Portions of this specification page are modifications based on work[2] created and shared by Google[3] and used according to terms described in the Creative Commons 3.0 Attribution License[4]. This specification is licensed under the Creative Commons Attribution 3.0 License, and code is licensed under a BSD license[5]. Links from the above documentation This section is not part of the specification. WARNING: The implementation of indices is new and it surely needs more time to become mature. Indices are currently used only by the WHERE clause. The following expression patterns of 'WHERE expression' are recognized and trigger index use. The relOp is one of the relation operators <, <=, ==, >=, >. For the equality operator both operands must be of comparable types. For all other operators both operands must be of ordered types. The constant expression is a compile time constant expression. Some constant folding is still a TODO. Parameter is a QL parameter ($1 etc.). Consider tables t and u, both with an indexed field f. The WHERE expression doesn't comply with the above simple detected cases.
However, such a query is now automatically rewritten to a form which will use both of the indices. The impact of using the indices can be substantial (cf. BenchmarkCrossJoin*) if the resulting rows have low "selectivity", i.e. only a few rows from both tables are selected by the respective WHERE filtering. Note: Existing QL DBs can be used and indices can be added to them. However, once any indices are present in the DB, the old QL versions cannot work with such a DB anymore. Running a benchmark with -v (-test.v) outputs information about the scale used to report records/s and a brief description of the benchmark. For example Running the full suite of benchmarks takes a lot of time. Use the -timeout flag to avoid them being killed after the default time limit (10 minutes).
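To close the discussion of indices, here is a hedged sketch of one of the recognized 'column relOp parameter' WHERE patterns, again assuming the modernc.org/ql import path and API used in the earlier sketches; whether the index is actually chosen can be verified with the EXPLAIN statement described above.

    package main

    import (
        "fmt"
        "log"

        "modernc.org/ql" // assumed import path
    )

    func main() {
        db, err := ql.OpenMem()
        if err != nil {
            log.Fatal(err)
        }

        if _, _, err = db.Run(ql.NewRWCtx(), `
            BEGIN TRANSACTION;
                CREATE TABLE t (f int64);
                CREATE INDEX tf ON t (f);
                INSERT INTO t VALUES (1), (2), (3), (42);
            COMMIT;
        `); err != nil {
            log.Fatal(err)
        }

        // 'f < $1' matches one of the index-friendly WHERE patterns; $1 is
        // a QL parameter bound to the extra Run argument.
        rs, _, err := db.Run(nil, "SELECT f FROM t WHERE f < $1;", int64(10))
        if err != nil {
            log.Fatal(err)
        }
        rows, err := rs[0].Rows(-1, 0)
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println(rows)
    }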
Package comprehend provides the API client, operations, and parameter types for Amazon Comprehend. Amazon Comprehend is an Amazon Web Services service for gaining insight into the content of documents. Use these actions to determine the topics contained in your documents, the topics they discuss, the predominant sentiment expressed in them, the predominant language used, and more.
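A hedged sketch of calling one of those actions (sentiment detection) with the aws-sdk-go-v2 client; the input field and enum names used here are assumptions that should be checked against the generated API types:

    package main

    import (
        "context"
        "fmt"
        "log"

        "github.com/aws/aws-sdk-go-v2/aws"
        "github.com/aws/aws-sdk-go-v2/config"
        "github.com/aws/aws-sdk-go-v2/service/comprehend"
        "github.com/aws/aws-sdk-go-v2/service/comprehend/types"
    )

    func main() {
        // Load credentials and region from the default config sources.
        cfg, err := config.LoadDefaultConfig(context.TODO())
        if err != nil {
            log.Fatal(err)
        }
        client := comprehend.NewFromConfig(cfg)

        // Detect the predominant sentiment of a short piece of text.
        out, err := client.DetectSentiment(context.TODO(), &comprehend.DetectSentimentInput{
            Text:         aws.String("I love this product."),
            LanguageCode: types.LanguageCodeEn,
        })
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println("sentiment:", out.Sentiment)
    }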
Package hdkeychain provides an API for Decred hierarchical deterministic extended keys (based on BIP0032). The ability to implement hierarchical deterministic wallets depends on the ability to create and derive hierarchical deterministic extended keys. At a high level, this package provides support for those hierarchical deterministic extended keys by providing an ExtendedKey type and supporting functions. Each extended key can either be a private or public extended key which itself is capable of deriving a child extended key. Whether an extended key is a private or public extended key can be determined with the IsPrivate function. In order to create and sign transactions, or provide others with addresses to send funds to, the underlying key and address material must be accessible. This package provides the ECPubKey, ECPrivKey, and Address functions for this purpose. As previously mentioned, the extended keys are hierarchical meaning they are used to form a tree. The root of that tree is called the master node and this package provides the NewMaster function to create it from a cryptographically random seed. The GenerateSeed function is provided as a convenient way to create a random seed for use with the NewMaster function. Once you have created a tree root (or have deserialized an extended key as discussed later), the child extended keys can be derived by using the Child function. The Child function supports deriving both normal (non-hardened) and hardened child extended keys. In order to derive a hardened extended key, use the HardenedKeyStart constant + the hardened key number as the index to the Child function. This provides the ability to cascade the keys into a tree and hence generate the hierarchical deterministic key chains. A private extended key can be used to derive both hardened and non-hardened (normal) child private and public extended keys. A public extended key can only be used to derive non-hardened child public extended keys. As enumerated in BIP0032 "knowledge of the extended public key plus any non-hardened private key descending from it is equivalent to knowing the extended private key (and thus every private and public key descending from it). This means that extended public keys must be treated more carefully than regular public keys. It is also the reason for the existence of hardened keys, and why they are used for the account level in the tree. This way, a leak of an account-specific (or below) private key never risks compromising the master or other accounts." A private extended key can be converted to a new instance of the corresponding public extended key with the Neuter function. The original extended key is not modified. A public extended key is still capable of deriving non-hardened child public extended keys. Extended keys are serialized and deserialized with the String and NewKeyFromString functions. The serialized key is a Base58-encoded string which looks like the following: Extended keys are much like normal Decred addresses in that they have version bytes which tie them to a specific network. The SetNet and IsForNet functions are provided to set and determine which network an extended key is associated with. This example demonstrates the audits use case in BIP0032. This example demonstrates the default hierarchical deterministic wallet layout as described in BIP0032. This example demonstrates how to generate a cryptographically random seed then use it to create a new master node (extended key).
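A hedged sketch of the seed/master/child/Neuter flow described above. The import paths and the exact NewMaster signature vary between module versions (for example, whether chaincfg.MainNetParams is a variable or a function), so treat the details below as assumptions:

    package main

    import (
        "fmt"
        "log"

        "github.com/decred/dcrd/chaincfg"   // assumed import path/version
        "github.com/decred/dcrd/hdkeychain" // assumed import path/version
    )

    func main() {
        // Create a cryptographically random seed and a master node from it.
        seed, err := hdkeychain.GenerateSeed(hdkeychain.RecommendedSeedLen)
        if err != nil {
            log.Fatal(err)
        }
        master, err := hdkeychain.NewMaster(seed, &chaincfg.MainNetParams)
        if err != nil {
            log.Fatal(err)
        }

        // Derive a hardened child (account 0) and neuter it to obtain the
        // corresponding public extended key.
        acct0, err := master.Child(hdkeychain.HardenedKeyStart + 0)
        if err != nil {
            log.Fatal(err)
        }
        acct0Pub, err := acct0.Neuter()
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println("account 0 extended public key:", acct0Pub.String())
    }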
Package dcrjson provides primitives for working with the Decred JSON-RPC API. When communicating via the JSON-RPC protocol, all of the commands need to be marshalled to and from the wire in the appropriate format. This package provides data structures and primitives to ease this process. In addition, it also provides some additional features such as custom command registration, command categorization, and reflection-based help generation. This information is not necessary in order to use this package, but it does provide some intuition into what the marshalling and unmarshalling that is discussed below is doing under the hood. As defined by the JSON-RPC spec, there are effectively two forms of messages on the wire: Request Objects {"jsonrpc":"1.0","id":"SOMEID","method":"SOMEMETHOD","params":[SOMEPARAMS]} NOTE: Notifications are the same format except the id field is null. Response Objects {"result":SOMETHING,"error":null,"id":"SOMEID"} {"result":null,"error":{"code":SOMEINT,"message":SOMESTRING},"id":"SOMEID"} For requests, the params field can vary in what it contains depending on the method (a.k.a. command) being sent. Each parameter can be as simple as an int or a complex structure containing many nested fields. The id field is used to identify a request and will be included in the associated response. When working with asynchronous transports, such as websockets, spontaneous notifications are also possible. As indicated, they are the same as a request object, except they have the id field set to null. Therefore, servers will ignore requests with the id field set to null, while clients can choose to consume or ignore them. Unfortunately, the original Bitcoin JSON-RPC API (and hence anything compatible with it) doesn't always follow the spec and will sometimes return an error string in the result field with a null error for certain commands. However, for the most part, the error field will be set as described on failure. Based upon the discussion above, it should be easy to see how the types of this package map into the required parts of the protocol To simplify the marshalling of the requests and responses, the MarshalCmd and MarshalResponse functions are provided. They return the raw bytes ready to be sent across the wire. Unmarshalling a received Request object is a two step process: This approach is used since it provides the caller with access to the additional fields in the request that are not part of the command such as the ID. Unmarshalling a received Response object is also a two step process: As above, this approach is used since it provides the caller with access to the fields in the response such as the ID and Error. This package provides two approaches for creating a new command. The first, and preferred, method is to use one of the New<Foo>Cmd functions. This allows static compile-time checking to help ensure the parameters stay in sync with the struct definitions. The second approach is the NewCmd function which takes a method (command) name and variable arguments. The function includes full checking to ensure the parameters are accurate according to provided method, however these checks are, obviously, run-time which means any mistakes won't be found until the code is actually executed. However, it is quite useful for user-supplied commands that are intentionally dynamic. The command handling of this package is built around the concept of registered commands.
This is true for the wide variety of commands already provided by the package, but it also means callers can easily provide custom commands with all of the same functionality as the built-in commands. Use the RegisterCmd function for this purpose. A list of all registered methods can be obtained with the RegisteredCmdMethods function. All registered commands are registered with flags that identify information such as whether the command applies to a chain server, wallet server, or is a notification along with the method name to use. These flags can be obtained with the MethodUsageFlags function, and the method can be obtained with the CmdMethod function. To facilitate providing consistent help to users of the RPC server, this package exposes the GenerateHelp function, which uses reflection on registered commands or notifications, as well as the provided expected result types, to generate the final help text. In addition, the MethodUsageText function is provided to generate consistent one-line usage for registered commands and notifications using reflection. There are two distinct types of errors supported by this package: The first category of errors (type Error) typically indicates a programmer error and can be avoided by properly using the API. Errors of this type will be returned from the various functions available in this package. They identify issues such as unsupported field types, attempts to register malformed commands, and attempting to create a new command with an improper number of parameters. The specific reason for the error can be detected by type asserting it to a *dcrjson.Error and accessing the ErrorCode field. The second category of errors (type RPCError), on the other hand, are useful for returning errors to RPC clients. Consequently, they are used in the previously described Response type. This example demonstrates how to unmarshal a JSON-RPC response and then unmarshal the result field in the response to a concrete type.
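A hedged sketch of the dynamic NewCmd/MarshalCmd path described above; the method name used and the exact MarshalCmd signature (older and newer module versions differ in whether an RPC protocol version argument is required) are assumptions:

    package main

    import (
        "fmt"
        "log"

        "github.com/decred/dcrd/dcrjson" // assumed import path/major version
    )

    func main() {
        // Build a registered command dynamically by method name; parameters
        // are checked at run time against the registered command type.
        cmd, err := dcrjson.NewCmd("getblockhash", 1000)
        if err != nil {
            log.Fatal(err)
        }

        // Marshal the command into a JSON-RPC request object ready for the
        // wire, using 1 as the request id. This sketch assumes the
        // two-argument MarshalCmd form.
        reqBytes, err := dcrjson.MarshalCmd(1, cmd)
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println(string(reqBytes))
    }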
Package sqlite is a sql/database driver using a CGo-free port of the C SQLite3 library. SQLite is an in-process implementation of a self-contained, serverless, zero-configuration, transactional SQL database engine. When you import this package you should use in your go.mod file the exact same version of modernc.org/libc as seen in the go.mod file of this repository. See the discussion at https://gitlab.com/cznic/sqlite/-/issues/177 for more details. This project is sponsored by Schleibinger Geräte Teubert u. Greim GmbH by allowing one of the maintainers to work on it also in office hours. These combinations of GOOS and GOARCH are currently supported Builder results available at: https://modern-c.appspot.com/-/builder/?importpath=modernc.org%2fsqlite 2024-11-16 v1.34.0: Implement ResetSession and IsValid methods in connection 2024-07-22 v1.31.0: Support windows/386. 2024-06-04 v1.30.0: Upgrade to SQLite 3.46.0, release notes at https://sqlite.org/releaselog/3_46_0.html. 2024-02-13 v1.29.0: Upgrade to SQLite 3.45.1, release notes at https://sqlite.org/releaselog/3_45_1.html. 2023-12-14: v1.28.0: Add (*Driver).RegisterConnectionHook, ConnectionHookFn, ExecQuerierContext, RegisterConnectionHook. 2023-08-03 v1.25.0: enable SQLITE_ENABLE_DBSTAT_VTAB. 2023-07-11 v1.24.0: Add (*conn).{Serialize,Deserialize,NewBackup,NewRestore} methods, add Backup type. 2023-06-01 v1.23.0: Allow registering aggregate functions. 2023-04-22 v1.22.0: Support linux/s390x. 2023-02-23 v1.21.0: Upgrade to SQLite 3.41.0, release notes at https://sqlite.org/releaselog/3_41_0.html. 2022-11-28 v1.20.0: Support linux/ppc64le. 2022-09-16 v1.19.0: Support freebsd/arm64. 2022-07-26 v1.18.0: Add support for Go fs.FS based SQLite virtual filesystems, see function New in modernc.org/sqlite/vfs and/or TestVFS in all_test.go 2022-04-24 v1.17.0: Support windows/arm64. 2022-04-04 v1.16.0: Support scalar application defined functions written in Go. See https://www.sqlite.org/appfunc.html 2022-03-13 v1.15.0: Support linux/riscv64. 2021-11-13 v1.14.0: Support windows/amd64. This target had previously only experimental status because of a now resolved memory leak. 2021-09-07 v1.13.0: Support freebsd/amd64. 2021-06-23 v1.11.0: Upgrade to use sqlite 3.36.0, release notes at https://www.sqlite.org/releaselog/3_36_0.html. 2021-05-06 v1.10.6: Fixes a memory corruption issue (https://gitlab.com/cznic/sqlite/-/issues/53). Versions since v1.8.6 were affected and should be updated to v1.10.6. 2021-03-14 v1.10.0: Update to use sqlite 3.35.0, release notes at https://www.sqlite.org/releaselog/3_35_0.html. 2021-03-11 v1.9.0: Support darwin/arm64. 2021-01-08 v1.8.0: Support darwin/amd64. 2020-09-13 v1.7.0: Support linux/arm and linux/arm64. 2020-09-08 v1.6.0: Support linux/386. 2020-09-03 v1.5.0: This project is now completely CGo-free, including the Tcl tests. 2020-08-26 v1.4.0: First stable release for linux/amd64. The database/sql driver and its tests are CGo free. Tests of the translated sqlite3.c library still require CGo. 2020-07-26 v1.4.0-beta1: The project has reached beta status while supporting linux/amd64 only at the moment. The 'extraquick' Tcl testsuite reports 2019-12-28 v1.2.0-alpha.3: Third alpha fixes issue #19. 2019-12-26 v1.1.0-alpha.2: Second alpha release adds support for accessing a database concurrently by multiple goroutines and/or processes. v1.1.0 is now considered feature-complete. Next planned release should be a beta with a proper test suite.
2019-12-18 v1.1.0-alpha.1: First alpha release using the new cc/v3, gocc, qbe toolchain. Some primitive tests pass on linux_{amd64,386}. Not yet safe for concurrent access by multiple goroutines. Next alpha release is planned to arrive before the end of this year. 2017-06-10: Windows/Intel no longer uses the VM (thanks Steffen Butzer). 2017-06-05 Linux/Intel no longer uses the VM (cznic/virtual). To access a Sqlite database do something like A comma separated list of options can be passed to `go generate` via the environment variable GO_GENERATE. Some useful options include for example: To create a debug/development version, issue for example: Note: To run `go generate` you need to have modernc.org/ccgo/v3 installed. This is an example of how to use the debug logs in modernc.org/libc when hunting a bug. The /tmp/libc.log file is created as requested. No useful messages there because none are enabled in libc. Let's try to enable Xwrite as an example. We need to tell the Go build system to use our local, patched/debug libc: And run the test again: See https://sqlite.org/docs.html
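The elided "do something like" snippet above presumably resembles the following minimal sketch, which opens an in-memory database through database/sql using the "sqlite" driver this package registers:

    package main

    import (
        "database/sql"
        "fmt"
        "log"

        _ "modernc.org/sqlite" // registers the "sqlite" driver
    )

    func main() {
        // ":memory:" keeps the sketch self-contained; use a file path such
        // as "app.db" for a persistent database.
        db, err := sql.Open("sqlite", ":memory:")
        if err != nil {
            log.Fatal(err)
        }
        defer db.Close()

        if _, err := db.Exec("CREATE TABLE t(id INTEGER PRIMARY KEY, name TEXT)"); err != nil {
            log.Fatal(err)
        }
        if _, err := db.Exec("INSERT INTO t(name) VALUES (?)", "hello"); err != nil {
            log.Fatal(err)
        }

        var name string
        if err := db.QueryRow("SELECT name FROM t WHERE id = ?", 1).Scan(&name); err != nil {
            log.Fatal(err)
        }
        fmt.Println(name)
    }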
Package database provides a block and metadata storage database. This package provides a database layer to store and retrieve block data and arbitrary metadata in a simple and efficient manner. The default backend, ffldb, has a strong focus on speed, efficiency, and robustness. It makes use of leveldb for the metadata, flat files for block storage, and strict checksums in key areas to ensure data integrity. A quick overview of the features database provides is as follows: The main entry point is the DB interface. It exposes functionality for transactional-based access and storage of metadata and block data. It is obtained via the Create and Open functions which take a database type string that identifies the specific database driver (backend) to use as well as arguments specific to the specified driver. The Namespace interface is an abstraction that provides facilities for obtaining transactions (the Tx interface) that are the basis of all database reads and writes. Unlike some database interfaces that support reading and writing without transactions, this interface requires transactions even when only reading or writing a single key. The Begin function provides an unmanaged transaction while the View and Update functions provide a managed transaction. These are described in more detail below. The Tx interface provides facilities for rolling back or committing changes that took place while the transaction was active. It also provides the root metadata bucket under which all keys, values, and nested buckets are stored. A transaction can either be read-only or read-write and managed or unmanaged. A managed transaction is one where the caller provides a function to execute within the context of the transaction and the commit or rollback is handled automatically depending on whether or not the provided function returns an error. Attempting to manually call Rollback or Commit on the managed transaction will result in a panic. An unmanaged transaction, on the other hand, requires the caller to manually call Commit or Rollback when they are finished with it. Leaving transactions open for long periods of time can have several adverse effects, so it is recommended that managed transactions are used instead. The Bucket interface provides the ability to manipulate key/value pairs and nested buckets as well as iterate through them. The Get, Put, and Delete functions work with key/value pairs, while the Bucket, CreateBucket, CreateBucketIfNotExists, and DeleteBucket functions work with buckets. The ForEach function allows the caller to provide a function to be called with each key/value pair and nested bucket in the current bucket. As discussed above, all of the functions which are used to manipulate key/value pairs and nested buckets exist on the Bucket interface. The root metadata bucket is the upper-most bucket in which data is stored and is created at the same time as the database. Use the Metadata function on the Tx interface to retrieve it. The CreateBucket and CreateBucketIfNotExists functions on the Bucket interface provide the ability to create an arbitrary number of nested buckets. It is a good idea to avoid a lot of buckets with little data in them as it could lead to poor page utilization depending on the specific driver in use. This example demonstrates creating a new database and using a managed read-write transaction to store and retrieve metadata.
This example demonstrates creating a new database, using a managed read-write transaction to store a block, and using a managed read-only transaction to fetch the block.
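Along the lines of the metadata example described above, here is a hedged sketch using the ffldb backend; the import paths, the ffldb driver registration, and the Create argument list are assumptions that should be checked against the actual module version:

    package main

    import (
        "fmt"
        "log"
        "os"
        "path/filepath"

        "github.com/decred/dcrd/database"         // assumed import path/version
        _ "github.com/decred/dcrd/database/ffldb" // assumed: registers the ffldb driver
        "github.com/decred/dcrd/wire"
    )

    func main() {
        // Create a throw-away database using the ffldb backend.
        dbPath := filepath.Join(os.TempDir(), "exampledb")
        db, err := database.Create("ffldb", dbPath, wire.MainNet)
        if err != nil {
            log.Fatal(err)
        }
        defer os.RemoveAll(dbPath)
        defer db.Close()

        // Store a key/value pair in the root metadata bucket inside a
        // managed read-write transaction, then read it back in a managed
        // read-only transaction.
        key, value := []byte("mykey"), []byte("myvalue")
        if err := db.Update(func(tx database.Tx) error {
            return tx.Metadata().Put(key, value)
        }); err != nil {
            log.Fatal(err)
        }
        if err := db.View(func(tx database.Tx) error {
            fmt.Printf("%s\n", tx.Metadata().Get(key))
            return nil
        }); err != nil {
            log.Fatal(err)
        }
    }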
Package rpcclient implements a websocket-enabled Decred JSON-RPC client. This client provides a robust and easy to use client for interfacing with a Decred RPC server that uses a mostly btcd/bitcoin core style Decred JSON-RPC API. This client has been tested with dcrd (https://github.com/decred/dcrd) and dcrwallet (https://github.com/decred/dcrwallet). In addition to the compatible standard HTTP POST JSON-RPC API, dcrd and dcrwallet provide a websocket interface that is more efficient than the standard HTTP POST method of accessing RPC. The section below discusses the differences between HTTP POST and websockets. By default, this client assumes the RPC server supports websockets and has TLS enabled. In practice, this currently means it assumes you are talking to dcrd or dcrwallet by default. However, configuration options are provided to fall back to HTTP POST and disable TLS to support talking with inferior bitcoin core style RPC servers. In HTTP POST-based JSON-RPC, every request creates a new HTTP connection, issues the call, waits for the response, and closes the connection. This adds quite a bit of overhead to every call and lacks flexibility for features such as notifications. In contrast, the websocket-based JSON-RPC interface provided by dcrd and dcrwallet only uses a single connection that remains open and allows asynchronous bi-directional communication. The websocket interface supports all of the same commands as HTTP POST, but they can be invoked without having to go through a connect/disconnect cycle for every call. In addition, the websocket interface provides other nice features such as the ability to register for asynchronous notifications of various events. The client provides both a synchronous (blocking) and asynchronous API. The synchronous (blocking) API is typically sufficient for most use cases. It works by issuing the RPC and blocking until the response is received. This allows straightforward code where you have the response as soon as the function returns. The asynchronous API works on the concept of futures. When you invoke the async version of a command, it will quickly return an instance of a type that promises to provide the result of the RPC at some future time. In the background, the RPC call is issued and the result is stored in the returned instance. Invoking the Receive method on the returned instance will either return the result immediately if it has already arrived, or block until it has. This is useful since it provides the caller with greater control over concurrency. The first important part of notifications is to realize that they will only work when connected via websockets. This should intuitively make sense because HTTP POST mode does not keep a connection open! All notifications provided by dcrd require registration to opt-in. For example, if you want to be notified when funds are received by a set of addresses, you register the addresses via the NotifyReceived (or NotifyReceivedAsync) function. Notifications are exposed by the client through the use of callback handlers which are setup via a NotificationHandlers instance that is specified by the caller when creating the client. It is important that these notification handlers complete quickly since they are intentionally in the main read loop and will block further reads until they complete. This provides the caller with the flexibility to decide what to do when notifications are coming in faster than they are being handled. 
In particular this means issuing a blocking RPC call from a callback handler will cause a deadlock as more server responses won't be read until the callback returns, but the callback would be waiting for a response. Thus, any additional RPCs must be issued in a completely decoupled manner. By default, when running in websockets mode, this client will automatically keep trying to reconnect to the RPC server should the connection be lost. There is a back-off in between each connection attempt until it reaches one try per minute. Once a connection is re-established, all previously registered notifications are automatically re-registered and any in-flight commands are re-issued. This means from the caller's perspective, the request simply takes longer to complete. The caller may invoke the Shutdown method on the client to force the client to cease reconnect attempts and return ErrClientShutdown for all outstanding commands. The automatic reconnection can be disabled by setting the DisableAutoReconnect flag to true in the connection config when creating the client. Minor RPC Server Differences and Chain/Wallet Separation Some of the commands are extensions specific to a particular RPC server. For example, the DebugLevel call is an extension only provided by dcrd (and dcrwallet passthrough). Therefore if you call one of these commands against an RPC server that doesn't provide them, you will get an unimplemented error from the server. An effort has been made to call out which commands are extensions in their documentation. Also, it is important to realize that dcrd intentionally separates the wallet functionality into a separate process named dcrwallet. This means if you are connected to the dcrd RPC server directly, only the RPCs which are related to chain services will be available. Depending on your application, you might only need chain-related RPCs. In contrast, dcrwallet provides pass through treatment for chain-related RPCs, so it supports them in addition to wallet-related RPCs. There are three categories of errors that will be returned throughout this package: The first category of errors are typically one of ErrInvalidAuth, ErrInvalidEndpoint, ErrClientDisconnect, or ErrClientShutdown. NOTE: The ErrClientDisconnect will not be returned unless the DisableAutoReconnect flag is set since the client automatically handles reconnect by default as previously described. The second category of errors typically indicates a programmer error and as such the type can vary, but usually will be best handled by simply showing/logging it. The third category of errors, that is errors returned by the server, can be detected by type asserting the error to a *dcrjson.RPCError. For example, to detect if a command is unimplemented by the remote RPC server: The following full-blown client examples are in the examples directory:
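A hedged sketch of creating a websocket-enabled client and issuing a synchronous call; the host/port, certificate handling, and whether the RPC methods take a context.Context differ between deployments and module versions, so treat the specifics as assumptions:

    package main

    import (
        "log"
        "os"

        "github.com/decred/dcrd/rpcclient" // assumed import path/major version
    )

    func main() {
        // dcrd serves RPC over TLS, so its certificate is needed for the
        // websocket connection.
        certs, err := os.ReadFile("dcrd.cert")
        if err != nil {
            log.Fatal(err)
        }

        connCfg := &rpcclient.ConnConfig{
            Host:         "localhost:9109", // assumed mainnet dcrd RPC port
            Endpoint:     "ws",
            User:         "rpcuser",
            Pass:         "rpcpass",
            Certificates: certs,
        }

        // nil: no notification handlers are registered in this sketch.
        client, err := rpcclient.New(connCfg, nil)
        if err != nil {
            log.Fatal(err)
        }
        defer client.Shutdown()

        // Synchronous (blocking) call. NOTE: newer module versions take a
        // context.Context as the first argument; this assumes the older,
        // context-free form.
        count, err := client.GetBlockCount()
        if err != nil {
            log.Fatal(err)
        }
        log.Printf("block count: %d", count)
    }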
Package dcrjson provides infrastructure for working with Decred JSON-RPC APIs. When communicating via the JSON-RPC protocol, all requests and responses must be marshalled to and from the wire in the appropriate format. This package provides infrastructure and primitives to ease this process. This information is not necessary in order to use this package, but it does provide some intuition into what the marshalling and unmarshalling that is discussed below is doing under the hood. As defined by the JSON-RPC spec, there are effectively two forms of messages on the wire: Request Objects {"jsonrpc":"1.0","id":"SOMEID","method":"SOMEMETHOD","params":[SOMEPARAMS]} NOTE: Notifications are the same format except the id field is null. Response Objects {"result":SOMETHING,"error":null,"id":"SOMEID"} {"result":null,"error":{"code":SOMEINT,"message":SOMESTRING},"id":"SOMEID"} For requests, the params field can vary in what it contains depending on the method (a.k.a. command) being sent. Each parameter can be as simple as an int or a complex structure containing many nested fields. The id field is used to identify a request and will be included in the associated response. When working with streamed RPC transports, such as websockets, spontaneous notifications are also possible. As indicated, they are the same as a request object, except they have the id field set to null. Therefore, servers will ignore requests with the id field set to null, while clients can choose to consume or ignore them. Unfortunately, the original Bitcoin JSON-RPC API (and hence anything compatible with it) doesn't always follow the spec and will sometimes return an error string in the result field with a null error for certain commands. However, for the most part, the error field will be set as described on failure. To simplify the marshalling of the requests and responses, the MarshalCmd and MarshalResponse functions are provided. They return the raw bytes ready to be sent across the wire. Unmarshalling a received Request object is a two step process: This approach is used since it provides the caller with access to the additional fields in the request that are not part of the command such as the ID. Unmarshalling a received Response object is also a two step process: As above, this approach is used since it provides the caller with access to the fields in the response such as the ID and Error. This package provides the NewCmd function which takes a method (command) name and variable arguments. The function includes full checking to ensure the parameters are accurate according to provided method, however these checks are, obviously, run-time which means any mistakes won't be found until the code is actually executed. However, it is quite useful for user-supplied commands that are intentionally dynamic. External packages can and should implement types implementing Command for use with MarshalCmd/ParseParams. The command handling of this package is built around the concept of registered commands. This is true for the wide variety of commands already provided by the package, but it also means callers can easily provide custom commands with all of the same functionality as the built-in commands. Use the RegisterCmd function for this purpose. A list of all registered methods can be obtained with the RegisteredCmdMethods function. All registered commands are registered with flags that identify information such as whether the command applies to a chain server, wallet server, or is a notification along with the method name to use.
These flags can be obtained with the MethodUsageFlags function, and the method can be obtained with the CmdMethod function. To facilitate providing consistent help to users of the RPC server, this package exposes the GenerateHelp function, which uses reflection on registered commands or notifications to generate the final help text. In addition, the MethodUsageText function is provided to generate consistent one-line usage for registered commands and notifications using reflection. There are two distinct types of errors supported by this package: The first category of errors (type Error) typically indicates a programmer error and can be avoided by properly using the API. Errors of this type will be returned from the various functions available in this package. They identify issues such as unsupported field types, attempts to register malformed commands, and attempting to create a new command with an improper number of parameters. The specific reason for the error can be detected by type asserting it to a *dcrjson.Error and accessing the ErrorCode field. The second category of errors (type RPCError), on the other hand, are useful for returning errors to RPC clients. Consequently, they are used in the previously described Response type. This example demonstrates how to unmarshal a JSON-RPC response and then unmarshal the result field in the response to a concrete type.
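A hedged sketch of detecting the second error category (RPCError) so it can be reported back to, or on behalf of, an RPC client; the /v4 import path is an assumption, and a hand-constructed RPCError stands in for one returned by a server:

    package main

    import (
        "errors"
        "fmt"

        "github.com/decred/dcrd/dcrjson/v4" // assumed import path/major version
    )

    // report distinguishes server-side RPC errors from local ones.
    func report(err error) {
        var rpcErr *dcrjson.RPCError
        if errors.As(err, &rpcErr) {
            fmt.Printf("server error %d: %s\n", rpcErr.Code, rpcErr.Message)
            return
        }
        fmt.Println("local error:", err)
    }

    func main() {
        // Hand-constructed stand-in for an error returned by a server.
        report(&dcrjson.RPCError{Code: -32602, Message: "bad height"})
    }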
Package levigo provides the ability to create and access LevelDB databases. levigo.Open opens and creates databases. The DB struct returned by Open provides DB.Get, DB.Put and DB.Delete to modify and query the database. For bulk reads, use an Iterator. If you want to avoid disturbing your live traffic while doing the bulk read, be sure to call SetFillCache(false) on the ReadOptions you use when creating the Iterator. Batched, atomic writes can be performed with a WriteBatch and DB.Write. If your working dataset does not fit in memory, you'll want to add a bloom filter to your database. NewBloomFilter and Options.SetFilterPolicy are what you want. The argument to NewBloomFilter is the number of bits in the filter to use per key in your database. If you're using a custom comparator in your code, be aware you may have to make your own filter policy object. This documentation is not a complete discussion of LevelDB. Please read the LevelDB documentation <http://code.google.com/p/leveldb> for information on its operation. You'll find lots of goodies there.
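A hedged sketch pulling the pieces above together (the github.com/jmhodges/levigo import path is assumed):

    package main

    import (
        "fmt"
        "log"

        "github.com/jmhodges/levigo" // assumed import path
    )

    func main() {
        opts := levigo.NewOptions()
        defer opts.Close()
        opts.SetCreateIfMissing(true)
        // Roughly 10 bits per key is a common starting point for a bloom
        // filter when the working set does not fit in memory.
        opts.SetFilterPolicy(levigo.NewBloomFilter(10))

        db, err := levigo.Open("/tmp/example.leveldb", opts)
        if err != nil {
            log.Fatal(err)
        }
        defer db.Close()

        wo := levigo.NewWriteOptions()
        defer wo.Close()
        ro := levigo.NewReadOptions()
        defer ro.Close()

        if err := db.Put(wo, []byte("key"), []byte("value")); err != nil {
            log.Fatal(err)
        }
        data, err := db.Get(ro, []byte("key"))
        if err != nil {
            log.Fatal(err)
        }
        fmt.Printf("%s\n", data)
    }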
Package hdkeychain provides an API for Decred hierarchical deterministic extended keys (based on BIP0032). The ability to implement hierarchical deterministic wallets depends on the ability to create and derive hierarchical deterministic extended keys. At a high level, this package provides support for those hierarchical deterministic extended keys by providing an ExtendedKey type and supporting functions. Each extended key can either be a private or public extended key which itself is capable of deriving a child extended key. Whether an extended key is a private or public extended key can be determined with the IsPrivate function. In order to create and sign transactions, or provide others with addresses to send funds to, the underlying key and address material must be accessible. This package provides the SerializedPubKey and SerializedPrivKey functions for this purpose. The caller may then create the desired address types. As previously mentioned, the extended keys are hierarchical meaning they are used to form a tree. The root of that tree is called the master node and this package provides the NewMaster function to create it from a cryptographically random seed. The GenerateSeed function is provided as a convenient way to create a random seed for use with the NewMaster function. Once you have created a tree root (or have deserialized an extended key as discussed later), the child extended keys can be derived by using either the Child or ChildBIP32Std function. The difference is described in the following section. These functions support deriving both normal (non-hardened) and hardened child extended keys. In order to derive a hardened extended key, use the HardenedKeyStart constant + the hardened key number as the index to the Child function. This provides the ability to cascade the keys into a tree and hence generate the hierarchical deterministic key chains. The Child function derives extended keys with a modified scheme based on BIP0032, whereas ChildBIP32Std produces keys that strictly conform to the standard. Specifically, the Decred variation strips leading zeros of a private key, causing subsequent child keys to differ from the keys expected by standard BIP0032. The ChildBIP32Std method retains leading zeros, ensuring the child keys expected by BIP0032 are derived. The Child function must be used for Decred wallet key derivation for legacy reasons. A private extended key can be used to derive both hardened and non-hardened (normal) child private and public extended keys. A public extended key can only be used to derive non-hardened child public extended keys. As enumerated in BIP0032 "knowledge of the extended public key plus any non-hardened private key descending from it is equivalent to knowing the extended private key (and thus every private and public key descending from it). This means that extended public keys must be treated more carefully than regular public keys. It is also the reason for the existence of hardened keys, and why they are used for the account level in the tree. This way, a leak of an account-specific (or below) private key never risks compromising the master or other accounts." A private extended key can be converted to a new instance of the corresponding public extended key with the Neuter function. The original extended key is not modified. A public extended key is still capable of deriving non-hardened child public extended keys. Extended keys are serialized and deserialized with the String and NewKeyFromString functions. 
The serialized key is a Base58-encoded string which looks like the following: Extended keys are much like normal Decred addresses in that they have version bytes which tie them to a specific network. The network that an extended key is associated with is specified when creating and decoding the key. In the case of decoding, an error will be returned if a given encoded extended key is not for the specified network. This example demonstrates the audits use case in BIP0032. This example demonstrates the default hierarchical deterministic wallet layout as described in BIP0032. This example demonstrates how to generate a cryptographically random seed then use it to create a new master node (extended key).
Package types implements concrete types for marshalling to and from the dcrd JSON-RPC commands, return values, and notifications. When communicating via the JSON-RPC protocol, all requests and responses must be marshalled to and from the wire in the appropriate format. This package provides data structures and primitives that are registered with dcrjson to ease this process. An overview specific to this package is provided here, however it is also instructive to read the documentation for the dcrjson package (https://pkg.go.dev/github.com/decred/dcrd/dcrjson/v3). The types in this package map to the required parts of the protocol as discussed in the dcrjson documentation To simplify the marshalling of the requests and responses, the dcrjson.MarshalCmd and dcrjson.MarshalResponse functions may be used. They return the raw bytes ready to be sent across the wire. Unmarshalling a received Request object is a two step process: This approach is used since it provides the caller with access to the additional fields in the request that are not part of the command such as the ID. Unmarshalling a received Response object is also a two step process: As above, this approach is used since it provides the caller with access to the fields in the response such as the ID and Error. This package provides two approaches for creating a new command. The first, and preferred, method is to use one of the New<Foo>Cmd functions. This allows static compile-time checking to help ensure the parameters stay in sync with the struct definitions. The second approach is the dcrjson.NewCmd function which takes a method (command) name and variable arguments. Since this package registers all of its types with dcrjson, the function will recognize them and includes full checking to ensure the parameters are accurate according to provided method, however these checks are, obviously, run-time which means any mistakes won't be found until the code is actually executed. However, it is quite useful for user-supplied commands that are intentionally dynamic. To facilitate providing consistent help to users of the RPC server, the dcrjson package exposes the GenerateHelp function, which uses reflection on commands and notifications registered by this package, as well as the provided expected result types, to generate the final help text. In addition, the dcrjson.MethodUsageText function may be used to generate consistent one-line usage for registered commands and notifications using reflection.
Package hdkeychain provides an API for Decred hierarchical deterministic extended keys (based on BIP0032). The ability to implement hierarchical deterministic wallets depends on the ability to create and derive hierarchical deterministic extended keys. At a high level, this package provides support for those hierarchical deterministic extended keys by providing an ExtendedKey type and supporting functions. Each extended key can either be a private or public extended key which itself is capable of deriving a child extended key. Whether an extended key is a private or public extended key can be determined with the IsPrivate function. In order to create and sign transactions, or provide others with addresses to send funds to, the underlying key and address material must be accessible. This package provides the ECPubKey and ECPrivKey functions for this purpose. The caller may then create the desired address types. As previously mentioned, the extended keys are hierarchical meaning they are used to form a tree. The root of that tree is called the master node and this package provides the NewMaster function to create it from a cryptographically random seed. The GenerateSeed function is provided as a convenient way to create a random seed for use with the NewMaster function. Once you have created a tree root (or have deserialized an extended key as discussed later), the child extended keys can be derived by using the Child function. The Child function supports deriving both normal (non-hardened) and hardened child extended keys. In order to derive a hardened extended key, use the HardenedKeyStart constant + the hardened key number as the index to the Child function. This provides the ability to cascade the keys into a tree and hence generate the hierarchical deterministic key chains. A private extended key can be used to derive both hardened and non-hardened (normal) child private and public extended keys. A public extended key can only be used to derive non-hardened child public extended keys. As enumerated in BIP0032 "knowledge of the extended public key plus any non-hardened private key descending from it is equivalent to knowing the extended private key (and thus every private and public key descending from it). This means that extended public keys must be treated more carefully than regular public keys. It is also the reason for the existence of hardened keys, and why they are used for the account level in the tree. This way, a leak of an account-specific (or below) private key never risks compromising the master or other accounts." A private extended key can be converted to a new instance of the corresponding public extended key with the Neuter function. The original extended key is not modified. A public extended key is still capable of deriving non-hardened child public extended keys. Extended keys are serialized and deserialized with the String and NewKeyFromString functions. The serialized key is a Base58-encoded string which looks like the following: Extended keys are much like normal Decred addresses in that they have version bytes which tie them to a specific network. The network that an extended key is associated with is specified when creating and decoding the key. In the case of decoding, an error will be returned if a given encoded extended key is not for the specified network. This example demonstrates the audits use case in BIP0032. This example demonstrates the default hierarchical deterministic wallet layout as described in BIP0032. 
This example demonstrates how to generate a cryptographically random seed then use it to create a new master node (extended key).
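Since this older API exposes the underlying key material through ECPubKey and ECPrivKey, here is a short sketch of that part. It assumes the pre-v3 module path and that those methods return secp256k1 key types with the usual serialization methods; treat the details as illustrative rather than exact.

    package keys

    import (
        "fmt"

        "github.com/decred/dcrd/hdkeychain"
    )

    // printKeyMaterial extracts the underlying EC keys from an extended private
    // key so the caller can build addresses or sign transactions.
    func printKeyMaterial(key *hdkeychain.ExtendedKey) error {
        pubKey, err := key.ECPubKey()
        if err != nil {
            return err
        }
        privKey, err := key.ECPrivKey()
        if err != nil {
            return err
        }
        fmt.Printf("compressed pubkey: %x\n", pubKey.SerializeCompressed())
        _ = privKey // hand to a signing routine as needed
        return nil
    }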
Package apd implements arbitrary-precision decimals. apd implements much of the decimal specification from the General Decimal Arithmetic (http://speleotrove.com/decimal/) description, which is referred to here as GDA. This is the same specification implemented by Python's decimal module (https://docs.python.org/2/library/decimal.html) and GCC's decimal extension. Panic-free operation. The math/big types don’t return errors, and instead panic under some conditions that are documented. This requires users to validate the inputs before using them. Meanwhile, we’d like our decimal operations to have more failure modes and more input requirements than the math/big types, so using that API would be difficult. apd instead returns errors when needed. Support for standard functions. sqrt, ln, pow, etc. Accurate and configurable precision. Operations will use enough internal precision to produce a correct result at the requested precision. Precision is set by a "context" structure that accompanies the function arguments, as discussed in the next section. Good performance. Operations will either be fast enough or will produce an error if they will be slow. This prevents edge-case operations from consuming lots of CPU or memory. Condition flags and traps. All operations will report whether their result is exact, is rounded, is over- or under-flowed, is subnormal (https://en.wikipedia.org/wiki/Denormal_number), or is some other condition. apd supports traps, which will trigger an error on any of these conditions. This makes it possible to guarantee exactness in computations, if needed. SQL scan and value methods are implemented. This allows the use of Decimals as placeholder parameters and row result Scan destinations. apd has two main types. The first is Decimal, which holds the values of decimals. It is simple and uses a big.Int with an exponent to describe values. Most operations on Decimals can’t produce errors as they work directly on the underlying big.Int. Notably, however, there are no arithmetic operations on Decimals. The second main type is Context, which is where all arithmetic operations are defined. A Context describes the precision, range, and some other restrictions during operations. These operations can all produce failures, and so return errors. Context operations, in addition to errors, return a Condition, which is a bitfield of flags that occurred during an operation. These include overflow, underflow, inexact, rounded, and others. The Traps field of a Context can be set, which will produce an error if the corresponding flag occurs. An example of this is given below.
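A minimal sketch of Context arithmetic with a trap set, standing in for the example referred to above. It assumes the github.com/cockroachdb/apd import path; newer major versions live under /v2 or /v3.

    package main

    import (
        "fmt"

        "github.com/cockroachdb/apd"
    )

    func main() {
        // A Context carries the precision and other settings for arithmetic.
        ctx := apd.BaseContext.WithPrecision(20)

        // Decimal values are just a coefficient and an exponent.
        x := apd.New(1, 0) // 1
        y := apd.New(3, 0) // 3

        // Context operations return a Condition bitfield alongside any error.
        res := new(apd.Decimal)
        cond, err := ctx.Quo(res, x, y)
        fmt.Println(res, cond.Inexact(), err) // 0.33333333333333333333 true <nil>

        // Setting Traps turns selected conditions into errors, which can be
        // used to guarantee exactness in a computation.
        ctx.Traps = apd.Inexact
        if _, err := ctx.Quo(res, x, y); err != nil {
            fmt.Println("inexact result rejected:", err)
        }
    }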
Package database provides a block and metadata storage database. This package provides a database layer to store and retrieve block data and arbitrary metadata in a simple and efficient manner. The default backend, ffldb, has a strong focus on speed, efficiency, and robustness. It makes use of leveldb for the metadata, flat files for block storage, and strict checksums in key areas to ensure data integrity. A quick overview of the features the database package provides is as follows: The main entry point is the DB interface. It exposes functionality for transactional access and storage of metadata and block data. It is obtained via the Create and Open functions, which take a database type string that identifies the specific database driver (backend) to use as well as arguments specific to the specified driver. The Namespace interface is an abstraction that provides facilities for obtaining transactions (the Tx interface) that are the basis of all database reads and writes. Unlike some database interfaces that support reading and writing without transactions, this interface requires transactions even when only reading or writing a single key. The Begin function provides an unmanaged transaction while the View and Update functions provide a managed transaction. These are described in more detail below. The Tx interface provides facilities for rolling back or committing changes that took place while the transaction was active. It also provides the root metadata bucket under which all keys, values, and nested buckets are stored. A transaction can either be read-only or read-write and managed or unmanaged. A managed transaction is one where the caller provides a function to execute within the context of the transaction and the commit or rollback is handled automatically depending on whether or not the provided function returns an error. Attempting to manually call Rollback or Commit on the managed transaction will result in a panic. An unmanaged transaction, on the other hand, requires the caller to manually call Commit or Rollback when they are finished with it. Leaving transactions open for long periods of time can have several adverse effects, so it is recommended that managed transactions are used instead. The Bucket interface provides the ability to manipulate key/value pairs and nested buckets as well as iterate through them. The Get, Put, and Delete functions work with key/value pairs, while the Bucket, CreateBucket, CreateBucketIfNotExists, and DeleteBucket functions work with buckets. The ForEach function allows the caller to provide a function to be called with each key/value pair and nested bucket in the current bucket. As discussed above, all of the functions which are used to manipulate key/value pairs and nested buckets exist on the Bucket interface. The root metadata bucket is the upper-most bucket in which data is stored and is created at the same time as the database. Use the Metadata function on the Tx interface to retrieve it. The CreateBucket and CreateBucketIfNotExists functions on the Bucket interface provide the ability to create an arbitrary number of nested buckets. It is a good idea to avoid a lot of buckets with little data in them as it could lead to poor page utilization depending on the specific driver in use. This example demonstrates creating a new database and using a managed read-write transaction to store and retrieve metadata. 
This example demonstrates creating a new database, using a managed read-write transaction to store a block, and using a managed read-only transaction to fetch the block.
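A minimal sketch of the metadata scenario described above, using a managed read-write transaction to store a value and a managed read-only transaction to read it back. The module paths below (database/v3 and its ffldb driver subpackage) are assumptions about the version in use.

    package main

    import (
        "fmt"
        "os"

        "github.com/decred/dcrd/database/v3"
        _ "github.com/decred/dcrd/database/v3/ffldb"
        "github.com/decred/dcrd/wire"
    )

    func main() {
        // Create a database using the ffldb driver registered by the blank import.
        dbPath := "exampledb"
        db, err := database.Create("ffldb", dbPath, wire.MainNet)
        if err != nil {
            panic(err)
        }
        defer os.RemoveAll(dbPath)
        defer db.Close()

        // Managed read-write transaction: commit or rollback is decided by the
        // error returned from the closure.
        key, value := []byte("mykey"), []byte("myvalue")
        err = db.Update(func(tx database.Tx) error {
            return tx.Metadata().Put(key, value)
        })
        if err != nil {
            panic(err)
        }

        // Managed read-only transaction to read the value back.
        err = db.View(func(tx database.Tx) error {
            fmt.Printf("%s\n", tx.Metadata().Get(key))
            return nil
        })
        if err != nil {
            panic(err)
        }
    }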
Package crypto11 enables access to cryptographic keys from PKCS#11 using Go crypto API. PKCS#11 tokens are accessed via Context objects. Each Context connects to one token. Context objects are created by calling Configure or ConfigureFromFile. In the latter case, the file should contain a JSON representation of a Config. There is support for generating DSA, RSA and ECDSA keys. These keys can be found later using FindKeyPair. All three key types implement the crypto.Signer interface and the RSA keys also implement crypto.Decrypter. RSA keys obtained through FindKeyPair will need a type assertion to be used for decryption. Assert either crypto.Decrypter or SignerDecrypter, as you prefer. Symmetric keys can also be generated. These are found later using FindKey. See the documentation for SecretKey for further information. Note that PKCS#11 session handles must not be used concurrently from multiple threads. Consumers of the Signer interface know nothing of this and expect to be able to sign from multiple threads without constraint. We address this as follows. 1. When a Context is created, a session is created and the user is logged in. This session remains open until the Context is closed, to ensure all object handles remain valid and to avoid repeatedly calling C_Login. 2. The Context also maintains a pool of read-write sessions. The pool expands dynamically as needed, but never beyond the maximum number of r/w sessions supported by the token (as reported by C_GetInfo). If other applications are using the token, a lower limit should be set in the Config. 3. Each operation transiently takes a session from the pool. They have exclusive use of the session, meeting PKCS#11's concurrency requirements. Sessions are returned to the pool afterwards and may be re-used. Behaviour of the pool can be tweaked via Config fields: - PoolWaitTimeout controls how long an operation can block waiting on a session from the pool. A zero value means there is no limit. Timeouts occur if the pool is fully used and additional operations are requested. - MaxSessions sets an upper bound on the number of sessions. If this value is zero, a default maximum is used (see DefaultMaxSessions). In every case the maximum supported sessions as reported by the token is obeyed. The PKCS1v15DecryptOptions SessionKeyLen field is not implemented and an error is returned if it is nonzero. The reason for this is that it is not possible for crypto11 to guarantee the constant-time behavior in the specification. See https://github.com/thalesignite/crypto11/issues/5 for further discussion. Symmetric crypto support via cipher.Block is very slow. You can use the BlockModeCloser API but you must call the Close() interface (not found in cipher.BlockMode). See https://github.com/ThalesIgnite/crypto11/issues/6 for further discussion.
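A brief sketch of connecting to a token and signing through the crypto.Signer interface. The PKCS#11 module path, token label, PIN, and key label below are placeholders for your environment, not values supplied by the library.

    package main

    import (
        "crypto"
        "crypto/rand"
        "crypto/sha256"
        "fmt"

        "github.com/ThalesIgnite/crypto11"
    )

    func main() {
        // Each Context connects to one token.
        ctx, err := crypto11.Configure(&crypto11.Config{
            Path:       "/usr/lib/softhsm/libsofthsm2.so",
            TokenLabel: "example-token",
            Pin:        "1234",
        })
        if err != nil {
            panic(err)
        }
        defer ctx.Close()

        // Look up an existing key pair by label and use it as a crypto.Signer.
        signer, err := ctx.FindKeyPair(nil, []byte("example-key"))
        if err != nil {
            panic(err)
        }
        digest := sha256.Sum256([]byte("message"))
        sig, err := signer.Sign(rand.Reader, digest[:], crypto.SHA256)
        if err != nil {
            panic(err)
        }
        fmt.Printf("%x\n", sig)
    }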
Package dcrjson provides primitives for working with the Decred JSON-RPC API. When communicating via the JSON-RPC protocol, all of the commands need to be marshalled to and from the wire in the appropriate format. This package provides data structures and primitives to ease this process. It also provides additional features such as custom command registration, command categorization, and reflection-based help generation. This information is not necessary in order to use this package, but it does provide some intuition into what the marshalling and unmarshalling that is discussed below is doing under the hood. As defined by the JSON-RPC spec, there are effectively two forms of messages on the wire: Request Objects {"jsonrpc":"1.0","id":"SOMEID","method":"SOMEMETHOD","params":[SOMEPARAMS]} NOTE: Notifications are the same format except the id field is null. Response Objects {"result":SOMETHING,"error":null,"id":"SOMEID"} {"result":null,"error":{"code":SOMEINT,"message":SOMESTRING},"id":"SOMEID"} For requests, the params field can vary in what it contains depending on the method (a.k.a. command) being sent. Each parameter can be as simple as an int or a complex structure containing many nested fields. The id field is used to identify a request and will be included in the associated response. When working with asynchronous transports, such as websockets, spontaneous notifications are also possible. As indicated, they are the same as a request object, except they have the id field set to null. Therefore, servers will ignore requests with the id field set to null, while clients can choose to consume or ignore them. Unfortunately, the original Bitcoin JSON-RPC API (and hence anything compatible with it) doesn't always follow the spec and will sometimes return an error string in the result field with a null error for certain commands. However, for the most part, the error field will be set as described on failure. Based upon the discussion above, it should be easy to see how the types of this package map into the required parts of the protocol. To simplify the marshalling of the requests and responses, the MarshalCmd and MarshalResponse functions are provided. They return the raw bytes ready to be sent across the wire. Unmarshalling a received Request object is a two step process: This approach is used since it provides the caller with access to the additional fields in the request that are not part of the command such as the ID. Unmarshalling a received Response object is also a two step process: As above, this approach is used since it provides the caller with access to the fields in the response such as the ID and Error. This package provides two approaches for creating a new command. The first, and preferred, method is to use one of the New<Foo>Cmd functions. This allows static compile-time checking to help ensure the parameters stay in sync with the struct definitions. The second approach is the NewCmd function, which takes a method (command) name and variable arguments. The function includes full checking to ensure the parameters are accurate according to the provided method; however, these checks are, obviously, run-time, which means any mistakes won't be found until the code is actually executed. Nevertheless, it is quite useful for user-supplied commands that are intentionally dynamic. The command handling of this package is built around the concept of registered commands. 
This is true for the wide variety of commands already provided by the package, but it also means callers can easily provide custom commands with all of the same functionality as the built-in commands. Use the RegisterCmd function for this purpose. A list of all registered methods can be obtained with the RegisteredCmdMethods function. All registered commands are registered with flags that identify information such as whether the command applies to a chain server, wallet server, or is a notification along with the method name to use. These flags can be obtained with the MethodUsageFlags function, and the method can be obtained with the CmdMethod function. To facilitate providing consistent help to users of the RPC server, this package exposes the GenerateHelp function, which uses reflection on registered commands or notifications, as well as the provided expected result types, to generate the final help text. In addition, the MethodUsageText function is provided to generate consistent one-line usage for registered commands and notifications using reflection. There are two distinct types of errors supported by this package: The first category of errors (type Error) typically indicates a programmer error and can be avoided by properly using the API. Errors of this type will be returned from the various functions available in this package. They identify issues such as unsupported field types, attempts to register malformed commands, and attempting to create a new command with an improper number of parameters. The specific reason for the error can be detected by type asserting it to a *dcrjson.Error and accessing the ErrorCode field. The second category of errors (type RPCError), on the other hand, is useful for returning errors to RPC clients. Consequently, they are used in the previously described Response type. This example demonstrates how to unmarshal a JSON-RPC response and then unmarshal the result field in the response to a concrete type.
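A minimal sketch of that two step response unmarshalling. The envelope struct below is a local stand-in mirroring the wire format shown earlier; the package provides its own Response type for this purpose, but a local struct keeps the sketch self-contained using only the standard library.

    package main

    import (
        "encoding/json"
        "fmt"
    )

    // response mirrors the JSON-RPC response wire format shown above.
    type response struct {
        Result json.RawMessage `json:"result"`
        Error  *struct {
            Code    int    `json:"code"`
            Message string `json:"message"`
        } `json:"error"`
        ID interface{} `json:"id"`
    }

    func main() {
        raw := []byte(`{"result":477890,"error":null,"id":1}`)

        // Step 1: unmarshal the envelope, leaving the result untouched.
        var resp response
        if err := json.Unmarshal(raw, &resp); err != nil {
            panic(err)
        }
        if resp.Error != nil {
            panic(resp.Error.Message)
        }

        // Step 2: unmarshal the result field into the concrete type expected
        // for the method (getblockcount returns a number).
        var count int64
        if err := json.Unmarshal(resp.Result, &count); err != nil {
            panic(err)
        }
        fmt.Println(count)
    }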
Package websocket implements the WebSocket protocol defined in RFC 6455. The Conn type represents a WebSocket connection. A server application calls the Upgrader.Upgrade method from an HTTP request handler to get a *Conn: net/http valyala/fasthttp Call the connection's WriteMessage and ReadMessage methods to send and receive messages as a slice of bytes. This snippet of code shows how to echo messages using these methods: In above snippet of code, p is a []byte and messageType is an int with value websocket.BinaryMessage or websocket.TextMessage. An application can also send and receive messages using the io.WriteCloser and io.Reader interfaces. To send a message, call the connection NextWriter method to get an io.WriteCloser, write the message to the writer and close the writer when done. To receive a message, call the connection NextReader method to get an io.Reader and read until io.EOF is returned. This snippet shows how to echo messages using the NextWriter and NextReader methods: The WebSocket protocol distinguishes between text and binary data messages. Text messages are interpreted as UTF-8 encoded text. The interpretation of binary messages is left to the application. This package uses the TextMessage and BinaryMessage integer constants to identify the two data message types. The ReadMessage and NextReader methods return the type of the received message. The messageType argument to the WriteMessage and NextWriter methods specifies the type of a sent message. It is the application's responsibility to ensure that text messages are valid UTF-8 encoded text. The WebSocket protocol defines three types of control messages: close, ping and pong. Call the connection WriteControl, WriteMessage or NextWriter methods to send a control message to the peer. Connections handle received close messages by calling the handler function set with the SetCloseHandler method and by returning a *CloseError from the NextReader, ReadMessage or the message Read method. The default close handler sends a close message to the peer. Connections handle received ping messages by calling the handler function set with the SetPingHandler method. The default ping handler sends a pong message to the peer. Connections handle received pong messages by calling the handler function set with the SetPongHandler method. The default pong handler does nothing. If an application sends ping messages, then the application should set a pong handler to receive the corresponding pong. The control message handler functions are called from the NextReader, ReadMessage and message reader Read methods. The default close and ping handlers can block these methods for a short time when the handler writes to the connection. The application must read the connection to process close, ping and pong messages sent from the peer. If the application is not otherwise interested in messages from the peer, then the application should start a goroutine to read and discard messages from the peer. A simple example is: Connections support one concurrent reader and one concurrent writer. Applications are responsible for ensuring that no more than one goroutine calls the write methods (NextWriter, SetWriteDeadline, WriteMessage, WriteJSON, EnableWriteCompression, SetCompressionLevel) concurrently and that no more than one goroutine calls the read methods (NextReader, SetReadDeadline, ReadMessage, ReadJSON, SetPongHandler, SetPingHandler) concurrently. The Close and WriteControl methods can be called concurrently with all other methods. 
Web browsers allow Javascript applications to open a WebSocket connection to any host. It's up to the server to enforce an origin policy using the Origin request header sent by the browser. The Upgrader calls the function specified in the CheckOrigin field to check the origin. If the CheckOrigin function returns false, then the Upgrade method fails the WebSocket handshake with HTTP status 403. If the CheckOrigin field is nil, then the Upgrader uses a safe default: fail the handshake if the Origin request header is present and the Origin host is not equal to the Host request header. The deprecated package-level Upgrade function does not perform origin checking. The application is responsible for checking the Origin header before calling the Upgrade function. Connections buffer network input and output to reduce the number of system calls when reading or writing messages. Write buffers are also used for constructing WebSocket frames. See RFC 6455, Section 5 for a discussion of message framing. A WebSocket frame header is written to the network each time a write buffer is flushed to the network. Decreasing the size of the write buffer can increase the amount of framing overhead on the connection. The buffer sizes in bytes are specified by the ReadBufferSize and WriteBufferSize fields in the Dialer and Upgrader. The Dialer uses a default size of 4096 when a buffer size field is set to zero. The Upgrader reuses buffers created by the HTTP server when a buffer size field is set to zero. The HTTP server buffers have a size of 4096 at the time of this writing. The buffer sizes do not limit the size of a message that can be read or written by a connection. Buffers are held for the lifetime of the connection by default. If the Dialer or Upgrader WriteBufferPool field is set, then a connection holds the write buffer only when writing a message. Applications should tune the buffer sizes to balance memory use and performance. Increasing the buffer size uses more memory, but can reduce the number of system calls to read or write the network. In the case of writing, increasing the buffer size can reduce the number of frame headers written to the network. Some guidelines for setting buffer parameters are: Limit the buffer sizes to the maximum expected message size. Buffers larger than the largest message do not provide any benefit. Depending on the distribution of message sizes, setting the buffer size to a value less than the maximum expected message size can greatly reduce memory use with a small impact on performance. Here's an example: If 99% of the messages are smaller than 256 bytes and the maximum message size is 512 bytes, then a buffer size of 256 bytes will result in 1.01 more system calls than a buffer size of 512 bytes. The memory savings is 50%. A write buffer pool is useful when the application has a modest number writes over a large number of connections. when buffers are pooled, a larger buffer size has a reduced impact on total memory use and has the benefit of reducing system calls and frame overhead. Per message compression extensions (RFC 7692) are experimentally supported by this package in a limited capacity. Setting the EnableCompression option to true in Dialer or Upgrader will attempt to negotiate per message deflate support. If compression was successfully negotiated with the connection's peer, any message received in compressed form will be automatically decompressed. All Read methods will return uncompressed bytes. 
Per message compression of messages written to a connection can be enabled or disabled by calling the corresponding Conn method: Currently this package does not support compression with "context takeover". This means that messages must be compressed and decompressed in isolation, without retaining sliding window or dictionary state across messages. For more details refer to RFC 7692. Use of compression is experimental and may result in decreased performance.
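A short fragment showing the Conn methods referred to above. It assumes an established *websocket.Conn named conn and the compress/flate and log imports; it is illustrative rather than a complete program.

    // conn is an established *websocket.Conn.
    conn.EnableWriteCompression(true)
    if err := conn.SetCompressionLevel(flate.BestSpeed); err != nil {
        log.Println("set compression level:", err)
    }
    // ... write compressed messages ...
    conn.EnableWriteCompression(false) // later messages are written uncompressed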
Package dcrjson provides infrastructure for working with Decred JSON-RPC APIs. When communicating via the JSON-RPC protocol, all requests and responses must be marshalled to and from the wire in the appropriate format. This package provides infrastructure and primitives to ease this process. This information is not necessary in order to use this package, but it does provide some intuition into what the marshalling and unmarshalling that is discussed below is doing under the hood. As defined by the JSON-RPC spec, there are effectively two forms of messages on the wire: Request Objects {"jsonrpc":"1.0","id":"SOMEID","method":"SOMEMETHOD","params":[SOMEPARAMS]} NOTE: Notifications are the same format except the id field is null. Response Objects {"result":SOMETHING,"error":null,"id":"SOMEID"} {"result":null,"error":{"code":SOMEINT,"message":SOMESTRING},"id":"SOMEID"} For requests, the params field can vary in what it contains depending on the method (a.k.a. command) being sent. Each parameter can be as simple as an int or a complex structure containing many nested fields. The id field is used to identify a request and will be included in the associated response. When working with streamed RPC transports, such as websockets, spontaneous notifications are also possible. As indicated, they are the same as a request object, except they have the id field set to null. Therefore, servers will ignore requests with the id field set to null, while clients can choose to consume or ignore them. Unfortunately, the original Bitcoin JSON-RPC API (and hence anything compatible with it) doesn't always follow the spec and will sometimes return an error string in the result field with a null error for certain commands. However, for the most part, the error field will be set as described on failure. To simplify the marshalling of the requests and responses, the MarshalCmd and MarshalResponse functions are provided. They return the raw bytes ready to be sent across the wire. Unmarshalling a received Request object is a two step process: This approach is used since it provides the caller with access to the additional fields in the request that are not part of the command such as the ID. Unmarshalling a received Response object is also a two step process: As above, this approach is used since it provides the caller with access to the fields in the response such as the ID and Error. This package provides the NewCmd function which takes a method (command) name and variable arguments. The function includes full checking to ensure the parameters are accurate according to provided method, however these checks are, obviously, run-time which means any mistakes won't be found until the code is actually executed. However, it is quite useful for user-supplied commands that are intentionally dynamic. External packages can and should implement types implementing Command for use with MarshalCmd/ParseParams. The command handling of this package is built around the concept of registered commands. This is true for the wide variety of commands already provided by the package, but it also means caller can easily provide custom commands with all of the same functionality as the built-in commands. Use the RegisterCmd function for this purpose. A list of all registered methods can be obtained with the RegisteredCmdMethods function. All registered commands are registered with flags that identify information such as whether the command applies to a chain server, wallet server, or is a notification along with the method name to use. 
These flags can be obtained with the MethodUsageFlags function, and the method can be obtained with the CmdMethod function. To facilitate providing consistent help to users of the RPC server, this package exposes the GenerateHelp function, which uses reflection on registered commands or notifications to generate the final help text. In addition, the MethodUsageText function is provided to generate consistent one-line usage for registered commands and notifications using reflection. There are two distinct types of errors supported by this package: The first category of errors (type Error) typically indicates a programmer error and can be avoided by properly using the API. Errors of this type will be returned from the various functions available in this package. They identify issues such as unsupported field types, attempts to register malformed commands, and attempting to create a new command with an improper number of parameters. The specific reason for the error can be detected by type asserting it to a *dcrjson.Error and accessing the ErrorKind field. The second category of errors (type RPCError), on the other hand, is useful for returning errors to RPC clients. Consequently, they are used in the previously described Response type. This example demonstrates how to unmarshal a JSON-RPC response and then unmarshal the result field in the response to a concrete type.
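Before that example, here is a small fragment showing how the second error category can be picked out of a returned error. It assumes err came from a prior call and that the errors and fmt packages are imported.

    // err is the error returned from a prior call that may carry a server error.
    var rpcErr *dcrjson.RPCError
    if errors.As(err, &rpcErr) {
        fmt.Println("server error:", rpcErr.Code, rpcErr.Message)
    }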
Package rpcclient implements a websocket-enabled Decred JSON-RPC client. This client provides a robust and easy to use client for interfacing with a Decred RPC server that uses a mostly btcd/bitcoin core style Decred JSON-RPC API. This client has been tested with dcrd (https://github.com/decred/dcrd) and dcrwallet (https://github.com/decred/dcrwallet). In addition to the compatible standard HTTP POST JSON-RPC API, dcrd and dcrwallet provide a websocket interface that is more efficient than the standard HTTP POST method of accessing RPC. The section below discusses the differences between HTTP POST and websockets. By default, this client assumes the RPC server supports websockets and has TLS enabled. In practice, this currently means it assumes you are talking to dcrd or dcrwallet by default. However, configuration options are provided to fall back to HTTP POST and disable TLS to support talking with inferior bitcoin core style RPC servers. In HTTP POST-based JSON-RPC, every request creates a new HTTP connection, issues the call, waits for the response, and closes the connection. This adds quite a bit of overhead to every call and lacks flexibility for features such as notifications. In contrast, the websocket-based JSON-RPC interface provided by dcrd and dcrwallet only uses a single connection that remains open and allows asynchronous bi-directional communication. The websocket interface supports all of the same commands as HTTP POST, but they can be invoked without having to go through a connect/disconnect cycle for every call. In addition, the websocket interface provides other nice features such as the ability to register for asynchronous notifications of various events. The client provides both a synchronous (blocking) and asynchronous API. The synchronous (blocking) API is typically sufficient for most use cases. It works by issuing the RPC and blocking until the response is received. This allows straightforward code where you have the response as soon as the function returns. The asynchronous API works on the concept of futures. When you invoke the async version of a command, it will quickly return an instance of a type that promises to provide the result of the RPC at some future time. In the background, the RPC call is issued and the result is stored in the returned instance. Invoking the Receive method on the returned instance will either return the result immediately if it has already arrived, or block until it has. This is useful since it provides the caller with greater control over concurrency. The first important part of notifications is to realize that they will only work when connected via websockets. This should intuitively make sense because HTTP POST mode does not keep a connection open! All notifications provided by dcrd require registration to opt-in. For example, if you want to be notified when funds are received by a set of addresses, you register the addresses via the NotifyReceived (or NotifyReceivedAsync) function. Notifications are exposed by the client through the use of callback handlers which are setup via a NotificationHandlers instance that is specified by the caller when creating the client. It is important that these notification handlers complete quickly since they are intentionally in the main read loop and will block further reads until they complete. This provides the caller with the flexibility to decide what to do when notifications are coming in faster than they are being handled. 
In particular this means issuing a blocking RPC call from a callback handler will cause a deadlock as more server responses won't be read until the callback returns, but the callback would be waiting for a response. Thus, any additional RPCs must be issued in a completely decoupled manner. By default, when running in websockets mode, this client will automatically keep trying to reconnect to the RPC server should the connection be lost. There is a back-off in between each connection attempt until it reaches one try per minute. Once a connection is re-established, all previously registered notifications are automatically re-registered and any in-flight commands are re-issued. This means from the caller's perspective, the request simply takes longer to complete. The caller may invoke the Shutdown method on the client to force the client to cease reconnect attempts and return ErrClientShutdown for all outstanding commands. The automatic reconnection can be disabled by setting the DisableAutoReconnect flag to true in the connection config when creating the client. Minor RPC Server Differences and Chain/Wallet Separation Some of the commands are extensions specific to a particular RPC server. For example, the DebugLevel call is an extension only provided by dcrd (and dcrwallet passthrough). Therefore if you call one of these commands against an RPC server that doesn't provide them, you will get an unimplemented error from the server. An effort has been made to call out which commands are extensions in their documentation. Also, it is important to realize that dcrd intentionally separates the wallet functionality into a separate process named dcrwallet. This means if you are connected to the dcrd RPC server directly, only the RPCs which are related to chain services will be available. Depending on your application, you might only need chain-related RPCs. In contrast, dcrwallet provides pass-through treatment for chain-related RPCs, so it supports them in addition to wallet-related RPCs. There are three categories of errors that will be returned throughout this package: The first category of errors is typically one of ErrInvalidAuth, ErrInvalidEndpoint, ErrClientDisconnect, or ErrClientShutdown. NOTE: The ErrClientDisconnect will not be returned unless the DisableAutoReconnect flag is set since the client automatically handles reconnect by default as previously described. The second category of errors typically indicates a programmer error and as such the type can vary, but usually will be best handled by simply showing/logging it. The third category of errors, that is errors returned by the server, can be detected by type asserting the error to a *dcrjson.RPCError. For example, to detect if a command is unimplemented by the remote RPC server: The following full-blown client examples are in the examples directory:
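Pending those full examples, the following is a compact sketch of creating a websocket client and issuing a blocking call. The host, credentials, and certificate path are placeholders, and the module path and method signatures vary across rpcclient major versions (recent dcrd releases also thread a context.Context through each call); this uses the older context-free form, so adjust to the version you depend on.

    package main

    import (
        "fmt"
        "log"
        "os"

        "github.com/decred/dcrd/rpcclient"
    )

    func main() {
        // Load the RPC server's TLS certificate.
        certs, err := os.ReadFile("rpc.cert")
        if err != nil {
            log.Fatal(err)
        }
        connCfg := &rpcclient.ConnConfig{
            Host:         "localhost:9109",
            Endpoint:     "ws", // websocket mode
            User:         "rpcuser",
            Pass:         "rpcpass",
            Certificates: certs,
        }
        // Notification handlers are optional; pass nil when not needed.
        client, err := rpcclient.New(connCfg, nil)
        if err != nil {
            log.Fatal(err)
        }
        defer client.Shutdown()

        // Synchronous (blocking) call.
        count, err := client.GetBlockCount()
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println("block count:", count)
    }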
Package grocksdb provides the ability to create and access RocksDB databases. grocksdb.OpenDb opens and creates databases. The DB struct returned by OpenDb provides DB.Get, DB.Put, DB.Merge and DB.Delete to modify and query the database. For bulk reads, use an Iterator. If you want to avoid disturbing your live traffic while doing the bulk read, be sure to call SetFillCache(false) on the ReadOptions you use when creating the Iterator. Batched, atomic writes can be performed with a WriteBatch and DB.Write. If your working dataset does not fit in memory, you'll want to add a bloom filter to your database. NewBloomFilter and BlockBasedTableOptions.SetFilterPolicy are what you want. The argument to NewBloomFilter is the number of bits per key to use in the filter. If you're using a custom comparator in your code, be aware you may have to make your own filter policy object. This documentation is not a complete discussion of RocksDB. Please read the RocksDB documentation <http://rocksdb.org/> for information on its operation. You'll find lots of goodies there.
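A minimal sketch tying the pieces above together: a bloom filter attached via the block-based table options, a basic Put/Get, and a bulk read with an Iterator that skips the block cache. The database path is a placeholder.

    package main

    import (
        "fmt"

        "github.com/linxGnu/grocksdb"
    )

    func main() {
        // Attach a bloom filter for point lookups on datasets larger than memory.
        bbto := grocksdb.NewDefaultBlockBasedTableOptions()
        bbto.SetFilterPolicy(grocksdb.NewBloomFilter(10)) // ~10 bits per key
        opts := grocksdb.NewDefaultOptions()
        opts.SetBlockBasedTableFactory(bbto)
        opts.SetCreateIfMissing(true)

        db, err := grocksdb.OpenDb(opts, "/tmp/grocksdb-example")
        if err != nil {
            panic(err)
        }
        defer db.Close()

        wo := grocksdb.NewDefaultWriteOptions()
        ro := grocksdb.NewDefaultReadOptions()
        // Avoid polluting the block cache during bulk reads.
        ro.SetFillCache(false)

        if err := db.Put(wo, []byte("key"), []byte("value")); err != nil {
            panic(err)
        }
        val, err := db.Get(ro, []byte("key"))
        if err != nil {
            panic(err)
        }
        defer val.Free()
        fmt.Println(string(val.Data()))

        // Iterate over the full key space.
        it := db.NewIterator(ro)
        defer it.Close()
        for it.SeekToFirst(); it.Valid(); it.Next() {
            fmt.Printf("%s => %s\n", it.Key().Data(), it.Value().Data())
        }
    }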
Package types implements concrete types for marshalling to and from the dcrd JSON-RPC commands, return values, and notifications. When communicating via the JSON-RPC protocol, all requests and responses must be marshalled to and from the wire in the appropriate format. This package provides data structures and primitives that are registered with dcrjson to ease this process. An overview specific to this package is provided here, however it is also instructive to read the documentation for the dcrjson package (https://pkg.go.dev/github.com/decred/dcrd/dcrjson/v4). The types in this package map to the required parts of the protocol as discussed in the dcrjson documentation. To simplify the marshalling of the requests and responses, the dcrjson.MarshalCmd and dcrjson.MarshalResponse functions may be used. They return the raw bytes ready to be sent across the wire. Unmarshalling a received Request object is a two step process: This approach is used since it provides the caller with access to the additional fields in the request that are not part of the command, such as the ID. Unmarshalling a received Response object is also a two step process: As above, this approach is used since it provides the caller with access to the fields in the response, such as the ID and Error. This package provides two approaches for creating a new command. The first, and preferred, method is to use one of the New<Foo>Cmd functions. This allows static compile-time checking to help ensure the parameters stay in sync with the struct definitions. The second approach is the dcrjson.NewCmd function, which takes a method (command) name and variable arguments. Since this package registers all of its types with dcrjson, the function will recognize them and include full checking to ensure the parameters are accurate according to the provided method; however, these checks are, obviously, run-time, which means any mistakes won't be found until the code is actually executed. Nevertheless, it is quite useful for user-supplied commands that are intentionally dynamic. To facilitate providing consistent help to users of the RPC server, the dcrjson package exposes the GenerateHelp function, which uses reflection on commands and notifications registered by this package, as well as the provided expected result types, to generate the final help text. In addition, the dcrjson.MethodUsageText function may be used to generate consistent one-line usage for registered commands and notifications using reflection.
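A short sketch of the result side: unmarshalling the result field of a response into one of the concrete result types from this package. The module path and the GetBestBlockResult type name are assumptions about the v4-era API, and the hash and height values are made up for illustration.

    package main

    import (
        "encoding/json"
        "fmt"

        "github.com/decred/dcrd/rpc/jsonrpc/types/v4"
    )

    func main() {
        // The result portion of a getbestblock response, already split out of
        // the JSON-RPC envelope by the first unmarshalling step.
        result := []byte(`{"hash":"00000000000000000123abc","height":477890}`)

        var best types.GetBestBlockResult
        if err := json.Unmarshal(result, &best); err != nil {
            panic(err)
        }
        fmt.Println(best.Hash, best.Height)
    }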