Package metrics provides minimalist instrumentation for your applications in the form of counters and gauges. A counter is a monotonically-increasing, unsigned, 64-bit integer used to represent the number of times an event has occurred. By tracking the deltas between measurements of a counter over intervals of time, an aggregation layer can derive rates, acceleration, etc. A gauge returns instantaneous measurements of something using signed, 64-bit integers. This value does not need to be monotonic. A histogram tracks the distribution of a stream of values (e.g. the number of milliseconds it takes to handle requests), adding gauges for the values at meaningful quantiles: 50th, 75th, 90th, 95th, 99th, 99.9th. Measurements from counters and gauges are available as expvars. Your service should return its expvars from an HTTP endpoint (i.e., /debug/vars) as a JSON object.
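The counter/gauge/expvar pattern can be sketched with the standard library's expvar package alone (this package's own API may differ):

	package main

	import (
		"expvar"
		"net/http"
	)

	var (
		requests   = expvar.NewInt("requests")    // counter: monotonically increasing
		queueDepth = expvar.NewInt("queue_depth") // gauge: instantaneous, may go down
	)

	func main() {
		http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
			requests.Add(1)
			queueDepth.Set(42) // illustrative instantaneous value
			w.Write([]byte("ok"))
		})
		// Importing expvar registers /debug/vars on the default mux, which
		// serves every published variable as one JSON object.
		http.ListenAndServe(":8080", nil)
	}

An aggregation layer can then poll /debug/vars periodically and derive rates from the counter deltas.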
Package hamming implements Hamming distance and population count calculations in Go. https://github.com/steakknife/hamming

For functions named CountBits.+s?, the plural forms operate on slices. The CountBits.+ forms are population count only, whereas the bare-type forms are the Hamming distance (number of differing bits) between two values. Optimized assembly .+PopCnt forms are available on amd64 and operate just like the regular forms, but you must check and guard on HasPopCnt() before calling any .+PopCnt function. Got rune? Use int32. Got uint8? Use byte.

MIT license
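A sketch of the guarded usage described above (exact function names follow the package's naming convention and should be treated as assumptions):

	package main

	import (
		"fmt"

		"github.com/steakknife/hamming"
	)

	func main() {
		// Population count (number of set bits) of a single value.
		fmt.Println(hamming.CountBitsUint64(0xFF)) // 8

		// Hamming distance (number of differing bits) between two values.
		fmt.Println(hamming.Uint64(0x0F, 0xF0)) // 8

		// Guard on HasPopCnt() before calling any ...PopCnt form.
		if hamming.HasPopCnt() {
			fmt.Println(hamming.CountBitsUint64PopCnt(0xFF)) // 8, via POPCNT
		}
	}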
Package envparse is a minimal environment variable parser. It handles empty lines, comments, single quotes, double quotes, and JSON escape sequences. Non-empty, non-comment lines should be of the form:

	KEY=VALUE

While extraneous characters are discouraged, an "export" prefix, preceding whitespace, and trailing whitespace are all removed.
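A minimal usage sketch, assuming a Parse function over an io.Reader as in hashicorp/go-envparse:

	package main

	import (
		"fmt"
		"strings"

		envparse "github.com/hashicorp/go-envparse"
	)

	func main() {
		src := strings.NewReader("# comments and empty lines are skipped\nexport FOO=\"bar\"\n")
		env, err := envparse.Parse(src) // returns map[string]string
		if err != nil {
			panic(err)
		}
		fmt.Println(env["FOO"]) // bar
	}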
Package hamt provides a reference implementation of the IPLD HAMT used in the Filecoin blockchain. It includes some optional flexibility such that it may be used for other purposes outside of Filecoin.

HAMT is a "hash array mapped trie" https://en.wikipedia.org/wiki/Hash_array_mapped_trie. This implementation extends the standard form by including buckets for the key/value pairs at storage leaves and CHAMP mutation semantics https://michael.steindorfer.name/publications/oopsla15.pdf. The CHAMP invariant and mutation rules provide us with the ability to maintain canonical forms given any set of keys and their values, regardless of insertion order and intermediate data insertion and deletion. Therefore, for any given set of keys and their values, a HAMT using the same parameters and CHAMP semantics should always produce the same root node, and hence the same content identifier (CID).

The HAMT algorithm hashes incoming keys and uses incrementing subsections of that hash digest at each level of its tree structure to determine the placement of either the entry or a link to a child node of the tree. A `bitWidth` determines the number of bits of the hash to use for index calculation at each level of the tree, such that the root node takes the first `bitWidth` bits of the hash to calculate an index, and as we move lower in the tree, we move along the hash by `depth x bitWidth` bits. In this way, a sufficiently randomizing hash function will generate a hash that provides a new index at each level of the data structure. An index comprising `bitWidth` bits will generate index values of `[ 0, 2^bitWidth )`, so a `bitWidth` of 8 will generate indexes of 0 to 255 inclusive.

Each node in the tree can therefore hold up to `2^bitWidth` elements of data, which we store in an array. In this HAMT and the IPLD HashMap we store entries in buckets. For a `Set(key, value)` mutation where the index generated at the root node for the hash of the key denotes an array index that does not yet contain an entry, we create a new bucket and insert the key/value pair entry. In this way, a single node can theoretically hold up to `2^bitWidth x bucketSize` entries, where `bucketSize` is the maximum number of elements a bucket is allowed to contain ("collisions"). In practice, indexes do not distribute with perfect randomness, so this maximum is theoretical. Entries stored in the node's buckets are stored in key-sorted order.

This HAMT implementation:

  • Fixes the `bucketSize` to 3.
  • Defaults the `bitWidth` to 8; within Filecoin it uses 5.
  • Defaults the hash algorithm to the 64-bit variant of Murmur3-x64.

The algorithm used here is identical to that of the IPLD HashMap algorithm specified at https://github.com/ipld/specs/blob/master/data-structures/hashmap.md. The specific parameters used by Filecoin and the DAG-CBOR block layout differ from the specification and are defined at https://github.com/ipld/specs/blob/master/data-structures/hashmap.md#Appendix-Filecoin-hamt-variant.
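As an illustration of the indexing scheme described above (not code from this package), a sketch of extracting the `bitWidth`-bit index at a given depth from a hash digest:

	package main

	import "fmt"

	// index returns the array index used at a given depth of the tree,
	// reading bitWidth bits of the digest starting at bit depth*bitWidth,
	// most-significant bit first. A real implementation must also handle
	// exhausting the digest (maximum depth).
	func index(digest []byte, depth, bitWidth int) int {
		idx := 0
		for i := depth * bitWidth; i < (depth+1)*bitWidth; i++ {
			bit := (digest[i/8] >> uint(7-i%8)) & 1
			idx = idx<<1 | int(bit)
		}
		return idx
	}

	func main() {
		digest := []byte{0xAB, 0xCD}       // stand-in for a Murmur3-x64 digest
		fmt.Println(index(digest, 0, 8))   // 171 (0xAB): the first 8 bits
		fmt.Println(index(digest, 1, 8))   // 205 (0xCD): the next 8 bits
	}

With the default `bitWidth` of 8, depth 0 simply yields digest[0], an index in [0, 256).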
Package pango is a golang cross version mechanism for interacting with Palo Alto Networks devices (including physical and virtualized Next-generation Firewalls and Panorama). Versioning support is in place for PAN-OS 6.1 and up.

To start, create a client connection with the desired parameters and then initialize the connection:

Initializing the connection creates the API key (if it was not already specified), then performs "show system info" to get the PAN-OS version. Once the firewall client is created, you can query and configure the Palo Alto Networks device from the functions inside the various namespaces of the client connection. Namespaces correspond to the various configuration areas available in the GUI. For example:

Generally speaking, there are the following functions inside each namespace:

These functions correspond with PAN-OS Get, Show, Set, Edit, and Delete API calls. Get(), Set(), and Edit() take and return normalized, version independent objects. These version safe objects are typically named Entry, which corresponds to how the object is placed in the PAN-OS XPATH.

Some Entry objects have a special function, Defaults(). Invoking this function will initialize the object with some default values. Each Entry that implements Defaults() calls out in its documentation what parameters are affected by this, and what the defaults are.

For any version safe object, attempting to configure a parameter that your PAN-OS doesn't support will be safely ignored in the resultant XML sent to the firewall / Panorama.

A PAN-OS configuration can be loaded from a PAN-OS device using `RetrievePanosConfig()` to pull it from a live device, or `LoadPanosConfig()` if it is already in local memory. Once it's been loaded, use `FromPanosConfig()` for singletons and `AllFromPanosConfig()` for slices of normalized objects from the loaded config. You can also use this file load and config retrieval to do offline inspection of the config; just make sure to set `pango.Client.Version` to the appropriate PAN-OS version so the version normalization can take place.

The PAN-OS XML API Edit command can be used both to create and to update existing config; however, it can also truncate config for the given XPATH. Due to this, if you want to use Edit(), you need to make sure that you perform either a Get() or a Show() first, make your modification, then invoke Edit() using that object. If you don't do this, you will truncate any sub config.

To learn more about the PAN-OS XML API, please refer to the Palo Alto Networks API documentation.

Functions such as `pango.Client.Set`, `pango.Client.Edit`, and `pango.Client.Delete` take a parameter named `path`. This path can be either a fully formed XPATH as a string or a list of strings such as `[]string{"config", "shared", "address"}`. The vast majority of namespaces give their paths as a list of strings, as the XPATH often needs to be tweaked depending on SET vs EDIT, single objects vs multiple objects, etc., so handling path updates is easier this way.

Example_createAddressGroup is a Panorama example of how to create/delete a security policy with the associated address group and addresses.

ExampleCreateInterface demonstrates how to use pango to create an interface if the interface is not already configured.

ExamplePanosInfo outputs various info about a PAN-OS device as JSON.
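A minimal connection sketch per the description above (hostname and credentials are placeholders; the Firewall/Client struct shape is as I recall from the package and should be treated as an assumption):

	package main

	import (
		"log"

		"github.com/PaloAltoNetworks/pango"
	)

	func main() {
		fw := &pango.Firewall{Client: pango.Client{
			Hostname: "192.0.2.1", // placeholder
			Username: "admin",
			Password: "secret",
		}}
		// Initialize generates the API key (if not given) and runs
		// "show system info" to learn the PAN-OS version.
		if err := fw.Initialize(); err != nil {
			log.Fatalf("connect failed: %s", err)
		}
		log.Printf("connected to PAN-OS %v", fw.Version)
	}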
Golang Gin startup project, forked from RealWorld https://realworld.io. This project includes CRUD for objects and their relationships; through it you will learn how to write a golang/gin app that is small but perfectly formed.
Package tmplfunc provides an extension of Go templates in which templates can be invoked as if they were functions. For example, after parsing a template definition, this package installs a function named link allowing the template to be invoked directly, instead of via the longer {{template}} form (assuming an appropriate function named dict).

The function installed for a given template depends on the name of the defined template, which can include not just a function name but also a list of parameter names. The function name and parameter names must consist only of letters, digits, and underscores, with a leading non-digit.

If there is no parameter list, then the function is expected to take at most one argument, made available in the template body as "." (dot). If such a function is called with no arguments, dot will be a nil interface value.

If there is a parameter list, then the function requires an argument for each parameter, except for optional and variadic parameters, explained below. Inside the template, the top-level value "." is a map[string]interface{} in which each parameter name is mapped to the corresponding argument value. A parameter x can therefore be accessed as {{(index . "x")}} or, more concisely, {{.x}}.

The first special case in parameter handling is that a parameter can be made optional by adding a "?" suffix after its name. If the argument list ends before that parameter, the corresponding map entry will be present and set to a nil value. The second special case is that a parameter can be made variadic by adding a "..." suffix after its name. The corresponding map entry contains a []interface{} holding the zero or more arguments corresponding to that parameter.

In the parameter list, required parameters must precede optional parameters, which must in turn precede any variadic parameter. For example, we can revise the link template given earlier to make the link text optional, substituting the URL when the text is omitted:

This package is meant to be used with templates from either the text/template or html/template packages. Given a *template.Template variable t, substitute:

Parse, ParseFiles, and ParseGlob parse the new templates but also add functions that invoke them, named according to the function signatures. Templates can only invoke functions for templates that have already been defined or that are being defined in the same Parse, ParseFiles, or ParseGlob call. For example, templates in two files x.tmpl and y.tmpl can call each other only if ParseFiles or ParseGlob is used to parse both files in a single call. Otherwise, the parsing of the first file will report that calls to templates in the second file are calling unknown functions.

When used with the html/template package, all function-invoked template calls are treated as invoking templates producing HTML. In order to use a template that produces some other kind of text fragment, the template must be invoked directly using the {{template "name"}} form, not as a function call.
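A runnable sketch of the optional-parameter link template described above, assuming the rsc.io/tmplfunc import path:

	package main

	import (
		"os"
		"text/template"

		"rsc.io/tmplfunc"
	)

	func main() {
		t := template.New("main")
		// "link url text?" defines a function named link with a required
		// url parameter and an optional text parameter.
		err := tmplfunc.Parse(t, `{{define "link url text?"}}<a href="{{.url}}">{{if .text}}{{.text}}{{else}}{{.url}}{{end}}</a>{{end}}{{link "https://go.dev" "the Go site"}}
	{{link "https://go.dev"}}
	`)
		if err != nil {
			panic(err)
		}
		t.Execute(os.Stdout, nil)
	}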
Package gormstore is a GORM backend for gorilla sessions Simplest form: All options: If you want periodic cleanup of expired sessions: For more information about the keys see https://github.com/gorilla/securecookie For API to use in HTTP handlers see https://github.com/gorilla/sessions
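A sketch of the simplest form plus periodic cleanup (import paths and the New/PeriodicCleanup signatures are as I recall from wader/gormstore; treat them as assumptions):

	package main

	import (
		"log"
		"time"

		"github.com/wader/gormstore/v2"
		"gorm.io/driver/sqlite"
		"gorm.io/gorm"
	)

	func main() {
		db, err := gorm.Open(sqlite.Open("sessions.db"), &gorm.Config{})
		if err != nil {
			log.Fatal(err)
		}
		// Keys are securecookie key pairs; see the gorilla/securecookie docs.
		store := gormstore.New(db, []byte("session-secret"))

		// In an HTTP handler, use the gorilla/sessions API:
		//   session, _ := store.Get(r, "session-name")
		//   session.Values["user"] = "gopher"
		//   session.Save(r, w)

		// Optional: periodically delete expired sessions until quit is closed.
		quit := make(chan struct{})
		go store.PeriodicCleanup(time.Hour, quit)

		// ... register handlers and run your HTTP server here ...
	}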
Package conf provides tools for easily loading program configurations from multiple sources such as command line arguments, the environment, or a configuration file.

Most applications only need to use the Load function to get their settings loaded into an object. By default, Load will read from a configurable file defined by the -config-file command line argument, load values present in the environment, and finally load the program arguments.

The object in which the configuration is loaded must be a struct; the names and types of its fields are introspected by the Load function to understand how to load the configuration. The name deduction from the struct field obeys the same rules as those implemented by the standard encoding/json package, which means the program can set the "conf" tag to override the default field names in the command line arguments and configuration file.

A "help" tag may also be set on the fields of the configuration object to add documentation to the setting, which will be shown when the program is asked to print its help.

When values are loaded from the environment, the Load function looks for variables matching the struct field names in snake-upper-case form.
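A sketch of the Load pattern (the conf.Load call is as I recall from segmentio/conf and should be treated as an assumption; tags follow the text above):

	package main

	import (
		"fmt"
		"time"

		"github.com/segmentio/conf"
	)

	type config struct {
		Addr    string        `conf:"addr"    help:"Address to listen on"`
		Timeout time.Duration `conf:"timeout" help:"Per-request timeout"`
	}

	func main() {
		cfg := config{ // field values double as defaults
			Addr:    ":8080",
			Timeout: 10 * time.Second,
		}
		// Loads -config-file, then the environment (e.g. ADDR, TIMEOUT in
		// snake-upper-case form), then the command line arguments.
		conf.Load(&cfg)
		fmt.Printf("%+v\n", cfg)
	}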
Package math32 provides basic constants and mathematical functions for float32 types.

At its core, it's mostly just a wrapper in the form of float32(math.XXX). This applies to the following functions:

Everything else is a float32 implementation. The implementation schedule is sporadic and uncertain, but eventually all functions will be replaced.
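The wrapper form looks like this (an illustration of the pattern, not the package's source):

	package math32

	import "math"

	// Sqrt returns the square root of x, delegating to the float64
	// implementation in the standard math package: the float32(math.XXX)
	// wrapper form described above.
	func Sqrt(x float32) float32 { return float32(math.Sqrt(float64(x))) }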
Package ccgo translates C to Go source code.

This v3 package is obsolete. Please use current ccgo/v4:

Invocation

2021-12-23: v3.13.0 adds clang support.

To compile the resulting Go programs the package modernc.org/libc has to be installed.

CCGO_CPP selects which command is used by the C front end to obtain the target configuration. Defaults to `cpp`. Ignored when --load-config <path> is used.

TARGET_GOARCH selects the GOARCH of the resulting Go code. Defaults to $GOARCH or runtime.GOARCH if $GOARCH is not set. Ignored when --load-config <path> is used.

TARGET_GOOS selects the GOOS of the resulting Go code. Defaults to $GOOS or runtime.GOOS if $GOOS is not set. Ignored when --load-config <path> is used.

To compile for the host, invoke something like

To cross compile, set TARGET_GOARCH and/or TARGET_GOOS, not GOARCH/GOOS. Cross compiling depends on the availability of C stdlib headers for the target platform as well as on the set of predefined macros for the target platform. For example, to cross compile on a Linux host, targeting windows/amd64, it's necessary to have mingw64 installed in $PATH. Then invoke something like

Only files with extension .c, .h or .json are recognized as input files. A .json file is interpreted as a compile database. All other command line arguments following the .json file are interpreted as items that should be found in the database and included in the output file. Each item should be an object file (.o), a static archive (.a), or a command (no extension).

Command line options requiring an argument:

-Dfoo
	Equals `#define foo 1`.

-Dfoo=bar
	Equals `#define foo bar`.

-Ipath
	Add path to the list of include files search path. The option is a capital letter I (India), not a lowercase letter l (Lima).

-limport-path
	The package at <import-path> must have been produced without using the -nocapi option, ie. the package must have a proper capi_$GOOS_$GOARCH.go file. The option is a lowercase letter l (Lima), not a capital letter I (India).

-Ufoo
	Equals `#undef foo`.

-compiledb name
	When this option appears anywhere, most preceding options are ignored and all following command line arguments are interpreted as a command with arguments that will be executed to produce the compilation database. For example: This will execute `make -DFOO -w` and attempt to extract the compile and archive commands. Only POSIX operating systems are supported. The supported build system must output information about entering directories that is compatible with GNU make. The only compilers supported are `gcc` and `clang`. The only archiver supported is `ar`. Format specification: https://clang.llvm.org/docs/JSONCompilationDatabase.html Note: This option also produces information about libraries created with `ar cr` and includes it in the json file, which is beyond the specification.

-crt-import-path path
	Unless disabled by the -nostdlib option, every produced Go file imports the C runtime library. Default is `modernc.org/libc`.

-export-defines ""
	Export C numeric/string defines as Go constants by capitalizing the first letter of the define's name.

-export-defines prefix
	Export C numeric/string defines as Go constants by prefixing the define's name with `prefix`. Name conflicts are resolved by adding a numeric suffix.

-export-enums ""
	Export C enum constants as Go constants by capitalizing the first letter of the enum constant name.

-export-enums prefix
	Export C enum constants as Go constants by prefixing the enum constant name with `prefix`. Name conflicts are resolved by adding a numeric suffix.
-export-externs "" Export C extern definitions as Go definitions by capitalizing the first letter of the definition name. -export-externs prefix Export C extern definitions as Go definitions by prefixing the definition name with `prefix`. Name conflicts are resolved by adding a numeric suffix. -export-fields "" Export C struct fields as Go fields by capitalizing the first letter of the field name. -export-fields prefix Export C struct fields as Go fields by prefixing the field name with `prefix`. Name conflicts are resolved by adding a numeric suffix. -export-structs "" Export tagged C struct/union types as Go types by capitalizing the first letter of the tag name. -export-structs prefix Export tagged C struct/union types as Go types by prefixing the tag name with `prefix`. Name conflicts are resolved by adding a numeric suffix. -export-typedefs "" Export C typedefs as Go types by capitalizing the first letter of the typedef name. -export-structs prefix Export C typedefs as as Go types by prefixing the typedef name with `prefix`. Name conflicts are resolved by adding a numeric suffix. -static-locals-prefix prefix Prefix C static local declarators names with 'prefix'. -host-config-cmd command This option has the same effect as setting `CCGO_CPP=command`. -host-config-opts comma-separated-list The separated items of the list are added to the invocation of the configuration command. -pkgname name Set the resulting Go package name to 'name'. Defaults to `main`. -script filename Ccgo does not yet have a concept of object files. All C files that are needed for producing the resulting Go file have to be compiled together and "linked" in memory. There are some problems with this approach, one of them is the situation when foo.c has to be compiled using, for example `-Dbar=42` and "linked" with baz.c that needs to be compiled with `-Dbar=314`. Or `bar` must not defined at all for baz.c, etc. A script in a named file is a CSV file. It is opened like this (error handling omitted): The first field of every record in the CSV file is the directory to use. The remaining fields are the arguments of the ccgo command. This way different C files can be translated using different options. The CSV file may look something like: -volatile comma-separated-list The separated items of the list are added to the list of file scope extern variables the will be accessed atomically, like if their C declarator used the 'volatile' type specifier. Currently only C scalar types of size 4 and 8 bytes are supported. Other types/sizes will ignore both the volatile specifier and the -volatile option. -save-config path This option copies every header included during compilation or compile database generation to a file under the path argument. Additionally the host configuration, ie. predefined macros, include search paths, os and architecture is stored in path/config.json. When this option is used, no Go code is generated, meaning no link phase occurs and thus the memory consumption should stay low. Passing an empty string as an argument of -save-config is the same as if the option is not present at all. Possibly useful when the option set is generated in code. This option is ignored when -compiledb <path> is used. --load-config path Note that this option must have the double dash prefix to distinguish it from -lfoo, the [traditional] short form of `-l foo`. This option configures the compiler using path/config.json. The include paths are adjusted to be relative to path. 
For example: Assume on machine A the default C preprocessor reports a system include search path "/usr/include". Running ccgo on A with -save-config /tmp/foo to compile foo.c that #includes <stdlib.h>, which is found in /usr/include/stdlib.h on the host, results in

Assume /tmp/foo from machine A will be recursively copied to machine B, which may run a different operating system and/or architecture. Let the copy be, for example, in /tmp/bar. Using --load-config /tmp/bar will instruct ccgo to configure its preprocessor with a system include path /tmp/bar/usr/include and thus use the original machine A stdlib.h found there. When --load-config is used, no host configuration from a machine B cross C preprocessor/compiler is needed to transpile the foo.c source on machine B as if the compiler were running on machine A.

The particular usefulness of this mechanism is for transpiling big projects for 32 bit architectures. There, the lack of an object format in ccgo, and thus the need to link everything in RAM, can require too much memory for the system to handle. The way around this is possibly to run something like

on machine A, transfer path/* to machine B and run the link phase there with eg.

Note that the C sources for the project must be in the same path on both machines because the compile database stores absolute paths. It might be convenient to put the sources in path/src and the config in path/config, for example, and transfer the [archive of] path/ to the same directory on the second machine. That also solves the issue when ./configure generates files and the result differs per operating system or architecture.

Passing an empty string as an argument of -load-config is the same as if the option is not present at all. Possibly useful when the option set is generated in code.

These command line options don't take arguments:

-E
	When this option is present the compiler does not produce any Go files and instead prints the preprocessor output to stdout.

-all-errors
	Normally only the first 10 or so errors are shown. With this option the compiler will show all errors.

-header
	Using this option suppresses producing any function definitions. This is possibly useful for producing Go files from C header files. Include function signatures with -func-sig.

-func-sig
	Add this option to include function signatures when compiling headers (using -header).

-nostdinc
	This option disables the default C include search paths.

-nostdlib
	This option disables importing of the runtime library by the resulting Go code.

-trace-pinning
	This option will print the positions and names of local declarators that are being pinned.

-version
	Ignore all other options, print version and exit.

-verbose-compiledb
	Enable verbose output when -compiledb is present.

-ignore-undefined
	This option tells the linker to not insist on finding definitions for declarators that are not implicitly declared and used - but not defined. This might be useful when the intent is to define the missing functions in Go manually. Name conflict resolution for such declarator names may or may not be applied.

-ignore-unsupported-alignment
	This option tells the compiler to not complain about alignments that Go cannot support.

-trace-included-files
	This option outputs the path names of all included files. This option is ignored when -compiledb <path> is used.

There may exist other options not listed above. Those should be considered temporary and/or unsupported and may be removed without notice.
Alternatively, they may eventually get promoted to "documented" options.
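The -script option above says its CSV file "is opened like this (error handling omitted)", but the snippet was lost; a plausible sketch using the standard encoding/csv package (the FieldsPerRecord setting is an assumption):

	package main

	import (
		"encoding/csv"
		"io"
		"log"
		"os"
	)

	func main() {
		f, err := os.Open("script.csv")
		if err != nil {
			log.Fatal(err)
		}
		defer f.Close()

		r := csv.NewReader(f)
		r.FieldsPerRecord = -1 // records may have different field counts

		for {
			rec, err := r.Read()
			if err == io.EOF {
				break
			}
			if err != nil {
				log.Fatal(err)
			}
			dir, args := rec[0], rec[1:] // working directory, then ccgo arguments
			log.Printf("cd %s && ccgo %v", dir, args)
		}
	}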
Package asn1 implements encoding and decoding of ASN.1 data structures using either the Basic Encoding Rules (BER) or its subset, the Distinguished Encoding Rules (DER).

This package is highly inspired by the Go standard package "encoding/asn1" while supporting additional features such as BER encoding and decoding and ASN.1 CHOICE types.

By default, and for convenience, the package uses DER for encoding and BER for decoding. However, it's possible to use a Context object to set the desired encoding and decoding rules as well as other options.

Restrictions:

- BER allows STRING types, such as OCTET STRING and BIT STRING, to be encoded as constructed types containing inner elements that should be concatenated to form the complete string. The package does not support that yet, but decoding of constructed strings should be included in the future.
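A hypothetical usage sketch of the Context behavior described above; the NewContext, Encode, and Decode names and the import path are assumptions about this package's API, not confirmed:

	package main

	import (
		"fmt"

		"github.com/PromonLogicalis/asn1" // assumed import path
	)

	type Message struct {
		ID   int
		Body string
	}

	func main() {
		ctx := asn1.NewContext() // assumed defaults: DER encoding, BER decoding
		data, err := ctx.Encode(Message{ID: 1, Body: "hi"})
		if err != nil {
			panic(err)
		}
		var out Message
		if _, err := ctx.Decode(data, &out); err != nil {
			panic(err)
		}
		fmt.Printf("%+v\n", out)
	}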
Package goq was built to allow users to declaratively unmarshal HTML into Go structs using struct tags composed of CSS selectors.

I've made a best effort to behave very similarly to JSON and XML decoding, as well as exposing as much information as possible in the event of an error to help you debug your unmarshaling issues.

When creating struct types to be unmarshaled into, the following general rules apply:

- Any type that implements the Unmarshaler interface will be passed a slice of *html.Node so that manual unmarshaling may be done. This takes the highest precedence.

- Any struct fields may be annotated with goquery metadata, which takes the form of an element selector followed by arbitrary comma-separated "value selectors."

- A value selector may be one of `html`, `text`, or `[someAttrName]`. `html` and `text` will result in the methods of the same name being called on the `*goquery.Selection` to obtain the value. `[someAttrName]` will result in `*goquery.Selection.Attr("someAttrName")` being called for the value.

- A primitive value type will default to the text value of the resulting nodes if no value selector is given.

- At least one value selector is required for maps, to determine the map key. The key type must follow both the rules applicable to Go map indexing and these unmarshaling rules. The value of each key will be unmarshaled in the same way the element value is unmarshaled.

- For maps, keys will be retrieved from the *same level* of the DOM. The key selector may be arbitrarily nested, though; the first level of children with any number of matching elements will be used.

- For maps, any values *must* be nested *below* the level of the key selector. Parents or siblings of the element matched by the key selector will not be considered.

- Once used, a "value selector" will be shifted off of the comma-separated list. This allows you to nest arbitrary levels of value selectors. For example, the type `[]map[string][]string` would require one selector for the map key, and take an optional second selector for the values of the string slice.

- Any struct type encountered in nested types (e.g. map[string]SomeStruct) will override any remaining "value selectors" that had not been used. For example, given a nested struct type S carrying a `[bang]` tag: `[foo]` will be used to determine the string map key, but `[bar]` and `[baz]` will be ignored, with the `[bang]` tag present in the S struct type taking precedence.
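A minimal sketch of the decode flow (the astuart.co/goq import path and the NewDecoder/Decode names are as I recall; treat them as assumptions):

	package main

	import (
		"fmt"
		"strings"

		"astuart.co/goq"
	)

	type Page struct {
		Title string   `goquery:"h1"`
		Links []string `goquery:"a,[href]"` // element selector, then attr value selector
	}

	func main() {
		html := `<html><body><h1>Hello</h1><a href="/a">A</a><a href="/b">B</a></body></html>`
		var p Page
		if err := goq.NewDecoder(strings.NewReader(html)).Decode(&p); err != nil {
			panic(err)
		}
		fmt.Printf("%+v\n", p) // {Title:Hello Links:[/a /b]}
	}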
shard.core is a full-node Bitcoin implementation written in Go.

The default options are sane for most users. This means shard.core will work 'out of the box' for most users. However, there are also a wide variety of flags that can be used to control it. The following section provides a usage overview which enumerates the flags. An interesting point to note is that the long form of all of these options (except -C) can be specified in a configuration file that is automatically parsed when shard.core starts up. By default, the configuration file is located at ~/.shard.core/shard.core.yaml on POSIX-style operating systems and %LOCALAPPDATA%\shard.core\shard.core.yaml on Windows. The -C (--configfile) flag, as shown below, can be used to override this location.

Usage:

Application Options:

Help Options:
Package amqp is an AMQP 0.9.1 client with RabbitMQ extensions.

Understand the AMQP 0.9.1 messaging model by reviewing these links first. Much of the terminology in this library directly relates to AMQP concepts.

Most other broker clients publish to queues, but in AMQP, clients publish to Exchanges instead. AMQP is programmable, meaning that both the producers and consumers agree on the configuration of the broker, instead of requiring an operator or system configuration that declares the logical topology in the broker. The routing between producers and consumer queues is via Bindings. These bindings form the logical topology of the broker.

In this library, a message sent from a publisher is called a "Publishing" and a message received by a consumer is called a "Delivery". The fields of Publishings and Deliveries are close but not exact mappings to the underlying wire format to maintain stronger types. Many other libraries will combine message properties with message headers. In this library, the well-known message properties are strongly typed fields on the Publishings and Deliveries, whereas the user-defined headers are in the Headers field.

The method naming closely matches the protocol's method name with positional parameters mapping to named protocol message fields. The motivation here is to present a comprehensive view over all possible interactions with the server. Generally, methods that map to protocol methods of the "basic" class will be elided in this interface, and the "select" methods of the various channel mode selectors will also be elided; for example Channel.Confirm and Channel.Tx.

The library is intentionally designed to be synchronous, where responses for each protocol message are required to be received in an RPC manner. Some methods have a noWait parameter, like Channel.QueueDeclare, and some methods are asynchronous, like Channel.Publish. The error values should still be checked for these methods as they will indicate IO failures such as when the underlying connection closes.

Clients of this library may be interested in receiving some of the protocol messages other than Deliveries, like basic.ack methods while a channel is in confirm mode. The Notify* methods with Connection and Channel receivers model the pattern of asynchronous events like closes due to exceptions, or messages that are sent out of band from an RPC call like basic.ack or basic.flow. Any asynchronous events, including Deliveries and Publishings, must always have a receiver until the corresponding chans are closed. Without asynchronous receivers, the synchronous methods will block.

It's important as a client of an AMQP topology to ensure the state of the broker matches your expectations. For both publish and consume use cases, make sure you declare the queues, exchanges and bindings you expect to exist prior to calling Channel.Publish or Channel.Consume.

SSL/TLS - Secure connections

When Dial encounters an amqps:// scheme, it will use the zero value of a tls.Config. This will only perform server certificate and host verification. Use DialTLS when you wish to provide a client certificate (recommended), include a private certificate authority's certificate in the cert chain for server validity, or run insecure by not verifying the server certificate. DialTLS will use the provided tls.Config when it encounters an amqps:// scheme and will dial a plain connection when it encounters an amqp:// scheme.
SSL/TLS in RabbitMQ is documented here: http://www.rabbitmq.com/ssl.html This exports a Session object that wraps this library. It automatically reconnects when the connection fails, and blocks all pushes until the connection succeeds. It also confirms every outgoing message, so none are lost. It doesn't automatically ack each message, but leaves that to the parent process, since it is usage-dependent. Try running this in one terminal, and `rabbitmq-server` in another. Stop & restart RabbitMQ to see how the queue reacts.
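For the declare-then-publish flow described above, a compact sketch (assuming the github.com/streadway/amqp import path and a local broker):

	package main

	import (
		"log"

		"github.com/streadway/amqp"
	)

	func main() {
		conn, err := amqp.Dial("amqp://guest:guest@localhost:5672/")
		if err != nil {
			log.Fatal(err)
		}
		defer conn.Close()

		ch, err := conn.Channel()
		if err != nil {
			log.Fatal(err)
		}
		defer ch.Close()

		// Declare the topology you expect before publishing or consuming.
		q, err := ch.QueueDeclare("hello", false, false, false, false, nil)
		if err != nil {
			log.Fatal(err)
		}

		// A message sent from a publisher is a Publishing.
		err = ch.Publish("", q.Name, false, false, amqp.Publishing{
			ContentType: "text/plain",
			Body:        []byte("hello"),
		})
		if err != nil {
			log.Fatal(err)
		}
	}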
Code generated by go-bindata. (@generated) DO NOT EDIT.

sources: sample-bsvd.conf

bsvd is a full-node Bitcoin (BSV) implementation written in Go.

The default options are sane for most users. This means bsvd will work 'out of the box' for most users. However, there are also a wide variety of flags that can be used to control it. The following section provides a usage overview which enumerates the flags. An interesting point to note is that the long form of all of these options (except -C) can be specified in a configuration file that is automatically parsed when bsvd starts up. By default, the configuration file is located at ~/.bsvd/bsvd.conf on POSIX-style operating systems and %LOCALAPPDATA%\bsvd\bsvd.conf on Windows. The -C (--configfile) flag, as shown below, can be used to override this location.

Usage:

Application Options:

Help Options:
Package mempool provides a policy-enforced pool of unmined Decred transactions.

A key responsibility of the Decred network is mining transactions, both regular transactions and stake transactions, into blocks. In order to facilitate this, the mining process relies on having a readily-available source of transactions to include in a block that is being solved. At a high level, this package satisfies that requirement by providing an in-memory pool of fully validated transactions that can also optionally be further filtered based upon a configurable policy.

The Policy configuration options have flags that control whether or not "standard" transactions and old votes are accepted into the mempool. In essence, a "standard" transaction is one that satisfies a fairly strict set of requirements that are largely intended to help provide fair use of the system to all users. It is important to note that what is considered to be a "standard" transaction changes over time as policy and consensus rules evolve. For some insight, at the time of this writing, an example of _some_ of the criteria that are required for a transaction to be considered standard are that it is of the most-recently supported version, finalized, does not exceed a specific size, and only consists of specific script forms.

Since this package does not deal with other Decred specifics such as network communication and transaction relay, it returns a list of transactions that were accepted, which gives the caller a high level of flexibility in how they want to proceed. Typically, this will involve things such as relaying the transactions to other peers on the network and notifying the mining process that new transactions are available.

This package has intentionally been designed so it can be used as a standalone package for any projects needing the ability to create an in-memory pool of Decred transactions that are not only valid by consensus rules, but also adhere to a configurable policy.

## Feature Overview

The following is a quick overview of the major features. It is not intended to be an exhaustive list.

 - Maintain a pool of fully validated transactions
 - Stake transaction support (ticket purchases, votes and revocations)
 - Orphan transaction support (transactions that spend from unknown outputs)
 - Configurable transaction acceptance policy
 - Additional metadata tracking for each transaction
 - Manual control of transaction removal

Errors returned by this package are either the raw errors provided by underlying calls or of type mempool.RuleError. Since there are two classes of rules (mempool acceptance rules and blockchain (consensus) acceptance rules), the mempool.RuleError type contains a single Err field which will, in turn, either be a mempool.TxRuleError or a blockchain.RuleError. The first indicates a violation of mempool acceptance rules while the latter indicates a violation of consensus acceptance rules. This allows the caller to easily differentiate between unexpected errors, such as database errors, versus errors due to rule violations through type assertions. In addition, callers can programmatically determine the specific rule violation by type asserting the Err field to one of the aforementioned types and examining their underlying ErrorCode field.
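A sketch of that error differentiation (imports for the mempool and blockchain packages omitted; the types are those described above):

	// classify reports which class of rule, if any, err violates.
	func classify(err error) string {
		rerr, ok := err.(mempool.RuleError)
		if !ok {
			return "unexpected error, e.g. a database failure"
		}
		switch rerr.Err.(type) {
		case mempool.TxRuleError:
			return "mempool acceptance rule violation"
		case blockchain.RuleError:
			return "consensus (blockchain) acceptance rule violation"
		}
		return "rule violation of unknown class"
	}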
Package gorilla/schema fills a struct with form values. The basic usage is really simple. Given this struct: ...we can fill it passing a map to the Decode() function: This is just a simple example and it doesn't make a lot of sense to create the map manually. Typically it will come from a http.Request object and will be of type url.Values, http.Request.Form, or http.Request.MultipartForm: Note: it is a good idea to set a Decoder instance as a package global, because it caches meta-data about structs, and an instance can be shared safely: To define custom names for fields, use a struct tag "schema". To not populate certain fields, use a dash for the name and it will be ignored: The supported field types in the destination struct are: Non-supported types are simply ignored, however custom types can be registered to be converted. To fill nested structs, keys must use a dotted notation as the "path" for the field. So for example, to fill the struct Person below: ...the source map must have the keys "Name", "Phone.Label" and "Phone.Number". This means that an HTML form to fill a Person struct must look like this: Single values are filled using the first value for a key from the source map. Slices are filled using all values for a key from the source map. So to fill a Person with multiple Phone values, like: ...an HTML form that accepts three Phone values would look like this: Notice that only for slices of structs the slice index is required. This is needed for disambiguation: if the nested struct also had a slice field, we could not translate multiple values to it if we did not use an index for the parent struct. There's also the possibility to create a custom type that implements the TextUnmarshaler interface, and in this case there's no need to register a converter, like: ...an HTML form that accepts three Email values would look like this:
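A compact sketch of the basic flow described above, using NewDecoder and Decode:

	package main

	import (
		"fmt"
		"net/url"

		"github.com/gorilla/schema"
	)

	type Phone struct {
		Label  string
		Number string
	}

	type Person struct {
		Name  string `schema:"name"`
		Phone Phone
	}

	// Package global: the Decoder caches struct metadata and is safe to share.
	var decoder = schema.NewDecoder()

	func main() {
		// Typically this is r.PostForm after r.ParseForm(); url.Values is
		// used here directly for brevity.
		src := url.Values{
			"name":         {"Jane"},
			"Phone.Label":  {"home"},
			"Phone.Number": {"555-0100"},
		}
		var p Person
		if err := decoder.Decode(&p, src); err != nil {
			panic(err)
		}
		fmt.Printf("%+v\n", p)
	}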