Package txscript implements the Decred transaction script language. This package provides data structures and functions to parse and execute decred transaction scripts. Decred transaction scripts are written in a stack-based, FORTH-like language. The Decred script language consists of a number of opcodes which fall into several categories such as pushing and popping data to and from the stack, performing basic and bitwise arithmetic, conditional branching, comparing hashes, and checking cryptographic signatures. Scripts are processed from left to right and intentionally do not provide loops. The vast majority of Decred scripts at the time of this writing are of several standard forms which consist of a spender providing a public key and a signature which proves the spender owns the associated private key. This information is used to prove the spender is authorized to perform the transaction. One benefit of using a scripting language is added flexibility in specifying what conditions must be met in order to spend decred. The errors returned by this package are of type txscript.ErrorKind wrapped by txscript.Error which has full support for the standard library errors.Is and errors.As functions. This allows the caller to programmatically determine the specific error while still providing rich error messages with contextual information. See the constants defined with ErrorKind in the package documentation for a full list.
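A minimal sketch of the programmatic error handling described above, assuming the txscript/v4 module path; the specific kind checked (ErrEvalFalse) is only an illustrative ErrorKind constant.

	package main

	import (
		"errors"
		"fmt"

		"github.com/decred/dcrd/txscript/v4"
	)

	func describeScriptError(err error) {
		// errors.Is matches a specific ErrorKind constant.
		if errors.Is(err, txscript.ErrEvalFalse) {
			fmt.Println("script evaluated to false")
		}

		// errors.As extracts the wrapping txscript.Error for its context.
		var scriptErr txscript.Error
		if errors.As(err, &scriptErr) {
			fmt.Printf("script error: %v\n", scriptErr)
		}
	}

	func main() {
		// In real usage err would come from executing a script engine;
		// here an error is wrapped manually for illustration.
		err := fmt.Errorf("wrapped: %w", txscript.ErrEvalFalse)
		describeScriptError(err)
	}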
Package auditmanager provides the API client, operations, and parameter types for AWS Audit Manager. Welcome to the Audit Manager API reference. This guide is for developers who need detailed information about the Audit Manager API operations, data types, and errors. Audit Manager is a service that provides automated evidence collection so that you can continually audit your Amazon Web Services usage. You can use it to assess the effectiveness of your controls, manage risk, and simplify compliance. Audit Manager provides prebuilt frameworks that structure and automate assessments for a given compliance standard. Frameworks include a prebuilt collection of controls with descriptions and testing procedures. These controls are grouped according to the requirements of the specified compliance standard or regulation. You can also customize frameworks and controls to support internal audits with specific requirements. Use the following links to get started with the Audit Manager API: Actions, Data types, Common parameters, and Common errors. If you're new to Audit Manager, we recommend that you review the Audit Manager User Guide.
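A minimal sketch of constructing the client with the AWS SDK for Go v2 and invoking one operation; ListAssessments is used here purely as an illustrative action.

	package main

	import (
		"context"
		"log"

		"github.com/aws/aws-sdk-go-v2/config"
		"github.com/aws/aws-sdk-go-v2/service/auditmanager"
	)

	func main() {
		ctx := context.Background()

		// Load credentials and region from the default sources
		// (environment, shared config, etc.).
		cfg, err := config.LoadDefaultConfig(ctx)
		if err != nil {
			log.Fatal(err)
		}

		client := auditmanager.NewFromConfig(cfg)

		// Call an Audit Manager action.
		out, err := client.ListAssessments(ctx, &auditmanager.ListAssessmentsInput{})
		if err != nil {
			log.Fatal(err)
		}
		log.Printf("%+v", out)
	}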
Package dcrjson provides primitives for working with the Decred JSON-RPC API. When communicating via the JSON-RPC protocol, all of the commands need to be marshalled to and from the wire in the appropriate format. This package provides data structures and primitives to ease this process. In addition, it also provides some additional features such as custom command registration, command categorization, and reflection-based help generation. This information is not necessary in order to use this package, but it does provide some intuition into what the marshalling and unmarshalling that is discussed below is doing under the hood. As defined by the JSON-RPC spec, there are effectively two forms of messages on the wire: Request Objects {"jsonrpc":"1.0","id":"SOMEID","method":"SOMEMETHOD","params":[SOMEPARAMS]} NOTE: Notifications are the same format except the id field is null. Response Objects {"result":SOMETHING,"error":null,"id":"SOMEID"} {"result":null,"error":{"code":SOMEINT,"message":SOMESTRING},"id":"SOMEID"} For requests, the params field can vary in what it contains depending on the method (a.k.a. command) being sent. Each parameter can be as simple as an int or a complex structure containing many nested fields. The id field is used to identify a request and will be included in the associated response. When working with asynchronous transports, such as websockets, spontaneous notifications are also possible. As indicated, they are the same as a request object, except they have the id field set to null. Therefore, servers will ignore requests with the id field set to null, while clients can choose to consume or ignore them. Unfortunately, the original Bitcoin JSON-RPC API (and hence anything compatible with it) doesn't always follow the spec and will sometimes return an error string in the result field with a null error for certain commands. However, for the most part, the error field will be set as described on failure. Based upon the discussion above, it should be easy to see how the types of this package map into the required parts of the protocol. To simplify the marshalling of the requests and responses, the MarshalCmd and MarshalResponse functions are provided. They return the raw bytes ready to be sent across the wire. Unmarshalling a received Request object is a two step process: This approach is used since it provides the caller with access to the additional fields in the request that are not part of the command such as the ID. Unmarshalling a received Response object is also a two step process: As above, this approach is used since it provides the caller with access to the fields in the response such as the ID and Error. This package provides two approaches for creating a new command. The first, and preferred, method is to use one of the New<Foo>Cmd functions. This allows static compile-time checking to help ensure the parameters stay in sync with the struct definitions. The second approach is the NewCmd function which takes a method (command) name and variable arguments. The function includes full checking to ensure the parameters are accurate according to the provided method, however these checks are, obviously, run-time which means any mistakes won't be found until the code is actually executed. However, it is quite useful for user-supplied commands that are intentionally dynamic. The command handling of this package is built around the concept of registered commands. 
This is true for the wide variety of commands already provided by the package, but it also means callers can easily provide custom commands with all of the same functionality as the built-in commands. Use the RegisterCmd function for this purpose. A list of all registered methods can be obtained with the RegisteredCmdMethods function. All registered commands are registered with flags that identify information such as whether the command applies to a chain server, wallet server, or is a notification along with the method name to use. These flags can be obtained with the MethodUsageFlags function, and the method can be obtained with the CmdMethod function. To facilitate providing consistent help to users of the RPC server, this package exposes the GenerateHelp function, which uses reflection on registered commands or notifications, as well as the provided expected result types, to generate the final help text. In addition, the MethodUsageText function is provided to generate consistent one-line usage for registered commands and notifications using reflection. There are two distinct types of errors supported by this package: The first category of errors (type Error) typically indicates a programmer error and can be avoided by properly using the API. Errors of this type will be returned from the various functions available in this package. They identify issues such as unsupported field types, attempts to register malformed commands, and attempting to create a new command with an improper number of parameters. The specific reason for the error can be detected by type asserting it to a *dcrjson.Error and accessing the ErrorCode field. The second category of errors (type RPCError), on the other hand, are useful for returning errors to RPC clients. Consequently, they are used in the previously described Response type. This example demonstrates how to unmarshal a JSON-RPC response and then unmarshal the result field in the response to a concrete type.
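A minimal sketch of the two-step response unmarshalling mentioned above, assuming the dcrjson/v4 module path and its exported Response type; the block-count result is illustrative.

	package main

	import (
		"encoding/json"
		"fmt"
		"log"

		"github.com/decred/dcrd/dcrjson/v4"
	)

	func main() {
		raw := []byte(`{"result":382452,"error":null,"id":1}`)

		// Step 1: unmarshal the generic response envelope.
		var resp dcrjson.Response
		if err := json.Unmarshal(raw, &resp); err != nil {
			log.Fatal(err)
		}
		if resp.Error != nil {
			log.Fatal(resp.Error)
		}

		// Step 2: unmarshal the result field into the concrete type
		// expected for the method (a block count in this illustration).
		var blockCount int64
		if err := json.Unmarshal(resp.Result, &blockCount); err != nil {
			log.Fatal(err)
		}
		fmt.Println(blockCount)
	}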
Package set provides both threadsafe and non-threadsafe implementations of a generic set data structure. In the threadsafe set, safety encompasses all operations on one set. Operations on multiple sets are consistent in that the elements of each set used were valid at exactly one point in time between the start and the end of the operation.
Package skipper provides an HTTP routing library with flexible configuration as well as a runtime update of the routing rules. Skipper works as an HTTP reverse proxy that is responsible for mapping incoming requests to multiple HTTP backend services, based on routes that are selected by the request attributes. At the same time, both the requests and the responses can be augmented by a filter chain that is specifically defined for each route. Optionally, it can provide a circuit breaker mechanism individually for each backend host. Skipper can load and update the route definitions from multiple data sources without being restarted. It provides a default executable command with a few built-in filters, however, its primary use case is to be extended with custom filters, predicates or data sources. For further information read 'Extending Skipper'. Skipper took the core design and inspiration from Vulcand: https://github.com/mailgun/vulcand. Skipper is 'go get' compatible. If needed, create a 'go workspace' first: Get the Skipper packages: Create a file with a route: Optionally, verify the syntax of the file: Start Skipper and make an HTTP request: The core of Skipper's request processing is implemented by a reverse proxy in the 'proxy' package. The proxy receives the incoming request and forwards it to the routing engine in order to receive the most specific matching route. When a route matches, the request is forwarded to all filters defined by it. The filters can modify the request or execute any kind of program logic. Once the request has been processed by all the filters, it is forwarded to the backend endpoint of the route. The response from the backend goes once again through all the filters in reverse order. Finally, it is mapped as the response of the original incoming request. Besides the default proxying mechanism, it is possible to define routes without a real network backend endpoint. One of these cases is called a 'shunt' backend, in which case one of the filters needs to handle the request providing its own response (e.g. the 'static' filter). Actually, filters themselves can instruct the request flow to shunt by calling the Serve(*http.Response) method of the filter context. Another case of a route without a network backend is the 'loopback'. A loopback route can be used to match a request, modified by filters, against the lookup tree with different conditions and then execute a different route. One example scenario can be to use a single route as an entry point to execute some calculation to get an A/B testing decision and then match the updated request metadata for the actual destination route. This way the calculation can be executed for only those requests that don't contain information about a previously calculated decision. For further details, see the 'proxy' and 'filters' package documentation. Finding a request's route happens by matching the request attributes to the conditions in the route's definitions. Such definitions may have the following conditions: - method - path (optionally with wildcards) - path regular expressions - host regular expressions - headers - header regular expressions It is also possible to create custom predicates with any other matching criteria. The relation between the conditions in a route definition is 'and', meaning that a request must fulfill each condition to match a route. For further details, see the 'routing' package documentation. Filters are applied in order of definition to the request and in reverse order to the response. 
They are used to modify request and response attributes, such as headers, or execute background tasks, like logging. Some filters may handle the requests without proxying them to service backends. Filters, depending on their implementation, may accept/require parameters that are set specifically for the route. For further details, see the 'filters' package documentation. Each route has one of the following backends: HTTP endpoint, shunt, loopback or dynamic. Backend endpoints can be any HTTP service. They are specified by their network address, including the protocol scheme, the domain name or the IP address, and optionally the port number: e.g. "https://www.example.org:4242". (The path and query are sent from the original request, or set by filters.) A shunt route means that Skipper handles the request alone and doesn't make requests to a backend service. In this case, it is the responsibility of one of the filters to generate the response. A loopback route executes the routing mechanism on the current state of the request from the start, including the route lookup. This way it serves as a form of an internal redirect. A dynamic route means that the final target will be defined in a filter. One of the filters in the chain must set the target backend URL explicitly. Route definitions consist of the following: - request matching conditions (predicates) - filter chain (optional) - backend The eskip package implements the in-memory and text representations of route definitions, including a parser. (Note to contributors: in order to stay compatible with 'go get', the generated part of the parser is stored in the repository. When changing the grammar, 'go generate' needs to be executed explicitly to update the parser.) For further details, see the 'eskip' package documentation. Skipper has filter implementations of basic auth and OAuth2. It can be integrated with tokeninfo-based OAuth2 providers. For details, see: https://godoc.org/github.com/zalando/skipper/filters/auth. Skipper's route definitions are loaded from one or more data sources. It can receive incremental updates from those data sources at runtime. It provides the following data clients: - Kubernetes: Skipper can be used as part of a Kubernetes Ingress Controller implementation together with https://github.com/zalando-incubator/kube-ingress-aws-controller . In this scenario, Skipper uses the Kubernetes API's Ingress extensions as a source for routing. For a complete deployment example, see more details in: https://github.com/zalando-incubator/kubernetes-on-aws/ . - Innkeeper: the Innkeeper service implements a storage for large sets of Skipper routes, with an HTTP+JSON API, OAuth2 authentication and role management. See the 'innkeeper' package and https://github.com/zalando/innkeeper. - etcd: Skipper can load routes and receive updates from etcd clusters (https://github.com/coreos/etcd). See the 'etcd' package. - static file: package eskipfile implements a simple data client, which can load route definitions from a static file in eskip format. Currently, it loads the routes on startup. It doesn't support runtime updates. Skipper can use additional data sources, provided by extensions. Sources must implement the DataClient interface in the routing package. Skipper provides circuit breakers, configured either globally, based on backend hosts or based on individual routes. It supports two types of circuit breaker behavior: open on N consecutive failures, or open on N failures out of M requests. 
For details, see: https://godoc.org/github.com/zalando/skipper/circuit. Skipper can be started with the default executable command 'skipper', or as a library built into an application. The easiest way to start Skipper as a library is to execute the 'Run' function of the current, root package. Each option accepted by the 'Run' function is wired in the default executable as well, as a command line flag. E.g. EtcdUrls becomes -etcd-urls as a comma-separated list. For command line help, enter: An additional utility, eskip, can be used to verify, print, update and delete routes from/to files or etcd (Innkeeper on the roadmap). See the cmd/eskip command package, and/or enter in the command line: Skipper doesn't use dynamically loaded plugins, however, it can be used as a library, and it can be extended with custom predicates, filters and/or custom data sources. To create a custom predicate, one needs to implement the PredicateSpec interface in the routing package. Instances of the PredicateSpec are used internally by the routing package to create the actual Predicate objects as referenced in eskip routes, with concrete arguments. Example, randompredicate.go: In the above example, a custom predicate is created that can be referenced in eskip definitions with the name 'Random': To create a custom filter, we need to implement the Spec interface of the filters package. 'Spec' is the specification of a filter, and it is used to create concrete filter instances, while the raw route definitions are processed. Example, hellofilter.go: The above example creates a filter specification, and in the routes where they are included, the filter instances will set the 'X-Hello' header for each and every response. The name of the filter is 'hello', and in a route definition it is referenced as: The easiest way to create a custom Skipper variant is to implement the required filters (as in the example above) by importing the Skipper package, and starting it with the 'Run' command. Example, hello.go: A file containing the routes, routes.eskip: Start the custom router: The 'Run' function in the root Skipper package starts its own listener but it doesn't provide the best composability. The proxy package, however, provides a standard http.Handler, so it is possible to use it in a more complex solution as a building block for routing. Skipper provides detailed logging of failures, and access logs in Apache log format. Skipper also collects detailed performance metrics, and exposes them on a separate listener endpoint for pulling snapshots. For details, see the 'logging' and 'metrics' packages documentation. The router's performance depends on the environment and on the used filters. Under ideal circumstances, and without filters, the biggest time factor is the route lookup. Skipper is able to scale to thousands of routes with logarithmic performance degradation. However, this comes at the cost of increased memory consumption, due to storing the whole lookup tree in a single structure. Benchmarks for the tree lookup can be run by: In case more aggressive scale is needed, it is possible to set up Skipper in a cascade model, with multiple Skipper instances for specific route segments.
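As a rough illustration of the custom filter mechanism, the sketch below implements a 'hello'-style filter by satisfying the Spec and Filter interfaces of the filters package; the argument handling and header value are illustrative.

	package hellofilter

	import "github.com/zalando/skipper/filters"

	type helloSpec struct{}

	type helloFilter struct {
		who string
	}

	// Name is the filter name referenced in eskip route definitions.
	func (s *helloSpec) Name() string { return "hello" }

	// CreateFilter builds a concrete filter instance from the arguments
	// given in the route definition.
	func (s *helloSpec) CreateFilter(args []interface{}) (filters.Filter, error) {
		if len(args) != 1 {
			return nil, filters.ErrInvalidFilterParameters
		}
		who, ok := args[0].(string)
		if !ok {
			return nil, filters.ErrInvalidFilterParameters
		}
		return &helloFilter{who: who}, nil
	}

	// Request does nothing for this filter.
	func (f *helloFilter) Request(ctx filters.FilterContext) {}

	// Response sets the X-Hello header on every response of the route.
	func (f *helloFilter) Response(ctx filters.FilterContext) {
		ctx.Response().Header.Set("X-Hello", "Hello, "+f.who+"!")
	}

A route could then reference the filter in an eskip definition such as: r: * -> hello("world") -> "https://www.example.org"; assuming the spec is registered with the proxy, e.g. via the CustomFilters option of Run.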
Package dicom provides a set of tools to read, write, and generally work with DICOM (https://dicom.nema.org/) medical image files in Go. dicom.Parse and dicom.Write provide the core functionality to read and write DICOM Datasets. This package provides Go data structures that represent DICOM concepts (for example, dicom.Dataset and dicom.Element). These structures will pretty-print by default and are JSON serializable out of the box. This package provides some advanced functionality as well, including: streaming image frames to an output channel, reading elements one-by-one (like an iterator pattern), flat iteration over nested elements in a Dataset, and more. General usage is simple. Check out the package examples below and some function specific examples. It may also be helpful to take a look at the example cmd/dicomutil program, which is a CLI built around this library to save out image frames from DICOMs and print out metadata to STDOUT.
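A minimal sketch of parsing a file, assuming the suyashkumar/dicom module layout (ParseFile and the pkg/tag sub-package); the file path is illustrative.

	package main

	import (
		"fmt"
		"log"

		"github.com/suyashkumar/dicom"
		"github.com/suyashkumar/dicom/pkg/tag"
	)

	func main() {
		// Parse the whole file into a Dataset; the nil channel skips
		// streaming of image frames.
		dataset, err := dicom.ParseFile("testdata/sample.dcm", nil)
		if err != nil {
			log.Fatal(err)
		}

		// Datasets and Elements pretty-print by default.
		if el, err := dataset.FindElementByTag(tag.PatientName); err == nil {
			fmt.Println(el)
		}
	}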
Package gcs provides an API for building and using a Golomb-coded set filter. A Golomb-Coded Set (GCS) is a space-efficient probabilistic data structure that is used to test set membership with a tunable false positive rate while simultaneously preventing false negatives. In other words, items that are in the set will always match, but items that are not in the set will also sometimes match with the chosen false positive rate. This package currently implements two different versions for backwards compatibility. Version 1 is deprecated and therefore should no longer be used. Version 2 is the GCS variation that follows the specification details in DCP0005: https://github.com/decred/dcps/blob/master/dcp-0005/dcp-0005.mediawiki#golomb-coded-sets. Version 2 sets do not permit empty items (data of zero length) to be added and are parameterized by the following: * A parameter `B` that defines the remainder code bit size * A parameter `M` that defines the false positive rate as `1/M` * A key for the SipHash-2-4 function * The items to include in the set Errors returned by this package are of type gcs.Error. This allows the caller to programmatically determine the specific error by examining the ErrorKind field of the type asserted gcs.Error while still providing rich error messages with contextual information. See ErrorKind in the package documentation for a full list. GCS is used as a mechanism for storing, transmitting, and committing to per-block filters. Consensus-validating full nodes commit to a single filter for every block and serve the filter to SPV clients that match against the filter locally to determine if the block is potentially relevant. The required parameters for Decred are defined by the blockcf2 package. For more details, see the Block Filters section of DCP0005: https://github.com/decred/dcps/blob/master/dcp-0005/dcp-0005.mediawiki#block-filters
Package goparquet is an implementation of the parquet file format in Go. It provides functionality to both read and write parquet files, as well as high-level functionality to manage the data schema of parquet files, to directly write Go objects to parquet files using automatic or custom marshalling and to read records from parquet files into Go objects using automatic or custom marshalling. parquet is a file format to store nested data structures in a flat columnar format. By storing in a column-oriented way, it allows for efficient reading of individual columns without having to read and decode complete rows. This allows for efficient reading and faster processing when using the file format in conjunction with distributed data processing frameworks like Apache Hadoop or distributed SQL query engines like Presto and AWS Athena. This particular implementation is divided into several packages. The top-level package that you're currently viewing is the low-level implementation of the file format. It is accompanied by the sub-packages parquetschema and floor. parquetschema provides functionality to parse textual schema definitions as well as the data types to manually or programmatically construct schema definitions by other means that are open to the user. The textual schema definition format is based on the barely documented schema definition format that is implemented in the parquet Java implementation. See the parquetschema sub-package for further documentation on how to use this package and the grammar of the schema definition format as well as examples. floor is a high-level wrapper around the low-level package. It provides functionality to open parquet files to read from them or to write to them. When reading from parquet files, floor takes care of automatically unmarshalling the low-level data into the user-provided Go object. When writing to parquet files, user-provided Go objects are first marshalled to a low-level data structure that is then written to the parquet file. These mechanisms allow you to directly read and write Go objects without having to deal with the details of the low-level parquet format. Alternatively, marshalling and unmarshalling can be implemented in a custom manner, giving the user maximum flexibility in case of disparities between the parquet schema definition and the actual Go data structure. For more information, please refer to the floor sub-package's documentation. To aid in working with parquet files, this package also provides a command-line tool named "parquet-tool" that allows you to inspect a parquet file's schema, meta data, row count and content as well as to merge and split parquet files. When operating with parquet files, most users should be able to cover their regular use cases of reading and writing files using just the high-level floor package as well as the parquetschema package. Only if a user has more specialized requirements for working with parquet files is it advisable to use this low-level package. To write to a parquet file, the type provided by this package is the FileWriter. Create a new *FileWriter object using the NewFileWriter function. You have a number of options available with which you can influence the FileWriter's behaviour. You can use these options to e.g. set meta data, the compression algorithm to use, the schema definition to use, or whether the data should be written in the V2 format. 
If you didn't set a schema definition, you then need to manually create columns using the functions NewDataColumn, NewListColumn and NewMapColumn, and then add them to the FileWriter by using the AddColumn method. To further structure your data into groups, use AddGroup to create groups. When you add columns to groups, you need to provide the full column name using dotted notation (e.g. "groupname.fieldname") to AddColumn. Using the AddData method, you can then add records. The provided data is of type map[string]interface{}. This data can be nested: to provide data for a repeated field, the data type to use for the map value is []interface{}. When the provided data is a group, the data type for the group itself again needs to be map[string]interface{}. The data within a parquet file is divided into row groups of a certain size. You can either set the desired row group size as a FileWriterOption, or you can manually check the estimated data size of the current row group using the CurrentRowGroupSize method, and use FlushRowGroup to write the data to disk and start a new row group. Please note that CurrentRowGroupSize only estimates the _uncompressed_ data size. If you've enabled compression, it is impossible to predict the compressed data size, so the actual row groups written to disk may be a lot smaller than uncompressed, depending on how efficiently your data can be compressed. When you're done writing, always use the Close method to flush any remaining data and to write the file's footer. To read from files, create a FileReader object using the NewFileReader function. You can optionally provide a list of columns to read. If these are set, only these columns are read from the file, while all other columns are ignored. If no columns are provided, then all columns are read. With the FileReader, you can then go through the row groups (using PreLoad and SkipRowGroup), and iterate through the row data in each row group (using NextRow). To find out how many rows to expect in total and per row group, use the NumRows and RowGroupNumRows methods. The number of row groups can be determined using the RowGroupCount method.
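A minimal sketch of the writing flow described above, assuming the fraugster/parquet-go import paths and the WithSchemaDefinition and WithCompressionCodec options; the schema and field names are illustrative.

	package main

	import (
		"log"
		"os"

		goparquet "github.com/fraugster/parquet-go"
		"github.com/fraugster/parquet-go/parquet"
		"github.com/fraugster/parquet-go/parquetschema"
	)

	func main() {
		f, err := os.Create("output.parquet")
		if err != nil {
			log.Fatal(err)
		}
		defer f.Close()

		// Parse a textual schema definition.
		sd, err := parquetschema.ParseSchemaDefinition(
			`message example {
				required int64 id;
				required binary name (STRING);
			}`)
		if err != nil {
			log.Fatal(err)
		}

		fw := goparquet.NewFileWriter(f,
			goparquet.WithSchemaDefinition(sd),
			goparquet.WithCompressionCodec(parquet.CompressionCodec_SNAPPY),
		)

		// Records are provided as map[string]interface{}; string data is
		// passed as []byte in the low-level package.
		if err := fw.AddData(map[string]interface{}{
			"id":   int64(1),
			"name": []byte("alice"),
		}); err != nil {
			log.Fatal(err)
		}

		// Close flushes any remaining data and writes the file's footer.
		if err := fw.Close(); err != nil {
			log.Fatal(err)
		}
	}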
Package hazelcast provides the Hazelcast Go client. Hazelcast is an open-source distributed in-memory data store and computation platform. It provides a wide variety of distributed data structures and concurrency primitives. The Hazelcast Go client is a way to communicate with Hazelcast IMDG clusters and access the cluster data. If you are using Hazelcast and the Go client on the same computer, generally the default configuration should be fine. This is great for trying out the client. However, if you run the client on a different computer than any of the cluster members, you may need to do some simple configuration, such as specifying the member addresses. The Hazelcast members and clients have their own configuration options. You may need to reflect some of the member side configurations on the client side to properly connect to the cluster. In order to configure the client, you only need to create a new `hazelcast.Config{}`, which you can pass to the `hazelcast.StartNewClientWithConfig` function: Calling hazelcast.StartNewClientWithConfig with the default configuration is equivalent to hazelcast.StartNewClient. The default configuration assumes Hazelcast is running at localhost:5701 with the cluster name set to dev. If you run Hazelcast members on a different server than the client, you need to make certain changes to client settings. Assuming Hazelcast members are running at hz1.server.com:5701, hz2.server.com:5701 and hz3.server.com:5701 with cluster name production, you would use the configuration below. Note that addresses must include port numbers: You can also load configuration from JSON: If you are changing several options in a configuration section, you may have to repeatedly specify the configuration section: You can simplify the code above by getting a reference to config.Cluster and updating it: Note that you should get a reference to the configuration section you are updating, otherwise you would update a copy of it, which doesn't modify the configuration. There are a few options that require a duration, such as config.Cluster.HeartbeatInterval, config.Cluster.Network.ConnectionTimeout and others. You must use types.Duration instead of time.Duration with those options, since types.Duration values support human-readable durations when deserialized from text: That corresponds to the following JSON configuration. Refer to https://golang.org/pkg/time/#ParseDuration for the available duration strings: Here are all configuration items with their default values: Check out the nearcache package for the documentation about the Near Cache. You can listen to creation and destroy events for distributed objects by attaching a listener to the client. A distributed object is created when first referenced unless it already exists. Here is an example: If you don't want to receive any distributed object events, use client.RemoveDistributedObjectListener: Running SQL queries requires Hazelcast 5.0 and up. Check out the Hazelcast SQL documentation here: https://docs.hazelcast.com/hazelcast/latest/sql/sql-overview The SQL support should be enabled in Hazelcast server configuration: The client supports two kinds of queries: The ones returning rows (select statements and a few others) and the rest (insert, update, etc.). The former kinds of queries are executed with the QuerySQL method and the latter ones are executed with the ExecSQL method. Use the question mark (?) for placeholders. To connect to a data source and query it as if it is a table, a mapping should be created. 
Currently, mappings for Map, Kafka and file data sources are supported. You can read the details about mappings here: https://docs.hazelcast.com/hazelcast/latest/sql/sql-overview#mappings The following data types are supported when inserting/updating; the names in parentheses correspond to SQL types. In order to force using a specific date/time type, create a time.Time value and cast it to the target type: Hazelcast Management Center can monitor your clients if client-side statistics are enabled. You can enable statistics by setting config.Stats.Enabled to true. Optionally, the period of statistics collection can be set using the config.Stats.Period setting. The labels set in the configuration appear in the Management Center console:
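A minimal sketch of the remote-cluster configuration described above, assuming the hazelcast-go-client v1 API; the addresses, cluster name and map name mirror or extend the illustrative values used in the text.

	package main

	import (
		"context"
		"log"
		"time"

		"github.com/hazelcast/hazelcast-go-client"
		"github.com/hazelcast/hazelcast-go-client/types"
	)

	func main() {
		ctx := context.Background()
		config := hazelcast.Config{}

		// Get a reference to the cluster section so the updates are not
		// applied to a copy.
		cc := &config.Cluster
		cc.Name = "production"
		cc.Network.SetAddresses("hz1.server.com:5701", "hz2.server.com:5701", "hz3.server.com:5701")
		cc.HeartbeatInterval = types.Duration(10 * time.Second)

		// Optionally enable client-side statistics for Management Center.
		config.Stats.Enabled = true
		config.Stats.Period = types.Duration(5 * time.Second)

		client, err := hazelcast.StartNewClientWithConfig(ctx, config)
		if err != nil {
			log.Fatal(err)
		}

		// Use a distributed map through the client.
		m, err := client.GetMap(ctx, "my-map")
		if err != nil {
			log.Fatal(err)
		}
		if err := m.Set(ctx, "key", "value"); err != nil {
			log.Fatal(err)
		}
	}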
Package types implements concrete types for marshalling to and from the dcrd JSON-RPC commands, return values, and notifications. When communicating via the JSON-RPC protocol, all requests and responses must be marshalled to and from the wire in the appropriate format. This package provides data structures and primitives that are registered with dcrjson to ease this process. An overview specific to this package is provided here, however it is also instructive to read the documentation for the dcrjson package (https://pkg.go.dev/github.com/decred/dcrd/dcrjson/v4). The types in this package map to the required parts of the protocol as discussed in the dcrjson documentation. To simplify the marshalling of the requests and responses, the dcrjson.MarshalCmd and dcrjson.MarshalResponse functions may be used. They return the raw bytes ready to be sent across the wire. Unmarshalling a received Request object is a two step process: This approach is used since it provides the caller with access to the additional fields in the request that are not part of the command such as the ID. Unmarshalling a received Response object is also a two step process: As above, this approach is used since it provides the caller with access to the fields in the response such as the ID and Error. This package provides two approaches for creating a new command. The first, and preferred, method is to use one of the New<Foo>Cmd functions. This allows static compile-time checking to help ensure the parameters stay in sync with the struct definitions. The second approach is the dcrjson.NewCmd function which takes a method (command) name and variable arguments. Since this package registers all of its types with dcrjson, the function will recognize them and includes full checking to ensure the parameters are accurate according to the provided method, however these checks are, obviously, run-time which means any mistakes won't be found until the code is actually executed. However, it is quite useful for user-supplied commands that are intentionally dynamic. To facilitate providing consistent help to users of the RPC server, the dcrjson package exposes the GenerateHelp function, which uses reflection on commands and notifications registered by this package, as well as the provided expected result types, to generate the final help text. In addition, the dcrjson.MethodUsageText function may be used to generate consistent one-line usage for registered commands and notifications using reflection.
Package bloom provides data structures and methods for creating Bloom filters. A Bloom filter is a representation of a set of _n_ items, where the main requirement is to make membership queries; _i.e._, whether an item is a member of a set. A Bloom filter has two parameters: _m_, a maximum size (typically a reasonably large multiple of the cardinality of the set to represent) and _k_, the number of hashing functions on elements of the set. (The actual hashing functions are important, too, but this is not a parameter for this implementation). A Bloom filter is backed by a BitSet; a key is represented in the filter by setting the bits at each value of the hashing functions (modulo _m_). Set membership is done by _testing_ whether the bits at each value of the hashing functions (again, modulo _m_) are set. If so, the item is in the set. If the item is actually in the set, a Bloom filter will never fail (the true positive rate is 1.0); but it is susceptible to false positives. The art is to choose _k_ and _m_ correctly. In this implementation, the hashing function used is murmurhash, a non-cryptographic hashing function. This implementation accepts keys for setting and testing as []byte. Thus, to add a string item, "Love": Similarly, to test if "Love" is in bloom: For numeric data, I recommend that you look into the encoding/binary library. But, for example, to add a uint32 to the filter: Finally, there is a method to estimate the false positive rate of a particular Bloom filter for a set of size _n_: Given the particular hashing scheme, it's best to be empirical about this. Note that estimating the FP rate will clear the Bloom filter.
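A minimal sketch following the steps above, assuming the bits-and-blooms/bloom (formerly willf/bloom) import path.

	package main

	import (
		"encoding/binary"
		"fmt"

		"github.com/bits-and-blooms/bloom/v3"
	)

	func main() {
		// m = 1000 bits, k = 4 hashing functions.
		filter := bloom.New(1000, 4)

		// Add a string item.
		filter.Add([]byte("Love"))

		// Test membership.
		fmt.Println(filter.Test([]byte("Love"))) // true

		// Add a uint32 by encoding it with encoding/binary first.
		n := uint32(100)
		buf := make([]byte, 4)
		binary.BigEndian.PutUint32(buf, n)
		filter.Add(buf)

		// Estimate the false positive rate for a set of size 1000
		// (note: this clears the filter).
		fmt.Println(filter.EstimateFalsePositiveRate(1000))
	}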
Package skiplist implements a skip list data structure. See Wikipedia for more details about this data structure: http://en.wikipedia.org/wiki/Skip_list A skip list is basically an ordered map. Here is a sample of how to use this package.
Package paypal provides a wrapper to the PayPal API (https://developer.paypal.com/webapps/developer/docs/api/). The first thing to do is create a Client (you can select the API base URL using paypal constants). Then you can get an access token from PayPal: After you have an access token you can call built-in functions to get data from PayPal. paypal will assign all responses to Go structures.
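A minimal sketch of the client/token flow, assuming a plutov/paypal-style API; the import path, the NewClient signature and the GetAccessToken method vary between versions of this package and are assumptions here.

	package main

	import (
		"log"

		"github.com/plutov/paypal"
	)

	func main() {
		// Select the sandbox API base URL using a package constant.
		// NOTE: client ID and secret are placeholders.
		c, err := paypal.NewClient("clientID", "secretID", paypal.APIBaseSandBox)
		if err != nil {
			log.Fatal(err)
		}

		// Obtain an access token before calling other endpoints.
		if _, err := c.GetAccessToken(); err != nil {
			log.Fatal(err)
		}
	}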
Package bolthold is an indexing and querying layer on top of a Bolt DB. The goal is to allow easy, persistent storage and retrieval of Go types. BoltDB is an embedded key-value store, and bolthold serves a similar use case, however with a higher-level interface for common uses of BoltDB. BoltHold deals directly with Go Types. When inserting data, you pass in your structure directly. When querying data, you pass in a pointer to a slice of the type you want to return. By default Gob encoding is used. You can put multiple different types into the same DB file and they (and their indexes) will be stored separately. BoltHold will automatically create an index for any struct fields tagged with "boltholdIndex". The first field specified in a query will be used as the index (if one exists). Queries are chained-together criteria that apply to a set of fields:
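A minimal sketch of inserting and querying, assuming the timshannon/bolthold import path; the Item type, its fields and the file name are illustrative.

	package main

	import (
		"log"

		"github.com/timshannon/bolthold"
	)

	type Item struct {
		ID       int
		Category string `boltholdIndex:"Category"`
		Created  int64
	}

	func main() {
		store, err := bolthold.Open("data.db", 0666, nil)
		if err != nil {
			log.Fatal(err)
		}
		defer store.Close()

		// Insert a value keyed by its ID; the struct is stored directly.
		if err := store.Insert(1, &Item{ID: 1, Category: "blog", Created: 1}); err != nil {
			log.Fatal(err)
		}

		// Query: chained criteria over struct fields; the first field used
		// ("Category") has an index, so it is used for the lookup.
		var result []Item
		err = store.Find(&result, bolthold.Where("Category").Eq("blog").And("Created").Ge(int64(1)))
		if err != nil {
			log.Fatal(err)
		}
		log.Println(result)
	}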
Package weightedrand contains a performant data structure and algorithm used to randomly select an element from some kind of list, where the chance of each element being selected is not equal, but defined by relative "weights" (or probabilities). This is called weighted random selection. Compare this package with (github.com/jmcvetta/randutil).WeightedChoice, which is optimized for the single operation case. In contrast, this package creates a presorted cache optimized for binary search, allowing for repeated selections from the same set to be significantly faster, especially for large data sets. In this example, we create a Chooser to pick from amongst various emoji fruit runes. We assign a numeric weight to each choice. These weights are relative, not on any absolute scoring system. In this trivial case, we will assign a weight of 0 to all but one fruit, so that the output will be predictable.
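A minimal sketch of the fruit example described above, assuming the pre-generics mroth/weightedrand API (NewChoice, NewChooser and Pick).

	package main

	import (
		"fmt"
		"log"

		"github.com/mroth/weightedrand"
	)

	func main() {
		// All weights are 0 except the avocado, so the output is predictable.
		chooser, err := weightedrand.NewChooser(
			weightedrand.NewChoice('🍒', 0),
			weightedrand.NewChoice('🍋', 0),
			weightedrand.NewChoice('🍊', 0),
			weightedrand.NewChoice('🍉', 0),
			weightedrand.NewChoice('🥑', 1),
		)
		if err != nil {
			log.Fatal(err)
		}

		// Pick returns the stored item; assert it back to a rune.
		fruit := chooser.Pick().(rune)
		fmt.Println(string(fruit))
	}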
Package gi is the top-level repository for the GoGi GUI framework. All of the code is in the sub-packages within this repository: * gist: css-based styling settings, including Color * girl: rendering library, can be used standalone, SVG compliant * gi: the main 2D GUI Node, Widgets, and Window * giv: more complex Views of Go data structures, supporting Model-View paradigm. * svg: full SVG rendering framework, used for Icons in gi. * gi3d: 3D rendering of a Scene within 2D windows -- full interactive 3D scenegraph. * histyle: text syntax-based highlighting styles -- used in giv.TextView * python: access all of GoGi from within Python using GoPy system.
Package logfmt implements utilities to marshal and unmarshal data in the logfmt format. The logfmt format records key/value pairs in a way that balances readability for humans and simplicity of computer parsing. It is most commonly used as a more human friendly alternative to JSON for structured logging.
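A minimal sketch of encoding and decoding records, assuming the go-logfmt/logfmt Encoder and Decoder types.

	package main

	import (
		"fmt"
		"os"
		"strings"

		"github.com/go-logfmt/logfmt"
	)

	func main() {
		// Encode key/value pairs as a logfmt record.
		enc := logfmt.NewEncoder(os.Stdout)
		enc.EncodeKeyval("level", "info")
		enc.EncodeKeyval("msg", "hello world")
		enc.EndRecord() // prints: level=info msg="hello world"

		// Decode a logfmt record back into key/value pairs.
		dec := logfmt.NewDecoder(strings.NewReader(`level=info msg="hello world"`))
		for dec.ScanRecord() {
			for dec.ScanKeyval() {
				fmt.Printf("%s=%s\n", dec.Key(), dec.Value())
			}
		}
	}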
Package skylark provides a Skylark interpreter. Skylark values are represented by the Value interface. The following built-in Value types are known to the evaluator: Client applications may define new data types that satisfy at least the Value interface. Such types may provide additional operations by implementing any of these optional interfaces: Client applications may also define domain-specific functions in Go and make them available to Skylark programs. Use NewBuiltin to construct a built-in value that wraps a Go function. The implementation of the Go function may use UnpackArgs to make sense of the positional and keyword arguments provided by the caller. Skylark's None value is not equal to Go's nil, but nil may be assigned to a Skylark Value. Be careful to avoid allowing Go nil values to leak into Skylark data structures. The Compare operation requires two arguments of the same type, but this constraint cannot be expressed in Go's type system. (This is the classic "binary method problem".) So, each Value type's CompareSameType method is a partial function that compares a value only against others of the same type. Use the package's standalone Compare (or Equal) function to compare an arbitrary pair of values. To parse and evaluate a Skylark source file, use ExecFile. The Eval function evaluates a single expression. All evaluator functions require a Thread parameter which defines the "thread-local storage" of a Skylark thread and may be used to plumb application state through Skylark code and into callbacks. When evaluation fails, it returns an EvalError from which the application may obtain a backtrace of active Skylark calls.
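A minimal sketch of evaluating a source file and a single expression, assuming the archived google/skylark API in which ExecFile populates a caller-provided StringDict (later Starlark releases changed this signature); the script source is illustrative.

	package main

	import (
		"fmt"
		"log"

		"github.com/google/skylark"
	)

	func main() {
		const src = `
	greeting = "hello"
	squares = [x*x for x in range(5)]
	`
		thread := &skylark.Thread{}

		// Parse and execute a Skylark source "file"; globals receives the
		// module's top-level bindings (assumed API, see lead-in).
		globals := make(skylark.StringDict)
		if err := skylark.ExecFile(thread, "example.sky", src, globals); err != nil {
			log.Fatal(err)
		}
		fmt.Println(globals["squares"])

		// Evaluate a single expression against those globals.
		v, err := skylark.Eval(thread, "<expr>", "greeting + ', world'", globals)
		if err != nil {
			log.Fatal(err)
		}
		fmt.Println(v)
	}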
Package graph is a library for creating generic graph data structures and modifying, analyzing, and visualizing them. A graph consists of vertices of type T, which are identified by a hash value of type K. The hash value for a given vertex is obtained using the hashing function passed to New. A hashing function takes a T and returns a K. For primitive types like integers, you may use a predefined hashing function such as IntHash – a function that takes an integer and uses that integer as the hash value at the same time: For storing custom data types, you need to provide your own hashing function. This example takes a City instance and returns its name as the hash value: Creating a graph using this hashing function will yield a graph of vertices of type City identified by hash values of type string. Adding vertices to a graph of integers is simple. graph.Graph.AddVertex takes a vertex and adds it to the graph. Most functions accept and return only hash values instead of entire instances of the vertex type T. For example, graph.Graph.AddEdge creates an edge between two vertices and accepts the hash values of those vertices. Because this graph uses the IntHash hashing function, the vertex values and hash values are the same. All operations that modify the graph itself are methods of Graph. All other operations are top-level functions provided by this library. For detailed usage examples, take a look at the README.
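A minimal sketch of the operations just described, assuming the dominikbraun/graph import path and the predefined IntHash hashing function.

	package main

	import "github.com/dominikbraun/graph"

	func main() {
		// Vertices are ints and, via IntHash, serve as their own hash values.
		g := graph.New(graph.IntHash)

		_ = g.AddVertex(1)
		_ = g.AddVertex(2)
		_ = g.AddVertex(3)

		// Edges are created from the vertices' hash values, which here
		// are the integers themselves.
		_ = g.AddEdge(1, 2)
		_ = g.AddEdge(2, 3)
	}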
Package dom provides GopherJS bindings for the JavaScript DOM APIs. This package is an in-progress effort at providing idiomatic Go bindings for the DOM, wrapping the JavaScript DOM APIs. The API is neither complete nor frozen yet, but a great amount of the DOM is already usable. While the package tries to be idiomatic Go, it also tries to stick closely to the JavaScript APIs, so that one does not need to learn a new set of APIs if one is already familiar with it. One decision that hasn't been made yet is what parts exactly should be part of this package. It is, for example, possible that the canvas APIs will live in a separate package. On the other hand, types such as StorageEvent (the event that gets fired when the HTML5 storage area changes) will be part of this package, simply due to how the DOM is structured – even if the actual storage APIs might live in a separate package. This might require special care to avoid circular dependencies. The documentation for some of the identifiers is based on the MDN Web Docs by Mozilla Contributors (https://developer.mozilla.org/en-US/docs/Web/API), licensed under CC-BY-SA 2.5 (https://creativecommons.org/licenses/by-sa/2.5/). The usual entry point to the dom package is the GetWindow() function, which will return a Window, from which you can get things such as the current Document. The DOM has a large number of different element and event types, but they all follow three interfaces. All functions that work on or return generic elements/events will return one of the three interfaces Element, HTMLElement or Event. In these interface values there will be concrete implementations, such as HTMLParagraphElement or FocusEvent. It's also not unusual that values of type Element also implement HTMLElement. In all cases, type assertions can be used. Example: Several functions in the JavaScript DOM return "live" collections of elements, that is collections that will be automatically updated when elements get removed or added to the DOM. Our bindings, however, return static slices of elements that, once created, will not automatically reflect updates to the DOM. This is primarily done so that slices can actually be used, as opposed to a form of iterator, but also because we think that magically changing data isn't Go's nature and that snapshots of state are a lot easier to reason about. This does not, however, mean that all objects are snapshots. Elements, events and generally objects that aren't slices or maps are simple wrappers around JavaScript objects, and as such attributes as well as method calls will always return the most current data. To reflect this behaviour, these bindings use pointers to make the semantics clear. Consider the following example: The above example will print `true`. Some objects in the JS API have two versions of attributes, one that returns a string and one that returns a DOMTokenList to ease manipulation of string-delimited lists. Some other objects only provide DOMTokenList, sometimes DOMSettableTokenList. To simplify these bindings, only the DOMTokenList variant will be made available, by the type TokenList. In cases where the string attribute was the only way to completely replace the value, our TokenList will provide Set([]string) and SetString(string) methods, which will be able to accomplish the same. Additionally, our TokenList will provide methods to convert it to strings and slices. This package has a relatively stable API. However, there will be backwards-incompatible changes from time to time. 
This is because the package isn't complete yet, as well as because the DOM is a moving target, and APIs do change sometimes. While an attempt is made to reduce changing function signatures to a minimum, it can't always be guaranteed. Sometimes mistakes in the bindings are found that require changing arguments or return values. Interfaces defined in this package may also change on a semi-regular basis, as new methods are added to them. This happens because the bindings aren't complete and can never really be, as new features are added to the DOM.
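A minimal sketch of the entry point and a type assertion as described above, assuming the honnef.co/go/js/dom import path; the element ID is illustrative.

	package main

	import "honnef.co/go/js/dom"

	func main() {
		doc := dom.GetWindow().Document()

		// Generic lookups return the Element interface; type-assert to a
		// concrete element type when its specific API is needed.
		el := doc.GetElementByID("greeting")
		if p, ok := el.(*dom.HTMLParagraphElement); ok {
			p.SetTextContent("Hello, DOM!")
		}
	}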
Package rtcp implements encoding and decoding of RTCP packets according to RFCs 3550 and 5506. RTCP is a sister protocol of the Real-time Transport Protocol (RTP). Its basic functionality and packet structure are defined in RFC 3550. RTCP provides out-of-band statistics and control information for an RTP session. It partners with RTP in the delivery and packaging of multimedia data, but does not transport any media data itself. The primary function of RTCP is to provide feedback on the quality of service (QoS) in media distribution by periodically sending statistics information such as transmitted octet and packet counts, packet loss, packet delay variation, and round-trip delay time to participants in a streaming multimedia session. An application may use this information to control quality of service parameters, perhaps by limiting flow, or using a different codec. Decoding RTCP packets: Encoding RTCP packets:
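A minimal sketch of encoding and decoding, assuming the pion/rtcp Marshal and Unmarshal helpers; the ReceiverReport contents are illustrative.

	package main

	import (
		"fmt"
		"log"

		"github.com/pion/rtcp"
	)

	func main() {
		// Encoding: build one or more packets and marshal them into a
		// (possibly compound) payload.
		rr := &rtcp.ReceiverReport{SSRC: 0x902f9e2e}
		raw, err := rtcp.Marshal([]rtcp.Packet{rr})
		if err != nil {
			log.Fatal(err)
		}

		// Decoding: a compound payload may yield several packets.
		packets, err := rtcp.Unmarshal(raw)
		if err != nil {
			log.Fatal(err)
		}
		for _, p := range packets {
			fmt.Printf("%T\n", p) // *rtcp.ReceiverReport
		}
	}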
Package cadence and its subdirectories contain the Cadence client side framework. The Cadence service is a task orchestrator for your application’s tasks. Applications using Cadence can execute a logical flow of tasks, especially long-running business logic, asynchronously or synchronously. They can also scale at runtime on distributed systems. A quick example illustrates its use case. Consider Uber Eats where Cadence manages the entire business flow from placing an order, accepting it, handling shopping cart processes (adding, updating, and calculating cart items), entering the order in a pipeline (for preparing food and coordinating delivery), to scheduling delivery as well as handling payments. Cadence consists of a programming framework (or client library) and a managed service (or backend). The framework enables developers to author and coordinate tasks in Go code. The root cadence package contains common data structures. The subpackages are: The Cadence hosted service brokers and persists events generated during workflow execution. Worker nodes owned and operated by customers execute the coordination and task logic. To facilitate the implementation of worker nodes Cadence provides a client-side library for the Go language. In Cadence, you can code the logical flow of events separately as a workflow and code business logic as activities. The workflow identifies the activities and sequences them, while an activity executes the logic. Dynamic workflow execution graphs - Determine the workflow execution graphs at runtime based on the data you are processing. Cadence does not pre-compute the execution graphs at compile time or at workflow start time. Therefore, you have the ability to write workflows that can dynamically adjust to the amount of data they are processing. If you need to trigger 10 instances of an activity to efficiently process all the data in one run, but only 3 for a subsequent run, you can do that. Child Workflows - Orchestrate the execution of a workflow from within another workflow. Cadence will return the results of the child workflow execution to the parent workflow upon completion of the child workflow. No polling is required in the parent workflow to monitor status of the child workflow, making the process efficient and fault tolerant. Durable Timers - Implement delayed execution of tasks in your workflows that are robust to worker failures. Cadence provides two easy to use APIs, **workflow.Sleep** and **workflow.Timer**, for implementing time based events in your workflows. Cadence ensures that the timer settings are persisted and the events are generated even if workers executing the workflow crash. Signals - Modify/influence the execution path of a running workflow by pushing additional data directly to the workflow using a signal. Via the Signal facility, Cadence provides a mechanism to consume external events directly in workflow code. Task routing - Efficiently process large amounts of data using a Cadence workflow, by caching the data locally on a worker and executing all activities meant to process that data on that same worker. Cadence enables you to choose the worker you want to execute a certain activity by scheduling that activity execution in the worker's specific task-list. Unique workflow ID enforcement - Use business entity IDs for your workflows and let Cadence ensure that only one workflow is running for a particular entity at a time. 
Cadence implements an atomic "uniqueness check" and ensures that no race conditions are possible that would result in multiple workflow executions for the same workflow ID. Therefore, you can implement your code to attempt to start a workflow without checking if the ID is already in use, even in the cases where only one active execution per workflow ID is desired. Perpetual/ContinueAsNew workflows - Run periodic tasks as a single perpetually running workflow. With the "ContinueAsNew" facility, Cadence allows you to leverage the "unique workflow ID enforcement" feature for periodic workflows. Cadence will complete the current execution and start the new execution atomically, ensuring you get to keep your workflow ID. By starting a new execution Cadence also ensures that workflow execution history does not grow indefinitely for perpetual workflows. At-most-once activity execution - Execute non-idempotent activities as part of your workflows. Cadence will not automatically retry activities on failure. For every activity execution Cadence will return a success result, a failure result, or a timeout to the workflow code and let the workflow code determine how each one of those result types should be handled. Async Activity Completion - Incorporate human input or third-party service asynchronous callbacks into your workflows. Cadence allows a workflow to pause execution on an activity and wait for an external actor to resume it with a callback. During this pause the activity does not have any actively executing code, such as a polling loop, and is merely an entry in the Cadence datastore. Therefore, the workflow is unaffected by any worker failures happening over the duration of the pause. Activity Heartbeating - Detect unexpected failures/crashes and track progress in long-running activities early. By configuring your activity to report progress periodically to the Cadence server, you can detect a crash that occurs 10 minutes into an hour-long activity execution much sooner, instead of waiting for the 60-minute execution timeout. The recorded progress before the crash gives you sufficient information to determine whether to restart the activity from the beginning or resume it from the point of failure. Timeouts for activities and workflow executions - Protect against stuck and unresponsive activities and workflows with appropriate timeout values. Cadence requires that timeout values are provided for every activity or workflow invocation. There is no upper bound on the timeout values, so you can set timeouts that span days, weeks, or even months. Visibility - Get a list of all your active and/or completed workflows. Explore the execution history of a particular workflow execution. Cadence provides a set of visibility APIs that allow you, the workflow owner, to monitor past and current workflow executions. Debuggability - Replay any workflow execution history locally under a debugger. The Cadence client library provides an API to allow you to capture a stack trace from any failed workflow execution history.
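A minimal sketch of separating workflow and activity code, assuming the go.uber.org/cadence/workflow package; the activity, its options and the durable timer are illustrative.

	package sample

	import (
		"context"
		"time"

		"go.uber.org/cadence/workflow"
	)

	// GreetActivity is ordinary business logic executed by a worker.
	func GreetActivity(ctx context.Context, name string) (string, error) {
		return "Hello, " + name + "!", nil
	}

	// GreetWorkflow codes the logical flow: it schedules the activity, waits
	// for its result, and then sleeps using a durable timer.
	func GreetWorkflow(ctx workflow.Context, name string) (string, error) {
		ao := workflow.ActivityOptions{
			ScheduleToStartTimeout: time.Minute,
			StartToCloseTimeout:    time.Minute,
		}
		ctx = workflow.WithActivityOptions(ctx, ao)

		var greeting string
		if err := workflow.ExecuteActivity(ctx, GreetActivity, name).Get(ctx, &greeting); err != nil {
			return "", err
		}

		// Durable timer: persisted by Cadence and robust to worker crashes.
		if err := workflow.Sleep(ctx, time.Hour); err != nil {
			return "", err
		}
		return greeting, nil
	}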
Package restruct implements packing and unpacking of raw binary formats. Structures can be created with struct tags annotating the on-disk or in-memory layout of the structure, using the "struct" struct tag, like so: To unpack data in memory to this structure, simply use Unpack with a byte slice:
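A minimal sketch of a tagged layout and a call to Unpack, assuming the go-restruct/restruct import path; the record layout is illustrative.

	package main

	import (
		"encoding/binary"
		"fmt"
		"log"

		"github.com/go-restruct/restruct"
	)

	// Record describes an on-disk layout: a 4-byte little-endian length
	// followed by that many bytes of data.
	type Record struct {
		Size int `struct:"int32,sizeof=Data"`
		Data []byte
	}

	func main() {
		raw := []byte{0x05, 0x00, 0x00, 0x00, 'h', 'e', 'l', 'l', 'o'}

		var rec Record
		if err := restruct.Unpack(raw, binary.LittleEndian, &rec); err != nil {
			log.Fatal(err)
		}
		fmt.Println(rec.Size, string(rec.Data)) // 5 hello
	}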
Package types implements concrete types for the dcrwallet JSON-RPC API. When communicating via the JSON-RPC protocol, all of the commands need to be marshalled to and from the wire in the appropriate format. This package provides data structures and primitives that are registered with dcrjson to ease this process. An overview specific to this package is provided here, however it is also instructive to read the documentation for the dcrjson package (https://godoc.org/github.com/decred/dcrd/dcrjson). The types in this package map to the required parts of the protocol as discussed in the dcrjson documentation. To simplify the marshalling of the requests and responses, the dcrjson.MarshalCmd and dcrjson.MarshalResponse functions may be used. They return the raw bytes ready to be sent across the wire. Unmarshalling a received Request object is a two step process: This approach is used since it provides the caller with access to the additional fields in the request that are not part of the command such as the ID. Unmarshalling a received Response object is also a two step process: As above, this approach is used since it provides the caller with access to the fields in the response such as the ID and Error. This package provides two approaches for creating a new command. The first, and preferred, method is to use one of the New<Foo>Cmd functions. This allows static compile-time checking to help ensure the parameters stay in sync with the struct definitions. The second approach is the dcrjson.NewCmd function which takes a method (command) name and variable arguments. Since this package registers all of its types with dcrjson, the function will recognize them and includes full checking to ensure the parameters are accurate according to the provided method, however these checks are, obviously, run-time which means any mistakes won't be found until the code is actually executed. However, it is quite useful for user-supplied commands that are intentionally dynamic. To facilitate providing consistent help to users of the RPC server, the dcrjson package exposes the GenerateHelp function, which uses reflection on commands and notifications registered by this package, as well as the provided expected result types, to generate the final help text. In addition, the dcrjson.MethodUsageText function may be used to generate consistent one-line usage for registered commands and notifications using reflection.
Package nutsdb implements a simple, fast, embeddable and persistent key/value store written in pure Go. It supports fully serializable transactions, and it also supports data structures such as lists, sets, sorted sets, etc. NutsDB currently works on Mac OS, Linux and Windows. NutsDB has the following main types: DB, BPTree, Entry, DataFile, and Tx. NutsDB also supports buckets; a bucket is a collection of unique keys that are associated with values. All operations happen inside a Tx. Tx represents a transaction, which can be read-only or read-write. Read-only transactions can read values for a given key, or iterate over a set of key-value pairs (prefix scanning or range scanning). Read-write transactions can also update and delete keys from the DB. See the examples for more usage details.
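A minimal sketch of a read-write and a read-only transaction, assuming an older xujiajun/nutsdb API in which Open takes an Options value and Tx.Put/Tx.Get operate on a bucket; the directory and bucket names are illustrative.

	package main

	import (
		"log"

		"github.com/xujiajun/nutsdb"
	)

	func main() {
		opt := nutsdb.DefaultOptions
		opt.Dir = "/tmp/nutsdb"

		db, err := nutsdb.Open(opt)
		if err != nil {
			log.Fatal(err)
		}
		defer db.Close()

		bucket := "bucket1"

		// Read-write transaction: update a key (ttl 0 means no expiry).
		err = db.Update(func(tx *nutsdb.Tx) error {
			return tx.Put(bucket, []byte("name"), []byte("nutsdb"), 0)
		})
		if err != nil {
			log.Fatal(err)
		}

		// Read-only transaction: read the key back.
		err = db.View(func(tx *nutsdb.Tx) error {
			entry, err := tx.Get(bucket, []byte("name"))
			if err != nil {
				return err
			}
			log.Println(string(entry.Value))
			return nil
		})
		if err != nil {
			log.Fatal(err)
		}
	}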
Package kivik provides a generic interface to CouchDB or CouchDB-like databases. The kivik package must be used in conjunction with a database driver. The officially supported drivers are: The Filesystem and Memory drivers are also available, but in early stages of development, and so many features do not yet work: The kivik driver system is modeled after the standard library's `sql` and `sql/driver` packages, although the client API is completely different due to the different database models implemented by SQL and NoSQL databases such as CouchDB. CouchDB stores JSON, so Kivik translates Go data structures to and from JSON as necessary. The conversion between Go data types and JSON, and vice versa, is handled automatically according to the rules and behavior described in the documentation for the standard library's `encoding/json` package (https://golang.org/pkg/encoding/json). One would be well-advised to become familiar with using `json` struct field tags (https://golang.org/pkg/encoding/json/#Marshal) when working with JSON documents. Most Kivik methods take `context.Context` as their first argument. This allows the cancellation of blocking operations in the case that the result is no longer needed. A typical use case for a web application would be to cancel a Kivik request if the remote HTTP client has disconnected, rendering the results of the query irrelevant. To learn more about Go's contexts, read the `context` package documentation (https://golang.org/pkg/context/) and read the Go blog post "Go Concurrency Patterns: Context" (https://blog.golang.org/context) for example code. If in doubt, you can pass `context.TODO()` as the context variable. Example: Kivik returns errors that embed an HTTP status code. In most cases, this is the HTTP status code returned by the server. The embedded HTTP status code may be accessed easily using the StatusCode() method, or with a type assertion to `interface { StatusCode() int }`. Example: Any error that does not conform to this interface will be assumed to represent a http.StatusInternalServerError status code. For common usage, authentication should be as simple as including the authentication credentials in the connection DSN. For example: This will connect to `localhost` on port 5984, using the username `admin` and the password `abc123`. When connecting to CouchDB (as in the above example), this will use cookie auth (https://docs.couchdb.org/en/stable/api/server/authn.html?highlight=cookie%20auth#cookie-authentication). Depending on which driver you use, there may be other ways to authenticate, as well. At the moment, the CouchDB driver is the only official driver which offers additional authentication methods. Please refer to the CouchDB package documentation for details (https://pkg.go.dev/github.com/go-kivik/couchdb/v3). With a client handle in hand, you can create a database handle with the DB() method to interact with a specific database.
Package log provides a structured logger. Structured logging produces logs easily consumed later by humans or machines. Humans might be interested in debugging errors, or tracing specific requests. Machines might be interested in counting interesting events, or aggregating information for off-line processing. In both cases, it is important that the log messages are structured and actionable. Package log is designed to encourage both of these best practices. The fundamental interface is Logger. Loggers create log events from key/value data. The Logger interface has a single method, Log, which accepts a sequence of alternating key/value pairs, which this package names keyvals. Here is an example of a function using a Logger to create log events. The keys in the above example are "taskID" and "event". The values are task.ID, "starting task", and "task complete". Every key is followed immediately by its value. Keys are usually plain strings. Values may be any type that has a sensible encoding in the chosen log format. With structured logging it is a good idea to log simple values without formatting them. This practice allows the chosen logger to encode values in the most appropriate way. A contextual logger stores keyvals that it includes in all log events. Building appropriate contextual loggers reduces repetition and aids consistency in the resulting log output. With, WithPrefix, and WithSuffix add context to a logger. We can use With to improve the RunTask example. The improved version emits the same log events as the original for the first and last calls to Log. Passing the contextual logger to taskHelper enables each log event created by taskHelper to include the task.ID even though taskHelper does not have access to that value. Using contextual loggers this way simplifies producing log output that enables tracing the life cycle of individual tasks. (See the Contextual example for the full code of the above snippet.) A Valuer function stored in a contextual logger generates a new value each time an event is logged. The Valuer example demonstrates how this feature works. Valuers provide the basis for consistently logging timestamps and source code location. The log package defines several valuers for that purpose. See Timestamp, DefaultTimestamp, DefaultTimestampUTC, Caller, and DefaultCaller. A common logger initialization sequence that ensures all log entries contain a timestamp and source location looks like this: Applications with multiple goroutines want each log event written to the same logger to remain separate from other log events. Package log provides two simple solutions for concurrent safe logging. NewSyncWriter wraps an io.Writer and serializes each call to its Write method. Using a SyncWriter has the benefit that the smallest practical portion of the logging logic is performed within a mutex, but it requires the formatting Logger to make only one call to Write per log event. NewSyncLogger wraps any Logger and serializes each call to its Log method. Using a SyncLogger has the benefit that it guarantees each log event is handled atomically within the wrapped logger, but it typically serializes both the formatting and output logic. Use a SyncLogger if the formatting logger may perform multiple writes per log event. This package relies on the practice of wrapping or decorating loggers with other loggers to provide composable pieces of functionality. 
It also means that Logger.Log must return an error because some implementations—especially those that output log data to an io.Writer—may encounter errors that cannot be handled locally. This in turn means that Loggers that wrap other loggers should return errors from the wrapped logger up the stack. Fortunately, the decorator pattern also provides a way to avoid the necessity to check for errors every time an application calls Logger.Log. An application required to panic whenever its Logger encounters an error could initialize its logger as follows.
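For illustration, a brief sketch of the contextual-logger and panic-on-error decorator patterns described above, assuming the github.com/go-kit/log module (older code imports github.com/go-kit/kit/log); panicLogger is a hypothetical wrapper, not part of the package:

    package main

    import (
        "os"

        "github.com/go-kit/log"
    )

    // panicLogger decorates another Logger and panics if it returns an error,
    // so call sites do not have to check Log's error every time.
    type panicLogger struct{ next log.Logger }

    func (l panicLogger) Log(keyvals ...interface{}) error {
        if err := l.next.Log(keyvals...); err != nil {
            panic(err)
        }
        return nil
    }

    func main() {
        // Concurrent-safe base logger with a timestamp and caller on every event.
        var logger log.Logger
        logger = log.NewLogfmtLogger(log.NewSyncWriter(os.Stderr))
        logger = log.With(logger, "ts", log.DefaultTimestampUTC, "caller", log.DefaultCaller)
        logger = panicLogger{next: logger}

        logger.Log("taskID", 1, "event", "starting task")
        logger.Log("taskID", 1, "event", "task complete")
    }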
Package redigomock is a mock for the redigo library (redis client). Redigomock basically registers the commands with the expected results in an internal global variable. When the command is executed via the Conn interface, the mock will look to this global variable to retrieve the corresponding result. To start a mocked connection just do the following: Now you can inject it whenever your system needs a redigo.Conn because it satisfies all interface requirements. Before running your tests you need, beyond mocking the connection, to register the expected results. For that you can generate commands with the expected results. As the Expect method from Command receives anything (interface{}), another method was created to easily map the result to your structure. For that use ExpectMap: You should also test the error cases, and you can do it in the same way as a normal result. Sometimes you will want to register a command regardless of the arguments, and you can do it with the method GenericCommand (mainly with the HMSET). All commands are registered in a global variable, so they will be there until all your test cases end. So for good practice in test writing you should clear the mock state at the beginning of each test case. Let's see a full test example. Imagine a Person structure and a function that picks up this person in Redis using the redigo library (file person.go): Now we need to test it, so let's create the corresponding test with redigomock (file person_test.go): When you use redis as a persistent list, then you might want to call the same redis command multiple times. For example: To test it, you can chain redis responses. Let's write a test case: In the first iteration of the loop redigomock would return "www.some.url.com", then "www.another.url.com" and finally redis.ErrNil. Sometimes providing expected arguments to redigomock at compile time could be too constraining. Let's imagine you use redis hash sets to store some data, along with the timestamp of the last data update. Let's expand our Person struct: And add a function updating personal data (phone number for example). Please notice that the update timestamp can't be determined at compile time: Unit test: As you can see at the position of the current timestamp redigomock is told to match the AnyInt struct created by the NewAnyInt() method. AnyInt struct will match any integer passed to redigomock from the tested method. Please see the fuzzyMatch.go file for more details.
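A short sketch of the mocking flow described above; retrievePerson is a hypothetical helper, while NewConn, Command, ExpectMap, and Clear are the redigomock calls referred to in the text (the import path's major version may differ in your project):

    package person

    import (
        "testing"

        "github.com/gomodule/redigo/redis"
        "github.com/rafaeljusto/redigomock/v3"
    )

    // retrievePerson is a hypothetical helper that reads a person hash.
    func retrievePerson(conn redis.Conn, id string) (map[string]string, error) {
        return redis.StringMap(conn.Do("HGETALL", "person:"+id))
    }

    func TestRetrievePerson(t *testing.T) {
        conn := redigomock.NewConn()
        defer conn.Clear() // reset the registered commands between tests

        conn.Command("HGETALL", "person:1").ExpectMap(map[string]string{
            "name": "Mr. Johson",
            "age":  "42",
        })

        person, err := retrievePerson(conn, "1")
        if err != nil {
            t.Fatal(err)
        }
        if person["name"] != "Mr. Johson" {
            t.Errorf("unexpected name %q", person["name"])
        }
    }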
Package temporal and its subdirectories contain the Temporal client side framework. The Temporal service is a task orchestrator for your application’s tasks. Applications using Temporal can execute a logical flow of tasks, especially long-running business logic, asynchronously or synchronously. They can also scale at runtime on distributed systems. A quick example illustrates its use case. Consider Uber Eats where Temporal manages the entire business flow from placing an order, accepting it, handling shopping cart processes (adding, updating, and calculating cart items), entering the order in a pipeline (for preparing food and coordinating delivery), to scheduling delivery as well as handling payments. Temporal consists of a programming framework (or client library) and a managed service (or backend). The framework enables developers to author and coordinate tasks in Go code. The root temporal package contains common data structures. The subpackages are: The Temporal hosted service brokers and persists events generated during workflow execution. Worker nodes owned and operated by customers execute the coordination and task logic. To facilitate the implementation of worker nodes Temporal provides a client-side library for the Go language. In Temporal, you can code the logical flow of events separately as a workflow and code business logic as activities. The workflow identifies the activities and sequences them, while an activity executes the logic. Dynamic workflow execution graphs - Determine the workflow execution graphs at runtime based on the data you are processing. Temporal does not pre-compute the execution graphs at compile time or at workflow start time. Therefore, you have the ability to write workflows that can dynamically adjust to the amount of data they are processing. If you need to trigger 10 instances of an activity to efficiently process all the data in one run, but only 3 for a subsequent run, you can do that. Child Workflows - Orchestrate the execution of a workflow from within another workflow. Temporal will return the results of the child workflow execution to the parent workflow upon completion of the child workflow. No polling is required in the parent workflow to monitor status of the child workflow, making the process efficient and fault tolerant. Durable Timers - Implement delayed execution of tasks in your workflows that are robust to worker failures. Temporal provides two easy to use APIs, **workflow.Sleep** and **workflow.Timer**, for implementing time based events in your workflows. Temporal ensures that the timer settings are persisted and the events are generated even if workers executing the workflow crash. Signals - Modify/influence the execution path of a running workflow by pushing additional data directly to the workflow using a signal. Via the Signal facility, Temporal provides a mechanism to consume external events directly in workflow code. Task routing - Efficiently process large amounts of data using a Temporal workflow, by caching the data locally on a worker and executing all activities meant to process that data on that same worker. Temporal enables you to choose the worker you want to execute a certain activity by scheduling that activity execution in the worker's specific task queue. Unique workflow ID enforcement - Use business entity IDs for your workflows and let Temporal ensure that only one workflow is running for a particular entity at a time. 
Temporal implements an atomic "uniqueness check" and ensures that no race conditions are possible that would result in multiple workflow executions for the same workflow ID. Therefore, you can implement your code to attempt to start a workflow without checking if the ID is already in use, even in the cases where only one active execution per workflow ID is desired. Perpetual/ContinueAsNew workflows - Run periodic tasks as a single perpetually running workflow. With the "ContinueAsNew" facility, Temporal allows you to leverage the "unique workflow ID enforcement" feature for periodic workflows. Temporal will complete the current execution and start the new execution atomically, ensuring you get to keep your workflow ID. By starting a new execution Temporal also ensures that workflow execution history does not grow indefinitely for perpetual workflows. At-most once activity execution - Execute non-idempotent activities as part of your workflows. Temporal will not automatically retry activities on failure. For every activity execution Temporal will return a success result, a failure result, or a timeout to the workflow code and let the workflow code determine how each one of those result types should be handled. Async Activity Completion - Incorporate human input or third-party service asynchronous callbacks into your workflows. Temporal allows a workflow to pause execution on an activity and wait for an external actor to resume it with a callback. During this pause the activity does not have any actively executing code, such as a polling loop, and is merely an entry in the Temporal datastore. Therefore, the workflow is unaffected by any worker failures happening over the duration of the pause. Activity Heartbeating - Detect unexpected failures/crashes and track progress in long running activities early. By configuring your activity to report progress periodically to the Temporal server, you can detect a crash that occurs 10 minutes into an hour-long activity execution much sooner, instead of waiting for the 60-minute execution timeout. The recorded progress before the crash gives you sufficient information to determine whether to restart the activity from the beginning or resume it from the point of failure. Timeouts for activities and workflow executions - Protect against stuck and unresponsive activities and workflows with appropriate timeout values. Temporal requires that timeout values are provided for every activity or workflow invocation. There is no upper bound on the timeout values, so you can set timeouts that span days, weeks, or even months. Visibility - Get a list of all your active and/or completed workflows. Explore the execution history of a particular workflow execution. Temporal provides a set of visibility APIs that allow you, the workflow owner, to monitor past and current workflow executions. Debuggability - Replay any workflow execution history locally under a debugger. The Temporal client library provides an API to allow you to capture a stack trace from any failed workflow execution history.
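For illustration, a minimal sketch of the workflow/activity split described above, using the Go SDK's workflow package; GreetActivity and the timeout values are illustrative assumptions, not part of the SDK:

    package sample

    import (
        "context"
        "time"

        "go.temporal.io/sdk/workflow"
    )

    // GreetActivity is a hypothetical activity holding the business logic.
    func GreetActivity(ctx context.Context, name string) (string, error) {
        return "Hello, " + name, nil
    }

    // GreetWorkflow codes the logical flow: run the activity, then wait on a
    // durable timer before completing.
    func GreetWorkflow(ctx workflow.Context, name string) (string, error) {
        ao := workflow.ActivityOptions{
            StartToCloseTimeout: time.Minute, // timeouts are required for every activity
        }
        ctx = workflow.WithActivityOptions(ctx, ao)

        var greeting string
        if err := workflow.ExecuteActivity(ctx, GreetActivity, name).Get(ctx, &greeting); err != nil {
            return "", err
        }

        // Durable timer: persisted by the Temporal service, survives worker crashes.
        if err := workflow.Sleep(ctx, 24*time.Hour); err != nil {
            return "", err
        }
        return greeting, nil
    }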
Package merkledag implements the IPFS Merkle DAG data structures.
Package tdigest provides a highly accurate mergeable data-structure for quantile estimation. Typical T-Digest use cases involve accumulating metrics on several distinct nodes of a cluster and then merging them together to get a system-wide quantile overview. Things such as: sensory data from IoT devices, quantiles over enormous document datasets (think ElasticSearch), performance metrics for distributed systems, etc. After you create (and configure, if desired) the digest: You can then use it for registering measurements: Estimating quantiles: And merging with another digest:
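A compact sketch of the create/measure/estimate/merge flow above, assuming the github.com/caio/go-tdigest v4 module path and its New/Add/Quantile/Merge API (earlier versions differ):

    package main

    import (
        "fmt"

        "github.com/caio/go-tdigest/v4"
    )

    func main() {
        // Create a digest; options (such as compression) can be passed to New.
        td, err := tdigest.New()
        if err != nil {
            panic(err)
        }

        // Register measurements, e.g. request latencies in milliseconds.
        for _, v := range []float64{12, 15, 9, 180, 22, 31} {
            if err := td.Add(v); err != nil {
                panic(err)
            }
        }

        // Estimate a quantile.
        fmt.Println("p99:", td.Quantile(0.99))

        // Merge with a digest accumulated on another node.
        other, _ := tdigest.New()
        _ = other.Add(42)
        if err := td.Merge(other); err != nil {
            panic(err)
        }
    }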
bíogo is a bioinformatics library for the Go language. It is a work in progress. bíogo stems from the need to address the size and structure of modern genomic and metagenomic data sets. These properties enforce requirements on the libraries and languages used for analysis: In addition to the computational burden of massive data set sizes in modern genomics there is an increasing need for complex pipelines to resolve questions in tightening problem space and also a developing need to be able to develop new algorithms to allow novel approaches to interesting questions. These issues suggest the need for a simplicity in syntax to facilitate: Related to the second issue is the reluctance of some researchers to release code because of quality concerns http://www.nature.com/news/2010/101013/full/467753a.html The issue of code release is the first of the principles formalised in the Science Code Manifesto http://sciencecodemanifesto.org/ A language with a simple, yet expressive, syntax should facilitate development of higher quality code and thus help reduce this barrier to research code release. It seems that nearly every language has its own bioinformatics library, some of which are very mature, for example BioPerl and BioPython. Why add another one? The different libraries excel in different fields, acting as scripting glue for applications in a pipeline (much of [1-3]) and interacting with external hosts [1, 2, 4, 5], wrapping lower level high performance languages with more user friendly syntax [1-4] or providing bioinformatics functions for high performance languages [5, 6]. The intended niche for bíogo lies somewhere between the scripting libraries and high performance language libraries in being easy to use for both small and large projects while having reasonable performance with computationally intensive tasks. The intent is to reduce the level of investment required to develop new research software for computationally intensive tasks. The bíogo library structure is influenced both by the structure of BioPerl and the Go core libraries. The coding style should be aligned with normal Go idioms as represented in the Go core libraries. Position numbering in the bíogo library conforms to the zero-based indexing of Go and range indexing conforms to Go's half-open zero-based slice indexing. This is at odds with the 'normal' inclusive indexing used by molecular biologists. This choice was made to avoid inconsistent indexing spaces being used — one-based inclusive for bíogo functions and methods and zero-based for native Go slices and arrays — and so avoid errors that this would otherwise facilitate. Note that the GFF package does allow, and defaults to, one-based inclusive indexing in its input and output of GFF files. Quality scores are supported for all sequence types, including protein. Phred and Solexa scoring systems can be read from files, however internal representation of quality scores is with Phred, so there will be precision loss in conversion. A Solexa quality score type is provided for use where this will be a problem. Copyright ©2011-2012 The bíogo Authors except where otherwise noted. All rights reserved. Use of this source code is governed by a BSD-style license that can be found in the LICENSE file.
Package types implements concrete types for marshalling to and from the dcrd JSON-RPC commands, return values, and notifications. When communicating via the JSON-RPC protocol, all requests and responses must be marshalled to and from the wire in the appropriate format. This package provides data structures and primitives that are registered with dcrjson to ease this process. An overview specific to this package is provided here, however it is also instructive to read the documentation for the dcrjson package (https://pkg.go.dev/github.com/decred/dcrd/dcrjson/v4). The types in this package map to the required parts of the protocol as discussed in the dcrjson documentation: To simplify the marshalling of the requests and responses, the dcrjson.MarshalCmd and dcrjson.MarshalResponse functions may be used. They return the raw bytes ready to be sent across the wire. Unmarshalling a received Request object is a two step process: This approach is used since it provides the caller with access to the additional fields in the request that are not part of the command such as the ID. Unmarshalling a received Response object is also a two step process: As above, this approach is used since it provides the caller with access to the fields in the response such as the ID and Error. This package provides two approaches for creating a new command. This first, and preferred, method is to use one of the New<Foo>Cmd functions. This allows static compile-time checking to help ensure the parameters stay in sync with the struct definitions. The second approach is the dcrjson.NewCmd function which takes a method (command) name and variable arguments. Since this package registers all of its types with dcrjson, the function will recognize them and includes full checking to ensure the parameters are accurate according to the provided method, however these checks are, obviously, run-time which means any mistakes won't be found until the code is actually executed. However, it is quite useful for user-supplied commands that are intentionally dynamic. To facilitate providing consistent help to users of the RPC server, the dcrjson package exposes the GenerateHelp function which uses reflection on commands and notifications registered by this package, as well as the provided expected result types, to generate the final help text. In addition, the dcrjson.MethodUsageText function may be used to generate consistent one-line usage for registered commands and notifications using reflection.
Package weightedrand contains a performant data structure and algorithm used to randomly select an element from some kind of list, where the chance of each element being selected is not equal, but rather defined by relative "weights" (or probabilities). This is called weighted random selection. This package creates a presorted cache optimized for binary search, allowing repeated selections from the same set to be significantly faster, especially for large data sets. In this example, we create a Chooser to pick from amongst various emoji fruit runes. We assign a numeric weight to each choice. These weights are relative, not on any absolute scoring system. In this trivial case, we will assign a weight of 0 to all but one fruit, so that the output will be predictable.
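A sketch of the fruit example described above, assuming the NewChoice/NewChooser/Pick constructors of the v2 module path (the v1 API uses Choice struct literals instead):

    package main

    import (
        "fmt"

        "github.com/mroth/weightedrand/v2"
    )

    func main() {
        // All but one fruit get weight 0, so the output is predictable.
        chooser, err := weightedrand.NewChooser(
            weightedrand.NewChoice('🍒', 0),
            weightedrand.NewChoice('🍋', 0),
            weightedrand.NewChoice('🍊', 0),
            weightedrand.NewChoice('🍉', 0),
            weightedrand.NewChoice('🥑', 1),
        )
        if err != nil {
            panic(err)
        }

        // Pick performs a weighted random selection using the presorted,
        // binary-search-optimized cache built by NewChooser.
        fmt.Println(string(chooser.Pick())) // always 🥑
    }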
Package gcs provides an API for building and using a Golomb-coded set filter. A Golomb-Coded Set (GCS) is a space-efficient probabilistic data structure that is used to test set membership with a tunable false positive rate while simultaneously preventing false negatives. In other words, items that are in the set will always match, but items that are not in the set will also sometimes match with the chosen false positive rate. This package currently implements two different versions for backwards compatibility. Version 1 is deprecated and therefore should no longer be used. Version 2 is the GCS variation that follows the specification details in DCP0005: https://github.com/decred/dcps/blob/master/dcp-0005/dcp-0005.mediawiki#golomb-coded-sets. Version 2 sets do not permit empty items (data of zero length) to be added and are parameterized by the following: * A parameter `B` that defines the remainder code bit size * A parameter `M` that defines the false positive rate as `1/M` * A key for the SipHash-2-4 function * The items to include in the set The errors returned by this package are of type gcs.Error. This allows the caller to programmatically determine the specific error by examining the ErrorKind field of the type asserted gcs.Error while still providing rich error messages with contextual information. See ErrorKind in the package documentation for a full list. GCS is used as a mechanism for storing, transmitting, and committing to per-block filters. Consensus-validating full nodes commit to a single filter for every block and serve the filter to SPV clients that match against the filter locally to determine if the block is potentially relevant. The required parameters for Decred are defined by the blockcf2 package. For more details, see the Block Filters section of DCP0005: https://github.com/decred/dcps/blob/master/dcp-0005/dcp-0005.mediawiki#block-filters
Package ajson implements decoding of JSON as defined in RFC 7159 without a predefined mapping to a Go struct, with support for JSONPath. All JSON structures are reflected into a custom Node struct, which can be represented by its type and value. The Unmarshal method scans the whole byte slice to create a root node of the JSON structure, with all its behaviors. Each Node has its own type and calculated value, which will be calculated on demand. The calculated value is saved in an atomic.Value, so it is thread safe. The JSONPath method returns a slice of elements found in the current JSON data for the given JSONPath. JSONPath selection is described at http://goessner.net/articles/JsonPath/ JSONPath expressions always refer to a JSON structure in the same way as XPath expressions are used in combination with an XML document. Since a JSON structure is usually anonymous and doesn't necessarily have a "root member object" JSONPath assumes the abstract name $ assigned to the outer level object. JSONPath expressions can use the dot–notation or the bracket–notation for input paths. Internal or output paths will always be converted to the more general bracket–notation. JSONPath allows the wildcard symbol * for member names and array indices. It borrows the descendant operator '..' from E4X and the array slice syntax proposal [start:end:step] from ECMASCRIPT 4. Expressions of the underlying scripting language (<expr>) can be used as an alternative to explicit names or indices, as in using the symbol '@' for the current object. Filter expressions are supported via the syntax ?(<boolean expr>): Here is a complete overview and a side by side comparison of the JSONPath syntax elements with its XPath counterparts. The package has several predefined constants. You are free to add new ones with AddConstant. The package has several predefined operators. You are free to add new ones with AddOperator. Operator precedence: https://golang.org/ref/spec#Operator_precedence Arithmetic operators: https://golang.org/ref/spec#Arithmetic_operators The package has several predefined functions. You are free to add new ones with AddFunction.
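For illustration, a small sketch of Unmarshal plus a JSONPath query, assuming the package-level ajson.Unmarshal and ajson.JSONPath helpers and a GetNumeric accessor on Node; the sample document is made up:

    package main

    import (
        "fmt"

        "github.com/spyzhov/ajson"
    )

    func main() {
        data := []byte(`{"store":{"book":[{"price":8.95},{"price":12.99}]}}`)

        // Build the root node; values are calculated lazily, on demand.
        root, err := ajson.Unmarshal(data)
        if err != nil {
            panic(err)
        }
        fmt.Println(root.IsObject())

        // Select every price in the document with a JSONPath expression.
        nodes, err := ajson.JSONPath(data, "$..price")
        if err != nil {
            panic(err)
        }
        for _, n := range nodes {
            price, _ := n.GetNumeric()
            fmt.Println(price)
        }
    }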
Package kivik provides a generic interface to CouchDB or CouchDB-like databases. The kivik package must be used in conjunction with a database driver. The officially supported drivers are: The Filesystem and Memory drivers are also available, but in early stages of development, and so many features do not yet work: The kivik driver system is modeled after the standard library's `sql` and `sql/driver` packages, although the client API is completely different due to the different database models implemented by SQL and NoSQL databases such as CouchDB. Most methods, including those on Client and DB, are safe to call concurrently, unless otherwise noted. CouchDB stores JSON, so Kivik translates Go data structures to and from JSON as necessary. The conversion from Go data types to JSON, and vice versa, is handled automatically according to the rules and behavior described in the documentation for the standard library's encoding/json package. Most client and database methods take optional arguments of the type Option. Multiple options may be passed, and latter options take precedence over earlier ones, in case of a conflict. Params and Param can be used to set options that are generally converted to URL query parameters. Different backend drivers may also provide their own unique options with driver-specific effects. Consult your driver's documentation for specifics. Kivik returns errors that embed an HTTP status code. In most cases, this is the HTTP status code returned by the server. The embedded HTTP status code may be accessed easily using the HTTPStatus() method, or with a type assertion to `interface { HTTPStatus() int }`. Example: Any error that does not conform to this interface will be assumed to represent a http.StatusInternalServerError status code. For common usage, authentication should be as simple as including the authentication credentials in the connection DSN. For example: This will connect to `localhost` on port 5984, using the username `admin` and the password `abc123`. When connecting to CouchDB (as in the above example), this will use cookie auth. Depending on which driver you use, there may be other ways to authenticate, as well. At the moment, the CouchDB driver is the only official driver which offers additional authentication methods. Please refer to the CouchDB package documentation for details. With a client handle in hand, you can create a database handle with the DB() method to interact with a specific database.
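A brief sketch of connecting with credentials in the DSN, opening a database handle, and reading the embedded HTTP status from an error; the driver import path, the "couch" driver name, and the database name are assumptions based on the v4 layout:

    package main

    import (
        "errors"
        "fmt"
        "net/http"

        "github.com/go-kivik/kivik/v4"
        _ "github.com/go-kivik/kivik/v4/couchdb" // driver assumed to register as "couch"
    )

    // httpStatus extracts the embedded HTTP status from a Kivik error, falling
    // back to 500 for errors that don't implement the interface.
    func httpStatus(err error) int {
        var coder interface{ HTTPStatus() int }
        if errors.As(err, &coder) {
            return coder.HTTPStatus()
        }
        return http.StatusInternalServerError
    }

    func main() {
        // Credentials in the DSN; CouchDB connections use cookie auth.
        client, err := kivik.New("couch", "http://admin:abc123@localhost:5984/")
        if err != nil {
            panic(err)
        }

        // Database handle for a specific database.
        db := client.DB("orders")
        fmt.Println(db.Name())

        // Errors that don't expose HTTPStatus() map to 500.
        fmt.Println(httpStatus(errors.New("boom"))) // 500
    }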
Package circllhist provides an implementation of Circonus' fixed log-linear histogram data structure. This allows tracking of histograms in a composable way such that accurate error can be reasoned about.
Package structhash creates hash strings from arbitrary go data structures.
Package wincred provides primitives for accessing the Windows Credentials Management API. This includes functions for retrieval, listing and storage of credentials as well as Go structures for convenient access to the credential data. A more detailed description of Windows Credentials Management can be found on Docs: https://docs.microsoft.com/en-us/windows/desktop/SecAuthN/credentials-management
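For illustration, a short sketch of storing and retrieving a generic credential, assuming the NewGenericCredential and GetGenericCredential helpers of this package (Windows only; the target name and secret are placeholders):

    //go:build windows

    package main

    import (
        "fmt"

        "github.com/danieljoos/wincred"
    )

    func main() {
        // Store a credential under a target name.
        cred := wincred.NewGenericCredential("myapp/api-token")
        cred.CredentialBlob = []byte("s3cr3t")
        if err := cred.Write(); err != nil {
            panic(err)
        }

        // Retrieve it again and read the secret bytes.
        got, err := wincred.GetGenericCredential("myapp/api-token")
        if err != nil {
            panic(err)
        }
        fmt.Println(string(got.CredentialBlob))

        // Remove it when no longer needed.
        _ = got.Delete()
    }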
Package fig loads configuration files and/or environment variables into Go structs with extra juice for validating fields and setting defaults. Config files may be defined in yaml, json or toml format. When you call `Load()`, fig takes the following steps: Define your configuration file in the root of your project: Define your struct and load it: Pass options as additional parameters to `Load()` to configure fig's behaviour. Do not look for any configuration file with `IgnoreFile()`. If IgnoreFile is given then any other configuration file related options like `File` and `Dirs` are simply ignored. File & Dirs: By default fig searches for a file named `config.yaml` in the directory it is run from. Change the file and directories fig searches in with `File()` and `Dirs()`. Fig searches for the file in dirs sequentially and uses the first matching file. The decoder (yaml/json/toml) used is picked based on the file's extension. The struct tag key fig looks for to find the field's alt name can be changed using `Tag()`. By default fig uses the tag key `fig`. Fig can be configured to additionally set fields using the environment. This behaviour can be enabled using the option `UseEnv(prefix)`. If loading from file is also enabled then first the struct is loaded from a config file and thus any values found in the environment will overwrite existing values in the struct. Prefix is a string that will be prepended to the keys that are searched in the environment. Although discouraged, prefix may be left empty. Fig searches for keys in the form PREFIX_FIELD_PATH, or if prefix is left empty then FIELD_PATH. A field's path is formed by prepending its name with the names of all the surrounding structs up to the root struct, upper-cased and separated by an underscore. If a field has an alt name defined in its struct tag then that name is preferred over its struct name. With the struct above and `UseEnv("myapp")` fig would search for the following environment variables: Fields contained in struct slices whose elements already exist can also be set via the environment in the form PARENT_IDX_FIELD, where idx is the index of the field in the slice. With the config above individual servers may be configured with the following environment variable: Note: the Server slice must already have members inside it (i.e. from loading of the configuration file) for the containing fields to be altered via the environment. Fig will not instantiate and insert elements into the slice. Maps and map values cannot be populated from the environment. Change the layout fig uses to parse times using `TimeLayout()`. By default fig parses time using the RFC 3339 layout (`2006-01-02T15:04:05Z07:00`). By default fig ignores any fields in the config file that are not present in the struct. This behaviour can be changed using `UseStrict()` to achieve strict parsing. When strict parsing is enabled, extra fields in the config file will cause an error. A validate key with a required value in the field's struct tag makes fig check if the field has been set after it's been loaded. Required fields that are not set are returned as an error. Fig uses the following properties to check if a field is set: See example below to help understand: A default key in the field tag makes fig fill the field with the value specified when the field is not otherwise set. Fig attempts to parse the value based on the field's type. If parsing fails then an error is returned.
A default value can be set for the following types: Successive elements of slice defaults should be separated by a comma. The entire slice can optionally be enclosed in square brackets: Boolean values: Fig cannot distinguish between false and an unset value for boolean types. As a result, default values for booleans are not currently supported. Maps: Maps are not supported because providing a map in a string form would be complex and error-prone. Users are encouraged to use structs instead for more reliable and structured data handling. Map values: Values retrieved from a map through reflection are not addressable. Therefore, setting default values for map values is not currently supported. The required validation and the default field tags are mutually exclusive as they are contradictory. This is not allowed: A wrapped error `ErrFileNotFound` is returned when fig is not able to find a config file to load. This can be useful, for instance, to fall back to a different configuration loading mechanism.
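For illustration, a sketch of a typical fig setup, assuming the fig/validate/default struct tag keys and the Load, File, Dirs, and UseEnv options described above; the Config shape and file name are illustrative:

    package main

    import (
        "fmt"

        "github.com/kkyr/fig"
    )

    // Config mirrors config.yaml; tags supply alt names, defaults, and validation.
    type Config struct {
        App struct {
            Name string `fig:"name" validate:"required"`
        } `fig:"app"`
        Server struct {
            Host string `fig:"host" default:"0.0.0.0"`
            Port int    `fig:"port" default:"8080"`
        } `fig:"server"`
    }

    func main() {
        var cfg Config
        err := fig.Load(&cfg,
            fig.File("config.yaml"),   // file name to search for
            fig.Dirs(".", "./config"), // directories searched in order
            fig.UseEnv("myapp"),       // also read MYAPP_SERVER_PORT, etc.
        )
        if err != nil {
            panic(err)
        }
        fmt.Printf("%+v\n", cfg)
    }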
Package gomarkdoc formats documentation for one or more packages as markdown for usage outside of the main https://pkg.go.dev site. It supports custom templates for tweaking representation of documentation at fine-grained levels, exporting both exported and unexported symbols, and custom formatters for different backends. If you want to use this package as a command-line tool, you can install the command by running the following on go 1.16+: For older versions of go, you can install using the following method instead: The command line tool supports configuration for all of the features of the importable package: The gomarkdoc command processes each of the provided packages, generating documentation for the package in markdown format and writing it to the console. For example, if you have a package in your current directory and want to send it to a documentation markdown file, you might do something like this: The gomarkdoc tool supports generating documentation for both local packages and remote ones. To specify a local package, start the name of the package with a period (.) or specify an absolute path on the filesystem. All other package signifiers are assumed to be remote packages. You may specify both local and remote packages in the same command invocation as separate arguments. If you have a project with many packages but you want to skip documentation generation for some, you can use the --exclude-dirs option. This will remove any matching directories from the list of directories to process. Excluded directories are specified using the same pathing syntax as the packages to process. Multiple expressions may be comma-separated or specified by using the --exclude-dirs flag multiple times. For example, in this repository we generate documentation for the entire project while excluding our test packages by running: By default, the documentation generated by the gomarkdoc command is sent to standard output, where it can be redirected to a file. This can be useful if you want to perform additional modifications to the documentation or send it somewhere other than a file. However, keep in mind that there are some inconsistencies in how various shells/platforms handle redirected command output (for example, Powershell encodes in UTF-16, not UTF-8). As a result, the --output option described below is recommended for most use cases. If you want to redirect output for each processed package to a file, you can provide the --output/-o option, which accepts a template specifying how to generate the path of the output file. A common usage of this option is when generating README documentation for a package with subpackages (which are supported via the ... signifier as in other parts of the golang toolchain). In addition, this option provides consistent behavior across platforms and shells: You can see all of the data available to the output template in the PackageSpec struct in the github.com/princjef/gomarkdoc/cmd/gomarkdoc package. The documentation information that is output is formatted using a series of text templates for the various components of the overall documentation which get generated. Higher level templates contain lower level templates, but any template may be replaced with an override template using the --template/-t option. The full list of templates that may be overridden are: file: generates documentation for a file containing one or more packages, depending on how the tool is configured. This is the root template for documentation generation.
package: generates documentation for an entire package. type: generates documentation for a single type declaration, as well as any related functions/methods. func: generates documentation for a single function or method. It may be referenced from within a type, or directly in the package, depending on nesting. value: generates documentation for a single variable or constant declaration block within a package. index: generates an index of symbols within a package, similar to what is seen for godoc.org. The index links to types, funcs, variables, and constants generated by other templates, so it may need to be overridden as well if any of those templates are changed in a material way. example: generates documentation for a single example for a package or one of its symbols. The example is generated alongside whichever symbol it represents, based on the standard naming conventions outlined in https://blog.golang.org/examples#TOC_4. doc: generates the freeform documentation block for any of the above structures that can contain a documentation section. import: generates the import code used to pull in a package. Overriding with the --template-file option uses a key-value pair mapping a template name to the file containing the contents of the override template to use. Specified template files must exist: As with the godoc tool itself, only exported symbols will be shown in documentation. This can be expanded to include all symbols in a package by adding the --include-unexported/-u flag. If you want to blend the documentation generated by gomarkdoc with your own hand-written markdown, you can use the --embed/-e flag to change the gomarkdoc tool into an append/embed mode. When documentation is generated, gomarkdoc looks for a file in the location where the documentation is to be written and embeds the documentation if present. Otherwise, the documentation is appended to the end of the file. When running with embed mode enabled, gomarkdoc will look for either this single comment: Or the following pair of comments (in which case all content in between is replaced): If you would like to include files that are part of a build tag, you can specify build tags with the --tags flag. Tags are also supported through GOFLAGS, though command line and configuration file definitions override tags specified through GOFLAGS. You can also run gomarkdoc in a verification mode with the --check/-c flag. This is particularly useful for continuous integration when you want to make sure that a commit correctly updated the generated documentation. This flag is only supported when the --output/-o flag is specified, as the file provided there is what the tool is checking: If you're experiencing difficulty with gomarkdoc or just want to get more information about how it's executing underneath, you can add -v to show more logs. This can be chained a second time to show even more verbose logs: Some features of gomarkdoc rely on being able to detect information from the git repository containing the project. Since individual local git repositories may be configured differently from person to person, you may want to manually specify the information for the repository to remove any inconsistencies. This can be achieved with the --repository.url, --repository.default-branch and --repository.path options. 
For example, this repository would be configured with: If you want to reuse configuration options across multiple invocations, you can specify a file in the folder where you invoke gomarkdoc containing configuration information that you would otherwise provide on the command line. This file may be a JSON, TOML, YAML, HCL, env, or Java properties file, but the name is expected to start with .gomarkdoc (e.g. .gomarkdoc.yml). All configuration options are available with the camel-cased form of their long name (e.g. --include-unexported becomes includeUnexported). Template overrides are specified as a map, rather than a set of key-value pairs separated by =. Options provided on the command line override those provided in the configuration file if an option is present in both. While most users will find the command line utility sufficient for their needs, this package may also be used programmatically by installing it directly, rather than its command subpackage. The programmatic usage provides more flexibility when selecting what packages to work with and what components to generate documentation for. A common usage will look something like this: This project uses itself to generate the README files in github.com/princjef/gomarkdoc and its subdirectories. To see the commands that are run to generate documentation for this repository, take a look at the Doc() and DocVerify() functions in magefile.go and the .gomarkdoc.yml file in the root of this repository. To run these commands in your own project, simply replace `go run ./cmd/gomarkdoc` with `gomarkdoc`. Know of another project that is using gomarkdoc? Open an issue with a description of the project and link to the repository and it might be featured here!
Package badgerhold is an indexing and querying layer on top of a badger DB. The goal is to allow easy, persistent storage and retrieval of Go types. badgerDB is an embedded key-value store, and badgerhold serves a similar use case however with a higher level interface for common uses of Badger. BadgerHold deals directly with Go Types. When inserting data, you pass in your structure directly. When querying data you pass in a pointer to a slice of the type you want to return. By default Gob encoding is used. You can put multiple different types into the same DB file and they (and their indexes) will be stored separately. BadgerHold will automatically create an index for any struct field tagged with "badgerholdIndex". The first field specified in a query will be used as the index (if one exists). Queries are chained-together criteria that apply to a set of fields:
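For illustration, a small sketch of insert and query, assuming badgerhold's Open, Insert, Find, and Where helpers and the v4 module path; the Item type, index tag value, and directory are illustrative:

    package main

    import (
        "fmt"

        "github.com/timshannon/badgerhold/v4"
    )

    // Item is stored directly; Category gets a badgerhold index.
    type Item struct {
        ID       int
        Name     string
        Category string `badgerholdIndex:"Category"`
    }

    func main() {
        opts := badgerhold.DefaultOptions
        opts.Dir = "/tmp/badgerhold-example"
        opts.ValueDir = opts.Dir

        store, err := badgerhold.Open(opts)
        if err != nil {
            panic(err)
        }
        defer store.Close()

        if err := store.Insert(1, &Item{ID: 1, Name: "widget", Category: "tools"}); err != nil {
            panic(err)
        }

        // Query by chaining criteria; the first field can use the index.
        var result []Item
        if err := store.Find(&result, badgerhold.Where("Category").Eq("tools")); err != nil {
            panic(err)
        }
        fmt.Println(result)
    }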
Package template implements data-driven templates for generating textual output. To generate HTML output, see package html/template, which has the same interface as this package but automatically secures HTML output against certain attacks. Templates are executed by applying them to a data structure. Annotations in the template refer to elements of the data structure (typically a field of a struct or a key in a map) to control execution and derive values to be displayed. Execution of the template walks the structure and sets the cursor, represented by a period '.' and called "dot", to the value at the current location in the structure as execution proceeds. The input text for a template is UTF-8-encoded text in any format. "Actions"--data evaluations or control structures--are delimited by "{{" and "}}"; all text outside actions is copied to the output unchanged. Actions may not span newlines, although comments can. Once parsed, a template may be executed safely in parallel. Here is a trivial example that prints "17 items are made of wool". More intricate examples appear below. Here is the list of actions. "Arguments" and "pipelines" are evaluations of data, defined in detail below. An argument is a simple value, denoted by one of the following. Arguments may evaluate to any type; if they are pointers the implementation automatically indirects to the base type when required. If an evaluation yields a function value, such as a function-valued field of a struct, the function is not invoked automatically, but it can be used as a truth value for an if action and the like. To invoke it, use the call function, defined below. A pipeline is a possibly chained sequence of "commands". A command is a simple value (argument) or a function or method call, possibly with multiple arguments: A pipeline may be "chained" by separating a sequence of commands with pipeline characters '|'. In a chained pipeline, the result of each command is passed as the last argument of the following command. The output of the final command in the pipeline is the value of the pipeline. The output of a command will be either one value or two values, the second of which has type error. If that second value is present and evaluates to non-nil, execution terminates and the error is returned to the caller of Execute. A pipeline inside an action may initialize a variable to capture the result. The initialization has syntax where $variable is the name of the variable. An action that declares a variable produces no output. If a "range" action initializes a variable, the variable is set to the successive elements of the iteration. Also, a "range" may declare two variables, separated by a comma: in which case $index and $element are set to the successive values of the array/slice index or map key and element, respectively. Note that if there is only one variable, it is assigned the element; this is opposite to the convention in Go range clauses. A variable's scope extends to the "end" action of the control structure ("if", "with", or "range") in which it is declared, or to the end of the template if there is no such control structure. A template invocation does not inherit variables from the point of its invocation. When execution begins, $ is set to the data argument passed to Execute, that is, to the starting value of dot. Here are some example one-line templates demonstrating pipelines and variables.
All produce the quoted word "output": During execution functions are found in two function maps: first in the template, then in the global function map. By default, no functions are defined in the template but the Funcs method can be used to add them. Predefined global functions are named as follows. The boolean functions take any zero value to be false and a non-zero value to be true. There is also a set of binary comparison operators defined as functions: For simpler multi-way equality tests, eq (only) accepts two or more arguments and compares the second and subsequent to the first, returning in effect (Unlike with || in Go, however, eq is a function call and all the arguments will be evaluated.) The comparison functions work on basic types only (or named basic types, such as "type Celsius float32"). They implement the Go rules for comparison of values, except that size and exact type are ignored, so any integer value, signed or unsigned, may be compared with any other integer value. (The arithmetic value is compared, not the bit pattern, so all negative integers are less than all unsigned integers.) However, as usual, one may not compare an int with a float32 and so on. Each template is named by a string specified when it is created. Also, each template is associated with zero or more other templates that it may invoke by name; such associations are transitive and form a name space of templates. A template may use a template invocation to instantiate another associated template; see the explanation of the "template" action above. The name must be that of a template associated with the template that contains the invocation. When parsing a template, another template may be defined and associated with the template being parsed. Template definitions must appear at the top level of the template, much like global variables in a Go program. The syntax of such definitions is to surround each template declaration with a "define" and "end" action. The define action names the template being created by providing a string constant. Here is a simple example: This defines two templates, T1 and T2, and a third T3 that invokes the other two when it is executed. Finally it invokes T3. If executed this template will produce the text By construction, a template may reside in only one association. If it's necessary to have a template addressable from multiple associations, the template definition must be parsed multiple times to create distinct *Template values, or must be copied with the Clone or AddParseTree method. Parse may be called multiple times to assemble the various associated templates; see the ParseFiles and ParseGlob functions and methods for simple ways to parse related templates stored in files. A template may be executed directly or through ExecuteTemplate, which executes an associated template identified by name. To invoke our example above, we might write, or to invoke a particular template explicitly by name,
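For illustration, a minimal runnable version of the trivial example mentioned above (the one that prints "17 items are made of wool"):

    package main

    import (
        "os"
        "text/template"
    )

    type Inventory struct {
        Material string
        Count    uint
    }

    func main() {
        sweaters := Inventory{Material: "wool", Count: 17}

        // Dot is set to sweaters during execution, so {{.Count}} and
        // {{.Material}} refer to its fields.
        tmpl, err := template.New("test").Parse("{{.Count}} items are made of {{.Material}}\n")
        if err != nil {
            panic(err)
        }
        if err := tmpl.Execute(os.Stdout, sweaters); err != nil {
            panic(err)
        }
    }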
Package uplink is the main entrypoint to interacting with Storj Labs' decentralized storage network. Sign up for an account on a Satellite today! https://storj.io/ The fundamental unit of access in the Storj Labs storage network is the Access Grant. An access grant is a serialized structure that is internally comprised of an API Key, a set of encryption key information, and information about which Storj Labs or Tardigrade network Satellite is responsible for the metadata. An access grant is always associated with exactly one Project on one Satellite. If you don't already have an access grant, you will need to make an account on a Satellite, generate an API Key, and encapsulate that API Key with encryption information into an access grant. If you don't already have an account on a Satellite, first make one at https://storj.io/ and note the Satellite you choose (such as us1.storj.io, eu1.storj.io, etc). Then, make an API Key in the web interface. The first step to any project is to generate a restricted access grant with the minimal permissions that are needed. Access grants contain all encryption information and they should be restricted as much as possible. To make an access grant, you can create one using our Uplink CLI tool's 'share' subcommand (after setting up the Uplink CLI tool), or you can make one as follows: In the above example, 'serializedAccess' is a human-readable string that represents read-only access to just the "logs" bucket, and is only able to decrypt that one bucket thanks to hierarchical deterministic key derivation. Note: RequestAccessWithPassphrase is CPU-intensive, and your application's normal lifecycle should avoid it and use ParseAccess where possible instead. To revoke an access grant see the Project.RevokeAccess method. A common architecture for building applications is to have a single bucket for the entire application to store the objects of all users. In such an architecture, it is of utmost importance to guarantee that users can access only their objects but not the objects of other users. This can be achieved by implementing an app-specific authentication service that generates an access grant for each user by restricting the main access grant of the application. This user-specific access grant is restricted to access the objects only within a specific key prefix defined for the user. When initialized, the authentication server creates the main application access grant with an empty passphrase as follows. The authentication service does not hold any encryption information about users, so the passphrase used to request the main application access grant does not matter. The encryption keys related to user objects will be overridden in a next step on the client-side. It is important that once set to a specific value, this passphrase never changes in the future. Therefore, the best practice is to use an empty passphrase. Whenever a user is authenticated, the authentication service generates the user-specific access grant as follows: The userID is something that uniquely identifies the users in the application and must never change. Along with the user access grant, the authentication service should return a user-specific salt. The salt must be always the same for this user. The salt size is 16-byte or 32-byte.
Once the application receives the user-specific access grant and the user-specific salt from the authentication service, it has to override the encryption key in the access grant, so users can encrypt and decrypt their files with encryption keys derived from their passphrase. The user-specific access grant is now ready to use by the application. Once you have a valid access grant, you can open a Project with the access that access grant allows for. Projects allow you to manage buckets and objects within buckets. A bucket represents a collection of objects. You can upload, download, list, and delete objects of any size or shape. Objects within buckets are represented by keys, where keys can optionally be listed using the "/" delimiter. Note: Objects and object keys within buckets are end-to-end encrypted, but bucket names themselves are not encrypted, so the billing interface on the Satellite can show you bucket line items. Objects support a couple of kilobytes of arbitrary key/value metadata, and arbitrary-size primary data streams with the ability to read at arbitrary offsets. If you want to access only a small subrange of the data you uploaded, you can use `uplink.DownloadOptions` to specify the download range. Listing objects returns an iterator that allows you to walk through all the items:
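A condensed sketch of that flow: parse an access grant, open the project, and list objects in a bucket; the serialized grant placeholder and the "logs" bucket name are assumptions:

    package main

    import (
        "context"
        "fmt"

        "storj.io/uplink"
    )

    func main() {
        ctx := context.Background()

        // ParseAccess is cheap; prefer it over re-deriving the grant.
        access, err := uplink.ParseAccess("SERIALIZED_ACCESS_GRANT")
        if err != nil {
            panic(err)
        }

        project, err := uplink.OpenProject(ctx, access)
        if err != nil {
            panic(err)
        }
        defer project.Close()

        // Walk the object keys in the "logs" bucket.
        objects := project.ListObjects(ctx, "logs", nil)
        for objects.Next() {
            fmt.Println(objects.Item().Key)
        }
        if err := objects.Err(); err != nil {
            panic(err)
        }
    }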
Package rdap implements a client for the Registration Data Access Protocol (RDAP). RDAP is a modern replacement for the text-based WHOIS (port 43) protocol. It provides registration data for domain names/IP addresses/AS numbers, and more, in a structured format. This client executes RDAP queries and returns the responses as Go values. Quick usage: The QueryDomain(), QueryAutnum(), and QueryIP() methods all provide full contact information, and time out after 30s. Normal usage: As of June 2017, all five number registries (AFRINIC, ARIN, APNIC, LACNIC, RIPE) run RDAP servers. A small number of TLDs (top level domains) support RDAP so far, listed on https://data.iana.org/rdap/dns.json. The RDAP protocol uses HTTP, with responses in a JSON format. A bootstrapping mechanism (http://data.iana.org/rdap/) is used to determine the server to query.
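For illustration, a tiny sketch of the quick-usage path, assuming a package-level QueryDomain helper as referenced above (the package also exposes a Client type for finer control); the domain and printed fields are illustrative:

    package main

    import (
        "fmt"

        "github.com/openrdap/rdap"
    )

    func main() {
        // Quick usage: bootstraps the right RDAP server and times out after 30s.
        domain, err := rdap.QueryDomain("example.cz")
        if err != nil {
            panic(err)
        }
        fmt.Println(domain.Handle, domain.LDHName)
    }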