Package cli provides a framework to build command line applications in Go with most of the burden of arguments parsing and validation placed on the framework instead of the user. To create a new application, initialize an app with cli.App. Specify a name and a brief description for the application: To attach code to execute when the app is launched, assign a function to the Action field: To assign a version to the application, use Version method and specify the flags that will be used to invoke the version command: Finally, in the main func, call Run passing in the arguments for parsing: To add one or more command line options (also known as flags), use one of the short-form StringOpt, StringsOpt, IntOpt, IntsOpt, Float64Opt, Floats64Opt, or BoolOpt methods on App (or Cmd if adding flags to a command or a subcommand). For example, to add a boolean flag to the cp command that specifies recursive mode, use the following: or: The first version returns a new pointer to a bool value which will be populated when the app is run, whereas the second version will populate a pointer to an existing variable you specify. The option name(s) is a space separated list of names (without the dashes). The one letter names can then be called with a single dash (short option, -R), the others with two dashes (long options, --recursive). You also specify the default value for the option if it is not supplied by the user. The last parameter is the description to be shown in help messages. There is also a second set of methods on App called String, Strings, Int, Ints, and Bool, which accept a long-form struct of the type: cli.StringOpt, cli.StringsOpt, cli.IntOpt, cli.IntsOpt, cli.Float64Opt, cli.Floats64Opt, cli.BoolOpt. The struct describes the option and allows the use of additional features not available in the short-form methods described above: Or: The first version returns a new pointer to a value which will be populated when the app is run, whereas the second version will populate a pointer to an existing variable you specify. Two features, EnvVar and SetByUser, can be defined in the long-form struct method. EnvVar is a space separated list of environment variables used to initialize the option if a value is not provided by the user. When help messages are shown, the value of any environment variables will be displayed. SetByUser is a pointer to a boolean variable that is set to true if the user specified the value on the command line. This can be useful to determine if the value of the option was explicitly set by the user or set via the default value. You can only access the values stored in the pointers in the Action func, which is invoked after argument parsing has been completed. This precludes using the value of one option as the default value of another. On the command line, the following syntaxes are supported when specifying options. Boolean options: String, int and float options: Slice options (StringsOpt, IntsOpt, Floats64Opt) where option is repeated to accumulate values in a slice: To add one or more command line arguments (not prefixed by dashes), use one of the short-form StringArg, StringsArg, IntArg, IntsArg, Float64Arg, Floats64Arg, or BoolArg methods on App (or Cmd if adding arguments to a command or subcommand). For example, to add two string arguments to our cp command, use the following calls: Or: The first version returns a new pointer to a value which will be populated when the app is run, whereas the second version will populate a pointer to an existing variable you specify. 
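For illustration, here is a minimal sketch of the cp example discussed above, assuming an import path such as github.com/jawher/mow.cli; the flag and argument names, defaults, and descriptions are invented for the example:

    package main

    import (
        "fmt"
        "os"

        cli "github.com/jawher/mow.cli"
    )

    func main() {
        cp := cli.App("cp", "Copy files around")

        // Short-form method: returns a new *bool populated when the app runs.
        recursive := cp.BoolOpt("R recursive", false, "copy folders recursively")

        // Long-form struct method: also supports EnvVar and SetByUser.
        var forceSetByUser bool
        force := cp.Bool(cli.BoolOpt{
            Name:      "f force",
            Value:     false,
            Desc:      "force overwriting",
            EnvVar:    "CP_FORCE",
            SetByUser: &forceSetByUser,
        })

        // Short-form argument methods work the same way.
        src := cp.StringsArg("SRC", nil, "the source files")
        dst := cp.StringArg("DST", "", "the destination")

        cp.Action = func() {
            // The pointers may only be dereferenced here, after parsing.
            fmt.Println(*recursive, *force, forceSetByUser, *src, *dst)
        }

        cp.Run(os.Args)
    }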
You then specify the argument name as it will be displayed in help messages. Argument names must be specified as all uppercase. The next parameter is the default value for the argument if it is not supplied. And the last is the description to be shown in help messages. There is also a second set of methods on App called String, Strings, Int, Ints, Float64, Floats64 and Bool, which accept a long-form struct of the type: cli.StringArg, cli.StringsArg, cli.IntArg, cli.IntsArg, cli.Float64Arg, cli.Floats64Arg, cli.BoolArg. The struct describes the argument and allows the use of additional features not available in the short-form methods described above: Or: The first version returns a new pointer to a value which will be populated when the app is run, whereas the second version will populate a pointer to an existing variable you specify. Two features, EnvVar and SetByUser, can be defined in the long-form struct method. EnvVar is a space separated list of environment variables used to initialize the argument if a value is not provided by the user. When help messages are shown, the value of any environment variables will be displayed. SetByUser is a pointer to a boolean variable that is set to true if the user specified the value on the command line. This can be useful to determine if the value of the argument was explicitly set by the user or set via the default value. You can only access the values stored in the pointers in the Action func, which is invoked after argument parsing has been completed. This precludes using the value of one argument as the default value of another. The -- operator marks the end of command line options. Everything that follows will be treated as an argument, even if it starts with a dash. For example, the standard POSIX touch command, which takes a filename as an argument (and possibly other options that we'll ignore here), could be defined as: If we try to create a file named "-f" via our touch command: It will fail because the -f will be parsed as an option, not as an argument. The fix is to insert -- after all flags have been specified, so the remaining arguments are parsed as arguments instead of options as follows: This ensures the -f is parsed as an argument instead of a flag named f. This package supports nesting of commands and subcommands. Declare a top-level command by calling the Command func on the top-level App struct. For example, the following creates an application called docker that will have one command called run: The first argument is the name of the command the user will specify on the command line to invoke this command. The second argument is the description of the command shown in help messages. And, the last argument is a CmdInitializer, which is a function that receives a pointer to a Cmd struct representing the command. Within this function, define the options and arguments for the command by calling the same methods as you would with the top-level App struct (BoolOpt, StringArg, ...). To execute code when the command is invoked, assign a function to the Action field of the Cmd struct. Within that function, you can safely refer to the options and arguments as command line parsing will be completed at the time the function is invoked: Optionally, to provide a more extensive description of the command, assign a string to LongDesc, which is displayed when a user invokes --help. A LongDesc can be provided for Cmds as well as the top-level App: Subcommands can be added by calling Command on the Cmd struct.
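Continuing the docker illustration, here is a rough sketch of a command with a nested subcommand, assuming the same imports and enclosing main func as the earlier sketch; the names, flags, and behaviour are assumptions for the example:

    app := cli.App("docker", "A self-sufficient runtime for linux containers")

    app.Command("run", "Run a command in a new container", func(cmd *cli.Cmd) {
        detached := cmd.BoolOpt("d detach", false, "run the container in the background")
        image := cmd.StringArg("IMAGE", "", "the image to run")

        cmd.Action = func() {
            // Parsing is complete by the time this runs.
            fmt.Println("running", *image, "detached:", *detached)
        }

        // A nested subcommand, invoked as: docker run debug ...
        cmd.Command("debug", "Run with debugging enabled", func(dbg *cli.Cmd) {
            dbg.Action = func() {
                fmt.Println("running", *image, "with debugging")
            }
        })
    })

    app.Run(os.Args)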
They can be defined to any depth if needed: Command and subcommand aliases are also supported. To define one or more aliases, specify a space-separated list of strings to the first argument of Command: With the command structure defined above, users can invoke the app in a variety of ways: Commands can be hidden in the help messages. This can prove useful to deprecate a command so that it does not appear to new users in the help, but still exists so as not to break existing scripts. To hide a command, set the Hidden field to true: As a convenience, to assign an Action to a func with no arguments, use ActionCommand when defining the Command. For example, the following two statements are equivalent: Please note that options, arguments, specs, and long descriptions cannot be provided when using ActionCommand. This is intended for very simple command invocations that take no arguments. Finally, as a side-note, it may seem a bit weird that this package uses a function to initialize a command instead of simply returning a command struct. The motivation behind this API decision is scoping: as with the standard flag package, adding an option or an argument returns a pointer to a value which will be populated when the app is run. Since you'll want to store these pointers in variables, and to avoid having dozens of them in the same scope (the main func for example or as global variables), this API was specifically tailored to take a func parameter (called CmdInitializer), which accepts the command struct. With this design, the command's specific variables are limited in scope to this function. Interceptors, or hooks, can be defined to be executed before and after a command, or when any of its subcommands is executed. For example, the following app defines multiple commands as well as a global flag which toggles verbosity: Instead of duplicating the check for the verbose flag and setting the debug level in every command (and its sub-commands), a Before interceptor can be set on the top-level App instead: Whenever a valid command is called by the user, all the Before interceptors defined on the app and the intermediate commands will be called, in order from the root to the leaf. Similarly, to execute a hook after a command has been called, e.g. to clean up resources allocated in Before interceptors, simply set the After field of the App struct or any other Command. After interceptors will be called, in order, from the leaf up to the root (the opposite order of the Before interceptors). The following diagram shows when and in which order multiple Before and After interceptors are executed: To exit the application, use the cli.Exit function, which accepts an exit code and exits the app with the provided code. It is important to use cli.Exit instead of os.Exit as the former ensures that all of the After interceptors are executed before exiting. An App or Command's invocation syntax can be customized using spec strings. This can be useful to indicate that an argument is optional or that two options are mutually exclusive. The spec string is one of the key differentiators between this package and other CLI packages as it allows the developer to express usage in a simple, familiar, yet concise grammar.
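As a small taste of the grammar detailed below, here is a sketch of a spec string for the cp example, assuming the same imports as the earlier sketches; the particular constraint is an assumption chosen for illustration:

    cp := cli.App("cp", "Copy files around")

    // -R and -f are optional and may appear in any order, but must come
    // before the arguments; SRC is repeatable and must precede DST.
    cp.Spec = "[-R] [-f] SRC... DST"

    recursive := cp.BoolOpt("R recursive", false, "copy folders recursively")
    force := cp.BoolOpt("f force", false, "force overwriting")
    src := cp.StringsArg("SRC", nil, "the source files")
    dst := cp.StringArg("DST", "", "the destination")

    cp.Action = func() {
        fmt.Println(*recursive, *force, *src, *dst)
    }

    cp.Run(os.Args)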
To define option and argument usage for the top-level App, assign a spec string to the App's Spec field: Likewise, to define option and argument usage for a command or subcommand, assign a spec string to the Command's Spec field: The spec syntax is mostly based on the conventions used in POSIX command line applications (help messages and man pages). This syntax is described in full below. If a user invokes the app or command with the incorrect syntax, the app terminates with a help message showing the proper invocation. The remainder of this section describes the many features and capabilities of the spec string grammar. Options can use both short and long option names in spec strings. In the example below, the option is mandatory and must be provided. Any options referenced in a spec string MUST be explicitly declared, otherwise this package will panic. I.e. for each item in the spec string, a corresponding *Opt or *Arg is required: Arguments are specified with all-uppercased words. In the example below, both SRC and DST must be provided by the user (two arguments). Like options, any argument referenced in a spec string MUST be explicitly declared, otherwise this package will panic: With the exception of options, the order of the elements in a spec string is respected and enforced when command line arguments are parsed. In the example below, consecutive options (-f and -g) are parsed regardless of the order they are specified (both "-f=5 -g=6" and "-g=6 -f=5" are valid). Order between options and arguments is significant (-f and -g must appear before the SRC argument). The same holds true for arguments, where SRC must appear before DST: Optionality of options and arguments is specified in a spec string by enclosing the item in square brackets []. If the user does not provide an optional value, the app will use the default value specified when the argument was defined. In the example below, if -x is not provided, heapSize will default to 1024: Choice between two or more items is specified in a spec string by separating each choice with the | operator. Choices are mutually exclusive. In the examples below, only a single choice can be provided by the user otherwise the app will terminate displaying a help message on proper usage: Repetition of options and arguments is specified in a spec string with the ... postfix operator to mark an item as repeatable. Both options and arguments support repetition. In the example below, users may invoke the command with multiple -e options and multiple SRC arguments: Grouping of options and arguments is specified in a spec string with parentheses. When combined with the choice | and repetition ... operators, complex syntaxes can be created. The parentheses in the example below indicate a repeatable sequence of an -e option followed by an argument, which is mutually exclusive with a choice between the -x and -y options. Option groups, or option folding, are a shorthand way of declaring a choice between multiple options. I.e. any combination of the listed options in any order with at least one option selected. The following two statements are equivalent: Option groups are typically used in conjunction with optionality [] operators. I.e. any combination of the listed options in any order or none at all. The following two statements are equivalent: All of the options can be specified using a special syntax: [OPTIONS]. This is a special token in the spec string (not optionality and not an argument called OPTIONS).
It is equivalent to an optional repeatable choice between all the available options. For example, if an app or a command declares 4 options a, b, c and d, then the following two statements are equivalent: Inline option values are specified in the spec string with the =<some-text> notation immediately following an option (long or short form) to provide users with an inline description or value. The actual inline values are ignored by the spec parser as they exist only to provide a contextual hint to the user. In the example below, "absolute-path" and "in seconds" are ignored by the parser: The -- operator can be used to automatically treat everything following it as arguments. In other words, placing a -- in the spec string automatically inserts a -- in the same position in the program call arguments. This lets you write programs such as the POSIX time utility for example: Below is the full EBNF grammar for the Specs language: By combining a few of these building blocks together (while respecting the grammar above), powerful and sophisticated validation constraints can be created in a simple and concise manner without having to write any validation code. This is one of the key differentiators between this package and other CLI packages. Validation of usage is handled entirely by the package through the spec string. Behind the scenes, this package parses the spec string and constructs a finite state machine used to parse the command line arguments. It also handles backtracking, which allows it to handle tricky cases, or what I like to call "the cp test": Without backtracking, this deceptively simple spec string cannot be parsed correctly. For instance, docopt can't handle this case, whereas this package does. By default, an auto-generated spec string is created for the app and every command unless a spec string has been set by the user. This can simplify use of the package even further for simple syntaxes. The following logic is used to create an auto-generated spec string: 1) start with an empty spec string, 2) if at least one option was declared, append "[OPTIONS]" to the spec string, and 3) for each declared argument, append it, in the order of declaration, to the spec string. For example, given this command declaration: The auto-generated spec string, which should suffice for simple cases, would be: If additional constraints are required, the spec string must be set explicitly using the grammar documented above. By default, the following types are supported for options and arguments: bool, string, int, float64, strings (slice of strings), ints (slice of ints) and floats64 (slice of float64). You can, however, extend this package to handle other types, e.g. time.Duration or even your own struct types. To define your own custom type, you must implement the flag.Value interface for your custom type, and then declare the option or argument using VarOpt or VarArg respectively if using the short-form methods. If using the long-form struct, then use Var instead. The following example defines a custom type for a duration. It defines a duration argument that users will be able to invoke with strings in the form of "1h31m42s": To make a custom type behave as a boolean option, i.e. one that doesn't take a value, it must implement the IsBoolFlag method that returns true: To make a custom type behave as a multi-valued option or argument, i.e. one that takes multiple values, it must implement the Clear method, which is called whenever the values list needs to be cleared, e.g.
when the value was initially populated from an environment variable, and then explicitly set from the CLI: To hide the default value of a custom type, it must implement the IsDefault method that returns a boolean. The help message generator will use the return value to decide whether or not to display the default value to users:
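A rough sketch of such a custom type follows, combining the flag.Value methods with the optional IsDefault method described above; the Duration type and the sleep app are invented for the example, and the imports (os, time, and the cli package path used earlier) are assumptions:

    // Duration satisfies flag.Value so it can be used with VarOpt/VarArg/Var.
    type Duration time.Duration

    func (d *Duration) Set(v string) error {
        parsed, err := time.ParseDuration(v)
        if err != nil {
            return err
        }
        *d = Duration(parsed)
        return nil
    }

    func (d *Duration) String() string {
        return time.Duration(*d).String()
    }

    // IsDefault hides the (zero) default value in help messages.
    func (d *Duration) IsDefault() bool {
        return time.Duration(*d) == 0
    }

    func main() {
        app := cli.App("sleep", "Sleep for a duration such as 1h31m42s")
        d := new(Duration)
        app.VarArg("DURATION", d, "how long to sleep")
        app.Action = func() {
            time.Sleep(time.Duration(*d))
        }
        app.Run(os.Args)
    }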
Package cbor is a modern CBOR codec (RFC 8949 & RFC 7049) with CBOR tags, Go struct tags (toarray/keyasint/omitempty), Core Deterministic Encoding, CTAP2, Canonical CBOR, float64->32->16, and duplicate map key detection. Encoding options allow "preferred serialization" by encoding integers and floats to their smallest forms (e.g. float16) when values fit. Struct tags like "keyasint", "toarray" and "omitempty" make CBOR data smaller and easier to use with structs. For example, the "toarray" tag makes struct fields encode to CBOR array elements. And "keyasint" makes a field encode to an element of a CBOR map with the specified int key. Latest docs can be viewed at https://github.com/fxamacker/cbor#cbor-library-in-go The Quick Start guide is at https://github.com/fxamacker/cbor#quick-start Function signatures identical to encoding/json include: Standard interfaces include: Custom encoding and decoding is possible by implementing standard interfaces for user-defined Go types. Codec functions are available at package-level (using default options) or by creating modes from options at runtime. "Mode" in this API means a definite way of encoding (EncMode) or decoding (DecMode). EncMode and DecMode interfaces are created from EncOptions or DecOptions structs. Modes use immutable options to avoid side-effects and simplify concurrency. Behavior of modes won't accidentally change at runtime after they're created. Modes are intended to be reused and are safe for concurrent use. Topics covered include the EncMode and DecMode interfaces, using the default encoding and decoding modes, and creating and using encoding modes. Predefined Encoding Options: https://github.com/fxamacker/cbor#predefined-encoding-options Encoding Options: https://github.com/fxamacker/cbor#encoding-options Decoding Options: https://github.com/fxamacker/cbor#decoding-options Struct tags like `cbor:"name,omitempty"` and `json:"name,omitempty"` work as expected. If both struct tags are specified then `cbor` is used. Struct tags like "keyasint", "toarray", and "omitempty" make it easy to use very compact formats like COSE and CWT (CBOR Web Tokens) with structs. For example, "toarray" makes struct fields encode to array elements. And "keyasint" makes struct fields encode to elements of a CBOR map with int keys. https://raw.githubusercontent.com/fxamacker/images/master/cbor/v2.0.0/cbor_easy_api.png Struct tags are listed at https://github.com/fxamacker/cbor#struct-tags-1 Over 375 tests are included in this package. Cover-guided fuzzing is handled by a private fuzzer that replaced fxamacker/cbor-fuzz years ago.
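A brief sketch of the struct tags and mode creation described above, using the import path from the linked repository; the field choices and claim values are invented for the example:

    package main

    import (
        "fmt"

        "github.com/fxamacker/cbor/v2"
    )

    // Claims uses keyasint so fields encode to a CBOR map with int keys,
    // which keeps CWT-style tokens small.
    type Claims struct {
        Iss string `cbor:"1,keyasint,omitempty"`
        Exp int64  `cbor:"4,keyasint,omitempty"`
    }

    // Point uses toarray so its fields encode to CBOR array elements.
    type Point struct {
        _ struct{} `cbor:",toarray"`
        X int
        Y int
    }

    func main() {
        // Package-level functions use the default options.
        data, err := cbor.Marshal(Claims{Iss: "example", Exp: 1234567890})
        if err != nil {
            panic(err)
        }

        var c Claims
        if err := cbor.Unmarshal(data, &c); err != nil {
            panic(err)
        }

        // Modes are built from immutable options and are safe to reuse
        // concurrently.
        em, err := cbor.CanonicalEncOptions().EncMode()
        if err != nil {
            panic(err)
        }
        data, err = em.Marshal(Point{X: 1, Y: 2})
        fmt.Printf("%x %v\n", data, err)
    }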
Package hdkeychain provides an API for Decred hierarchical deterministic extended keys (based on BIP0032). The ability to implement hierarchical deterministic wallets depends on the ability to create and derive hierarchical deterministic extended keys. At a high level, this package provides support for those hierarchical deterministic extended keys by providing an ExtendedKey type and supporting functions. Each extended key can either be a private or public extended key which itself is capable of deriving a child extended key. Whether an extended key is a private or public extended key can be determined with the IsPrivate function. In order to create and sign transactions, or provide others with addresses to send funds to, the underlying key and address material must be accessible. This package provides the ECPubKey, ECPrivKey, and Address functions for this purpose. As previously mentioned, the extended keys are hierarchical, meaning they are used to form a tree. The root of that tree is called the master node and this package provides the NewMaster function to create it from a cryptographically random seed. The GenerateSeed function is provided as a convenient way to create a random seed for use with the NewMaster function. Once you have created a tree root (or have deserialized an extended key as discussed later), the child extended keys can be derived by using the Child function. The Child function supports deriving both normal (non-hardened) and hardened child extended keys. In order to derive a hardened extended key, use the HardenedKeyStart constant + the hardened key number as the index to the Child function. This provides the ability to cascade the keys into a tree and hence generate the hierarchical deterministic key chains. A private extended key can be used to derive both hardened and non-hardened (normal) child private and public extended keys. A public extended key can only be used to derive non-hardened child public extended keys. As enumerated in BIP0032 "knowledge of the extended public key plus any non-hardened private key descending from it is equivalent to knowing the extended private key (and thus every private and public key descending from it). This means that extended public keys must be treated more carefully than regular public keys. It is also the reason for the existence of hardened keys, and why they are used for the account level in the tree. This way, a leak of an account-specific (or below) private key never risks compromising the master or other accounts." A private extended key can be converted to a new instance of the corresponding public extended key with the Neuter function. The original extended key is not modified. A public extended key is still capable of deriving non-hardened child public extended keys. Extended keys are serialized and deserialized with the String and NewKeyFromString functions. The serialized key is a Base58-encoded string which looks like the following: Extended keys are much like normal Decred addresses in that they have version bytes which tie them to a specific network. The SetNet and IsForNet functions are provided to set and determine which network an extended key is associated with. This example demonstrates the audits use case in BIP0032. This example demonstrates the default hierarchical deterministic wallet layout as described in BIP0032. This example demonstrates how to generate a cryptographically random seed then use it to create a new master node (extended key).
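A rough sketch of the seed/master/child flow described above, assuming imports of this package and a chaincfg package providing network parameters; the network parameter is an assumption and exact constructor arguments may differ between versions:

    func deriveAccount0Pub() (string, error) {
        // Create a cryptographically random seed of the recommended length.
        seed, err := hdkeychain.GenerateSeed(hdkeychain.RecommendedSeedLen)
        if err != nil {
            return "", err
        }

        // Create the master node (the root of the tree) for a network.
        master, err := hdkeychain.NewMaster(seed, &chaincfg.MainNetParams)
        if err != nil {
            return "", err
        }

        // Derive a hardened child (account 0)...
        acct0, err := master.Child(hdkeychain.HardenedKeyStart + 0)
        if err != nil {
            return "", err
        }

        // ...then neuter it so the result can only derive non-hardened
        // public children.
        acct0Pub, err := acct0.Neuter()
        if err != nil {
            return "", err
        }

        // The Base58-encoded serialization mentioned above.
        return acct0Pub.String(), nil
    }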
Package dcrjson provides primitives for working with the Decred JSON-RPC API. When communicating via the JSON-RPC protocol, all of the commands need to be marshalled to and from the wire in the appropriate format. This package provides data structures and primitives to ease this process. In addition, it provides features such as custom command registration, command categorization, and reflection-based help generation. This information is not necessary in order to use this package, but it does provide some intuition into what the marshalling and unmarshalling that is discussed below is doing under the hood. As defined by the JSON-RPC spec, there are effectively two forms of messages on the wire: Request Objects {"jsonrpc":"1.0","id":"SOMEID","method":"SOMEMETHOD","params":[SOMEPARAMS]} NOTE: Notifications are the same format except the id field is null. Response Objects {"result":SOMETHING,"error":null,"id":"SOMEID"} {"result":null,"error":{"code":SOMEINT,"message":SOMESTRING},"id":"SOMEID"} For requests, the params field can vary in what it contains depending on the method (a.k.a. command) being sent. Each parameter can be as simple as an int or a complex structure containing many nested fields. The id field is used to identify a request and will be included in the associated response. When working with asynchronous transports, such as websockets, spontaneous notifications are also possible. As indicated, they are the same as a request object, except they have the id field set to null. Therefore, servers will ignore requests with the id field set to null, while clients can choose to consume or ignore them. Unfortunately, the original Bitcoin JSON-RPC API (and hence anything compatible with it) doesn't always follow the spec and will sometimes return an error string in the result field with a null error for certain commands. However, for the most part, the error field will be set as described on failure. Based upon the discussion above, it should be easy to see how the types of this package map into the required parts of the protocol. To simplify the marshalling of the requests and responses, the MarshalCmd and MarshalResponse functions are provided. They return the raw bytes ready to be sent across the wire. Unmarshalling a received Request object is a two step process: This approach is used since it provides the caller with access to the additional fields in the request that are not part of the command such as the ID. Unmarshalling a received Response object is also a two step process: As above, this approach is used since it provides the caller with access to the fields in the response such as the ID and Error. This package provides two approaches for creating a new command. The first, and preferred, method is to use one of the New<Foo>Cmd functions. This allows static compile-time checking to help ensure the parameters stay in sync with the struct definitions. The second approach is the NewCmd function which takes a method (command) name and variable arguments. The function includes full checking to ensure the parameters are accurate according to the provided method, however these checks are, obviously, run-time which means any mistakes won't be found until the code is actually executed. However, it is quite useful for user-supplied commands that are intentionally dynamic. The command handling of this package is built around the concept of registered commands.
This is true for the wide variety of commands already provided by the package, but it also means a caller can easily provide custom commands with all of the same functionality as the built-in commands. Use the RegisterCmd function for this purpose. A list of all registered methods can be obtained with the RegisteredCmdMethods function. All registered commands are registered with flags that identify information such as whether the command applies to a chain server, wallet server, or is a notification along with the method name to use. These flags can be obtained with the MethodUsageFlags function, and the method can be obtained with the CmdMethod function. To facilitate providing consistent help to users of the RPC server, this package exposes the GenerateHelp function, which uses reflection on registered commands or notifications, as well as the provided expected result types, to generate the final help text. In addition, the MethodUsageText function is provided to generate consistent one-line usage for registered commands and notifications using reflection. There are two distinct types of errors supported by this package: The first category of errors (type Error) typically indicates a programmer error and can be avoided by properly using the API. Errors of this type will be returned from the various functions available in this package. They identify issues such as unsupported field types, attempts to register malformed commands, and attempting to create a new command with an improper number of parameters. The specific reason for the error can be detected by type asserting it to a *dcrjson.Error and accessing the ErrorCode field. The second category of errors (type RPCError), on the other hand, is useful for returning errors to RPC clients. Consequently, they are used in the previously described Response type. This example demonstrates how to unmarshal a JSON-RPC response and then unmarshal the result field in the response to a concrete type.
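A rough sketch of the two-step response unmarshalling described above, using only encoding/json (and fmt) with a local struct that mirrors the documented wire format; the real package supplies its own response type and helpers, so this is purely illustrative:

    // response mirrors {"result":...,"error":...,"id":...} from the spec.
    type response struct {
        Result json.RawMessage `json:"result"`
        Error  *struct {
            Code    int    `json:"code"`
            Message string `json:"message"`
        } `json:"error"`
        ID interface{} `json:"id"`
    }

    func blockCount(raw []byte) (int64, error) {
        // Step 1: unmarshal the envelope to reach the ID and Error fields.
        var resp response
        if err := json.Unmarshal(raw, &resp); err != nil {
            return 0, err
        }
        if resp.Error != nil {
            return 0, fmt.Errorf("rpc error %d: %s", resp.Error.Code, resp.Error.Message)
        }

        // Step 2: unmarshal the result into the concrete type expected for
        // the method that was issued (here, an integer block count).
        var count int64
        if err := json.Unmarshal(resp.Result, &count); err != nil {
            return 0, err
        }
        return count, nil
    }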
Package flag implements command-line flag parsing. Usage: Define flags using flag.String(), Bool(), Int(), etc. This declares an integer flag, -flagname, stored in the pointer ip, with type *int. If you like, you can bind the flag to a variable using the Var() functions. Or you can create custom flags that satisfy the Value interface (with pointer receivers) and couple them to flag parsing by calling flag.Var. For such flags, the default value is just the initial value of the variable. After all flags are defined, call flag.Parse() to parse the command line into the defined flags. Flags may then be used directly. If you're using the flags themselves, they are all pointers; if you bind to variables, they're values. After parsing, the arguments following the flags are available as the slice flag.Args() or individually as flag.Arg(i). The arguments are indexed from 0 through flag.NArg()-1. Command line flag syntax: -flag, -flag=x, and -flag x. One or two minus signs may be used; they are equivalent. The last form is not permitted for boolean flags because the meaning of the command will change if there is a file called 0, false, etc. You must use the -flag=false form to turn off a boolean flag. Flag parsing stops just before the first non-flag argument ("-" is a non-flag argument) or after the terminator "--". Integer flags accept 1234, 0664, 0x1234 and may be negative. Boolean flags may be 1, 0, t, f, true, false, TRUE, FALSE, True, False. Duration flags accept any input valid for time.ParseDuration. The default set of command-line flags is controlled by top-level functions. The FlagSet type allows one to define independent sets of flags, such as to implement subcommands in a command-line interface. The methods of FlagSet are analogous to the top-level functions for the command-line flag set.
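A short, self-contained example of the usage described above:

    package main

    import (
        "flag"
        "fmt"
    )

    // This declares an integer flag, -flagname, stored in the pointer ip.
    var ip = flag.Int("flagname", 1234, "help message for flagname")

    func main() {
        // Binding a flag to an existing variable with a Var function.
        var name string
        flag.StringVar(&name, "name", "gopher", "help message for name")

        // Parse the command line into the defined flags.
        flag.Parse()

        // Flags declared with flag.Int etc. are pointers; flags bound with
        // the *Var functions are plain values.
        fmt.Println(*ip, name)

        // Remaining non-flag arguments.
        fmt.Println(flag.Args())
    }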
A data pipeline processing engine. See the README for more complete examples and guides. Code Organization: The pipeline package provides an API for how nodes can be connected to form a pipeline. The individual implementations of each node exist in this kapacitor package. The reason for the separation is to keep the exported API from the pipeline package clean as it is consumed via the TICKscripts (a DSL for Kapacitor). Other Concepts: Stream vs Batch -- Use of the word 'stream' indicates data arrives a single data point at a time. Use of the word 'batch' indicates data arrives in sets, or batches, of data points. Task -- A task represents a concrete workload to perform. It consists of a pipeline and an identifying name. Basic CRUD operations can be performed on tasks. Task Master -- Responsible for executing a task in a specific environment. Replay -- Replays static datasets against tasks.
Package mailgun provides methods for interacting with the Mailgun API. It automates the HTTP request/response cycle, encodings, and other details needed by the API. This SDK lets you do everything the API lets you do, in a more Go-friendly way. For further information please see the Mailgun documentation at http://documentation.mailgun.com/ This document includes a number of examples which illustrate some aspects of the API which might be misleading or confusing. All examples included are derived from an acceptance test. Note that every SDK function has a corresponding acceptance test, so if you don't find an example for a function you'd like to know more about, please check the acceptance sub-package for a corresponding test. Of course, contributions to the documentation are always welcome as well. Feel free to submit a pull request or open a Github issue if you cannot find an example to suit your needs. Many SDK functions consume a pair of parameters called limit and skip. These help control how much data Mailgun sends over the wire. Limit, as you'd expect, gives a count of the number of records you want to receive. Note that, at present, Mailgun imposes its own cap of 100, for all API endpoints. Skip indicates where in the data set you want to start receiving from. Mailgun defaults to the very beginning of the dataset if not specified explicitly. If you don't particularly care how much data you receive, you may specify DefaultLimit. If you similarly don't care about where the data starts, you may specify DefaultSkip. Functions which accept a limit and skip setting, in general, will also return a total count of the items returned. Note that this total count is not the total in the bundle returned by the call. You can determine that easily enough with Go's len() function. The total that you receive actually refers to the complete set of data on the server. This total may well exceed the size returned from the API. If this happens, you may find yourself needing to iterate over the dataset of interest. For example: Copyright (c) 2013-2014, Michael Banzon. All rights reserved. Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met: * Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer. * Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution. * Neither the names of Mailgun, Michael Banzon, nor the names of their contributors may be used to endorse or promote products derived from this software without specific prior written permission. THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
Package dcrjson provides infrastructure for working with Decred JSON-RPC APIs. When communicating via the JSON-RPC protocol, all requests and responses must be marshalled to and from the wire in the appropriate format. This package provides infrastructure and primitives to ease this process. This information is not necessary in order to use this package, but it does provide some intuition into what the marshalling and unmarshalling that is discussed below is doing under the hood. As defined by the JSON-RPC spec, there are effectively two forms of messages on the wire: Request Objects {"jsonrpc":"1.0","id":"SOMEID","method":"SOMEMETHOD","params":[SOMEPARAMS]} NOTE: Notifications are the same format except the id field is null. Response Objects {"result":SOMETHING,"error":null,"id":"SOMEID"} {"result":null,"error":{"code":SOMEINT,"message":SOMESTRING},"id":"SOMEID"} For requests, the params field can vary in what it contains depending on the method (a.k.a. command) being sent. Each parameter can be as simple as an int or a complex structure containing many nested fields. The id field is used to identify a request and will be included in the associated response. When working with streamed RPC transports, such as websockets, spontaneous notifications are also possible. As indicated, they are the same as a request object, except they have the id field set to null. Therefore, servers will ignore requests with the id field set to null, while clients can choose to consume or ignore them. Unfortunately, the original Bitcoin JSON-RPC API (and hence anything compatible with it) doesn't always follow the spec and will sometimes return an error string in the result field with a null error for certain commands. However, for the most part, the error field will be set as described on failure. To simplify the marshalling of the requests and responses, the MarshalCmd and MarshalResponse functions are provided. They return the raw bytes ready to be sent across the wire. Unmarshalling a received Request object is a two step process: This approach is used since it provides the caller with access to the additional fields in the request that are not part of the command such as the ID. Unmarshalling a received Response object is also a two step process: As above, this approach is used since it provides the caller with access to the fields in the response such as the ID and Error. This package provides the NewCmd function which takes a method (command) name and variable arguments. The function includes full checking to ensure the parameters are accurate according to provided method, however these checks are, obviously, run-time which means any mistakes won't be found until the code is actually executed. However, it is quite useful for user-supplied commands that are intentionally dynamic. External packages can and should implement types implementing Command for use with MarshalCmd/ParseParams. The command handling of this package is built around the concept of registered commands. This is true for the wide variety of commands already provided by the package, but it also means caller can easily provide custom commands with all of the same functionality as the built-in commands. Use the RegisterCmd function for this purpose. A list of all registered methods can be obtained with the RegisteredCmdMethods function. All registered commands are registered with flags that identify information such as whether the command applies to a chain server, wallet server, or is a notification along with the method name to use. 
These flags can be obtained with the MethodUsageFlags function, and the method can be obtained with the CmdMethod function. To facilitate providing consistent help to users of the RPC server, this package exposes the GenerateHelp function, which uses reflection on registered commands or notifications to generate the final help text. In addition, the MethodUsageText function is provided to generate consistent one-line usage for registered commands and notifications using reflection. There are two distinct types of errors supported by this package: The first category of errors (type Error) typically indicates a programmer error and can be avoided by properly using the API. Errors of this type will be returned from the various functions available in this package. They identify issues such as unsupported field types, attempts to register malformed commands, and attempting to create a new command with an improper number of parameters. The specific reason for the error can be detected by type asserting it to a *dcrjson.Error and accessing the ErrorCode field. The second category of errors (type RPCError), on the other hand, is useful for returning errors to RPC clients. Consequently, they are used in the previously described Response type. This example demonstrates how to unmarshal a JSON-RPC response and then unmarshal the result field in the response to a concrete type.
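A sketch of telling the two error categories described above apart by type assertion; err stands for the result of any call into the package, the field names are those mentioned in this documentation, and the pointer forms are assumptions:

    func describeError(err error) {
        switch e := err.(type) {
        case *dcrjson.Error:
            // Programmer/usage error, e.g. a malformed command registration
            // or the wrong number of parameters.
            fmt.Println("dcrjson usage error, code:", e.ErrorCode)
        case *dcrjson.RPCError:
            // An error suitable for returning to an RPC client in a Response.
            fmt.Println("rpc error:", e.Code, e.Message)
        default:
            fmt.Println("unexpected error:", err)
        }
    }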
Package txscript implements the Decred transaction script language. This package provides data structures and functions to parse and execute decred transaction scripts. Decred transaction scripts are written in a stack-based, FORTH-like language. The Decred script language consists of a number of opcodes which fall into several categories such as pushing and popping data to and from the stack, performing basic and bitwise arithmetic, conditional branching, comparing hashes, and checking cryptographic signatures. Scripts are processed from left to right and intentionally do not provide loops. The vast majority of Decred scripts at the time of this writing are of several standard forms which consist of a spender providing a public key and a signature which proves the spender owns the associated private key. This information is used to prove the spender is authorized to perform the transaction. One benefit of using a scripting language is added flexibility in specifying what conditions must be met in order to spend decreds. Errors returned by this package are of type txscript.Error. This allows the caller to programmatically determine the specific error by examining the ErrorCode field of the type-asserted txscript.Error while still providing rich error messages with contextual information. A convenience function named IsErrorCode is also provided to allow callers to easily check for a specific error code. See ErrorCode in the package documentation for a full list.
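A small sketch of the convenience check mentioned above; err is assumed to have come from executing a script engine, and ErrEvalFalse is used purely as an example error code:

    // isEvalFalse reports whether a script execution error indicates the
    // script completed but left a false value on the stack.
    func isEvalFalse(err error) bool {
        return txscript.IsErrorCode(err, txscript.ErrEvalFalse)
    }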
Package XGB provides the X Go Binding, which is a low-level API to communicate with the core X protocol and many of the X extensions. It is *very* closely modeled on XCB, so that experience with XCB (or xpyb) is easily translatable to XGB. That is, it uses the same cookie/reply model and is thread safe. There are otherwise no major differences (in the API). Most uses of XGB typically fall under the realm of window manager and GUI kit development, but other applications (like pagers, panels, tilers, etc.) may also require XGB. Moreover, it is a near certainty that if you need to work with X, xgbutil will be of great use to you as well: https://github.com/BurntSushi/xgbutil This is an extremely terse example that demonstrates how to connect to X, create a window, listen to StructureNotify events and Key{Press,Release} events, map the window, and print out all events received. An example with accompanying documentation can be found in examples/create-window. This is another small example that shows how to query Xinerama for geometry information of each active head. Accompanying documentation for this example can be found in examples/xinerama. XGB can benefit greatly from parallelism due to its concurrent design. For evidence of this claim, please see the benchmarks in xproto/xproto_test.go. xproto/xproto_test.go contains a number of contrived tests that stress particular corners of XGB that I presume could be problem areas. Namely: requests with no replies, requests with replies, checked errors, unchecked errors, sequence number wrapping, cookie buffer flushing (i.e., forcing a round trip every N requests made that don't have a reply), getting/setting properties and creating a window and listening to StructureNotify events. Both XCB and xpyb use the same Python module (xcbgen) for a code generator. XGB (before this fork) used the same code generator as well, but in my attempt to add support for more extensions, I found the code generator extremely difficult to work with. Therefore, I re-wrote the code generator in Go. It can be found in its own sub-package, xgbgen, of xgb. My design of xgbgen includes a rough consideration that it could be used for other languages. I am reasonably confident that the core X protocol is in full working form. I've also tested the Xinerama and RandR extensions sparingly. Many of the other existing extensions have Go source generated (and are compilable) and are included in this package, but I am currently unsure of their status. They *should* work. XKB is the only extension that intentionally does not work, although I suspect that GLX also does not work (however, there is Go source code for GLX that compiles, unlike XKB). I don't currently have any intention of getting XKB working, due to its complexity and my current mental incapacity to test it.
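For reference, a minimal connection sketch in the spirit of the terse examples mentioned above, assuming the BurntSushi import paths:

    package main

    import (
        "fmt"
        "log"

        "github.com/BurntSushi/xgb"
        "github.com/BurntSushi/xgb/xproto"
    )

    func main() {
        // Connect to the X server named by $DISPLAY.
        X, err := xgb.NewConn()
        if err != nil {
            log.Fatal(err)
        }
        defer X.Close()

        // Look up the root window of the default screen via the core protocol.
        screen := xproto.Setup(X).DefaultScreen(X)
        fmt.Printf("root window %d (%dx%d)\n",
            screen.Root, screen.WidthInPixels, screen.HeightInPixels)
    }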
Package mempool provides a policy-enforced pool of unmined Decred transactions. A key responsibility of the Decred network is mining transactions – regular transactions and stake transactions – into blocks. In order to facilitate this, the mining process relies on having a readily-available source of transactions to include in a block that is being solved. At a high level, this package satisfies that requirement by providing an in-memory pool of fully validated transactions that can also optionally be further filtered based upon a configurable policy. The Policy configuration has options that control whether or not "standard" transactions and old votes are accepted into the mempool. In essence, a "standard" transaction is one that satisfies a fairly strict set of requirements that are largely intended to help provide fair use of the system to all users. It is important to note that what is considered to be a "standard" transaction changes over time as policy and consensus rules evolve. For some insight, at the time of this writing, an example of _some_ of the criteria that are required for a transaction to be considered standard is that it is of the most-recently supported version, finalized, does not exceed a specific size, and only consists of specific script forms. Since this package does not deal with other Decred specifics such as network communication and transaction relay, it returns a list of transactions that were accepted which gives the caller a high level of flexibility in how they want to proceed. Typically, this will involve things such as relaying the transactions to other peers on the network and notifying the mining process that new transactions are available. This package has intentionally been designed so it can be used as a standalone package for any projects needing the ability to create an in-memory pool of Decred transactions that are not only valid by consensus rules, but also adhere to a configurable policy. The following is a quick overview of the major features; it is not intended to be an exhaustive list: maintaining a pool of fully validated transactions; stake transaction support (ticket purchases, votes, and revocations); orphan transaction support (transactions that spend from unknown outputs); a configurable transaction acceptance policy; additional metadata tracking for each transaction; and manual control of transaction removal. Errors returned by this package are either the raw errors provided by underlying calls or of type mempool.RuleError. Since there are two classes of rules (mempool acceptance rules and blockchain (consensus) acceptance rules), the mempool.RuleError type contains a single Err field which will, in turn, either be a mempool.TxRuleError or a blockchain.RuleError. The first indicates a violation of mempool acceptance rules while the latter indicates a violation of consensus acceptance rules. This allows the caller to easily differentiate between unexpected errors, such as database errors, and errors due to rule violations through type assertions. In addition, callers can programmatically determine the specific rule violation by type asserting the Err field to one of the aforementioned types and examining their underlying ErrorCode field.
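A sketch of the error classification described above, with assumed import paths for the mempool and blockchain packages; err stands for whatever a transaction-acceptance call returned:

    func classifyAcceptError(err error) {
        rerr, ok := err.(mempool.RuleError)
        if !ok {
            // Not a rule violation, e.g. an unexpected database error.
            fmt.Println("unexpected error:", err)
            return
        }
        switch rerr.Err.(type) {
        case mempool.TxRuleError:
            fmt.Println("mempool policy violation:", rerr.Err)
        case blockchain.RuleError:
            fmt.Println("consensus rule violation:", rerr.Err)
        default:
            fmt.Println("rule violation:", rerr.Err)
        }
    }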
Package txscript implements the Decred transaction script language. This package provides data structures and functions to parse and execute decred transaction scripts. Decred transaction scripts are written in a stack-based, FORTH-like language. The Decred script language consists of a number of opcodes which fall into several categories such as pushing and popping data to and from the stack, performing basic and bitwise arithmetic, conditional branching, comparing hashes, and checking cryptographic signatures. Scripts are processed from left to right and intentionally do not provide loops. The vast majority of Decred scripts at the time of this writing are of several standard forms which consist of a spender providing a public key and a signature which proves the spender owns the associated private key. This information is used to prove the spender is authorized to perform the transaction. One benefit of using a scripting language is added flexibility in specifying what conditions must be met in order to spend decred. Errors returned by this package are of type txscript.ErrorKind wrapped by txscript.Error, which has full support for the standard library errors.Is and errors.As functions. This allows the caller to programmatically determine the specific error while still providing rich error messages with contextual information. See the constants defined with ErrorKind in the package documentation for a full list.
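A sketch of the standard-library error inspection described above; ErrEvalFalse is used purely as an example kind, and err is assumed to have come from executing a script:

    // isEvalFalse reports whether err (or anything it wraps) is the
    // ErrEvalFalse kind.
    func isEvalFalse(err error) bool {
        return errors.Is(err, txscript.ErrEvalFalse)
    }

    // asScriptError extracts the rich txscript.Error, if there is one, so the
    // caller can inspect its contextual information.
    func asScriptError(err error) (txscript.Error, bool) {
        var serr txscript.Error
        ok := errors.As(err, &serr)
        return serr, ok
    }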
Package hdkeychain provides an API for Decred hierarchical deterministic extended keys (based on BIP0032). The ability to implement hierarchical deterministic wallets depends on the ability to create and derive hierarchical deterministic extended keys. At a high level, this package provides support for those hierarchical deterministic extended keys by providing an ExtendedKey type and supporting functions. Each extended key can either be a private or public extended key which itself is capable of deriving a child extended key. Whether an extended key is a private or public extended key can be determined with the IsPrivate function. In order to create and sign transactions, or provide others with addresses to send funds to, the underlying key and address material must be accessible. This package provides the SerializedPubKey and SerializedPrivKey functions for this purpose. The caller may then create the desired address types. As previously mentioned, the extended keys are hierarchical meaning they are used to form a tree. The root of that tree is called the master node and this package provides the NewMaster function to create it from a cryptographically random seed. The GenerateSeed function is provided as a convenient way to create a random seed for use with the NewMaster function. Once you have created a tree root (or have deserialized an extended key as discussed later), the child extended keys can be derived by using either the Child or ChildBIP32Std function. The difference is described in the following section. These functions support deriving both normal (non-hardened) and hardened child extended keys. In order to derive a hardened extended key, use the HardenedKeyStart constant + the hardened key number as the index to the Child function. This provides the ability to cascade the keys into a tree and hence generate the hierarchical deterministic key chains. The Child function derives extended keys with a modified scheme based on BIP0032, whereas ChildBIP32Std produces keys that strictly conform to the standard. Specifically, the Decred variation strips leading zeros of a private key, causing subsequent child keys to differ from the keys expected by standard BIP0032. The ChildBIP32Std method retains leading zeros, ensuring the child keys expected by BIP0032 are derived. The Child function must be used for Decred wallet key derivation for legacy reasons. A private extended key can be used to derive both hardened and non-hardened (normal) child private and public extended keys. A public extended key can only be used to derive non-hardened child public extended keys. As enumerated in BIP0032 "knowledge of the extended public key plus any non-hardened private key descending from it is equivalent to knowing the extended private key (and thus every private and public key descending from it). This means that extended public keys must be treated more carefully than regular public keys. It is also the reason for the existence of hardened keys, and why they are used for the account level in the tree. This way, a leak of an account-specific (or below) private key never risks compromising the master or other accounts." A private extended key can be converted to a new instance of the corresponding public extended key with the Neuter function. The original extended key is not modified. A public extended key is still capable of deriving non-hardened child public extended keys. Extended keys are serialized and deserialized with the String and NewKeyFromString functions. 
The serialized key is a Base58-encoded string which looks like the following: Extended keys are much like normal Decred addresses in that they have version bytes which tie them to a specific network. The network that an extended key is associated with is specified when creating and decoding the key. In the case of decoding, an error will be returned if a given encoded extended key is not for the specified network. This example demonstrates the audits use case in BIP0032. This example demonstrates the default hierarchical deterministic wallet layout as described in BIP0032. This example demonstrates how to generate a cryptographically random seed then use it to create a new master node (extended key).
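A rough sketch contrasting the two derivation functions described above; the chaincfg network parameter is an assumption, and exact constructor arguments may differ between versions of the package:

    func compareDerivation() error {
        seed, err := hdkeychain.GenerateSeed(hdkeychain.RecommendedSeedLen)
        if err != nil {
            return err
        }
        master, err := hdkeychain.NewMaster(seed, chaincfg.MainNetParams())
        if err != nil {
            return err
        }

        // Decred-legacy derivation (leading zeros stripped from private keys).
        legacy, err := master.Child(0)
        if err != nil {
            return err
        }

        // Strict BIP0032 derivation (leading zeros retained).
        std, err := master.ChildBIP32Std(0)
        if err != nil {
            return err
        }

        // The serialized keys can differ between the two schemes for some
        // indices; SerializedPubKey exposes raw key material either way.
        fmt.Println(legacy.String())
        fmt.Println(std.String())
        fmt.Printf("%x\n", legacy.SerializedPubKey())
        return nil
    }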
Package hdkeychain provides an API for Decred hierarchical deterministic extended keys (based on BIP0032). The ability to implement hierarchical deterministic wallets depends on the ability to create and derive hierarchical deterministic extended keys. At a high level, this package provides support for those hierarchical deterministic extended keys by providing an ExtendedKey type and supporting functions. Each extended key can either be a private or public extended key which itself is capable of deriving a child extended key. Whether an extended key is a private or public extended key can be determined with the IsPrivate function. In order to create and sign transactions, or provide others with addresses to send funds to, the underlying key and address material must be accessible. This package provides the ECPubKey and ECPrivKey functions for this purpose. The caller may then create the desired address types. As previously mentioned, the extended keys are hierarchical meaning they are used to form a tree. The root of that tree is called the master node and this package provides the NewMaster function to create it from a cryptographically random seed. The GenerateSeed function is provided as a convenient way to create a random seed for use with the NewMaster function. Once you have created a tree root (or have deserialized an extended key as discussed later), the child extended keys can be derived by using the Child function. The Child function supports deriving both normal (non-hardened) and hardened child extended keys. In order to derive a hardened extended key, use the HardenedKeyStart constant + the hardened key number as the index to the Child function. This provides the ability to cascade the keys into a tree and hence generate the hierarchical deterministic key chains. A private extended key can be used to derive both hardened and non-hardened (normal) child private and public extended keys. A public extended key can only be used to derive non-hardened child public extended keys. As enumerated in BIP0032 "knowledge of the extended public key plus any non-hardened private key descending from it is equivalent to knowing the extended private key (and thus every private and public key descending from it). This means that extended public keys must be treated more carefully than regular public keys. It is also the reason for the existence of hardened keys, and why they are used for the account level in the tree. This way, a leak of an account-specific (or below) private key never risks compromising the master or other accounts." A private extended key can be converted to a new instance of the corresponding public extended key with the Neuter function. The original extended key is not modified. A public extended key is still capable of deriving non-hardened child public extended keys. Extended keys are serialized and deserialized with the String and NewKeyFromString functions. The serialized key is a Base58-encoded string which looks like the following: Extended keys are much like normal Decred addresses in that they have version bytes which tie them to a specific network. The network that an extended key is associated with is specified when creating and decoding the key. In the case of decoding, an error will be returned if a given encoded extended key is not for the specified network. This example demonstrates the audits use case in BIP0032. This example demonstrates the default hierarchical deterministic wallet layout as described in BIP0032. 
This example demonstrates how to generate a cryptographically random seed then use it to create a new master node (extended key).
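As a concrete illustration of the flow described above, here is a minimal sketch that generates a seed, creates a master node, and derives a hardened account key followed by a normal child. It uses only the functions named in this documentation (GenerateSeed, NewMaster, Child, IsPrivate, String); the import paths and the network-parameter argument (chaincfg.MainNetParams()) are assumptions that vary across module versions, so adjust them to the version in use.

	package main

	import (
		"fmt"
		"log"

		"github.com/decred/dcrd/chaincfg/v3"   // assumed import path
		"github.com/decred/dcrd/hdkeychain/v3" // assumed import path
	)

	func main() {
		// Generate a cryptographically random seed of the recommended length.
		seed, err := hdkeychain.GenerateSeed(hdkeychain.RecommendedSeedLen)
		if err != nil {
			log.Fatal(err)
		}

		// Create the master node (tree root) for the desired network.
		master, err := hdkeychain.NewMaster(seed, chaincfg.MainNetParams())
		if err != nil {
			log.Fatal(err)
		}

		// Derive a hardened account key (account 0') and a normal child under it.
		acct0, err := master.Child(hdkeychain.HardenedKeyStart + 0)
		if err != nil {
			log.Fatal(err)
		}
		acct0Ext, err := acct0.Child(0)
		if err != nil {
			log.Fatal(err)
		}

		fmt.Println("serialized master key:", master.String())
		fmt.Println("child is private:", acct0Ext.IsPrivate())
	}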
The mailgun package provides methods for interacting with the Mailgun API. It automates the HTTP request/response cycle, encodings, and other details needed by the API. This SDK lets you do everything the API lets you, in a more Go-friendly way. For further information please see the Mailgun documentation at http://documentation.mailgun.com/ This document includes a number of examples which illustrate some aspects of the API that might be misleading or confusing. All examples included are derived from an acceptance test. Note that every SDK function has a corresponding acceptance test, so if you don't find an example for a function you'd like to know more about, please check the acceptance sub-package for a corresponding test. Of course, contributions to the documentation are always welcome as well. Feel free to submit a pull request or open a Github issue if you cannot find an example to suit your needs. Many SDK functions consume a pair of parameters called limit and skip. These help control how much data Mailgun sends over the wire. Limit, as you'd expect, gives a count of the number of records you want to receive. Note that, at present, Mailgun imposes its own cap of 100, for all API endpoints. Skip indicates where in the data set you want to start receiving from. Mailgun defaults to the very beginning of the dataset if not specified explicitly. If you don't particularly care how much data you receive, you may specify DefaultLimit. If you similarly don't care about where the data starts, you may specify DefaultSkip. Functions which accept a limit and skip setting, in general, will also return a total count of the items returned. Note that this total count is not the total in the bundle returned by the call. You can determine that easily enough with Go's len() function. The total that you receive actually refers to the complete set of data on the server. This total may well exceed the size returned from the API. If this happens, you may find yourself needing to iterate over the dataset of interest. For example: Copyright (c) 2013-2014, Michael Banzon. All rights reserved. Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met: * Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer. * Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution. * Neither the names of Mailgun, Michael Banzon, nor the names of their contributors may be used to endorse or promote products derived from this software without specific prior written permission. THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
Amino is an encoding library that can handle interfaces (like protobuf "oneof") well. This is achieved by prefixing bytes before each "concrete type". A concrete type is some non-interface value (generally a struct) which implements the interface to be (de)serialized. Not all structures need to be registered as concrete types -- only when they will be stored in interface type fields (or interface type slices) do they need to be registered. All interfaces and the concrete types that implement them must be registered. Notice that an interface is represented by a nil pointer. Structures that must be deserialized as pointer values must be registered with a pointer value as well. It's OK to (de)serialize such structures in non-pointer (value) form, but when deserializing such structures into an interface field, they will always be deserialized as pointers. All registered concrete types are encoded with leading 4 bytes (called "prefix bytes"), even when it's not held in an interface field/element. In this way, Amino ensures that concrete types (almost) always have the same canonical representation. The first byte of the prefix bytes must not be a zero byte, so there are 2**(8*4)-2**(8*3) possible values. When there are 4096 types registered at once, the probability of there being a conflict is ~ 0.2%. See https://instacalc.com/51189 for estimation. This is assuming that all registered concrete types have unique natural names (e.g. prefixed by a unique entity name such as "com.tendermint/", and not "mined/grinded" to produce a particular sequence of "prefix bytes"). TODO Update instacalc.com link with 255/256 since 0x00 is an escape. Do not mine/grind to produce a particular sequence of prefix bytes, and avoid using dependencies that do so. Since 4 bytes are not sufficient to ensure no conflicts, sometimes it is necessary to prepend more than the 4 prefix bytes for disambiguation. Like the prefix bytes, the disambiguation bytes are also computed from the registered name of the concrete type. There are 3 disambiguation bytes, and in binary form they always precede the prefix bytes. The first byte of the disambiguation bytes must not be a zero byte, so there are 2**(8*3)-2**(8*2) possible values. The prefix bytes never start with a zero byte, so the disambiguation bytes are escaped with 0x00. Notice that the 4 prefix bytes always immediately precede the binary encoding of the concrete type. To compute the disambiguation bytes, we take `hash := sha256(concreteTypeName)`, and drop the leading 0x00 bytes. In the example above, hash has two leading 0x00 bytes, so we drop them. The first 3 bytes are called the "disambiguation bytes" (in angle brackets). The next 4 bytes are called the "prefix bytes" (in square brackets).
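A short registration sketch may make the above concrete. The calls below (NewCodec, RegisterInterface, RegisterConcrete, MarshalBinaryBare) are taken from the go-amino codec API rather than from the text above, so treat them as illustrative; the interface is registered with a nil pointer, and the concrete type is registered with a unique, entity-prefixed name from which its prefix bytes are derived.

	package main

	import (
		"fmt"

		amino "github.com/tendermint/go-amino" // assumed import path
	)

	// Animal is the interface; concrete types stored in Animal-typed fields
	// must be registered so the codec can emit and recognize prefix bytes.
	type Animal interface{ Sound() string }

	// Dog will be (de)serialized as a pointer, so it is registered as one.
	type Dog struct{ Name string }

	func (d *Dog) Sound() string { return "woof" }

	func main() {
		cdc := amino.NewCodec()

		// Interfaces are represented by a nil pointer of the interface type.
		cdc.RegisterInterface((*Animal)(nil), nil)
		// The registered name determines the disambiguation/prefix bytes.
		cdc.RegisterConcrete(&Dog{}, "com.example/Dog", nil)

		bz, err := cdc.MarshalBinaryBare(&Dog{Name: "Rex"})
		if err != nil {
			panic(err)
		}
		// The 4 prefix bytes lead the encoding even outside interface fields.
		fmt.Printf("%X\n", bz)
	}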
Package mailgun provides methods for interacting with the Mailgun API. It automates the HTTP request/response cycle, encodings, and other details needed by the API. This SDK lets you do everything the API lets you, in a more Go-friendly way. For further information please see the Mailgun documentation at http://documentation.mailgun.com/ All functions and methods have a corresponding test, so if you don't find an example for a function you'd like to know more about, please check for a corresponding test. Of course, contributions to the documentation are always welcome as well. Feel free to submit a pull request or open a Github issue if you cannot find an example to suit your needs. Most methods that begin with `List` return an iterator which simplifies paging through large result sets returned by the mailgun API. Most `List` methods allow you to specify a `Limit` parameter which, as you'd expect, limits the number of items returned per page. Note that, at present, Mailgun imposes its own cap of 100 items per page, for all API endpoints. For example, the following iterates over all pages of events, 100 items at a time: Copyright (c) 2013-2019, Michael Banzon. All rights reserved. Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met: * Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer. * Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution. * Neither the names of Mailgun, Michael Banzon, nor the names of their contributors may be used to endorse or promote products derived from this software without specific prior written permission. THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
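As an illustration of the iterator-based paging described above, a sketch along these lines might be used; the names NewMailgun, ListEvents, ListEventOptions and the iterator's Next/Err methods are taken from the mailgun-go v3+ API and should be checked against the SDK version in use, and the domain and API key are placeholders.

	package main

	import (
		"context"
		"fmt"
		"log"

		"github.com/mailgun/mailgun-go/v3"        // assumed import path / major version
		"github.com/mailgun/mailgun-go/v3/events" // assumed import path / major version
	)

	func main() {
		mg := mailgun.NewMailgun("example.com", "private-api-key") // placeholders

		// Page through all events, 100 items at a time (Mailgun's per-page cap).
		it := mg.ListEvents(&mailgun.ListEventOptions{Limit: 100})

		var page []events.Event
		for it.Next(context.Background(), &page) {
			for _, e := range page {
				fmt.Println(e.GetName())
			}
		}
		if err := it.Err(); err != nil {
			log.Fatal(err)
		}
	}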
Package txscript implements the Decred transaction script language. This package provides data structures and functions to parse and execute decred transaction scripts. Decred transaction scripts are written in a stack-based, FORTH-like language. The Decred script language consists of a number of opcodes which fall into several categories, such as pushing and popping data to and from the stack, performing basic and bitwise arithmetic, conditional branching, comparing hashes, and checking cryptographic signatures. Scripts are processed from left to right and intentionally do not provide loops. The vast majority of Decred scripts at the time of this writing are of several standard forms which consist of a spender providing a public key and a signature which proves the spender owns the associated private key. This information is used to prove the spender is authorized to perform the transaction. One benefit of using a scripting language is added flexibility in specifying what conditions must be met in order to spend decred. The errors returned by this package are of type txscript.ErrorKind wrapped by txscript.Error which has full support for the standard library errors.Is and errors.As functions. This allows the caller to programmatically determine the specific error while still providing rich error messages with contextual information. See the constants defined with ErrorKind in the package documentation for a full list.
Package form Decodes url.Values into Go value(s) and Encodes Go value(s) into url.Values. Out of the box it supports the following types: string; bool; int, int8, int16, int32, int64; uint, uint8, uint16, uint32, uint64; float32, float64; struct and anonymous struct; interface{}; time.Time (by default using RFC3339); a pointer to one of the above types; slice, array; map; and custom types, which can override any of the above types. Many other types may be supported inherently (e.g. bson.ObjectId is type ObjectId string, which will get populated by the string type). NOTE: map, struct and slice nesting are ad infinitum. You can tell form to ignore fields using `-` in the tag, and to omit empty fields using `,omitempty` or `FieldName,omitempty` in the tag. To maximize compatibility with other systems the Encoder attempts to avoid using array indexes in url.Values if at all possible.
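To make the Decoder/Encoder flow above concrete, here is a small sketch assuming the go-playground/form package-level constructors (NewDecoder/NewEncoder) and the default "form" struct tag; the import path may carry a different major-version suffix.

	package main

	import (
		"fmt"
		"log"
		"net/url"

		"github.com/go-playground/form" // assumed import path / major version
	)

	// User shows the tag conventions: "-" ignores a field, ",omitempty" omits
	// empty values when encoding.
	type User struct {
		Name     string `form:"name"`
		Age      uint8  `form:"age,omitempty"`
		Internal string `form:"-"`
	}

	func main() {
		values := url.Values{"name": {"joe"}, "age": {"27"}}

		decoder := form.NewDecoder()
		var u User
		if err := decoder.Decode(&u, values); err != nil {
			log.Fatal(err)
		}
		fmt.Printf("decoded: %+v\n", u)

		encoder := form.NewEncoder()
		encoded, err := encoder.Encode(&u)
		if err != nil {
			log.Fatal(err)
		}
		fmt.Println("encoded:", encoded)
	}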
BSD 3-Clause License Copyright (c) 2017-2022, Gerasimos (Makis) Maropoulos (kataras2006@hotmail.com) All rights reserved. Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met: Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer. Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution. Neither the name of the copyright holder nor the names of its contributors may be used to endorse or promote products derived from this software without specific prior written permission. THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. Package golog provides an easy to use foundation for your logging operations. Source code and other details for the project are available at GitHub. Current version: 0.1.12. The only requirement is the Go Programming Language. Example code: Golog has a default, package-level initialized instance for you, however you can choose to create and use a logger instance for a specific part of your application. Example Code: Golog sets colors for levels when its `Printer.Output` is actually a compatible terminal which can render colors, otherwise it downgrades itself to a white foreground. Golog has functions to print a formatted log too. Example Code: Golog takes a simple `io.Writer` as its underlying Printer's Output. Example Code: You can even override the default line breaker, "\n", by using the `golog#NewLine` function at startup. Example Code: Golog is a leveled logger, therefore you can set a level and a message is printed only when its level is valid against the one that has been set. Available built-in levels are: Below you'll learn a way to add a custom level or modify an existing level. The default colorful text (or raw text for unsupported outputs) for levels can be overridden by using the `golog#ErrorText, golog#WarnText, golog#InfoText and golog#DebugText` functions. Example Code: Golog gives you the power to add or modify existing levels via Level Metadata. Example Code: The logger's level can be changed by passing one of the level constants to the `Level` field or by passing its string representation to the `SetLevel` function. Example Code: Integration with your favorite, but deprecated, logger is easy. Golog offers two basic interfaces, the `ExternalLogger` and the `StdLogger`, that can be used directly as arguments to the `Install` function in order to adapt an external logger. Outline: Example Code: Example Code: You should have a basic idea of the golog package by now; we have just scratched the surface. If you enjoy what you just saw and want to learn more, please follow the links below: Examples:
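A brief sketch of the package-level usage described above; the calls (SetLevel, Println, Info, Warnf, Debug, New) follow the kataras/golog API and are meant as an illustration rather than a complete reference.

	package main

	import "github.com/kataras/golog"

	func main() {
		// Raise the level of the default, package-level logger so Debug shows.
		golog.SetLevel("debug")

		golog.Println("a plain log line, no level")
		golog.Info("an info message")
		golog.Warnf("%d warning(s) so far", 1)
		golog.Debug("visible because the level is set to debug")

		// Or create a dedicated instance for a specific part of the application.
		child := golog.New()
		child.SetLevel("error")
		child.Error("only error-level output is printed by this instance")
	}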
Package dcrjson provides primitives for working with the Decred JSON-RPC API. When communicating via the JSON-RPC protocol, all of the commands need to be marshalled to and from the wire in the appropriate format. This package provides data structures and primitives to ease this process. In addition, it also provides some additional features such as custom command registration, command categorization, and reflection-based help generation. This information is not necessary in order to use this package, but it does provide some intuition into what the marshalling and unmarshalling that is discussed below is doing under the hood. As defined by the JSON-RPC spec, there are effectively two forms of messages on the wire: Request Objects {"jsonrpc":"1.0","id":"SOMEID","method":"SOMEMETHOD","params":[SOMEPARAMS]} NOTE: Notifications are the same format except the id field is null. Response Objects {"result":SOMETHING,"error":null,"id":"SOMEID"} {"result":null,"error":{"code":SOMEINT,"message":SOMESTRING},"id":"SOMEID"} For requests, the params field can vary in what it contains depending on the method (a.k.a. command) being sent. Each parameter can be as simple as an int or a complex structure containing many nested fields. The id field is used to identify a request and will be included in the associated response. When working with asynchronous transports, such as websockets, spontaneous notifications are also possible. As indicated, they are the same as a request object, except they have the id field set to null. Therefore, servers will ignore requests with the id field set to null, while clients can choose to consume or ignore them. Unfortunately, the original Bitcoin JSON-RPC API (and hence anything compatible with it) doesn't always follow the spec and will sometimes return an error string in the result field with a null error for certain commands. However, for the most part, the error field will be set as described on failure. Based upon the discussion above, it should be easy to see how the types of this package map into the required parts of the protocol. To simplify the marshalling of the requests and responses, the MarshalCmd and MarshalResponse functions are provided. They return the raw bytes ready to be sent across the wire. Unmarshalling a received Request object is a two step process: This approach is used since it provides the caller with access to the additional fields in the request that are not part of the command such as the ID. Unmarshalling a received Response object is also a two step process: As above, this approach is used since it provides the caller with access to the fields in the response such as the ID and Error. This package provides two approaches for creating a new command. The first, and preferred, method is to use one of the New<Foo>Cmd functions. This allows static compile-time checking to help ensure the parameters stay in sync with the struct definitions. The second approach is the NewCmd function which takes a method (command) name and variable arguments. The function includes full checking to ensure the parameters are accurate according to the provided method, however these checks are, obviously, run-time which means any mistakes won't be found until the code is actually executed. However, it is quite useful for user-supplied commands that are intentionally dynamic. The command handling of this package is built around the concept of registered commands. 
This is true for the wide variety of commands already provided by the package, but it also means callers can easily provide custom commands with all of the same functionality as the built-in commands. Use the RegisterCmd function for this purpose. A list of all registered methods can be obtained with the RegisteredCmdMethods function. All registered commands are registered with flags that identify information such as whether the command applies to a chain server, wallet server, or is a notification along with the method name to use. These flags can be obtained with the MethodUsageFlags flags, and the method can be obtained with the CmdMethod function. To facilitate providing consistent help to users of the RPC server, this package exposes the GenerateHelp function, which uses reflection on registered commands or notifications, as well as the provided expected result types, to generate the final help text. In addition, the MethodUsageText function is provided to generate consistent one-line usage for registered commands and notifications using reflection. There are two distinct types of errors supported by this package: The first category of errors (type Error) typically indicates a programmer error and can be avoided by properly using the API. Errors of this type will be returned from the various functions available in this package. They identify issues such as unsupported field types, attempts to register malformed commands, and attempting to create a new command with an improper number of parameters. The specific reason for the error can be detected by type asserting it to a *dcrjson.Error and accessing the ErrorCode field. The second category of errors (type RPCError), on the other hand, are useful for returning errors to RPC clients. Consequently, they are used in the previously described Response type. This example demonstrates how to unmarshal a JSON-RPC response and then unmarshal the result field in the response to a concrete type.
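Following the description above, the first error category can be inspected roughly like this; the import path varies by dcrjson version, and the bogus method name is only there to provoke a run-time check from NewCmd.

	package main

	import (
		"fmt"

		"github.com/decred/dcrd/dcrjson" // assumed import path
	)

	// inspectErr reports the specific reason for a usage error by type
	// asserting to *dcrjson.Error and reading its ErrorCode field, as
	// described in the package documentation above.
	func inspectErr(err error) {
		if jerr, ok := err.(*dcrjson.Error); ok {
			fmt.Println("dcrjson error code:", jerr.ErrorCode)
			return
		}
		fmt.Println("other error:", err)
	}

	func main() {
		// NewCmd performs run-time checking and fails for unregistered methods.
		_, err := dcrjson.NewCmd("bogusmethod")
		inspectErr(err)
	}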
Package restic gives a (very brief) introduction to the structure of source code. The packages are structured so that cmd/ contains the main package for the restic binary, and internal/ contains almost all code in library form. We've chosen to use the internal/ path so that the packages cannot be imported by other programs. This was done on purpose, at the moment restic is a command-line program and not a library. This may be revisited at a later point in time.
Package skipper provides an HTTP routing library with flexible configuration as well as a runtime update of the routing rules. Skipper works as an HTTP reverse proxy that is responsible for mapping incoming requests to multiple HTTP backend services, based on routes that are selected by the request attributes. At the same time, both the requests and the responses can be augmented by a filter chain that is specifically defined for each route. Optionally, it can provide circuit breaker mechanism individually for each backend host. Skipper can load and update the route definitions from multiple data sources without being restarted. It provides a default executable command with a few built-in filters, however, its primary use case is to be extended with custom filters, predicates or data sources. For further information read 'Extending Skipper'. Skipper took the core design and inspiration from Vulcand: https://github.com/mailgun/vulcand. Skipper is 'go get' compatible. If needed, create a 'go workspace' first: Get the Skipper packages: Create a file with a route: Optionally, verify the syntax of the file: Start Skipper and make an HTTP request: The core of Skipper's request processing is implemented by a reverse proxy in the 'proxy' package. The proxy receives the incoming request, forwards it to the routing engine in order to receive the most specific matching route. When a route matches, the request is forwarded to all filters defined by it. The filters can modify the request or execute any kind of program logic. Once the request has been processed by all the filters, it is forwarded to the backend endpoint of the route. The response from the backend goes once again through all the filters in reverse order. Finally, it is mapped as the response of the original incoming request. Besides the default proxying mechanism, it is possible to define routes without a real network backend endpoint. One of these cases is called a 'shunt' backend, in which case one of the filters needs to handle the request providing its own response (e.g. the 'static' filter). Actually, filters themselves can instruct the request flow to shunt by calling the Serve(*http.Response) method of the filter context. Another case of a route without a network backend is the 'loopback'. A loopback route can be used to match a request, modified by filters, against the lookup tree with different conditions and then execute a different route. One example scenario can be to use a single route as an entry point to execute some calculation to get an A/B testing decision and then matching the updated request metadata for the actual destination route. This way the calculation can be executed for only those requests that don't contain information about a previously calculated decision. For further details, see the 'proxy' and 'filters' package documentation. Finding a request's route happens by matching the request attributes to the conditions in the route's definitions. Such definitions may have the following conditions: - method - path (optionally with wildcards) - path regular expressions - host regular expressions - headers - header regular expressions It is also possible to create custom predicates with any other matching criteria. The relation between the conditions in a route definition is 'and', meaning, that a request must fulfill each condition to match a route. For further details, see the 'routing' package documentation. Filters are applied in order of definition to the request and in reverse order to the response. 
They are used to modify request and response attributes, such as headers, or execute background tasks, like logging. Some filters may handle the requests without proxying them to service backends. Filters, depending on their implementation, may accept/require parameters that are set specifically for the route. For further details, see the 'filters' package documentation. Each route has one of the following backends: HTTP endpoint, shunt, loopback or dynamic. Backend endpoints can be any HTTP service. They are specified by their network address, including the protocol scheme, the domain name or the IP address, and optionally the port number: e.g. "https://www.example.org:4242". (The path and query are sent from the original request, or set by filters.) A shunt route means that Skipper handles the request alone and doesn't make requests to a backend service. In this case, it is the responsibility of one of the filters to generate the response. A loopback route executes the routing mechanism on the current state of the request from the start, including the route lookup. This way it serves as a form of an internal redirect. A dynamic route means that the final target will be defined in a filter. One of the filters in the chain must set the target backend URL explicitly. Route definitions consist of the following: - request matching conditions (predicates) - filter chain (optional) - backend The eskip package implements the in-memory and text representations of route definitions, including a parser. (Note to contributors: in order to stay compatible with 'go get', the generated part of the parser is stored in the repository. When changing the grammar, 'go generate' needs to be executed explicitly to update the parser.) For further details, see the 'eskip' package documentation. Skipper has filter implementations of basic auth and OAuth2. It can be integrated with tokeninfo based OAuth2 providers. For details, see: https://godoc.org/github.com/zalando/skipper/filters/auth. Skipper's route definitions are loaded from one or more data sources. It can receive incremental updates from those data sources at runtime. It provides the following data clients: - Kubernetes: Skipper can be used as part of a Kubernetes Ingress Controller implementation together with https://github.com/zalando-incubator/kube-ingress-aws-controller . In this scenario, Skipper uses the Kubernetes API's Ingress extensions as a source for routing. For a complete deployment example, see more details in: https://github.com/zalando-incubator/kubernetes-on-aws/ . - Innkeeper: the Innkeeper service implements a storage for large sets of Skipper routes, with an HTTP+JSON API, OAuth2 authentication and role management. See the 'innkeeper' package and https://github.com/zalando/innkeeper. - etcd: Skipper can load routes and receive updates from etcd clusters (https://github.com/coreos/etcd). See the 'etcd' package. - static file: package eskipfile implements a simple data client, which can load route definitions from a static file in eskip format. Currently, it loads the routes on startup. It doesn't support runtime updates. Skipper can use additional data sources, provided by extensions. Sources must implement the DataClient interface in the routing package. Skipper provides circuit breakers, configured either globally, based on backend hosts or based on individual routes. It supports two types of circuit breaker behavior: open on N consecutive failures, or open on N failures out of M requests. 
For details, see: https://godoc.org/github.com/zalando/skipper/circuit. Skipper can be started with the default executable command 'skipper', or as a library built into an application. The easiest way to start Skipper as a library is to execute the 'Run' function of the current, root package. Each option accepted by the 'Run' function is wired in the default executable as well, as a command line flag. E.g. EtcdUrls becomes -etcd-urls as a comma separated list. For command line help, enter: An additional utility, eskip, can be used to verify, print, update and delete routes from/to files or etcd (Innkeeper on the roadmap). See the cmd/eskip command package, and/or enter in the command line: Skipper doesn't use dynamically loaded plugins, however, it can be used as a library, and it can be extended with custom predicates, filters and/or custom data sources. To create a custom predicate, one needs to implement the PredicateSpec interface in the routing package. Instances of the PredicateSpec are used internally by the routing package to create the actual Predicate objects as referenced in eskip routes, with concrete arguments. Example, randompredicate.go: In the above example, a custom predicate is created, that can be referenced in eskip definitions with the name 'Random': To create a custom filter we need to implement the Spec interface of the filters package. 'Spec' is the specification of a filter, and it is used to create concrete filter instances, while the raw route definitions are processed. Example, hellofilter.go: The above example creates a filter specification, and in the routes where they are included, the filter instances will set the 'X-Hello' header for each and every response. The name of the filter is 'hello', and in a route definition it is referenced as: The easiest way to create a custom Skipper variant is to implement the required filters (as in the example above) by importing the Skipper package, and starting it with the 'Run' command. Example, hello.go: A file containing the routes, routes.eskip: Start the custom router: The 'Run' function in the root Skipper package starts its own listener but it doesn't provide the best composability. The proxy package, however, provides a standard http.Handler, so it is possible to use it in a more complex solution as a building block for routing. Skipper provides detailed logging of failures, and access logs in Apache log format. Skipper also collects detailed performance metrics, and exposes them on a separate listener endpoint for pulling snapshots. For details, see the 'logging' and 'metrics' packages documentation. The router's performance depends on the environment and on the used filters. Under ideal circumstances, and without filters, the biggest time factor is the route lookup. Skipper is able to scale to thousands of routes with logarithmic performance degradation. However, this comes at the cost of increased memory consumption, due to storing the whole lookup tree in a single structure. Benchmarks for the tree lookup can be run by: In case more aggressive scale is needed, it is possible to setup Skipper in a cascade model, with multiple Skipper instances for specific route segments.
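A compact sketch of the hellofilter example described above, assuming the filters.Spec and filters.Filter interfaces (Name, CreateFilter, Request, Response) and the Options fields of the root package's Run function; the argument handling and route file name are illustrative.

	package main

	import (
		"log"

		"github.com/zalando/skipper"
		"github.com/zalando/skipper/filters"
	)

	type helloSpec struct{}

	type helloFilter struct{ who string }

	// Name is how the filter is referenced in eskip route definitions, e.g.
	//   r: * -> hello("world") -> "https://www.example.org";
	func (s *helloSpec) Name() string { return "hello" }

	// CreateFilter builds a concrete filter instance from the route's arguments.
	func (s *helloSpec) CreateFilter(args []interface{}) (filters.Filter, error) {
		if len(args) != 1 {
			return nil, filters.ErrInvalidFilterParameters
		}
		who, ok := args[0].(string)
		if !ok {
			return nil, filters.ErrInvalidFilterParameters
		}
		return &helloFilter{who: who}, nil
	}

	func (f *helloFilter) Request(ctx filters.FilterContext) {}

	// Response sets the X-Hello header on every response of the route.
	func (f *helloFilter) Response(ctx filters.FilterContext) {
		ctx.Response().Header.Set("X-Hello", "Hello, "+f.who+"!")
	}

	func main() {
		log.Fatal(skipper.Run(skipper.Options{
			Address:       ":9090",
			RoutesFile:    "routes.eskip", // illustrative file name
			CustomFilters: []filters.Spec{&helloSpec{}},
		}))
	}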
Package websocket implements the WebSocket protocol defined in RFC 6455. The Conn type represents a WebSocket connection. A server application calls the Upgrader.Upgrade method from an HTTP request handler to get a *Conn (examples are available for both net/http and valyala/fasthttp). Call the connection's WriteMessage and ReadMessage methods to send and receive messages as a slice of bytes. This snippet of code shows how to echo messages using these methods: In the above snippet of code, p is a []byte and messageType is an int with value websocket.BinaryMessage or websocket.TextMessage. An application can also send and receive messages using the io.WriteCloser and io.Reader interfaces. To send a message, call the connection NextWriter method to get an io.WriteCloser, write the message to the writer and close the writer when done. To receive a message, call the connection NextReader method to get an io.Reader and read until io.EOF is returned. This snippet shows how to echo messages using the NextWriter and NextReader methods: The WebSocket protocol distinguishes between text and binary data messages. Text messages are interpreted as UTF-8 encoded text. The interpretation of binary messages is left to the application. This package uses the TextMessage and BinaryMessage integer constants to identify the two data message types. The ReadMessage and NextReader methods return the type of the received message. The messageType argument to the WriteMessage and NextWriter methods specifies the type of a sent message. It is the application's responsibility to ensure that text messages are valid UTF-8 encoded text. The WebSocket protocol defines three types of control messages: close, ping and pong. Call the connection WriteControl, WriteMessage or NextWriter methods to send a control message to the peer. Connections handle received close messages by calling the handler function set with the SetCloseHandler method and by returning a *CloseError from the NextReader, ReadMessage or the message Read method. The default close handler sends a close message to the peer. Connections handle received ping messages by calling the handler function set with the SetPingHandler method. The default ping handler sends a pong message to the peer. Connections handle received pong messages by calling the handler function set with the SetPongHandler method. The default pong handler does nothing. If an application sends ping messages, then the application should set a pong handler to receive the corresponding pong. The control message handler functions are called from the NextReader, ReadMessage and message reader Read methods. The default close and ping handlers can block these methods for a short time when the handler writes to the connection. The application must read the connection to process close, ping and pong messages sent from the peer. If the application is not otherwise interested in messages from the peer, then the application should start a goroutine to read and discard messages from the peer. A simple example is: Connections support one concurrent reader and one concurrent writer. Applications are responsible for ensuring that no more than one goroutine calls the write methods (NextWriter, SetWriteDeadline, WriteMessage, WriteJSON, EnableWriteCompression, SetCompressionLevel) concurrently and that no more than one goroutine calls the read methods (NextReader, SetReadDeadline, ReadMessage, ReadJSON, SetPongHandler, SetPingHandler) concurrently. The Close and WriteControl methods can be called concurrently with all other methods. 
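The echo pattern referred to above ("This snippet of code shows how to echo messages using these methods") can be sketched as follows with the Upgrader, ReadMessage and WriteMessage calls; the route and listen address are arbitrary.

	package main

	import (
		"log"
		"net/http"

		"github.com/gorilla/websocket"
	)

	var upgrader = websocket.Upgrader{} // default options

	func echo(w http.ResponseWriter, r *http.Request) {
		conn, err := upgrader.Upgrade(w, r, nil)
		if err != nil {
			log.Println("upgrade:", err)
			return
		}
		defer conn.Close()
		for {
			// messageType is websocket.TextMessage or websocket.BinaryMessage.
			messageType, p, err := conn.ReadMessage()
			if err != nil {
				log.Println("read:", err)
				return
			}
			if err := conn.WriteMessage(messageType, p); err != nil {
				log.Println("write:", err)
				return
			}
		}
	}

	func main() {
		http.HandleFunc("/echo", echo)
		log.Fatal(http.ListenAndServe(":8080", nil))
	}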
Web browsers allow Javascript applications to open a WebSocket connection to any host. It's up to the server to enforce an origin policy using the Origin request header sent by the browser. The Upgrader calls the function specified in the CheckOrigin field to check the origin. If the CheckOrigin function returns false, then the Upgrade method fails the WebSocket handshake with HTTP status 403. If the CheckOrigin field is nil, then the Upgrader uses a safe default: fail the handshake if the Origin request header is present and the Origin host is not equal to the Host request header. The deprecated package-level Upgrade function does not perform origin checking. The application is responsible for checking the Origin header before calling the Upgrade function. Connections buffer network input and output to reduce the number of system calls when reading or writing messages. Write buffers are also used for constructing WebSocket frames. See RFC 6455, Section 5 for a discussion of message framing. A WebSocket frame header is written to the network each time a write buffer is flushed to the network. Decreasing the size of the write buffer can increase the amount of framing overhead on the connection. The buffer sizes in bytes are specified by the ReadBufferSize and WriteBufferSize fields in the Dialer and Upgrader. The Dialer uses a default size of 4096 when a buffer size field is set to zero. The Upgrader reuses buffers created by the HTTP server when a buffer size field is set to zero. The HTTP server buffers have a size of 4096 at the time of this writing. The buffer sizes do not limit the size of a message that can be read or written by a connection. Buffers are held for the lifetime of the connection by default. If the Dialer or Upgrader WriteBufferPool field is set, then a connection holds the write buffer only when writing a message. Applications should tune the buffer sizes to balance memory use and performance. Increasing the buffer size uses more memory, but can reduce the number of system calls to read or write the network. In the case of writing, increasing the buffer size can reduce the number of frame headers written to the network. Some guidelines for setting buffer parameters are: Limit the buffer sizes to the maximum expected message size. Buffers larger than the largest message do not provide any benefit. Depending on the distribution of message sizes, setting the buffer size to a value less than the maximum expected message size can greatly reduce memory use with a small impact on performance. Here's an example: If 99% of the messages are smaller than 256 bytes and the maximum message size is 512 bytes, then a buffer size of 256 bytes will result in 1.01 more system calls than a buffer size of 512 bytes. The memory savings is 50%. A write buffer pool is useful when the application has a modest number of writes over a large number of connections. When buffers are pooled, a larger buffer size has a reduced impact on total memory use and has the benefit of reducing system calls and frame overhead. Per message compression extensions (RFC 7692) are experimentally supported by this package in a limited capacity. Setting the EnableCompression option to true in Dialer or Upgrader will attempt to negotiate per message deflate support. If compression was successfully negotiated with the connection's peer, any message received in compressed form will be automatically decompressed. All Read methods will return uncompressed bytes. 
Per message compression of messages written to a connection can be enabled or disabled by calling the corresponding Conn method: Currently this package does not support compression with "context takeover". This means that messages must be compressed and decompressed in isolation, without retaining sliding window or dictionary state across messages. For more details refer to RFC 7692. Use of compression is experimental and may result in decreased performance.
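A sketch of the compression controls mentioned above, using the Upgrader's EnableCompression field and the EnableWriteCompression and SetCompressionLevel connection methods; the flate level and addresses are arbitrary.

	package main

	import (
		"log"
		"net/http"

		"github.com/gorilla/websocket"
	)

	// EnableCompression asks the Upgrader to negotiate per message deflate.
	var upgrader = websocket.Upgrader{EnableCompression: true}

	func handler(w http.ResponseWriter, r *http.Request) {
		conn, err := upgrader.Upgrade(w, r, nil)
		if err != nil {
			log.Println("upgrade:", err)
			return
		}
		defer conn.Close()

		// Toggle compression of written messages and choose a flate level (1-9).
		conn.EnableWriteCompression(true)
		if err := conn.SetCompressionLevel(6); err != nil {
			log.Println("compression level:", err)
		}

		if err := conn.WriteMessage(websocket.TextMessage, []byte("hello")); err != nil {
			log.Println("write:", err)
		}
	}

	func main() {
		http.HandleFunc("/", handler)
		log.Fatal(http.ListenAndServe(":8080", nil))
	}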
Package dcrjson provides infrastructure for working with Decred JSON-RPC APIs. When communicating via the JSON-RPC protocol, all requests and responses must be marshalled to and from the wire in the appropriate format. This package provides infrastructure and primitives to ease this process. This information is not necessary in order to use this package, but it does provide some intuition into what the marshalling and unmarshalling that is discussed below is doing under the hood. As defined by the JSON-RPC spec, there are effectively two forms of messages on the wire: Request Objects {"jsonrpc":"1.0","id":"SOMEID","method":"SOMEMETHOD","params":[SOMEPARAMS]} NOTE: Notifications are the same format except the id field is null. Response Objects {"result":SOMETHING,"error":null,"id":"SOMEID"} {"result":null,"error":{"code":SOMEINT,"message":SOMESTRING},"id":"SOMEID"} For requests, the params field can vary in what it contains depending on the method (a.k.a. command) being sent. Each parameter can be as simple as an int or a complex structure containing many nested fields. The id field is used to identify a request and will be included in the associated response. When working with streamed RPC transports, such as websockets, spontaneous notifications are also possible. As indicated, they are the same as a request object, except they have the id field set to null. Therefore, servers will ignore requests with the id field set to null, while clients can choose to consume or ignore them. Unfortunately, the original Bitcoin JSON-RPC API (and hence anything compatible with it) doesn't always follow the spec and will sometimes return an error string in the result field with a null error for certain commands. However, for the most part, the error field will be set as described on failure. To simplify the marshalling of the requests and responses, the MarshalCmd and MarshalResponse functions are provided. They return the raw bytes ready to be sent across the wire. Unmarshalling a received Request object is a two step process: This approach is used since it provides the caller with access to the additional fields in the request that are not part of the command such as the ID. Unmarshalling a received Response object is also a two step process: As above, this approach is used since it provides the caller with access to the fields in the response such as the ID and Error. This package provides the NewCmd function which takes a method (command) name and variable arguments. The function includes full checking to ensure the parameters are accurate according to the provided method, however these checks are, obviously, run-time which means any mistakes won't be found until the code is actually executed. However, it is quite useful for user-supplied commands that are intentionally dynamic. External packages can and should implement types implementing Command for use with MarshalCmd/ParseParams. The command handling of this package is built around the concept of registered commands. This is true for the wide variety of commands already provided by the package, but it also means callers can easily provide custom commands with all of the same functionality as the built-in commands. Use the RegisterCmd function for this purpose. A list of all registered methods can be obtained with the RegisteredCmdMethods function. All registered commands are registered with flags that identify information such as whether the command applies to a chain server, wallet server, or is a notification along with the method name to use. 
These flags can be obtained with the MethodUsageFlags flags, and the method can be obtained with the CmdMethod function. To facilitate providing consistent help to users of the RPC server, this package exposes the GenerateHelp function, which uses reflection on registered commands or notifications to generate the final help text. In addition, the MethodUsageText function is provided to generate consistent one-line usage for registered commands and notifications using reflection. There are two distinct types of errors supported by this package: The first category of errors (type Error) typically indicates a programmer error and can be avoided by properly using the API. Errors of this type will be returned from the various functions available in this package. They identify issues such as unsupported field types, attempts to register malformed commands, and attempting to create a new command with an improper number of parameters. The specific reason for the error can be detected by type asserting it to a *dcrjson.Error and accessing the ErrorKind field. The second category of errors (type RPCError), on the other hand, are useful for returning errors to RPC clients. Consequently, they are used in the previously described Response type. This example demonstrates how to unmarshal a JSON-RPC response and then unmarshal the result field in the response to a concrete type.
Package form implements encoding and decoding of application/x-www-form-urlencoded data.
Package networkfirewall provides the API client, operations, and parameter types for AWS Network Firewall. This is the API Reference for Network Firewall. This guide is for developers who need detailed information about the Network Firewall API actions, data types, and errors. To access Network Firewall using the REST API endpoint: Network Firewall is a stateful, managed, network firewall and intrusion detection and prevention service for Amazon Virtual Private Cloud (Amazon VPC). With Network Firewall, you can filter traffic at the perimeter of your VPC. This includes filtering traffic going to and coming from an internet gateway, NAT gateway, or over VPN or Direct Connect. Network Firewall uses rules that are compatible with Suricata, a free, open source network analysis and threat detection engine. You can use Network Firewall to monitor and protect your VPC traffic in a number of ways. The following are just a few examples: Allow domains or IP addresses for known Amazon Web Services service endpoints, such as Amazon S3, and block all other forms of traffic. Use custom lists of known bad domains to limit the types of domain names that your applications can access. Perform deep packet inspection on traffic entering or leaving your VPC. Use stateful protocol detection to filter protocols like HTTPS, regardless of the port used. To enable Network Firewall for your VPCs, you perform steps in both Amazon VPC and in Network Firewall. For information about using Amazon VPC, see Amazon VPC User Guide. To start using Network Firewall, do the following: (Optional) If you don't already have a VPC that you want to protect, create it in Amazon VPC. In Amazon VPC, in each Availability Zone where you want to have a firewall endpoint, create a subnet for the sole use of Network Firewall. In Network Firewall, create stateless and stateful rule groups, to define the components of the network traffic filtering behavior that you want your firewall to have. In Network Firewall, create a firewall policy that uses your rule groups and specifies additional default traffic filtering behavior. In Network Firewall, create a firewall and specify your new firewall policy and VPC subnets. Network Firewall creates a firewall endpoint in each subnet that you specify, with the behavior that's defined in the firewall policy. In Amazon VPC, use ingress routing enhancements to route traffic through the new firewall endpoints.
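Building a client usually follows the standard aws-sdk-go-v2 pattern; the sketch below assumes config.LoadDefaultConfig and NewFromConfig together with the ListFirewalls operation, and credentials/region are expected to come from the default chain.

	package main

	import (
		"context"
		"fmt"
		"log"

		"github.com/aws/aws-sdk-go-v2/config"
		"github.com/aws/aws-sdk-go-v2/service/networkfirewall"
	)

	func main() {
		ctx := context.TODO()

		// Load region and credentials from the default chain.
		cfg, err := config.LoadDefaultConfig(ctx)
		if err != nil {
			log.Fatal(err)
		}
		client := networkfirewall.NewFromConfig(cfg)

		// List the firewalls in the account/region as a simple smoke test.
		out, err := client.ListFirewalls(ctx, &networkfirewall.ListFirewallsInput{})
		if err != nil {
			log.Fatal(err)
		}
		for _, fw := range out.Firewalls {
			fmt.Println(*fw.FirewallName, *fw.FirewallArn)
		}
	}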
Package gofpdf implements a PDF document generator with high level support for text, drawing and images. - UTF-8 support - Choice of measurement unit, page format and margins - Page header and footer management - Automatic page breaks, line breaks, and text justification - Inclusion of JPEG, PNG, GIF, TIFF and basic path-only SVG images - Colors, gradients and alpha channel transparency - Outline bookmarks - Internal and external links - TrueType, Type1 and encoding support - Page compression - Lines, Bézier curves, arcs, and ellipses - Rotation, scaling, skewing, translation, and mirroring - Clipping - Document protection - Layers - Templates - Barcodes - Charting facility - Import PDFs as templates gofpdf has no dependencies other than the Go standard library. All tests pass on Linux, Mac and Windows platforms. gofpdf supports UTF-8 TrueType fonts and “right-to-left” languages. Note that Chinese, Japanese, and Korean characters may not be included in many general purpose fonts. For these languages, a specialized font (for example, NotoSansSC for simplified Chinese) can be used. Also, support is provided to automatically translate UTF-8 runes to code page encodings for languages that have fewer than 256 glyphs. This repository will not be maintained, at least for some unknown duration. But it is hoped that gofpdf has a bright future in the open source world. Due to Go’s promise of compatibility, gofpdf should continue to function without modification for a longer time than would be the case with many other languages. Forks should be based on the last viable commit. Tools such as active-forks can be used to select a fork that looks promising for your needs. If a particular fork looks like it has taken the lead in attracting followers, this README will be updated to point people in that direction. The efforts of all contributors to this project have been deeply appreciated. Best wishes to all of you. If you currently use the $GOPATH scheme, install the package with the following command. To test the installation, run The following Go code generates a simple PDF file. See the functions in the fpdf_test.go file (shown as examples in this documentation) for more advanced PDF examples. If an error occurs in an Fpdf method, an internal error field is set. After this occurs, Fpdf method calls typically return without performing any operations and the error state is retained. This error management scheme facilitates PDF generation since individual method calls do not need to be examined for failure; it is generally sufficient to wait until after Output() is called. For the same reason, if an error occurs in the calling application during PDF generation, it may be desirable for the application to transfer the error to the Fpdf instance by calling the SetError() method or the SetErrorf() method. At any time during the life cycle of the Fpdf instance, the error state can be determined with a call to Ok() or Err(). The error itself can be retrieved with a call to Error(). This package is a relatively straightforward translation from the original FPDF library written in PHP (despite the caveat in the introduction to Effective Go). The API names have been retained even though the Go idiom would suggest otherwise (for example, pdf.GetX() is used rather than simply pdf.X()). The similarity of the two libraries makes the original FPDF website a good source of information. It includes a forum and FAQ. However, some internal changes have been made. 
Page content is built up using buffers (of type bytes.Buffer) rather than repeated string concatenation. Errors are handled as explained above rather than panicking. Output is generated through an interface of type io.Writer or io.WriteCloser. A number of the original PHP methods behave differently based on the type of the arguments that are passed to them; in these cases additional methods have been exported to provide similar functionality. Font definition files are produced in JSON rather than PHP. A side effect of running go test ./... is the production of a number of example PDFs. These can be found in the gofpdf/pdf directory after the tests complete. Please note that these examples run in the context of a test. In order to run an example as a standalone application, you’ll need to examine fpdf_test.go for some helper routines, for example exampleFilename() and summary(). Example PDFs can be compared with reference copies in order to verify that they have been generated as expected. This comparison will be performed if a PDF with the same name as the example PDF is placed in the gofpdf/pdf/reference directory and if the third argument to ComparePDFFiles() in internal/example/example.go is true. (By default it is false.) The routine that summarizes an example will look for this file and, if found, will call ComparePDFFiles() to check the example PDF for equality with its reference PDF. If differences exist between the two files they will be printed to standard output and the test will fail. If the reference file is missing, the comparison is considered to succeed. In order to successfully compare two PDFs, the placement of internal resources must be consistent and the internal creation timestamps must be the same. To do this, the methods SetCatalogSort() and SetCreationDate() need to be called for both files. This is done automatically for all examples. Nothing special is required to use the standard PDF fonts (courier, helvetica, times, zapfdingbats) in your documents other than calling SetFont(). You should use AddUTF8Font() or AddUTF8FontFromBytes() to add a TrueType UTF-8 encoded font. Use the RTL() and LTR() methods to switch between “right-to-left” and “left-to-right” mode. In order to use a different non-UTF-8 TrueType or Type1 font, you will need to generate a font definition file and, if the font will be embedded into PDFs, a compressed version of the font file. This is done by calling the MakeFont function or using the included makefont command line utility. To create the utility, cd into the makefont subdirectory and run “go build”. This will produce a standalone executable named makefont. Select the appropriate encoding file from the font subdirectory and run the command as in the following example. In your PDF generation code, call AddFont() to load the font and, as with the standard fonts, SetFont() to begin using it. Most examples, including the package example, demonstrate this method. Good sources of free, open-source fonts include Google Fonts and DejaVu Fonts. The draw2d package is a two dimensional vector graphics library that can generate output in different forms. It uses gofpdf for its document production mode. gofpdf is a global community effort and you are invited to make it even better. If you have implemented a new feature or corrected a problem, please consider contributing your change to the project. A contribution that does not directly pertain to the core functionality of gofpdf should be placed in its own directory directly beneath the contrib directory. 
Here are guidelines for making submissions. Your change should - be compatible with the MIT License - be properly documented - be formatted with go fmt - include an example in fpdf_test.go if appropriate - conform to the standards of golint and go vet, that is, golint . and go vet . should not generate any warnings - not diminish test coverage Pull requests are the preferred means of accepting your changes. gofpdf is released under the MIT License. It is copyrighted by Kurt Jung and the contributors acknowledged below. This package’s code and documentation are closely derived from the FPDF library created by Olivier Plathey, and a number of font and image resources are copied directly from it. Bruno Michel has provided valuable assistance with the code. Drawing support is adapted from the FPDF geometric figures script by David Hernández Sanz. Transparency support is adapted from the FPDF transparency script by Martin Hall-May. Support for gradients and clipping is adapted from FPDF scripts by Andreas Würmser. Support for outline bookmarks is adapted from Olivier Plathey by Manuel Cornes. Layer support is adapted from Olivier Plathey. Support for transformations is adapted from the FPDF transformation script by Moritz Wagner and Andreas Würmser. PDF protection is adapted from the work of Klemen Vodopivec for the FPDF product. Lawrence Kesteloot provided code to allow an image’s extent to be determined prior to placement. Support for vertical alignment within a cell was provided by Stefan Schroeder. Ivan Daniluk generalized the font and image loading code to use the Reader interface while maintaining backward compatibility. Anthony Starks provided code for the Polygon function. Robert Lillack provided the Beziergon function and corrected some naming issues with the internal curve function. Claudio Felber provided implementations for dashed line drawing and generalized font loading. Stani Michiels provided support for multi-segment path drawing with smooth line joins, line join styles, enhanced fill modes, and has helped greatly with package presentation and tests. Templating is adapted by Marcus Downing from the FPDF_Tpl library created by Jan Slabon and Setasign. Jelmer Snoeck contributed packages that generate a variety of barcodes and help with registering images on the web. Jelmer Snoeck and Guillermo Pascual augmented the basic HTML functionality with aligned text. Kent Quirk implemented backwards-compatible support for reading DPI from images that support it, and for setting DPI manually and then having it properly taken into account when calculating image size. Paulo Coutinho provided support for static embedded fonts. Dan Meyers added support for embedded JavaScript. David Fish added a generic alias-replacement function to enable, among other things, table of contents functionality. Andy Bakun identified and corrected a problem in which the internal catalogs were not sorted stably. Paul Montag added encoding and decoding functionality for templates, including images that are embedded in templates; this allows templates to be stored independently of gofpdf. Paul also added support for page boxes used in printing PDF documents. Wojciech Matusiak added support for word spacing. Artem Korotkiy added support of UTF-8 fonts. Dave Barnes added support for imported objects and templates. Brigham Thompson added support for rounded rectangles. Joe Westcott added underline functionality and optimized image storage. 
Benoit KUGLER contributed support for rectangles with corners of unequal radius, modification times, and for file attachments and annotations. Roadmap items include: remove all legacy code page font support and use UTF-8 exclusively; improve test coverage as reported by the coverage tool. Example demonstrates the generation of a simple PDF document. Note that since only core fonts are used (in this case Arial, a synonym for Helvetica), an empty string can be specified for the font directory in the call to New(). Note also that the example.Filename() and example.Summary() functions belong to a separate, internal package and are not part of the gofpdf library. If an error occurs at some point during the construction of the document, subsequent method calls exit immediately and the error is finally retrieved with the output call where it can be handled by the application.
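The simple document described in the example above typically looks like the following sketch; only a core font is used, so the font directory argument to New() is an empty string, and the output file name is arbitrary.

	package main

	import (
		"fmt"

		"github.com/jung-kurt/gofpdf"
	)

	func main() {
		// Portrait orientation, millimeters, A4; empty font directory because
		// only the core fonts are used.
		pdf := gofpdf.New("P", "mm", "A4", "")
		pdf.AddPage()
		pdf.SetFont("Arial", "B", 16)
		pdf.Cell(40, 10, "Hello, world")

		// Per the error scheme above, errors are checked once at output time.
		if err := pdf.OutputFileAndClose("hello.pdf"); err != nil {
			fmt.Println(err)
		}
	}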
Package form decodes url.Values into Go value(s) and encodes Go value(s) into url.Values. The following types are supported out of the box: string; bool; int, int8, int16, int32, int64; uint, uint8, uint16, uint32, uint64; float32, float64; struct and anonymous struct; interface{}; time.Time (by default using RFC3339); a pointer to one of the above types; slice, array; map. Custom types can override any of the above types, and many other types may be supported inherently (e.g. bson.ObjectId is type ObjectId string, which will get populated by the string type). NOTE: map, struct and slice nesting are ad infinitum. You can tell form to ignore fields using `-` in the tag, and to omit empty fields using `,omitempty` or `FieldName,omitempty` in the tag. To maximize compatibility with other systems, the Encoder attempts to avoid using array indexes in url.Values if at all possible.
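As a rough illustration of the tag handling described above, a minimal decode/encode sketch (the struct, its tags, field names and the v4 import path are illustrative; NewDecoder and NewEncoder are the constructors described in this package's docs):

    package main

    import (
        "fmt"
        "net/url"

        "github.com/go-playground/form/v4"
    )

    type User struct {
        Name    string   `form:"name"`
        Age     int      `form:"age,omitempty"` // omitted from encoded output when zero
        Ignored string   `form:"-"`             // never decoded or encoded
        Tags    []string `form:"tags"`
    }

    func main() {
        values := url.Values{"name": {"gopher"}, "age": {"12"}, "tags": {"a", "b"}}

        var u User
        if err := form.NewDecoder().Decode(&u, values); err != nil {
            panic(err)
        }

        out, err := form.NewEncoder().Encode(&u)
        if err != nil {
            panic(err)
        }
        fmt.Println(u, out)
    }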
Package apigatewaymanagementapi provides the API client, operations, and parameter types for AmazonApiGatewayManagementApi. The Amazon API Gateway Management API allows you to directly manage runtime aspects of your deployed APIs. To use it, you must explicitly set the SDK's endpoint to point to the endpoint of your deployed API. The endpoint will be of the form https://{api-id}.execute-api.{region}.amazonaws.com/{stage}, or will be the endpoint corresponding to your API's custom domain and base path, if applicable.
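With the AWS SDK for Go v2, pointing the client at a deployed API looks roughly like the following sketch (it assumes a recent SDK release in which the client Options expose BaseEndpoint; the endpoint URL and connection id are placeholders):

    package main

    import (
        "context"
        "log"

        "github.com/aws/aws-sdk-go-v2/aws"
        "github.com/aws/aws-sdk-go-v2/config"
        "github.com/aws/aws-sdk-go-v2/service/apigatewaymanagementapi"
    )

    func main() {
        ctx := context.Background()

        cfg, err := config.LoadDefaultConfig(ctx)
        if err != nil {
            log.Fatal(err)
        }

        client := apigatewaymanagementapi.NewFromConfig(cfg, func(o *apigatewaymanagementapi.Options) {
            // Point the client at the deployed API's endpoint (placeholder URL).
            o.BaseEndpoint = aws.String("https://abc123.execute-api.us-east-1.amazonaws.com/prod")
        })

        _, err = client.PostToConnection(ctx, &apigatewaymanagementapi.PostToConnectionInput{
            ConnectionId: aws.String("connection-id"), // placeholder
            Data:         []byte(`{"message":"hello"}`),
        })
        if err != nil {
            log.Fatal(err)
        }
    }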
Package gofight offers a simple API for HTTP handler testing of Go frameworks. Details about the gofight project can be found on its GitHub page: Installation: Set Header: You can add a custom header via the SetHeader func. Set Cookie: You can add a custom cookie via the SetCookie func. Set query string: Use SetQuery to generate query string data. POST FORM Data: Use SetForm to generate form data. POST JSON Data: Use SetJSON to generate JSON data. POST RAW Data: Use SetBody to generate raw data. For more details, see the documentation and example.
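A handler test might look roughly like this sketch (the gofight.H header map and the Run callback signature are assumed from the project README and may differ between versions):

    package main_test

    import (
        "net/http"
        "testing"

        "github.com/appleboy/gofight/v2"
    )

    func TestPing(t *testing.T) {
        handler := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
            w.Write([]byte("pong"))
        })

        r := gofight.New()
        r.GET("/ping").
            SetHeader(gofight.H{"X-Version": "1.0"}).
            Run(handler, func(res gofight.HTTPResponse, req gofight.HTTPRequest) {
                if res.Code != http.StatusOK {
                    t.Errorf("unexpected status: %d", res.Code)
                }
            })
    }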
Package httpexpect helps with end-to-end HTTP and REST API testing. See the example directory: There are two common ways to test an API with httpexpect: The second approach works only if the server is a Go module and its handler can be imported in tests. Concrete behaviour is determined by the Client implementation passed to the Config struct. If you're using http.Client, set its Transport field (http.RoundTripper) to one of the following: Note that an http handler can usually be obtained from the http framework you're using. For example, the echo framework provides either http.Handler or fasthttp.RequestHandler. You can also provide your own implementation of RequestFactory (creates http.Request), or Client (gets http.Request and returns http.Response). If you're starting the server from tests, it's very handy to use net/http/httptest. Whenever values are checked for equality in httpexpect, they are converted to "canonical form": This is equivalent to calling json.Marshal() and then json.Unmarshal() on the value, and is currently implemented that way. When some check fails, the failure is reported. If non-fatal failures are used (see the Reporter interface), execution continues and the instance that was checked is marked as failed. If a specific instance is marked as failed, all subsequent checks are ignored for this instance and for any child instances retrieved after the failure. Example:
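A typical test in the httptest style might look like the following sketch (it assumes httpexpect v2 and its Default constructor; the handler and route are illustrative):

    package main_test

    import (
        "net/http"
        "net/http/httptest"
        "testing"

        "github.com/gavv/httpexpect/v2"
    )

    func TestFruits(t *testing.T) {
        handler := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
            w.Header().Set("Content-Type", "application/json")
            w.Write([]byte(`["apple","banana"]`))
        })
        server := httptest.NewServer(handler)
        defer server.Close()

        e := httpexpect.Default(t, server.URL)

        // Chained checks; a failed check marks this instance as failed.
        e.GET("/fruits").
            Expect().
            Status(http.StatusOK).
            JSON().Array()
    }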
Package redisc implements a redis cluster client on top of the redigo client package. It supports all commands that can be executed on a redis cluster, including pub-sub, scripts and read-only connections to read data from replicas. See http://redis.io/topics/cluster-spec for details. The package defines two main types: Cluster and Conn. Both are described in more detail below, but the Cluster manages the mapping of keys (or more exactly, hash slots computed from keys) to a group of nodes that form a redis cluster, and a Conn manages a connection to this cluster. The package is designed such that for simple uses, or when keys have been carefully named to play well with a redis cluster, a Cluster value can be used as a drop-in replacement for a redis.Pool from the redigo package. Similarly, the Conn type implements redigo's redis.Conn interface (and the augmented redis.ConnWithTimeout one too), so the API to execute commands is the same - in fact the redisc package uses the redigo package as its only third-party dependency. When more control is needed, the package offers some extra behaviour specific to working with a redis cluster: Slot and SplitBySlot functions to compute the slot for a given key and to split a list of keys into groups of keys from the same slot, so that each group can safely be handled using the same connection. *Conn.Bind (or the BindConn package-level helper function) to explicitly specify the keys that will be used with the connection so that the right node is selected, instead of relying on the automatic detection based on the first parameter of the command. *Conn.ReadOnly (or the ReadOnlyConn package-level helper function) to mark a connection as read-only, allowing commands to be served by a replica instead of the master. RetryConn to wrap a connection into one that automatically follows redirections when the cluster moves slots around. Helper functions to deal with cluster-specific errors. The Cluster type manages a redis cluster and offers an interface compatible with redigo's redis.Pool: Along with some additional methods specific to a cluster: If the CreatePool function field is set, then a redis.Pool is created to manage connections to each of the cluster's nodes. A call to Get returns a connection from that pool. The Dial method, on the other hand, guarantees that the returned connection will not be managed by a pool, even if CreatePool is set. It calls redigo's redis.Dial function to create the unpooled connection, passing along any DialOptions set on the cluster. If the cluster's CreatePool field is nil, Get behaves the same as Dial. The Refresh method refreshes the cluster's internal mapping of hash slots to nodes. It should typically be called only once, after the cluster is created and before it is used, so that the first connections already benefit from smart routing. It is automatically kept up-to-date based on the redis MOVED responses afterwards. The EachNode method visits each node in the cluster and calls the provided function with a connection to that node, which may be useful to run diagnostics commands on each node or to collect keys across the whole cluster. The Stats method returns the pool statistics for each node, with the node's address as key of the map. A cluster must be closed once it is no longer used to release its resources. The connection returned from Get or Dial is a redigo redis.Conn interface (that also implements redis.ConnWithTimeout), with a concrete type of *Conn.
In addition to the interface's required methods, *Conn adds the following methods: The returned connection is not yet connected to any node; it is "bound" to a specific node only when a call to Do, Send, Receive or Bind is made. For Do, Send and Receive, the node selection is implicit, it uses the first parameter of the command, and computes the hash slot assuming that first parameter is a key. It then binds the connection to the node corresponding to that slot. If there are no parameters for the command, or if there is no command (e.g. in a call to Receive), a random node is selected. Bind is explicit, it gives control to the caller over which node to select by specifying a list of keys that the caller wishes to handle with the connection. All keys must belong to the same slot, and the connection must not already be bound to a node, otherwise an error is returned. On success, the connection is bound to the node holding the slot of the specified key(s). Because the connection is returned as a redis.Conn interface, a type assertion must be used to access the underlying *Conn and to be able to call Bind: The BindConn package-level function is provided as a helper for this common use-case. The ReadOnly method marks the connection as read-only, meaning that it will attempt to connect to a replica instead of the master node for its slot. Once bound to a node, the READONLY redis command is sent automatically, so it doesn't have to be sent explicitly before use. ReadOnly must be called before the connection is bound to a node, otherwise an error is returned. For the same reason as for Bind, a type assertion must be used to call ReadOnly on a *Conn, so a package-level helper function is also provided, ReadOnlyConn. There is no ReadWrite method, because it can be sent as a normal redis command and will essentially end that connection (all commands will now return MOVED errors). If the connection was wrapped in a RetryConn call, then it will automatically follow the redirection to the master node (see the Redirections section). The connection must be closed after use, to release the underlying resources. The redis cluster may return MOVED and ASK errors when the node that received the command doesn't currently hold the slot corresponding to the key. The package cannot reliably handle those redirections automatically because the redirection error may be returned for a pipeline of commands, some of which may have succeeded. However, a connection can be wrapped by a call to RetryConn, which returns a redis.Conn interface where only calls to Do, Close and Err can succeed. That means pipelining is not supported, and only a single command can be executed at a time, but it will automatically handle MOVED and ASK replies, as well as TRYAGAIN errors. Note that even if RetryConn is not used, the cluster always updates its mapping of slots to nodes automatically by keeping track of MOVED replies. The concurrency model is similar to that of the redigo package: Cluster methods are safe to call concurrently (like redis.Pool). Connections do not support concurrent calls to write methods (Send, Flush) or concurrent calls to the read method (Receive). Connections do allow a concurrent reader and writer. Because the Do method combines the functionality of Send, Flush and Receive, it cannot be called concurrently with other methods. The Bind and ReadOnly methods are safe to call concurrently, but there is not much point in doing so, as both will fail if the connection is already bound. Create and use a cluster.
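A minimal sketch of creating and using a cluster along these lines (node addresses, pool sizes and the retry parameters are illustrative):

    package main

    import (
        "log"
        "time"

        "github.com/gomodule/redigo/redis"
        "github.com/mna/redisc"
    )

    // createPool is used by the Cluster to create a redis.Pool per node.
    func createPool(addr string, opts ...redis.DialOption) (*redis.Pool, error) {
        return &redis.Pool{
            MaxIdle:     5,
            MaxActive:   10,
            IdleTimeout: time.Minute,
            Dial: func() (redis.Conn, error) {
                return redis.Dial("tcp", addr, opts...)
            },
        }, nil
    }

    func main() {
        cluster := &redisc.Cluster{
            StartupNodes: []string{":7000", ":7001", ":7002"},
            DialOptions:  []redis.DialOption{redis.DialConnectTimeout(5 * time.Second)},
            CreatePool:   createPool,
        }
        defer cluster.Close()

        // Refresh the slot mapping once so the first connections are routed correctly.
        if err := cluster.Refresh(); err != nil {
            log.Fatal(err)
        }

        conn := cluster.Get()
        defer conn.Close()

        // Wrap with RetryConn to automatically follow MOVED/ASK redirections.
        rc, err := redisc.RetryConn(conn, 3, 100*time.Millisecond)
        if err != nil {
            log.Fatal(err)
        }
        if _, err := rc.Do("SET", "mykey", "value"); err != nil {
            log.Fatal(err)
        }
    }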
Package grip provides a flexible logging package for basic Go programs. Drawing inspiration from Go and Python's standard library logging, as well as systemd's journal service, and other logging systems, Grip provides a number of very powerful logging abstractions in one high-level package. The central type of the grip package is the Journaler type, instances of which provide a distinct log-capturing system. For ease, following from the Go standard library, the grip package provides parallel public methods that use an internal "standard" Journaler instance in the grip package, which has some defaults configured and may be sufficient for many use cases. The send.Sender interface provides a way of changing the logging backend, and the send package provides a number of alternate implementations of logging systems, including: systemd's journal, logging to standard output, logging to a file, and generic syslog support. The message.Composer interface is the representation of all messages. They are implemented to provide a raw structured form as well as a string representation for more conventional logging output. Furthermore they are intended to be easy to produce, and defer more expensive processing until they're being logged, to avoid expensive operations for messages that fall below the logging threshold. Logging helpers exist for the following levels: These methods accept either strings (message content) or types that implement the message.MessageComposer interface. Composer types make it possible to delay generating a message unless the logger is over the logging threshold. Use this to avoid expensive serialization operations for suppressed logging operations. All levels also have additional methods with `ln` and `f` appended to the end of the method name which allow Println() and Printf() style functionality. You must pass printf/println-style arguments to these methods. The Conditional logging methods take two arguments, a Boolean, and a message argument. Messages can be strings, objects that implement the MessageComposer interface, or errors. If the condition boolean is true, the threshold level is met, and the message to log is not an empty string, then it logs the resolved message. Use conditional logging methods to potentially suppress log messages based on situations orthogonal to log level, with "log sometimes" or "log rarely" semantics. Combine with MessageComposers to avoid expensive message building operations. The MultiCatcher type makes it possible to collect errors from a group of operations and then aggregate them as a single error.
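For illustration only, the package-level helpers described above are used roughly as follows (the exact helper names such as Infof and ErrorWhen are assumed from this description; consult the package for the definitive set):

    package main

    import (
        "errors"

        "github.com/mongodb/grip"
    )

    func main() {
        grip.Info("service starting")          // plain message
        grip.Infof("listening on %s", ":8080") // Printf-style variant

        err := errors.New("connection reset")
        // Conditional logging: the message is emitted only when the condition is true.
        grip.ErrorWhen(err != nil, err)
    }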
Package cbor provides a fuzz-tested CBOR encoder and decoder with full support for float16, Canonical CBOR, CTAP2 Canonical CBOR, and custom settings. https://github.com/fxamacker/cbor/releases Encoding options allow "preferred serialization" by encoding integers and floats to their smallest forms (like float16) when values fit. Go struct tags like `cbor:"name,omitempty"` and `json:"name,omitempty"` work as expected. If both struct tags are specified then `cbor` is used. Struct tags like "keyasint", "toarray", and "omitempty" make it easy to use very compact formats like COSE and CWT (CBOR Web Tokens) with structs. For example, the "toarray" struct tag encodes/decodes struct fields as array elements. And "keyasint" struct tag encodes/decodes struct fields to values of maps with specified int keys. fxamacker/cbor-fuzz provides coverage-guided fuzzing for this package. For latest API docs, see: https://github.com/fxamacker/cbor#api
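For instance, a sketch along the lines of the package's CWT/COSE examples (the field layouts are illustrative, not normative):

    package main

    import (
        "fmt"

        "github.com/fxamacker/cbor/v2"
    )

    // "keyasint" maps struct fields to integer map keys, as used by CWT claims.
    type claims struct {
        Iss string `cbor:"1,keyasint,omitempty"`
        Exp int64  `cbor:"4,keyasint,omitempty"`
    }

    // "toarray" encodes/decodes the struct's fields as CBOR array elements.
    type signedMessage struct {
        _         struct{} `cbor:",toarray"`
        Protected []byte
        Payload   []byte
        Signature []byte
    }

    func main() {
        b, err := cbor.Marshal(claims{Iss: "issuer", Exp: 1700000000})
        if err != nil {
            panic(err)
        }

        var c claims
        if err := cbor.Unmarshal(b, &c); err != nil {
            panic(err)
        }
        fmt.Printf("%x %+v\n", b, c)

        _ = signedMessage{} // defined only to illustrate the toarray tag
    }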
Code generated for package main by go-bindata DO NOT EDIT. (@generated) sources: sample-bchd.conf bchd is a full-node bitcoin cash implementation written in Go. The default options are sane for most users. This means bchd will work 'out of the box' for most users. However, there are also a wide variety of flags that can be used to control it. The following section provides a usage overview which enumerates the flags. An interesting point to note is that the long form of all of these options (except -C) can be specified in a configuration file that is automatically parsed when bchd starts up. By default, the configuration file is located at ~/.bchd/bchd.conf on POSIX-style operating systems and %LOCALAPPDATA%\bchd\bchd.conf on Windows. The -C (--configfile) flag, as shown below, can be used to override this location. Usage: Application Options: Help Options:
Package pointer implements Andersen's analysis, an inclusion-based pointer analysis algorithm first described in (Andersen, 1994). A pointer analysis relates every pointer expression in a whole program to the set of memory locations to which it might point. This information can be used to construct a call graph of the program that precisely represents the destinations of dynamic function and method calls. It can also be used to determine, for example, which pairs of channel operations operate on the same channel. The package allows the client to request a set of expressions of interest for which the points-to information will be returned once the analysis is complete. In addition, the client may request that a callgraph is constructed. The example program in example_test.go demonstrates both of these features. Clients should not request more information than they need since it may increase the cost of the analysis significantly. Our algorithm is INCLUSION-BASED: the points-to sets for x and y will be related by pts(y) ⊇ pts(x) if the program contains the statement y = x. It is FLOW-INSENSITIVE: it ignores all control flow constructs and the order of statements in a program. It is therefore a "MAY ALIAS" analysis: its facts are of the form "P may/may not point to L", not "P must point to L". It is FIELD-SENSITIVE: it builds separate points-to sets for distinct fields, such as x and y in struct { x, y *int }. It is mostly CONTEXT-INSENSITIVE: most functions are analyzed once, so values can flow in at one call to the function and return out at another. Only some smaller functions are analyzed with consideration of their calling context. It has a CONTEXT-SENSITIVE HEAP: objects are named by both allocation site and context, so the objects returned by two distinct calls to f: are distinguished up to the limits of the calling context. It is a WHOLE PROGRAM analysis: it requires SSA-form IR for the complete Go program and summaries for native code. See the (Hind, PASTE'01) survey paper for an explanation of these terms. The analysis is fully sound when invoked on pure Go programs that do not use reflection or unsafe.Pointer conversions. In other words, if there is any possible execution of the program in which pointer P may point to object O, the analysis will report that fact. By default, the "reflect" library is ignored by the analysis, as if all its functions were no-ops, but if the client enables the Reflection flag, the analysis will make a reasonable attempt to model the effects of calls into this library. However, this comes at a significant performance cost, and not all features of that library are yet implemented. In addition, some simplifying approximations must be made to ensure that the analysis terminates; for example, reflection can be used to construct an infinite set of types and values of those types, but the analysis arbitrarily bounds the depth of such types. Most but not all reflection operations are supported. In particular, addressable reflect.Values are not yet implemented, so operations such as (reflect.Value).Set have no analytic effect. The pointer analysis makes no attempt to understand aliasing between the operand x and result y of an unsafe.Pointer conversion: It is as if the conversion allocated an entirely new object: The analysis cannot model the aliasing effects of functions written in languages other than Go, such as runtime intrinsics in C or assembly, or code accessed via cgo. The result is as if such functions are no-ops. 
However, various important intrinsics are understood by the analysis, along with built-ins such as append. The analysis currently provides no way for users to specify the aliasing effects of native code. ------------------------------------------------------------------------ The remaining documentation is intended for package maintainers and pointer analysis specialists. Maintainers should have a solid understanding of the referenced papers (especially those by H&L and PKH) before making significant changes. The implementation is similar to that described in (Pearce et al, PASTE'04). Unlike many algorithms which interleave constraint generation and solving, constructing the callgraph as they go, this implementation for the most part observes a phase ordering (generation before solving), with only simple (copy) constraints being generated during solving. (The exception is reflection, which creates various constraints during solving as new types flow to reflect.Value operations.) This improves the traction of presolver optimisations, but imposes certain restrictions, e.g. potential context sensitivity is limited since all variants must be created a priori. A type is said to be "pointer-like" if it is a reference to an object. Pointer-like types include pointers and also interfaces, maps, channels, functions and slices. We occasionally use C's x->f notation to distinguish the case where x is a struct pointer from x.f where x is a struct value. Pointer analysis literature (and our comments) often uses the notation dst=*src+offset to mean something different than what it means in Go. It means: for each node index p in pts(src), the node index p+offset is in pts(dst). Similarly *dst+offset=src is used for store constraints and dst=src+offset for offset-address constraints. Nodes are the key datastructure of the analysis, and have a dual role: they represent both constraint variables (equivalence classes of pointers) and members of points-to sets (things that can be pointed at, i.e. "labels"). Nodes are naturally numbered. The numbering enables compact representations of sets of nodes such as bitvectors (or BDDs); and the ordering enables a very cheap way to group related nodes together. For example, passing n parameters consists of generating n parallel constraints from caller+i to callee+i for 0<=i<n. The zero nodeid means "not a pointer". For simplicity, we generate flow constraints even for non-pointer types such as int. The pointer equivalence (PE) presolver optimization detects which variables cannot point to anything; this includes not only all variables of non-pointer types (such as int) but also variables of pointer-like types if they are always nil, or are parameters to a function that is never called. Each node represents a scalar part of a value or object. Aggregate types (structs, tuples, arrays) are recursively flattened out into a sequential list of scalar component types, and all the elements of an array are represented by a single node. (The flattening of a basic type is a list containing a single node.) Nodes are connected into a graph with various kinds of labelled edges: simple edges (or copy constraints) represent value flow. Complex edges (load, store, etc) trigger the creation of new simple edges during the solving phase. Conceptually, an "object" is a contiguous sequence of nodes denoting an addressable location: something that a pointer can point to.
The first node of an object has a non-nil obj field containing information about the allocation: its size, context, and ssa.Value. Objects include: Many objects have no Go types. For example, the func, map and chan type kinds in Go are all varieties of pointers, but their respective objects are actual functions (executable code), maps (hash tables), and channels (synchronized queues). Given the way we model interfaces, they too are pointers to "tagged" objects with no Go type. And an *ssa.Global denotes the address of a global variable, but the object for a Global is the actual data. So, the type of an ssa.Value that creates an object is "off by one indirection": a pointer to the object. The individual nodes of an object are sometimes referred to as "labels". For uniformity, all objects have a non-zero number of fields, even those of the empty type struct{}. (All arrays are treated as if of length 1, so there are no empty arrays. The empty tuple is never address-taken, so is never an object.) A tagged object has the following layout: The T node's typ field is the dynamic type of the "payload": the value v which follows, flattened out. The T node's obj has the otTagged flag. Tagged objects are needed when generalizing across types: interfaces, reflect.Values, reflect.Types. Each of these three types is modelled as a pointer that exclusively points to tagged objects. Tagged objects may be indirect (obj.flags ⊇ {otIndirect}) meaning that the value v is not of type T but *T; this is used only for reflect.Values that represent lvalues. (These are not implemented yet.) Variables of the following "scalar" types may be represented by a single node: basic types, pointers, channels, maps, slices, 'func' pointers, interfaces. Pointers: Nothing to say here, oddly. Basic types (bool, string, numbers, unsafe.Pointer): Currently all fields in the flattening of a type, including non-pointer basic types such as int, are represented in objects and values. Though non-pointer nodes within values are uninteresting, non-pointer nodes in objects may be useful (if address-taken) because they permit the analysis to deduce, in this example, that p points to s.x. If we ignored such object fields, we could only say that p points somewhere within s. All other basic types are ignored. Expressions of these types have zero nodeid, and fields of these types within other aggregate types are omitted. unsafe.Pointers are not modelled as pointers, so a conversion of an unsafe.Pointer to *T is (unsoundly) treated as equivalent to new(T). Channels: An expression of type 'chan T' is a kind of pointer that points exclusively to channel objects, i.e. objects created by MakeChan (or reflection). 'chan T' is treated like *T. *ssa.MakeChan is treated as equivalent to new(T). *ssa.Send and receive (*ssa.UnOp(ARROW)) are equivalent to store and load, respectively. Maps: An expression of type 'map[K]V' is a kind of pointer that points exclusively to map objects, i.e. objects created by MakeMap (or reflection). map[K]V is treated like *M where M = struct{k K; v V}. *ssa.MakeMap is equivalent to new(M). *ssa.MapUpdate is equivalent to *y=x where *y and x have type M. *ssa.Lookup is equivalent to y=x.v where x has type *M. Slices: A slice []T, which dynamically resembles a struct{array *T, len, cap int}, is treated as if it were just a *T pointer; the len and cap fields are ignored. *ssa.MakeSlice is treated like new([1]T): an allocation of a new array. *ssa.Index on a slice is equivalent to a load.
*ssa.IndexAddr on a slice returns the address of the sole element of the slice, i.e. the same address. *ssa.Slice is treated as a simple copy. Functions: An expression of type 'func...' is a kind of pointer that points exclusively to function objects. A function object has the following layout: There may be multiple function objects for the same *ssa.Function due to context-sensitive treatment of some functions. The first node is the function's identity node. Associated with every callsite is a special "targets" variable, whose pts() contains the identity node of each function to which the call may dispatch. Identity words are not otherwise used during the analysis, but we construct the call graph from the pts() solution for such nodes. The following block of contiguous nodes represents the flattened-out types of the parameters ("P-block") and results ("R-block") of the function object. The treatment of free variables of closures (*ssa.FreeVar) is like that of global variables; it is not context-sensitive. *ssa.MakeClosure instructions create copy edges to Captures. A Go value of type 'func' (i.e. a pointer to one or more functions) is a pointer whose pts() contains function objects. The valueNode() for an *ssa.Function returns a singleton for that function. Interfaces: An expression of type 'interface{...}' is a kind of pointer that points exclusively to tagged objects. All tagged objects pointed to by an interface are direct (the otIndirect flag is clear) and concrete (the tag type T is not itself an interface type). The associated ssa.Value for an interface's tagged objects may be an *ssa.MakeInterface instruction, or nil if the tagged object was created by an intrinsic (e.g. reflection). Constructing an interface value causes generation of constraints for all of the concrete type's methods; we can't tell a priori which ones may be called. TypeAssert y = x.(T) is implemented by a dynamic constraint triggered by each tagged object O added to pts(x): a typeFilter constraint if T is an interface type, or an untag constraint if T is a concrete type. A typeFilter tests whether O.typ implements T; if so, O is added to pts(y). An untagFilter tests whether O.typ is assignable to T, and if so, a copy edge O.v -> y is added. ChangeInterface is a simple copy because the representation of tagged objects is independent of the interface type (in contrast to the "method tables" approach used by the gc runtime). y := Invoke x.m(...) is implemented by allocating contiguous P/R blocks for the callsite and adding a dynamic rule triggered by each tagged object added to pts(x). The rule adds param/results copy edges to/from each discovered concrete method. (Q. Why do we model an interface as a pointer to a pair of type and value, rather than as a pair of a pointer to type and a pointer to value? A. Control-flow joins would merge interfaces ({T1}, {V1}) and ({T2}, {V2}) to make ({T1,T2}, {V1,V2}), leading to the infeasible and type-unsafe combination (T1,V2). Treating the value and its concrete type as inseparable makes the analysis type-safe.) Type parameters: Type parameters are not directly supported by the analysis. Calls to generic functions will be left as if they had empty bodies. Users of the package are expected to use the ssa.InstantiateGenerics builder mode when building code that uses or depends on code containing generics. reflect.Value: A reflect.Value is modelled very similarly to an interface{}, i.e. as a pointer exclusively to tagged objects, but with two generalizations. 1.
a reflect.Value that represents an lvalue points to an indirect (obj.flags ⊇ {otIndirect}) tagged object, which has a similar layout to a tagged object except that the value is a pointer to the dynamic type. Indirect tagged objects preserve the correct aliasing so that mutations made by (reflect.Value).Set can be observed. Indirect objects only arise when an lvalue is derived from an rvalue by indirection, e.g. the following code: Whether indirect or not, the concrete type of the tagged object corresponds to the user-visible dynamic type, and the existence of a pointer is an implementation detail. (NB: indirect tagged objects are not yet implemented) 2. The dynamic type tag of a tagged object pointed to by a reflect.Value may be an interface type; it need not be concrete. This arises in code such as this: pts(eface) is a singleton containing an interface{}-tagged object. That tagged object's payload is an interface{} value, i.e. the pts of the payload contains only concrete-tagged objects, although in this example it's the zero interface{} value, so its pts is empty. reflect.Type: Just as in the real "reflect" library, we represent a reflect.Type as an interface whose sole implementation is the concrete type, *reflect.rtype. (This choice is forced on us by go/types: clients cannot fabricate types with arbitrary method sets.) rtype instances are canonical: there is at most one per dynamic type. (rtypes are in fact large structs but since identity is all that matters, we represent them by a single node.) The payload of each *rtype-tagged object is an *rtype pointer that points to exactly one such canonical rtype object. We exploit this by setting the node.typ of the payload to the dynamic type, not '*rtype'. This saves us an indirection in each resolution rule. As an optimisation, *rtype-tagged objects are canonicalized too. Aggregate types: Aggregate types are treated as if all directly contained aggregates are recursively flattened out. Structs: *ssa.Field y = x.f creates a simple edge to y from x's node at f's offset. *ssa.FieldAddr y = &x->f requires a dynamic closure rule to create The nodes of a struct consist of a special 'identity' node (whose type is that of the struct itself), followed by the nodes for all the struct's fields, recursively flattened out. A pointer to the struct is a pointer to its identity node. That node allows us to distinguish a pointer to a struct from a pointer to its first field. Field offsets are logical field offsets (plus one for the identity node), so the sizes of the fields can be ignored by the analysis. (The identity node is non-traditional but enables the distinction described above, which is valuable for code comprehension tools. Typical pointer analyses for C, whose purpose is compiler optimization, must soundly model unsafe.Pointer (void*) conversions, and this requires fidelity to the actual memory layout using physical field offsets.) Arrays: We model an array by an identity node (whose type is that of the array itself) followed by a node representing all the elements of the array; the analysis does not distinguish elements with different indices. Effectively, an array is treated like struct{elem T}, a load y=x[i] like y=x.elem, and a store x[i]=y like x.elem=y; the index i is ignored. A pointer to an array is a pointer to its identity node. (A slice is also a pointer to an array's identity node.)
The identity node allows us to distinguish a pointer to an array from a pointer to one of its elements, but it is rather costly because it introduces more offset constraints into the system. Furthermore, sound treatment of unsafe.Pointer would require us to dispense with this node. Arrays may be allocated by Alloc, by make([]T), by calls to append, and via reflection. Tuples (T, ...): Tuples are treated like structs with naturally numbered fields. *ssa.Extract is analogous to *ssa.Field. However, tuples have no identity field since by construction, they cannot be address-taken. There are three kinds of function call: Cases 1 and 2 apply equally to methods and standalone functions. Static calls: A static call consists of three steps: A static function call is little more than two struct value copies between the P/R blocks of caller and callee: Context sensitivity: Static calls (alone) may be treated context sensitively, i.e. each callsite may cause a distinct re-analysis of the callee, improving precision. Our current context-sensitivity policy treats all intrinsics and getter/setter methods in this manner since such functions are small and seem like an obvious source of spurious confluences, though this has not yet been evaluated. Dynamic function calls: Dynamic calls work in a similar manner except that the creation of copy edges occurs dynamically, in a similar fashion to a pair of struct copies in which the callee is indirect: (Recall that the function object's P- and R-blocks are contiguous.) Interface method invocation: For invoke-mode calls, we create a params/results block for the callsite and attach a dynamic closure rule to the interface. For each new tagged object that flows to the interface, we look up the concrete method, find its function object, and connect its P/R blocks to the callsite's P/R blocks, adding copy edges to the graph during solving. Recording call targets: The analysis notifies its clients of each callsite it encounters, passing a CallSite interface. Among other things, the CallSite contains a synthetic constraint variable ("targets") whose points-to solution includes the set of all function objects to which the call may dispatch. It is via this mechanism that the callgraph is made available. Clients may also elect to be notified of callgraph edges directly; internally this just iterates all "targets" variables' pts(·)s. We implement Hash-Value Numbering (HVN), a pre-solver constraint optimization described in Hardekopf & Lin, SAS'07. This is documented in more detail in hvn.go. We intend to add its cousins HR and HU in future. The solver is currently a naive Andersen-style implementation; it does not perform online cycle detection, though we plan to add solver optimisations such as Hybrid- and Lazy- Cycle Detection from (Hardekopf & Lin, PLDI'07). It uses difference propagation (Pearce et al, SQC'04) to avoid redundant re-triggering of closure rules for values already seen. Points-to sets are represented using sparse bit vectors (similar to those used in LLVM and gcc), which are more space- and time-efficient than sets based on Go's built-in map type or dense bit vectors. Nodes are permuted prior to solving so that object nodes (which may appear in points-to sets) are lower numbered than non-object (var) nodes. This improves the density of the set over which the PTSs range, and thus the efficiency of the representation. Partly thanks to avoiding map iteration, the execution of the solver is 100% deterministic, a great help during debugging. Andersen, L. O. 1994.
Program analysis and specialization for the C programming language. Ph.D. dissertation. DIKU, University of Copenhagen. David J. Pearce, Paul H. J. Kelly, and Chris Hankin. 2004. Efficient field-sensitive pointer analysis for C. In Proceedings of the 5th ACM SIGPLAN-SIGSOFT workshop on Program analysis for software tools and engineering (PASTE '04). ACM, New York, NY, USA, 37-42. http://doi.acm.org/10.1145/996821.996835 David J. Pearce, Paul H. J. Kelly, and Chris Hankin. 2004. Online Cycle Detection and Difference Propagation: Applications to Pointer Analysis. Software Quality Control 12, 4 (December 2004), 311-337. http://dx.doi.org/10.1023/B:SQJO.0000039791.93071.a2 David Grove and Craig Chambers. 2001. A framework for call graph construction algorithms. ACM Trans. Program. Lang. Syst. 23, 6 (November 2001), 685-746. http://doi.acm.org/10.1145/506315.506316 Ben Hardekopf and Calvin Lin. 2007. The ant and the grasshopper: fast and accurate pointer analysis for millions of lines of code. In Proceedings of the 2007 ACM SIGPLAN conference on Programming language design and implementation (PLDI '07). ACM, New York, NY, USA, 290-299. http://doi.acm.org/10.1145/1250734.1250767 Ben Hardekopf and Calvin Lin. 2007. Exploiting pointer and location equivalence to optimize pointer analysis. In Proceedings of the 14th international conference on Static Analysis (SAS'07), Hanne Riis Nielson and Gilberto Filé (Eds.). Springer-Verlag, Berlin, Heidelberg, 265-280. Atanas Rountev and Satish Chandra. 2000. Off-line variable substitution for scaling points-to analysis. In Proceedings of the ACM SIGPLAN 2000 conference on Programming language design and implementation (PLDI '00). ACM, New York, NY, USA, 47-56. DOI=10.1145/349299.349310 http://doi.acm.org/10.1145/349299.349310 This program demonstrates how to use the pointer analysis to obtain a conservative call-graph of a Go program. It also shows how to compute the points-to set of a variable, in this case, (C).f's ch parameter.
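Condensed, the flow of that example program is roughly the following sketch; it assumes the main package and the queried ssa.Value have already been produced by the SSA builder:

    import (
        "fmt"

        "golang.org/x/tools/go/pointer"
        "golang.org/x/tools/go/ssa"
    )

    // analyze builds a callgraph for the program containing mainPkg and prints
    // the points-to set of one pointer-like value (for example a channel parameter).
    // Both arguments are assumed to come from the ssa/ssautil builders.
    func analyze(mainPkg *ssa.Package, query ssa.Value) error {
        cfg := &pointer.Config{
            Mains:          []*ssa.Package{mainPkg},
            BuildCallGraph: true,
        }
        // Register the value whose points-to set we want back from the analysis.
        cfg.AddQuery(query)

        result, err := pointer.Analyze(cfg)
        if err != nil {
            return err
        }

        // The callgraph and the queried points-to sets are available on the result.
        fmt.Println("callgraph nodes:", len(result.CallGraph.Nodes))
        for _, label := range result.Queries[query].PointsTo().Labels() {
            fmt.Println("may point to:", label)
        }
        return nil
    }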
Package dom provides GopherJS bindings for the JavaScript DOM APIs. This package is an in progress effort of providing idiomatic Go bindings for the DOM, wrapping the JavaScript DOM APIs. The API is neither complete nor frozen yet, but a great amount of the DOM is already useable. While the package tries to be idiomatic Go, it also tries to stick closely to the JavaScript APIs, so that one does not need to learn a new set of APIs if one is already familiar with it. One decision that hasn't been made yet is what parts exactly should be part of this package. It is, for example, possible that the canvas APIs will live in a separate package. On the other hand, types such as StorageEvent (the event that gets fired when the HTML5 storage area changes) will be part of this package, simply due to how the DOM is structured – even if the actual storage APIs might live in a separate package. This might require special care to avoid circular dependencies. The documentation for some of the identifiers is based on the MDN Web Docs by Mozilla Contributors (https://developer.mozilla.org/en-US/docs/Web/API), licensed under CC-BY-SA 2.5 (https://creativecommons.org/licenses/by-sa/2.5/). The usual entry point of using the dom package is by using the GetWindow() function which will return a Window, from which you can get things such as the current Document. The DOM has a big amount of different element and event types, but they all follow three interfaces. All functions that work on or return generic elements/events will return one of the three interfaces Element, HTMLElement or Event. In these interface values there will be concrete implementations, such as HTMLParagraphElement or FocusEvent. It's also not unusual that values of type Element also implement HTMLElement. In all cases, type assertions can be used. Example: Several functions in the JavaScript DOM return "live" collections of elements, that is collections that will be automatically updated when elements get removed or added to the DOM. Our bindings, however, return static slices of elements that, once created, will not automatically reflect updates to the DOM. This is primarily done so that slices can actually be used, as opposed to a form of iterator, but also because we think that magically changing data isn't Go's nature and that snapshots of state are a lot easier to reason about. This does not, however, mean that all objects are snapshots. Elements, events and generally objects that aren't slices or maps are simple wrappers around JavaScript objects, and as such attributes as well as method calls will always return the most current data. To reflect this behaviour, these bindings use pointers to make the semantics clear. Consider the following example: The above example will print `true`. Some objects in the JS API have two versions of attributes, one that returns a string and one that returns a DOMTokenList to ease manipulation of string-delimited lists. Some other objects only provide DOMTokenList, sometimes DOMSettableTokenList. To simplify these bindings, only the DOMTokenList variant will be made available, by the type TokenList. In cases where the string attribute was the only way to completely replace the value, our TokenList will provide Set([]string) and SetString(string) methods, which will be able to accomplish the same. Additionally, our TokenList will provide methods to convert it to strings and slices. This package has a relatively stable API. However, there will be backwards incompatible changes from time to time. 
This is because the package isn't complete yet, as well as because the DOM is a moving target, and APIs do change sometimes. While an attempt is made to reduce changing function signatures to a minimum, it can't always be guaranteed. Sometimes mistakes in the bindings are found that require changing arguments or return values. Interfaces defined in this package may also change on a semi-regular basis, as new methods are added to them. This happens because the bindings aren't complete and can never really be, as new features are added to the DOM.
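The type-assertion pattern mentioned earlier looks roughly like this sketch (the element ID and the SetTextContent call are illustrative; the code only builds under GopherJS):

    package main

    import (
        "honnef.co/go/js/dom"
    )

    func main() {
        doc := dom.GetWindow().Document()

        // Generic lookups return an Element; concrete element types are reached
        // through type assertions.
        el := doc.GetElementByID("intro")
        if p, ok := el.(*dom.HTMLParagraphElement); ok {
            p.SetTextContent("hello from Go")
        }
    }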
Package semver implements comparison of semantic version strings. In this package, semantic version strings may begin with a leading "v", as in "v1.0.0", or "1.0.0". The general form of a semantic version string accepted by this package is MAJOR[.MINOR[.PATCH[-PRERELEASE][+BUILD]]] where square brackets indicate optional parts of the syntax; MAJOR, MINOR, and PATCH are decimal integers without extra leading zeros; PRERELEASE and BUILD are each a series of non-empty dot-separated identifiers using only alphanumeric characters and hyphens; and all-numeric PRERELEASE identifiers must not have leading zeros. This package follows Semantic Versioning 2.0.0 (see semver.org) with one exception. It recognizes MAJOR and MAJOR.MINOR (with no prerelease or build suffixes) as shorthands for MAJOR.0.0 and MAJOR.MINOR.0.
dcrd is a full-node Decred implementation written in Go. The default options are sane for most users. This means dcrd will work 'out of the box' for most users. However, there are also a wide variety of flags that can be used to control it. The following section provides a usage overview which enumerates the flags. An interesting point to note is that the long form of all of these options (except -C) can be specified in a configuration file that is automatically parsed when dcrd starts up. By default, the configuration file is located at ~/.dcrd/dcrd.conf on POSIX-style operating systems and %LOCALAPPDATA%\dcrd\dcrd.conf on Windows. The -C (--configfile) flag, as shown below, can be used to override this location. Usage: Application Options: Help Options:
Package latlong maps from a latitude and longitude to a timezone. It uses the data from http://efele.net/maps/tz/world/ compressed down to an internal form optimized for low memory overhead and fast lookups at the expense of perfect accuracy when close to borders. The data files are compiled in to this package and do not require explicit loading.
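Usage is a single lookup call, roughly as in this sketch (it assumes the package's LookupZoneName entry point):

    package main

    import (
        "fmt"

        "github.com/bradfitz/latlong"
    )

    func main() {
        // Coordinates for San Francisco; the expected result is "America/Los_Angeles".
        zone := latlong.LookupZoneName(37.7833, -122.4167)
        fmt.Println(zone)
    }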
sqltocsv is a package to make it dead easy to turn arbitrary database query results (in the form of database/sql Rows) into CSV output. Source and README at https://github.com/joho/sqltocsv
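A typical use, following the README, is a single call such as WriteFile (the query, driver and file name below are illustrative):

    package main

    import (
        "database/sql"
        "log"

        "github.com/joho/sqltocsv"
        _ "github.com/mattn/go-sqlite3" // any database/sql driver works; sqlite3 is only illustrative
    )

    func main() {
        db, err := sql.Open("sqlite3", "example.db")
        if err != nil {
            log.Fatal(err)
        }
        defer db.Close()

        rows, err := db.Query("SELECT id, name FROM users")
        if err != nil {
            log.Fatal(err)
        }
        defer rows.Close()

        // Write the result set straight to a CSV file.
        if err := sqltocsv.WriteFile("users.csv", rows); err != nil {
            log.Fatal(err)
        }
    }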
Package zapr defines an implementation of the github.com/go-logr/logr interfaces built on top of Zap (go.uber.org/zap). A new logr.Logger can be constructed from an existing zap.Logger using the NewLogger function: For the most part, concepts in Zap correspond directly with those in logr. Unlike Zap, all fields *must* be in the form of sugared fields -- it's illegal to pass a strongly-typed Zap field in a key position to any of the log methods. Levels in logr correspond to custom debug levels in Zap. Any given level in logr is represented by its inverse in zap (`zapLevel = -1*logrLevel`). For example, V(2) is equivalent to log level -2 in Zap, while V(1) is equivalent to Zap's DebugLevel.
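A small sketch of the construction and level mapping described above:

    package main

    import (
        "github.com/go-logr/zapr"
        "go.uber.org/zap"
    )

    func main() {
        zapLog, err := zap.NewDevelopment()
        if err != nil {
            panic(err)
        }
        logger := zapr.NewLogger(zapLog)

        // Only sugared (loosely typed) key/value pairs are allowed.
        logger.Info("request handled", "status", 200, "path", "/healthz")

        // V(1) corresponds to zap's DebugLevel (-1), V(2) to level -2, and so on.
        logger.V(1).Info("verbose detail", "attempt", 3)
    }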
Package gokv contains a simple key-value store abstraction in the form of a Go interface. Implementations of the gokv.Store interface can be found in the sub-packages. Example code for using Redis: More details can be found on https://github.com/philippgille/gokv.
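The Redis example from the project README looks roughly like this sketch (it assumes the redis sub-package's DefaultOptions and NewClient; details may differ between versions):

    package main

    import (
        "fmt"

        "github.com/philippgille/gokv/redis"
    )

    type foo struct {
        Bar string
    }

    func main() {
        options := redis.DefaultOptions // localhost:6379 by default

        client, err := redis.NewClient(options)
        if err != nil {
            panic(err)
        }
        defer client.Close()

        if err := client.Set("foo123", foo{Bar: "baz"}); err != nil {
            panic(err)
        }

        var val foo
        found, err := client.Get("foo123", &val)
        if err != nil {
            panic(err)
        }
        if found {
            fmt.Println(val.Bar)
        }
    }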
Package bitio provides an optimized bit-level Reader and Writer. You can use Reader.ReadBits() to read an arbitrary number of bits from an io.Reader and return it as a uint64, and Writer.WriteBits() to write an arbitrary number of bits of a uint64 value to an io.Writer. Both Reader and Writer also provide optimized methods for reading / writing 1 bit of information in the form of a bool value: Reader.ReadBool() and Writer.WriteBool(). These make this package ideal for compression algorithms that use Huffman coding for example, where the decision whether to step left or right in the Huffman tree is the most frequent operation. Reader and Writer give a bit-level view of the underlying io.Reader and io.Writer, but they also provide a byte-level view (io.Reader and io.Writer) at the same time. This means you can also use the Reader.Read() and Writer.Write() methods to read and write slices of bytes. These will give you the best performance if the underlying io.Reader and io.Writer are aligned to a byte boundary (else all the individual bytes are assembled from / spread to multiple bytes). You can ensure byte boundary alignment by calling the Align() method of Reader and Writer. As an extra, io.ByteReader and io.ByteWriter are also implemented. The more general highest-bits-first order is used. So for example if the input provides the bytes 0x8f and 0x55: Then ReadBits will return the following values: Writing the above values would result in the same sequence of bytes: All ReadXXX() and WriteXXX() methods return an error which you are expected to handle. For convenience, there are also matching TryReadXXX() and TryWriteXXX() methods which do not return an error. Instead they store the (first) error in the Reader.TryError / Writer.TryError field which you can inspect later. These TryXXX() methods are a no-op if a TryError has been encountered before, so it's safe to call multiple TryXXX() methods and defer the error checking. For example: This allows you to easily convert the result of individual ReadBits(), like this: And similarly: For performance reasons, Reader and Writer do not keep track of the number of read or written bits. If you happen to need the total number of processed bits, you may use the CountReader and CountWriter types which have an identical API to that of Reader and Writer, but they also maintain the number of processed bits which you can query using the BitsCount field.
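A short sketch of the Try pattern and the bit-level calls described above (the written values are arbitrary):

    package main

    import (
        "bytes"
        "fmt"

        "github.com/icza/bitio"
    )

    func main() {
        buf := &bytes.Buffer{}

        w := bitio.NewWriter(buf)
        w.TryWriteBits(0x8f, 8) // a whole byte, written bit by bit
        w.TryWriteBool(true)    // a single bit
        w.TryWriteBits(0x2a, 7) // 7 more bits to complete the second byte
        if w.TryError != nil {
            panic(w.TryError)
        }
        w.Close() // flushes any partially filled last byte

        r := bitio.NewReader(buf)
        high := r.TryReadBits(4) // highest 4 bits of 0x8f first: 0x8
        flag := r.TryReadBool()  // next bit of 0x8f: true
        if r.TryError != nil {
            panic(r.TryError)
        }
        fmt.Println(high, flag)
    }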
Package gofpdf implements a PDF document generator with high level support for text, drawing and images. - UTF-8 support - Choice of measurement unit, page format and margins - Page header and footer management - Automatic page breaks, line breaks, and text justification - Inclusion of JPEG, PNG, GIF, TIFF and basic path-only SVG images - Colors, gradients and alpha channel transparency - Outline bookmarks - Internal and external links - TrueType, Type1 and encoding support - Page compression - Lines, Bézier curves, arcs, and ellipses - Rotation, scaling, skewing, translation, and mirroring - Clipping - Document protection - Layers - Templates - Barcodes - Charting facility - Import PDFs as templates gofpdf has no dependencies other than the Go standard library. All tests pass on Linux, Mac and Windows platforms. gofpdf supports UTF-8 TrueType fonts and “right-to-left” languages. Note that Chinese, Japanese, and Korean characters may not be included in many general purpose fonts. For these languages, a specialized font (for example, NotoSansSC for simplified Chinese) can be used. Also, support is provided to automatically translate UTF-8 runes to code page encodings for languages that have fewer than 256 glyphs. This repository will not be maintained, at least for some unknown duration. But it is hoped that gofpdf has a bright future in the open source world. Due to Go’s promise of compatibility, gofpdf should continue to function without modification for a longer time than would be the case with many other languages. Forks should be based on the last viable commit. Tools such as active-forks can be used to select a fork that looks promising for your needs. If a particular fork looks like it has taken the lead in attracting followers, this README will be updated to point people in that direction. The efforts of all contributors to this project have been deeply appreciated. Best wishes to all of you. To install the package on your system, run Later, to receive updates, run The following Go code generates a simple PDF file. See the functions in the fpdf_test.go file (shown as examples in this documentation) for more advanced PDF examples. If an error occurs in an Fpdf method, an internal error field is set. After this occurs, Fpdf method calls typically return without performing any operations and the error state is retained. This error management scheme facilitates PDF generation since individual method calls do not need to be examined for failure; it is generally sufficient to wait until after Output() is called. For the same reason, if an error occurs in the calling application during PDF generation, it may be desirable for the application to transfer the error to the Fpdf instance by calling the SetError() method or the SetErrorf() method. At any time during the life cycle of the Fpdf instance, the error state can be determined with a call to Ok() or Err(). The error itself can be retrieved with a call to Error(). This package is a relatively straightforward translation from the original FPDF library written in PHP (despite the caveat in the introduction to Effective Go). The API names have been retained even though the Go idiom would suggest otherwise (for example, pdf.GetX() is used rather than simply pdf.X()). The similarity of the two libraries makes the original FPDF website a good source of information. It includes a forum and FAQ. However, some internal changes have been made. Page content is built up using buffers (of type bytes.Buffer) rather than repeated string concatenation. 
Errors are handled as explained above rather than panicking. Output is generated through an interface of type io.Writer or io.WriteCloser. A number of the original PHP methods behave differently based on the type of the arguments that are passed to them; in these cases additional methods have been exported to provide similar functionality. Font definition files are produced in JSON rather than PHP. A side effect of running go test ./... is the production of a number of example PDFs. These can be found in the gofpdf/pdf directory after the tests complete. Please note that these examples run in the context of a test. In order to run an example as a standalone application, you’ll need to examine fpdf_test.go for some helper routines, for example exampleFilename() and summary(). Example PDFs can be compared with reference copies in order to verify that they have been generated as expected. This comparison will be performed if a PDF with the same name as the example PDF is placed in the gofpdf/pdf/reference directory and if the third argument to ComparePDFFiles() in internal/example/example.go is true. (By default it is false.) The routine that summarizes an example will look for this file and, if found, will call ComparePDFFiles() to check the example PDF for equality with its reference PDF. If differences exist between the two files they will be printed to standard output and the test will fail. If the reference file is missing, the comparison is considered to succeed. In order to successfully compare two PDFs, the placement of internal resources must be consistent and the internal creation timestamps must be the same. To do this, the methods SetCatalogSort() and SetCreationDate() need to be called for both files. This is done automatically for all examples. Nothing special is required to use the standard PDF fonts (courier, helvetica, times, zapfdingbats) in your documents other than calling SetFont(). You should use AddUTF8Font() or AddUTF8FontFromBytes() to add a TrueType UTF-8 encoded font. Use the RTL() and LTR() methods to switch between “right-to-left” and “left-to-right” mode. In order to use a different non-UTF-8 TrueType or Type1 font, you will need to generate a font definition file and, if the font will be embedded into PDFs, a compressed version of the font file. This is done by calling the MakeFont function or using the included makefont command line utility. To create the utility, cd into the makefont subdirectory and run “go build”. This will produce a standalone executable named makefont. Select the appropriate encoding file from the font subdirectory and run the command as in the following example. In your PDF generation code, call AddFont() to load the font and, as with the standard fonts, SetFont() to begin using it. Most examples, including the package example, demonstrate this method. Good sources of free, open-source fonts include Google Fonts and DejaVu Fonts. The draw2d package is a two dimensional vector graphics library that can generate output in different forms. It uses gofpdf for its document production mode. gofpdf is a global community effort and you are invited to make it even better. If you have implemented a new feature or corrected a problem, please consider contributing your change to the project. A contribution that does not directly pertain to the core functionality of gofpdf should be placed in its own directory directly beneath the contrib directory. Here are guidelines for making submissions.
Your change should - be compatible with the MIT License - be properly documented - be formatted with go fmt - include an example in fpdf_test.go if appropriate - conform to the standards of golint and go vet, that is, golint . and go vet . should not generate any warnings - not diminish test coverage Pull requests are the preferred means of accepting your changes. gofpdf is released under the MIT License. It is copyrighted by Kurt Jung and the contributors acknowledged below. This package’s code and documentation are closely derived from the FPDF library created by Olivier Plathey, and a number of font and image resources are copied directly from it. Bruno Michel has provided valuable assistance with the code. Drawing support is adapted from the FPDF geometric figures script by David Hernández Sanz. Transparency support is adapted from the FPDF transparency script by Martin Hall-May. Support for gradients and clipping is adapted from FPDF scripts by Andreas Würmser. Support for outline bookmarks is adapted from Olivier Plathey by Manuel Cornes. Layer support is adapted from Olivier Plathey. Support for transformations is adapted from the FPDF transformation script by Moritz Wagner and Andreas Würmser. PDF protection is adapted from the work of Klemen Vodopivec for the FPDF product. Lawrence Kesteloot provided code to allow an image’s extent to be determined prior to placement. Support for vertical alignment within a cell was provided by Stefan Schroeder. Ivan Daniluk generalized the font and image loading code to use the Reader interface while maintaining backward compatibility. Anthony Starks provided code for the Polygon function. Robert Lillack provided the Beziergon function and corrected some naming issues with the internal curve function. Claudio Felber provided implementations for dashed line drawing and generalized font loading. Stani Michiels provided support for multi-segment path drawing with smooth line joins, line join styles, enhanced fill modes, and has helped greatly with package presentation and tests. Templating is adapted by Marcus Downing from the FPDF_Tpl library created by Jan Slabon and Setasign. Jelmer Snoeck contributed packages that generate a variety of barcodes and help with registering images on the web. Jelmer Snoeck and Guillermo Pascual augmented the basic HTML functionality with aligned text. Kent Quirk implemented backwards-compatible support for reading DPI from images that support it, and for setting DPI manually and then having it properly taken into account when calculating image size. Paulo Coutinho provided support for static embedded fonts. Dan Meyers added support for embedded JavaScript. David Fish added a generic alias-replacement function to enable, among other things, table of contents functionality. Andy Bakun identified and corrected a problem in which the internal catalogs were not sorted stably. Paul Montag added encoding and decoding functionality for templates, including images that are embedded in templates; this allows templates to be stored independently of gofpdf. Paul also added support for page boxes used in printing PDF documents. Wojciech Matusiak added support for word spacing. Artem Korotkiy added support for UTF-8 fonts. Dave Barnes added support for imported objects and templates. Brigham Thompson added support for rounded rectangles. Joe Westcott added underline functionality and optimized image storage.
Benoit KUGLER contributed support for rectangles with corners of unequal radius, for modification times, and for file attachments and annotations.

Roadmap:

- Remove all legacy code page font support; use UTF-8 exclusively
- Improve test coverage as reported by the coverage tool

Example demonstrates the generation of a simple PDF document. Note that since only core fonts are used (in this case Arial, a synonym for Helvetica), an empty string can be specified for the font directory in the call to New(). Note also that the example.Filename() and example.Summary() functions belong to a separate, internal package and are not part of the gofpdf library. If an error occurs at some point during the construction of the document, subsequent method calls exit immediately and the error is finally retrieved with the output call where it can be handled by the application.
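For orientation, here is a minimal standalone sketch along the lines of that example. The output file name is arbitrary and the import path assumes the original jung-kurt/gofpdf module path:

    package main

    import (
        "log"

        "github.com/jung-kurt/gofpdf"
    )

    func main() {
        // An empty font directory is fine because only core fonts are used.
        pdf := gofpdf.New("P", "mm", "A4", "")
        pdf.AddPage()
        pdf.SetFont("Arial", "B", 16)
        pdf.Cell(40, 10, "Hello, world")
        // Any error raised while building the document surfaces here,
        // at the output call.
        if err := pdf.OutputFileAndClose("hello.pdf"); err != nil {
            log.Fatal(err)
        }
    }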
Package beeline aids adding instrumentation to Go apps using Honeycomb.

This package and its subpackages contain bits of code to use to make your life easier when instrumenting a Go app to send events to Honeycomb. Most applications will use something out of the `wrappers` package and the `beeline` package.

The `beeline` package provides the entry point: initialization and the basic method to add fields to events. The `trace` package offers more direct control over the generated events and how they connect together to form traces. It can be used if you need more functionality (e.g. asynchronous spans, other field naming standards, trace propagation). The `propagation`, `sample`, and `timer` packages are used internally and are not very interesting. The `wrappers` package contains middleware to use with other existing packages such as HTTP routers (e.g. goji, gorilla, or just plain net/http) and SQL packages (including sqlx and pop). Finally, the `examples` package contains small example applications that use the various wrappers and the beeline.

Regardless of which subpackages are used, there is a small amount of global configuration to add to your application's startup process. At the bare minimum, you must pass in your team write key and identify a dataset name to authorize your code to send events to Honeycomb and tell it where to send events. Once configured, use one of the subpackages to wrap HTTP handlers and SQL db objects.

There are runnable examples at https://github.com/honeycombio/beeline-go/tree/main/examples and examples of each wrapper in the godoc. The most complete example is in `nethttp`; it covers:

- beeline initialization
- using the net/http wrapper
- creating additional spans for larger chunks of work
- wrapping an outbound http call
- modifying spans on the way out to scrub information
- a custom sampling method

TODO create two comprehensive examples, one showing basic beeline use and the other the more exciting things you can do with direct access to the trace and span objects.
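As a rough sketch of that minimal configuration, initialization plus the net/http wrapper might look like the following. The write key, dataset name, route, and port are placeholders:

    package main

    import (
        "net/http"

        beeline "github.com/honeycombio/beeline-go"
        "github.com/honeycombio/beeline-go/wrappers/hnynethttp"
    )

    func main() {
        // Global configuration: the team write key and dataset are the minimum.
        beeline.Init(beeline.Config{
            WriteKey: "YOUR-WRITE-KEY", // placeholder
            Dataset:  "my-service",     // placeholder
        })
        // Flush any pending events before the process exits.
        defer beeline.Close()

        mux := http.NewServeMux()
        mux.HandleFunc("/hello", func(w http.ResponseWriter, r *http.Request) {
            // Add a custom field to the event for this request.
            beeline.AddField(r.Context(), "greeting", "hello")
            w.Write([]byte("hello world"))
        })

        // Wrapping the handler makes every request produce a Honeycomb event.
        http.ListenAndServe(":8080", hnynethttp.WrapHandler(mux))
    }

The wrappers for other routers and for SQL db objects follow the same pattern: wrap the existing handler or db object and use it as before.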
Package conf provides support for using environment variables and command line arguments for configuration. It is compatible with the GNU extensions to the POSIX recommendations for command-line options. See http://www.gnu.org/software/libc/manual/html_node/Argument-Syntax.html

There are no hard bindings for this package. This package takes a struct value and parses it for both the environment and flags. It supports several tags to customize the flag options. The field name and any parent struct name will be used for the long form of the command-line flag name unless the name is overridden. As an example, a config struct annotated with such tags would produce usage output of the form "Usage: conf.test [options] [arguments]", followed by an OPTIONS section. The API is a single call to Parse.

Additionally, if the config struct has a field of the slice type conf.Args, it will be populated with any remaining arguments from the command line after flags have been processed. For example, if a program whose config struct includes a conf.Args field is executed with trailing arguments such as "serve http", the cfg.Args field will contain the string values ["serve", "http"]. The Args type has a method Num for convenient access to these arguments by position.

You can add a version with a description by adding the Version type to your config type. Then you can set these values at run time for display.
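As a hedged sketch of this flow (the field names, tags, and "APP" prefix are illustrative, and the exact Parse signature differs between versions of the package; this assumes the early form that accepts the raw arguments and a namespace):

    package main

    import (
        "fmt"
        "log"
        "os"

        "github.com/ardanlabs/conf"
    )

    func main() {
        var cfg struct {
            Web struct {
                Host string `conf:"default:0.0.0.0:3000,short:h"`
            }
            Args conf.Args // remaining arguments, e.g. ["serve", "http"]
        }

        // Values are read from environment variables with the given prefix
        // (for example APP_WEB_HOST) and from flags derived from the field
        // names (--web-host, or -h via the short tag).
        if err := conf.Parse(os.Args[1:], "APP", &cfg); err != nil {
            log.Fatal(err)
        }

        fmt.Println(cfg.Web.Host, cfg.Args.Num(0))
    }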