Package openvg is a wrapper to a C library of high-level 2D graphics operations built on OpenVG 1.1. The typical "hello world" program looks like this (a sketch appears at the end of this overview). The Init function provides the necessary graphics subsystem initialization and the dimensions of the whole canvas. The Init() call must be paired with a corresponding Finish() call, which performs an orderly shutdown. Typically a "drawing" begins with the Start() call and ends with End(). A program can have an arbitrary set of Start()/End() pairs. The coordinate system uses float64 coordinates, with the origin at the lower left, with x increasing to the right and y increasing upwards. Currently, the library provides no mouse or keyboard events, other than those provided by the base operating system. It is typical to pause for user input between drawings by reading standard input. The library's functionality includes shapes, attributes, transformations, text, images, and convenience functions. Shape functions include Polygon, Polyline, Cbezier, Qbezier, Rect, Roundrect, Line, Ellipse, Circle, and Arc. Transformation functions are: Translate, Rotate, Shear, and Scale. For displaying and measuring text: Text, TextMid, TextEnd, and TextWidth. The attribute functions are StrokeColor, StrokeRGB, StrokeWidth, FillRGB, FillColor, FillLinearGradient, and FillRadialGradient. Colors are specified with RGB triples (0-255) with alpha values (0.0-1.0), or named colors as specified by the SVG standard. Convenience functions are used to set the Background color, start the drawing with a background color, and save the raster to a file. The input terminal may be set/restored to/from raw and cooked mode.
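A minimal "hello world" of the kind referred to above might look like the following sketch; the github.com/ajstarks/openvg import path and the float64 signatures are assumptions based on this description, so check the package docs for your version.

    package main

    import (
        "bufio"
        "os"

        "github.com/ajstarks/openvg" // assumed import path
    )

    func main() {
        width, height := openvg.Init() // initialize graphics, get canvas size
        defer openvg.Finish()          // orderly shutdown, paired with Init

        openvg.Start(width, height)     // begin the drawing
        openvg.BackgroundColor("black") // fill the canvas with a named SVG color
        openvg.FillColor("white")       // set the text fill color
        openvg.TextMid(float64(width)/2, float64(height)/2,
            "hello, world", "serif", width/10) // centered text
        openvg.End() // end the drawing, display it

        bufio.NewReader(os.Stdin).ReadBytes('\n') // pause for user input
    }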
shard.core is a full-node bitcoin implementation written in Go. The default options are sane for most users, so shard.core will work 'out of the box' for the majority of them. However, there is also a wide variety of flags that can be used to control it. The following section provides a usage overview which enumerates the flags. Note that the long form of all of these options (except -C) can be specified in a configuration file that is automatically parsed when shard.core starts up. By default, the configuration file is located at ~/.shard.core/shard.core.yaml on POSIX-style operating systems and %LOCALAPPDATA%\shard.core\shard.core.yaml on Windows. The -C (--configfile) flag, as shown below, can be used to override this location. Usage: Application Options: Help Options:
Package amqp is an AMQP 0.9.1 client with RabbitMQ extensions. Understand the AMQP 0.9.1 messaging model by reviewing these links first; much of the terminology in this library directly relates to AMQP concepts. Most other broker clients publish to queues, but in AMQP, clients publish to Exchanges instead. AMQP is programmable, meaning that both the producers and consumers agree on the configuration of the broker, instead of requiring an operator or system configuration that declares the logical topology in the broker. The routing between producers and consumer queues is via Bindings. These bindings form the logical topology of the broker. In this library, a message sent from a publisher is called a "Publishing" and a message received by a consumer is called a "Delivery". The fields of Publishings and Deliveries are close but not exact mappings to the underlying wire format, in order to maintain stronger types. Many other libraries will combine message properties with message headers. In this library, the well-known message properties are strongly typed fields on the Publishings and Deliveries, whereas the user-defined headers are in the Headers field. The method naming closely matches the protocol's method names, with positional parameters mapping to named protocol message fields. The motivation here is to present a comprehensive view over all possible interactions with the server. Generally, the "basic" class prefix is elided from methods that map to protocol methods of the "basic" class, and the "select" suffix is elided from the various channel-mode selector methods, for example Channel.Confirm and Channel.Tx. The library is intentionally designed to be synchronous, where responses for each protocol message are required to be received in an RPC manner. Some methods have a noWait parameter, like Channel.QueueDeclare, and some methods are asynchronous, like Channel.Publish. The error values should still be checked for these methods, as they will indicate IO failures such as the underlying connection closing. Clients of this library may be interested in receiving some of the protocol messages other than Deliveries, like basic.ack methods while a channel is in confirm mode. The Notify* methods with Connection and Channel receivers model the pattern of asynchronous events like closes due to exceptions, or messages that are sent out of band from an RPC call, like basic.ack or basic.flow. Any asynchronous events, including Deliveries and Publishings, must always have a receiver until the corresponding chans are closed. Without asynchronous receivers, the synchronous methods will block. It's important as a client of an AMQP topology to ensure the state of the broker matches your expectations. For both publish and consume use cases, make sure you declare the queues, exchanges and bindings you expect to exist prior to calling Channel.Publish or Channel.Consume; a minimal publish sketch follows below. SSL/TLS - Secure connections: when Dial encounters an amqps:// scheme, it will use the zero value of a tls.Config. This will only perform server certificate and host verification. Use DialTLS when you wish to provide a client certificate (recommended), include a private certificate authority's certificate in the cert chain for server validity, run insecure by not verifying the server certificate, or dial your own connection. DialTLS will use the provided tls.Config when it encounters an amqps:// scheme and will dial a plain connection when it encounters an amqp:// scheme.
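A minimal sketch of the declare-then-publish flow described above, assuming the github.com/streadway/amqp import path and a local broker with default credentials:

    package main

    import (
        "log"

        "github.com/streadway/amqp" // assumed import path
    )

    func main() {
        conn, err := amqp.Dial("amqp://guest:guest@localhost:5672/")
        if err != nil {
            log.Fatal(err)
        }
        defer conn.Close()

        ch, err := conn.Channel()
        if err != nil {
            log.Fatal(err)
        }

        // Declare the queue before publishing, so the broker state
        // matches our expectations.
        q, err := ch.QueueDeclare("hello", false, false, false, false, nil)
        if err != nil {
            log.Fatal(err)
        }

        // Send a Publishing to the default exchange, routed by queue name.
        err = ch.Publish("", q.Name, false, false, amqp.Publishing{
            ContentType: "text/plain",
            Body:        []byte("hello"),
        })
        if err != nil {
            log.Fatal(err)
        }
    }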
SSL/TLS in RabbitMQ is documented here: http://www.rabbitmq.com/ssl.html This example exports a Session object that wraps this library. It automatically reconnects when the connection fails, and blocks all pushes until the connection succeeds. It also confirms every outgoing message, so none are lost. It doesn't automatically ack each message, but leaves that to the parent process, since that is usage-dependent. A sketch of the pattern follows below. Try running this in one terminal, and `rabbitmq-server` in another. Stop and restart RabbitMQ to see how the queue reacts.
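A compact sketch of the reconnecting, confirming Session described above; this is a simplified pattern under assumed defaults (github.com/streadway/amqp import path, durable queue), not the full example program, and error handling is abbreviated.

    package main

    import (
        "errors"
        "log"
        "time"

        "github.com/streadway/amqp" // assumed import path
    )

    // Session reconnects on failure, blocks pushes until connected, and
    // waits for the broker to confirm every publish.
    type Session struct {
        addr, queue string
        ch          *amqp.Channel
        confirms    chan amqp.Confirmation
    }

    // connect blocks until the broker accepts a connection, then declares
    // the queue and puts the channel into confirm mode.
    func (s *Session) connect() {
        for {
            conn, err := amqp.Dial(s.addr)
            if err == nil {
                ch, cerr := conn.Channel()
                if cerr == nil && ch.Confirm(false) == nil {
                    if _, qerr := ch.QueueDeclare(s.queue, true, false, false, false, nil); qerr == nil {
                        s.ch = ch
                        s.confirms = ch.NotifyPublish(make(chan amqp.Confirmation, 1))
                        return
                    }
                }
                conn.Close()
            }
            log.Println("connect failed, retrying:", err)
            time.Sleep(time.Second)
        }
    }

    // Push publishes and waits for the broker's confirmation, reconnecting
    // and retrying if the connection has failed.
    func (s *Session) Push(body []byte) error {
        for {
            err := s.ch.Publish("", s.queue, false, false, amqp.Publishing{Body: body})
            if err == nil {
                if c := <-s.confirms; c.Ack {
                    return nil
                }
                return errors.New("message nacked by server")
            }
            s.connect() // lost the connection; block until it is back
        }
    }

    func main() {
        s := &Session{addr: "amqp://guest:guest@localhost:5672/", queue: "example"}
        s.connect()
        if err := s.Push([]byte("hello")); err != nil {
            log.Fatal(err)
        }
    }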
bsvd is a full-node Bitcoin (BSV) implementation written in Go. The default options are sane for most users, so bsvd will work 'out of the box' for the majority of them. However, there is also a wide variety of flags that can be used to control it. The following section provides a usage overview which enumerates the flags. Note that the long form of all of these options (except -C) can be specified in a configuration file that is automatically parsed when bsvd starts up. By default, the configuration file is located at ~/.bsvd/bsvd.conf on POSIX-style operating systems and %LOCALAPPDATA%\bsvd\bsvd.conf on Windows. The -C (--configfile) flag, as shown below, can be used to override this location. Usage: Application Options: Help Options:
Package datastore has an abstract representation of (AppEngine | Cloud) Datastore. Repository: https://github.com/mercari/datastore Read https://cloud.google.com/datastore/docs/ or https://cloud.google.com/appengine/docs/standard/go/datastore/ first. You should also check https://godoc.org/cloud.google.com/go/datastore or https://godoc.org/google.golang.org/appengine/datastore, the original datastore libraries. Japanese version: https://github.com/mercari/datastore/blob/master/doc_ja.go Please see https://godoc.org/go.mercari.io/datastore/clouddatastore or https://godoc.org/go.mercari.io/datastore/aedatastore. Create a Client using the FromContext function of each package. Notes on migration from each package are summarized later in this document; please see them as well. This package is based on the newly designed Cloud Datastore API. Because it introduces the flatten tag, which only exists in Cloud Datastore, care is needed when migrating from AE Datastore. Details are described later. If you are worried, you may find a clue to the solution at https://godoc.org/go.mercari.io/datastore/clouddatastore. This package has three main objectives. First, middleware. We are often forced to write functions that are not directly related to the value of the application, for the sake of speed, stability, and operation. Such functions can be abstracted and provided as middleware. Consider this case: Put an Entity to Datastore and also set it in Memcache or Redis. When getting it back, Get it from Memcache first, and Get it again from Datastore if that fails. It is very troublesome to hand-write these operations for every Kind and every Entity operation. However, if middleware intervenes in all Datastore RPCs, this can be done transparently, without affecting the application code. As another case, RPCs sometimes fail. When one fails, the operation often succeeds simply by retrying. To retry easily on all RPCs, it is better to implement the retry as middleware. Please refer to https://godoc.org/go.mercari.io/datastore/dsmiddleware if you want to know about the middleware already provided. Second, compatibility. The same interface is provided for AppEngine Datastore and Cloud Datastore. The two are compatible: after creating the Client, you can run exactly the same code against either. For example, you can use AE Datastore in a production environment and the Cloud Datastore Emulator in unit tests. If you can avoid goapp, tests may be faster and the IDE may be more amenable to debugging. You can also read data from the local environment via Cloud Datastore for systems running on AE Datastore. Caution: although the storage underlying the RPCs of AE Datastore and Cloud Datastore is shared, there are differences in expressiveness at the API level. Do not casually read data written via AE Datastore from Cloud Datastore and update it; it may become impossible to read it from the AE Datastore API. We have not strictly tested this. Third, batching. Datastore operations are dominated by RPC network latency: when fetching 10 entities, one GetMulti call is much better than 10 Gets in a loop. However, it is hard to put multiple separate operations together by hand. Suppose, for example, you want to query the Post Kind, take the list of Comment IDs from the resulting Posts, and fetch the corresponding Comments. If you write very clever code, one Query plus one GetMulti is enough. However, after acquiring the data, you must link each Comment back to the appropriate Post. On the other hand, it is easy to write code that issues the query once and then calls GetMulti for the Comments once per Post. In summary, it is convenient to queue Put and Get operations and have a mechanism that executes them together later. Batch() is that mechanism (a sketch follows below). You can find examples at https://godoc.org/go.mercari.io/datastore/#pkg-examples . I love goon, so I made https://godoc.org/go.mercari.io/datastore/boom, which can be used in conjunction with this package. Here's an overview of what you need to do to migrate your existing code: from AE Datastore, from Cloud Datastore, and from goon to boom.
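A minimal sketch of the Batch pattern described above; the Post and Comment types are hypothetical, and the exact signatures of Batch.Get, Batch.Exec, and Client.IDKey are assumptions based on this description, so check godoc before relying on them.

    package main

    import (
        "context"

        "go.mercari.io/datastore"
    )

    // Post and Comment are hypothetical kinds used only for illustration.
    type Post struct {
        CommentIDs []int64
    }

    type Comment struct {
        Body string
    }

    // fetchComments queues one Get per comment ID, then executes the whole
    // queue in a small number of RPCs via Exec.
    func fetchComments(ctx context.Context, client datastore.Client, post *Post) ([]*Comment, error) {
        batch := client.Batch()
        comments := make([]*Comment, len(post.CommentIDs))
        for i, id := range post.CommentIDs {
            comments[i] = &Comment{}
            // Queued only; no RPC is issued here.
            batch.Get(client.IDKey("Comment", id, nil), comments[i], nil)
        }
        return comments, batch.Exec(ctx)
    }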
Package verifiedpermissions provides the API client, operations, and parameter types for Amazon Verified Permissions. Amazon Verified Permissions is a permissions management service from Amazon Web Services. You can use Verified Permissions to manage permissions for your application, and authorize user access based on those permissions. Using Verified Permissions, application developers can grant access based on information about the users, resources, and requested actions. You can also evaluate additional information like group membership, attributes of the resources, and session context, such as time of request and IP addresses. Verified Permissions manages these permissions by letting you create and store authorization policies for your applications, such as consumer-facing web sites and enterprise business systems. Verified Permissions uses Cedar as the policy language to express your permission requirements. Cedar supports both role-based access control (RBAC) and attribute-based access control (ABAC) authorization models. For more information about configuring, administering, and using Amazon Verified Permissions in your applications, see the Amazon Verified Permissions User Guide. For more information about the Cedar policy language, see the Cedar Policy Language Guide. When you write Cedar policies that reference principals, resources and actions, you can define the unique identifiers used for each of those elements. We strongly recommend that you follow these best practices: Use values like universally unique identifiers (UUIDs) for all principal and resource identifiers. For example, if user jane leaves the company, and you later let someone else use the name jane, then that new user automatically gets access to everything granted by policies that still reference User::"jane". Cedar can't distinguish between the new user and the old. Where you use a UUID for an entity, we recommend that you follow it with the // comment specifier and the 'friendly' name of your entity. This helps to make your policies easier to understand. Do not include personally identifying, confidential, or sensitive information as part of the unique identifier for your principals or resources; these identifiers are included in log entries shared in CloudTrail trails. Several operations return structures that appear similar, but have different purposes. As new functionality is added to the product, the structure used in a parameter of one operation might need to change in a way that wouldn't make sense for the same parameter in a different operation. To help you understand the purpose of each, the following naming convention is used for the structures: Parameter type structures that end in Detail are used in Get operations. Parameter type structures that end in Item are used in List operations. Parameter type structures that use neither suffix are used in the mutating (create and update) operations.
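A minimal authorization-check sketch using this client; the policy store ID and entity values below are hypothetical placeholders.

    package main

    import (
        "context"
        "fmt"
        "log"

        "github.com/aws/aws-sdk-go-v2/aws"
        "github.com/aws/aws-sdk-go-v2/config"
        "github.com/aws/aws-sdk-go-v2/service/verifiedpermissions"
        "github.com/aws/aws-sdk-go-v2/service/verifiedpermissions/types"
    )

    func main() {
        cfg, err := config.LoadDefaultConfig(context.TODO())
        if err != nil {
            log.Fatal(err)
        }
        client := verifiedpermissions.NewFromConfig(cfg)

        // Ask whether a principal may perform an action on a resource.
        out, err := client.IsAuthorized(context.TODO(), &verifiedpermissions.IsAuthorizedInput{
            PolicyStoreId: aws.String("ps-EXAMPLE"), // hypothetical policy store ID
            Principal: &types.EntityIdentifier{
                EntityType: aws.String("User"),
                EntityId:   aws.String("a1b2c3d4-EXAMPLE"), // UUID, per the best practices above
            },
            Action: &types.ActionIdentifier{
                ActionType: aws.String("Action"),
                ActionId:   aws.String("view"),
            },
            Resource: &types.EntityIdentifier{
                EntityType: aws.String("Photo"),
                EntityId:   aws.String("photo-EXAMPLE"),
            },
        })
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println("decision:", out.Decision)
    }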
Package tz contains time zone info so that you can predictably load Locations regardless of the locations available on the locally running operating system. The stdlib time.LoadLocation function loads timezone data from the operating system or from zoneinfo.zip in a local Go installation. Both of these are often missing from some operating systems, especially Windows. This package has the zoneinfo.zip from Go embedded into the package so that queries to load a location always return the same data regardless of operating system. This package exists because of https://github.com/golang/go/issues/21881.
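A minimal sketch, assuming the package mirrors the stdlib signature with a tz.LoadLocation function, as the description implies (import path assumed):

    package main

    import (
        "fmt"
        "log"
        "time"

        "4d63.com/tz" // assumed import path
    )

    func main() {
        // Loads from the embedded zoneinfo data, not the host OS.
        loc, err := tz.LoadLocation("Australia/Sydney")
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println(time.Now().In(loc))
    }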
Package configdir provides a cross platform means of detecting the system's configuration directories. This makes it easy to program your Go app to store its configuration files in a standard location relevant to the host operating system. For Linux and some other Unixes (like BSD) this means following the Freedesktop.org XDG Base Directory Specification 0.8, and for Windows and macOS it uses their standard directories. This is a simple no-nonsense module that just gives you the paths, with optional components tacked on the end for vendor- or app-specific namespacing. It also provides a convenience function to call `os.MkdirAll()` on the paths to ensure they exist and are ready for you to read and write files in. Standard Global Configuration Paths Standard User-Specific Configuration Paths Standard User-Specific Cache Paths Quick start example for a common use case of this module.
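A quick-start sketch of the kind the last sentence refers to, assuming the github.com/kirsle/configdir import path and its LocalConfig/MakePath helpers; the app name is a placeholder.

    package main

    import (
        "log"
        "path/filepath"

        "github.com/kirsle/configdir" // assumed import path
    )

    func main() {
        // Per-user config directory namespaced for this app,
        // e.g. ~/.config/my-app on Linux.
        configPath := configdir.LocalConfig("my-app")

        // Ensure the directory exists before reading or writing files.
        if err := configdir.MakePath(configPath); err != nil {
            log.Fatal(err)
        }

        settingsFile := filepath.Join(configPath, "settings.json")
        log.Println("settings will live at", settingsFile)
    }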
Package serverd provides a component to create server-type applications, helping to manage the life cycle of services and operating system signals. This package is a work in progress and makes no API stability promises.
hcd is a full-node HC implementation written in Go. The default options are sane for most users, so hcd will work 'out of the box' for the majority of them. However, there is also a wide variety of flags that can be used to control it. The following section provides a usage overview which enumerates the flags. Note that the long form of all of these options (except -C) can be specified in a configuration file that is automatically parsed when hcd starts up. By default, the configuration file is located at ~/.hcd/hcd.conf on POSIX-style operating systems and %LOCALAPPDATA%\hcd\hcd.conf on Windows. The -C (--configfile) flag, as shown below, can be used to override this location. Usage: Application Options: Help Options:
Package trayhost is a cross-platform Go library to place an icon in the host operating system's taskbar. - macOS - Fully implemented and supported by @dmitshur. - Linux - Partially implemented, but unsupported (needs an owner/maintainer). - Windows - Partially implemented, but unsupported (needs an owner/maintainer). On macOS, for Notification Center user notifications to work, your Go binary that uses trayhost must be part of a standard macOS app bundle. Most other functionality of trayhost will be available if the binary is not part of an app bundle, but you will get a terminal pop-up, and you will not be able to configure some aspects of the app. Here's a minimal layout of an app bundle (see the sketch below). Here's a minimal Info.plist file as reference (only the entries that are needed, nothing extra): - CFBundleIdentifier needs to be set to some value for Notification Center to work. - The binary must be inside the Contents/MacOS directory for Notification Center to work. - NSHighResolutionCapable to enable Retina mode. - LSUIElement is needed to make the app not appear in the Cmd+Tab list and the dock, while still being able to show a tooltip in the menu bar. On macOS, when you run an app bundle, the working directory of the executed process is the root directory (/), not the app bundle's Contents/Resources directory. Change directory to Resources if you need to load resources from there.
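A minimal bundle layout of the kind described above might look like this (the names are placeholders; only Info.plist and the binary location are required for Notification Center):

    Example.app/
        Contents/
            Info.plist
            MacOS/
                example        (your Go binary that uses trayhost)
            Resources/
                icon.png       (optional resources loaded at runtime)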
Package kyber provides a toolbox of advanced cryptographic primitives, for applications that need more than straightforward signing and encryption. This top level package defines the interfaces to cryptographic primitives designed to be independent of specific cryptographic algorithms, to facilitate upgrading applications to new cryptographic algorithms or switching to alternative algorithms for experimentation purposes. This toolkit's public-key crypto API includes a kyber.Group interface supporting a broad class of group-based public-key primitives including DSA-style integer residue groups and elliptic curve groups. Users of this API can write higher-level crypto algorithms such as zero-knowledge proofs without knowing or caring exactly what kind of group, let alone which precise security parameters or elliptic curves, are being used. The kyber.Group interface supports the standard algebraic operations on group elements and scalars that nontrivial public-key algorithms tend to rely on. The interface uses additive group terminology typical for elliptic curves, such that point addition is homomorphically equivalent to adding their (potentially secret) scalar multipliers. But the API and its operations apply equally well to DSA-style integer groups. As a trivial example, generating a public/private keypair is as simple as picking a private scalar and multiplying the base point by it (see the sketch at the end of this overview). The first statement picks a private key (Scalar) from the suite's source of cryptographic random or pseudo-random bits, while the second performs elliptic curve scalar multiplication of the curve's standard base point (indicated by the 'nil' argument to Mul) by the scalar private key 'a'. Similarly, computing a Diffie-Hellman shared secret using Alice's private key 'a' and Bob's public key 'B' can be done the same way, as the sketch also shows. Note that we use 'Mul' rather than 'Exp' here because the library uses the additive-group terminology common for elliptic curve crypto, rather than the multiplicative-group terminology of traditional integer groups - but the two are semantically equivalent and the interface itself works for both elliptic curve and integer groups. Various sub-packages provide several specific implementations of these cryptographic interfaces. In particular, the 'group/mod' sub-package provides implementations of modular integer groups underlying conventional DSA-style algorithms. The 'group/nist' package provides NIST-standardized elliptic curves built on the Go crypto library. The 'group/edwards25519' sub-package provides the kyber.Group interface using the popular Ed25519 curve. Other sub-packages build more interesting high-level cryptographic tools atop these primitive interfaces, including: - share: Polynomial commitment and verifiable Shamir secret splitting for implementing verifiable 't-of-n' threshold cryptographic schemes. This can be used to encrypt a message so that any 2 out of 3 receivers must work together to decrypt it, for example. - proof: An implementation of the general Camenisch/Stadler framework for discrete logarithm knowledge proofs. This system supports both interactive and non-interactive proofs of a wide variety of statements such as, "I know the secret x associated with public key X or I know the secret y associated with public key Y", without revealing anything about either secret or even which branch of the "or" clause is true. - sign: The sign directory contains different signature schemes.
- sign/anon provides anonymous and pseudonymous public-key encryption and signing, where the sender of a signed message or the receiver of an encrypted message is defined as an explicit anonymity set containing several public keys rather than just one. For example, a member of an organization's board of trustees might prove to be a member of the board without revealing which member she is. - sign/cosi provides a collective signature algorithm, where a group of signers create a unique, compact and efficiently verifiable signature using the Schnorr signature as a basis. - sign/eddsa provides a kyber-native implementation of the EdDSA signature scheme. - sign/schnorr provides a basic vanilla Schnorr signature scheme implementation. - shuffle: Verifiable cryptographic shuffles of ElGamal ciphertexts, which can be used to implement (for example) voting or auction schemes that keep the sources of individual votes or bids private without anyone having to trust more than one of the shuffler(s) to shuffle votes/bids honestly. For now, this library should be considered experimental: it will definitely be changing in non-backward-compatible ways, and it will need independent security review before it should be considered ready for use in security-critical applications. However, we intend to bring the library closer to stability and real-world usability as quickly as development resources permit, and as interest and application demand dictate. As should be obvious, this library is intended to be used by developers who are at least moderately knowledgeable about cryptography. If you want a crypto library that makes it easy to implement "basic crypto" functionality correctly - i.e., plain public-key encryption and signing - then NaCl secretbox (https://godoc.org/golang.org/x/crypto/nacl/secretbox) may be a better choice. This toolkit's purpose is to make it possible - and preferably easy - to do slightly more interesting things that most current crypto libraries don't support effectively. The one existing crypto library that this toolkit is probably most comparable to is the Charm rapid prototyping library for Python (https://charm-crypto.com/category/charm). This library incorporates and/or builds on existing code from a variety of sources, as documented in the relevant sub-packages.
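The keypair and Diffie-Hellman examples referred to above might look like the following sketch, assuming the v3 module path and the group/edwards25519 suite constructor:

    package main

    import (
        "fmt"

        "go.dedis.ch/kyber/v3/group/edwards25519"
    )

    func main() {
        suite := edwards25519.NewBlakeSHA256Ed25519()

        // Alice: pick a private scalar a, compute public key A = a*B
        // (B is the standard base point, indicated by nil).
        a := suite.Scalar().Pick(suite.RandomStream())
        A := suite.Point().Mul(a, nil)

        // Bob does the same.
        b := suite.Scalar().Pick(suite.RandomStream())
        B := suite.Point().Mul(b, nil)

        // Each side computes the shared Diffie-Hellman secret:
        // a*B == b*A, since both equal (a*b) times the base point.
        sA := suite.Point().Mul(a, B)
        sB := suite.Point().Mul(b, A)
        fmt.Println(sA.Equal(sB)) // true
    }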
Package kstat provides a Go interface to the Solaris/OmniOS kstat(s) system for user-level access to a lot of kernel statistics. For more documentation on kstats, see kstat(1) and kstat(3kstat). The package can retrieve what are called 'named' kstat statistics, IO statistics, and the most common additional types of 'raw' statistics, which covers almost all kstats you will normally find in the kernel. You can see the names and types of other kstats, but not currently retrieve data for them. Named statistics are the most common type for general information; IO statistics are exported by disks and some other things. Supported additional raw kstats are unix:0:sysinfo, unix:0:vminfo, unix:0:var, and mnt:*:mntinfo. General usage for named statistics: call Open() to obtain a Token, then call GetNamed() on it to obtain Named(s) for specific statistics. Note that this always gives you the very latest value for the statistic. If you want a number of statistics from the same module:inst:name triplet (eg several network counters from the same network interface) and you want them to all have been gathered at the same time, you need to call .Lookup() to obtain a KStat and then repeatedly call its .GetNamed() (this is also slightly more efficient). The short version: a kstat is a collection of some related statistics, eg various network counters for a particular network interface. A Token is a handle for a collection of kstats. You go collection (Token) -> kstat (KStat) -> specific statistic (Named) in order to retrieve the value of a specific statistic. (IO stats are retrieved all at once with GetIO(), because they come to us from the kernel as one single struct so that's what you get.) This is a cgo-based package. Cross compilation is up to you. Goroutine safety is in no way guaranteed because the underlying C kstat library is probably not thread or goroutine safe (and there are some all-Go concurrency races involving .Close()). This package may leak memory, especially since the Solaris kstat manpage is not clear on the requirements here. However I believe it's reasonably memory safe. It's possible to totally corrupt memory with use-after-free errors if you do operations on kstats after calling Token.Close(), although we try to avoid that. NOTE: this package is quite young. The API may well change as I (and other people) gain more experience with it. In general this is not going to be as lean and mean as calling C directly, partly because of intrinsic CGo overheads and partly because we do more memory allocation and deallocation than a C program would (partly because we prioritize not leaking memory). We support named kstats and IO kstats (KSTAT_TYPE_NAMED and KSTAT_TYPE_IO / kstat_io_t respectively). kstat(1) also knows about a number of magic specific 'raw' stats (which are generally custom C structs); of these we support unix:0:sysinfo, unix:0:vminfo, unix:0:var, and mnt:*:mntinfo for NFS filesystem mounts. In theory kstat supports general timer and interrupt stats. In practice there is no use of KSTAT_TYPE_TIMER in the current Illumos kernel source and very little use of KSTAT_TYPE_INTR (mostly by very old hardware drivers, although the vioif driver uses it too). Since I can't test KSTAT_TYPE_INTR stats, we don't currently support it. There are also a few additional KSTAT_TYPE_RAW raw stats that we don't support, mostly because they seem to be effectively obsolete. These specific raw stats can be found listed in the Illumos source code in cmd/stat/kstat/kstat.h in the ks_raw_lookup array. 
See cmd/stat/kstat/kstat.c for how they're interpreted. If you need access to one of these kstats, the KStat.CopyTo() and KStat.Raw() methods give you an escape hatch to roll your own. You'll probably need to use cgo to generate an appropriate Go struct that matches the C struct you need. My notes on this process may be helpful: https://utcc.utoronto.ca/~cks/space/blog/programming/GoCGoCompatibleStructs Author: Chris Siebenmann https://github.com/siebenmann/go-kstat Copyright: standard Go copyright. (If you're reading this documentation on a non-Solaris platform, you're probably not seeing the detailed API documentation for constants, types, and so on because of tooling limitations in godoc et al.)
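A minimal named-statistic sketch following the Open() -> GetNamed() flow described above; the module:inst:name:stat quadruple here is only an example, and the code builds only on Solaris/Illumos systems.

    package main

    import (
        "fmt"
        "log"

        "github.com/siebenmann/go-kstat"
    )

    func main() {
        // Open a Token for the kernel's kstat collection.
        tok, err := kstat.Open()
        if err != nil {
            log.Fatal(err)
        }
        defer tok.Close()

        // Fetch one named statistic: module "link", instance 0,
        // kstat name "net0", statistic "obytes64".
        n, err := tok.GetNamed("link", 0, "net0", "obytes64")
        if err != nil {
            log.Fatal(err)
        }
        fmt.Printf("%s = %v\n", n.Name, n.UintVal)
    }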
kms is a package that provides key management system features for go-kms-wrapping Wrappers. The following domain terms are key to understanding the system and how to use it: wrapper: all keys within the system are a Wrapper from go-kms-wrapping. root external wrapper: the external wrapper that will serve as the root of trust for the kms system. Typically you'd get this root wrapper via go-kms-wrapping from a KMS provider. See the go-kms-wrapping docs for further details. scope: a scope defines a rotational boundary for a set of keys. The system tracks only the scope identifier, which is used to find keys within a specific scope. **IMPORTANT**: You should define a FK from kms_root_key.scope_id with cascading deletes and updates to the PK of the table within your domain that tracks scopes. This FK will prevent orphaned kms keys. For example, you could assign organizations and projects scope IDs and then associate a set of keys with each org and project within your domain. root key: the KEKs (keys to encrypt keys) of the system. data key: the DEKs (keys to encrypt data) of the system; each must have a parent root key and a purpose. purpose: each data key (DEK) must have one purpose. For example, you could define a purpose of client-secrets for a DEK that will be used for encrypt/decrypt wrapper operations on `client-secrets`. You'll find the database schema within the migrations directory. Currently postgres and sqlite are supported. The implementation does make some use of triggers to ensure some of its data integrity. The migrations are intended to be incorporated into your existing go-migrate migrations. Feel free to change the migration file names, as long as they are applied in the same order as defined here. FYI, the migrations include a `kms_version` table which is used to ensure that the schema and module are compatible.
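A rough sketch of how these pieces might fit together; the constructor and method names below (AddExternalWrapper, CreateKeys, GetWrapper, KeyPurposeRootKey) are assumptions drawn from this description, not the verified API, so consult the package docs before use.

    package main

    import (
        "context"

        kms "github.com/hashicorp/go-kms-wrapping/extras/kms/v2" // assumed import path
        wrapping "github.com/hashicorp/go-kms-wrapping/v2"
    )

    // openScopeWrapper is a sketch only; all signatures are assumptions.
    func openScopeWrapper(ctx context.Context, k *kms.Kms, root wrapping.Wrapper, scopeID string) (wrapping.Wrapper, error) {
        // Register the external root wrapper, the system's root of trust.
        if err := k.AddExternalWrapper(ctx, kms.KeyPurposeRootKey, root); err != nil {
            return nil, err
        }
        // Create the root key (KEK) and DEKs for this scope,
        // e.g. an org or project in your domain.
        if err := k.CreateKeys(ctx, scopeID, []kms.KeyPurpose{"client-secrets"}); err != nil {
            return nil, err
        }
        // Fetch the DEK wrapper for the "client-secrets" purpose in this scope.
        return k.GetWrapper(ctx, scopeID, "client-secrets")
    }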
btcd is a full-node bitcoin implementation written in Go. The default options are sane for most users, so btcd will work 'out of the box' for the majority of them. However, there is also a wide variety of flags that can be used to control it. The following section provides a usage overview which enumerates the flags. Note that the long form of all of these options (except -C) can be specified in a configuration file that is automatically parsed when btcd starts up. By default, the configuration file is located at ~/.btcd/btcd.conf on POSIX-style operating systems and %LOCALAPPDATA%\btcd\btcd.conf on Windows. The -C (--configfile) flag, as shown below, can be used to override this location. Usage: Application Options: Help Options:
Package phony is a small actor model library for Go, inspired by the causal messaging system in the Pony programming language. An Actor is an interface satisfied by a lightweight Inbox struct. Structs that embed an Inbox satisfy an interface that allows them to send messages to each other. Messages are functions of 0 arguments, typically closures, and should not perform blocking operations. Message passing is asynchronous, causal, and fast. Actors implemented by the provided Inbox struct are scheduled to prevent message queues from growing too large, by pausing at safe breakpoints when an Actor detects that it sent something to another Actor whose inbox is flooded.
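A minimal sketch of the embed-an-Inbox pattern described above, assuming the github.com/Arceliar/phony import path:

    package main

    import (
        "fmt"

        "github.com/Arceliar/phony" // assumed import path
    )

    // counter is an actor; embedding an Inbox lets it receive messages.
    type counter struct {
        phony.Inbox
        count int
    }

    // Increment asynchronously sends the counter a message (a closure)
    // asking it to update its own state.
    func (c *counter) Increment(from phony.Actor) {
        c.Act(from, func() { c.count++ })
    }

    func main() {
        c := new(counter)
        for i := 0; i < 10; i++ {
            c.Increment(nil) // a nil sender is fine for non-actor callers
        }
        // Block runs a message synchronously, which is how ordinary
        // goroutines read actor state safely.
        var n int
        phony.Block(c, func() { n = c.count })
        fmt.Println(n) // 10
    }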
Package goroslib is a library in pure Go that allows to build clients (nodes) for the Robot Operating System (ROS). Examples are available at https://github.com/aler9/goroslib/tree/master/examples
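A minimal node-creation sketch, assuming the NodeConf fields (Name, MasterAddress) used in the linked examples:

    package main

    import (
        "log"

        "github.com/aler9/goroslib"
    )

    func main() {
        // Connect to a ROS master and register this node.
        n, err := goroslib.NewNode(goroslib.NodeConf{
            Name:          "goroslib_example",
            MasterAddress: "127.0.0.1:11311",
        })
        if err != nil {
            log.Fatal(err)
        }
        defer n.Close()
        // Publishers, subscribers and service clients hang off the
        // node; see the examples linked above.
    }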
dcrd is a full-node Decred implementation written in Go. The default options are sane for most users, so dcrd will work 'out of the box' for the majority of them. However, there is also a wide variety of flags that can be used to control it. The following section provides a usage overview which enumerates the flags. Note that the long form of all of these options (except -C) can be specified in a configuration file that is automatically parsed when dcrd starts up. By default, the configuration file is located at ~/.dcrd/dcrd.conf on POSIX-style operating systems and %LOCALAPPDATA%\dcrd\dcrd.conf on Windows. The -C (--configfile) flag, as shown below, can be used to override this location. Usage: Application Options: Help Options:
Package iox contains I/O utilities. The primary concern of the package is managing and minimizing the use of file descriptors, an operating system resource which is often in short supply in high-concurrency servers. The two objects that help in this are the Filer and BufferFile. A Filer manages an allotment of file descriptors, blocking on file creation until an old file closes and frees up a descriptor from the allotment. A BufferFile keeps a fraction of its contents in memory. If the number of bytes stored in a BufferFile is small, no file descriptor is ever used.
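A sketch of the Filer/BufferFile pattern, assuming the crawshaw.io/iox import path and the NewFiler/BufferFile constructors from its docs; the limits below are arbitrary examples.

    package main

    import (
        "log"

        "crawshaw.io/iox" // assumed import path
    )

    func main() {
        // A Filer that allows at most 128 open file descriptors;
        // further file creation blocks until one is freed.
        filer := iox.NewFiler(128)

        // A BufferFile keeps up to 64KB in memory and only consumes a
        // file descriptor if it grows beyond that.
        bf := filer.BufferFile(64 * 1024)
        defer bf.Close()

        if _, err := bf.Write([]byte("small payloads never touch disk")); err != nil {
            log.Fatal(err)
        }
    }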
Package wildmatch is an implementation of Git's wildmatch.c-style pattern matching. Wildmatch patterns are composed of any combination of the following three components: String literals. A string literal is "foo", or "foo\*" (matching "foo", and "foo*", respectively). In general, string literals match their exact contents in a filepath, and cannot match over directories unless they include the operating system-specific path separator. Wildcards. There are three types of wildcards: Single-asterisk ('*'): matches any combination of characters, any number of times. Does not match path separators. Single-question mark ('?'): matches any single character, but not a path separator. Double-asterisk ('**'): greedily matches any number of directories. For example, '**/foo' matches '/foo', 'bar/baz/woot/foo', but not 'foo/bar'. Double-asterisks must be separated by filepath separators on either side. Character groups. A character group is composed of a set of included and excluded character types. The set of included character types begins the character group, and a '^' or '!' separates it from the set of excluded character types. A character type can be one of the following: Character literal: a single character, i.e., 'c'. Character group: a group of characters, i.e., '[:alnum:]', etc. Character range: a range of characters, i.e., 'a-z'. A Wildmatch pattern can be any combination of the above components, in any ordering, and repeated any number of times.
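A small sketch, assuming the git-lfs/wildmatch package's NewWildmatch constructor and Match method:

    package main

    import (
        "fmt"

        "github.com/git-lfs/wildmatch" // assumed import path
    )

    func main() {
        // '**/foo' greedily matches any number of leading directories.
        w := wildmatch.NewWildmatch("**/foo")
        fmt.Println(w.Match("bar/baz/woot/foo")) // true
        fmt.Println(w.Match("foo/bar"))          // false
    }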
Package ora implements an Oracle database driver. An Oracle database may be accessed through the database/sql package or through the ora package directly. database/sql offers connection pooling, thread safety, a consistent API to multiple database technologies and a common set of Go types. The ora package offers additional features including pointers, slices, nullable types, numerics of various sizes, Oracle-specific types, Go return type configuration, and Oracle abstractions such as environment, server and session. The ora package is written with the Oracle Call Interface (OCI) C-language libraries provided by Oracle. The OCI libraries are a standard for client application communication and driver communication with Oracle databases. The ora package has been verified to work with: Minimum requirements are Go 1.3 with CGO enabled, a GCC C compiler, and Oracle 11g (11.2.0.4.0) or Oracle Instant Client (11.2.0.4.0). Install Oracle or Oracle Instant Client. Copy the [oci8.pc](contrib/oci8.pc) from the `contrib` folder (or the one for your system, maybe tailored to your specific locations) to a folder in `$PKG_CONFIG_PATH` or a system folder, such as The ora package has no external Go dependencies and is available on GitHub and gopkg.in: The ora package supports all built-in Oracle data types. The supported Oracle built-in data types are NUMBER, BINARY_DOUBLE, BINARY_FLOAT, FLOAT, DATE, TIMESTAMP, TIMESTAMP WITH TIME ZONE, TIMESTAMP WITH LOCAL TIME ZONE, INTERVAL YEAR TO MONTH, INTERVAL DAY TO SECOND, CHAR, NCHAR, VARCHAR, VARCHAR2, NVARCHAR2, LONG, CLOB, NCLOB, BLOB, LONG RAW, RAW, ROWID and BFILE. SYS_REFCURSOR is also supported. Oracle does not provide a built-in boolean type. Oracle provides a single-byte character type; a common practice is to define two single-byte characters which represent true and false. The ora package adopts this approach. The ora package associates a Go bool value with a Go rune and sends and receives the rune to a CHAR(1 BYTE) column or CHAR(1 CHAR) column. The default false rune is zero '0'. The default true rune is one '1'. The bool rune association may be configured or disabled when directly using the ora package, but not with the database/sql package. Within a SQL string a placeholder may be specified to indicate where a Go variable is placed. The SQL placeholder is an Oracle identifier, from 1 to 30 characters, prefixed with a colon (:). For example: Placeholders within a SQL statement are bound by position. The actual name is not used by the ora package driver, e.g., placeholder names :c1, :1, or :xyz are treated equally. You may access an Oracle database through the database/sql package. The database/sql package offers a consistent API across different databases, connection pooling, thread safety and a set of common Go types. database/sql makes working with Oracle straightforward. The ora package implements interfaces in the database/sql/driver package, enabling database/sql to communicate with an Oracle database. Using database/sql ensures you never have to call the ora package directly. When using database/sql, the mapping between Go types and Oracle types may be changed slightly. The database/sql package has strict expectations on Go return types. The Go-to-Oracle type mapping for database/sql is: The "ora" driver is automatically registered for use with sql.Open, but you can call ora.SetDrvCfg to set the used configuration options, including statement configuration and Rset configuration.
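A minimal database/sql sketch of the positional placeholder style described above; the DSN format and import path here are assumptions, so check the driver docs for your environment.

    package main

    import (
        "database/sql"
        "fmt"
        "log"

        _ "gopkg.in/rana/ora.v4" // registers the "ora" driver; assumed import path
    )

    func main() {
        // DSN form is an assumption: user/password@host:port/service
        db, err := sql.Open("ora", "user/password@localhost:1521/orclpdb")
        if err != nil {
            log.Fatal(err)
        }
        defer db.Close()

        // Placeholders are Oracle identifiers prefixed with a colon and
        // bound by position; the name itself (:1, :id, ...) is not used.
        rows, err := db.Query("SELECT name FROM emp WHERE dept_id = :1", 42)
        if err != nil {
            log.Fatal(err)
        }
        defer rows.Close()
        for rows.Next() {
            var name string
            if err := rows.Scan(&name); err != nil {
                log.Fatal(err)
            }
            fmt.Println(name)
        }
    }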
When configuring the driver for use with database/sql, keep in mind that database/sql has strict Go type-to-Oracle type mapping expectations. The ora package allows programming with pointers, slices, nullable types, numerics of various sizes, Oracle-specific types, Go return type configuration, and Oracle abstractions such as environment, server and session. When working with the ora package directly, the API is slightly different from database/sql. When using the ora package directly, the mapping between Go types and Oracle types may be changed. The Go-to-Oracle type mapping for the ora package is: An example of using the ora package directly appears after this overview. Pointers may be used to capture out-bound values from a SQL statement such as an insert or stored procedure call. For example, a numeric pointer captures an identity value: A string pointer captures an out parameter from a stored procedure: Slices may be used to insert multiple records with a single insert statement: The ora package provides nullable Go types to support DML operations such as insert and select. The nullable Go types provided by the ora package are Int64, Int32, Int16, Int8, Uint64, Uint32, Uint16, Uint8, Float64, Float32, Time, IntervalYM, IntervalDS, String, Bool, Binary and Bfile. For example, you may insert nullable Strings and select nullable Strings: The Stmt.Prep method is variadic, accepting zero or more GoColumnType which define a Go return type for a select-list column. For example, a Prep call can be configured to return an int64 and a nullable Int64 from the same column: Go numerics of various sizes are supported in DML operations. The ora package supports int64, int32, int16, int8, uint64, uint32, uint16, uint8, float64 and float32. For example, you may insert a uint16 and select numerics of various sizes: If a non-nullable type is defined for a nullable column returning null, the Go type's zero value is returned. GoColumnTypes defined by the ora package are: When Stmt.Prep doesn't receive a GoColumnType, or receives an incorrect GoColumnType, the default value defined in RsetCfg is used. EnvCfg, SrvCfg, SesCfg, StmtCfg and RsetCfg are the main configuration structs. EnvCfg configures aspects of an Env. SrvCfg configures aspects of a Srv. SesCfg configures aspects of a Ses. StmtCfg configures aspects of a Stmt. RsetCfg configures aspects of Rset. StmtCfg and RsetCfg have the most options to configure. RsetCfg defines the default mapping between an Oracle select-list column and a Go type. StmtCfg may be set in an EnvCfg, SrvCfg, SesCfg and StmtCfg. RsetCfg may be set in a Stmt. EnvCfg.StmtCfg, SrvCfg.StmtCfg, SesCfg.StmtCfg may optionally be specified to configure a statement. If StmtCfg isn't specified, default values are applied. EnvCfg.StmtCfg, SrvCfg.StmtCfg, SesCfg.StmtCfg cascade to new descendent structs. When ora.OpenEnv() is called, a specified EnvCfg is used or a default EnvCfg is created. Creating a Srv with env.OpenSrv() will use SrvCfg.StmtCfg if it is specified; otherwise, EnvCfg.StmtCfg is copied by value to SrvCfg.StmtCfg. Creating a Ses with srv.OpenSes() will use SesCfg.StmtCfg if it is specified; otherwise, SrvCfg.StmtCfg is copied by value to SesCfg.StmtCfg. Creating a Stmt with ses.Prep() will use SesCfg.StmtCfg if it is specified; otherwise, a new StmtCfg with default values is set on the Stmt. Call Stmt.Cfg() to change a Stmt's configuration. An Env may contain multiple Srv. A Srv may contain multiple Ses. A Ses may contain multiple Stmt. A Stmt may contain multiple Rset.
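The direct-usage example referred to above might look roughly like this sketch. The Env -> Srv -> Ses -> Stmt -> Rset chain follows the abstractions described here, but treat the exact signatures and the connection values as assumptions.

    package main

    import (
        "fmt"
        "log"

        "gopkg.in/rana/ora.v4" // assumed import path
    )

    func main() {
        // environment -> server -> session, as described above
        env, err := ora.OpenEnv()
        if err != nil {
            log.Fatal(err)
        }
        defer env.Close()
        srv, err := env.OpenSrv(&ora.SrvCfg{Dblink: "orclpdb"})
        if err != nil {
            log.Fatal(err)
        }
        defer srv.Close()
        ses, err := srv.OpenSes(&ora.SesCfg{Username: "user", Password: "password"})
        if err != nil {
            log.Fatal(err)
        }
        defer ses.Close()

        // Prep a statement and query a result set (Rset).
        stmt, err := ses.Prep("SELECT name FROM emp")
        if err != nil {
            log.Fatal(err)
        }
        defer stmt.Close()
        rset, err := stmt.Qry()
        if err != nil {
            log.Fatal(err)
        }
        for rset.Next() {
            fmt.Println(rset.Row[0])
        }
    }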
Setting a RsetCfg on a StmtCfg does not cascade through descendent structs. Configuration of Stmt.Cfg takes effect prior to calls to Stmt.Exe and Stmt.Qry; consequently, any updates to Stmt.Cfg after a call to Stmt.Exe or Stmt.Qry are not observed. One configuration scenario may be to set a server's select statements to return nullable Go types by default: Another scenario may be to configure the runes mapped to bool values: Oracle-specific types offered by the ora package are ora.Rset, ora.IntervalYM, ora.IntervalDS, ora.Raw, ora.Lob and ora.Bfile. ora.Rset represents an Oracle SYS_REFCURSOR. ora.IntervalYM represents an Oracle INTERVAL YEAR TO MONTH. ora.IntervalDS represents an Oracle INTERVAL DAY TO SECOND. ora.Raw represents an Oracle RAW or LONG RAW. ora.Lob may represent an Oracle BLOB or Oracle CLOB. And ora.Bfile represents an Oracle BFILE. ROWID columns are returned as strings and don't have a unique Go type. Rset is used to obtain Go values from a SQL select statement. Methods Rset.Next, Rset.NextRow, and Rset.Len are available. Fields Rset.Row, Rset.Err, Rset.Index, and Rset.ColumnNames are also available. The Next method attempts to load data from an Oracle buffer into Row, returning true when successful. When no data is available, or if an error occurs, Next returns false, setting Row to nil. Any error in Next is assigned to Err. Calling Next increments Index, and method Len returns the total number of rows processed. The NextRow method is convenient for returning a single row. NextRow calls Next and returns Row. ColumnNames returns the names of columns defined by the SQL select statement. Rset has two usages. Rset may be returned from Stmt.Qry when prepared with a SQL select statement: Or, *Rset may be passed to Stmt.Exe when prepared with a stored procedure accepting an OUT SYS_REFCURSOR parameter: Stored procedures with multiple OUT SYS_REFCURSOR parameters enable a single Exe call to obtain multiple Rsets: The types of values assigned to Row may be configured in StmtCfg.Rset. For configuration to take effect, assign StmtCfg.Rset prior to calling Stmt.Qry or Stmt.Exe. Rset prefetching may be controlled by StmtCfg.PrefetchRowCount and StmtCfg.PrefetchMemorySize. PrefetchRowCount works in coordination with PrefetchMemorySize. When PrefetchRowCount is set to zero only PrefetchMemorySize is used; otherwise, the minimum of PrefetchRowCount and PrefetchMemorySize is used. The default uses a PrefetchMemorySize of 134MB. Opening and closing Rsets is managed internally. Rset does not have an Open method or Close method. IntervalYM may be inserted and selected: IntervalDS may be inserted and selected: Transactions on an Oracle server are supported. DML statements auto-commit unless a transaction has started: Ses.PrepAndExe, Ses.PrepAndQry, Ses.Ins, Ses.Upd, and Ses.Sel are convenient one-line methods. Ses.PrepAndExe offers a convenient one-line call to Ses.Prep and Stmt.Exe. Ses.PrepAndQry offers a convenient one-line call to Ses.Prep and Stmt.Qry. Ses.Ins composes, prepares and executes a sql INSERT statement. Ses.Ins is useful when you have to create and maintain a simple INSERT statement with a long list of columns. As table columns are added and dropped over the lifetime of a table, Ses.Ins is easy to read and revise. Ses.Upd composes, prepares and executes a sql UPDATE statement. Ses.Upd is useful when you have to create and maintain a simple UPDATE statement with a long list of columns.
As table columns are added and dropped over the lifetime of a table, Ses.Upd is easy to read and revise. Ses.Sel composes, prepares and queries a sql SELECT statement. Ses.Sel is useful when you have to create and maintain a simple SELECT statement with a long list of columns that have non-default GoColumnTypes. As table columns are added and dropped over the lifetime of a table, Ses.Sel is easy to read and revise. The Ses.Ping method checks whether the client's connection to an Oracle server is valid. A call to Ping requires an open Ses. Ping will return a nil error when the connection is fine: The Srv.Version method is available to obtain the Oracle server version. A call to Version requires an open Ses: Further code examples are available in the example file, test files and samples folder. The ora package provides a simple ora.Logger interface for logging. Logging is disabled by default. Specify one of three optional built-in logging packages to enable logging, or use your own logging package. ora.Cfg().Log offers various options to enable or disable logging of specific ora driver methods. For example: To use the standard Go log package: which produces a sample log of: Messages are prefixed with 'ORA I' for information or 'ORA E' for an error. The log package is configured to write to os.Stderr by default. Use the ora/lg.Std type to configure an alternative io.Writer. To use the glog package: which produces a sample log of: To use the log15 package: which produces a sample log of: See https://github.com/rana/ora/tree/master/samples/lg15/main.go for sample code which uses the log15 package. Tests are available and require some setup. Setup varies depending on whether the Oracle server is configured as a container database or non-container database. It's simpler to set up a non-container database. An example for each setup is explained. Non-container test database setup steps: Container test database setup steps: Some helpful SQL maintenance statements: Run the tests. database/sql method Stmt.QueryRow is not supported. Copyright 2015 Rana Ian. All rights reserved. Use of this source code is governed by The MIT License found in the accompanying LICENSE file.
Package goroslib is a pure Go library for building clients (nodes) for the Robot Operating System (ROS). Examples are available at https://github.com/bluenviron/goroslib/tree/main/examples
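A minimal sketch of creating a node, based on the project's examples; the master address shown is an assumption for a local ROS master:

	package main

	import (
		"log"

		"github.com/bluenviron/goroslib/v2"
	)

	func main() {
		// Create a node and connect it to a ROS master.
		n, err := goroslib.NewNode(goroslib.NodeConf{
			Name:          "goroslib",
			MasterAddress: "127.0.0.1:11311", // illustrative address
		})
		if err != nil {
			log.Fatal(err)
		}
		defer n.Close()
	}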
Package grip provides a flexible logging package for basic Go programs. Drawing inspiration from Go and Python's standard library logging, as well as systemd's journal service and other logging systems, grip provides a number of very powerful logging abstractions in one high-level package. The central type of the grip package is the Journaler type, instances of which provide a distinct log-capturing system. For ease, following the Go standard library, the grip package provides parallel public methods that use an internal "standard" Journaler instance in the grip package, which has some defaults configured and may be sufficient for many use cases. The send.Sender interface provides a way of changing the logging backend, and the send package provides a number of alternate implementations of logging systems, including: systemd's journal, logging to standard output, logging to a file, and generic syslog support. The message.Composer interface is the representation of all messages. They are implemented to provide a raw structured form as well as a string representation for more conventional logging output. Furthermore, they are intended to be easy to produce, and to defer more expensive processing until they're being logged, to prevent expensive operations producing messages that are below threshold. The MultiCatcher type makes it possible to collect errors from a group of operations and then aggregate them as a single error. Logging helpers exist for the following levels: These methods accept both strings (message content) and types that implement the message.Composer interface. Composer types make it possible to delay generating a message unless the logger is over the logging threshold. Use this to avoid expensive serialization operations for suppressed logging operations. All levels also have additional methods with `ln` and `f` appended to the end of the method name which provide Println() and Printf() style functionality. You must pass printf/println-style arguments to these methods. The conditional logging methods take two arguments, a Boolean and a message argument. Messages can be strings, objects that implement the message.Composer interface, or errors. If the condition boolean is true, the threshold level is met, and the message to log is not an empty string, then it logs the resolved message. Use conditional logging methods to potentially suppress log messages based on situations orthogonal to log level, with "log sometimes" or "log rarely" semantics. Combine with Composer types to avoid expensive message-building operations.
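The level methods and their `ln`/`f`/conditional variants described above might be used as in this sketch; the import path and the exact conditional method names (e.g. InfoWhen) are assumptions here, so check the package's documentation:

	package main

	import "github.com/mongodb/grip" // assumed import path for the grip package

	func main() {
		debug := false

		grip.Info("service starting")          // plain message
		grip.Infof("listening on %s", ":8080") // Printf-style variant
		grip.Infoln("ready", "to", "serve")    // Println-style variant

		// Conditional logging: emitted only when the boolean is true,
		// the level is over threshold, and the message is non-empty.
		grip.InfoWhen(debug, "verbose diagnostics enabled")
	}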
File Structures 2 This is a follow-up to my crufty http://github.com/timtadh/file-structures work. That system has some endemic problems: 1. It uses the read/write interface to files. This means that it needs to do block management and cache management. In theory this can be very fast, but it is also very challenging in Go. 2. Because it uses read/write it has to do buffer management. Largely, the system punts on this problem and allows Go to handle the buffer management through the normal memory management system. This doesn't work especially well for the use case of file-structures. File Structures 2 is an experiment to bring memory-mapped IO to the world of Go. The hypotheses are: 1. The operating system is good at page management generally. While we know more about how to manage the structure of B+Trees, VarChar stores, and Linear Hash tables than the OS does, there is no indication that from Go you can achieve better performance. Therefore, I hypothesize that leaving it to the OS will lead to a smaller working set and a faster data structure in general. 2. You can make memory mapping performant in Go. There are many challenges here. The biggest of these is that there are no dynamically sized array types in Go. The size of an array is part of its type; you have to use slices. This creates complications when hooking up structures which contain slices to mmap-allocated blocks of memory. I hypothesize that this repository can achieve good (enough) performance here. The major components of this project: 1. fmap - a memory-mapped file interface. Part C, part Go. Uses cgo. 2. bptree - a B+ Tree with duplicate key support (fixed size keys, variable length values) written on top of fmap. 3. slice - used by fmap and bptree to completely violate the memory and type safety of Go. 4. errors - just a simple error package which maintains a stack trace with every error.
Package poller is a file-descriptor multiplexer. It allows concurrent Read and Write operations from and to multiple file-descriptors without allocating one OS thread for every blocked operation. It operates similarly to Go's netpoller (which multiplexes network connections) without requiring special support from the Go runtime. It can be used with tty devices, character devices, pipes, FIFOs, and any file-descriptor that is poll-able (can be used with select(2), epoll(7), etc.) In addition, package poller allows the user to set timeouts (deadlines) for read and write operations, and also allows for safe cancelation of blocked read and write operations; a Close from another go-routine safely cancels ongoing (blocked) read and write operations. Typical usage All operations on poller FDs are thread-safe; you can use the same FD from multiple go-routines. It is, for example, safe to close a file descriptor blocked on a Read or Write call from another go-routine. Linux systems are supported using an implementation based on epoll(7). For other POSIX systems (that is, most Unix-like systems), a more portable fallback implementation based on select(2) is provided. It has the same semantics as the Linux epoll(7)-based implementation, but is expected to be of lower performance. The select(2)-based implementation uses CGo for some ancillary select-related operations. Ideally, system-specific io-multiplexing or async-io facilities (e.g. kqueue(2), or /dev/poll(7)) should be used to provide higher performance implementations for other systems. Patches for this will be greatly appreciated. If you wish, you can build package poller on Linux to use the select(2)-based implementation instead of the epoll(7) one. To do this, define the build-tag "noepoll". Normally, there is no reason to do this.
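A sketch of typical usage under the semantics described above (deadlines and safe cancelation); the device name, flag constant, and timeout error value are assumptions:

	package main

	import (
		"log"
		"time"

		"github.com/npat-efault/poller" // assumed import path
	)

	func main() {
		// Open a poll-able device; the name and flags are illustrative.
		fd, err := poller.Open("/dev/ttyS0", poller.O_RW)
		if err != nil {
			log.Fatal(err)
		}
		defer fd.Close()

		// Set a read deadline; a blocked Read returns when it expires.
		if err := fd.SetReadDeadline(time.Now().Add(5 * time.Second)); err != nil {
			log.Fatal(err)
		}
		buf := make([]byte, 128)
		n, err := fd.Read(buf)
		if err != nil {
			log.Fatal(err) // assumed to be a timeout error on deadline expiry
		}
		log.Printf("read %d bytes", n)
	}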
Package appconfigdata provides the API client, operations, and parameter types for AWS AppConfig Data. AppConfig Data provides the data plane APIs your application uses to retrieve configuration data. Here's how it works: Your application retrieves configuration data by first establishing a configuration session using the AppConfig Data StartConfigurationSession API action. Your session's client then makes periodic calls to GetLatestConfiguration to check for and retrieve the latest data available. When calling StartConfigurationSession, your code sends the following information: Identifiers (ID or name) of an AppConfig application, environment, and configuration profile that the session tracks. (Optional) The minimum amount of time the session's client must wait between calls to GetLatestConfiguration. In response, AppConfig provides an InitialConfigurationToken to be given to the session's client and used the first time it calls GetLatestConfiguration for that session. This token should only be used once, in your first call to GetLatestConfiguration. You must then use the token returned in the GetLatestConfiguration response (NextPollConfigurationToken) in each subsequent call to GetLatestConfiguration. When calling GetLatestConfiguration, your client code sends the most recent ConfigurationToken value it has and receives in response: NextPollConfigurationToken: the ConfigurationToken value to use on the next call to GetLatestConfiguration. NextPollIntervalInSeconds: the duration the client should wait before making its next call to GetLatestConfiguration. This duration may vary over the course of the session, so it should be used instead of the value sent on the StartConfigurationSession call. The configuration: the latest data intended for the session. This may be empty if the client already has the latest version of the configuration. The InitialConfigurationToken and NextPollConfigurationToken should only be used once. To support long poll use cases, the tokens are valid for up to 24 hours. If a GetLatestConfiguration call uses an expired token, the system returns BadRequestException. For more information and to view example CLI commands that show how to retrieve a configuration using the AppConfig Data StartConfigurationSession and GetLatestConfiguration API actions, see Retrieving the configuration in the AppConfig User Guide.
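The session flow described above might look like the following sketch using the aws-sdk-go-v2 client; the application, environment, and profile identifiers are placeholders:

	package main

	import (
		"context"
		"log"
		"time"

		"github.com/aws/aws-sdk-go-v2/aws"
		"github.com/aws/aws-sdk-go-v2/config"
		"github.com/aws/aws-sdk-go-v2/service/appconfigdata"
	)

	func main() {
		ctx := context.Background()
		cfg, err := config.LoadDefaultConfig(ctx)
		if err != nil {
			log.Fatal(err)
		}
		client := appconfigdata.NewFromConfig(cfg)

		// Establish the session; identifiers here are illustrative.
		start, err := client.StartConfigurationSession(ctx, &appconfigdata.StartConfigurationSessionInput{
			ApplicationIdentifier:          aws.String("my-app"),
			EnvironmentIdentifier:          aws.String("prod"),
			ConfigurationProfileIdentifier: aws.String("my-profile"),
		})
		if err != nil {
			log.Fatal(err)
		}

		token := start.InitialConfigurationToken
		for {
			out, err := client.GetLatestConfiguration(ctx, &appconfigdata.GetLatestConfigurationInput{
				ConfigurationToken: token,
			})
			if err != nil {
				log.Fatal(err) // an expired token yields BadRequestException
			}
			token = out.NextPollConfigurationToken // always carry the newest token forward
			if len(out.Configuration) > 0 {        // empty means the client already has the latest
				log.Printf("new configuration: %d bytes", len(out.Configuration))
			}
			time.Sleep(time.Duration(out.NextPollIntervalInSeconds) * time.Second)
		}
	}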
Package fuse enables writing FUSE file systems on Linux, OS X, and FreeBSD. On OS X, it requires OSXFUSE (http://osxfuse.github.com/). There are two approaches to writing a FUSE file system. The first is to speak the low-level message protocol, reading from a Conn using ReadRequest and writing using the various Respond methods. This approach is closest to the actual interaction with the kernel and can be the simplest one in contexts such as protocol translators. Servers of synthesized file systems tend to share common bookkeeping abstracted away by the second approach, which is to call fs.Serve to serve the FUSE protocol using an implementation of the service methods in the interfaces FS* (file system), Node* (file or directory), and Handle* (opened file or directory). There are a daunting number of such methods that can be written, but few are required. The specific methods are described in the documentation for those interfaces. The hellofs subdirectory contains a simple illustration of the fs.Serve approach. The required and optional methods for the FS, Node, and Handle interfaces have the general form where Op is the name of a FUSE operation. Op reads request parameters from req and writes results to resp. An operation whose only result is the error result omits the resp parameter. Multiple goroutines may call service methods simultaneously; the methods being called are responsible for appropriate synchronization. The operation must not hold on to the request or response, including any []byte fields such as WriteRequest.Data or SetxattrRequest.Xattr. Operations can return errors. The FUSE interface can only communicate POSIX errno error numbers to file system clients, the message is not visible to file system clients. The returned error can implement ErrorNumber to control the errno returned. Without ErrorNumber, a generic errno (EIO) is returned. Error messages will be visible in the debug log as part of the response. In some file systems, some operations may take an undetermined amount of time. For example, a Read waiting for a network message or a matching Write might wait indefinitely. If the request is cancelled and no longer needed, the context will be cancelled. Blocking operations should select on a receive from ctx.Done() and attempt to abort the operation early if the receive succeeds (meaning the channel is closed). To indicate that the operation failed because it was aborted, return fuse.EINTR. If an operation does not block for an indefinite amount of time, supporting cancellation is not necessary. All requests types embed a Header, meaning that the method can inspect req.Pid, req.Uid, and req.Gid as necessary to implement permission checking. The kernel FUSE layer normally prevents other users from accessing the FUSE file system (to change this, see AllowOther, AllowRoot), but does not enforce access modes (to change this, see DefaultPermissions). Behavior and metadata of the mounted file system can be changed by passing MountOption values to Mount.
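A minimal, non-functional sketch of the fs.Serve approach: FS supplies the root Node, and Dir implements just the Attr method, which is enough to mount an empty read-only directory (the mountpoint is illustrative):

	package main

	import (
		"context"
		"log"
		"os"

		"bazil.org/fuse"
		"bazil.org/fuse/fs"
	)

	// FS provides the root of the file system tree.
	type FS struct{}

	func (FS) Root() (fs.Node, error) { return Dir{}, nil }

	// Dir is an empty directory; Attr is the only required Node method here.
	type Dir struct{}

	func (Dir) Attr(ctx context.Context, a *fuse.Attr) error {
		a.Inode = 1
		a.Mode = os.ModeDir | 0o555
		return nil
	}

	func main() {
		conn, err := fuse.Mount("/mnt/example") // illustrative mountpoint
		if err != nil {
			log.Fatal(err)
		}
		defer conn.Close()

		if err := fs.Serve(conn, FS{}); err != nil {
			log.Fatal(err)
		}
	}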
Package json implements semantic processing of JSON as specified in RFC 8259. JSON is a simple data interchange format that can represent primitive data types such as booleans, strings, and numbers, in addition to structured data types such as objects and arrays. Marshal and Unmarshal encode and decode Go values to/from JSON text contained within a []byte. MarshalWrite and UnmarshalRead operate on JSON text by writing to or reading from an io.Writer or io.Reader. MarshalEncode and UnmarshalDecode operate on JSON text by encoding to or decoding from a jsontext.Encoder or jsontext.Decoder. Options may be passed to each of the marshal or unmarshal functions to configure the semantic behavior of marshaling and unmarshaling (i.e., alter how JSON data is understood as Go data and vice versa). jsontext.Options may also be passed to the marshal or unmarshal functions to configure the syntactic behavior of encoding or decoding. The data types of JSON are mapped to/from the data types of Go based on the closest logical equivalent between the two type systems. For example, a JSON boolean corresponds with a Go bool, a JSON string corresponds with a Go string, a JSON number corresponds with a Go int, uint or float, a JSON array corresponds with a Go slice or array, and a JSON object corresponds with a Go struct or map. See the documentation on Marshal and Unmarshal for a comprehensive list of how the JSON and Go type systems correspond. Arbitrary Go types can customize their JSON representation by implementing MarshalerV1, MarshalerV2, UnmarshalerV1, or UnmarshalerV2. This provides authors of Go types with control over how their types are serialized as JSON. Alternatively, users can implement functions that match MarshalFuncV1, MarshalFuncV2, UnmarshalFuncV1, or UnmarshalFuncV2 to specify the JSON representation for arbitrary types. This provides callers of JSON functionality with control over how any arbitrary type is serialized as JSON. A Go struct is naturally represented as a JSON object, where each Go struct field corresponds with a JSON object member. When marshaling, all Go struct fields are recursively encoded in depth-first order as JSON object members except those that are ignored or omitted. When unmarshaling, JSON object members are recursively decoded into the corresponding Go struct fields. Object members that do not match any struct fields, also known as “unknown members”, are ignored by default or rejected if RejectUnknownMembers is specified. The representation of each struct field can be customized in the "json" struct field tag, where the tag is a comma separated list of options. As a special case, if the entire tag is `json:"-"`, then the field is ignored with regard to its JSON representation. The first option is the JSON object name override for the Go struct field. If the name is not specified, then the Go struct field name is used as the JSON object name. JSON names containing commas or quotes, or names identical to "" or "-", can be specified using a single-quoted string literal, where the syntax is identical to the Go grammar for a double-quoted string literal, but instead uses single quotes as the delimiters. By default, unmarshaling uses case-sensitive matching to identify the Go struct field associated with a JSON object name. 
After the name, the following tag options are supported:

omitzero: When marshaling, the "omitzero" option specifies that the struct field should be omitted if the field value is zero, as determined by the "IsZero() bool" method if present, otherwise based on whether the field is the zero Go value. This option has no effect when unmarshaling.

omitempty: When marshaling, the "omitempty" option specifies that the struct field should be omitted if the field value would have been encoded as a JSON null, empty string, empty object, or empty array. This option has no effect when unmarshaling.

string: The "string" option specifies that StringifyNumbers be set when marshaling or unmarshaling a struct field value. This causes numeric types to be encoded as a JSON number within a JSON string, and to be decoded from either a JSON number or a JSON string containing a JSON number. This extra level of encoding is often necessary since many JSON parsers cannot precisely represent 64-bit integers.

nocase: When unmarshaling, the "nocase" option specifies that if the JSON object name does not exactly match the JSON name for any of the struct fields, then it attempts to match the struct field using a case-insensitive match that also ignores dashes and underscores. If multiple fields match, the first declared field in breadth-first order takes precedence. This takes precedence even if MatchCaseInsensitiveNames is set to false. This cannot be specified together with the "strictcase" option.

strictcase: When unmarshaling, the "strictcase" option specifies that the JSON object name must exactly match the JSON name for the struct field. This takes precedence even if MatchCaseInsensitiveNames is set to true. This cannot be specified together with the "nocase" option.

inline: The "inline" option specifies that the JSON representable content of this field type is to be promoted as if it were specified in the parent struct. It is the JSON equivalent of Go struct embedding. A Go embedded field is implicitly inlined unless an explicit JSON name is specified. The inlined field must be a Go struct (that does not implement any JSON methods), jsontext.Value, map[string]T, or an unnamed pointer to such types. When marshaling, inlined fields from a pointer type are omitted if it is nil. Inlined fields of type jsontext.Value and map[string]T are called “inlined fallbacks” as they can represent all possible JSON object members not directly handled by the parent struct. Only one inlined fallback field may be specified in a struct, while many non-fallback fields may be specified. This option must not be specified with any other option (including the JSON name).

unknown: The "unknown" option is a specialized variant of the inlined fallback to indicate that this Go struct field contains any number of unknown JSON object members. The field type must be a jsontext.Value, map[string]T, or an unnamed pointer to such types. If DiscardUnknownMembers is specified when marshaling, the contents of this field are ignored. If RejectUnknownMembers is specified when unmarshaling, any unknown object members are rejected regardless of whether an inlined fallback with the "unknown" option exists. This option must not be specified with any other option (including the JSON name).

format: The "format" option specifies a format flag used to specialize the formatting of the field value.
The option is a key-value pair specified as "format:value" where the value must be either a literal consisting of letters and numbers (e.g., "format:RFC3339") or a single-quoted string literal (e.g., "format:'2006-01-02'"). The interpretation of the format flag is determined by the struct field type. The "omitzero" and "omitempty" options are mostly semantically identical. The former is defined in terms of the Go type system, while the latter is defined in terms of the JSON type system. Consequently they behave differently in some circumstances. For example, only a nil slice or map is omitted under "omitzero", while an empty slice or map is omitted under "omitempty" regardless of nilness. The "omitzero" option is useful for types with a well-defined zero value (e.g., net/netip.Addr) or with an IsZero method (e.g., time.Time.IsZero). Every Go struct corresponds to a list of JSON representable fields which is constructed by performing a breadth-first search over all struct fields (excluding unexported or ignored fields), where the search recursively descends into inlined structs. The set of non-inlined fields in a struct must have unique JSON names. If multiple fields all have the same JSON name, then the one at the shallowest depth takes precedence and the other fields at deeper depths are excluded from the list of JSON representable fields. If multiple fields at the shallowest depth have the same JSON name, but exactly one is explicitly tagged with a JSON name, then that field takes precedence and all others are excluded from the list. This is analogous to Go visibility rules for struct field selection with embedded struct types. Marshaling or unmarshaling a non-empty struct without any JSON representable fields results in a SemanticError. Unexported fields must not have any `json` tags except for `json:"-"`. Unmarshal matches JSON object names with Go struct fields using a case-sensitive match, but can be configured to use a case-insensitive match with the "nocase" option. This permits unmarshaling from inputs that use naming conventions such as camelCase, snake_case, or kebab-case. By default, JSON object names for Go struct fields are derived from the Go field name, but may be specified in the `json` tag. Due to JSON's heritage in JavaScript, the most common naming convention used for JSON object names is camelCase. The "format" tag option can be used to alter the formatting of certain types. JSON objects can be inlined within a parent object similar to how Go structs can be embedded within a parent struct. The inlining rules are similar to those of Go embedding, but operate upon the JSON namespace. Go struct fields can be omitted from the output depending on either the input Go value or the output JSON encoding of the value. The "omitzero" option omits a field if it is the zero Go value or implements an "IsZero() bool" method that reports true. The "omitempty" option omits a field if it encodes as an empty JSON value, which we define as a JSON null or an empty JSON string, object, or array. In many cases, the behavior of "omitzero" and "omitempty" are equivalent. If both provide the desired effect, then using "omitzero" is preferred. The exact order of JSON object members can be preserved through the use of a specialized type that implements MarshalerV2 and UnmarshalerV2. Some Go types have a custom JSON representation where the implementation is delegated to some external package. Consequently, the "json" package will not know how to use that external implementation.
For example, the google.golang.org/protobuf/encoding/protojson package implements JSON for all google.golang.org/protobuf/proto.Message types. WithMarshalers and WithUnmarshalers can be used to configure "json" and "protojson" to cooperate together. When implementing HTTP endpoints, it is common to be operating with an io.Reader and an io.Writer. The MarshalWrite and UnmarshalRead functions assist in operating on such input/output types. UnmarshalRead reads the entirety of the io.Reader to ensure that io.EOF is encountered without any unexpected bytes after the top-level JSON value. If a type implements encoding.TextMarshaler and/or encoding.TextUnmarshaler, then the MarshalText and UnmarshalText methods are used to encode/decode the value to/from a JSON string. Due to version skew, the set of JSON object members known at compile time may differ from the set of members encountered at execution time. As such, it may be useful to have finer-grained handling of unknown members. This package supports preserving, rejecting, or discarding such members.
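A short sketch of the struct tag options discussed above, assuming the experimental json v2 module path; the Event type is invented for illustration:

	package main

	import (
		"fmt"
		"log"
		"time"

		"github.com/go-json-experiment/json" // assumed path for the json v2 package
	)

	// Event is an invented type showing a name override, omitzero,
	// omitempty, and a format flag on a time.Time field.
	type Event struct {
		Name  string    `json:"name"`
		Start time.Time `json:"start,omitzero,format:RFC3339"`
		Tags  []string  `json:"tags,omitempty"`
	}

	func main() {
		b, err := json.Marshal(Event{Name: "launch"})
		if err != nil {
			log.Fatal(err)
		}
		// Start is the zero time and Tags is empty, so both are omitted.
		fmt.Println(string(b)) // {"name":"launch"}
	}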
Package gocql implements a fast and robust Cassandra driver for the Go programming language. Pass a list of initial node IP addresses to NewCluster to create a new cluster configuration: Port can be specified as part of the address, the above is equivalent to: It is recommended to use the value set in the Cassandra config for broadcast_address or listen_address: an IP address, not a domain name. This is because events from Cassandra will use the configured IP address, which is used to index connected hosts. If the domain name specified resolves to more than one IP address, then the driver may connect multiple times to the same host, and will not mark the node as being down or up from events. Then you can customize more options (see ClusterConfig): The driver tries to automatically detect the protocol version to use if not set, but you might want to set the protocol version explicitly, as it's not defined which version will be used in certain situations (for example during an upgrade of the cluster when some of the nodes support a different set of protocol versions than other nodes). The driver advertises the module name and version in the STARTUP message, so servers are able to detect the version. If you use a replace directive in go.mod, the driver will send information about the replacement module instead. When ready, create a session from the configuration. Don't forget to Close the session once you are done with it: The CQL protocol uses a SASL-based authentication mechanism and so consists of an exchange of server challenge and client response pairs. The details of the exchanged messages depend on the authenticator used. To use authentication, set ClusterConfig.Authenticator or ClusterConfig.AuthProvider. PasswordAuthenticator is provided to use for username/password authentication: It is possible to secure traffic between the client and server with TLS. To use TLS, set the ClusterConfig.SslOpts field. SslOptions embeds *tls.Config so you can set that directly. There are also helpers to load keys/certificates from files. Warning: Due to historical reasons, the SslOptions is insecure by default, so you need to set EnableHostVerification to true if no Config is set. Most users should set SslOptions.Config to a *tls.Config. SslOptions and Config.InsecureSkipVerify interact as follows: For example: To route queries to the local DC first, use DCAwareRoundRobinPolicy. For example, if the datacenter you primarily want to connect to is called dc1 (as configured in the database): The driver can route queries to nodes that hold data replicas based on partition key (preferring the local DC). Note that TokenAwareHostPolicy can take options such as gocql.ShuffleReplicas and gocql.NonLocalReplicasFallback. We recommend running with a token aware host policy in production for maximum performance. The driver can only use token-aware routing for queries where all partition key columns are query parameters. For example, instead of use The DCAwareRoundRobinPolicy can be replaced with RackAwareRoundRobinPolicy, which takes two parameters, datacenter and rack. Instead of dividing hosts with two tiers (local datacenter and remote datacenters) it divides hosts into three (the local rack, the rest of the local datacenter, and everything else). RackAwareRoundRobinPolicy can be combined with TokenAwareHostPolicy in the same way as DCAwareRoundRobinPolicy. Create queries with Session.Query. Query values must not be reused between different executions and must not be modified after starting execution of the query.
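A minimal sketch of the cluster and session setup just described; the addresses, keyspace, and table schema are illustrative:

	package main

	import (
		"log"

		"github.com/gocql/gocql"
	)

	func main() {
		// Pass initial node addresses to NewCluster, then customize options.
		cluster := gocql.NewCluster("192.168.1.1", "192.168.1.2", "192.168.1.3")
		cluster.Keyspace = "example"
		cluster.Consistency = gocql.Quorum

		// Create a session and close it when done.
		session, err := cluster.CreateSession()
		if err != nil {
			log.Fatal(err)
		}
		defer session.Close()

		// Execute a statement without reading results.
		if err := session.Query(`INSERT INTO tweet (timeline, id, text) VALUES (?, ?, ?)`,
			"me", gocql.TimeUUID(), "hello world").Exec(); err != nil {
			log.Fatal(err)
		}
	}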
To execute a query without reading results, use Query.Exec: A single row can be read by calling Query.Scan: Multiple rows can be read using Iter.Scanner: See Example for a complete example. The driver automatically prepares DML queries (SELECT/INSERT/UPDATE/DELETE/BATCH statements) and maintains a cache of prepared statements. The CQL protocol does not support preparing other query types. When using CQL protocol >= 4, it is possible to use gocql.UnsetValue as the bound value of a column. This will cause the database to ignore writing the column. The main advantage is the ability to keep the same prepared statement even when you don't want to update some fields, where before you needed to make another prepared statement. Session is safe to use from multiple goroutines, so to execute multiple concurrent queries, just execute them from several worker goroutines. Gocql provides a synchronous-looking API (as recommended for Go APIs) while the queries are executed asynchronously at the protocol level. Null values are unmarshalled as the zero value of the type. If you need to distinguish, for example, between a text column being null and an empty string, you can unmarshal into a *string variable instead of a string. See Example_nulls for a full example. The driver reuses the backing memory of slices when unmarshalling. This is an optimization so that a buffer does not need to be allocated for every processed row. However, you need to be careful when storing the slices in other memory structures. When you want to save the data for later use, pass a new slice every time. A common pattern is to declare the slice variable within the scanner loop: The driver supports paging of results with automatic prefetch; see ClusterConfig.PageSize, Session.SetPrefetch, Query.PageSize, and Query.Prefetch. It is also possible to control the paging manually with Query.PageState (this disables automatic prefetch). Manual paging is useful if you want to store the page state externally, for example in a URL to allow users to browse pages in a result. You might want to sign/encrypt the paging state when exposing it externally since it contains data from primary keys. Paging state is specific to the CQL protocol version and the exact query used. It is meant as opaque state that should not be modified. If you send paging state from a different query or protocol version, then the behaviour is not defined (you might get unexpected results or an error from the server). For example, do not send paging state returned by a node using protocol version 3 to a node using protocol version 4. Also, when using protocol version 4, paging state between Cassandra 2.2 and 3.0 is incompatible (https://issues.apache.org/jira/browse/CASSANDRA-10880). The driver does not check whether the paging state is from the same protocol version/statement. You might want to validate this yourself, as it could be a problem if you store paging state externally. For example, if you store paging state in a URL, the URLs might become broken when you upgrade your cluster. Call Query.PageState(nil) to fetch just the first page of the query results. Pass the page state returned by Iter.PageState to Query.PageState of a subsequent query to get the next page. If the length of the slice returned by Iter.PageState is zero, there are no more pages available (or an error occurred). Using too low a value of PageSize will negatively affect performance; a value below 100 is probably too low.
While Cassandra currently returns exactly PageSize items (except for the last page) in a page, the protocol authors explicitly reserved the right to return a smaller or larger number of items in a page for performance reasons, so don't rely on the page having the exact count of items. See Example_paging for an example of manual paging. There are certain situations when you don't know the list of columns in advance, mainly when the query is supplied by the user. Iter.Columns, Iter.RowData, Iter.MapScan and Iter.SliceMap can be used to handle this case. See Example_dynamicColumns. The CQL protocol supports sending batches of DML statements (INSERT/UPDATE/DELETE) and so does gocql. Use Session.NewBatch to create a new batch and then fill in the details of individual queries. Then execute the batch with Session.ExecuteBatch. Logged batches ensure atomicity: either all or none of the operations in the batch will succeed, but they have overhead to ensure this property. Unlogged batches don't have the overhead of logged batches, but don't guarantee atomicity. Updates of counters are handled specially by Cassandra, so batches of counter updates have to use the CounterBatch type. A counter batch can only contain statements to update counters. For unlogged batches it is recommended to send only single-partition batches (i.e. all statements in the batch should involve only a single partition). A multi-partition batch needs to be split by the coordinator node and re-sent to the correct nodes. With single-partition batches you can send the batch directly to the node for the partition without incurring the additional network hop. It is also possible to pass an entire BEGIN BATCH .. APPLY BATCH statement to Query.Exec. There are differences in how those are executed. A BEGIN BATCH statement passed to Query.Exec is prepared as a whole in a single statement. Session.ExecuteBatch prepares the individual statements in the batch. If you have variable-length batches using the same statement, using Session.ExecuteBatch is more efficient. See Example_batch for an example. Query.ScanCAS or Query.MapScanCAS can be used to execute a single-statement lightweight transaction (an INSERT/UPDATE .. IF statement) and read its result. See the example for Query.MapScanCAS. Multiple-statement lightweight transactions can be executed as a logged batch that contains at least one conditional statement. All the conditions must return true for the batch to be applied. You can use Session.ExecuteBatchCAS and Session.MapExecuteBatchCAS when executing the batch to learn about the result of the LWT. See the example for Session.MapExecuteBatchCAS. Queries can be marked as idempotent. Marking the query as idempotent tells the driver that the query can be executed multiple times without affecting its result. Non-idempotent queries are not eligible for retrying nor speculative execution. Idempotent queries are retried in case of errors based on the configured RetryPolicy. If the query is an LWT and the configured RetryPolicy additionally implements the LWTRetryPolicy interface, then the policy will be cast to LWTRetryPolicy and used this way. Queries can be retried even before they fail by setting a SpeculativeExecutionPolicy. The policy can cause the driver to retry on a different node if the query is taking longer than a specified delay, even before the driver receives an error or timeout from the server. When a query is speculatively executed, the original execution is still executing.
The two parallel executions of the query race to return a result; the first received result will be returned. UDTs can be (un)marshaled from/to map[string]interface{} or a Go struct (or a type implementing the UDTUnmarshaler, UDTMarshaler, Unmarshaler or Marshaler interfaces). For structs, the cql tag can be used to specify the CQL field name to be mapped to a struct field: See Example_userDefinedTypesMap, Example_userDefinedTypesStruct, ExampleUDTMarshaler, ExampleUDTUnmarshaler. It is possible to provide observer implementations that could be used to gather metrics: The CQL protocol also supports tracing of queries. When enabled, the database will write information about internal events that happened during execution of the query. You can use Query.Trace to request tracing and receive the session ID that the database used to store the trace information in the system_traces.sessions and system_traces.events tables. NewTraceWriter returns an implementation of Tracer that writes the events to a writer. Gathering trace information might be essential for debugging and optimizing queries, but writing traces has overhead, so this feature should not be used on production systems with very high load unless you know what you are doing. Example_batch demonstrates how to execute a batch of statements. Example_dynamicColumns demonstrates how to handle a dynamic column list. Example_marshalerUnmarshaler demonstrates how to implement a Marshaler and Unmarshaler. Example_nulls demonstrates how to distinguish between null and zero value when needed. Null values are unmarshalled as the zero value of the type. If you need to distinguish, for example, between a text column being null and an empty string, you can unmarshal into a *string field. Example_paging demonstrates how to manually fetch pages and use page state. See also the package documentation about paging. Example_set demonstrates how to use sets. Example_userDefinedTypesMap demonstrates how to work with user-defined types as maps. See also Example_userDefinedTypesStruct and the examples for UDTMarshaler and UDTUnmarshaler if you want to map to structs. Example_userDefinedTypesStruct demonstrates how to work with user-defined types as structs. See also the examples for UDTMarshaler and UDTUnmarshaler if you need more control/better performance.
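A sketch of mapping a UDT to a Go struct with cql tags, assuming a type created as CREATE TYPE coordinates (lat double, lon double); the table and names are illustrative:

	package main

	import "github.com/gocql/gocql"

	// Coordinates maps the fields of the coordinates UDT via cql struct tags.
	type Coordinates struct {
		Lat float64 `cql:"lat"`
		Lon float64 `cql:"lon"`
	}

	// insertPoint binds the struct like any other query parameter.
	func insertPoint(session *gocql.Session, id gocql.UUID, c Coordinates) error {
		return session.Query(`INSERT INTO points (id, coords) VALUES (?, ?)`, id, c).Exec()
	}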
Package aes implements AES encryption (formerly Rijndael), as defined in U.S. Federal Information Processing Standards Publication 197. The AES operations in this package are not implemented using constant-time algorithms. An exception is when running on systems with enabled hardware support for AES that makes these operations constant-time. Examples include amd64 systems using AES-NI extensions and s390x systems using Message-Security-Assist extensions. On such systems, when the result of NewCipher is passed to cipher.NewGCM, the GHASH operation used by GCM is also constant-time.
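A short standard-library sketch of the NewCipher-to-GCM path mentioned above; the key size selects AES-128/192/256, and error handling is abbreviated:

	package main

	import (
		"crypto/aes"
		"crypto/cipher"
		"crypto/rand"
		"fmt"
		"io"
		"log"
	)

	func main() {
		key := make([]byte, 32) // 16, 24, or 32 bytes select AES-128, AES-192, or AES-256
		if _, err := io.ReadFull(rand.Reader, key); err != nil {
			log.Fatal(err)
		}

		block, err := aes.NewCipher(key)
		if err != nil {
			log.Fatal(err)
		}

		// Wrapping the block cipher in GCM gives authenticated encryption;
		// with hardware AES support this path, including GHASH, is constant-time.
		gcm, err := cipher.NewGCM(block)
		if err != nil {
			log.Fatal(err)
		}

		nonce := make([]byte, gcm.NonceSize())
		if _, err := io.ReadFull(rand.Reader, nonce); err != nil {
			log.Fatal(err)
		}
		ciphertext := gcm.Seal(nil, nonce, []byte("hello"), nil)
		fmt.Printf("%x\n", ciphertext)
	}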
Package processex finds an os.Process (operating system process) by name (FindByName) or PID (Find). It is cross-platform, lightweight, fast, and fully compatible with the stdlib os.Process. Usage Contributing
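A sketch of the lookup helpers named in the doc; the import path and the exact signature of FindByName are assumptions here:

	package main

	import (
		"fmt"
		"log"

		"github.com/example/processex" // hypothetical import path; adjust to the real module
	)

	func main() {
		// FindByName is assumed to return the matching processes and an error.
		procs, err := processex.FindByName("nginx")
		if err != nil {
			log.Fatal(err)
		}
		for _, p := range procs {
			fmt.Println("found PID:", p.Pid) // p is a stdlib *os.Process
		}
	}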
Package CloudForest implements ensembles of decision trees for machine learning in pure Go (golang to search engines). It allows for a number of related algorithms for classification, regression, feature selection and structure analysis on heterogeneous numerical/categorical data with missing values. These include: Breiman and Cutler's Random Forest for Classification and Regression Adaptive Boosting (AdaBoost) Classification Gradient Boosting Tree Regression Entropy and Cost driven classification L1 regression Feature selection with artificial contrasts Proximity and model structure analysis Roughly balanced bagging for unbalanced classification The API hasn't stabilized yet and may change rapidly. Tests and benchmarks have been performed only on embargoed data sets and cannot yet be released. Library documentation is in code and can be viewed with godoc or live at: http://godoc.org/github.com/ryanbressler/CloudForest Documentation of command line utilities and file formats can be found in README.md, which can be viewed formatted on github: http://github.com/ryanbressler/CloudForest Pull requests and bug reports are welcome. CloudForest was created by Ryan Bressler and is being developed in the Shumelivich Lab at the Institute for Systems Biology for use on genomic/biomedical data with partial support from The Cancer Genome Atlas and the Inova Translational Medicine Institute. CloudForest is intended to provide fast, comprehensible building blocks that can be used to implement ensembles of decision trees. CloudForest is written in Go to allow a data scientist to develop and scale new models and analysis quickly instead of having to modify complex legacy code. Data structures and file formats are chosen with use in multithreaded and cluster environments in mind. Go's support for function types is used to provide an interface to run code as data is percolated through a tree. This method is flexible enough that it can extend the tree being analyzed. Growing a decision tree using Breiman and Cutler's method can be done in an anonymous function/closure passed to a tree's root node's Recurse method: This allows a researcher to include whatever additional analysis they need (importance scores, proximity etc) in tree growth. The same Recurse method can also be used to analyze existing forests to tabulate scores or extract structure. Utilities like leafcount and errorrate use this method to tabulate data about the tree in collection objects. Decision trees are grown with the goal of reducing "impurity", which is usually defined as Gini Impurity for categorical targets or mean squared error for numerical targets. CloudForest grows trees against the Target interface which allows for alternative definitions of impurity. CloudForest includes several alternative targets: Additional targets can be stacked on top of these targets to add boosting functionality: Repeatedly splitting the data and searching for the best split at each node of a decision tree are the most computationally intensive parts of decision tree learning, and CloudForest includes optimized code to perform these tasks. Go's slices are used extensively in CloudForest to make it simple to interact with optimized code. Many previous implementations of Random Forest have avoided reallocation by reordering data in place and keeping track of start and end indexes. In Go, slices pointing at the same underlying arrays make this sort of optimization transparent.
For example, a function like: can return left and right slices that point to the same underlying array as the original slice of cases, but these slices should not have their values changed. Functions used while searching for the best split also accept pointers to reusable slices and structs to maximize speed by keeping memory allocations to a minimum. BestSplitAllocs contains pointers to these items and its use can be seen in functions like: For categorical predictors, BestSplit will also attempt to intelligently choose between 4 different implementations depending on user input and the number of categories. These include exhaustive, random, and iterative searches for the best combination of categories implemented with bitwise operations against int and big.Int. See BestCatSplit, BestCatSplitIter, BestCatSplitBig and BestCatSplitIterBig. All numerical predictors are handled by BestNumSplit which relies on Go's sorting package. Training a Random Forest is an inherently parallel process, and CloudForest is designed to allow parallel implementations that can tackle large problems while keeping memory usage low by writing and using data structures directly to/from disk. Trees can be grown in separate goroutines. The growforest utility provides an example of this that uses goroutines and channels to grow trees in parallel and write trees to disk as they are finished by the "worker" goroutines. The few summary statistics like mean impurity decrease per feature (importance) can be calculated using thread-safe data structures like RunningMean. Trees can also be grown on separate machines. The .sf stochastic forest format allows several small forests to be combined by concatenation, and the ForestReader and ForestWriter structs allow these forests to be accessed tree by tree (or even node by node) from disk. For data sets that are too big to fit in memory on a single machine, Tree.Grow and FeatureMatrix.BestSplitter can be reimplemented to load candidate features from disk, a distributed database, etc. By default CloudForest uses a fast heuristic for missing values. When proposing a split on a feature with missing data, the missing cases are removed and the impurity value is corrected to use three-way impurity, which reduces the bias towards features with lots of missing data: Missing values in the target variable are left out of impurity calculations. This has provided generally good results at a fraction of the computational costs of imputing data. Optionally, Feature.ImputeMissing or FeatureMatrix.ImputeMissing can be called before forest growth to impute missing values to the feature mean/mode, which Breiman [2] suggests as a fast method for imputing values. This forest could also be analyzed for proximity (using leafcount or tree.GetLeaves) to do the more accurate proximity-weighted imputation Breiman describes. Experimental support is provided for 3-way splitting, which splits missing cases onto a third branch. [2] This has so far yielded mixed results in testing. At some point in the future support may be added for local imputing of missing values during tree growth as described in [3]. [1] http://www.stat.berkeley.edu/~breiman/RandomForests/cc_home.htm#missing1 [2] https://code.google.com/p/rf-ace/ [3] http://projecteuclid.org/DPubS?verb=Display&version=1.0&service=UI&handle=euclid.aoas/1223908043&page=record In CloudForest data is stored using the FeatureMatrix struct which contains Features.
The Feature struct implements storage and methods for both categorical and numerical data, calculations of impurity, etc., and the search for the best split. The Target interface abstracts the methods of Feature that are needed for a feature to be predictable. This allows for the implementation of alternative types of regression and classification. Trees are built from Nodes and Splitters and stored within a Forest. Tree has a Grow method that implements Breiman and Cutler's method (see the extract above) for growing a tree. A GrowForest method is also provided that implements the rest of the method, including sampling cases, but it may be faster to grow the forest to disk as in the growforest utility. Prediction and voting are done using Tree.Vote and the CatBallotBox and NumBallotBox types, which implement the VoteTallyer interface.
Package rootcerts provides a Go conversion of Mozilla's certdata.txt file, extracting trusted CA certificates only. It was generated with the gencerts tool using the following command line: This package allows for the embedding of root CA certificates directly into a Go executable, reducing or negating the need for Go to have access to root certificates provided by the operating system in order to validate certificates issued by those authorities. Root certificates can be accessed through this package, or may be easily installed into the http package's DefaultTransport by calling UpdateDefaultTransport.
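A minimal sketch of installing the embedded roots into http.DefaultTransport; the import path and the error return of UpdateDefaultTransport are assumptions here:

	package main

	import (
		"log"
		"net/http"

		"github.com/gwatts/rootcerts" // assumed import path for the rootcerts package
	)

	func main() {
		// UpdateDefaultTransport is named in the package doc; the error
		// return assumed here should be verified against the real API.
		if err := rootcerts.UpdateDefaultTransport(); err != nil {
			log.Fatal(err)
		}
		resp, err := http.Get("https://example.com/") // verified against the embedded roots
		if err != nil {
			log.Fatal(err)
		}
		resp.Body.Close()
	}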
Package cgi implements the common gateway interface (CGI) for Caddy, a modern, full-featured, easy-to-use web server. This plugin lets you generate dynamic content on your website by means of command line scripts. To collect information about the inbound HTTP request, your script examines certain environment variables such as PATH_INFO and QUERY_STRING. Then, to return a dynamically generated web page to the client, your script simply writes content to standard output. In the case of POST requests, your script reads additional inbound content from standard input. The advantage of CGI is that you do not need to fuss with server startup and persistence, long term memory management, sockets, and crash recovery. Your script is called when a request matches one of the patterns that you specify in your Caddyfile. As soon as your script completes its response, it terminates. This simplicity makes CGI a perfect complement to the straightforward operation and configuration of Caddy. The benefits of Caddy, including HTTPS by default, basic access authentication, and lots of middleware options extend easily to your CGI scripts. CGI has some disadvantages. For one, Caddy needs to start a new process for each request. This can adversely impact performance and, if resources are shared between CGI applications, may require the use of some interprocess synchronization mechanism such as a file lock. Your server’s responsiveness could in some circumstances be affected, such as when your web server is hit with very high demand, when your script’s dependencies require a long startup, or when concurrently running scripts take a long time to respond. However, in many cases, such as using a pre-compiled CGI application like fossil or a Lua script, the impact will generally be insignificant. Another restriction of CGI is that scripts will be run with the same permissions as Caddy itself. This can sometimes be less than ideal, for example when your script needs to read or write files associated with a different owner. Serving dynamic content exposes your server to more potential threats than serving static pages. There are a number of considerations of which you should be aware when using CGI applications. CGI SCRIPTS SHOULD BE LOCATED OUTSIDE OF CADDY’S DOCUMENT ROOT. Otherwise, an inadvertent misconfiguration could result in Caddy delivering the script as an ordinary static resource. At best, this could merely confuse the site visitor. At worst, it could expose sensitive internal information that should not leave the server. MISTRUST THE CONTENTS OF PATH_INFO, QUERY_STRING AND STANDARD INPUT. Most of the environment variables available to your CGI program are inherently safe because they originate with Caddy and cannot be modified by external users. This is not the case with PATH_INFO, QUERY_STRING and, in the case of POST actions, the contents of standard input. Be sure to validate and sanitize all inbound content. If you use a CGI library or framework to process your scripts, make sure you understand its limitations. An error in a CGI application is generally handled within the application itself and reported in the headers it returns. Additionally, if the Caddy errors directive is enabled, any content the application writes to its standard error stream will be written to the error log. This can be useful to diagnose problems with the execution of the CGI application. Your CGI application can be executed directly or indirectly. 
In the direct case, the application can be a compiled native executable or it can be a shell script that contains as its first line a shebang that identifies the interpreter to which the file’s name should be passed. Caddy must have permission to execute the application. On POSIX systems this will mean making sure the application’s ownership and permission bits are set appropriately; on Windows, this may involve properly setting up the filename extension association. In the indirect case, the name of the CGI script is passed to an interpreter such as lua, perl or python. The basic cgi directive lets you associate a single pattern with a particular script. The directive can be repeated any reasonable number of times. Here is the basic syntax: For example: When a request such as https://example.com/report or https://example.com/report/weekly arrives, the cgi middleware will detect the match and invoke the script named /usr/local/cgi-bin/report. The current working directory will be the same as Caddy itself. Here, it is assumed that the script is self-contained, for example a pre-compiled CGI application or a shell script. Here is an example of a standalone script, similar to one used in the cgi plugin’s test suite: The environment variables PATH_INFO and QUERY_STRING are populated and passed to the script automatically. There are a number of other standard CGI variables included that are described below. If you need to pass any special environment variables or allow any environment variables that are part of Caddy’s process to pass to your script, you will need to use the advanced directive syntax described below. The values used for the script name and its arguments are subject to placeholder replacement. In addition to the standard Caddy placeholders such as {method} and {host}, the following placeholder substitutions are made: - {.} is replaced with Caddy’s current working directory - {match} is replaced with the portion of the request that satisfies the match directive - {root} is replaced with Caddy’s specified root directory You can include glob wildcards in your matches. Basically, an asterisk represents a sequence of zero or more non-slash characters and a question mark represents a single non-slash character. These wildcards can be used multiple times in a match expression. See the documentation for path.Match in the Go standard library for more details about glob matching. Here is an example directive: In this case, the cgi middleware will match requests such as https://example.com/report/weekly.lua and https://example.com/report/report.lua/weekly but not https://example.com/report.lua. The use of the asterisk expands to any character sequence within a directory. For example, if the request is made, the following command is executed: Note that the portion of the request that follows the match is not included. That information is conveyed to the script by means of environment variables. In this example, the Lua interpreter is invoked directly from Caddy, so the Lua script does not need the shebang that would be needed in a standalone script. This method facilitates the use of CGI on the Windows platform. In order to specify custom environment variables, pass along one or more environment variables known to Caddy, or specify more than one match pattern for a given rule, you will need to use the advanced directive syntax. That looks like this: For example, With the advanced syntax, the exec subdirective must appear exactly once. The match subdirective must appear at least once.
The env, pass_env, empty_env, and except subdirectives can appear any reasonable number of times. The pass_all_env and dir subdirectives may each appear once. The dir subdirective specifies the CGI executable’s working directory. If it is not specified, Caddy’s current working directory is used. The except subdirective uses the same pattern matching logic that is used with the match subdirective except that the request must match a rule fully; no request path prefix matching is performed. Any request that matches a match pattern is then checked with the patterns in except, if any. If any matches are made with the except pattern, the request is rejected and passed along to subsequent handlers. This is a convenient way to have static file resources served properly rather than being confused as CGI applications. The empty_env subdirective is used to pass one or more empty environment variables. Some CGI scripts may expect the server to pass certain empty variables rather than leaving them unset. This subdirective allows you to deal with those situations. The values associated with environment variable keys are all subject to placeholder substitution, just as with the script name and arguments. If your CGI application runs properly at the command line but fails to run from Caddy, it is possible that certain environment variables may be missing. For example, the ruby gem loader evidently requires the HOME environment variable to be set; you can do this with the subdirective pass_env HOME. Another class of problematic applications require the COMPUTERNAME variable. The pass_all_env subdirective instructs Caddy to pass each environment variable it knows about to the CGI executable. This addresses a common frustration that is caused when an executable requires an environment variable and fails without a descriptive error message when the variable cannot be found. These applications often run fine from the command prompt but fail when invoked with CGI. The risk with this subdirective is that a lot of server information is shared with the CGI executable. Use this subdirective only with CGI applications that you trust not to leak this information. If you protect your CGI application with the Caddy JWT middleware, your program will have access to the token’s payload claims by means of environment variables. For example, the following token claims will be available with the following environment variables: All values are conveyed as strings, so some conversion may be necessary in your program. No placeholder substitutions are made on these values. If you run into unexpected results with the CGI plugin, you are able to examine the environment in which your CGI application runs. To enter inspection mode, add the subdirective inspect to your CGI configuration block. This is a development option that should not be used in production. When in inspection mode, the plugin will respond to matching requests with a page that displays variables of interest. In particular, it will show the replacement value of {match} and the environment variables to which your CGI application has access. For example, consider this example CGI block: When you request a matching URL, for example, the Caddy server will deliver a text page similar to the following. The CGI application (in this case, wapptclsh) will not be called. This information can be used to diagnose problems with how a CGI application is called. To return to operation mode, remove or comment out the inspect subdirective.
In this example, the Caddyfile looks like this: Note that a request for /show gets mapped to a script named /usr/local/cgi-bin/report/gen. There is no need for any element of the script name to match any element of the match pattern.

The contents of /usr/local/cgi-bin/report/gen are: The purpose of this script is to show how request information gets communicated to a CGI script. Note that POST data must be read from standard input; in this particular case, posted data gets stored in the variable POST_DATA. Your script may use a different method to read POST content. Note also that the SCRIPT_EXEC variable is not a CGI standard; it is provided by this middleware and contains the entire command line, including all arguments, with which the CGI script was executed.

When a browser requests a matching URL, the response looks like: When a client makes a POST request, such as with the following command, the response looks the same except for the following lines:

The fossil distributed software management tool is a native executable that supports interaction as a CGI application. In this example, /usr/bin/fossil is the executable and /home/quixote/projects.fossil is the fossil repository. To configure Caddy to serve it, use a cgi directive something like this in your Caddyfile: In your /usr/local/cgi-bin directory, make a file named projects with the following single line: The fossil documentation calls this a command file. When fossil is invoked after a request to /projects, it examines the relevant environment variables and responds as a CGI application. If you protect /projects with basic HTTP authentication, you may wish to enable the ALLOW REMOTE_USER AUTHENTICATION option when setting up fossil. This lets fossil dispense with its own authentication, assuming it has an account for the user.

The agedu utility can be used to identify unused files that are taking up space on your storage media. Like fossil, it can be used in different modes, including CGI. First, use it from the command line to generate an index of a directory, for example: In your Caddyfile, include a directive that references the generated index: You will want to protect the /agedu resource with some sort of access control, for example HTTP Basic Authentication.

This small example demonstrates how to write a CGI program in Go. The use of a bytes.Buffer makes it easy to report the content length in the CGI header. When this program is compiled and installed as /usr/local/bin/servertime, the following directive in your Caddyfile will make it available (a sketch of such a program appears after the PHP example below):

The cgit application provides an attractive and useful web interface to git repositories. Here is how to run it with Caddy. After compiling cgit, you can place the executable somewhere out of Caddy’s document root; in this example, it is located in /usr/local/cgi-bin. A sample configuration file is included in the project’s cgitrc.5.txt file. You can use it as a starting point for your configuration. The default location for this file is /etc/cgitrc, but in this example the location /home/quixote/caddy/cgitrc is used. Note that changing the location of this file from its default will necessitate the inclusion of the environment variable CGIT_CONFIG in the Caddyfile cgi directive. When you edit the repository stanzas in this file, be sure each repo.path item refers to the .git directory within a working checkout. Here is an example stanza: Also, you will likely want to change cgit’s cache directory from its default in /var/cache (generally accessible only to root) to a location writeable by Caddy.
In this example, cgitrc contains a line that points the cache at a directory writeable by Caddy; you may need to create the cgit subdirectory. There are some static cgit resources (namely cgit.css, favicon.ico, and cgit.png) that will be accessed from Caddy’s document tree. For this example, these files are placed in a directory named cgit-resource. The following lines are part of the cgitrc file: Additionally, you will likely need to tweak the various file viewer filters such as source-filter and about-filter based on your system. The following Caddyfile directive will allow you to access the cgit application at /cgit:

Feeling reckless? You can run PHP in CGI mode. In general, FastCGI is the preferred method to run PHP if your application has many pages or a fair amount of database activity. But for small PHP programs that are seldom used, CGI can work fine. You’ll need the php-cgi interpreter for your platform. This may involve downloading the executable or downloading and then compiling the source code. For this example, assume the interpreter is installed as /usr/local/bin/php-cgi.

Additionally, because of the way PHP operates in CGI mode, you will need an intermediate script. This one works in POSIX environments: This script can be reused for multiple cgi directives. In this example, it is installed as /usr/local/cgi-bin/phpwrap. The argument following -c is your initialization file for PHP; in this example, it is named /home/quixote/.config/php/php-cgi.ini.

Two PHP files will be used for this example. The first, /usr/local/cgi-bin/sample/min.php, looks like this: The second, /usr/local/cgi-bin/sample/action.php, follows: The following directive in your Caddyfile will make the application available at sample/min.php:

This example demonstrates printing a CGI rule.
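The servertime program described earlier is not listed in this text. Here is a minimal sketch of what such a Go CGI program could look like, composing the body in a bytes.Buffer first so the Content-Length header can be reported, as the description suggests; the exact output format is an illustrative assumption.

    package main

    import (
        "bytes"
        "fmt"
        "os"
        "time"
    )

    func main() {
        // Compose the response body first so its length is known.
        var buf bytes.Buffer
        fmt.Fprintf(&buf, "Server time: %s\n", time.Now().Format(time.RFC1123))

        // Emit the CGI header, a blank line, then the body.
        fmt.Printf("Content-Type: text/plain\n")
        fmt.Printf("Content-Length: %d\n\n", buf.Len())
        buf.WriteTo(os.Stdout)
    }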
The veloci-filter (vfilter) library implements a generic SQL-like query language.

Overview::

There are many applications in which it is useful to provide a flexible query language for the end user. Velocifilter has the following design goals:

- It should be generic and easily adaptable to be used by any project.
- It should be fast and efficient.

An example makes the use case very clear. Suppose you are writing an archiving application. Most archiving tools require a list of files to be archived (e.g. on the command line). You launch your tool and a user requests a new flag that allows them to specify the files using a glob expression. For example, a user might wish to select only the files ending with the ".go" extension. While on a Unix system one might use shell expansion to support this, on other operating systems shell expansion may not work (e.g. on Windows). You then add the ability to specify a glob expression directly to your tool (suppose you add the flag --glob). A short while later, a user requires filtering the files to archive by their size - suppose they want to archive only files smaller than a certain size. You studiously add another set of flags (e.g. --size with a special syntax for greater-than or less-than semantics). Now a user wishes to be able to combine these conditions logically (e.g. all files with the ".go" extension newer than 5 days and smaller than 5kb).

Clearly this approach is limited: if we wanted to support every possible use case, our tool would need many flags with a complex syntax, making it harder for our users. One approach is to simply rely on the Unix "find" tool (with its many obscure flags) to solve the file selection problem. This is not ideal either, since the find tool may not be present on the system (e.g. on Windows) or may have varying syntax. It may also not support every possible condition the user has in mind (e.g. files containing a regexp or files not present in the archive). There has to be a better way. We wish to provide our users with a powerful and flexible way to specify which files to archive, but we do not want to write complicated logic and make our tool more complex to use.

This is where velocifilter comes in. By using the library we can provide a single flag where the user may specify a flexible VQL query (Velocidex Query Language - a simplified SQL dialect) allowing the user to specify arbitrarily complex filter expressions. For example:

    SELECT file from glob(pattern=["*.go", "*.py"]) where file.Size < 5000 and file.Mtime < now() - "5 days"

Not only does VQL allow for complex logical operators, it is also efficient and optimized automatically. For example, consider the following query:

    SELECT file from glob(pattern="*") where grep(file=file, pattern="foobar") and file.Size < 5k

The grep() function will open the file and search it for the pattern. If the file is large, this might take a long time. However, velocifilter will automatically abort the grep() function if the file size is larger than 5k bytes. Velocifilter correctly handles such cancellations automatically in order to reduce query evaluation latency.

Protocols - supporting custom types::

Velocifilter uses a plugin system to allow clients to define how their own custom types behave within the VQL evaluator. Note that this is necessary because Go does not allow an external package to add an interface to an existing type without creating a new type which embeds it.
Clients who need to handle the original third-party types must have a way to attach new protocols to existing types defined outside their own codebase. Velocifilter achieves this by implementing a registration system in the Scope{} object.

For example, consider a client of the library wishing to pass custom types in queries: Where both Foo and Bar are defined and produced by some other library which our client uses. Suppose our client wishes to allow addition of Foo objects. We would therefore need to implement the AddProtocol interface on Foo structs. Since Foo structs are defined externally, we cannot simply add a new method to the Foo struct (we could embed Foo in a new struct, but then we would also need to wrap the bar field to produce an extended Bar; this is typically impractical and not maintainable for heavily nested, complex structs). Instead, we define a FooAdder{} object which implements the Addition protocol on behalf of the Foo object. Now clients can add this protocol to the scope before evaluating a query:

    scope := NewScope().AddProtocolImpl(FooAdder{})
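The definitions of Foo, Bar, and FooAdder are not shown in this text. As a rough, hypothetical sketch only - the exact names and method signatures of vfilter's protocol interfaces may differ from what is shown here - a FooAdder might look something like this:

    // Foo and Bar stand in for types produced by a third-party library.
    type Bar struct{ Value int }
    type Foo struct{ bar Bar }

    // FooAdder implements the addition protocol on behalf of Foo.
    // (The interface shape below is assumed for illustration.)
    type FooAdder struct{}

    // Applicable reports whether this implementation handles both operands.
    func (FooAdder) Applicable(a Any, b Any) bool {
        _, aOK := a.(Foo)
        _, bOK := b.(Foo)
        return aOK && bOK
    }

    // Add combines two Foo values into a new Foo.
    func (FooAdder) Add(scope *Scope, a Any, b Any) Any {
        return Foo{bar: Bar{Value: a.(Foo).bar.Value + b.(Foo).bar.Value}}
    }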
Package styx serves network filesystems using the 9P2000 protocol. The styx package provides types and routines for implementing 9P servers. The files served may reflect real files on the host operating system, or an in-memory filesystem, or bi-directional RPC endpoints. Regardless, the protocol operations used to access these files are the same. The ListenAndServe and ListenAndServeTLS functions run 9P servers bound to a TCP port. To create a 9P server, define a type that implements the Serve9P method. The HandlerFunc type allows regular functions to be used: Multiple handlers can be overlaid using the Stack function. Handlers may pass data downstream using a message's WithContext method:
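The examples referred to above are not reproduced in this text. As a minimal sketch - assuming the aqwari.net/net/styx import path and the Session.Next/Request accessors from that package - a handler built from a plain function might look like this:

    package main

    import (
        "log"

        "aqwari.net/net/styx"
    )

    func main() {
        // HandlerFunc adapts an ordinary function to a type whose
        // Serve9P method serves a single 9P session.
        h := styx.HandlerFunc(func(s *styx.Session) {
            for s.Next() {
                // A real server would type-switch on s.Request()
                // and answer each protocol message appropriately.
                log.Printf("9P request: %T", s.Request())
            }
        })
        // 564 is the conventional 9P port.
        log.Fatal(styx.ListenAndServe(":564", h))
    }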
Package validator implements value validations for structs and individual fields based on tags. It can also handle Cross-Field and Cross-Struct validation for nested structs and has the ability to dive into arrays and maps of any type. See more examples at https://github.com/go-playground/validator/tree/master/_examples

Validator is designed to be thread-safe and used as a singleton instance. It caches information about your struct and validations, in essence only parsing your validation tags once per struct type. Using multiple instances neglects the benefit of caching. The functions that are not thread-safe are explicitly marked as such in the documentation.

Doing things this way is actually how the standard library does it as well; see the os.Open function here:

The authors return type "error" to avoid the issue discussed in the following, where err is always != nil:

Validator returns only InvalidValidationError for bad validation input, nil, or ValidationErrors as type error; so, in your code all you need to do is check if the error returned is not nil, and if it's not, check whether the error is InvalidValidationError (if necessary; most of the time it isn't) and type-cast it to ValidationErrors like so: err.(validator.ValidationErrors). A sketch of this pattern, together with custom validator registration, appears at the end of this section.

Custom Validation functions can be added. Example:

Cross-Field Validation can be done via the following tags: If, however, some custom cross-field validation is required, it can be done using a custom validation.

Why not just have cross-fields validation tags (i.e. only eqcsfield and not eqfield)? The reason is efficiency. If you want to check a field within the same struct, "eqfield" only has to find the field on the same struct (1 level). But if we used "eqcsfield" it could be multiple levels down. Example:

Multiple validators on a field will process in the order defined. Example:

Bad Validator definitions are not handled by the library. Example:

Baked-in Cross-Field validation only compares fields on the same struct. If Cross-Field + Cross-Struct validation is needed, you should implement your own custom validator.

Comma (",") is the default separator of validation tags. If you wish to have a comma included within the parameter (i.e. excludesall=,) you will need to use the UTF-8 hex representation 0x2C, which is replaced in the code as a comma, so the above will become excludesall=0x2C.

Pipe ("|") is the 'or' validation tag separator. If you wish to have a pipe included within the parameter (i.e. excludesall=|) you will need to use the UTF-8 hex representation 0x7C, which is replaced in the code as a pipe, so the above will become excludesall=0x7C.

Here is a list of the current built-in validators:

Tells the validation to skip this struct field; this is particularly handy in ignoring embedded structs from being validated. (Usage: -)

This is the 'or' operator allowing multiple validators to be used and accepted. (Usage: rgb|rgba) <-- this would allow either rgb or rgba colors to be accepted. This can also be combined with 'and' (Usage: omitempty,rgb|rgba).

When a field that is a nested struct is encountered and contains this flag, any validation on the nested struct will be run, but none of the nested struct fields will be validated. This is useful if inside of your program you know the struct will be valid, but need to verify it has been assigned. NOTE: only "required" and "omitempty" can be used on a struct itself.

Same as the structonly tag except that any struct level validations will not run.
Allows conditional validation. For example, if a field is not set with a value (determined by the "required" validator) then other validation such as min or max won't run, but if a value is set, validation will run.

Allows the validation to be skipped if the value is nil (the same as omitempty, but only for nil values).

This tells the validator to dive into a slice, array or map and validate that level of the slice, array or map with the validation tags that follow. Multidimensional nesting is also supported; each level you wish to dive will require another dive tag. dive has some sub-tags, 'keys' & 'endkeys', please see the Keys & EndKeys section just below. Example #1 Example #2

Keys & EndKeys

These are to be used together directly after the dive tag and tell the validator that anything between 'keys' and 'endkeys' applies to the keys of a map and not the values; think of it like the 'dive' tag, but for map keys instead of values. Multidimensional nesting is also supported; each level you wish to validate will require another 'keys' and 'endkeys' tag. These tags are only valid for maps. Example #1 Example #2

This validates that the value is not the data type's default zero value. For numbers, ensures value is not zero. For strings, ensures value is not "". For booleans, ensures value is not false. For slices, maps, pointers, interfaces, channels and functions, ensures the value is not nil. For structs, ensures value is not the zero value when using WithRequiredStructEnabled.

The field under validation must be present and not empty only if all the other specified fields are equal to the value following the specified field. For strings, ensures value is not "". For slices, maps, pointers, interfaces, channels and functions, ensures the value is not nil. For structs, ensures value is not the zero value. Examples:

The field under validation must be present and not empty unless all the other specified fields are equal to the value following the specified field. For strings, ensures value is not "". For slices, maps, pointers, interfaces, channels and functions, ensures the value is not nil. For structs, ensures value is not the zero value. Examples:

The field under validation must be present and not empty only if any of the other specified fields are present. For strings, ensures value is not "". For slices, maps, pointers, interfaces, channels and functions, ensures the value is not nil. For structs, ensures value is not the zero value. Examples:

The field under validation must be present and not empty only if all of the other specified fields are present. For strings, ensures value is not "". For slices, maps, pointers, interfaces, channels and functions, ensures the value is not nil. For structs, ensures value is not the zero value. Example:

The field under validation must be present and not empty only when any of the other specified fields are not present. For strings, ensures value is not "". For slices, maps, pointers, interfaces, channels and functions, ensures the value is not nil. For structs, ensures value is not the zero value. Examples:

The field under validation must be present and not empty only when all of the other specified fields are not present. For strings, ensures value is not "". For slices, maps, pointers, interfaces, channels and functions, ensures the value is not nil. For structs, ensures value is not the zero value. Example:

The field under validation must not be present or must be empty if all the other specified fields are equal to the value following the specified field.
For strings, ensures value is not "". For slices, maps, pointers, interfaces, channels and functions, ensures the value is not nil. For structs, ensures value is not the zero value. Examples:

The field under validation must not be present or must be empty unless all the other specified fields are equal to the value following the specified field. For strings, ensures value is not "". For slices, maps, pointers, interfaces, channels and functions, ensures the value is not nil. For structs, ensures value is not the zero value. Examples:

This validates that the value is the default value and is almost the opposite of required.

For numbers, length will ensure that the value is equal to the parameter given. For strings, it checks that the string length is exactly that number of characters. For slices, arrays, and maps, it validates the number of items. Example #1 Example #2 (time.Duration) For time.Duration, len will ensure that the value is equal to the duration given in the parameter.

For numbers, max will ensure that the value is less than or equal to the parameter given. For strings, it checks that the string length is at most that number of characters. For slices, arrays, and maps, it validates the number of items. Example #1 Example #2 (time.Duration) For time.Duration, max will ensure that the value is less than or equal to the duration given in the parameter.

For numbers, min will ensure that the value is greater than or equal to the parameter given. For strings, it checks that the string length is at least that number of characters. For slices, arrays, and maps, it validates the number of items. Example #1 Example #2 (time.Duration) For time.Duration, min will ensure that the value is greater than or equal to the duration given in the parameter.

For strings & numbers, eq will ensure that the value is equal to the parameter given. For slices, arrays, and maps, it validates the number of items. Example #1 Example #2 (time.Duration) For time.Duration, eq will ensure that the value is equal to the duration given in the parameter.

For strings & numbers, ne will ensure that the value is not equal to the parameter given. For slices, arrays, and maps, it validates the number of items. Example #1 Example #2 (time.Duration) For time.Duration, ne will ensure that the value is not equal to the duration given in the parameter.

For strings, ints, and uints, oneof will ensure that the value is one of the values in the parameter. The parameter should be a list of values separated by whitespace. Values may be strings or numbers. To match strings with spaces in them, include the target string between single quotes. Kind of like an 'enum'.

Works the same as oneof but is case-insensitive and therefore only accepts strings.

For numbers, this will ensure that the value is greater than the parameter given. For strings, it checks that the string length is greater than that number of characters. For slices, arrays and maps, it validates the number of items. Example #1 Example #2 (time.Time) For time.Time, ensures the time value is greater than time.Now.UTC(). Example #3 (time.Duration) For time.Duration, gt will ensure that the value is greater than the duration given in the parameter.

Same as 'min' above. Kept both to make terminology with 'len' easier. Example #1 Example #2 (time.Time) For time.Time, ensures the time value is greater than or equal to time.Now.UTC(). Example #3 (time.Duration) For time.Duration, gte will ensure that the value is greater than or equal to the duration given in the parameter.
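As an illustrative sketch of the length and bound tags described above - the struct and field names here are hypothetical, not taken from the original examples:

    import "time"

    type Form struct {
        PIN     string        `validate:"len=4"`          // exactly 4 characters
        Age     int           `validate:"min=18,max=120"` // numeric bounds
        Tags    []string      `validate:"max=5"`          // at most 5 items
        Timeout time.Duration `validate:"gte=1s,lte=1m"`  // duration bounds
    }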
For numbers, this will ensure that the value is less than the parameter given. For strings, it checks that the string length is less than that number of characters. For slices, arrays, and maps, it validates the number of items. Example #1 Example #2 (time.Time) For time.Time, ensures the time value is less than time.Now.UTC(). Example #3 (time.Duration) For time.Duration, lt will ensure that the value is less than the duration given in the parameter.

Same as 'max' above. Kept both to make terminology with 'len' easier. Example #1 Example #2 (time.Time) For time.Time, ensures the time value is less than or equal to time.Now.UTC(). Example #3 (time.Duration) For time.Duration, lte will ensure that the value is less than or equal to the duration given in the parameter.

This will validate the field value against another field's value either within a struct or passed in field. Example #1: Example #2:

Field Equals Another Field (relative) This does the same as eqfield except that it validates the field provided relative to the top level struct.

This will validate the field value against another field's value either within a struct or passed in field. Examples:

Field Does Not Equal Another Field (relative) This does the same as nefield except that it validates the field provided relative to the top level struct.

Only valid for Numbers, time.Duration and time.Time types, this will validate the field value against another field's value either within a struct or passed in field. Usage examples are for validation of a Start and End date (a sketch of this pattern appears at the end of this group of tags): Example #1: Example #2:

This does the same as gtfield except that it validates the field provided relative to the top level struct.

Only valid for Numbers, time.Duration and time.Time types, this will validate the field value against another field's value either within a struct or passed in field. Usage examples are for validation of a Start and End date: Example #1: Example #2:

This does the same as gtefield except that it validates the field provided relative to the top level struct.

Only valid for Numbers, time.Duration and time.Time types, this will validate the field value against another field's value either within a struct or passed in field. Usage examples are for validation of a Start and End date: Example #1: Example #2:

This does the same as ltfield except that it validates the field provided relative to the top level struct.

Only valid for Numbers, time.Duration and time.Time types, this will validate the field value against another field's value either within a struct or passed in field. Usage examples are for validation of a Start and End date: Example #1: Example #2:

This does the same as ltefield except that it validates the field provided relative to the top level struct.

This does the same as contains except for struct fields. It should only be used with string types. See the behavior of reflect.Value.String() for behavior on other types.

This does the same as excludes except for struct fields. It should only be used with string types. See the behavior of reflect.Value.String() for behavior on other types.

For arrays & slices, unique will ensure that there are no duplicates. For maps, unique will ensure that there are no duplicate values. For slices of struct, unique will ensure that there are no duplicate values in a field of the struct specified via a parameter.
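A sketch of the Start/End date pattern mentioned above; the struct and field names are illustrative:

    import "time"

    type DateRange struct {
        Start time.Time `validate:"required"`
        End   time.Time `validate:"required,gtfield=Start"` // End must be after Start
    }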
This validates that a string value contains ASCII alpha characters only.

This validates that a string value contains ASCII alphanumeric characters only.

This validates that a string value contains unicode alpha characters only.

This validates that a string value contains unicode alphanumeric characters only.

This validates that a string value can successfully be parsed into a boolean with strconv.ParseBool.

This validates that a string value contains number values only. For integers or floats it returns true.

This validates that a string value contains a basic numeric value; basic excludes exponents etc. For integers or floats it returns true.

This validates that a string value contains a valid hexadecimal.

This validates that a string value contains a valid hex color including the hashtag (#).

This validates that a string value contains only lowercase characters. An empty string is not a valid lowercase string.

This validates that a string value contains only uppercase characters. An empty string is not a valid uppercase string.

This validates that a string value contains a valid rgb color.

This validates that a string value contains a valid rgba color.

This validates that a string value contains a valid hsl color.

This validates that a string value contains a valid hsla color.

This validates that a string value contains a valid E.164 phone number, https://en.wikipedia.org/wiki/E.164 (e.g. +1123456789).

This validates that a string value contains a valid email. This may not conform to all possibilities of any rfc standard, but neither does any email provider accept all possibilities.

This validates that a string value is valid JSON.

This validates that a string value is a valid JWT.

This validates that a string value contains a valid file path and that the file exists on the machine. This is done using os.Stat, which is a platform independent function.

This validates that a string value contains a valid file path, that the file exists on the machine, and that it is an image. This is done using os.Stat and github.com/gabriel-vasile/mimetype.

This validates that a string value contains a valid file path but does not validate the existence of that file. This is done using os.Stat, which is a platform independent function.

This validates that a string value contains a valid url. This will accept any url the golang request uri accepts but must contain a schema, for example http:// or rtmp://.

This validates that a string value contains a valid uri. This will accept any uri the golang request uri accepts.

This validates that a string value contains a valid URN according to the RFC 2141 spec.

This validates that a string value contains a valid base32 value. Although an empty string is valid base32, this will report an empty string as an error; if you wish to accept an empty string as valid you can use this with the omitempty tag.

This validates that a string value contains a valid base64 value. Although an empty string is valid base64, this will report an empty string as an error; if you wish to accept an empty string as valid you can use this with the omitempty tag.

This validates that a string value contains a valid base64 URL safe value according to the RFC4648 spec. Although an empty string is a valid base64 URL safe value, this will report an empty string as an error; if you wish to accept an empty string as valid you can use this with the omitempty tag.

This validates that a string value contains a valid base64 URL safe value, but without = padding, according to the RFC4648 spec, section 3.2.
Although an empty string is a valid base64 URL safe value, this will report an empty string as an error; if you wish to accept an empty string as valid you can use this with the omitempty tag.

This validates that a string value contains a valid bitcoin address. The format of the string is checked to ensure it matches one of the standard formats (P2PKH, P2SH) and checksum validation is performed.

Bitcoin Bech32 Address (segwit)

This validates that a string value contains a valid bitcoin Bech32 address as defined by bip-0173 (https://github.com/bitcoin/bips/blob/master/bip-0173.mediawiki). Special thanks to Pieter Wuille for providing reference implementations.

This validates that a string value contains a valid ethereum address. The format of the string is checked to ensure it matches the standard Ethereum address format.

This validates that a string value contains the substring value.

This validates that a string value contains any Unicode code points in the substring value.

This validates that a string value contains the supplied rune value.

This validates that a string value does not contain the substring value.

This validates that a string value does not contain any Unicode code points in the substring value.

This validates that a string value does not contain the supplied rune value.

This validates that a string value starts with the supplied string value.

This validates that a string value ends with the supplied string value.

This validates that a string value does not start with the supplied string value.

This validates that a string value does not end with the supplied string value.

This validates that a string value contains a valid isbn10 or isbn13 value.

This validates that a string value contains a valid isbn10 value.

This validates that a string value contains a valid isbn13 value.

This validates that a string value contains a valid UUID. Uppercase UUID values will not pass - use `uuid_rfc4122` instead.

This validates that a string value contains a valid version 3 UUID. Uppercase UUID values will not pass - use `uuid3_rfc4122` instead.

This validates that a string value contains a valid version 4 UUID. Uppercase UUID values will not pass - use `uuid4_rfc4122` instead.

This validates that a string value contains a valid version 5 UUID. Uppercase UUID values will not pass - use `uuid5_rfc4122` instead.

This validates that a string value contains a valid ULID value.

This validates that a string value contains only ASCII characters. NOTE: if the string is blank, this validates as true.

This validates that a string value contains only printable ASCII characters. NOTE: if the string is blank, this validates as true.

This validates that a string value contains one or more multibyte characters. NOTE: if the string is blank, this validates as true.

This validates that a string value contains a valid DataURI. NOTE: this will also validate that the data portion is valid base64.

This validates that a string value contains a valid latitude.

This validates that a string value contains a valid longitude.

This validates that a string value contains a valid U.S. Social Security Number.

This validates that a string value contains a valid IP Address.

This validates that a string value contains a valid v4 IP Address.

This validates that a string value contains a valid v6 IP Address.

This validates that a string value contains a valid CIDR Address.

This validates that a string value contains a valid v4 CIDR Address.

This validates that a string value contains a valid v6 CIDR Address.
This validates that a string value contains a valid resolvable TCP Address.

This validates that a string value contains a valid resolvable v4 TCP Address.

This validates that a string value contains a valid resolvable v6 TCP Address.

This validates that a string value contains a valid resolvable UDP Address.

This validates that a string value contains a valid resolvable v4 UDP Address.

This validates that a string value contains a valid resolvable v6 UDP Address.

This validates that a string value contains a valid resolvable IP Address.

This validates that a string value contains a valid resolvable v4 IP Address.

This validates that a string value contains a valid resolvable v6 IP Address.

This validates that a string value contains a valid Unix Address.

This validates that a string value contains a valid MAC Address. Note: see Go's ParseMAC for accepted formats and types:

This validates that a string value is a valid Hostname according to RFC 952, https://tools.ietf.org/html/rfc952

This validates that a string value is a valid Hostname according to RFC 1123, https://tools.ietf.org/html/rfc1123

Fully Qualified Domain Name (FQDN)

This validates that a string value contains a valid FQDN.

This validates that a string value appears to be an HTML element tag, including those described at https://developer.mozilla.org/en-US/docs/Web/HTML/Element

This validates that a string value is a proper character reference in decimal or hexadecimal format.

This validates that a string value is percent-encoded (URL encoded) according to https://tools.ietf.org/html/rfc3986#section-2.1

This validates that a string value contains a valid directory and that it exists on the machine. This is done using os.Stat, which is a platform independent function.

This validates that a string value contains a valid directory but does not validate the existence of that directory. This is done using os.Stat, which is a platform independent function. It is safest to suffix the string with os.PathSeparator if the directory may not exist at the time of validation.

This validates that a string value contains a valid DNS hostname and port that can be used to validate fields typically passed to sockets and connections.

This validates that a string value is a valid datetime based on the supplied datetime format. The supplied format must match the official Go time format layout as documented in https://golang.org/pkg/time/

This validates that a string value is a valid country code based on the iso3166-1 alpha-2 standard. See: https://www.iso.org/iso-3166-country-codes.html

This validates that a string value is a valid country code based on the iso3166-1 alpha-3 standard. See: https://www.iso.org/iso-3166-country-codes.html

This validates that a string value is a valid country code based on the iso3166-1 alpha-numeric standard. See: https://www.iso.org/iso-3166-country-codes.html

This validates that a string value is a valid BCP 47 language tag, as parsed by language.Parse. More information on https://pkg.go.dev/golang.org/x/text/language

BIC (SWIFT code)

This validates that a string value is a valid Business Identifier Code (SWIFT code), defined in ISO 9362. More information on https://www.iso.org/standard/60390.html

This validates that a string value is a valid dns RFC 1035 label, defined in RFC 1035. More information on https://datatracker.ietf.org/doc/html/rfc1035

This validates that a string value is a valid time zone based on the time zone database present on the system.
Although the empty value and the Local value are allowed by the Go time.LoadLocation function, they are not allowed by this validator. More information on https://golang.org/pkg/time/#LoadLocation

This validates that a string value is a valid semver version, defined in Semantic Versioning 2.0.0. More information on https://semver.org/

This validates that a string value is a valid CVE id, as defined by CVE MITRE. More information on https://cve.mitre.org/

This validates that a string value contains a valid credit card number using the Luhn algorithm.

This validates that a string or (u)int value contains a valid checksum using the Luhn algorithm.

This validates that a string is a valid 24 character hexadecimal string or valid connection string. Example:

This validates that a string value contains a valid cron expression.

This validates that a string is valid for use with SpiceDb for the indicated purpose. If no purpose is given, a purpose of 'id' is assumed.

Alias Validators and Tags

NOTE: When returning an error, the tag returned in "FieldError" will be the alias tag unless the dive tag is part of the alias. Everything after the dive tag is not reported as the alias tag. Also, the "ActualTag" in the former case will be the actual tag within the alias that failed. Here is a list of the current built-in alias tags:

Validator notes:

A collection of validation rules that are frequently needed but are more complex than the ones found in the baked-in validators. A non-standard validator must be registered manually, as you would with your own custom validation functions. Example of registration and use: Here is a list of the current non-standard validators:

This package panics when bad input is provided; this is by design, and bad code like that should not make it to production.
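The registration and error-handling examples referred to throughout this section are not reproduced in this text. The following is a minimal, self-contained sketch of the patterns described above; the User struct and the "is-awesome" tag are illustrative assumptions.

    package main

    import (
        "fmt"

        "github.com/go-playground/validator/v10"
    )

    type User struct {
        Email    string `validate:"required,email"`
        Nickname string `validate:"is-awesome"` // custom tag registered below
    }

    func main() {
        validate := validator.New()

        // Register a custom validation function under a tag name.
        _ = validate.RegisterValidation("is-awesome", func(fl validator.FieldLevel) bool {
            return fl.Field().String() == "awesome"
        })

        err := validate.Struct(User{Email: "not-an-email", Nickname: "meh"})
        if err != nil {
            // Bad input to the validator itself; rarely needs checking.
            if _, ok := err.(*validator.InvalidValidationError); ok {
                fmt.Println(err)
                return
            }
            // Otherwise the error is a ValidationErrors value.
            for _, fe := range err.(validator.ValidationErrors) {
                fmt.Println(fe.Namespace(), fe.Tag())
            }
        }
    }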
btcd is a full-node bitcoin implementation written in Go. The default options are sane for most users. This means btcd will work 'out of the box' for most users. However, there are also a wide variety of flags that can be used to control it. The following section provides a usage overview which enumerates the flags. An interesting point to note is that the long form of all of these options (except -C) can be specified in a configuration file that is automatically parsed when btcd starts up. By default, the configuration file is located at ~/.btcd/btcd.conf on POSIX-style operating systems and %LOCALAPPDATA%\btcd\btcd.conf on Windows. The -C (--configfile) flag, as shown below, can be used to override this location. Usage: Application Options: Help Options: