Package centrifuge is a real-time messaging library that abstracts several bidirectional transports (Websocket and its emulation over HTTP-Streaming, SSE/EventSource) and provides primitives to build scalable real-time applications with Go. It is also used as the core of the Centrifugo server (https://github.com/centrifugal/centrifugo). The Centrifuge library provides several features on top of a plain Websocket implementation and comes with several client SDKs – see more details in the library README on GitHub – https://github.com/centrifugal/centrifuge. The API of this library is almost entirely goroutine-safe, except for one-time operations such as setting callback handlers; your code inside event handlers should also be synchronized, since event handlers can be called concurrently. The library expects that code inside event handlers will not block. See more information about client connection lifetime and event handler order/concurrency in the README on GitHub. Also check out the examples in the repo to see the main library concepts in action.
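A minimal sketch of wiring up a Node and a Websocket endpoint follows; exact Config fields and handler signatures vary between library versions, so treat this as an assumption-laden outline rather than the canonical setup:

    node, err := centrifuge.New(centrifuge.Config{})
    if err != nil {
        log.Fatal(err)
    }
    node.OnConnect(func(client *centrifuge.Client) {
        // Runs for every new connection; handlers may be called concurrently.
        log.Println("client connected")
    })
    if err := node.Run(); err != nil {
        log.Fatal(err)
    }
    http.Handle("/connection/websocket", centrifuge.NewWebsocketHandler(node, centrifuge.WebsocketConfig{}))
    log.Fatal(http.ListenAndServe(":8000", nil))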
A data pipeline processing engine. See the README for more complete examples and guides. Code Organization: The pipeline package provides an API for how nodes can be connected to form a pipeline. The individual implementations of each node exist in this kapacitor package. The reason for the separation is to keep the exported API from the pipeline package clean, as it is consumed via TICKscripts (a DSL for Kapacitor). Other Concepts: Stream vs Batch -- Use of the word 'stream' indicates data arrives one data point at a time. Use of the word 'batch' indicates data arrives in sets, or batches, of data points. Task -- A task represents a concrete workload to perform. It consists of a pipeline and an identifying name. Basic CRUD operations can be performed on tasks. Task Master -- Responsible for executing a task in a specific environment. Replay -- Replays static datasets against tasks.
Package capnp is a Cap'n Proto library for Go. https://capnproto.org/ Read the Getting Started guide for a tutorial on how to use this package. https://github.com/capnproto/go-capnproto2/wiki/Getting-Started capnpc-go provides the compiler backend for capnp. capnpc-go requires two annotations for all files: package and import. package is needed to know what package to place at the head of the generated file and what identifier to use when referring to the type from another package. import should be the fully qualified import path and is used to generate import statements from other packages and to detect when two types are in the same package. For example: For adding documentation comments to the generated code, there's the doc annotation. This annotation adds the comment to a struct, enum or field so that godoc will pick it up. For example: In Cap'n Proto, the unit of communication is a message. A message consists of one or more segments -- contiguous blocks of memory. This allows large messages to be split up and loaded independently or lazily. Typically you will use one segment per message. Logically, a message is organized in a tree of objects, with the root always being a struct (as opposed to a list or primitive). Messages can be read from and written to a stream. The Message and Segment types are the main types that application code will use from this package. The Message type has methods for marshaling and unmarshaling its segments to the wire format. If the application needs to read or write from a stream, it should use the Encoder and Decoder types. The type for a generic reference to a Cap'n Proto object is Ptr. A Ptr can refer to a struct, a list, or an interface. Ptr, Struct, List, and Interface (the pointer types) have value semantics and refer to data in a single segment. All of the pointer types have a notion of "valid". An invalid pointer will return the default value from any accessor and panic when any setter is called. In previous versions of this package, the Pointer interface was used instead of the Ptr struct. This interface and the functions that use it are now deprecated. See https://github.com/capnproto/go-capnproto2/wiki/New-Ptr-Type for details about this API change. Data accessors and setters (i.e. struct primitive fields and list elements) do not return errors, but pointer accessors and setters do. There are a few reasons that a read or write of a pointer can fail, but the most common are bad pointers or allocation failures. For accessors, an invalid object will be returned in case of an error. Since Go doesn't have generics, wrapper types provide type safety on lists. This package provides lists of basic types, and capnpc-go generates list wrappers for named types. However, if you need to use deeper nesting of lists (e.g. List(List(UInt8))), you will need to use a PointerList and wrap the elements. For the following schema: capnpc-go will generate: For each group, a typedef is created with a different method set for just the group's fields: generates the following: That way, the following may be used to access a field in a group: Note that group accessors just convert the type and so have no overhead. Named unions are treated as a group with an inner unnamed union. Unnamed unions generate an enum Type_Which and a corresponding Which() function: generates the following: Which() should be checked before using the getters, and the default case must always be handled. Setters for single values will set the union discriminator as well as set the value.
For voids in unions, there is a void setter that just sets the discriminator. For example: generates the following: Similarly, for groups in unions, there is a group setter that just sets the discriminator. This must be called before the group getter can be used to set values. For example: and in usage: capnpc-go generates enum values as constants. For example, in the capnp file: In the generated capnp.go file: In addition, an enum.String() function is generated that will convert the constants to a string for debugging or logging purposes. By default, the enum name is used as the tag value, but the tags can be customized with a $Go.tag or $Go.notag annotation. For example: In the generated go file: capnpc-go generates type-safe Client wrappers for interfaces. For parameter lists and result lists, structs are generated as described above with the names Interface_method_Params and Interface_method_Results, unless a single struct type is used. For example, for this interface: capnpc-go generates the following Go code (along with the structs Calculator_evaluate_Params and Calculator_evaluate_Results): capnpc-go also generates code to implement the interface: Since a single capability may want to implement many interfaces, you can use multiple *_Methods functions to build a single slice to send to NewServer. An example of combining the client/server code to communicate with a locally implemented Calculator: A note about message ordering: when implementing a server method, you are responsible for acknowledging delivery of a method call. Failure to do so can cause deadlocks. See the server.Ack function for more details.
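As a brief sketch of the message plumbing described above (the generated schema types are omitted; w and r stand for any io.Writer and io.Reader):

    // Create a message with a single segment backed by a fresh buffer.
    msg, seg, err := capnp.NewMessage(capnp.SingleSegment(nil))
    if err != nil {
        log.Fatal(err)
    }
    _ = seg // generated root-struct constructors take the segment
    // Write the message to a stream in the standard framing...
    if err := capnp.NewEncoder(w).Encode(msg); err != nil {
        log.Fatal(err)
    }
    // ...and read a message back from a stream.
    msg2, err := capnp.NewDecoder(r).Decode()
    if err != nil {
        log.Fatal(err)
    }
    _ = msg2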
Package stomp provides operations that allow communication with a message broker that supports the STOMP protocol. STOMP is the Streaming Text-Oriented Messaging Protocol. See http://stomp.github.com/ for more details. This package provides support for all STOMP protocol features in the STOMP protocol specifications, versions 1.0, 1.1 and 1.2. These features include protocol negotiation, heart-beating, value encoding, and graceful shutdown. Connecting to a STOMP server is achieved using the stomp.Dial function, or the stomp.Connect function. See the examples section for a summary of how to use these functions. Both functions return a stomp.Conn object for subsequent interaction with the STOMP server. Once a connection (stomp.Conn) is created, it can be used to send messages to the STOMP server, or to create subscriptions for receiving messages from the STOMP server. Transactions can be created to send multiple messages and/or acknowledge multiple received messages from the server in one atomic transaction. The examples section has examples of using subscriptions and transactions. The client program can instruct the stomp.Conn to gracefully disconnect from the STOMP server using the Disconnect method. This will perform a graceful shutdown sequence as specified in the STOMP specification. Source code and other details for the project are available at GitHub:
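A sketch of the connect/send/subscribe/disconnect cycle (the broker address and destination are placeholders):

    conn, err := stomp.Dial("tcp", "broker.example.com:61613")
    if err != nil {
        log.Fatal(err)
    }
    if err := conn.Send("/queue/test", "text/plain", []byte("hello")); err != nil {
        log.Fatal(err)
    }
    sub, err := conn.Subscribe("/queue/test", stomp.AckAuto)
    if err != nil {
        log.Fatal(err)
    }
    msg := <-sub.C // receive one message from the subscription channel
    log.Printf("received: %s", msg.Body)
    if err := conn.Disconnect(); err != nil {
        log.Fatal(err)
    }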
Package dcrjson provides infrastructure for working with Decred JSON-RPC APIs. When communicating via the JSON-RPC protocol, all requests and responses must be marshalled to and from the wire in the appropriate format. This package provides infrastructure and primitives to ease this process. This information is not necessary in order to use this package, but it does provide some intuition into what the marshalling and unmarshalling discussed below are doing under the hood. As defined by the JSON-RPC spec, there are effectively two forms of messages on the wire: Request Objects {"jsonrpc":"1.0","id":"SOMEID","method":"SOMEMETHOD","params":[SOMEPARAMS]} NOTE: Notifications are the same format except the id field is null. Response Objects {"result":SOMETHING,"error":null,"id":"SOMEID"} {"result":null,"error":{"code":SOMEINT,"message":SOMESTRING},"id":"SOMEID"} For requests, the params field can vary in what it contains depending on the method (a.k.a. command) being sent. Each parameter can be as simple as an int or a complex structure containing many nested fields. The id field is used to identify a request and will be included in the associated response. When working with streamed RPC transports, such as websockets, spontaneous notifications are also possible. As indicated, they are the same as a request object, except they have the id field set to null. Therefore, servers will ignore requests with the id field set to null, while clients can choose to consume or ignore them. Unfortunately, the original Bitcoin JSON-RPC API (and hence anything compatible with it) doesn't always follow the spec and will sometimes return an error string in the result field with a null error for certain commands. However, for the most part, the error field will be set as described on failure. To simplify the marshalling of the requests and responses, the MarshalCmd and MarshalResponse functions are provided. They return the raw bytes ready to be sent across the wire. Unmarshalling a received Request object is a two-step process: This approach is used since it provides the caller with access to the additional fields in the request that are not part of the command, such as the ID. Unmarshalling a received Response object is also a two-step process: As above, this approach is used since it provides the caller with access to the fields in the response such as the ID and Error. This package provides the NewCmd function, which takes a method (command) name and variable arguments. The function includes full checking to ensure the parameters are accurate according to the provided method; however, these checks are, obviously, performed at run time, which means any mistakes won't be found until the code is actually executed. However, it is quite useful for user-supplied commands that are intentionally dynamic. External packages can and should provide types implementing Command for use with MarshalCmd/ParseParams. The command handling of this package is built around the concept of registered commands. This is true for the wide variety of commands already provided by the package, but it also means callers can easily provide custom commands with all of the same functionality as the built-in commands. Use the RegisterCmd function for this purpose. A list of all registered methods can be obtained with the RegisteredCmdMethods function. All registered commands are registered with flags that identify information such as whether the command applies to a chain server, wallet server, or is a notification along with the method name to use.
These flags can be obtained with the MethodUsageFlags function, and the method can be obtained with the CmdMethod function. To facilitate providing consistent help to users of the RPC server, this package exposes the GenerateHelp function, which uses reflection on registered commands or notifications to generate the final help text. In addition, the MethodUsageText function is provided to generate consistent one-line usage for registered commands and notifications using reflection. There are two distinct types of errors supported by this package: The first category of errors (type Error) typically indicates a programmer error and can be avoided by properly using the API. Errors of this type will be returned from the various functions available in this package. They identify issues such as unsupported field types, attempts to register malformed commands, and attempting to create a new command with an improper number of parameters. The specific reason for the error can be detected by type asserting it to a *dcrjson.Error and accessing the ErrorCode field. The second category of errors (type RPCError), on the other hand, are useful for returning errors to RPC clients. Consequently, they are used in the previously described Response type. This example demonstrates how to unmarshal a JSON-RPC response and then unmarshal the result field in the response to a concrete type.
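A sketch of that two-step response unmarshalling (rawResponse is the []byte read off the wire; the getblockcount method and int64 result are illustrative):

    // Step 1: unmarshal the envelope to get at the ID, Error and raw result.
    var resp dcrjson.Response
    if err := json.Unmarshal(rawResponse, &resp); err != nil {
        log.Fatal(err)
    }
    if resp.Error != nil {
        log.Fatal(resp.Error) // an RPCError returned by the server
    }
    // Step 2: unmarshal the result into the concrete type for the method,
    // e.g. an int64 for getblockcount.
    var blockCount int64
    if err := json.Unmarshal(resp.Result, &blockCount); err != nil {
        log.Fatal(err)
    }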
Package goavro is a library that encodes and decodes Avro data. It provides an interface to encode data directly to io.Writer streams, and to decode data from io.Reader streams. Goavro fully adheres to version 1.7.7 of the Avro specification and data encoding.
Package s3gof3r provides fast, parallelized, streaming access to Amazon S3. It includes a command-line interface: `gof3r`.
Package digest provides a generalized type to opaquely represent message digests and their operations within the registry. The Digest type is designed to serve as a flexible identifier in a content-addressable system. More importantly, it provides tools and wrappers to work with hash.Hash-based digests with little effort. The format of a digest is simply a string with two parts, dubbed the "algorithm" and the "digest", separated by a colon: An example of a sha256 digest representation follows: The "algorithm" portion defines both the hashing algorithm used to calculate the digest and the encoding of the resulting digest, which defaults to "hex" if not otherwise specified. Currently, all supported algorithms have their digests encoded in hex strings. In the example above, the string "sha256" is the algorithm and the hex bytes are the "digest". Because the Digest type is simply a string, once a valid Digest is obtained, comparisons are cheap, quick and simple to express with the standard equality operator. The main benefit of using the Digest type is simple verification against a given digest. The Verifier interface, modeled after the stdlib hash.Hash interface, provides a common write sink for digest verification. After writing is complete, calling the Verifier.Verified method will indicate whether or not the stream of bytes matches the target digest. In addition to the above, we intend to add the following features to this package: 1. A Digester type that supports write sink digest calculation. 2. Suspend and resume of ongoing digest calculations to support efficient digest verification in the registry.
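Because a Digest is just a string, a sketch of computing and comparing one needs little more than the standard library (constructor helpers exist in some versions of this package; the manual construction below leans only on the format described above):

    sum := sha256.Sum256(content)
    dgst := digest.Digest("sha256:" + hex.EncodeToString(sum[:]))
    if dgst == expected {
        // content verified against the target digest via plain string equality
    }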
Package sqlite provides a Go interface to SQLite 3. The semantics of this package are deliberately close to the SQLite3 C API, so it is helpful to be familiar with http://www.sqlite.org/c3ref/intro.html. An SQLite connection is represented by a *sqlite.Conn. Connections cannot be used concurrently. A typical Go program will create a pool of connections (using Open to create a *sqlitex.Pool) so goroutines can borrow a connection while they need to talk to the database. This package assumes SQLite will be used concurrently by the process through several connections, so the build options for SQLite enable multi-threading and the shared cache: https://www.sqlite.org/sharedcache.html The implementation automatically handles shared cache locking; see the documentation on Stmt.Step for details. The optional SQLite3 extensions compiled in are: FTS5, RTree, JSON1, Session, GeoPoly. This is not a database/sql driver. Statements are prepared with the Prepare and PrepareTransient methods. When using Prepare, statements are keyed inside a connection by the original query string used to create them. This means long-running high-performance code paths can write: After all the connections in a pool have been warmed up by passing through one of these Prepare calls, subsequent calls are simply a map lookup that returns an existing statement. The sqlite package supports the SQLite incremental I/O interface for streaming blob data into and out of the database without loading the entire blob into a single []byte. (This is important when working either with very large blobs, or more commonly, a large number of moderate-sized blobs concurrently.) To write a blob, first use an INSERT statement to set the size of the blob and assign a rowid: Use BindZeroBlob or SetZeroBlob to set the size of myblob. Then you can open the blob with: Every connection can have a done channel associated with it using the SetInterrupt method. This is typically the channel returned by a context.Context Done method. For example, a timeout can be associated with a connection session: As database connections are long-lived, the SetInterrupt method can be called multiple times to reset the associated lifetime. When using pools, the shorthand for associating a context with a connection is: SQLite transactions have to be managed manually with this package by directly calling BEGIN / COMMIT / ROLLBACK or SAVEPOINT / RELEASE / ROLLBACK. The sqlitex package has a Savepoint function that helps automate this. Using a Pool to execute SQL in a concurrent HTTP handler. For helper functions that make some kinds of statements easier to write, see the sqlitex package.
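A sketch of the pool/Prep pattern; exact signatures (for instance, whether Pool.Get takes a done channel or a context.Context) differ between versions of this package:

    pool, err := sqlitex.Open("data.db", 0, 10)
    if err != nil {
        log.Fatal(err)
    }
    conn := pool.Get(nil) // borrow a connection; nil means no interrupt channel
    defer pool.Put(conn)
    // Prep caches statements per connection, keyed by the query string.
    stmt := conn.Prep("SELECT name FROM users WHERE id = $id;")
    stmt.SetInt64("$id", 1)
    for {
        hasRow, err := stmt.Step()
        if err != nil {
            log.Fatal(err)
        }
        if !hasRow {
            break
        }
        log.Println(stmt.GetText("name"))
    }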
Package portaudio provides Go bindings for the PortAudio library. For the most part, these bindings parallel the underlying PortAudio API; please refer to http://www.portaudio.com/docs.html for details. Differences introduced by the bindings are documented here: Instead of passing a flag to OpenStream, audio sample formats are inferred from the signature of the stream callback or, for a blocking stream, from the types of the buffers. See the StreamCallback and Buffer types for details. Blocking I/O: Read and Write do not accept buffer arguments; instead they use the buffers (or pointers to buffers) provided to OpenStream. The number of samples to read or write is determined by the size of the buffers. The StreamParameters struct combines parameters for both the input and the output device as well as the sample rate, buffer size, and flags.
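A sketch of the callback style, where the []float32 parameter is what fixes the sample format (the one-second sleep is just to let the stream run):

    if err := portaudio.Initialize(); err != nil {
        log.Fatal(err)
    }
    defer portaudio.Terminate()
    // Mono output at 44.1 kHz, 64 frames per buffer.
    stream, err := portaudio.OpenDefaultStream(0, 1, 44100, 64, func(out []float32) {
        for i := range out {
            out[i] = 0 // fill with silence (or synthesized samples)
        }
    })
    if err != nil {
        log.Fatal(err)
    }
    defer stream.Close()
    if err := stream.Start(); err != nil {
        log.Fatal(err)
    }
    time.Sleep(time.Second)
    stream.Stop()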
Package rekognition provides the API client, operations, and parameter types for Amazon Rekognition. This is the API Reference for Amazon Rekognition Image, Amazon Rekognition Custom Labels, Amazon Rekognition Stored Video, and Amazon Rekognition Streaming Video. It provides descriptions of actions, data types, common parameters, and common errors.

Amazon Rekognition Image: AssociateFaces, CompareFaces, CreateCollection, CreateUser, DeleteCollection, DeleteFaces, DeleteUser, DescribeCollection, DetectFaces, DetectLabels, DetectModerationLabels, DetectProtectiveEquipment, DetectText, DisassociateFaces, GetCelebrityInfo, GetMediaAnalysisJob, IndexFaces, ListCollections, ListMediaAnalysisJobs, ListFaces, ListUsers, RecognizeCelebrities, SearchFaces, SearchFacesByImage, SearchUsers, SearchUsersByImage, StartMediaAnalysisJob

Amazon Rekognition Custom Labels: CopyProjectVersion, CreateDataset, CreateProject, CreateProjectVersion, DeleteDataset, DeleteProject, DeleteProjectPolicy, DeleteProjectVersion, DescribeDataset, DescribeProjects, DescribeProjectVersions, DetectCustomLabels, DistributeDatasetEntries, ListDatasetEntries, ListDatasetLabels, ListProjectPolicies, PutProjectPolicy, StartProjectVersion, StopProjectVersion, UpdateDatasetEntries

Amazon Rekognition Stored Video: GetCelebrityRecognition, GetContentModeration, GetFaceDetection, GetFaceSearch, GetLabelDetection, GetPersonTracking, GetSegmentDetection, GetTextDetection, StartCelebrityRecognition, StartContentModeration, StartFaceDetection, StartFaceSearch, StartLabelDetection, StartPersonTracking, StartSegmentDetection, StartTextDetection

Amazon Rekognition Streaming Video: CreateStreamProcessor, DeleteStreamProcessor, DescribeStreamProcessor, ListStreamProcessors, StartStreamProcessor, StopStreamProcessor, UpdateStreamProcessor
Package capnp is a Cap'n Proto library for Go. https://capnproto.org/ Read the Getting Started guide for a tutorial on how to use this package. https://github.com/capnproto/go-capnproto2/wiki/Getting-Started capnpc-go provides the compiler backend for capnp. capnpc-go requires two annotations for all files: package and import. package is needed to know what package to place at the head of the generated file and what identifier to use when referring to the type from another package. import should be the fully qualified import path and is used to generate import statements from other packages and to detect when two types are in the same package. For example: For adding documentation comments to the generated code, there's the doc annotation. This annotation adds the comment to a struct, enum or field so that godoc will pick it up. For example: In Cap'n Proto, the unit of communication is a message. A message consists of one or more segments -- contiguous blocks of memory. This allows large messages to be split up and loaded independently or lazily. Typically you will use one segment per message. Logically, a message is organized in a tree of objects, with the root always being a struct (as opposed to a list or primitive). Messages can be read from and written to a stream. The Message and Segment types are the main types that application code will use from this package. The Message type has methods for marshaling and unmarshaling its segments to the wire format. If the application needs to read or write from a stream, it should use the Encoder and Decoder types. The type for a generic reference to a Cap'n Proto object is Ptr. A Ptr can refer to a struct, a list, or an interface. Ptr, Struct, List, and Interface (the pointer types) have value semantics and refer to data in a single segment. All of the pointer types have a notion of "valid". An invalid pointer will return the default value from any accessor and panic when any setter is called. In previous versions of this package, the Pointer interface was used instead of the Ptr struct. This interface and the functions that use it are now deprecated. See https://github.com/capnproto/go-capnproto2/wiki/New-Ptr-Type for details about this API change. Data accessors and setters (i.e. struct primitive fields and list elements) do not return errors, but pointer accessors and setters do. There are a few reasons that a read or write of a pointer can fail, but the most common are bad pointers or allocation failures. For accessors, an invalid object will be returned in case of an error. Since Go doesn't have generics, wrapper types provide type safety on lists. This package provides lists of basic types, and capnpc-go generates list wrappers for named types. However, if you need to use deeper nesting of lists (e.g. List(List(UInt8))), you will need to use a PointerList and wrap the elements. For the following schema: capnpc-go will generate: For each group, a typedef is created with a different method set for just the group's fields: generates the following: That way, the following may be used to access a field in a group: Note that group accessors just convert the type and so have no overhead. Named unions are treated as a group with an inner unnamed union. Unnamed unions generate an enum Type_Which and a corresponding Which() function: generates the following: Which() should be checked before using the getters, and the default case must always be handled. Setters for single values will set the union discriminator as well as set the value.
For voids in unions, there is a void setter that just sets the discriminator. For example: generates the following: Similarly, for groups in unions, there is a group setter that just sets the discriminator. This must be called before the group getter can be used to set values. For example: and in usage: capnpc-go generates enum values as constants. For example, in the capnp file: In the generated capnp.go file: In addition, an enum.String() function is generated that will convert the constants to a string for debugging or logging purposes. By default, the enum name is used as the tag value, but the tags can be customized with a $Go.tag or $Go.notag annotation. For example: In the generated go file: capnpc-go generates type-safe Client wrappers for interfaces. For parameter lists and result lists, structs are generated as described above with the names Interface_method_Params and Interface_method_Results, unless a single struct type is used. For example, for this interface: capnpc-go generates the following Go code (along with the structs Calculator_evaluate_Params and Calculator_evaluate_Results): capnpc-go also generates code to implement the interface: Since a single capability may want to implement many interfaces, you can use multiple *_Methods functions to build a single slice to send to NewServer. An example of combining the client/server code to communicate with a locally implemented Calculator: A note about message ordering: by default, only one method per server will be invoked at a time; when implementing a server method that blocks or takes a long time, call the server.Go function to unblock future calls.
Package shlex implements a simple lexer which splits input into tokens using shell-style rules for quoting and commenting. The basic use case uses the default ASCII lexer to split a string into sub-strings: To process a stream of strings: To access the raw token stream (which includes tokens for comments):
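For instance, the basic Split use case looks like this (a sketch; the input string is illustrative):

    fields, err := shlex.Split(`cp "my file.txt" /tmp`)
    if err != nil {
        log.Fatal(err)
    }
    // fields: ["cp", "my file.txt", "/tmp"] — the quoted argument stays intact
    fmt.Println(fields)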
Package vellum is a library for building, serializing and executing an FST (finite state transducer). There are two distinct phases: building an FST and using it. When building an FST, you insert keys ([]byte) and their associated value (uint64). Insert operations MUST be done in lexicographic order. While building the FST, data is streamed to an underlying Writer. At the conclusion of building, you MUST call Close() on the builder. After completion of the build phase, you can Open() the FST if you serialized it to disk, or, if you already have the bytes in memory, you can use Load(). By default, Open() will use mmap to avoid loading the entire file into memory. Once the FST is ready, you can use the Contains() method to see if a key is in the FST. You can use the Get() method to see if a key is in the FST and retrieve its associated value. And you can use the Iterator method to enumerate key/value pairs within a specified range.
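A sketch of both phases, building into memory and then loading (note the lexicographic insert order):

    var buf bytes.Buffer
    builder, err := vellum.New(&buf, nil)
    if err != nil {
        log.Fatal(err)
    }
    // Keys MUST be inserted in lexicographic order.
    builder.Insert([]byte("cat"), 1)
    builder.Insert([]byte("dog"), 2)
    if err := builder.Close(); err != nil {
        log.Fatal(err)
    }
    fst, err := vellum.Load(buf.Bytes())
    if err != nil {
        log.Fatal(err)
    }
    val, exists, err := fst.Get([]byte("dog"))
    fmt.Println(val, exists, err) // 2 true <nil>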
Package sdk provides REST and Stream clients for Tinkoff Invest OpenAPI. More documentation is available at https://api-invest.tinkoff.ru/openapi/docs/.
Package dcrjson provides infrastructure for working with Decred JSON-RPC APIs. When communicating via the JSON-RPC protocol, all requests and responses must be marshalled to and from the wire in the appropriate format. This package provides infrastructure and primitives to ease this process. This information is not necessary in order to use this package, but it does provide some intuition into what the marshalling and unmarshalling discussed below are doing under the hood. As defined by the JSON-RPC spec, there are effectively two forms of messages on the wire: Request Objects {"jsonrpc":"1.0","id":"SOMEID","method":"SOMEMETHOD","params":[SOMEPARAMS]} NOTE: Notifications are the same format except the id field is null. Response Objects {"result":SOMETHING,"error":null,"id":"SOMEID"} {"result":null,"error":{"code":SOMEINT,"message":SOMESTRING},"id":"SOMEID"} For requests, the params field can vary in what it contains depending on the method (a.k.a. command) being sent. Each parameter can be as simple as an int or a complex structure containing many nested fields. The id field is used to identify a request and will be included in the associated response. When working with streamed RPC transports, such as websockets, spontaneous notifications are also possible. As indicated, they are the same as a request object, except they have the id field set to null. Therefore, servers will ignore requests with the id field set to null, while clients can choose to consume or ignore them. Unfortunately, the original Bitcoin JSON-RPC API (and hence anything compatible with it) doesn't always follow the spec and will sometimes return an error string in the result field with a null error for certain commands. However, for the most part, the error field will be set as described on failure. To simplify the marshalling of the requests and responses, the MarshalCmd and MarshalResponse functions are provided. They return the raw bytes ready to be sent across the wire. Unmarshalling a received Request object is a two-step process: This approach is used since it provides the caller with access to the additional fields in the request that are not part of the command, such as the ID. Unmarshalling a received Response object is also a two-step process: As above, this approach is used since it provides the caller with access to the fields in the response such as the ID and Error. This package provides the NewCmd function, which takes a method (command) name and variable arguments. The function includes full checking to ensure the parameters are accurate according to the provided method; however, these checks are, obviously, performed at run time, which means any mistakes won't be found until the code is actually executed. However, it is quite useful for user-supplied commands that are intentionally dynamic. External packages can and should provide types implementing Command for use with MarshalCmd/ParseParams. The command handling of this package is built around the concept of registered commands. This is true for the wide variety of commands already provided by the package, but it also means callers can easily provide custom commands with all of the same functionality as the built-in commands. Use the RegisterCmd function for this purpose. A list of all registered methods can be obtained with the RegisteredCmdMethods function. All registered commands are registered with flags that identify information such as whether the command applies to a chain server, wallet server, or is a notification along with the method name to use.
These flags can be obtained with the MethodUsageFlags function, and the method can be obtained with the CmdMethod function. To facilitate providing consistent help to users of the RPC server, this package exposes the GenerateHelp function, which uses reflection on registered commands or notifications to generate the final help text. In addition, the MethodUsageText function is provided to generate consistent one-line usage for registered commands and notifications using reflection. There are two distinct types of errors supported by this package: The first category of errors (type Error) typically indicates a programmer error and can be avoided by properly using the API. Errors of this type will be returned from the various functions available in this package. They identify issues such as unsupported field types, attempts to register malformed commands, and attempting to create a new command with an improper number of parameters. The specific reason for the error can be detected by type asserting it to a *dcrjson.Error and accessing the ErrorKind field. The second category of errors (type RPCError), on the other hand, are useful for returning errors to RPC clients. Consequently, they are used in the previously described Response type. This example demonstrates how to unmarshal a JSON-RPC response and then unmarshal the result field in the response to a concrete type.
Package dicom provides a set of tools to read, write, and generally work with DICOM (https://dicom.nema.org/) medical image files in Go. dicom.Parse and dicom.Write provide the core functionality to read and write DICOM Datasets. This package provides Go data structures that represent DICOM concepts (for example, dicom.Dataset and dicom.Element). These structures will pretty-print by default and are JSON serializable out of the box. This package provides some advanced functionality as well, including: streaming image frames to an output channel, reading elements one-by-one (like an iterator pattern), flat iteration over nested elements in a Dataset, and more. General usage is simple. Check out the package examples below and some function specific examples. It may also be helpful to take a look at the example cmd/dicomutil program, which is a CLI built around this library to save out image frames from DICOMs and print out metadata to STDOUT.
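A sketch of basic parsing (the file name is a placeholder; ParseFile's second argument is an optional channel for streamed image frames):

    dataset, err := dicom.ParseFile("sample.dcm", nil)
    if err != nil {
        log.Fatal(err)
    }
    for _, elem := range dataset.Elements {
        fmt.Println(elem) // Elements pretty-print by default
    }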
Package kinesisvideo provides the API client, operations, and parameter types for Amazon Kinesis Video Streams.
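The client follows the standard aws-sdk-go-v2 pattern; a sketch:

    cfg, err := config.LoadDefaultConfig(context.TODO())
    if err != nil {
        log.Fatal(err)
    }
    client := kinesisvideo.NewFromConfig(cfg)
    out, err := client.ListStreams(context.TODO(), &kinesisvideo.ListStreamsInput{})
    if err != nil {
        log.Fatal(err)
    }
    for _, s := range out.StreamInfoList {
        fmt.Println(aws.ToString(s.StreamName))
    }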
Package eventsource implements a client and server to allow streaming data one-way over an HTTP connection using the Server-Sent Events API http://dev.w3.org/html5/eventsource/ The client and server respect the Last-Event-ID header. If the Repository interface is implemented on the server, events can be replayed in case of a network disconnection.
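A client-side sketch (the URL is a placeholder; Subscribe's second argument is the Last-Event-ID to resume from, empty here):

    stream, err := eventsource.Subscribe("http://example.com/events", "")
    if err != nil {
        log.Fatal(err)
    }
    for ev := range stream.Events {
        fmt.Println(ev.Id(), ev.Event(), ev.Data())
    }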
Package smux is a multiplexing library for Go. It relies on an underlying connection, such as TCP or KCP, to provide reliability and ordering, and provides stream-oriented multiplexing over a single channel.
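A sketch of both ends, multiplexed over a single connection (conn is assumed to be an established net.Conn):

    // Client side: wrap the connection and open a stream.
    session, err := smux.Client(conn, nil) // nil selects the default config
    if err != nil {
        log.Fatal(err)
    }
    stream, err := session.OpenStream()
    if err != nil {
        log.Fatal(err)
    }
    stream.Write([]byte("ping"))

    // Server side, on its own connection:
    //   session, err := smux.Server(conn, nil)
    //   stream, err := session.AcceptStream()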
Package stan is a Go client for the NATS Streaming messaging system (https://nats.io).
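A sketch of connecting, publishing, and subscribing (the cluster and client IDs are placeholders):

    sc, err := stan.Connect("test-cluster", "client-123")
    if err != nil {
        log.Fatal(err)
    }
    defer sc.Close()
    if err := sc.Publish("foo", []byte("hello")); err != nil {
        log.Fatal(err)
    }
    sub, err := sc.Subscribe("foo", func(m *stan.Msg) {
        log.Printf("received: %s", m.Data)
    }, stan.DeliverAllAvailable())
    if err != nil {
        log.Fatal(err)
    }
    defer sub.Unsubscribe()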
Package stan implements a CloudEvents transport using NATS Streaming.
Package saltpack is an implementation of the saltpack message format. Saltpack is a light wrapper around Dan Bernstein's famous NaCl library. It adds support for longer messages, streaming input and output of data, multiple recipients for encrypted messages, and a reasonable armoring format. We intend Saltpack as a replacement for the PGP messaging format, as it can be used in many of the same circumstances. However, it is designed to be: (1) simpler; (2) easier to implement; (3) judicious (perhaps judgmental) in its crypto usage; (4) fully modern (no CFB mode here); (5) high performance; (6) less bug-prone; (7) generally unwilling to output unauthenticated data; and (8) easier to compose with other software in any manner of languages or platforms. Saltpack makes no attempt to manage keys. We assume the wrapping application has a story for key management. Saltpack supports three modes of operation: encrypted messages, attached signatures, and detached signatures. Encrypted messages use NaCl's authenticated public-key encryption; we add repudiable authentication. An attached signature contains a message and a signature that authenticates it. A detached signature contains just the signature, and assumes an independent delivery mechanism for the file (this might come up when distributing an ISO and a separate signature of the file). Saltpack has two encoding modes: binary and armored. In armored mode, saltpack outputs in Base62 encoding, suitable for publication into any manner of Web settings without fear of markup-caused mangling. This saltpack library implementation supports two API patterns: streaming and all-at-once. The former is useful for large files that can't fit into memory; the latter is more convenient. Both produce the same output. See https://saltpack.org
Package kinesisanalyticsv2 provides the API client, operations, and parameter types for Amazon Kinesis Analytics. Amazon Managed Service for Apache Flink was previously known as Amazon Kinesis Data Analytics for Apache Flink. Amazon Managed Service for Apache Flink is a fully managed service that you can use to process and analyze streaming data using Java, Python, SQL, or Scala. The service enables you to quickly author and run Java, SQL, or Scala code against streaming sources to perform time series analytics, feed real-time dashboards, and create real-time metrics.
Package yarpc provides the YARPC service framework. With hundreds to thousands of services communicating with RPC, transport protocols (like HTTP and TChannel), encoding protocols (like JSON or Thrift), and peer choosers are the concepts that vary year over year. Separating these concerns allows services to change transports and wire protocols without changing call sites or request handlers, build proxies and wire protocol bridges, or experiment with load balancing strategies. YARPC is a toolkit for services and proxies. YARPC breaks RPC into interchangeable encodings, transports, and peer choosers. YARPC for Go provides reference implementations for HTTP/1.1, TChannel and gRPC transports, and also raw, JSON, Thrift, and Protobuf encodings. YARPC for Go provides a round robin peer chooser and experimental implementations for debug pages and rate limiting. YARPC for Go plans to provide a load balancer that uses a least-pending-requests strategy. Peer choosers can implement any strategy, including load balancing or sharding, in turn bound to any peer list updater. Regardless of transport, every RPC has some common properties: caller name, service name, procedure name, encoding name, deadline or TTL, headers, baggage (multi-hop headers), and tracing. Each RPC can also have an optional shard key, routing key, or routing delegate for advanced routing. YARPC transports use a shared API for capturing RPC metadata, so middleware can apply to requests over any transport. Each YARPC transport protocol can implement inbound handlers and outbound callers. Each of these can support different RPC types, like unary (request and response) or oneway (request and receipt) RPC. A future release of YARPC will add support for other RPC types including variations on streaming and pubsub.
Package rtcp implements encoding and decoding of RTCP packets according to RFCs 3550 and 5506. RTCP is a sister protocol of the Real-time Transport Protocol (RTP). Its basic functionality and packet structure are defined in RFC 3550. RTCP provides out-of-band statistics and control information for an RTP session. It partners with RTP in the delivery and packaging of multimedia data, but does not transport any media data itself. The primary function of RTCP is to provide feedback on the quality of service (QoS) in media distribution by periodically sending statistics information such as transmitted octet and packet counts, packet loss, packet delay variation, and round-trip delay time to participants in a streaming multimedia session. An application may use this information to control quality of service parameters, perhaps by limiting flow, or using a different codec. Decoding RTCP packets: Encoding RTCP packets:
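A sketch of the round trip (raw is a received compound RTCP datagram; the SSRC value is illustrative):

    // Decoding: a compound datagram may carry several packets.
    packets, err := rtcp.Unmarshal(raw)
    if err != nil {
        log.Fatal(err)
    }
    for _, p := range packets {
        switch pkt := p.(type) {
        case *rtcp.ReceiverReport:
            log.Printf("receiver report from SSRC %d", pkt.SSRC)
        }
    }
    // Encoding:
    buf, err := rtcp.Marshal([]rtcp.Packet{&rtcp.ReceiverReport{SSRC: 0x902f9e2e}})
    if err != nil {
        log.Fatal(err)
    }
    _ = buf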
Package grpcreplay supports the capture and replay of gRPC calls. Its main goal is to improve testing. Once you capture the calls of a test that runs against a real service, you have an "automatic mock" that can be replayed against the same test, yielding a unit test that is fast and flake-free. To record a sequence of gRPC calls to a file, create a Recorder and pass its DialOptions to grpc.Dial: It is essential to close the Recorder when the interaction is finished. There is also a NewRecorderWriter function for capturing to an arbitrary io.Writer. To replay a captured file, create a Replayer and ask it for a (fake) connection. We don't actually have to dial a server. (Since we're reading the file and not writing it, we don't have to be as careful about the error returned from Close.) A test might use random or time-sensitive values, for instance to create unique resources for isolation from other tests. The test therefore has initial values, such as the current time, or a random seed, that differ from run to run. You must record this initial state and re-establish it on replay. To record the initial state, serialize it into a []byte and pass it as the second argument to NewRecorder: On replay, get the bytes from Replayer.Initial: Recorders and replayers have support for running callbacks before messages are written to or read from the replay file. A Recorder has a BeforeFunc that can modify a request or response before it is written to the replay file. The actual RPCs sent to the service during recording remain unaltered; only what is saved in the replay file can be changed. A Replayer has a BeforeFunc that can modify a request before it is sent for matching. Example uses for these callbacks include customized logging or scrubbing data before RPCs are written to the replay file. If requests are modified by the callbacks during recording, it is important to perform the same modifications to the requests when replaying, or RPC matching on replay will fail. A common way to analyze and modify the various messages is to use a type switch. A nondeterministic program may invoke RPCs in a different order each time it is run. The order in which RPCs are called during recording may differ from the order during replay. The replayer matches incoming requests to recorded requests by method name and request contents, so nondeterminism is only a concern for identical requests that result in different responses. A nondeterministic program whose behavior differs depending on the order of such RPCs probably has a race condition: since both the recorded sequence of RPCs and the sequence during replay are valid orderings, the program should behave the same under both. The same is not true of streaming RPCs. The replayer matches streams only by method name, since it has no other information at the time the stream is opened. Two streams with the same method name that are started concurrently may replay in the wrong order. Besides the differences in replay mentioned above, other differences may cause issues for some programs. We list them here. The Replayer delivers a response to an RPC immediately, without waiting for other incoming RPCs. This can violate causality. For example, in a Pub/Sub program where one goroutine publishes and another subscribes, during replay the Subscribe call may finish before the Publish call begins. For streaming RPCs, the Replayer delivers the result of Send and Recv calls in the order they were recorded. No attempt is made to match message contents.
At present, this package does not record or replay stream headers and trailers, or the result of the CloseSend method.
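A sketch of record and replay (the file name and address are placeholders, and constructor options may vary by version):

    // Recording: dial the real service through the recorder's options.
    rec, err := grpcreplay.NewRecorder("service.replay", nil)
    if err != nil {
        log.Fatal(err)
    }
    defer rec.Close() // essential: flushes the replay file
    conn, err := grpc.Dial("service.example.com:443", rec.DialOptions()...)

    // Replaying: no real server is needed.
    rep, err := grpcreplay.NewReplayer("service.replay")
    if err != nil {
        log.Fatal(err)
    }
    defer rep.Close()
    conn2, err := rep.Connection()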
Package flac provides access to FLAC (Free Lossless Audio Codec) streams. A brief introduction of the FLAC stream format [1] follows. Each FLAC stream starts with a 32-bit signature ("fLaC"), followed by one or more metadata blocks, and then one or more audio frames. The first metadata block (StreamInfo) describes the basic properties of the audio stream and it is the only mandatory metadata block. Subsequent metadata blocks may appear in an arbitrary order. Please refer to the documentation of the meta [2] and the frame [3] packages for a brief introduction of their respective formats. Note: the Encoder API is experimental until the 1.1.x release. As such, its API is expected to change.
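A decoding sketch (the file name is a placeholder):

    stream, err := flac.ParseFile("song.flac")
    if err != nil {
        log.Fatal(err)
    }
    // The mandatory StreamInfo block describes the audio's basic properties.
    fmt.Println("sample rate:", stream.Info.SampleRate)
    for {
        frame, err := stream.ParseNext() // audio frames, one at a time
        if err == io.EOF {
            break
        }
        if err != nil {
            log.Fatal(err)
        }
        _ = frame
    }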
Package nrnats instruments https://github.com/nats-io/nats.go. This package can be used to simplify instrumenting NATS publishers and subscribers. Currently, due to the nature of the NATS framework, we are limited to two integration points: `StartPublishSegment` for publishers, and `SubWrapper` for subscribers. To generate an external segment for any method that publishes or responds to a NATS message, use the `StartPublishSegment` method. The resulting segment will also need to be ended. Example: Or: StartPublishSegment can be used with a NATS Streaming Connection as well (https://github.com/nats-io/stan.go). Use the `NatsConn()` method on the `stan.Conn` interface (https://godoc.org/github.com/nats-io/stan#Conn) to access the `nats.Conn` object. The `nrnats.SubWrapper` function can be used to wrap the function for `nats.Subscribe` (https://godoc.org/github.com/nats-io/go-nats#Conn.Subscribe or https://godoc.org/github.com/nats-io/go-nats#EncodedConn.Subscribe) and `nats.QueueSubscribe` (https://godoc.org/github.com/nats-io/go-nats#Conn.QueueSubscribe or https://godoc.org/github.com/nats-io/go-nats#EncodedConn.QueueSubscribe). If the `newrelic.Application` parameter is non-nil, it will create a `newrelic.Transaction` and end the transaction when the passed function is complete. Example: Full Publisher/Subscriber example: https://github.com/newrelic/go-agent/blob/master/v3/integrations/nrnats/examples/main.go
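A sketch of both integration points (txn, app, and nc are assumed to be an active newrelic.Transaction, a newrelic.Application, and a nats.Conn):

    // Publisher: wrap each publish in a segment, and end the segment.
    seg := nrnats.StartPublishSegment(txn, nc, "mysubject")
    err := nc.Publish("mysubject", []byte("data"))
    seg.End()
    if err != nil {
        log.Fatal(err)
    }

    // Subscriber: wrap the handler so each delivery gets a transaction.
    _, err = nc.Subscribe("mysubject", nrnats.SubWrapper(app, func(m *nats.Msg) {
        // process m
    }))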
Package stan provides a NATS Streaming broker
Package appstream provides the API client, operations, and parameter types for Amazon AppStream. This is the Amazon AppStream 2.0 API Reference. This documentation provides descriptions and syntax for each of the actions and data types in AppStream 2.0. AppStream 2.0 is a fully managed, secure application streaming service that lets you stream desktop applications to users without rewriting applications. AppStream 2.0 manages the AWS resources that are required to host and run your applications, scales automatically, and provides access to your users on demand. You can call the AppStream 2.0 API operations by using an interface VPC endpoint (interface endpoint). For more information, see Access AppStream 2.0 API Operations and CLI Commands Through an Interface VPC Endpoint in the Amazon AppStream 2.0 Administration Guide. To learn more about AppStream 2.0, see the following resources: Amazon AppStream 2.0 product page Amazon AppStream 2.0 documentation
Package streams specifies interfaces to be implemented by the streaming connectors and operators.
Package cmds helps build both standalone and client-server applications. The basic building blocks are requests, commands, emitters and responses. A command consists of a description of the parameters and a function. The function is passed the request as well as an emitter as arguments. It does operations on the inputs and sends the results to the user by emitting them. There are a number of emitters in this package and its subpackages, but the user is free to create their own. A command is a struct containing the command's help text, a description of the arguments and options, the command's processing function and a type to let the caller know what type will be emitted. Optionally, one of the functions PostRun and Encoder may be defined that consumes the function's emitted values and generates a visual representation for, e.g., the terminal. Encoders work on a value-by-value basis, while PostRun operates on the value stream. An emitter has the Emit method, which takes the command's function's output as an argument and passes it to the user. The command's function does not know what kind of emitter it works with, so the same function may run locally or on a server, using an RPC interface. Emitters can also send errors using the SetError method. The user-facing emitter usually is the cli emitter. Values emitted here will be printed to the terminal using either the Encoders or the PostRun function. A response is a value that the user can read emitted values from. Responses have a method Next() that returns the next emitted value and an error value. If the last element has been received, the returned error value is io.EOF. If the application code has sent an error using SetError, the error ErrRcvdError is returned by Next, indicating that the caller should call Error(). Depending on the response type, other errors may also occur. Pipes are pairs (emitter, response), such that a value emitted on the emitter can be received in the response value. Most builtin emitters are "pipe" emitters. The most prominent examples are the channel pipe and the http pipe. The channel pipe is backed by a channel. The only error value returned by the response is io.EOF, which happens when the channel is closed. The http pipe is backed by an http connection. The response can also return other errors, e.g. if there are errors on the network. To get a better idea of what's going on, take a look at the examples at https://github.com/ipfs/go-ipfs-cmds/tree/master/examples.
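A sketch of a minimal command; the field names follow the package's Command struct, though details may differ by version:

    var helloCmd = &cmds.Command{
        Helptext: cmds.HelpText{
            Tagline: "Emit a greeting",
        },
        Run: func(req *cmds.Request, re cmds.ResponseEmitter, env cmds.Environment) error {
            // Emit sends a value to the user; the same Run works locally or over RPC.
            return re.Emit("hello")
        },
    }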
Package fwd provides a buffered reader and writer. Each has methods that help improve the encoding/decoding performance of some binary protocols. The Writer and Reader types provide similar functionality to their counterparts in bufio, plus a few extra utility methods that simplify read-ahead and write-ahead. I wrote this package to improve serialization performance for http://github.com/tinylib/msgp, where it provided about a 2x speedup over `bufio` for certain workloads. However, care must be taken to understand the semantics of the extra methods provided by this package, as they allow the user to access and manipulate the buffer memory directly. The extra methods for Reader are Reader.Peek, Reader.Skip and Reader.Next. (*fwd.Reader).Peek, unlike (*bufio.Reader).Peek, will re-allocate the read buffer in order to accommodate arbitrarily large read-ahead. (*fwd.Reader).Skip skips the next 'n' bytes in the stream, and uses the io.Seeker interface if the underlying stream implements it. (*fwd.Reader).Next returns a slice pointing to the next 'n' bytes in the read buffer (like Reader.Peek), but also increments the read position. This allows users to process streams in arbitrary block sizes without having to manage appropriately-sized slices. Additionally, obviating the need to copy the data from the buffer to another location in memory can improve performance dramatically in CPU-bound applications. Writer has only one extra method, (*fwd.Writer).Next, which returns a slice pointing to the next 'n' bytes of the writer, and increments the write position by the length of the returned slice. This allows users to write directly to the end of the buffer.
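A sketch of the read-ahead methods (data is any []byte; the sizes are illustrative):

    r := fwd.NewReader(bytes.NewReader(data))
    head, err := r.Peek(4) // look at the next 4 bytes without consuming them
    if err != nil {
        log.Fatal(err)
    }
    _ = head
    chunk, err := r.Next(4) // the same bytes, but the read position advances
    if err != nil {
        log.Fatal(err)
    }
    _ = chunk // chunk points into the reader's buffer; copy it if you keep it
    if _, err := r.Skip(8); err != nil { // discard the next 8 bytes
        log.Fatal(err)
    }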
Package eventstreamtesting implements helper utilities for event stream protocol testing.
Package connect is a slim RPC framework built on Protocol Buffers and net/http. In addition to supporting its own protocol, Connect handlers and clients are wire-compatible with gRPC and gRPC-Web, including streaming. This documentation is intended to explain each type and function in isolation. Walkthroughs, FAQs, and other narrative docs are available on the Connect website, and there's a working demonstration service on GitHub.
Package uplink is the main entrypoint to interacting with Storj Labs' decentralized storage network. Sign up for an account on a Satellite today! https://storj.io/ The fundamental unit of access in the Storj Labs storage network is the Access Grant. An access grant is a serialized structure that is internally composed of an API Key, a set of encryption key information, and information about which Storj Labs or Tardigrade network Satellite is responsible for the metadata. An access grant is always associated with exactly one Project on one Satellite. If you don't already have an access grant, you will need to make an account on a Satellite, generate an API Key, and encapsulate that API Key with encryption information into an access grant. If you don't already have an account on a Satellite, first make one at https://storj.io/ and note the Satellite you choose (such as us1.storj.io, eu1.storj.io, etc). Then, make an API Key in the web interface. The first step to any project is to generate a restricted access grant with the minimal permissions that are needed. Access grants contain all encryption information and they should be restricted as much as possible. To make an access grant, you can create one using our Uplink CLI tool's 'share' subcommand (after setting up the Uplink CLI tool), or you can make one as follows: In the above example, 'serializedAccess' is a human-readable string that represents read-only access to just the "logs" bucket, and is only able to decrypt that one bucket thanks to hierarchical deterministic key derivation. Note: RequestAccessWithPassphrase is CPU-intensive, and your application's normal lifecycle should avoid it and use ParseAccess where possible instead. To revoke an access grant see the Project.RevokeAccess method. A common architecture for building applications is to have a single bucket for the entire application to store the objects of all users. In such an architecture, it is of utmost importance to guarantee that users can access only their objects but not the objects of other users. This can be achieved by implementing an app-specific authentication service that generates an access grant for each user by restricting the main access grant of the application. This user-specific access grant is restricted to access the objects only within a specific key prefix defined for the user. When initialized, the authentication server creates the main application access grant with an empty passphrase as follows. The authentication service does not hold any encryption information about users, so the passphrase used to request the main application access grant does not matter. The encryption keys related to user objects will be overridden in the next step on the client side. It is important that once set to a specific value, this passphrase never changes in the future. Therefore, the best practice is to use an empty passphrase. Whenever a user is authenticated, the authentication service generates the user-specific access grant as follows: The userID is something that uniquely identifies the users in the application and must never change. Along with the user access grant, the authentication service should return a user-specific salt. The salt must always be the same for this user. The salt size is 16 or 32 bytes.
Once the application receives the user-specific access grant and the user-specific salt from the authentication service, it has to override the encryption key in the access grant, so users can encrypt and decrypt their files with encryption keys derived from their passphrase. The user-specific access grant is now ready to use by the application. Once you have a valid access grant, you can open a Project with the access that the access grant allows for. Projects allow you to manage buckets and objects within buckets. A bucket represents a collection of objects. You can upload, download, list, and delete objects of any size or shape. Objects within buckets are represented by keys, where keys can optionally be listed using the "/" delimiter. Note: Objects and object keys within buckets are end-to-end encrypted, but bucket names themselves are not encrypted, so the billing interface on the Satellite can show you bucket line items. Objects support a couple of kilobytes of arbitrary key/value metadata, and arbitrary-size primary data streams with the ability to read at arbitrary offsets. If you want to access only a small subrange of the data you uploaded, you can use `uplink.DownloadOptions` to specify the download range. Listing objects returns an iterator that allows you to walk through all the items:
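A sketch of opening a project and doing a ranged download (the bucket, key, and byte range are placeholders; serializedAccess and ctx are assumed to be in scope):

    access, err := uplink.ParseAccess(serializedAccess)
    if err != nil {
        log.Fatal(err)
    }
    project, err := uplink.OpenProject(ctx, access)
    if err != nil {
        log.Fatal(err)
    }
    defer project.Close()
    // Read only the first KiB of the object.
    download, err := project.DownloadObject(ctx, "logs", "2020/09/app.log",
        &uplink.DownloadOptions{Offset: 0, Length: 1024})
    if err != nil {
        log.Fatal(err)
    }
    defer download.Close()
    data, err := io.ReadAll(download)
    if err != nil {
        log.Fatal(err)
    }
    _ = data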
Package bitstream is a simple wrapper around an io.Reader and io.Writer to provide bit-level access to the stream.
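A sketch of writing and then reading individual bits (the Bit constants One and Zero are assumed from this package's API):

    var buf bytes.Buffer
    w := bitstream.NewWriter(&buf)
    w.WriteBit(bitstream.One)
    w.WriteBits(0x0F, 4)    // write the low 4 bits of 0x0F
    w.Flush(bitstream.Zero) // pad the final byte with zero bits
    r := bitstream.NewReader(&buf)
    bit, err := r.ReadBit()
    fmt.Println(bit, err)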
Package chacha20 implements the ChaCha20 / XChaCha20 stream cipher. Note that a specific key-nonce combination must be unique for all time. There are three versions of ChaCha20:
- ChaCha20 with a 64 bit nonce (en/decrypt up to 2^64 * 64 bytes for one key-nonce combination)
- ChaCha20 with a 96 bit nonce (en/decrypt up to 2^32 * 64 bytes (~256 GB) for one key-nonce combination)
- XChaCha20 with a 192 bit nonce (en/decrypt up to 2^64 * 64 bytes for one key-nonce combination)
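A sketch assuming the common XORKeyStream entry point, where the nonce length (8, 12, or 24 bytes) selects the variant; rand is crypto/rand:

    key := make([]byte, 32)   // 256-bit key; must be kept secret
    nonce := make([]byte, 12) // a 12-byte nonce selects the IETF ChaCha20 variant
    if _, err := rand.Read(key); err != nil {
        log.Fatal(err)
    }
    if _, err := rand.Read(nonce); err != nil {
        log.Fatal(err)
    }
    plaintext := []byte("attack at dawn")
    ciphertext := make([]byte, len(plaintext))
    chacha20.XORKeyStream(ciphertext, plaintext, nonce, key)
    // Applying XORKeyStream again with the same key and nonce decrypts.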
Package graw is a high-level, easy to use, Reddit bot library. graw will take a low-level handle from the graw/reddit package and manage everything for you. You just specify in a config what events you want to listen to on Reddit, and graw will take care of maintaining the event stream and calling the handler methods of your bot whenever new events occur. Announcing new posts in /r/self is as simple as: graw can handle many event sources on Reddit; see the Config struct for the complete set of offerings. graw has one other function that behaves like Scan(): Run(). Scan() is for logged-out bots (what Reddit calls "scripts"). Run() handles logged-in bots, which can subscribe to logged-in event sources in the bot's account inbox like mentions and private messages.
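A sketch of that announcer pattern, under the assumption that Scan takes a handler, a reddit.Script handle, and a Config, as in the project README; the user agent and names are illustrative:

    type announcer struct{}

    func (a *announcer) Post(post *reddit.Post) error {
        fmt.Printf("new post in /r/self: %s\n", post.Title)
        return nil
    }

    func run() error {
        script, err := reddit.NewScript("graw:doc_example:0.1.0", 30*time.Second)
        if err != nil {
            return err
        }
        _, wait, err := graw.Scan(&announcer{}, script, graw.Config{Subreddits: []string{"self"}})
        if err != nil {
            return err
        }
        return wait() // block while graw feeds events to the handler
    }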
Package mp4ff - MP4 media file parser and writer for AVC and HEVC video, AAC audio, and stpp/wvtt subtitles. Focused on fragmented files as used for streaming in DASH, MSS and HLS fMP4. The mp4 library has functions for parsing (called Decode) and writing (called Encode). mp4.File is a representation of a "File" which can be more or less complete, but should have some top-layer boxes. The typical child boxes are exported so that one can write paths, for example to access the (only) trun box of a moof box. Some simple command-line tools are available in the cmd directory. Example code is available in the examples directory.
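A parsing sketch (the file name is a placeholder; the moov access assumes a progressive, non-fragmented file):

    f, err := os.Open("video.mp4")
    if err != nil {
        log.Fatal(err)
    }
    defer f.Close()
    parsed, err := mp4.DecodeFile(f)
    if err != nil {
        log.Fatal(err)
    }
    if parsed.Moov != nil {
        fmt.Println("tracks:", len(parsed.Moov.Traks))
    }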
Package redisqueue provides a producer and consumer of a queue that uses Redis streams (https://redis.io/topics/streams-intro). The features of this package include: Here's an example of a producer that inserts 1000 messages into a queue: And here's an example of a consumer that reads the messages off of that queue:
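A sketch of the producer and consumer pair described above (the stream name and payload are placeholders, and constructor options are elided):

    p, err := redisqueue.NewProducer()
    if err != nil {
        log.Fatal(err)
    }
    for i := 0; i < 1000; i++ {
        err := p.Enqueue(&redisqueue.Message{
            Stream: "example",
            Values: map[string]interface{}{"index": i},
        })
        if err != nil {
            log.Fatal(err)
        }
    }

    c, err := redisqueue.NewConsumer()
    if err != nil {
        log.Fatal(err)
    }
    c.Register("example", func(msg *redisqueue.Message) error {
        fmt.Println(msg.Values["index"])
        return nil
    })
    c.Run() // blocks, reading messages off the stream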
Package multistream implements a simple stream router for the multistream-select protocol. The protocol is defined at https://github.com/multiformats/multistream-select
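A server-side sketch using the pre-generics flavor of this API (the protocol ID and echo behavior are illustrative):

    mux := multistream.NewMultistreamMuxer()
    mux.AddHandler("/echo/1.0.0", func(protocol string, rwc io.ReadWriteCloser) error {
        defer rwc.Close()
        _, err := io.Copy(rwc, rwc) // echo until the peer closes
        return err
    })
    // For each inbound connection, run protocol selection and dispatch:
    //   err := mux.Handle(conn)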