Package transcribestreaming provides the API client, operations, and parameter types for Amazon Transcribe Streaming Service. Amazon Transcribe streaming offers three main types of real-time transcription: Standard, Medical, and Call Analytics. Standard transcriptions are the most common option. Refer to for details. Medical transcriptions are tailored to medical professionals and incorporate medical terms. A common use case for this service is transcribing doctor-patient dialogue in real time, so doctors can focus on their patient instead of taking notes. Refer to for details. Call Analytics transcriptions are designed for use with call center audio on two different channels; if you're looking for insight into customer service calls, use this option. Refer to for details.
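As a rough getting-started sketch (assuming the aws-sdk-go-v2 module layout; error handling is abbreviated), the streaming client is constructed from a loaded AWS configuration:

	import (
		"context"
		"log"

		"github.com/aws/aws-sdk-go-v2/config"
		"github.com/aws/aws-sdk-go-v2/service/transcribestreaming"
	)

	func newClient(ctx context.Context) *transcribestreaming.Client {
		// Load region and credentials from the environment or shared config files.
		cfg, err := config.LoadDefaultConfig(ctx)
		if err != nil {
			log.Fatal(err)
		}
		// The returned client exposes the streaming transcription operations.
		return transcribestreaming.NewFromConfig(cfg)
	}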
Package cloud is the root of the packages used to access Google Cloud Services. See https://pkg.go.dev/cloud.google.com/go for a full list of sub-modules. All clients in sub-packages are configurable via client options. These options are described here: https://pkg.go.dev/google.golang.org/api/option. Endpoint configuration is used to specify the URL to which requests are sent. It is used for services that support or require regional endpoints, as well as for other use cases such as testing against fake servers. For example, the Vertex AI service recommends that you configure the endpoint to the location with the features you want that is closest to your physical location or the location of your users. There is no global endpoint for Vertex AI. See Vertex AI - Locations for more details. The following example demonstrates configuring a Vertex AI client with a regional endpoint: All of the clients support authentication via Google Application Default Credentials, or by providing a JSON key file for a Service Account. See examples below. Google Application Default Credentials (ADC) is the recommended way to authorize and authenticate clients. For information on how to create and obtain Application Default Credentials, see https://cloud.google.com/docs/authentication/production. If you have your environment configured correctly you will not need to pass any extra information to the client libraries. Here is an example of a client using ADC to authenticate: You can use a file with credentials to authenticate and authorize, such as a JSON key file associated with a Google service account. Service Account keys can be created and downloaded from https://console.cloud.google.com/iam-admin/serviceaccounts. This example uses the Secret Manager client, but the same steps apply to all other client libraries in this package as well. Example: In some cases (for instance, you don't want to store secrets on disk), you can create credentials from in-memory JSON and use the WithCredentials option. This example uses the Secret Manager client, but the same steps apply to all other client libraries as well. Note that scopes can be found at https://developers.google.com/identity/protocols/oauth2/scopes, and are also provided in all auto-generated libraries: for example, cloud.google.com/go/secretmanager/apiv1 provides DefaultAuthScopes. Example: By default, non-streaming methods, like Create or Get, will have a default deadline applied to the context provided at call time, unless a context deadline is already set. Streaming methods have no default deadline and will run indefinitely. To set timeouts or arrange for cancellation, use context. Transient errors will be retried when correctness allows. Here is an example of setting a timeout for an RPC using context.WithTimeout: Here is an example of setting a timeout for an RPC using github.com/googleapis/gax-go/v2.WithTimeout: Here is an example of how to arrange for an RPC to be canceled using context.WithCancel: Do not attempt to control the initial connection (dialing) of a service by setting a timeout on the context passed to NewClient. Dialing is non-blocking, so timeouts would be ineffective and would only interfere with credential refreshing, which uses the same context. Regardless of which transport is used, request headers can be set in the same way using `callctx.SetHeaders`. Here is a generic example: ## Google-reserved headers There are some header keys that Google reserves for internal use that must not be overwritten.
The following header keys are broadly considered reserved and should not be conveyed by client library users unless instructed to do so: * `x-goog-api-client` * `x-goog-request-params` Be sure to check the individual package documentation for other service-specific reserved headers. For example, Storage supports a specific auditing header that is mentioned in that module's documentation. ## Google Cloud system parameters Google Cloud services respect system parameters that can be used to augment request and/or response behavior. For the most part, they are not needed when using one of the enclosed client libraries. However, those that may be necessary are made available via the `callctx` package. If not present there, consider opening an issue on that repo to request a new constant. Connection pooling differs in clients based on their transport. Cloud clients either rely on HTTP or gRPC transports to communicate with Google Cloud. Cloud clients that use HTTP rely on the underlying HTTP transport to cache connections for later re-use. These are cached to the http.MaxIdleConns and http.MaxIdleConnsPerHost settings in http.DefaultTransport by default. For gRPC clients, connection pooling is configurable. Users of Cloud Client Libraries may specify option.WithGRPCConnectionPool(n) as a client option to NewClient calls. This configures the underlying gRPC connections to be pooled and accessed in a round robin fashion. Minimal container images like Alpine lack CA certificates. This causes RPCs to appear to hang, because gRPC retries indefinitely. See https://github.com/googleapis/google-cloud-go/issues/928 for more information. For tips on debugging code that calls into our libraries, check out our Debugging Guide. For tips on how to write tests against code that calls into our libraries, check out our Testing Guide. Most of the errors returned by the generated clients are wrapped in a github.com/googleapis/gax-go/v2/apierror.APIError and can be further unwrapped into a google.golang.org/grpc/status.Status or google.golang.org/api/googleapi.Error depending on the transport used to make the call (gRPC or REST). Converting your errors to these types can be a useful way to get more information about what went wrong while debugging. APIError gives access to specific details in the error. The transport-specific errors can still be unwrapped using the APIError. If the gRPC transport was used, the google.golang.org/grpc/status.Status can still be parsed using the google.golang.org/grpc/status.FromError function. Semver is used to communicate stability of the sub-modules of this package. Note that some stable sub-modules do contain packages, and sometimes features, that are considered unstable. If something is unstable it will be explicitly labeled as such. Example of package docs in an unstable package: Clients that contain alpha and beta in their import path may change or go away without notice. Clients marked stable will maintain compatibility with future versions for as long as we can reasonably sustain. Incompatible changes might be made in some situations, including:
Package pubsub provides an easy way to publish and receive Google Cloud Pub/Sub messages, hiding the details of the underlying server RPCs. Pub/Sub is a many-to-many, asynchronous messaging system that decouples senders and receivers. More information about Pub/Sub is available at https://cloud.google.com/pubsub/docs. See https://godoc.org/cloud.google.com/go for authentication, timeouts, connection pooling and similar aspects of this package. Pub/Sub messages are published to topics. A Topic may be created using Client.CreateTopic like so: Messages may then be published to a Topic: Topic.Publish queues the message for publishing and returns immediately. When enough messages have accumulated, or enough time has elapsed, the batch of messages is sent to the Pub/Sub service. Topic.Publish returns a PublishResult, which behaves like a future: its Get method blocks until the message has been sent to the service. The first time you call Topic.Publish on a Topic, goroutines are started in the background. To clean up these goroutines, call Topic.Stop: To receive messages published to a Topic, clients create a Subscription for the topic. There may be more than one subscription per topic; each message that is published to the topic will be delivered to all associated subscriptions. A Subscription may be created like so: Messages are then consumed from a Subscription via callback. The callback is invoked concurrently by multiple goroutines, maximizing throughput. To terminate a call to Subscription.Receive, cancel its context. Once client code has processed the Message, it must call Message.Ack or Message.Nack; otherwise the Message will eventually be redelivered. Ack/Nack MUST be called within the Subscription.Receive handler function, and not from a goroutine. Otherwise, flow control (e.g. ReceiveSettings.MaxOutstandingMessages) will not be respected, and messages can get orphaned when cancelling Receive. If the client cannot or doesn't want to process the message, it can call Message.Nack to speed redelivery. For more information and configuration options, see Ack Deadlines below. Note: It is possible for a Message to be redelivered even if Message.Ack has been called. Client code must be robust to multiple deliveries of messages. Note: This uses pubsub's streaming pull feature. This feature has properties that may be surprising. Please take a look at https://cloud.google.com/pubsub/docs/pull#streamingpull for more details on how streaming pull behaves compared to the synchronous pull method. The number of StreamingPull connections can be configured by setting NumGoroutines in ReceiveSettings. The default value of 10 means the client library will maintain 10 StreamingPull connections. This is more than sufficient for most use cases, as StreamingPull connections can handle up to 10 MB/s (see https://cloud.google.com/pubsub/quotas#resource_limits). In some cases, using too many streams can lead to the client library behaving poorly as the application becomes I/O bound. By default, the number of connections in the gRPC conn pool is min(4,GOMAXPROCS). Each connection supports up to 100 streams. Thus, if you have 4 or more CPU cores, the default setting allows a maximum of 400 streams which is already excessive for most use cases. If you want to change the limits on the number of streams, you can change the number of connections in the gRPC connection pool as shown below: The default pubsub deadlines are suitable for most use cases, but may be overridden.
This section describes the tradeoffs that should be considered when overriding the defaults. Behind the scenes, each message returned by the Pub/Sub server has an associated lease, known as an "ack deadline". Unless a message is acknowledged within the ack deadline, or the client requests that the ack deadline be extended, the message will become eligible for redelivery. As a convenience, the pubsub client will automatically extend deadlines until either: Ack deadlines are extended periodically by the client. The period between extensions, as well as the length of the extension, automatically adjusts based on the time it takes the subscriber application to ack messages (based on the 99th percentile of ack latency). By default, this extension period is capped at 10m, but this limit can be configured by the "MaxExtensionPeriod" setting. This has the effect that subscribers that process messages quickly have their message ack deadlines extended for a short amount, whereas subscribers that process messages slowly have their message ack deadlines extended for a large amount. The net effect is fewer RPCs sent from the client library. For example, consider a subscriber that takes 3 minutes to process each message. Since the library has already recorded several 3-minute ack latencies in a percentile distribution, future message extensions are sent with a value of 3 minutes, every 3 minutes. Suppose the application crashes 5 seconds after the library sends such an extension: the Pub/Sub server would wait the remaining 2m55s before re-sending the messages out to other subscribers. Please note that by default, the client library does not use the subscription's AckDeadline for the MaxExtension value. For use cases where message processing exceeds 30 minutes, we recommend using the base client in a pull model, since long-lived streams are periodically killed by firewalls. See the example at https://godoc.org/cloud.google.com/go/pubsub/apiv1#example-SubscriberClient-Pull-LengthyClientProcessing. To use an emulator with this library, you can set the PUBSUB_EMULATOR_HOST environment variable to the address at which your emulator is running. This will send requests to that address instead of to Pub/Sub. You can then create and use a client as usual:
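	// With PUBSUB_EMULATOR_HOST set, requests are routed to the emulator.
	ctx := context.Background()
	client, err := pubsub.NewClient(ctx, "my-project-id")
	if err != nil {
		// TODO: Handle error.
	}
	defer client.Close()

	topic := client.Topic("my-topic")
	defer topic.Stop()
	res := topic.Publish(ctx, &pubsub.Message{Data: []byte("hello")})
	if _, err := res.Get(ctx); err != nil {
		// TODO: Handle error.
	}

(A minimal sketch; the project and topic IDs are placeholders, and the topic is assumed to already exist.)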
`grpc_middleware` is a collection of gRPC middleware packages: interceptors, helpers and tools. gRPC is a fantastic RPC middleware, which sees a lot of adoption in the Golang world. However, the upstream gRPC codebase is relatively bare bones. This package, and most of its child packages, provides commonly needed middleware for gRPC: client-side interceptors for retries, server-side interceptors for input validation and auth, functions for chaining said interceptors, metadata convenience methods and more. By default, gRPC doesn't allow one to have more than one interceptor either on the client or on the server side. `grpc_middleware` provides convenient chaining methods: a simple way of turning multiple interceptors into a single interceptor. Here's an example for server chaining: These interceptors will be executed from left to right: logging, monitoring and auth. Here's an example for client side chaining: These interceptors will be executed from left to right: monitoring and then retry logic. The retry interceptor will call every interceptor that follows it whenever a retry happens. Implementing your own interceptor is pretty trivial: there are interfaces for that. But the interesting bit is exposing common data to handlers (and other middleware), similarly to HTTP Middleware design. For example, you may want to pass the identity of the caller from the auth interceptor all the way to the handling function. For example, a client-side interceptor for auth looks like: Unfortunately, it's not as easy for streaming RPCs. These have the `context.Context` embedded within the `grpc.ServerStream` object. To pass values through context, a wrapper (`WrappedServerStream`) is needed. For example:
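	import (
		"context"

		grpc_middleware "github.com/grpc-ecosystem/go-grpc-middleware"
		"google.golang.org/grpc"
	)

	type callerKey struct{}

	func StreamAuthInterceptor(srv interface{}, ss grpc.ServerStream, info *grpc.StreamServerInfo, handler grpc.StreamHandler) error {
		// Attach data derived during auth (a placeholder value here) to a new context.
		newCtx := context.WithValue(ss.Context(), callerKey{}, "caller-identity")
		// WrapServerStream returns a ServerStream whose Context() reports WrappedContext.
		wrapped := grpc_middleware.WrapServerStream(ss)
		wrapped.WrappedContext = newCtx
		return handler(srv, wrapped)
	}

(A minimal sketch; callerKey and the placeholder identity value are illustrative and not part of the package.)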
Package gopacket provides packet decoding for the Go language. gopacket contains many sub-packages with additional functionality you may find useful, including: Also, if you're looking to dive right into code, see the examples subdirectory for numerous simple binaries built using gopacket libraries. Minimum Go version required is 1.5 except for pcapgo/EthernetHandle, afpacket, and bsdbpf which need at least 1.7 due to x/sys/unix dependencies. gopacket takes in packet data as a []byte and decodes it into a packet with a non-zero number of "layers". Each layer corresponds to a protocol within the bytes. Once a packet has been decoded, the layers of the packet can be requested from the packet. Packets can be decoded from a number of starting points. Many of our base types implement Decoder, which allows us to decode packets for which we don't have full data. Most of the time, you won't just have a []byte of packet data lying around. Instead, you'll want to read packets in from somewhere (file, interface, etc) and process them. To do that, you'll want to build a PacketSource. First, you'll need to construct an object that implements the PacketDataSource interface. There are implementations of this interface bundled with gopacket in the gopacket/pcap and gopacket/pfring subpackages... see their documentation for more information on their usage. Once you have a PacketDataSource, you can pass it into NewPacketSource, along with a Decoder of your choice, to create a PacketSource. Once you have a PacketSource, you can read packets from it in multiple ways. See the docs for PacketSource for more details. The easiest method is the Packets function, which returns a channel, then asynchronously writes new packets into that channel, closing the channel if the packetSource hits an end-of-file. You can change the decoding options of the packetSource by setting fields in packetSource.DecodeOptions... see the following sections for more details. gopacket optionally decodes packet data lazily, meaning it only decodes a packet layer when it needs to handle a function call. Lazily-decoded packets are not concurrency-safe. Since layers have not all been decoded, each call to Layer() or Layers() has the potential to mutate the packet in order to decode the next layer. If a packet is used in multiple goroutines concurrently, don't use gopacket.Lazy. Then gopacket will decode the packet fully, and all future function calls won't mutate the object. By default, gopacket will copy the slice passed to NewPacket and store the copy within the packet, so future mutations to the bytes underlying the slice don't affect the packet and its layers. If you can guarantee that the underlying slice bytes won't be changed, you can use NoCopy to tell gopacket.NewPacket, and it'll use the passed-in slice itself. The fastest method of decoding is to use both Lazy and NoCopy, but note from the many caveats above that for some implementations either or both may be dangerous. During decoding, certain layers are stored in the packet as well-known layer types. For example, IPv4 and IPv6 are both considered NetworkLayer layers, while TCP and UDP are both TransportLayer layers. We support 4 layers, corresponding to the 4 layers of the TCP/IP layering scheme (roughly analogous to layers 2, 3, 4, and 7 of the OSI model). To access these, you can use the packet.LinkLayer, packet.NetworkLayer, packet.TransportLayer, and packet.ApplicationLayer functions.
Each of these functions returns a corresponding interface (gopacket.{Link,Network,Transport,Application}Layer). The first three provide methods for getting src/dst addresses for that particular layer, while the final layer provides a Payload function to get payload data. This is helpful, for example, to get payloads for all packets regardless of their underlying data type: A particularly useful layer is ErrorLayer, which is set whenever there's an error parsing part of the packet. Note that we don't return an error from NewPacket because we may have decoded a number of layers successfully before running into our erroneous layer. You may still be able to get your Ethernet and IPv4 layers correctly, even if your TCP layer is malformed. gopacket has two useful objects, Flow and Endpoint, for communicating in a protocol independent manner the fact that a packet is coming from A and going to B. The general layer types LinkLayer, NetworkLayer, and TransportLayer all provide methods for extracting their flow information, without worrying about the type of the underlying Layer. A Flow is a simple object made up of a set of two Endpoints, one source and one destination. It details the sender and receiver of the Layer of the Packet. An Endpoint is a hashable representation of a source or destination. For example, for LayerTypeIPv4, an Endpoint contains the IP address bytes for a v4 IP packet. A Flow can be broken into Endpoints, and Endpoints can be combined into Flows: Both Endpoint and Flow objects can be used as map keys, and the equality operator can compare them, so you can easily group together all packets based on endpoint criteria: For load-balancing purposes, both Flow and Endpoint have FastHash() functions, which provide quick, non-cryptographic hashes of their contents. Of particular importance is the fact that Flow FastHash() is symmetric: A->B will have the same hash as B->A. An example usage could be: This allows us to split up a packet stream while still making sure that each stream sees all packets for a flow (and its bidirectional opposite). If your network has some strange encapsulation, you can implement your own decoder. In this example, we handle Ethernet packets which are encapsulated in a 4-byte header. See the docs for Decoder and PacketBuilder for more details on how coding decoders works, or look at RegisterLayerType and RegisterEndpointType to see how to add layer/endpoint types to gopacket. TLDR: DecodingLayerParser takes about 10% of the time as NewPacket to decode packet data, but only for known packet stacks. Basic decoding using gopacket.NewPacket or PacketSource.Packets is somewhat slow due to its need to allocate a new packet and every respective layer. It's very versatile and can handle all known layer types, but sometimes you really only care about a specific set of layers regardless, so that versatility is wasted. DecodingLayerParser avoids memory allocation altogether by decoding packet layers directly into preallocated objects, which you can then reference to get the packet's information. A quick example: The important thing to note here is that the parser is modifying the passed in layers (eth, ip4, ip6, tcp) instead of allocating new ones, thus greatly speeding up the decoding process. It's even branching based on layer type... it'll handle an (eth, ip4, tcp) or (eth, ip6, tcp) stack. However, it won't handle any other type... 
since no other decoders were passed in, an (eth, ip4, udp) stack will stop decoding after ip4, and only pass back [LayerTypeEthernet, LayerTypeIPv4] through the 'decoded' slice (along with an error saying it can't decode a UDP packet). Unfortunately, not all layers can be used by DecodingLayerParser... only those implementing the DecodingLayer interface are usable. Also, it's possible to create DecodingLayers that are not themselves Layers... see layers.IPv6ExtensionSkipper for an example of this. By default, DecodingLayerParser uses a native map to store and search for a layer to decode. Though versatile, in some cases this solution may not be optimal. For example, if you have only a few layers, faster operations may be provided by sparse array indexing or a linear array scan. To accommodate these scenarios, the DecodingLayerContainer interface is introduced along with its implementations: DecodingLayerSparse, DecodingLayerArray and DecodingLayerMap. You can specify a container implementation to DecodingLayerParser with the SetDecodingLayerContainer method. Example: To skip one level of indirection (though sacrificing some capabilities) you may also use a DecodingLayerContainer directly as a decoding tool. In this case you have to handle unknown layer types and layer panics by yourself. Example: DecodingLayerSparse is the fastest, but it is most effective when the LayerType values decoded by the layers in use are not large, since otherwise it leads to a bigger memory footprint. DecodingLayerArray is very compact and primarily usable if the number of decoding layers is not big (up to ~10-15, but please do your own benchmarks). DecodingLayerMap is the most versatile one and is used by DecodingLayerParser by default. Please refer to the tests and benchmarks in the layers subpackage to further examine usage examples and performance measurements. You may also choose to implement your own DecodingLayerContainer if you want to make use of your own internal packet decoding logic. As well as offering the ability to decode packet data, gopacket allows you to create packets from scratch. A number of gopacket layers implement the SerializableLayer interface; these layers can be serialized to a []byte in the following manner: SerializeTo PREPENDS the given layer onto the SerializeBuffer, treating the current buffer's Bytes() slice as the payload of the serializing layer. Therefore, you can serialize an entire packet by serializing a set of layers in reverse order (Payload, then TCP, then IP, then Ethernet, for example). The SerializeBuffer's SerializeLayers function is a helper that does exactly that. To generate an (empty and useless, because no fields are set) Ethernet(IPv4(TCP(Payload))) packet, for example, you can run: If you use gopacket, you'll almost definitely want to make sure gopacket/layers is imported, since when imported it sets all the LayerType variables and fills in a lot of interesting variables/maps (DecodersByLayerName, etc). Therefore, it's recommended that even if you don't use any layers functions directly, you still import with:
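	import (
		// Imported for side effects: registers layer types, decoders, and
		// related maps such as DecodersByLayerName.
		_ "github.com/google/gopacket/layers"
	)

(A minimal sketch; the github.com/google/gopacket import path is assumed and may differ if you build against a fork.)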
Package excelize provides a set of functions that allow you to write to and read from XLAM / XLSM / XLSX / XLTM / XLTX files. It supports reading and writing spreadsheet documents generated by Microsoft Excel™ 2007 and later, supports complex components with high compatibility, and provides a streaming API for generating or reading data from worksheets with huge amounts of data. This library needs Go version 1.18 or later. See https://xuri.me/excelize for more information about this package.
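For instance, a minimal sketch of creating and saving a workbook (the sheet, cell, and file names are placeholders):

	f := excelize.NewFile()
	defer func() {
		if err := f.Close(); err != nil {
			fmt.Println(err)
		}
	}()
	// Write a value into cell A1 of the default sheet.
	if err := f.SetCellValue("Sheet1", "A1", "Hello, World!"); err != nil {
		fmt.Println(err)
	}
	// Save the workbook to disk.
	if err := f.SaveAs("Book1.xlsx"); err != nil {
		fmt.Println(err)
	}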
Package excelize provides a set of functions that allow you to write to and read from XLSX / XLSM / XLTM files. It supports reading and writing spreadsheet documents generated by Microsoft Excel™ 2007 and later, supports complex components with high compatibility, and provides a streaming API for generating or reading data from worksheets with huge amounts of data. This library needs Go version 1.15 or later. See https://xuri.me/excelize for more information about this package.
Package ws implements a client and server for the WebSocket protocol as specified in RFC 6455. The main purpose of this package is to provide a simple low-level API for efficient work with the protocol. Overview. Upgrade to WebSocket (or WebSocket handshake) can be done in two ways. The first way is to use the `net/http` server: The second and much more efficient way is so-called "zero-copy upgrade". It avoids redundant allocations and copying of unused headers or other request data. The user decides which data should be copied. For customization details see the `ws.Upgrader` documentation. After the WebSocket handshake you can work with the connection in multiple ways. That is, `ws` does not force a single way of working with WebSocket: As you can see, it is stream friendly: Or: For more info see the documentation.
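For instance, a minimal echo-server sketch using the `net/http` upgrade path together with the wsutil helpers (the listen address is a placeholder):

	import (
		"net/http"

		"github.com/gobwas/ws"
		"github.com/gobwas/ws/wsutil"
	)

	func main() {
		http.ListenAndServe(":8080", http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
			// Upgrade the HTTP connection to a WebSocket connection.
			conn, _, _, err := ws.UpgradeHTTP(r, w)
			if err != nil {
				return
			}
			go func() {
				defer conn.Close()
				for {
					// Read a client message and echo it back with the same opcode.
					msg, op, err := wsutil.ReadClientData(conn)
					if err != nil {
						return
					}
					if err := wsutil.WriteServerMessage(conn, op, msg); err != nil {
						return
					}
				}
			}()
		}))
	}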
Package middleware `middleware` is a collection of gRPC middleware packages: interceptors, helpers and tools. gRPC is a fantastic RPC middleware, which sees a lot of adoption in the Golang world. However, the upstream gRPC codebase is relatively bare bones. This package, and most of its child packages, provides commonly needed middleware for gRPC: client-side interceptors for retries, server-side interceptors for input validation and auth, functions for chaining said interceptors, metadata convenience methods and more. Chaining offers a simple way of turning multiple interceptors into a single interceptor. Here's an example for server chaining: These interceptors will be executed from left to right: logging, monitoring and auth. Here's an example for client side chaining: These interceptors will be executed from left to right: monitoring and then retry logic. The retry interceptor will call every interceptor that follows it whenever a retry happens. Implementing your own interceptor is pretty trivial: there are interfaces for that. But the interesting bit is exposing common data to handlers (and other middleware), similarly to HTTP Middleware design. For example, you may want to pass the identity of the caller from the auth interceptor all the way to the handling function. For example, a client-side interceptor for auth looks like: Unfortunately, it's not as easy for streaming RPCs. These have the `context.Context` embedded within the `grpc.ServerStream` object. To pass values through context, a wrapper (`WrappedServerStream`) is needed. For example:
Package azcore implements an HTTP request/response middleware pipeline used by Azure SDK clients. The middleware consists of three components. A Policy can be implemented in two ways: as a first-class function for a stateless Policy, or as a method on a type for a stateful Policy. Note that HTTP requests made via the same pipeline share the same Policy instances, so if a Policy mutates its state it MUST be properly synchronized to avoid race conditions. A Policy's Do method is called when an HTTP request wants to be sent over the network. The Do method can perform any operation(s) it desires. For example, it can log the outgoing request, mutate the URL, headers, and/or query parameters, inject a failure, etc. Once the Policy has successfully completed its request work, it must call the Next() method on the *policy.Request instance in order to pass the request to the next Policy in the chain. When an HTTP response comes back, the Policy then gets a chance to process the response/error. The Policy instance can log the response, retry the operation if it failed due to a transient error or timeout, unmarshal the response body, etc. Once the Policy has successfully completed its response work, it must return the *http.Response and error instances to its caller. Template for implementing a stateless Policy: Template for implementing a stateful Policy: The Transporter interface is responsible for sending the HTTP request and returning the corresponding HTTP response or error. The Transporter is invoked by the last Policy in the chain. The default Transporter implementation uses a shared http.Client from the standard library. The same stateful/stateless rules for Policy implementations apply to Transporter implementations. To use the Policy and Transporter instances, an application passes them to the runtime.NewPipeline function. The specified Policy instances form a chain and are invoked in the order provided to NewPipeline followed by the Transporter. Once the Pipeline has been created, create a runtime.Request instance and pass it to Pipeline's Do method. The Pipeline.Do method sends the specified Request through the chain of Policy and Transporter instances. The response/error is then sent through the same chain of Policy instances in reverse order. For example, assuming there are Policy types PolicyA, PolicyB, and PolicyC, along with TransportA, the flow of Request and Response looks like the following: The Request instance passed to Pipeline's Do method is a wrapper around an *http.Request. It also contains some internal state and provides various convenience methods. You create a Request instance by calling the runtime.NewRequest function: If the Request should contain a body, call the SetBody method. A seekable stream is required so that upon retry, the retry Policy instance can seek the stream back to the beginning before retrying the network request and re-uploading the body. Operations like JSON-MERGE-PATCH send a JSON null to indicate a value should be deleted. This requirement conflicts with the SDK's default marshalling that specifies "omitempty" as a means to resolve the ambiguity between a field to be excluded and its zero-value. In the above example, Name and Count are defined as pointer-to-type to disambiguate between a missing value (nil) and a zero-value (0) which might have semantic differences. In a PATCH operation, any fields left as nil are to have their values preserved. When updating a Widget's count, one simply specifies the new value for Count, leaving Name nil.
To fulfill the requirement for sending a JSON null, the NullValue() function can be used. This sends an explicit "null" for Count, indicating that any current value for Count should be deleted. When the HTTP response is received, the *http.Response is returned directly. Each Policy instance can inspect/mutate the *http.Response. To enable logging, set the environment variable AZURE_SDK_GO_LOGGING to "all" before executing your program. By default the logger writes to stderr. This can be customized by calling log.SetListener, providing a callback that writes to the desired location. Any custom logging implementation MUST provide its own synchronization to handle concurrent invocations. See the docs for the log package for further details. Pageable operations return potentially large data sets spread over multiple GET requests. The result of each GET is a "page" of data consisting of a slice of items. Pageable operations can be identified by their New*Pager naming convention and return type of *runtime.Pager[T]. The call to WidgetClient.NewListWidgetsPager() returns an instance of *runtime.Pager[T] for fetching pages and determining if there are more pages to fetch. No IO calls are made until the NextPage() method is invoked. Long-running operations (LROs) are operations consisting of an initial request to start the operation followed by polling to determine when the operation has reached a terminal state. An LRO's terminal state is one of the following values. LROs can be identified by their Begin* prefix and their return type of *runtime.Poller[T]. When a call to WidgetClient.BeginCreateOrUpdate() returns a nil error, it means that the LRO has started. It does _not_ mean that the widget has been created or updated (or failed to be created/updated). The *runtime.Poller[T] provides APIs for determining the state of the LRO. To wait for the LRO to complete, call the PollUntilDone() method. The call to PollUntilDone() will block the current goroutine until the LRO has reached a terminal state or the context is canceled/timed out. Note that LROs can take anywhere from several seconds to several minutes. The duration is operation-dependent. Given this variability, pollers do _not_ have a preconfigured time-out. Use a context with the appropriate cancellation mechanism as required. Pollers provide the ability to serialize their state into a "resume token" which can be used by another process to recreate the poller. This is achieved via the runtime.Poller[T].ResumeToken() method. Note that a token can only be obtained for a poller that's in a non-terminal state. Also note that any subsequent calls to poller.Poll() might change the poller's state. In this case, a new token should be created. After the token has been obtained, it can be used to recreate an instance of the originating poller. When resuming a poller, no IO is performed, and zero-value arguments can be used for everything but the Options.ResumeToken. Resume tokens are unique per service client and operation. Attempting to resume a poller for LRO BeginB() with a token from LRO BeginA() will result in an error. The fake package contains types used for constructing in-memory fake servers used in unit tests. This allows writing tests to cover various success/error conditions without the need for connecting to a live service. Please see https://github.com/Azure/azure-sdk-for-go/tree/main/sdk/samples/fakes for details and examples on how to use fakes.
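Returning to the Policy templates mentioned above, a rough sketch of a stateful Policy that adds a request header and forwards the request down the chain (the header name is illustrative):

	import (
		"net/http"

		"github.com/Azure/azure-sdk-for-go/sdk/azcore/policy"
	)

	type customHeaderPolicy struct {
		value string // stateful; synchronize access if mutated across requests
	}

	func (p *customHeaderPolicy) Do(req *policy.Request) (*http.Response, error) {
		// Mutate the outgoing request before passing it to the next Policy.
		req.Raw().Header.Set("X-Custom-Header", p.value)
		resp, err := req.Next()
		// The response/error can be inspected here on the way back up the chain.
		return resp, err
	}

Such a Policy would then be supplied, along with a Transporter, to runtime.NewPipeline as described above.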
This example is a quick-starter and demonstrates how to get started using the Azure Blob Storage SDK for Go. This example shows how to perform operations on a blob conditionally. This example shows how to download a large stream with intelligent retries. Specifically, if the connection fails while reading, continuing to read from this stream initiates a new GetBlob call passing a range that starts from the last byte successfully read before the failure.
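A rough sketch of that retrying download flow (the connection string, container, and blob names are placeholders):

	// Connection string taken from the environment for illustration.
	connectionString := os.Getenv("AZURE_STORAGE_CONNECTION_STRING")
	client, err := azblob.NewClientFromConnectionString(connectionString, nil)
	if err != nil {
		// TODO: handle error.
	}
	// DownloadStream returns the blob's content as a stream.
	resp, err := client.DownloadStream(context.TODO(), "container", "blob.txt", nil)
	if err != nil {
		// TODO: handle error.
	}
	// The retry reader re-issues ranged GetBlob calls if reading the body fails mid-stream.
	reader := resp.NewRetryReader(context.TODO(), &azblob.RetryReaderOptions{})
	defer reader.Close()
	data, err := io.ReadAll(reader)
	if err != nil {
		// TODO: handle error.
	}
	_ = data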
Package kcp-go is a Reliable-UDP library for golang. This library intends to provide a smooth, resilient, ordered, error-checked and anonymous delivery of streams over UDP packets. The interfaces of this package aim to be compatible with net.Conn in the standard library, but offer powerful features for advanced users.
Package kcp-go is a Reliable-UDP library for golang. This library intends to provide a smooth, resilient, ordered, error-checked and anonymous delivery of streams over UDP packets. The interfaces of this package aim to be compatible with net.Conn in the standard library, but offer powerful features for advanced users.
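A rough client-side sketch (the address, nil block cipher, and FEC shard counts are illustrative choices):

	// Dial a KCP session over UDP; a nil BlockCrypt disables encryption, and
	// 10/3 are example data/parity shards for forward error correction.
	sess, err := kcp.DialWithOptions("127.0.0.1:12345", nil, 10, 3)
	if err != nil {
		log.Fatal(err)
	}
	defer sess.Close()

	// The session behaves like a net.Conn, so ordinary Read/Write calls apply.
	if _, err := sess.Write([]byte("ping")); err != nil {
		log.Fatal(err)
	}
	buf := make([]byte, 4096)
	n, err := sess.Read(buf)
	if err != nil {
		log.Fatal(err)
	}
	_ = buf[:n]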
Package dynamodbstreams provides the API client, operations, and parameter types for Amazon DynamoDB Streams. Amazon DynamoDB Streams provides API actions for accessing streams and processing stream records. To learn more about application development with Streams, see Capturing Table Activity with DynamoDB Streams in the Amazon DynamoDB Developer Guide.
Package kinesis provides the API client, operations, and parameter types for Amazon Kinesis. Amazon Kinesis Data Streams is a managed service that scales elastically for real-time processing of streaming big data.
Package pubsublite provides an easy way to publish and receive messages using the Pub/Sub Lite service. Google Pub/Sub services are designed to provide reliable, many-to-many, asynchronous messaging between applications. Publisher applications can send messages to a topic and other applications can subscribe to that topic to receive the messages. By decoupling senders and receivers, Google Pub/Sub allows developers to communicate between independently written applications. Compared to Cloud Pub/Sub, Pub/Sub Lite provides partitioned data storage with predefined throughput and storage capacity. Guidance on how to choose between Cloud Pub/Sub and Pub/Sub Lite is available at https://cloud.google.com/pubsub/docs/choosing-pubsub-or-lite. More information about Pub/Sub Lite is available at https://cloud.google.com/pubsub/lite. See https://pkg.go.dev/cloud.google.com/go for authentication, timeouts, connection pooling and similar aspects of this package. Examples can be found at https://pkg.go.dev/cloud.google.com/go/pubsublite#pkg-examples and https://pkg.go.dev/cloud.google.com/go/pubsublite/pscompat#pkg-examples. Complete sample programs can be found at https://github.com/GoogleCloudPlatform/golang-samples/tree/master/pubsublite. The cloud.google.com/go/pubsublite/pscompat subpackage contains clients for publishing and receiving messages, which have similar interfaces to their pubsub.Topic and pubsub.Subscription counterparts in cloud.google.com/go/pubsub. The following examples demonstrate how to declare common interfaces: https://pkg.go.dev/cloud.google.com/go/pubsublite/pscompat#example-NewPublisherClient-Interface and https://pkg.go.dev/cloud.google.com/go/pubsublite/pscompat#example-NewSubscriberClient-Interface. The following imports are required for code snippets below: Messages are published to topics. Pub/Sub Lite topics may be created like so: Close must be called to release resources when an AdminClient is no longer required. See https://cloud.google.com/pubsub/lite/docs/topics for more information about how Pub/Sub Lite topics are configured. See https://cloud.google.com/pubsub/lite/docs/locations for the list of locations where Pub/Sub Lite is available. Pub/Sub Lite uses gRPC streams extensively for high throughput. For more differences, see https://pkg.go.dev/cloud.google.com/go/pubsublite/pscompat. To publish messages to a topic, first create a PublisherClient: Then call Publish: Publish queues the message for publishing and returns immediately. When enough messages have accumulated, or enough time has elapsed, the batch of messages is sent to the Pub/Sub Lite service. Thresholds for batching can be configured in PublishSettings. Publish returns a PublishResult, which behaves like a future; its Get method blocks until the message has been sent (or has failed to be sent) to the service: Once you've finished publishing all messages, call Stop to flush all messages to the service and close gRPC streams. The PublisherClient can no longer be used after it has been stopped or has terminated due to a permanent error. PublisherClients are expected to be long-lived and used for the duration of the application, rather than for publishing small batches of messages. Stop must be called to release resources when a PublisherClient is no longer required. See https://cloud.google.com/pubsub/lite/docs/publishing for more information about publishing. To receive messages published to a topic, create a subscription to the topic.
There may be more than one subscription per topic; each message that is published to the topic will be delivered to all of its subscriptions. Pub/Sub Lite subscriptions may be created like so: See https://cloud.google.com/pubsub/lite/docs/subscriptions for more information about how subscriptions are configured. To receive messages for a subscription, first create a SubscriberClient: Messages are then consumed from a subscription via callback. The callback may be invoked concurrently by multiple goroutines (one per partition that the subscriber client is connected to). Receive blocks until either the context is canceled or a permanent error occurs. To terminate a call to Receive, cancel its context: Clients must call pubsub.Message.Ack() or pubsub.Message.Nack() for every message received. Pub/Sub Lite does not have ACK deadlines. Pub/Sub Lite also does not actually have the concept of NACK. The default behavior terminates the SubscriberClient. In Pub/Sub Lite, only a single subscriber for a given subscription is connected to any partition at a time, and there is no other client that may be able to handle messages. See https://cloud.google.com/pubsub/lite/docs/subscribing for more information about receiving messages. Pub/Sub Lite utilizes gRPC streams extensively. gRPC allows a maximum of 100 streams per connection. Internally, the library uses a default connection pool size of 8, which supports up to 800 topic partitions. To alter the connection pool size, pass a ClientOption to pscompat.NewPublisherClient and pscompat.NewSubscriberClient:
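	ctx := context.Background()
	const topic = "projects/my-project/locations/us-central1-a/topics/my-topic"

	// Raise the gRPC connection pool size to allow more concurrent streams.
	publisher, err := pscompat.NewPublisherClient(ctx, topic, option.WithGRPCConnectionPool(16))
	if err != nil {
		// TODO: Handle error.
	}
	defer publisher.Stop()

(A minimal sketch; the topic path and pool size are placeholders, and option refers to google.golang.org/api/option.)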
Package snappy implements the Snappy compression format. It aims for very high speeds and reasonable compression. There are actually two Snappy formats: block and stream. They are related, but different: trying to decompress block-compressed data as a Snappy stream will fail, and vice versa. The block format is the Decode and Encode functions and the stream format is the Reader and Writer types. The block format, the more common case, is used when the complete size (the number of bytes) of the original data is known upfront, at the time compression starts. The stream format, also known as the framing format, is for when that isn't always true. The canonical, C++ implementation is at https://github.com/google/snappy and it only implements the block format.
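For example, a minimal sketch of the block format (the stream format is analogous via the Reader and Writer types):

	// Encode appends the block-compressed form of src to dst; a nil dst allocates one.
	compressed := snappy.Encode(nil, []byte("hello, snappy"))

	// Decode reverses Encode; it fails if given stream-format input.
	decompressed, err := snappy.Decode(nil, compressed)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(string(decompressed))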
Watermill is a Golang library for working efficiently with message streams. It is intended for building event driven applications, enabling event sourcing, RPC over messages, sagas and basically whatever else comes to your mind. You can use conventional pub/sub implementations like Kafka or RabbitMQ, but also HTTP or MySQL binlog if that fits your use case. Website with detailed documentation: https://watermill.io/ Getting started guide: https://watermill.io/docs/getting-started/
Goserial is a simple Go package to allow you to read and write from the serial port as a stream of bytes. It aims to have the same API on all platforms, including Windows. As an added bonus, the Windows package does not use cgo, so you can cross compile for Windows from another platform. Unfortunately goinstall does not currently let you cross compile so you will have to do it manually: Currently there is very little in the way of configurability. You can set the baud rate. Then you can Read(), Write(), or Close() the connection. Read() will block until at least one byte is returned. Write is the same. There is currently no exposed way to set the timeouts, though patches are welcome. Currently all ports are opened with 8 data bits, 1 stop bit, no parity, no hardware flow control, and no software flow control. This works fine for many real devices and many faux serial devices including usb-to-serial converters and bluetooth serial ports. You may Read() and Write() simultaneously on the same connection (from different goroutines). Example usage:
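	c := &serial.Config{Name: "COM45", Baud: 115200}
	s, err := serial.OpenPort(c)
	if err != nil {
		log.Fatal(err)
	}
	defer s.Close()

	// Write a command, then block until at least one byte is read back.
	if _, err := s.Write([]byte("test")); err != nil {
		log.Fatal(err)
	}
	buf := make([]byte, 128)
	n, err := s.Read(buf)
	if err != nil {
		log.Fatal(err)
	}
	log.Printf("%q", buf[:n])

(A minimal sketch; the port name and baud rate are placeholders for your device.)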
Package kafka provides the API client, operations, and parameter types for Managed Streaming for Kafka. The operations for managing an Amazon MSK cluster.
Package lz4 implements reading and writing lz4 compressed data. The package supports both the LZ4 stream format, as specified in http://fastcompression.blogspot.fr/2013/04/lz4-streaming-format-final.html, and the LZ4 block format, defined at http://fastcompression.blogspot.fr/2011/05/lz4-explained.html. See https://github.com/lz4/lz4 for the reference C implementation.
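For example, a rough sketch of stream-format compression and decompression using the Reader and Writer types (the github.com/pierrec/lz4 import path is assumed):

	import (
		"io"

		"github.com/pierrec/lz4"
	)

	// compress writes the LZ4-framed form of src to dst.
	func compress(src io.Reader, dst io.Writer) error {
		zw := lz4.NewWriter(dst)
		if _, err := io.Copy(zw, src); err != nil {
			return err
		}
		// Close flushes buffered data and writes the end-of-frame marker.
		return zw.Close()
	}

	// decompress writes the decompressed form of the LZ4-framed src to dst.
	func decompress(src io.Reader, dst io.Writer) error {
		zr := lz4.NewReader(src)
		_, err := io.Copy(dst, zr)
		return err
	}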
Package flac encapsulates the operations that extract and split FLAC metadata blocks from a FLAC stream file and assemble them back after modifications provided by other packages.
Package firehose provides the API client, operations, and parameter types for Amazon Kinesis Firehose. Amazon Data Firehose was previously known as Amazon Kinesis Data Firehose. Amazon Data Firehose is a fully managed service that delivers real-time streaming data to destinations such as Amazon Simple Storage Service (Amazon S3), Amazon OpenSearch Service, Amazon Redshift, Splunk, and various other supported destinations.
Package stan is a Go client for the NATS Streaming messaging system (https://nats.io).
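A minimal sketch of connecting, publishing, and subscribing (the cluster ID, client ID, and subject are placeholders):

	sc, err := stan.Connect("test-cluster", "client-123")
	if err != nil {
		log.Fatal(err)
	}
	defer sc.Close()

	// Publish a message to a subject.
	if err := sc.Publish("foo", []byte("Hello World")); err != nil {
		log.Fatal(err)
	}

	// Subscribe with a handler; messages are delivered asynchronously.
	sub, err := sc.Subscribe("foo", func(m *stan.Msg) {
		log.Printf("received: %s", m.Data)
	})
	if err != nil {
		log.Fatal(err)
	}
	defer sub.Unsubscribe()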