Package micro is a pluggable framework for microservices.
Package ngrok makes it easy to work with the ngrok API from Go. The package is fully code generated and should always be up to date with the latest ngrok API. Full documentation of the ngrok API can be found at: https://ngrok.com/docs/api

This package follows the best practices outlined for Go modules. All releases are tagged and any breaking changes will be reflected as a new major version. You should only import this package for production applications by pointing at a stable tagged version.

API client configuration and all of the datatypes exchanged by the API are defined in this base package. There are subpackages for every API service, each defining a Client type with methods to interact with that service. It's usually easiest to find the subpackage of the service you want to work with and begin consulting the documentation there. It is recommended to construct the service-specific clients once at initialization time.

The ClientConfig object in the root package supports functional options for configuration. The most common option is `WithHTTPClient()`, which allows the caller to specify a different net/http.Client object. This gives the caller full control over the transport if needed for use with proxies, custom TLS setups, observability and tracing, etc.

Some arguments to methods in the ngrok API are optional and must be meaningfully distinguished from zero values, especially in Update() methods. This allows the API to distinguish between choosing not to update a value and setting it to zero or the empty string. For these arguments, ngrok follows the industry-standard practice of using pointers to the primitive types, providing convenience functions like ngrok.String() and ngrok.Bool() for the caller to wrap literals as pointer values.

All List methods in the ngrok API are paged. This package abstracts that problem away from you by returning an iterator from any List API call. As you advance the iterator it will transparently fetch new pages of values for you behind the scenes. Note that the context supplied to the initial List() call will be used for all subsequent page fetches, so it must be long-lived enough to work through the entire list. The sketch below pages through all of the TLS certificates on your account; note that you must check for an error after Next() returns false to determine whether the iterator failed to fetch the next page of results.

All errors returned by the ngrok API are returned as structured payloads for easy error handling. Most non-networking errors returned by API calls in this package will be an ngrok.Error type. The ngrok.Error type exposes important metadata that will help you handle errors: specifically, the HTTP status code of any failed operation as well as an error code value that uniquely identifies the failure condition. Two helper functions make error handling easy: IsNotFound and IsErrorCode. IsNotFound helps identify the common case of accessing an API resource that no longer exists; IsErrorCode helps you identify specific ngrok errors by their unique ngrok error code.
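A minimal sketch of typical initialization, paging, and not-found handling. The module path and the iterator's Next/Item/Err shape follow the conventional ngrok-api-go layout but are assumptions; check the package docs for exact signatures:

    package main

    import (
        "context"
        "fmt"
        "log"

        "github.com/ngrok/ngrok-api-go/v5"                  // assumed module path
        "github.com/ngrok/ngrok-api-go/v5/tls_certificates" // service subpackage
    )

    func main() {
        ctx := context.Background()

        // Construct the root config and service clients once at initialization.
        clientConfig := ngrok.NewClientConfig("<API_KEY>")
        certs := tls_certificates.NewClient(clientConfig)

        // ngrok.String wraps a literal as a *string for optional Update() fields.
        desc := ngrok.String("my production certificate")
        _ = desc

        // Page through all TLS certificates; the iterator fetches pages lazily.
        iter := certs.List(nil)
        for iter.Next(ctx) {
            fmt.Println(iter.Item().ID)
        }
        // Always check for an error after Next() returns false.
        if err := iter.Err(); err != nil {
            if ngrok.IsNotFound(err) {
                log.Println("resource no longer exists")
                return
            }
            log.Fatal(err)
        }
    }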
All ngrok error codes are documented at https://ngrok.com/docs/errors. To check for a specific error condition, structure your code like the IsErrorCode sketch below.

All ngrok datatypes in this package define String() and GoString() methods so that they can be formatted into helpful string representations. The GoString() method pretty-prints an object for debugging purposes with the "%#v" formatting verb.
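A combined sketch of a code-specific error check and the formatting behavior; the resource ID and error code number are placeholders, and IsErrorCode is assumed to accept the numeric part of an ERR_NGROK_ code:

    package example

    import (
        "context"
        "fmt"
        "log"

        "github.com/ngrok/ngrok-api-go/v5" // assumed module path
        "github.com/ngrok/ngrok-api-go/v5/tls_certificates"
    )

    func show(certs *tls_certificates.Client) {
        cert, err := certs.Get(context.Background(), "tlscert_123") // placeholder ID
        if err != nil {
            // 102 is a placeholder for the numeric part of an ERR_NGROK_ code.
            if ngrok.IsErrorCode(err, 102) {
                log.Println("hit the specific failure condition ERR_NGROK_102")
            }
            return
        }
        fmt.Printf("%v\n", cert)  // String(): concise representation
        fmt.Printf("%#v\n", cert) // GoString(): pretty-printed for debugging
    }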
Package azcore implements an HTTP request/response middleware pipeline used by Azure SDK clients. The middleware consists of three components: the Policy, the Transporter, and the Pipeline.

A Policy can be implemented in two ways: as a first-class function for a stateless Policy, or as a method on a type for a stateful Policy. Note that HTTP requests made via the same pipeline share the same Policy instances, so if a Policy mutates its state it MUST be properly synchronized to avoid race conditions.

A Policy's Do method is called when an HTTP request wants to be sent over the network. The Do method can perform any operation(s) it desires. For example, it can log the outgoing request, mutate the URL, headers, and/or query parameters, inject a failure, etc. Once the Policy has successfully completed its request work, it must call the Next() method on the *policy.Request instance in order to pass the request to the next Policy in the chain.

When an HTTP response comes back, the Policy then gets a chance to process the response/error. The Policy instance can log the response, retry the operation if it failed due to a transient error or timeout, unmarshal the response body, etc. Once the Policy has successfully completed its response work, it must return the *http.Response and error instances to its caller. Templates for implementing stateless and stateful Policies are sketched below.
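A minimal sketch of both Policy shapes, assuming the standard github.com/Azure/azure-sdk-for-go/sdk/azcore module path:

    package example

    import (
        "net/http"

        "github.com/Azure/azure-sdk-for-go/sdk/azcore/policy"
    )

    // policyFunc adapts a first-class function to the Policy interface;
    // this is the idiomatic shape for a stateless Policy.
    type policyFunc func(*policy.Request) (*http.Response, error)

    // Do implements the Policy interface on policyFunc.
    func (pf policyFunc) Do(req *policy.Request) (*http.Response, error) {
        return pf(req)
    }

    func NewMyStatelessPolicy() policy.Policy {
        return policyFunc(func(req *policy.Request) (*http.Response, error) {
            // mutate/process the outgoing request here, then forward it
            resp, err := req.Next()
            // inspect/process the response (or error) on the way back
            return resp, err
        })
    }

    // A stateful Policy hangs Do off a type; shared instances MUST
    // synchronize any state they mutate.
    type myStatefulPolicy struct {
        // state fields; guard with a sync.Mutex if mutated in Do
    }

    func (p *myStatefulPolicy) Do(req *policy.Request) (*http.Response, error) {
        // request work, forward, response work: same contract as above
        return req.Next()
    }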
The Transporter interface is responsible for sending the HTTP request and returning the corresponding HTTP response or error. The Transporter is invoked by the last Policy in the chain. The default Transporter implementation uses a shared http.Client from the standard library. The same stateful/stateless rules for Policy implementations apply to Transporter implementations.

To use the Policy and Transporter instances, an application passes them to the runtime.NewPipeline function. The specified Policy instances form a chain and are invoked in the order provided to NewPipeline, followed by the Transporter. Once the Pipeline has been created, create a runtime.Request instance and pass it to Pipeline's Do method. The Pipeline.Do method sends the specified Request through the chain of Policy and Transporter instances. The response/error is then sent through the same chain of Policy instances in reverse order. For example, given Policy types PolicyA, PolicyB, and PolicyC along with TransportA, the flow of Request and Response looks like the following:

    Request:  PolicyA -> PolicyB -> PolicyC -> TransportA
    Response: PolicyA <- PolicyB <- PolicyC <- TransportA

The Request instance passed to Pipeline's Do method is a wrapper around an *http.Request. It also contains some internal state and provides various convenience methods. You create a Request instance by calling the runtime.NewRequest function. If the Request should contain a body, call the SetBody method. A seekable stream is required so that upon retry, the retry Policy instance can seek the stream back to the beginning before retrying the network request and re-uploading the body.

Operations like JSON-MERGE-PATCH send a JSON null to indicate a value should be deleted. This requirement conflicts with the SDK's default marshalling, which specifies "omitempty" as a means to resolve the ambiguity between a field to be excluded and its zero-value. To disambiguate, such fields are defined as pointer-to-type: for a Widget type with Name and Count fields, a missing value (nil) is distinguished from a zero-value (0), which might have semantic differences. In a PATCH operation, any fields left as nil have their values preserved. When updating a Widget's count, one simply specifies the new value for Count, leaving Name nil. To fulfill the requirement for sending a JSON null, the NullValue() function can be used; it sends an explicit "null" for Count, indicating that any current value for Count should be deleted.

When the HTTP response is received, the *http.Response is returned directly. Each Policy instance can inspect/mutate the *http.Response.

To enable logging, set the environment variable AZURE_SDK_GO_LOGGING to "all" before executing your program. By default the logger writes to stderr. This can be customized by calling log.SetListener, providing a callback that writes to the desired location. Any custom logging implementation MUST provide its own synchronization to handle concurrent invocations. See the docs for the log package for further details.

Pageable operations return potentially large data sets spread over multiple GET requests. The result of each GET is a "page" of data consisting of a slice of items. Pageable operations can be identified by their New*Pager naming convention and return type of *runtime.Pager[T]. The call to WidgetClient.NewListWidgetsPager() returns an instance of *runtime.Pager[T] for fetching pages and determining if there are more pages to fetch. No IO calls are made until the NextPage() method is invoked (see the sketch below).

Long-running operations (LROs) are operations consisting of an initial request to start the operation followed by polling to determine when the operation has reached a terminal state. An LRO's terminal state is one of Succeeded, Failed, or Canceled. LROs can be identified by their Begin* prefix and their return type of *runtime.Poller[T]. When a call to WidgetClient.BeginCreateOrUpdate() returns a nil error, it means that the LRO has started. It does _not_ mean that the widget has been created or updated (or failed to be created/updated). The *runtime.Poller[T] provides APIs for determining the state of the LRO. To wait for the LRO to complete, call the PollUntilDone() method, which blocks the current goroutine until the LRO has reached a terminal state or the context is canceled or times out. Note that LROs can take anywhere from several seconds to several minutes; the duration is operation-dependent. Because of this variability, pollers do _not_ have a preconfigured time-out. Use a context with the appropriate cancellation mechanism as required.

Pollers provide the ability to serialize their state into a "resume token", which can be used by another process to recreate the poller. This is achieved via the runtime.Poller[T].ResumeToken() method. Note that a token can only be obtained for a poller that's in a non-terminal state. Also note that any subsequent calls to poller.Poll() might change the poller's state; in this case, a new token should be created. After the token has been obtained, it can be used to recreate an instance of the originating poller. When resuming a poller, no IO is performed, and zero-value arguments can be used for everything but the Options.ResumeToken. Resume tokens are unique per service client and operation; attempting to resume a poller for LRO BeginB() with a token from LRO BeginA() will result in an error.

The fake package contains types used for constructing in-memory fake servers used in unit tests. This allows writing tests to cover various success/error conditions without the need for connecting to a live service. Please see https://github.com/gracewilcox/azure-sdk-for-go/tree/main/sdk/samples/fakes for details and examples on how to use fakes.
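A sketch of consuming the pager and poller described above. WidgetClient is the hypothetical generated client named in the text, and the page's Widgets field is illustrative; the runtime.Pager and runtime.Poller calls are real azcore APIs:

    package example

    import "context"

    // WidgetClient stands in for a hypothetical generated client.
    func listAndCreate(ctx context.Context, client *WidgetClient) error {
        pager := client.NewListWidgetsPager(nil)
        for pager.More() {
            page, err := pager.NextPage(ctx) // first IO happens here
            if err != nil {
                return err
            }
            for _, w := range page.Widgets { // illustrative field name
                _ = w
            }
        }

        poller, err := client.BeginCreateOrUpdate(ctx, "widget1", nil)
        if err != nil {
            return err // a nil error only means the LRO has started
        }
        // Block until a terminal state is reached or ctx is canceled.
        _, err = poller.PollUntilDone(ctx, nil)
        return err
    }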
Package elasticsearch provides a Go client for Elasticsearch.

Create the client with the NewDefaultClient function. The ELASTICSEARCH_URL environment variable is used instead of the default URL, when set; use a comma to separate multiple URLs.

To configure the client, pass a Config object to the NewClient function. When using the Elastic Service (https://elastic.co/cloud), you can use CloudID instead of Addresses. When either Addresses or CloudID is set, the ELASTICSEARCH_URL environment variable is ignored.

See the elasticsearch_integration_test.go file and the _examples folder for more information.

Call the Elasticsearch APIs by invoking the corresponding methods on the client. See the github.com/corneliusdavid97/go-elasticsearch/esapi package for more information about using the API, and the github.com/elastic/elastic-transport-go package for more information about configuring the transport.
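A short sketch of both construction paths and a simple API call, assuming the fork's import path mirrors upstream go-elasticsearch:

    package main

    import (
        "log"

        elasticsearch "github.com/corneliusdavid97/go-elasticsearch"
    )

    func main() {
        // Honors ELASTICSEARCH_URL (comma-separated) when set.
        es, err := elasticsearch.NewDefaultClient()
        if err != nil {
            log.Fatalf("Error creating the client: %s", err)
        }

        // Explicit configuration; ELASTICSEARCH_URL is then ignored.
        es, err = elasticsearch.NewClient(elasticsearch.Config{
            Addresses: []string{"http://localhost:9200"},
        })
        if err != nil {
            log.Fatalf("Error creating the client: %s", err)
        }

        // Call an Elasticsearch API via the corresponding client method.
        res, err := es.Info()
        if err != nil {
            log.Fatalf("Error getting response: %s", err)
        }
        defer res.Body.Close()
        log.Println(res)
    }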
Package dcrjson provides infrastructure for working with Decred JSON-RPC APIs. When communicating via the JSON-RPC protocol, all requests and responses must be marshalled to and from the wire in the appropriate format; this package provides infrastructure and primitives to ease that process.

This information is not necessary in order to use this package, but it does provide some intuition into what the marshalling and unmarshalling discussed below is doing under the hood. As defined by the JSON-RPC spec, there are effectively two forms of messages on the wire:

Request Objects:

    {"jsonrpc":"1.0","id":"SOMEID","method":"SOMEMETHOD","params":[SOMEPARAMS]}

NOTE: Notifications have the same format except the id field is null.

Response Objects:

    {"result":SOMETHING,"error":null,"id":"SOMEID"}
    {"result":null,"error":{"code":SOMEINT,"message":SOMESTRING},"id":"SOMEID"}

For requests, the params field can vary in what it contains depending on the method (a.k.a. command) being sent. Each parameter can be as simple as an int or a complex structure containing many nested fields. The id field is used to identify a request and will be included in the associated response.

When working with streamed RPC transports, such as websockets, spontaneous notifications are also possible. As indicated, they are the same as a request object except they have the id field set to null. Therefore, servers will ignore requests with the id field set to null, while clients can choose to consume or ignore them.

Unfortunately, the original Bitcoin JSON-RPC API (and hence anything compatible with it) doesn't always follow the spec and will sometimes return an error string in the result field with a null error for certain commands. However, for the most part, the error field will be set as described on failure.

To simplify the marshalling of requests and responses, the MarshalCmd and MarshalResponse functions are provided. They return the raw bytes ready to be sent across the wire.

Unmarshalling a received Request object is a two-step process: first unmarshal the raw bytes into a Request struct via json.Unmarshal, then use ParseParams on the method and params fields to create a concrete command or notification instance. This approach is used since it provides the caller with access to the additional fields in the request that are not part of the command, such as the ID.

Unmarshalling a received Response object is also a two-step process: first unmarshal the raw bytes into a Response struct, then unmarshal the result field into a concrete type. As above, this approach is used since it provides the caller with access to the fields in the response, such as the ID and Error.

This package provides the NewCmd function, which takes a method (command) name and variable arguments. The function includes full checking to ensure the parameters are accurate according to the provided method; however, these checks are performed at run time, so mistakes won't be found until the code is actually executed. It is nevertheless quite useful for user-supplied commands that are intentionally dynamic. External packages can and should provide types implementing Command for use with MarshalCmd/ParseParams.

The command handling of this package is built around the concept of registered commands. This is true for the wide variety of commands already provided by the package, but it also means callers can easily provide custom commands with all of the same functionality as the built-in commands. Use the RegisterCmd function for this purpose. A list of all registered methods can be obtained with the RegisteredCmdMethods function. All registered commands are registered with flags that identify information such as whether the command applies to a chain server, wallet server, or is a notification, along with the method name to use.
These flags can be obtained with the MethodUsageFlags function, and the method can be obtained with the CmdMethod function.

To facilitate providing consistent help to users of the RPC server, this package exposes the GenerateHelp function, which uses reflection on registered commands or notifications to generate the final help text. In addition, the MethodUsageText function is provided to generate consistent one-line usage for registered commands and notifications using reflection.

There are two distinct types of errors supported by this package: Error and RPCError. The first category (type Error) typically indicates a programmer error and can be avoided by properly using the API. Errors of this type will be returned from the various functions available in this package. They identify issues such as unsupported field types, attempts to register malformed commands, and attempts to create a new command with an improper number of parameters. The specific reason for the error can be detected by type asserting it to a *dcrjson.Error and accessing the ErrorKind field.

The second category (type RPCError), on the other hand, is useful for returning errors to RPC clients. Consequently, these are used in the previously described Response type.

The example below demonstrates how to unmarshal a JSON-RPC response and then unmarshal the result field in the response to a concrete type.
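A minimal sketch, assuming the current major module path; only the RPCError type is taken from the package itself:

    package main

    import (
        "encoding/json"
        "fmt"
        "log"

        "github.com/decred/dcrd/dcrjson/v4" // assumed module path
    )

    func main() {
        // A raw response as it might arrive over the wire, e.g. for getblockcount.
        raw := []byte(`{"result":581687,"error":null,"id":1}`)

        // Step 1: unmarshal the envelope, deferring the result field.
        var resp struct {
            Result json.RawMessage   `json:"result"`
            Error  *dcrjson.RPCError `json:"error"`
            ID     interface{}       `json:"id"`
        }
        if err := json.Unmarshal(raw, &resp); err != nil {
            log.Fatal(err)
        }
        if resp.Error != nil {
            log.Fatalf("rpc error %d: %s", resp.Error.Code, resp.Error.Message)
        }

        // Step 2: unmarshal the result field into the concrete type.
        var blockCount int64
        if err := json.Unmarshal(resp.Result, &blockCount); err != nil {
            log.Fatal(err)
        }
        fmt.Println("block count:", blockCount)
    }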
Catalyst started out as a microservice base that can be used to create REST APIs. It contains many essential parts that you would need for a microservice, such as:

- Configurability
- A basic dependency injection mechanism
- Request-response cycle handling
- Structure and field validations
- Error handling
- Logging
- Database resource management
- Application metrics

Written using the Clean Architecture paradigm, it offers clean separation between business (domain) logic and facilitation logic.

In the context of `Catalyst` we use a concept called `Transport mediums` to define ways in which you can communicate with the microservice. A package inside the `transport` directory consists of all the logic needed to handle communication with the outside world using one type of transport medium. Out of the box, Catalyst contains two such transport mediums:

- http (to handle REST web requests)
- metrics (to expose application metrics)

What makes Catalyst a REST API is the `http` package, which handles the complete lifecycle of REST web requests. Likewise, the `metrics` transport medium exposes an endpoint to let `Prometheus` scrape application metrics.

You can add other transport mediums to a project based on Catalyst. For example, a `stream` package can be added to communicate with a streaming platform like `Kafka`, or an `mqtt` package to communicate with `IoT` devices, as the sketch below illustrates.
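A hypothetical sketch of what such a `stream` transport medium might look like; none of these names exist in Catalyst itself:

    // transport/stream/stream.go (hypothetical)
    package stream

    import "context"

    // Consumer abstracts the streaming platform, e.g. a Kafka consumer group.
    type Consumer interface {
        Messages(ctx context.Context) (<-chan []byte, error)
    }

    // Transport wires incoming stream messages to domain-layer handlers,
    // mirroring how the http package dispatches REST requests.
    type Transport struct {
        consumer Consumer
        handle   func(ctx context.Context, msg []byte) error
    }

    func NewTransport(c Consumer, h func(context.Context, []byte) error) *Transport {
        return &Transport{consumer: c, handle: h}
    }

    // Run consumes messages until the context is canceled.
    func (t *Transport) Run(ctx context.Context) error {
        msgs, err := t.consumer.Messages(ctx)
        if err != nil {
            return err
        }
        for {
            select {
            case <-ctx.Done():
                return ctx.Err()
            case m := <-msgs:
                if err := t.handle(ctx, m); err != nil {
                    // report via Catalyst's error handling and logging
                }
            }
        }
    }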
Package apns provides an Apple Push Notification service provider.

Apple Push Notification service (APNs) is the centerpiece of the remote notifications feature. It is a robust and highly efficient service for propagating information to iOS (and, indirectly, watchOS), tvOS, and macOS devices. On initial activation, a device establishes an accredited and encrypted IP connection with APNs and receives notifications over this persistent connection. If a notification for an app arrives when that app is not running, the device alerts the user that the app has data waiting for it.

You provide your own server to generate remote notifications for the users of your app. This server, known as the provider, receives device tokens and relevant data from your app, determines when notifications need to be sent, and sends them to APNs. For each notification, the provider constructs a JSON payload, attaches it and the target device token to an HTTP/2 request, and sends that request to APNs. On receiving the HTTP/2 request, APNs delivers the notification payload to your app on the user's device.

Apple Push Notification service includes a default Quality of Service (QoS) component that performs a store-and-forward function. If APNs attempts to deliver a notification but the destination device is offline, APNs stores the notification for a limited period of time and delivers it to the device when the device becomes available. This mechanism stores only one recent notification per device, per app: if you send multiple notifications while a device is offline, each new notification causes the previous one to be discarded. If a device remains offline for a long time, all notifications that were being stored for it are discarded; when the device goes back online, none of them are displayed.

When a device is online, all the notifications you send are delivered and available to the user. However, you can avoid showing duplicate notifications by employing a collapse identifier across multiple, identical notifications. The APNs request header key for the collapse identifier is "apns-collapse-id". For example, a news service that sends the same headline twice in a row could employ the same collapse identifier for both push notification requests; APNs would then coalesce the two requests into a single notification for delivery to a device.

To ensure secure communication, APNs servers employ connection certificates, certification authority (CA) certificates, and cryptographic keys (private and public) to validate connections to, and identities of, providers and devices. APNs regulates the entry points between providers and devices using two levels of trust: connection trust and device token trust.

Connection trust establishes certainty that APNs is connected to an authorized provider, owned by a company that Apple has agreed to deliver notifications for. You must take steps to ensure connection trust exists between your provider servers and APNs. APNs also uses connection trust with each device to ensure the legitimacy of the device; connection trust with the device is handled automatically by APNs.

Device token trust ensures that notifications are routed only between legitimate start and end points. A device token is an opaque, unique identifier assigned to a specific app on a specific device. Each app instance receives its unique token when it registers with APNs. The app must share this token with its provider, to allow the provider to employ the token when communicating with APNs. Each notification that your provider sends to APNs must include the device token, which ensures that the notification is delivered only to the app-device combination for which it is intended.
Important: To protect user privacy, do not attempt to use a device token to identify a device. Device tokens can change after updating the operating system, and always change when a device's data and settings are erased. Whenever the system delivers a device token to an instance of your app, the app must forward it to your provider servers to allow further push notifications to the device.

A provider using the HTTP/2-based APNs Provider API can use JSON web tokens (JWT) to validate the provider's connection with APNs. In this scheme, the provider does not require a certificate-plus-private-key to establish a connection. Instead, you provision a public key to be retained by Apple, and a private key which you retain and protect. Your providers then use your private key to generate and sign JWT authentication tokens. Each of your push requests must include an authentication token.

Important: To establish TLS sessions with APNs, you must ensure that a GeoTrust Global CA root certificate is installed on each of your providers. You can download this certificate from the GeoTrust Root Certificates website: https://www.geotrust.com/resources/root-certificates/.

The HTTP/2-based provider connection is valid for delivery to one specific app, identified by the topic (the app bundle ID) specified in the certificate. Depending on how you configure and provision your APNs Transport Layer Security (TLS) certificate, the trusted connection can also be valid for delivery of remote notifications to other items associated with your app, including Apple Watch complications and voice-over-Internet-Protocol (VoIP) services. APNs delivers these notifications even when those items are running in the background.

APNs maintains a certificate revocation list; if a provider's certificate is on the revocation list, APNs can revoke provider trust (that is, APNs can refuse the TLS initiation connection).
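A minimal sketch of a provider request over the HTTP/2 API, with a JWT bearer token and a collapse identifier. The token, device token, and bundle ID are placeholders; Go's default HTTPS transport negotiates HTTP/2 automatically:

    package main

    import (
        "bytes"
        "log"
        "net/http"
    )

    func main() {
        deviceToken := "<device-token>"             // forwarded by your app
        jwt := "<jwt-signed-with-your-private-key>" // provider auth token

        payload := []byte(`{"aps":{"alert":"Breaking: headline"}}`)
        url := "https://api.push.apple.com/3/device/" + deviceToken

        req, err := http.NewRequest(http.MethodPost, url, bytes.NewReader(payload))
        if err != nil {
            log.Fatal(err)
        }
        req.Header.Set("authorization", "bearer "+jwt)
        req.Header.Set("apns-topic", "com.example.app") // the app bundle ID
        // Identical notifications carrying the same collapse ID are coalesced.
        req.Header.Set("apns-collapse-id", "headline-breaking")

        resp, err := http.DefaultClient.Do(req)
        if err != nil {
            log.Fatal(err)
        }
        defer resp.Body.Close()
        log.Println("status:", resp.StatusCode)
    }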
Package yext provides bindings for Yext Location Cloud APIs. For full documentation visit http://developer.yext.com/docs/api-reference/

Creating an authenticated client requires an API key. Typical operations include listing all locations, fetching a single location, creating a new location (see the full documentation for required fields), and editing an existing location; a sketch follows below.

The behavior of the API client can be controlled with a Config instance. The Config type exposes chainable utility methods to make construction simpler. By default, clients will retry API requests up to 3 times in the case of non-4xx errors, including HTTP transport failures, 5xx responses, etc. This can be modified via Config.

In order to support partial object updates, many of the struct attributes are represented as pointers in order to differentiate between "not present" and "zero-valued". Helpers are provided to make it easier to work with the pointers, and accessors are provided to make extracting data from the model objects simpler.

Errors returned from the API are surfaced as Errors objects. The Errors object is composed of a list of errors, each with a Message, Code, and Type. A full list of expected errors can be found at http://developer.yext.com/support/error-messages/

Most functions that interact with the API return at least two parameters: a yext.Response object and an error. Response contains a Meta substructure that in turn has a UUID and an Errors attribute. The UUID can be used to look up individual requests in the developer.yext.com portal, which is useful for debugging requests.

Most of the functionality within the API is exposed via domain-specific services available under the `Client` object. For example, if you are interacting with Locations, use the Client instance's LocationService; if you need to interact with Users, use the UserService. Each service provides a set of common data-access functions that you can use to interact with objects under the service's domain. Where appropriate, services expose additional, domain-specific functionality.
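A hedged sketch of typical usage; beyond Client, Config, and LocationService (named above), the module path, constructor, and method names here are assumptions about the package's API:

    package main

    import (
        "fmt"
        "log"

        yext "github.com/yext/yext-go" // assumed module path
    )

    func main() {
        // Chainable Config construction with an API key.
        client := yext.NewClient(yext.NewConfig().WithApiKey("<API_KEY>"))

        // Domain-specific services hang off the Client.
        locations, _, err := client.LocationService.List(nil) // assumed signature
        if err != nil {
            log.Fatal(err)
        }
        for _, loc := range locations {
            // Accessors dereference pointer-typed attributes safely.
            fmt.Println(loc.GetName())
        }
    }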