Package micro is a pluggable framework for microservices
Package ngrok makes it easy to work with the ngrok API from Go. The package is fully code generated and should always be up to date with the latest ngrok API. Full documentation of the ngrok API can be found at https://ngrok.com/docs/api

This package follows the best practices outlined for Go modules. All releases are tagged and any breaking changes will be reflected as a new major version. You should only import this package for production applications by pointing at a stable tagged version. A sketch demonstrating typical initialization and usage of the package to make an API call appears below.

API client configuration and all of the datatypes exchanged by the API are defined in this base package. There are subpackages for every API service and a Client type defined in those packages with methods to interact with that API service. It's usually easiest to find the subpackage of the service you want to work with and begin consulting the documentation there. It is recommended to construct the service-specific clients once at initialization time.

The ClientConfig object in the root package supports functional options for configuration. The most common option is `WithHTTPClient()`, which allows the caller to specify a different net/http.Client object. This allows the caller full customization over the transport if needed for use with proxies, custom TLS setups, observability and tracing, etc.

Some arguments to methods in the ngrok API are optional and must be meaningfully distinguished from zero values, especially in Update() methods. This allows the API to distinguish between choosing not to update a value vs. setting it to zero or the empty string. For these arguments, ngrok follows the industry standard practice of using pointers to the primitive types and providing convenience functions like ngrok.String() and ngrok.Bool() for the caller to wrap literals as pointer values.

All List methods in the ngrok API are paged. This package abstracts that problem away from you by returning an iterator from any List API call. As you advance the iterator it will transparently fetch new pages of values for you behind the scenes. Note that the context supplied to the initial List() call will be used for all subsequent page fetches, so it must be long enough to work through the entire list. When paging through, for example, all of the TLS certificates on your account, you must check for an error after Next() returns false to determine whether the iterator failed to fetch the next page of results.

All errors returned by the ngrok API are returned as structured payloads for easy error handling. Most non-networking errors returned by API calls in this package will be an ngrok.Error type. The ngrok.Error type exposes important metadata that will help you handle errors. Specifically, it includes the HTTP status code of any failed operation as well as an error code value that uniquely identifies the failure condition. There are two helper functions that will make error handling easy: IsNotFound and IsErrorCode. IsNotFound helps identify the common case of accessing an API resource that no longer exists.
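The following is a minimal sketch of client initialization and an IsNotFound check. The module path version, the tls_certificates subpackage, and the placeholder API key and certificate ID are assumptions for illustration; consult the package documentation for the exact API surface.

    package main

    import (
        "context"
        "fmt"

        "github.com/ngrok/ngrok-api-go/v5"                  // assumed module path/version
        "github.com/ngrok/ngrok-api-go/v5/tls_certificates" // assumed service subpackage
    )

    func main() {
        // construct the root configuration once at initialization time
        clientConfig := ngrok.NewClientConfig("<API_KEY>")

        // build a service-specific client from the shared configuration
        certs := tls_certificates.NewClient(clientConfig)

        // fetch a certificate that may have been deleted
        cert, err := certs.Get(context.Background(), "<CERT_ID>")
        if ngrok.IsNotFound(err) {
            fmt.Println("certificate no longer exists")
            return
        }
        if err != nil {
            panic(err)
        }
        fmt.Println(cert.ID)
    }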
IsErrorCode helps you identify specific ngrok errors by their unique ngrok error code. All ngrok error codes are documented at https://ngrok.com/docs/errors To check for a specific error condition, you would structure your code as in the sketch below.

All ngrok datatypes in this package define String() and GoString() methods so that they can be formatted into strings in helpful representations. The GoString() method is defined to pretty-print an object for debugging purposes with the "%#v" formatting verb.
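Here is a hedged sketch of such an error check, assuming IsErrorCode accepts the numeric portion of an ERR_NGROK_* error code; the code value 212 is purely a placeholder for illustration.

    import (
        "fmt"

        "github.com/ngrok/ngrok-api-go/v5" // assumed module path/version
    )

    // checkErr distinguishes a specific ngrok error code from other failures.
    func checkErr(err error) {
        // 212 is a placeholder for the numeric portion of an ERR_NGROK_* code
        if ngrok.IsErrorCode(err, 212) {
            fmt.Println("got ERR_NGROK_212:", err)
            return
        }
        if err != nil {
            // GoString() lets "%#v" pretty-print ngrok datatypes for debugging
            fmt.Printf("unexpected error: %#v\n", err)
        }
    }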
Package elasticsearch provides a Go client for Elasticsearch.

Create the client with the NewDefaultClient function. The ELASTICSEARCH_URL environment variable is used instead of the default URL, when set. Use a comma to separate multiple URLs.

To configure the client, pass a Config object to the NewClient function. When using the Elastic Service (https://elastic.co/cloud), you can use CloudID instead of Addresses. When either Addresses or CloudID is set, the ELASTICSEARCH_URL environment variable is ignored.

See the elasticsearch_integration_test.go file and the _examples folder for more information.

Call the Elasticsearch APIs by invoking the corresponding methods on the client. See the github.com/corneliusdavid97/go-elasticsearch/esapi package for more information about using the API. See the github.com/elastic/elastic-transport-go package for more information about configuring the transport.
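To ground the above, here is a minimal sketch of configuring a client and making a call. It assumes this fork keeps the upstream go-elasticsearch API surface (NewClient, Config, and the Info API); the address is a placeholder.

    package main

    import (
        "log"

        "github.com/corneliusdavid97/go-elasticsearch" // assumed import path for this fork
    )

    func main() {
        // explicit configuration; with no Config, NewDefaultClient applies
        es, err := elasticsearch.NewClient(elasticsearch.Config{
            Addresses: []string{"http://localhost:9200"}, // placeholder address
        })
        if err != nil {
            log.Fatalf("error creating the client: %s", err)
        }

        // call an Elasticsearch API via the corresponding client method
        res, err := es.Info()
        if err != nil {
            log.Fatalf("error getting response: %s", err)
        }
        defer res.Body.Close()
        log.Println(res)
    }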
Package azcore implements an HTTP request/response middleware pipeline used by Azure SDK clients. The middleware consists of three components: the Pipeline, one or more Policy instances, and a Transporter.

A Policy can be implemented in two ways: as a first-class function for a stateless Policy, or as a method on a type for a stateful Policy. Note that HTTP requests made via the same pipeline share the same Policy instances, so if a Policy mutates its state it MUST be properly synchronized to avoid race conditions.

A Policy's Do method is called when an HTTP request wants to be sent over the network. The Do method can perform any operation(s) it desires. For example, it can log the outgoing request, mutate the URL, headers, and/or query parameters, inject a failure, etc. Once the Policy has successfully completed its request work, it must call the Next() method on the *policy.Request instance in order to pass the request to the next Policy in the chain.

When an HTTP response comes back, the Policy then gets a chance to process the response/error. The Policy instance can log the response, retry the operation if it failed due to a transient error or timeout, unmarshal the response body, etc. Once the Policy has successfully completed its response work, it must return the *http.Response and error instances to its caller. Templates for stateless and stateful Policies both follow this same Do/Next/return pattern.
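The following is a rough sketch of the stateless form, not the verbatim template from the original docs; NewMyStatelessPolicy, the policyFunc adapter, and the logging behavior are illustrative placeholders.

    import (
        "log"
        "net/http"

        "github.com/Azure/azure-sdk-for-go/sdk/azcore/policy"
    )

    // policyFunc adapts an ordinary function to the policy.Policy interface.
    type policyFunc func(*policy.Request) (*http.Response, error)

    // Do implements the Policy interface on policyFunc.
    func (pf policyFunc) Do(req *policy.Request) (*http.Response, error) {
        return pf(req)
    }

    // NewMyStatelessPolicy is a hypothetical constructor for a stateless Policy.
    func NewMyStatelessPolicy() policy.Policy {
        return policyFunc(func(req *policy.Request) (*http.Response, error) {
            // process the outgoing request here, e.g. log it
            log.Printf("sending %s %s", req.Raw().Method, req.Raw().URL)

            // forward the request to the next Policy/Transporter in the chain
            resp, err := req.Next()

            // process the response/error here before returning it to the caller
            return resp, err
        })
    }

A stateful Policy keeps its state on a receiver type and implements Do as a method on that type, with the required synchronization if the state is mutated.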
The Transporter interface is responsible for sending the HTTP request and returning the corresponding HTTP response or error. The Transporter is invoked by the last Policy in the chain. The default Transporter implementation uses a shared http.Client from the standard library. The same stateful/stateless rules for Policy implementations apply to Transporter implementations.

To use the Policy and Transporter instances, an application passes them to the runtime.NewPipeline function. The specified Policy instances form a chain and are invoked in the order provided to NewPipeline, followed by the Transporter. Once the Pipeline has been created, create a runtime.Request instance and pass it to Pipeline's Do method. The Pipeline.Do method sends the specified Request through the chain of Policy and Transporter instances. The response/error is then sent through the same chain of Policy instances in reverse order. For example, assuming there are Policy types PolicyA, PolicyB, and PolicyC along with TransportA, the flow of Request and Response looks like the following:

    Request:  PolicyA -> PolicyB -> PolicyC -> TransportA
    Response: PolicyA <- PolicyB <- PolicyC <- TransportA

The Request instance passed to Pipeline's Do method is a wrapper around an *http.Request. It also contains some internal state and provides various convenience methods. You create a Request instance by calling the runtime.NewRequest function. If the Request should contain a body, call the SetBody method. A seekable stream is required so that upon retry, the retry Policy instance can seek the stream back to the beginning before retrying the network request and re-uploading the body.

Operations like JSON-MERGE-PATCH send a JSON null to indicate a value should be deleted. This requirement conflicts with the SDK's default marshalling, which specifies "omitempty" as a means to resolve the ambiguity between a field to be excluded and its zero-value. Consider a Widget type whose Name and Count fields are defined as pointer-to-type to disambiguate between a missing value (nil) and a zero-value (0), which might have semantic differences. In a PATCH operation, any fields left as nil are to have their values preserved. When updating a Widget's count, one simply specifies the new value for Count, leaving Name nil.

To fulfill the requirement for sending a JSON null, the NullValue() function can be used. This sends an explicit "null" for Count, indicating that any current value for Count should be deleted.

When the HTTP response is received, the *http.Response is returned directly. Each Policy instance can inspect/mutate the *http.Response.

To enable logging, set environment variable AZURE_SDK_GO_LOGGING to "all" before executing your program. By default the logger writes to stderr. This can be customized by calling log.SetListener, providing a callback that writes to the desired location. Any custom logging implementation MUST provide its own synchronization to handle concurrent invocations. See the docs for the log package for further details.

Pageable operations return potentially large data sets spread over multiple GET requests. The result of each GET is a "page" of data consisting of a slice of items. Pageable operations can be identified by their New*Pager naming convention and return type of *runtime.Pager[T]. A call to a pager constructor such as WidgetClient.NewListWidgetsPager() returns an instance of *runtime.Pager[T] for fetching pages and determining if there are more pages to fetch. No IO calls are made until the NextPage() method is invoked.

Long-running operations (LROs) are operations consisting of an initial request to start the operation followed by polling to determine when the operation has reached a terminal state. An LRO's terminal state is one of Succeeded, Failed, or Canceled. LROs can be identified by their Begin* prefix and their return type of *runtime.Poller[T]. When a call such as WidgetClient.BeginCreateOrUpdate() returns a nil error, it means that the LRO has started. It does _not_ mean that the widget has been created or updated (or failed to be created/updated).

The *runtime.Poller[T] provides APIs for determining the state of the LRO. To wait for the LRO to complete, call the PollUntilDone() method. The call to PollUntilDone() will block the current goroutine until the LRO has reached a terminal state or the context is canceled/timed out. Note that LROs can take anywhere from several seconds to several minutes. The duration is operation-dependent. Due to this variant behavior, pollers do _not_ have a preconfigured time-out. Use a context with the appropriate cancellation mechanism as required.

Pollers provide the ability to serialize their state into a "resume token" which can be used by another process to recreate the poller. This is achieved via the runtime.Poller[T].ResumeToken() method. Note that a token can only be obtained for a poller that's in a non-terminal state. Also note that any subsequent calls to poller.Poll() might change the poller's state. In this case, a new token should be created.

After the token has been obtained, it can be used to recreate an instance of the originating poller. When resuming a poller, no IO is performed, and zero-value arguments can be used for everything but the Options.ResumeToken. Resume tokens are unique per service client and operation. Attempting to resume a poller for LRO BeginB() with a token from LRO BeginA() will result in an error.

The fake package contains types used for constructing in-memory fake servers used in unit tests. This allows writing tests to cover various success/error conditions without the need for connecting to a live service. Please see https://github.com/gracewilcox/azure-sdk-for-go/tree/main/sdk/samples/fakes for details and examples on how to use fakes.
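Circling back to the pageable-operations discussion above, here is a minimal sketch of draining a pager. The WidgetListResponse page shape and the WidgetClient that would produce the pager are hypothetical, following the Widget examples used throughout this section; the More()/NextPage() loop is the documented pattern.

    import (
        "context"
        "fmt"
        "log"

        "github.com/Azure/azure-sdk-for-go/sdk/azcore/runtime"
    )

    // WidgetListResponse is a hypothetical page shape returned by a pager.
    type WidgetListResponse struct {
        Widgets []string
    }

    // drainPager consumes every page from a pager, e.g. one returned by a
    // hypothetical WidgetClient.NewListWidgetsPager(nil).
    func drainPager(ctx context.Context, pager *runtime.Pager[WidgetListResponse]) {
        for pager.More() {
            page, err := pager.NextPage(ctx) // performs the IO for the next page
            if err != nil {
                log.Fatal(err)
            }
            for _, w := range page.Widgets {
                fmt.Println(w)
            }
        }
    }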
Package gochrome aims to be a complete Chrome DevTools Protocol Viewer implementation. Versioned packages are available. Currently the only version is `tot`, or Tip-of-Tree. Stable versions will be made available in the future. This is beta software and hasn't been well exercised in real-world applications. See https://chromedevtools.github.io/devtools-protocol/

The Chrome DevTools Protocol allows for tools to instrument, inspect, debug and profile Chromium, Chrome and other Blink-based browsers. Many existing projects currently use the protocol. The Chrome DevTools uses this protocol and the team maintains its API. Instrumentation is divided into a number of domains (DOM, Debugger, Network etc.). Each domain defines a number of commands it supports and events it generates. Both commands and events are serialized JSON objects of a fixed structure. You can either debug over the wire using the raw messages as they are described in the corresponding domain documentation, or use the extension JavaScript API.

The latest (tip-of-tree) protocol (tot) changes frequently and can break at any time. However, it captures the full capabilities of the Protocol, whereas the stable release is a subset. There is no backwards compatibility support guaranteed for the capabilities it introduces.

Resources

Basics: Using DevTools as protocol client

The Developer Tools front-end can attach to a remotely running Chrome instance for debugging. For this scenario to work, you should start your host Chrome instance with the --remote-debugging-port command line switch, then start a separate client Chrome instance using a distinct user profile (--user-data-dir). Now you can navigate to the given port from your client and attach to any of the discovered tabs for debugging: http://localhost:9222

You will find the Developer Tools interface identical to the embedded one, and here is why: in this scenario, you can substitute the Developer Tools front-end with your own implementation. Instead of navigating to the HTML page at http://localhost:9222, your application can discover available pages by requesting http://localhost:9222/json and getting a JSON object with information about inspectable pages along with the WebSocket addresses that you could use in order to start instrumenting them (see the sketch after this section). Remote debugging is especially useful when debugging remote instances of the browser or attaching to the embedded devices. Blink port owners are responsible for exposing debugging connections to the external users.

This is especially handy to understand how the DevTools frontend makes use of the protocol. First, run Chrome with the debugging port open. Then, select the Chromium Projects item in the Inspectable Pages list. Now that DevTools is up and fullscreen, open DevTools to inspect it. Cmd-R in the new inspector to make the first restart. Now head to the Network Panel, filter by WebSocket, select the connection and click the Frames tab. Now you can easily see the frames of WebSocket activity as you use the first instance of the DevTools.

To allow Chrome extensions to interact with the protocol, we introduced the chrome.debugger extension API that exposes this JSON message transport interface. As a result, you can not only attach to the remotely running Chrome instance, but also instrument it from its own extension. The Chrome Debugger Extension API provides a higher level API where command domain, name and body are provided explicitly in the `sendCommand` call.
This API hides request ids and handles binding of the request with its response, hence allowing `sendCommand` to report the result in the callback function call. One can also use this API in combination with the other Extension APIs.

If you are developing a Web-based IDE, you should implement an extension that exposes debugging capabilities to your page, and your IDE will be able to open pages with the target application, set breakpoints there, evaluate expressions in console, live edit JavaScript and CSS, display live DOM, network interaction and any other aspect that Developer Tools is instrumenting today. Opening embedded Developer Tools will terminate the remote connection and thus detach the extension. https://chromedevtools.github.io/devtools-protocol/#simultaneous

The canonical protocol definitions live in the Chromium source tree (browser_protocol.json and js_protocol.json). They are maintained manually by the DevTools engineering team. These files are mirrored (hourly) on GitHub in the devtools-protocol repo. The declarative protocol definitions are used across tools. Within Chromium, a binding layer is created for the Chrome DevTools to interact with, and separately the protocol is used for Chrome Headless's C++ interface.

What's the protocol_externs file? It's created via generate_protocol_externs.py and is useful for tools using the closure compiler. The TypeScript story is here.

Are the HTTP endpoints documented? Not yet. See bugger-daemon's third-party docs. See also the endpoints implementation in Chromium. /json/protocol was added in Chrome 60.

The browser-level endpoint is exposed as webSocketDebuggerUrl in /json/version. Note the browser in the URL, rather than page. If Chrome was launched with --remote-debugging-port=0 and chose an open port, the browser endpoint is written to both stderr and the DevToolsActivePort file in the browser profile folder.

Does the protocol support multiple simultaneous clients? Yes, as of Chrome 63! See Multi-client remote debugging support. Upon disconnection, the outgoing client will receive a detached event; view the enum of possible reasons (for reference: the original patch). After disconnection, some apps have chosen to pause their state and offer a reconnect button.
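As a concrete illustration of the discovery endpoint mentioned earlier, here is a minimal Go sketch that lists inspectable pages via http://localhost:9222/json. It assumes Chrome is already running with --remote-debugging-port=9222, and the struct mirrors only the response fields discussed above.

    package main

    import (
        "encoding/json"
        "fmt"
        "log"
        "net/http"
    )

    // target holds a subset of the fields returned by /json.
    type target struct {
        Title                string `json:"title"`
        URL                  string `json:"url"`
        WebSocketDebuggerURL string `json:"webSocketDebuggerUrl"`
    }

    func main() {
        resp, err := http.Get("http://localhost:9222/json")
        if err != nil {
            log.Fatal(err)
        }
        defer resp.Body.Close()

        var targets []target
        if err := json.NewDecoder(resp.Body).Decode(&targets); err != nil {
            log.Fatal(err)
        }
        for _, t := range targets {
            // the WebSocket address is used to start instrumenting the page
            fmt.Printf("%s (%s): %s\n", t.Title, t.URL, t.WebSocketDebuggerURL)
        }
    }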
Package dcrjson provides infrastructure for working with Decred JSON-RPC APIs. When communicating via the JSON-RPC protocol, all requests and responses must be marshalled to and from the wire in the appropriate format. This package provides infrastructure and primitives to ease this process. The following information is not necessary in order to use this package, but it does provide some intuition into what the marshalling and unmarshalling discussed below is doing under the hood.

As defined by the JSON-RPC spec, there are effectively two forms of messages on the wire:

Request Objects

    {"jsonrpc":"1.0","id":"SOMEID","method":"SOMEMETHOD","params":[SOMEPARAMS]}

NOTE: Notifications are the same format except the id field is null.

Response Objects

    {"result":SOMETHING,"error":null,"id":"SOMEID"}
    {"result":null,"error":{"code":SOMEINT,"message":SOMESTRING},"id":"SOMEID"}

For requests, the params field can vary in what it contains depending on the method (a.k.a. command) being sent. Each parameter can be as simple as an int or a complex structure containing many nested fields. The id field is used to identify a request and will be included in the associated response. When working with streamed RPC transports, such as websockets, spontaneous notifications are also possible. As indicated, they are the same as a request object, except they have the id field set to null. Therefore, servers will ignore requests with the id field set to null, while clients can choose to consume or ignore them.

Unfortunately, the original Bitcoin JSON-RPC API (and hence anything compatible with it) doesn't always follow the spec and will sometimes return an error string in the result field with a null error for certain commands. However, for the most part, the error field will be set as described on failure.

To simplify the marshalling of the requests and responses, the MarshalCmd and MarshalResponse functions are provided. They return the raw bytes ready to be sent across the wire.

Unmarshalling a received Request object is a two step process: first unmarshal the raw bytes into a Request struct, then parse the method and parameters into a concrete command. This approach is used since it provides the caller with access to the additional fields in the request that are not part of the command, such as the ID.

Unmarshalling a received Response object is also a two step process: first unmarshal the raw bytes into a Response struct, then unmarshal the result field into a concrete type. As above, this approach is used since it provides the caller with access to the fields in the response, such as the ID and Error.

This package provides the NewCmd function, which takes a method (command) name and variable arguments. The function includes full checking to ensure the parameters are accurate according to the provided method; however, these checks are, obviously, run-time, which means any mistakes won't be found until the code is actually executed. Nevertheless, it is quite useful for user-supplied commands that are intentionally dynamic. External packages can and should implement types implementing Command for use with MarshalCmd/ParseParams.

The command handling of this package is built around the concept of registered commands. This is true for the wide variety of commands already provided by the package, but it also means callers can easily provide custom commands with all of the same functionality as the built-in commands. Use the RegisterCmd function for this purpose. A list of all registered methods can be obtained with the RegisteredCmdMethods function. All registered commands are registered with flags that identify information such as whether the command applies to a chain server, wallet server, or is a notification, along with the method name to use.
These flags can be obtained with the MethodUsageFlags function, and the method can be obtained with the CmdMethod function.

To facilitate providing consistent help to users of the RPC server, this package exposes the GenerateHelp function, which uses reflection on registered commands or notifications to generate the final help text. In addition, the MethodUsageText function is provided to generate consistent one-line usage for registered commands and notifications using reflection.

There are two distinct types of errors supported by this package: general errors related to improper usage of the API (type Error), and RPC errors intended to be returned to clients (type RPCError).

The first category of errors (type Error) typically indicates a programmer error and can be avoided by properly using the API. Errors of this type will be returned from the various functions available in this package. They identify issues such as unsupported field types, attempts to register malformed commands, and attempting to create a new command with an improper number of parameters. The specific reason for the error can be detected by type asserting it to a *dcrjson.Error and accessing the ErrorKind field.

The second category of errors (type RPCError), on the other hand, are useful for returning errors to RPC clients. Consequently, they are used in the previously described Response type.

The example below demonstrates how to unmarshal a JSON-RPC response and then unmarshal the result field in the response to a concrete type.
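A minimal sketch of that two-step unmarshal follows. To stay self-contained it mirrors the wire format with a local anonymous struct rather than naming this package's actual Response type, and the getbestblockhash-style string result is an illustrative choice.

    import (
        "encoding/json"
        "fmt"
        "log"
    )

    func unmarshalResponse() {
        // raw bytes as received from the wire, e.g. for a command whose
        // result is a block hash string
        raw := []byte(`{"result":"000000000000000012345","error":null,"id":1}`)

        // step 1: unmarshal the outer response envelope
        var resp struct {
            Result json.RawMessage `json:"result"`
            Error  *struct {
                Code    int    `json:"code"`
                Message string `json:"message"`
            } `json:"error"`
            ID interface{} `json:"id"`
        }
        if err := json.Unmarshal(raw, &resp); err != nil {
            log.Fatal(err)
        }
        if resp.Error != nil {
            log.Fatalf("rpc error %d: %s", resp.Error.Code, resp.Error.Message)
        }

        // step 2: unmarshal the result field into the expected concrete type
        var hash string
        if err := json.Unmarshal(resp.Result, &hash); err != nil {
            log.Fatal(err)
        }
        fmt.Println("best block hash:", hash)
    }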
Catalyst started out as a microservice base that can be used to create REST APIs. It contains many essential parts that you would need for a microservice, such as:

- Configurability
- A basic dependency injection mechanism
- Request-response cycle handling
- Structure and field validations
- Error handling
- Logging
- Database resource management
- Application metrics

Written using the Clean Architecture paradigm, it offers clean separation between business (domain) logic and facilitation logic.

In the context of `Catalyst` we use a concept called `Transport mediums` to define ways in which you can communicate with the microservice. A package inside the `transport` directory consists of all the logic needed to handle communication with the outside world using one type of transport medium. Out of the box, Catalyst contains two such transport mediums:

- http (to handle REST web requests)
- metrics (to expose application metrics)

What makes Catalyst a REST API is the `http` package, which handles the complete lifecycle of REST web requests. Likewise, the `metrics` transport medium exposes an endpoint to let `Prometheus` scrape application metrics. You can add other transport mediums to extend a project based on Catalyst. For example, a `stream` package could be added to communicate with a streaming platform like `Kafka`, or an `mqtt` package could be added to communicate with `IoT` devices (a hypothetical shape for such a medium is sketched below).
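Purely as an illustration of the pluggable-transport idea, and not Catalyst's actual contract, a transport medium could be modeled as an interface like the following:

    import "context"

    // Transport is a hypothetical contract that a transport medium package
    // (http, metrics, stream, mqtt, ...) might satisfy so the core can
    // start and stop all mediums uniformly. Catalyst's real interface,
    // if it defines one, may differ.
    type Transport interface {
        // Run begins serving the medium (e.g. listening for REST web
        // requests) and blocks until the context is canceled.
        Run(ctx context.Context) error
    }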