Package rollbar is a Golang Rollbar client that makes it easy to report errors to Rollbar with full stacktraces. Basic Usage This package is designed to be used via the functions exposed at the root of the `rollbar` package. These work by managing a single instance of the `Client` type that is configurable via the setter functions at the root of the package. If you wish for more fine-grained control over the client, or you wish to have multiple independent clients, then you can create and manage your own instances of the `Client` type. We provide two implementations of the `Transport` interface, `AsyncTransport` and `SyncTransport`. These manage the communication with the network layer. The Async version uses a buffered channel to communicate with the Rollbar API in a separate goroutine. The Sync version is fully synchronous. It is possible to create your own `Transport` and configure a Client to use your preferred implementation. Go does not provide a mechanism for handling all panics automatically, therefore we provide two functions, `Wrap` and `WrapAndWait`, to make working with panics easier. They both take a function and then report to Rollbar if that function panics. They use the recover mechanism to capture the panic, and therefore if you wish your process to have the normal behaviour on panic (i.e. to crash), you will need to re-panic the result of calling `Wrap`. For example, see the sketch below. The above pattern of calling `Wrap(...)` and then `Wait(...)` can be combined via `WrapAndWait(...)`. When `WrapAndWait(...)` returns, if there was a panic it has already been sent to the Rollbar API. The error is still returned by this function if there is one. Due to the nature of the `error` type in Go, it can be difficult to attribute errors to their original origin without doing some extra work. To account for this, we define the interface `CauseStacker`: One can implement this interface for custom error types to be able to build up a chain of stack traces. In order to get the correct stacks, callers must call BuildStack on their own at the time that the cause is wrapped. This is the least intrusive mechanism for gathering this information, due to the decisions made by the Go runtime to not track this information.
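As a minimal sketch of the Wrap-then-re-panic pattern described above (assuming a current rollbar-go API; doSomethingRisky and the token are hypothetical placeholders):

    package main

    import (
        "github.com/rollbar/rollbar-go"
    )

    func doSomethingRisky() {
        panic("boom") // hypothetical work that may panic
    }

    func main() {
        rollbar.SetToken("MY_ACCESS_TOKEN") // placeholder token

        // Wrap recovers a panic in the given function, reports it to
        // Rollbar, and returns the recovered value (or nil).
        panicValue := rollbar.Wrap(doSomethingRisky)
        if panicValue != nil {
            rollbar.Wait()    // flush queued items before crashing
            panic(panicValue) // re-panic to keep the normal crash-on-panic behaviour
        }
    }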
Package paymentcryptography provides the API client, operations, and parameter types for Payment Cryptography Control Plane. Amazon Web Services Payment Cryptography Control Plane APIs manage encryption keys for use during payment-related cryptographic operations. You can create, import, export, share, manage, and delete keys. You can also manage Identity and Access Management (IAM) policies for keys. For more information, see Identity and access management in the Amazon Web Services Payment Cryptography User Guide. To use encryption keys for payment-related transaction processing and associated cryptographic operations, you use the Amazon Web Services Payment Cryptography Data Plane. You can perform actions like encrypt, decrypt, generate, and verify payment-related data. All Amazon Web Services Payment Cryptography API calls must be signed and transmitted using Transport Layer Security (TLS). We recommend you always use the latest supported TLS version for logging API requests. Amazon Web Services Payment Cryptography supports CloudTrail for control plane operations, a service that logs Amazon Web Services API calls and related events for your Amazon Web Services account and delivers them to an Amazon S3 bucket you specify. By using the information collected by CloudTrail, you can determine what requests were made to Amazon Web Services Payment Cryptography, who made the request, when it was made, and so on. If you don't configure a trail, you can still view the most recent events in the CloudTrail console. For more information, see the CloudTrail User Guide.
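A minimal, hedged sketch of constructing a control-plane client with the AWS SDK for Go v2 and calling ListKeys (assumes default credentials and region are configured in the environment; output field names follow the SDK's usual conventions):

    package main

    import (
        "context"
        "log"

        "github.com/aws/aws-sdk-go-v2/config"
        "github.com/aws/aws-sdk-go-v2/service/paymentcryptography"
    )

    func main() {
        ctx := context.Background()

        // Load credentials and region from the default sources.
        cfg, err := config.LoadDefaultConfig(ctx)
        if err != nil {
            log.Fatal(err)
        }

        client := paymentcryptography.NewFromConfig(cfg)

        // List the keys in the account.
        out, err := client.ListKeys(ctx, &paymentcryptography.ListKeysInput{})
        if err != nil {
            log.Fatal(err)
        }
        log.Printf("found %d keys", len(out.Keys))
    }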
Package ngrok makes it easy to work with the ngrok API from Go. The package is fully code generated and should always be up to date with the latest ngrok API. Full documentation of the ngrok API can be found at: https://ngrok.com/docs/api This package follows the best practices outlined for Go modules. All releases are tagged and any breaking changes will be reflected as a new major version. You should only import this package for production applications by pointing at a stable tagged version. The following example code demonstrates typical initialization and usage of the package to make an API call: API client configuration and all of the datatypes exchanged by the API are defined in this base package. There are subpackages for every API service and a Client type defined in those packages with methods to interact with that API service. It's usually easiest to find the subpackage of the service you want to work with and begin consulting the documentation there. It is recommended to construct the service-specific clients once at initialization time. The ClientConfig object in the root package supports functional options for configuration. The most common option to use is `WithHTTPClient()` which allows the caller to specify a different net/http.Client object. This allows the caller full customization over the transport if needed for use with proxies, custom TLS setups, observability and tracing, etc. Some arguments to methods in the ngrok API are optional and must be meaningfully distinguished from zero values, especially in Update() methods. This allows the API to distinguish between choosing not to update a value vs. setting it to zero or the empty string. For these arguments, ngrok follows the industry standard practice of using pointers to the primitive types and providing convenience functions like ngrok.String() and ngrok.Bool() for the caller to wrap literals as pointer values. For example: All List methods in the ngrok API are paged. This package abstracts that problem away from you by returning an iterator from any List API call. As you advance the iterator it will transparently fetch new pages of values for you behind the scenes. Note that the context supplied to the initial List() call will be used for all subsequent page fetches so it must be long enough to work through the entire list. Here's an example of paging through all of the TLS certificates on your account. Note that you must check for an error after Next() returns false to determine if the iterator failed to fetch the next page of results. All errors returned by the ngrok API are returned as structured payloads for easy error handling. Most non-networking errors returned by API calls in this package will be an ngrok.Error type. The ngrok.Error type exposes important metadata that will help you handle errors. Specifically it includes the HTTP status code of any failed operation as well as an error code value that uniquely identifies the failure condition. There are two helper functions that will make error handling easy: IsNotFound and IsErrorCode. IsNotFound helps identify the common case of accessing an API resource that no longer exists: IsErrorCode helps you identify specific ngrok errors by their unique ngrok error code.
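A hedged sketch of the IsNotFound pattern (the v5 import path and the reserved_domains service subpackage are assumptions chosen for illustration; any service client follows the same shape):

    package example

    import (
        "context"
        "log"

        "github.com/ngrok/ngrok-api-go/v5"
        "github.com/ngrok/ngrok-api-go/v5/reserved_domains"
    )

    func getDomain(ctx context.Context, client *reserved_domains.Client, id string) {
        domain, err := client.Get(ctx, id)
        if ngrok.IsNotFound(err) {
            // The resource no longer exists; treat as a non-fatal condition.
            log.Printf("domain %s no longer exists", id)
            return
        }
        if err != nil {
            log.Fatal(err)
        }
        log.Println(domain)
    }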
All ngrok error codes are documented at https://ngrok.com/docs/errors. To check for a specific error condition, you would structure your code like the example sketched at the end of this description. All ngrok datatypes in this package define String() and GoString() methods so that they can be formatted into strings in helpful representations. The GoString() method is defined to pretty-print an object for debugging purposes with the "%#v" formatting verb.
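A hedged sketch of checking a specific failure condition with IsErrorCode (the error code string is a placeholder, not a real code; real codes are listed at https://ngrok.com/docs/errors):

    package example

    import (
        "log"

        "github.com/ngrok/ngrok-api-go/v5"
    )

    func handle(err error) {
        // "ERR_NGROK_123" is a placeholder error code.
        if ngrok.IsErrorCode(err, "ERR_NGROK_123") {
            log.Println("hit the specific failure condition")
            return
        }
        if err != nil {
            log.Fatal(err)
        }
    }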
Package gochrome aims to be a complete Chrome DevTools Protocol Viewer implementation. Versioned packages are available. Currently the only version is `tot` or Tip-of-Tree. Stable versions will be made available in the future. This is beta software and hasn't been well exercised in real-world applications. See https://chromedevtools.github.io/devtools-protocol/ The Chrome DevTools Protocol allows for tools to instrument, inspect, debug and profile Chromium, Chrome and other Blink-based browsers. Many existing projects currently use the protocol. The Chrome DevTools uses this protocol and the team maintains its API. Instrumentation is divided into a number of domains (DOM, Debugger, Network, etc.). Each domain defines a number of commands it supports and events it generates. Both commands and events are serialized JSON objects of a fixed structure. You can either debug over the wire using the raw messages as they are described in the corresponding domain documentation, or use the extension JavaScript API. The latest (tip-of-tree) protocol (tot) changes frequently and can break at any time. However, it captures the full capabilities of the Protocol, whereas the stable release is a subset. There is no backwards compatibility support guaranteed for the capabilities it introduces. Resources Basics: Using DevTools as protocol client The Developer Tools front-end can attach to a remotely running Chrome instance for debugging. For this scenario to work, you should start your host Chrome instance with the remote-debugging-port command line switch. Then you can start a separate client Chrome instance, using a distinct user profile. Now you can navigate to the given port from your client and attach to any of the discovered tabs for debugging: http://localhost:9222 You will find the Developer Tools interface identical to the embedded one and here is why: In this scenario, you can substitute the Developer Tools front-end with your own implementation. Instead of navigating to the HTML page at http://localhost:9222, your application can discover available pages by requesting: http://localhost:9222/json and getting a JSON object with information about inspectable pages along with the WebSocket addresses that you could use in order to start instrumenting them. Remote debugging is especially useful when debugging remote instances of the browser or attaching to embedded devices. Blink port owners are responsible for exposing debugging connections to external users. This is especially handy to understand how the DevTools frontend makes use of the protocol. First, run Chrome with the debugging port open. Then, select the Chromium Projects item in the Inspectable Pages list. Now that DevTools is up and fullscreen, open DevTools to inspect it. Cmd-R in the new inspector to make the first restart. Now head to the Network Panel, filter by WebSocket, select the connection and click the Frames tab. Now you can easily see the frames of WebSocket activity as you use the first instance of the DevTools. To allow Chrome extensions to interact with the protocol, we introduced the chrome.debugger extension API that exposes this JSON message transport interface. As a result, you can not only attach to the remotely running Chrome instance, but also instrument it from its own extension. The Chrome Debugger Extension API provides a higher-level API where the command domain, name and body are provided explicitly in the `sendCommand` call.
This API hides request ids and handles binding of the request with its response, hence allowing `sendCommand` to report the result in the callback function call. One can also use this API in combination with the other Extension APIs. If you are developing a Web-based IDE, you should implement an extension that exposes debugging capabilities to your page, and your IDE will be able to open pages with the target application, set breakpoints there, evaluate expressions in the console, live edit JavaScript and CSS, display the live DOM, network interaction and any other aspect that the Developer Tools is instrumenting today. Opening the embedded Developer Tools will terminate the remote connection and thus detach the extension. https://chromedevtools.github.io/devtools-protocol/#simultaneous The canonical protocol definitions live in the Chromium source tree (browser_protocol.json and js_protocol.json). They are maintained manually by the DevTools engineering team. These files are mirrored (hourly) on GitHub in the devtools-protocol repo. The declarative protocol definitions are used across tools. Within Chromium, a binding layer is created for the Chrome DevTools to interact with, and separately the protocol is used for Chrome Headless's C++ interface. What's the protocol_externs file? It's created via generate_protocol_externs.py and is useful for tools using the closure compiler. The TypeScript story is here. Not yet. See bugger-daemon's third-party docs. See also the endpoints implementation in Chromium. /json/protocol was added in Chrome 60. The endpoint is exposed as webSocketDebuggerUrl in /json/version. Note the browser in the URL, rather than page. If Chrome was launched with --remote-debugging-port=0 and chose an open port, the browser endpoint is written to both stderr and the DevToolsActivePort file in the browser profile folder. Yes, as of Chrome 63! See Multi-client remote debugging support. Upon disconnection, the outgoing client will receive a detached event. For example: View the enum of possible reasons. (For reference: the original patch). After disconnection, some apps have chosen to pause their state and offer a reconnect button.
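As a concrete illustration of the discovery endpoint described earlier, this standalone Go program fetches http://localhost:9222/json and prints each inspectable page's WebSocket debugger address (start Chrome with the remote-debugging-port switch first; the struct mirrors the documented JSON fields):

    package main

    import (
        "encoding/json"
        "fmt"
        "log"
        "net/http"
    )

    type inspectablePage struct {
        Title                string `json:"title"`
        URL                  string `json:"url"`
        WebSocketDebuggerURL string `json:"webSocketDebuggerUrl"`
    }

    func main() {
        resp, err := http.Get("http://localhost:9222/json")
        if err != nil {
            log.Fatal(err)
        }
        defer resp.Body.Close()

        var pages []inspectablePage
        if err := json.NewDecoder(resp.Body).Decode(&pages); err != nil {
            log.Fatal(err)
        }
        for _, p := range pages {
            fmt.Printf("%s -> %s\n", p.Title, p.WebSocketDebuggerURL)
        }
    }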
Package micro is a pluggable framework for microservices
Package hexutil implements hex encoding with 0x prefix. This encoding is used by the Ethereum RPC API to transport binary data in JSON payloads. All hex data must have prefix "0x". For byte slices, the hex data must be of even length. An empty byte slice encodes as "0x". Integers are encoded using the least amount of digits (no leading zero digits). Their encoding may be of uneven length. The number zero encodes as "0x0".
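A short sketch of the encoding rules above, using this package's Encode, Decode, and EncodeUint64 helpers:

    package main

    import (
        "fmt"
        "log"

        "github.com/ethereum/go-ethereum/common/hexutil"
    )

    func main() {
        fmt.Println(hexutil.Encode([]byte{0x01, 0x02})) // "0x0102" (even length)
        fmt.Println(hexutil.Encode([]byte{}))           // "0x" (empty slice)
        fmt.Println(hexutil.EncodeUint64(0))            // "0x0" (zero)
        fmt.Println(hexutil.EncodeUint64(1024))         // "0x400" (no leading zeros)

        b, err := hexutil.Decode("0xdeadbeef")
        if err != nil {
            log.Fatal(err)
        }
        fmt.Printf("%x\n", b) // deadbeef
    }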
Package cloud is the root of the packages used to access Google Cloud Services. See https://pkg.go.dev/cloud.google.com/go#section-directories for a full list of sub-modules. All clients in sub-packages are configurable via client options. These options are described here: https://pkg.go.dev/google.golang.org/api/option. Endpoint configuration is used to specify the URL to which requests are sent. It is used for services that support or require regional endpoints, as well as for other use cases such as testing against fake servers. For example, the Vertex AI service recommends that you configure the endpoint to the location with the features you want that is closest to your physical location or the location of your users. There is no global endpoint for Vertex AI. See Vertex AI - Locations for more details. The following example demonstrates configuring a Vertex AI client with a regional endpoint: All of the clients support authentication via Google Application Default Credentials, or by providing a JSON key file for a Service Account. See examples below. Google Application Default Credentials (ADC) is the recommended way to authorize and authenticate clients. For information on how to create and obtain Application Default Credentials, see https://cloud.google.com/docs/authentication/production. If you have your environment configured correctly you will not need to pass any extra information to the client libraries. Here is an example of a client using ADC to authenticate: You can use a file with credentials to authenticate and authorize, such as a JSON key file associated with a Google service account. Service Account keys can be created and downloaded from https://console.cloud.google.com/iam-admin/serviceaccounts. This example uses the Secret Manager client, but the same steps apply to all the other client libraries in this package as well. Example: In some cases (for instance, you don't want to store secrets on disk), you can create credentials from in-memory JSON and use the WithCredentials option. This example uses the Secret Manager client, but the same steps apply to all other client libraries as well. Note that scopes can be found at https://developers.google.com/identity/protocols/oauth2/scopes, and are also provided in all auto-generated libraries: for example, cloud.google.com/go/secretmanager/apiv1 provides DefaultAuthScopes. Example: By default, non-streaming methods, like Create or Get, will have a default deadline applied to the context provided at call time, unless a context deadline is already set. Streaming methods have no default deadline and will run indefinitely. To set timeouts or arrange for cancellation, use context. Transient errors will be retried when correctness allows. Here is an example of setting a timeout for an RPC using context.WithTimeout: Here is an example of setting a timeout for an RPC using github.com/googleapis/gax-go/v2.WithTimeout: Here is an example of how to arrange for an RPC to be canceled using context.WithCancel: Do not attempt to control the initial connection (dialing) of a service by setting a timeout on the context passed to NewClient. Dialing is non-blocking, so timeouts would be ineffective and would only interfere with credential refreshing, which uses the same context. Regardless of which transport is used, request headers can be set in the same way using callctx.SetHeaders. Here is a generic example, sketched below. There are some header keys that Google reserves for internal use that must not be overwritten.
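A minimal sketch of the generic header example mentioned above, using callctx.SetHeaders with the Secret Manager client (the header name and secret path are placeholders):

    package example

    import (
        "context"
        "log"

        secretmanager "cloud.google.com/go/secretmanager/apiv1"
        "cloud.google.com/go/secretmanager/apiv1/secretmanagerpb"
        "github.com/googleapis/gax-go/v2/callctx"
    )

    func accessWithHeader(ctx context.Context, client *secretmanager.Client) {
        // Attach a custom header to this call's context.
        ctx = callctx.SetHeaders(ctx, "x-custom-header", "value")

        result, err := client.AccessSecretVersion(ctx, &secretmanagerpb.AccessSecretVersionRequest{
            // Placeholder resource name.
            Name: "projects/my-project/secrets/my-secret/versions/latest",
        })
        if err != nil {
            log.Fatal(err)
        }
        _ = result
    }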
The following header keys are broadly considered reserved and should not be conveyed by client library users unless instructed to do so:

  * `x-goog-api-client`
  * `x-goog-request-params`

Be sure to check the individual package documentation for other service-specific reserved headers. For example, Storage supports a specific auditing header that is mentioned in that module's documentation. Google Cloud services respect system parameters that can be used to augment request and/or response behavior. For the most part, they are not needed when using one of the enclosed client libraries. However, those that may be necessary are made available via the callctx package. If not present there, consider opening an issue on that repo to request a new constant. Connection pooling differs in clients based on their transport. Cloud clients either rely on HTTP or gRPC transports to communicate with Google Cloud. Cloud clients that use HTTP rely on the underlying HTTP transport to cache connections for later re-use. These are cached to the http.MaxIdleConns and http.MaxIdleConnsPerHost settings in http.DefaultTransport by default. For gRPC clients, connection pooling is configurable. Users of Cloud Client Libraries may specify google.golang.org/api/option.WithGRPCConnectionPool as a client option to NewClient calls. This configures the underlying gRPC connections to be pooled and accessed in a round robin fashion. Minimal container images like Alpine lack CA certificates. This causes RPCs to appear to hang, because gRPC retries indefinitely. See https://github.com/googleapis/google-cloud-go/issues/928 for more information. For tips on how to debug code that calls into our libraries, check out our Debugging Guide. For tips on how to write tests against code that calls into our libraries, check out our Testing Guide. Most of the errors returned by the generated clients are wrapped in a github.com/googleapis/gax-go/v2/apierror.APIError and can be further unwrapped into a google.golang.org/grpc/status.Status or google.golang.org/api/googleapi.Error depending on the transport used to make the call (gRPC or REST). Converting your errors to these types can be a useful way to get more information about what went wrong while debugging. APIError gives access to specific details in the error. The transport-specific errors can still be unwrapped using the APIError. Semver is used to communicate the stability of the sub-modules of this package. Note, some stable sub-modules do contain packages, and sometimes features, that are considered unstable. If something is unstable it will be explicitly labeled as such. Example of package docs in an unstable package: Clients that contain alpha and beta in their import path may change or go away without notice. Clients marked stable will maintain compatibility with future versions for as long as we can reasonably sustain. Incompatible changes might be made in some situations, including:
Package gomcp provides a Go implementation of the Model Context Protocol (MCP). The Model Context Protocol (MCP) is a standardized communication protocol designed to facilitate interaction between applications and Large Language Models (LLMs). This library provides a complete Go implementation of the protocol with support for all specification versions (2024-11-05, 2025-03-26, and draft) with automatic version detection and negotiation. Starting with v1.5.0, all public APIs are locked and stable. This library is ready for production use and provides:

  - Full MCP protocol implementation
  - Client and server components
  - Multiple transport options (stdio, HTTP, WebSocket, Server-Sent Events)
  - Automatic protocol version negotiation
  - Comprehensive type safety
  - Support for all MCP operations: tools, resources, prompts, and sampling
  - Flexible configuration options
  - Process management for external MCP servers
  - Server configuration file support

The library is organized into a number of main packages, and the documentation includes client, server configuration, and server examples. GOMCP includes robust functionality for managing external MCP server processes; for more information, see the docs/examples/server-config.md documentation. This library implements the Model Context Protocol as defined at: https://github.com/microsoft/modelcontextprotocol For detailed documentation, examples, and specifications, see: https://modelcontextprotocol.github.io/ For more examples, see the examples directory in this repository. gomcp follows semantic versioning. Starting with v1.5.0, the APIs are locked and stable, making this library ready for production use. The current version is available through the Version constant.
Package shadow provides a PT 2.1 Go API wrapper around the connections used by Shadowsocks
Command goat provides an implementation of a BitTorrent tracker, written in Go. goat can be built using Go 1.1+. It can be downloaded, built, and installed, simply by running: In addition, goat depends on a MySQL server for data storage. After creating a database and user for goat, its database schema may be imported from the SQL files located in 'res/'. goat will not run unless MySQL is installed, and a database and user are properly configured for its use. Optionally, goat can be built to use ql (https://github.com/cznic/ql) as its storage backend. This is done by supplying the 'ql' tag in the go get command: A blank ql database file is located under 'res/ql/goat.db', and will be copied to '~/.config/goat/goat.db' on UNIX systems. goat is now able to use ql as its storage backend, for those who do not wish to use an external MySQL backend. goat is capable of listening for torrent traffic in three modes: HTTP, HTTPS, and UDP. HTTP/HTTPS are the recommended methods, and are required in order for goat to serve its API, and to allow use of private tracker passkeys. HTTP is considered the standard mode of operation for goat. HTTP allows gathering a great number of metrics, use of passkeys, use of a client whitelist, and access to goat's RESTful API, when configured. For most trackers, this will be the only listener which is necessary in order for goat to function properly. The HTTPS listener provides a method to encrypt traffic to the tracker, but must be used with caution. Unless the SSL certificate in use is signed by a proper certificate authority, it will distress most clients, and they may outright refuse to announce to it. If you are in possession of a certificate signed by a certificate authority, this mode may be more ideal, as it provides added security for your clients. The UDP listener is the most unusual method of the three, and should only be used for public trackers. The BitTorrent UDP tracker protocol specifies a very specific packet format, meaning that additional information or parameters cannot be packed into a UDP datagram in a standard way. The UDP tracker may be the fastest and least bandwidth-intensive, but as stated, should only be used for public trackers. A new feature added to goat in order to allow better interoperability with many languages is a RESTful API, which is served using the HTTP or HTTPS listeners. This API enables easy retrieval of tracker statistics, while allowing goat to run as a completely independent process. It should be noted that the API is only enabled when configured, and when an HTTP or HTTPS listener is enabled. Without a transport mechanism, the API will be inaccessible. The API features several modes of authentication, including HTTP Basic for login and HMAC-SHA1 for other calls. Upon logging into the API using HTTP Basic with a username and password pair, an API public key and secret will be generated. The public key is used as the username for HTTP Basic authentication, and the secret key is used to calculate an HMAC-SHA1 signature for the password. As part of API signature generation, a random nonce value must be generated and added to the request. It is added to the password portion of the HTTP Basic request, and also to the string which is used to create the signature. Nonce values must be changed on every request, or the request will fail.
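To illustrate the mechanics in Go, here is a hedged sketch of computing an HMAC-SHA1 signature over a nonce and request data. The exact string-to-sign is defined by goat's own documentation; the nonce+method+resource layout used here is purely hypothetical:

    package main

    import (
        "crypto/hmac"
        "crypto/sha1"
        "encoding/hex"
        "fmt"
    )

    // sign computes a hex-encoded HMAC-SHA1 signature. The layout of the
    // string-to-sign is a hypothetical placeholder, not goat's real format.
    func sign(secret, nonce, method, resource string) string {
        mac := hmac.New(sha1.New, []byte(secret))
        mac.Write([]byte(nonce + method + resource))
        return hex.EncodeToString(mac.Sum(nil))
    }

    func main() {
        fmt.Println(sign("secretkey", "0123456789", "GET", "/api/files"))
    }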
The current pseudocode format of the HMAC-SHA1 signature is as follows: The proper format for an HTTP Basic request is as follows: When the public key, nonce, and API signature are sent via HTTP Basic, the server will verify the signature. Successful authentication will allow access to the API. This list contains all API calls currently recognized by goat. Each call must be authenticated using the aforementioned methods. Request an API public key and secret key for this user. The public key, user ID, and secret key are used to authenticate further API calls. The expire time indicates when this key is set to expire. Further API calls will extend the expiration time. Retrieve a list of all files tracked by goat. Some extended attributes are not added, to reduce strain on the database and to provide a more general overview. Retrieve extended attributes about a specific file with matching ID. This provides counts for the number of completions, seeders, leechers, and a list of fileUser relationships associated with a given file. Retrieve a variety of metrics about the current status of goat, including its PID, hostname, memory usage, number of HTTP/UDP hits, etc. Create a user with the specified username, password, and torrent limit. Retrieve a list of all users registered to goat, including their ID, torrent limit, and username. Retrieve information about a single user with matching ID, including their ID, torrent limit, and username. goat is configured using a JSON file, which will be created under '~/.config/goat/config.json' on UNIX systems. Here is an example configuration, describing the settings available to the user.
Package pulseaudio is a pure-Go (no libpulse) implementation of the PulseAudio native protocol. Rather than exposing the PulseAudio protocol directly, this library attempts to hide the PulseAudio complexity behind a Go interface. Some of the things which are deliberately not exposed in the API are:

  → backwards compatibility for old PulseAudio servers
  → the transport mechanism used for the connection (Unix sockets / memfd / shm)
  → the encoding used in the pulseaudio-native protocol

Supported operations include querying and setting the volume, listing audio outputs, changing the default audio output, and notifications on config updates.
Package httpexpect helps with end-to-end HTTP and REST API testing. See example directory: There are two common ways to test API with httpexpect: The second approach works only if the server is a Go module and its handler can be imported in tests. Concrete behaviour is determined by the Client implementation passed to the Config struct. If you're using http.Client, set its Transport field (http.RoundTripper) to one of the following: Note that an http handler can usually be obtained from the http framework you're using. E.g., the echo framework provides either http.Handler or fasthttp.RequestHandler. You can also provide your own implementation of RequestFactory (creates http.Request), or Client (gets http.Request and returns http.Response). If you're starting a server from tests, it's very handy to use net/http/httptest, as sketched below. Whenever values are checked for equality in httpexpect, they are converted to "canonical form": This is equivalent to subsequently applying json.Marshal() and json.Unmarshal() to the value, and it is currently implemented that way. When some check fails, a failure is reported. If non-fatal failures are used (see the Reporter interface), execution continues and the instance that was checked is marked as failed. If a specific instance is marked as failed, all subsequent checks are ignored for this instance and for any child instances retrieved after the failure. Example: If you want to be informed about every assertion made, successful or failed, you can use the AssertionHandler interface. The default implementation of this interface ignores successful assertions and reports failed assertions using the Formatter and Reporter objects. A custom AssertionHandler can handle all assertions (e.g. dump them in JSON format) and is free to use or not to use the Formatter and Reporter at its sole discretion.
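A minimal sketch of the httptest-based approach described above (assuming the v2 import path; the handler under test is a hypothetical stand-in):

    package example

    import (
        "net/http"
        "net/http/httptest"
        "testing"

        "github.com/gavv/httpexpect/v2"
    )

    func TestAPI(t *testing.T) {
        // Hypothetical handler under test.
        myHandler := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
            w.Header().Set("Content-Type", "application/json")
            w.Write([]byte(`{"status":"ok"}`))
        })

        server := httptest.NewServer(myHandler)
        defer server.Close()

        e := httpexpect.Default(t, server.URL)
        e.GET("/status").
            Expect().
            Status(http.StatusOK).
            JSON().Object().ContainsKey("status")
    }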
Package elasticsearch provides a Go client for Elasticsearch. Create the client with the NewDefaultClient function: The ELASTICSEARCH_URL environment variable is used instead of the default URL, when set. Use a comma to separate multiple URLs. To configure the client, pass a Config object to the NewClient function: When using the Elastic Service (https://elastic.co/cloud), you can use CloudID instead of Addresses. When either Addresses or CloudID is set, the ELASTICSEARCH_URL environment variable is ignored. See the elasticsearch_integration_test.go file and the _examples folder for more information. Call the Elasticsearch APIs by invoking the corresponding methods on the client: See the github.com/elastic/go-elasticsearch/esapi package for more information about using the API. See the github.com/elastic/elastic-transport-go package for more information about configuring the transport.
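A short sketch combining the two construction paths mentioned above (the address is a placeholder):

    package main

    import (
        "log"

        "github.com/elastic/go-elasticsearch/v8"
    )

    func main() {
        cfg := elasticsearch.Config{
            Addresses: []string{"http://localhost:9200"}, // placeholder address
        }
        es, err := elasticsearch.NewClient(cfg)
        if err != nil {
            log.Fatal(err)
        }

        // Call an API method; Info returns basic cluster information.
        res, err := es.Info()
        if err != nil {
            log.Fatal(err)
        }
        defer res.Body.Close()
        log.Println(res)
    }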
muxado implements a general purpose stream-multiplexing protocol. muxado allows client applications to multiplex any io.ReadWriteCloser (like a net.Conn) into multiple, independent full-duplex streams. muxado is a useful protocol for any two communicating processes. It is an excellent base protocol for implementing lightweight RPC. It eliminates the need for custom async/pipelining code from your peers in order to support multiple simultaneous in-flight requests between peers. For the same reason, it also eliminates the need to build connection pools for your clients. It enables servers to initiate streams to clients without building any NAT traversal. muxado can also yield performance improvements (especially latency) for protocols that require rapidly opening many concurrent connections. Here's an example client which responds to simple JSON requests from a server. Maybe the client wants to make a request to the server instead of just responding. This is easy as well: muxado defines the following terms for clarity of the documentation: A "Transport" is an underlying stream (typically TCP) that is multiplexed by sending frames between muxado peers over this transport. A "Stream" is any of the full-duplex byte-streams multiplexed over the transport. A "Session" is two peers running the muxado protocol over a single transport. muxado's design is influenced heavily by the framing layer of HTTP2 and SPDY. However, instead of being specialized for a higher-level protocol, muxado is designed in a protocol-agnostic way with simplicity and speed in mind. More advanced features are left to higher-level libraries and protocols. muxado's API is designed to make it seamless to integrate into existing Go programs. muxado.Session implements the net.Listener interface and muxado.Stream implements net.Conn. muxado ships with two wrappers that add commonly used functionality. The first is a TypedStreamSession which allows a client application to open streams with a type identifier so that the remote peer can identify the protocol that will be communicated on that stream. The second wrapper is a simple Heartbeat which issues a callback to the application informing it of round-trip latency and heartbeat failure.
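A hedged sketch of running a muxado session over TCP. The Client constructor and the Open method are assumptions based on the description above; check the package docs for exact signatures:

    package main

    import (
        "log"
        "net"

        "github.com/inconshreveable/muxado"
    )

    func main() {
        // Dial the transport (a plain TCP connection here) and run the
        // muxado protocol over it as the client side of the session.
        conn, err := net.Dial("tcp", "example.com:1234") // placeholder address
        if err != nil {
            log.Fatal(err)
        }
        sess := muxado.Client(conn, nil) // assumed constructor signature

        // Open an independent full-duplex stream over the session.
        stream, err := sess.Open()
        if err != nil {
            log.Fatal(err)
        }
        defer stream.Close()
        stream.Write([]byte(`{"method":"ping"}`))
    }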
Package api2 provides types and functions used to define interfaces of a client-server API and to facilitate creation of the server and client for it. How to use this package. Organize your code in services. Each service provides some domain-specific functionality. It is a Go type whose methods correspond to exposed RPC's of the API. Each method has the following signature: Let's define a service Foo with method Bar. A field must not have more than one of the tags: json, query, header, cookie. Fields in query, header and cookie parts are encoded and decoded with fmt.Sprintf and fmt.Sscanf. Strings are not decoded with fmt.Sscanf, but passed as is. Types implementing encoding.TextMarshaler and encoding.TextUnmarshaler are encoded and decoded using them. A cookie in the Response part must be of type http.Cookie. If there is no JSON field in the struct, then the HTTP body is skipped. You can also set the HTTP status code of the response by adding a field of type `int` with tag `use_as_status:"true"` to Response. 0 is interpreted as 200. If Response has a status field, no HTTP statuses are considered errors. If you need the top-level type matching the body JSON to be not a struct, but of some other kind (e.g. slice or map), you should provide a field in your struct with tag `use_as_body:"true"`: If you use `use_as_body:"true"`, you can also set `is_protobuf:"true"` and put a protobuf type (convertible to proto.Message) in that field. It will be sent over the wire in protobuf binary form. You can add `use_as_body:"true" is_raw:"true"` to a `[]byte` field, then it will keep the whole HTTP body. Streaming. If you use `use_as_body:"true"`, you can also set `is_stream:"true"`. In this case the field must be of type `io.ReadCloser`. On the client side, put any object implementing `io.ReadCloser` into such a field in Request. It will be read and closed by the library and used as the HTTP request body. On the server side, your handler should read from the reader passed in that field of Request. (You don't have to read the entire body or to close it.) For Response, on the server side, the handler must put any object implementing `io.ReadCloser` into such a field of Response. The library will use it to generate the HTTP response's body and close it. On the client side, your code must read the entire response from that reader and then close it. If a streaming field is left `nil`, it is interpreted as an empty body. Now let's write the function that generates the table of routes: You can add multiple routes with the same path, but in this case their HTTP methods must be different so that they can be distinguished. If Transport is not set, DefaultTransport is used, which is defined as &api2.JsonTransport{}. **Error handling**. A handler can return any Go error. `JsonTransport` by default returns JSON. The `Error()` value is put into the "error" field of that JSON. If the error has an `HttpCode() int` method, it is called and the result is used as the HTTP return code. You can pass error details (any struct). For that, the error must be of a custom type. You should register the error type in the `JsonTransport.Errors` map. The key used for that error is put into the "code" key of the JSON, and the object of the registered type into the "detail" field. The error can be wrapped using `fmt.Errorf("%w" ...)`. See test/custom_error_test.go for an example. In the server you need a real instance of service Foo to pass to GetRoutes. Then just bind the routes to http.ServeMux and run the server: The server is running. It serves the foo.Bar function on path /v1/foo/bar with HTTP method Post.
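A hedged sketch tying the pieces above together: a service, its routes table, and server wiring. The method signature and the Route/BindRoutes helpers follow api2's documented conventions, but field tags and paths here are illustrative:

    package main

    import (
        "context"
        "log"
        "net/http"

        "github.com/starius/api2"
    )

    type BarRequest struct {
        Product string `query:"product"`
        Count   int    `json:"count"`
    }

    type BarResponse struct {
        Status string `json:"status"`
    }

    type Foo struct{}

    // Bar uses the conventional api2 method signature.
    func (s *Foo) Bar(ctx context.Context, req *BarRequest) (*BarResponse, error) {
        return &BarResponse{Status: "ok"}, nil
    }

    func GetRoutes(s *Foo) []api2.Route {
        return []api2.Route{
            {Method: http.MethodPost, Path: "/v1/foo/bar", Handler: s.Bar},
        }
    }

    func main() {
        routes := GetRoutes(&Foo{})
        mux := http.NewServeMux()
        api2.BindRoutes(mux, routes)
        log.Fatal(http.ListenAndServe(":8080", mux))
    }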
Now let's create the client: The client sent a request to path "/v1/foo/bar/product1", from which the server understood that product=product1. Note that you don't have to pass a real service object to GetRoutes on the client side. You can pass nil; it is sufficient to pass all the needed information about request and response types in the routes table, which is used by the client to find a proper route. You can make GetRoutes accept an interface instead of a concrete Service type. In this case you cannot get method handlers via s.Bar, because this code panics if s is a nil interface. As a workaround, api2 provides the function Method(service pointer, methodName) which you can use: If you have a function GetRoutes in package foo as above, you can generate a static client for it in file client.go located near the file in which GetRoutes is defined: GenerateClient can accept multiple GetRoutes functions, but they must be located in the same package.
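A hedged sketch of the client side, reusing the types from the server sketch above (NewClient and Call follow api2's documented shape, but exact signatures should be checked against the package docs):

    func runClient() {
        // On the client side a nil service is sufficient: the routes
        // table alone carries the request/response type information.
        routes := GetRoutes(nil)
        client := api2.NewClient(routes, "http://127.0.0.1:8080")

        res := &BarResponse{}
        err := client.Call(context.Background(), res, &BarRequest{Product: "product1"})
        if err != nil {
            log.Fatal(err)
        }
        log.Println(res.Status)

        // Static client generation, run from a small generator program:
        //     api2.GenerateClient(GetRoutes)
    }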
Package ssh implements an SSH client and server. SSH is a transport security protocol, an authentication protocol and a family of application protocols. The most typical application level protocol is a remote shell and this is specifically implemented. However, the multiplexed nature of SSH is exposed to users that wish to support others. References: This package does not fall under the stability promise of the Go language itself, so its API may be changed when pressing needs arise.
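As an illustration of the remote-shell use case, a minimal client sketch (host, credentials, and the InsecureIgnoreHostKey callback are placeholders; always verify host keys in real deployments):

    package main

    import (
        "log"

        "golang.org/x/crypto/ssh"
    )

    func main() {
        config := &ssh.ClientConfig{
            User: "user",
            Auth: []ssh.AuthMethod{ssh.Password("password")},
            // Placeholder only: never skip host key verification in production.
            HostKeyCallback: ssh.InsecureIgnoreHostKey(),
        }

        client, err := ssh.Dial("tcp", "example.com:22", config)
        if err != nil {
            log.Fatal(err)
        }
        defer client.Close()

        session, err := client.NewSession()
        if err != nil {
            log.Fatal(err)
        }
        defer session.Close()

        out, err := session.Output("uname -a")
        if err != nil {
            log.Fatal(err)
        }
        log.Printf("%s", out)
    }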
Package dnscore provides a DNS resolver, a DNS transport, a query builder, and a DNS response parser. This package is designed to facilitate DNS measurements and queries by providing both high-level and low-level APIs. It aims to be flexible, extensible, and easy to integrate with existing Go code. The high-level *Resolver API provides a DNS resolver that is compatible with the *net.Resolver struct from the net package. The low-level *Transport API allows users to send and receive DNS messages using different protocols and dialers. The package also includes utilities for creating and validating DNS messages.

  - High-level *Resolver API compatible with *net.Resolver for easy integration.
  - Low-level *Transport API allowing granular control over DNS requests and responses.
  - Support for multiple DNS protocols, including UDP, TCP, DoT, DoH, and DoQ.
  - Utilities for creating and validating DNS messages.
  - Optional logging for structured diagnostic events through log/slog.
  - Handling of duplicate responses for DNS over UDP to measure censorship.

The package is structured to allow users to compose their own workflows by providing building blocks for DNS queries and responses. It uses the widely-used github.com/miekg/dns library for DNS message parsing and serialization. The dd-000-dnscore.md document describes the design of this package. The df-000-dns.md document describes the data format generated by this package when using log/slog to emit structured diagnostic events.
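A heavily hedged sketch of the high-level API: since the docs state that *Resolver is compatible with *net.Resolver, a LookupHost call along these lines should work (the import path and the usability of the zero-value Resolver are assumptions; consult the package docs):

    package main

    import (
        "context"
        "fmt"
        "log"

        "github.com/rbmk-project/dnscore" // assumed import path
    )

    func main() {
        // Assumption: the zero-value Resolver is ready to use,
        // mirroring the behaviour of net.Resolver.
        var reslv dnscore.Resolver

        addrs, err := reslv.LookupHost(context.Background(), "www.example.com")
        if err != nil {
            log.Fatal(err)
        }
        for _, addr := range addrs {
            fmt.Println(addr)
        }
    }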