Package fbmsgr provides an API for interacting with Facebook Messenger. The first step is to create a new Messenger session. Do this as follows, replacing "USER" and "PASS" with your Facebook login credentials: Once you are done with a session you have allocated, you should call Close() on it to release any resources (e.g. goroutines) that it is using. When sending a message, you specify a receiver by their FBID. The receiver may be another user, or it may be a group. For most methods related to message sending, there is one version of the method for a user and one for a group: To send or retract a typing notification, you might do: To send an attachment such as an image or a video, you can do the following: It is easy to receive events such as incoming messages using the ReadEvent method: With the EventStream API, you can get more fine-grained control over how you receive events. For example, you can read the next minute's worth of events like so: You can also create multiple EventStreams and read from different streams in different places. To list the threads (conversations) a user is in, you can use the Threads method to fetch a subset of threads at a time. You can also use the AllThreads method to fetch all the threads at once.
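A hedged sketch of this flow; the import path and the exact signatures of Auth and SendText are assumptions drawn from the description above, so verify them against the package index:

	package main

	import (
		"fmt"
		"log"

		"github.com/unixpickle/fbmsgr" // assumed import path
	)

	func main() {
		// Assumed signature: Auth returns a *fbmsgr.Session.
		sess, err := fbmsgr.Auth("USER", "PASS")
		if err != nil {
			log.Fatal(err)
		}
		defer sess.Close()

		// Send a message to a user (or group) by FBID; SendText is an
		// assumption based on the description above.
		if _, err := sess.SendText("FBID", "Hello from fbmsgr!"); err != nil {
			log.Fatal(err)
		}

		// Block until the next event (e.g. an incoming message) arrives.
		evt, err := sess.ReadEvent()
		if err != nil {
			log.Fatal(err)
		}
		fmt.Println("event:", evt)
	}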
Code generated by the Paddle SDK Generator; DO NOT EDIT. Package paddle provides the official SDK for using the Paddle Billing API. Demonstrates how to get an existing entity. Demonstrates how to create a new entity. Demonstrates how to get an existing entity. Demonstrates how to fetch a list and iterate over the provided results. Demonstrates how to fetch a list of events and iterate over the provided results. Demonstrates how to fetch a list and iterate over the provided results, including the automatic pagination. Demonstrates how to create a Simulation with Payload and read the Payload back out of the response. Demonstrates how to create a scenario Simulation. Demonstrates how to get a Simulation with Payload and read the Payload back out of the response. Demonstrates how to list Simulations with Payload and read the Payload back out of the response. Demonstrates how to run a Simulation. Demonstrates how to get a SimulationRun with included SimulationRunEvents. Demonstrates how to update a Simulation with Payload and read the Payload back out of the response. Demonstrates how to update an existing entity. Demonstrates how to unmarshal webhooks to their notification type.
Package perf provides access to the Linux perf API. A Group represents a set of perf events measured together. Attr configures an individual event. Overflow records are available once the MapRing method on Event is called: Tracepoints are also supported: For more detailed information, see the examples, and man 2 perf_event_open. NOTE: this package is experimental and does not yet offer compatibility guarantees.
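A hedged sketch of counting instructions on the calling thread; the import path and the Measure helper are assumptions based on the description above, so verify names against the package examples before use:

	package main

	import (
		"fmt"
		"log"
		"runtime"

		"acln.ro/perf" // assumed import path
	)

	func main() {
		// perf events are tied to an OS thread, so pin the goroutine.
		runtime.LockOSThread()
		defer runtime.UnlockOSThread()

		var attr perf.Attr
		perf.Instructions.Configure(&attr) // configure a hardware counter

		ev, err := perf.Open(&attr, perf.CallingThread, perf.AnyCPU, nil)
		if err != nil {
			log.Fatal(err)
		}
		defer ev.Close()

		// Measure the instruction count for the enclosed function.
		count, err := ev.Measure(func() {
			fmt.Println("code under measurement")
		})
		if err != nil {
			log.Fatal(err)
		}
		fmt.Println("instructions:", count.Value)
	}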
Package gocql implements a fast and robust Cassandra driver for the Go programming language. Pass a list of initial node IP addresses to NewCluster to create a new cluster configuration: Port can be specified as part of the address; the above is equivalent to: It is recommended to use the value set in the Cassandra config for broadcast_address or listen_address: an IP address, not a domain name. This is because events from Cassandra will use the configured IP address, which is used to index connected hosts. If the domain name specified resolves to more than 1 IP address, then the driver may connect multiple times to the same host, and will not mark the node as down or up based on events. Then you can customize more options (see ClusterConfig): The driver tries to automatically detect the protocol version to use if not set, but you might want to set the protocol version explicitly, as it's not defined which version will be used in certain situations (for example, during an upgrade of the cluster when some of the nodes support a different set of protocol versions than others). The driver advertises the module name and version in the STARTUP message, so servers are able to detect the version. If you use a replace directive in go.mod, the driver will send information about the replacement module instead. When ready, create a session from the configuration. Don't forget to Close the session once you are done with it: The CQL protocol uses a SASL-based authentication mechanism and so consists of an exchange of server challenges and client response pairs. The details of the exchanged messages depend on the authenticator used. To use authentication, set ClusterConfig.Authenticator or ClusterConfig.AuthProvider. PasswordAuthenticator is provided to use for username/password authentication: By default, PasswordAuthenticator will attempt to authenticate regardless of what implementation the server returns in its AUTHENTICATE message as its authenticator (e.g. org.apache.cassandra.auth.PasswordAuthenticator). If you wish to restrict this, you may use PasswordAuthenticator.AllowedAuthenticators: It is possible to secure traffic between the client and server with TLS. To use TLS, set the ClusterConfig.SslOpts field. SslOptions embeds *tls.Config so you can set that directly. There are also helpers to load keys/certificates from files. Warning: Due to historical reasons, SslOptions is insecure by default, so you need to set EnableHostVerification to true if no Config is set. Most users should set SslOptions.Config to a *tls.Config. SslOptions and Config.InsecureSkipVerify interact as follows: For example: To route queries to the local DC first, use DCAwareRoundRobinPolicy. For example, if the datacenter you want to primarily connect to is called dc1 (as configured in the database): The driver can route queries to nodes that hold data replicas based on partition key (preferring the local DC). Note that TokenAwareHostPolicy can take options such as gocql.ShuffleReplicas and gocql.NonLocalReplicasFallback. We recommend running with a token-aware host policy in production for maximum performance. The driver can only use token-aware routing for queries where all partition key columns are query parameters; for example, bind the partition key as a query parameter rather than embedding it in the query string. The DCAwareRoundRobinPolicy can be replaced with RackAwareRoundRobinPolicy, which takes two parameters, datacenter and rack.
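Before turning to the rack-aware variant, here is a sketch pulling the configuration steps above together; the addresses, keyspace, and credentials are placeholders:

	package main

	import (
		"log"

		"github.com/gocql/gocql"
	)

	func main() {
		cluster := gocql.NewCluster("192.168.1.1", "192.168.1.2", "192.168.1.3")
		cluster.Keyspace = "example"
		cluster.Consistency = gocql.Quorum
		cluster.Authenticator = gocql.PasswordAuthenticator{
			Username: "user",
			Password: "password",
		}
		// Prefer replicas in the local datacenter, routed by token.
		cluster.PoolConfig.HostSelectionPolicy = gocql.TokenAwareHostPolicy(
			gocql.DCAwareRoundRobinPolicy("dc1"),
		)

		session, err := cluster.CreateSession()
		if err != nil {
			log.Fatal(err)
		}
		defer session.Close()
	}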
Instead of dividing hosts into two tiers (local datacenter and remote datacenters), it divides hosts into three (the local rack, the rest of the local datacenter, and everything else). RackAwareRoundRobinPolicy can be combined with TokenAwareHostPolicy in the same way as DCAwareRoundRobinPolicy. Create queries with Session.Query. Query values must not be reused between different executions and must not be modified after starting execution of the query. To execute a query without reading results, use Query.Exec: A single row can be read by calling Query.Scan: Multiple rows can be read using Iter.Scanner: See Example for a complete example. The driver automatically prepares DML queries (SELECT/INSERT/UPDATE/DELETE/BATCH statements) and maintains a cache of prepared statements. The CQL protocol does not support preparing other query types. When using CQL protocol >= 4, it is possible to use gocql.UnsetValue as the bound value of a column. This will cause the database to ignore writing the column. The main advantage is the ability to keep the same prepared statement even when you don't want to update some fields, where before you needed to make another prepared statement. Session is safe to use from multiple goroutines, so to execute multiple concurrent queries, just execute them from several worker goroutines. Gocql provides a synchronous-looking API (as recommended for Go APIs) while executing queries asynchronously at the protocol level. Null values are unmarshalled as the zero value of the type. If you need to distinguish, for example, between a text column being null and an empty string, you can unmarshal into a *string variable instead of string. See Example_nulls for a full example. The driver reuses the backing memory of slices when unmarshalling. This is an optimization so that a buffer does not need to be allocated for every processed row. However, you need to be careful when storing the slices to other memory structures. When you want to save the data for later use, pass a new slice every time. A common pattern is to declare the slice variable within the scanner loop: The driver supports paging of results with automatic prefetch; see ClusterConfig.PageSize, Query.PageSize, and Query.Prefetch. It is also possible to control the paging manually with Query.PageState (this disables automatic prefetch). Manual paging is useful if you want to store the page state externally, for example in a URL to allow users to browse pages of a result. You might want to sign/encrypt the paging state when exposing it externally since it contains data from primary keys. Paging state is specific to the CQL protocol version and the exact query used. It is meant as opaque state that should not be modified. If you send paging state from a different query or protocol version, then the behaviour is not defined (you might get unexpected results or an error from the server). For example, do not send paging state returned by a node using protocol version 3 to a node using protocol version 4. Also, when using protocol version 4, paging state between Cassandra 2.2 and 3.0 is incompatible (https://issues.apache.org/jira/browse/CASSANDRA-10880). The driver does not check whether the paging state is from the same protocol version/statement. You might want to validate this yourself, as it could be a problem if you store paging state externally. For example, if you store paging state in a URL, the URLs might become broken when you upgrade your cluster. Call Query.PageState(nil) to fetch just the first page of the query results.
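A sketch of fetching a single page manually, assuming the gocql and fmt imports and a placeholder table events(id int, ts timeuuid, PRIMARY KEY (id, ts)); pass nil to get the first page and the returned state to get the next:

	func fetchPage(session *gocql.Session, pageState []byte) ([]byte, error) {
		// Setting PageState disables automatic prefetch; nil fetches
		// the first page.
		iter := session.Query(`SELECT id, ts FROM events WHERE id = ?`, 1).
			PageSize(100).
			PageState(pageState).
			Iter()

		var (
			id int
			ts gocql.UUID
		)
		for iter.Scan(&id, &ts) {
			fmt.Println(id, ts)
		}

		next := iter.PageState() // opaque; len(next) == 0 means no more pages
		if err := iter.Close(); err != nil {
			return nil, err
		}
		return next, nil
	}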
Pass the page state returned by Iter.PageState to Query.PageState of a subsequent query to get the next page. If the length of the slice returned by Iter.PageState is zero, there are no more pages available (or an error occurred). Using too low a PageSize will negatively affect performance; a value below 100 is probably too low. While Cassandra currently returns exactly PageSize items per page (except for the last page), the protocol authors explicitly reserved the right to return a smaller or larger number of items in a page for performance reasons, so don't rely on a page having the exact count of items. See Example_paging for an example of manual paging. There are certain situations when you don't know the list of columns in advance, mainly when the query is supplied by the user. Iter.Columns, Iter.RowData, Iter.MapScan and Iter.SliceMap can be used to handle this case. See Example_dynamicColumns. The CQL protocol supports sending batches of DML statements (INSERT/UPDATE/DELETE) and so does gocql. Use Session.Batch to create a new batch and then fill in the details of the individual queries. Then execute the batch with Batch.Exec. Logged batches ensure atomicity: either all or none of the operations in the batch will succeed, but they have overhead to ensure this property. Unlogged batches don't have the overhead of logged batches, but don't guarantee atomicity. Updates of counters are handled specially by Cassandra, so batches of counter updates have to use the CounterBatch type. A counter batch can only contain statements to update counters. For unlogged batches it is recommended to send only single-partition batches (i.e. all statements in the batch should involve only a single partition). A multi-partition batch needs to be split by the coordinator node and re-sent to the correct nodes. With single-partition batches you can send the batch directly to the node for the partition without incurring the additional network hop. It is also possible to pass an entire BEGIN BATCH .. APPLY BATCH statement to Query.Exec. There are differences in how these are executed: a BEGIN BATCH statement passed to Query.Exec is prepared as a whole in a single statement, while Batch.Exec prepares the individual statements in the batch. If you have variable-length batches using the same statement, using Batch.Exec is more efficient. See Example_batch for an example. Query.ScanCAS or Query.MapScanCAS can be used to execute a single-statement lightweight transaction (an INSERT/UPDATE .. IF statement) and read its result. See the example for Query.MapScanCAS. Multiple-statement lightweight transactions can be executed as a logged batch that contains at least one conditional statement. All the conditions must return true for the batch to be applied. You can use Batch.ExecCAS and Batch.MapExecCAS when executing the batch to learn about the result of the LWT. See the example for Batch.MapExecCAS. Queries can be marked as idempotent. Marking a query as idempotent tells the driver that the query can be executed multiple times without affecting its result. Non-idempotent queries are not eligible for retrying or speculative execution. Idempotent queries are retried in case of errors based on the configured RetryPolicy. Queries can be retried even before they fail by setting a SpeculativeExecutionPolicy. The policy can cause the driver to retry on a different node if the query is taking longer than a specified delay, even before the driver receives an error or timeout from the server.
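A sketch of these options on a query, assuming an open session, the time import, and a placeholder table words(id int PRIMARY KEY, word text); SimpleRetryPolicy and SimpleSpeculativeExecution are the driver's bundled policies, but double-check the field names against your gocql version:

	q := session.Query(`SELECT word FROM words WHERE id = ?`, 1).
		Idempotent(true). // safe to execute more than once
		RetryPolicy(&gocql.SimpleRetryPolicy{NumRetries: 3}).
		SetSpeculativeExecutionPolicy(&gocql.SimpleSpeculativeExecution{
			NumAttempts:  2,                      // extra parallel attempts
			TimeoutDelay: 200 * time.Millisecond, // started after this delay
		})

	var word string
	err := q.Scan(&word)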
When a query is speculatively executed, the original execution is still running. The two parallel executions of the query race to return a result; the first result received is returned. UDTs can be marshaled and unmarshaled from/to a map[string]interface{} or a Go struct (or a type implementing the UDTUnmarshaler, UDTMarshaler, Unmarshaler or Marshaler interfaces). For structs, the cql tag can be used to specify the CQL field name a struct field maps to: See Example_userDefinedTypesMap, Example_userDefinedTypesStruct, ExampleUDTMarshaler, ExampleUDTUnmarshaler. It is possible to provide observer implementations that could be used to gather metrics: The CQL protocol also supports tracing of queries. When enabled, the database will write information about internal events that happened during execution of the query. You can use Query.Trace to request tracing and receive the session ID that the database used to store the trace information in the system_traces.sessions and system_traces.events tables. NewTraceWriter returns an implementation of Tracer that writes the events to a writer. Gathering trace information might be essential for debugging and optimizing queries, but writing traces has overhead, so this feature should not be used on production systems with very high load unless you know what you are doing. Example_batch demonstrates how to execute a batch of statements. Example_dynamicColumns demonstrates how to handle a dynamic column list. Example_marshalerUnmarshaler demonstrates how to implement a Marshaler and Unmarshaler. Example_nulls demonstrates how to distinguish between null and zero values when needed. Null values are unmarshalled as the zero value of the type. If you need to distinguish, for example, between a text column being null and an empty string, you can unmarshal into a *string field. Example_paging demonstrates how to manually fetch pages and use page state. See also the package documentation about paging. Example_set demonstrates how to use sets. Example_userDefinedTypesMap demonstrates how to work with user-defined types as maps. See also Example_userDefinedTypesStruct and the examples for UDTMarshaler and UDTUnmarshaler if you want to map to structs. Example_userDefinedTypesStruct demonstrates how to work with user-defined types as structs. See also the examples for UDTMarshaler and UDTUnmarshaler if you need more control/better performance.
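A short sketch of the cql-tagged struct mapping described above, assuming a placeholder schema CREATE TYPE coordinates (x int, y int) and a table points(id int PRIMARY KEY, coords frozen<coordinates>):

	// The cql tags map struct fields to the UDT's field names.
	type Coordinates struct {
		X int `cql:"x"`
		Y int `cql:"y"`
	}

	func readPoint(session *gocql.Session, id int) (Coordinates, error) {
		var c Coordinates
		err := session.Query(`SELECT coords FROM points WHERE id = ?`, id).Scan(&c)
		return c, err
	}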
Package managedblockchainquery provides the API client, operations, and parameter types for Amazon Managed Blockchain Query. Amazon Managed Blockchain (AMB) Query provides you with convenient access to multi-blockchain network data, which makes it easier for you to extract contextual data related to blockchain activity. You can use AMB Query to read data from public blockchain networks, such as Bitcoin Mainnet and Ethereum Mainnet. You can also get information such as the current and historical balances of addresses, or you can get a list of blockchain transactions for a given time period. Additionally, you can get details of a given transaction, such as transaction events, which you can further analyze or use in business logic for your applications.
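As with other AWS SDK for Go v2 clients, you construct the client from shared configuration; the sketch below shows only client construction, since the exact operation input types are best taken from the operation docs:

	package main

	import (
		"context"
		"log"

		"github.com/aws/aws-sdk-go-v2/config"
		"github.com/aws/aws-sdk-go-v2/service/managedblockchainquery"
	)

	func main() {
		// Load shared AWS configuration (region, credentials) and build
		// the service client; operations then follow the usual
		// input-struct/output-struct call shape.
		cfg, err := config.LoadDefaultConfig(context.TODO())
		if err != nil {
			log.Fatal(err)
		}
		client := managedblockchainquery.NewFromConfig(cfg)
		_ = client // call operations such as ListTransactions on this client
	}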
Package paymentcryptography provides the API client, operations, and parameter types for Payment Cryptography Control Plane. Amazon Web Services Payment Cryptography Control Plane APIs manage encryption keys for use during payment-related cryptographic operations. You can create, import, export, share, manage, and delete keys. You can also manage Identity and Access Management (IAM) policies for keys. For more information, see Identity and access management in the Amazon Web Services Payment Cryptography User Guide. To use encryption keys for payment-related transaction processing and associated cryptographic operations, you use the Amazon Web Services Payment Cryptography Data Plane. You can perform actions like encrypt, decrypt, generate, and verify payment-related data. All Amazon Web Services Payment Cryptography API calls must be signed and transmitted using Transport Layer Security (TLS). We recommend you always use the latest supported TLS version for logging API requests. Amazon Web Services Payment Cryptography supports CloudTrail for control plane operations. CloudTrail is a service that logs Amazon Web Services API calls and related events for your Amazon Web Services account and delivers them to an Amazon S3 bucket you specify. By using the information collected by CloudTrail, you can determine what requests were made to Amazon Web Services Payment Cryptography, who made the request, when it was made, and so on. If you don't configure a trail, you can still view the most recent events in the CloudTrail console. For more information, see the CloudTrail User Guide.
Package notify implements access to filesystem events. Notify is a high-level abstraction over filesystem watchers like inotify, kqueue, FSEvents, FEN or ReadDirectoryChangesW. Watcher implementations are split into two groups: ones that natively support recursive notifications (FSEvents and ReadDirectoryChangesW) and ones that do not (inotify, kqueue, FEN). For more details see the watcher and recursiveWatcher interfaces in the watcher.go source file. On top of filesystem watchers notify maintains a watchpoint tree, which provides a strategy for creating and closing filesystem watches and dispatching filesystem events to user channels. An event set is just an event list joined into a single event value using the bitwise OR operator. Both the platform-independent (see Constants) and platform-specific events can be used. Refer to the event_*.go source files for information about the available events. A filesystem watch, or just a watch, is a platform-specific entity which represents a single path registered for notifications for a specific event set. Setting a watch means using platform-specific API calls for creating / initializing said watch. For each watcher the API call is: To rewatch means to either shrink or expand an event set that was previously registered during a watch operation for a particular filesystem watch. A watchpoint is a list of user channel and event set pairs for a particular path (a watchpoint tree node). A single watchpoint can contain multiple different user channels registered to listen for one or more events. A single user channel can be registered in one or more watchpoints, recursive and non-recursive ones alike.
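A minimal recursive watchpoint in the spirit of the package's README (the import path is the package's canonical one):

	package main

	import (
		"log"

		"github.com/rjeczalik/notify"
	)

	func main() {
		// Buffered channel so the dispatcher does not block sending events.
		c := make(chan notify.EventInfo, 1)

		// "./..." sets a recursive watchpoint on the current directory.
		if err := notify.Watch("./...", c, notify.Write, notify.Remove); err != nil {
			log.Fatal(err)
		}
		defer notify.Stop(c)

		// Block until an event is dispatched to the channel.
		ei := <-c
		log.Println("got event:", ei)
	}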
Package goarchtest provides a fluent API for testing architectural constraints in Go applications. GoArchTest helps enforce architectural boundaries that Go's compiler cannot check. While Go prevents circular imports, it allows architectural violations like inner layers depending on outer layers. GoArchTest bridges this gap by providing executable architectural tests. Add architectural tests to your Go test files: GoArchTest supports several predefined architecture patterns: - Clean Architecture - Hexagonal Architecture - Layered Architecture - MVC Architecture - DDD with Clean Architecture - CQRS Architecture - Event Sourced CQRS Architecture The predicate system allows flexible filtering and testing of types: ## Type Filters ## Dependency Analysis ## Logical Operators Create custom rules for specific architectural constraints: Generate reports and visualizations of your architecture: ## Clean Architecture Validation ## Service Layer Validation ## Naming Convention Validation Include architectural tests in your continuous integration: GoArchTest is perfect for: While Go's compiler prevents circular imports (A→B→A), it allows architectural violations: GoArchTest bridges the gap between what Go's compiler enforces and what good software architecture requires. For more examples and detailed documentation, visit: https://github.com/solrac97gr/goarchtest
Package mcwss implements a websocket server to be used with Minecraft Bedrock Edition. It aims to implement most of the protocol and abstraction around it to achieve an easy-to-use API for interacting with the client. When using mcwss.NewServer(nil), the default configuration is used. Once the server is run (Server.Run()), your client may connect to it using `/connect localhost:8000/ws`. The client will remain connected to the websocket until it either connects to a new one, closes the entire game, or the websocket server chooses to forcibly close the connection. After connecting to a websocket server in a world that has cheats enabled, it is possible to join (online) games, like Mineplex, or other single-player worlds that do not have cheats enabled. Executing commands or other actions that would modify the world are not available on those servers, but events will still be sent by the client as expected.
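A hedged sketch of a server; NewServer and Run are named above, while the OnConnection hook and Player type are assumptions about the package's API:

	package main

	import (
		"log"

		"github.com/sandertv/mcwss"
	)

	func main() {
		server := mcwss.NewServer(nil) // nil selects the default configuration

		// OnConnection (an assumed hook) fires once a client has run
		// `/connect localhost:8000/ws`.
		server.OnConnection(func(player *mcwss.Player) {
			log.Println("player connected")
		})

		server.Run() // blocks, serving websocket connections from the client
	}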
Package eventbus provides a flexible event-driven messaging system for the modular framework. This module enables decoupled communication between application components through an event bus pattern. It supports both synchronous and asynchronous event processing, multiple event bus engines, and configurable event handling strategies. The eventbus module offers the following capabilities: The module can be configured through the EventBusConfig structure: The module registers itself as a service for dependency injection: Basic event publishing: Event subscription patterns: Subscription management: The module supports different event processing patterns: **Synchronous Processing**: Events are processed immediately in the same goroutine that published them. Best for lightweight operations and when ordering is important. **Asynchronous Processing**: Events are queued and processed by worker goroutines. Best for heavy operations, external API calls, or when you don't want to block the publisher. Currently supported engines:
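Whichever engine is configured, the publish/subscribe flow looks roughly like the following purely illustrative sketch (assuming context and log imports); every type and method name here is an assumption rather than the module's confirmed API:

	// Hypothetical shapes, for illustration only.
	type Event struct {
		Topic   string
		Payload any
	}

	type EventBus interface {
		Publish(ctx context.Context, event Event) error
		Subscribe(ctx context.Context, topic string, handler func(context.Context, Event) error) error
	}

	func wireUp(ctx context.Context, bus EventBus) error {
		// Subscribe first so the handler observes subsequent events.
		err := bus.Subscribe(ctx, "user.created", func(ctx context.Context, e Event) error {
			log.Printf("user created: %v", e.Payload)
			return nil
		})
		if err != nil {
			return err
		}
		// With a synchronous engine this runs the handler inline; with an
		// asynchronous engine it is queued for worker goroutines.
		return bus.Publish(ctx, Event{Topic: "user.created", Payload: map[string]string{"id": "42"}})
	}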
Package gochrome aims to be a complete Chrome DevTools Protocol Viewer implementation. Versioned packages are available. Currently the only version is `tot`, or Tip-of-Tree. Stable versions will be made available in the future. This is beta software and hasn't been well exercised in real-world applications. See https://chromedevtools.github.io/devtools-protocol/ The Chrome DevTools Protocol allows for tools to instrument, inspect, debug and profile Chromium, Chrome and other Blink-based browsers. Many existing projects currently use the protocol. The Chrome DevTools uses this protocol and the team maintains its API. Instrumentation is divided into a number of domains (DOM, Debugger, Network etc.). Each domain defines a number of commands it supports and events it generates. Both commands and events are serialized JSON objects of a fixed structure. You can either debug over the wire using the raw messages as they are described in the corresponding domain documentation, or use the extension JavaScript API. The latest (tip-of-tree) protocol (tot) changes frequently and can break at any time. However, it captures the full capabilities of the Protocol, whereas the stable release is a subset. There is no backwards compatibility support guaranteed for the capabilities it introduces. Resources: Basics: Using DevTools as a protocol client. The Developer Tools front-end can attach to a remotely running Chrome instance for debugging. For this scenario to work, you should start your host Chrome instance with the remote-debugging-port command line switch: Then you can start a separate client Chrome instance, using a distinct user profile: Now you can navigate to the given port from your client and attach to any of the discovered tabs for debugging: http://localhost:9222 You will find the Developer Tools interface identical to the embedded one, and here is why: In this scenario, you can substitute the Developer Tools front-end with your own implementation. Instead of navigating to the HTML page at http://localhost:9222, your application can discover available pages by requesting: http://localhost:9222/json and getting a JSON object with information about inspectable pages along with the WebSocket addresses that you could use in order to start instrumenting them. Remote debugging is especially useful when debugging remote instances of the browser or attaching to the embedded devices. Blink port owners are responsible for exposing debugging connections to the external users. This is especially handy to understand how the DevTools frontend makes use of the protocol. First, run Chrome with the debugging port open: Then, select the Chromium Projects item in the Inspectable Pages list. Now that DevTools is up and fullscreen, open DevTools to inspect it. Press Cmd-R in the new inspector to make the first restart. Now head to the Network Panel, filter by Websocket, select the connection and click the Frames tab. Now you can easily see the frames of WebSocket activity as you use the first instance of the DevTools. To allow Chrome extensions to interact with the protocol, we introduced the chrome.debugger extension API that exposes this JSON message transport interface. As a result, you can not only attach to the remotely running Chrome instance, but also instrument it from its own extension. The Chrome Debugger Extension API provides a higher-level API where the command domain, name and body are provided explicitly in the `sendCommand` call.
This API hides request ids and handles binding of the request with its response, hence allowing `sendCommand` to report the result in the callback function call. One can also use this API in combination with the other Extension APIs. If you are developing a Web-based IDE, you should implement an extension that exposes debugging capabilities to your page, and your IDE will be able to open pages with the target application, set breakpoints there, evaluate expressions in the console, live edit JavaScript and CSS, display the live DOM, network interaction and any other aspect that the Developer Tools are instrumenting today. Opening the embedded Developer Tools will terminate the remote connection and thus detach the extension. https://chromedevtools.github.io/devtools-protocol/#simultaneous The canonical protocol definitions live in the Chromium source tree (browser_protocol.json and js_protocol.json). They are maintained manually by the DevTools engineering team. These files are mirrored (hourly) on GitHub in the devtools-protocol repo. The declarative protocol definitions are used across tools. Within Chromium, a binding layer is created for the Chrome DevTools to interact with, and separately the protocol is used for Chrome Headless’s C++ interface. What’s the protocol_externs file? It’s created via generate_protocol_externs.py and is useful for tools using the closure compiler. The TypeScript story is here. Not yet. See bugger-daemon’s third-party docs. See also the endpoints implementation in Chromium. /json/protocol was added in Chrome 60. The endpoint is exposed as webSocketDebuggerUrl in /json/version. Note the browser in the URL, rather than page. If Chrome was launched with --remote-debugging-port=0 and chose an open port, the browser endpoint is written to both stderr and the DevToolsActivePort file in the browser profile folder. Yes, as of Chrome 63! See Multi-client remote debugging support. Upon disconnection, the outgoing client will receive a detached event. For example: View the enum of possible reasons. (For reference: the original patch.) After disconnection, some apps have chosen to pause their state and offer a reconnect button.
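For instance, a small Go program can perform the /json page discovery described above, assuming Chrome was started with --remote-debugging-port=9222:

	package main

	import (
		"encoding/json"
		"fmt"
		"log"
		"net/http"
	)

	func main() {
		// Each discovered page includes a webSocketDebuggerUrl that a
		// protocol client can attach to.
		resp, err := http.Get("http://localhost:9222/json")
		if err != nil {
			log.Fatal(err)
		}
		defer resp.Body.Close()

		var pages []struct {
			Title                string `json:"title"`
			URL                  string `json:"url"`
			WebSocketDebuggerURL string `json:"webSocketDebuggerUrl"`
		}
		if err := json.NewDecoder(resp.Body).Decode(&pages); err != nil {
			log.Fatal(err)
		}
		for _, p := range pages {
			fmt.Printf("%s (%s) -> %s\n", p.Title, p.URL, p.WebSocketDebuggerURL)
		}
	}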
Package xmatters provides a Go client for interacting with the xMatters REST API. xMatters is a communication platform that enables enterprises to manage and automate communication with their employees, customers, and other stakeholders during incidents and other critical events. This package allows you to programmatically access and utilize the xMatters REST API to integrate xMatters functionality into your Go applications. API Documentation: Usage:
Package nxlsclient provides a Go client for interacting with the Nx Language Server Protocol (LSP) server. The nxlsclient package enables Go applications to leverage Nx Console's powerful tooling by embedding the Nx language server directly in the package and providing a clean API for communication. The nxlsclient's primary features include: Creating a client and initializing the LSP connection: After initialization, you can use the Commander to interact with the Nx workspace: The client supports event-based programming through LSP notifications:
Package mitto offers a simple listener API for application events.
Package windowseventtracereceiver implements a receiver that uses the Windows Event Trace (ETW) API to collect events.
Package psg provides an API for launching (scattering) tasks and aggregating (gathering) their results. It separates these stages so that the tasks can run concurrently while aggregation remains sequential. This reduces the wall-clock time required for an overall operation (job) as compared to executing the tasks serially, without adding synchronization complexity to the aggregation logic. Since tasks require resources to execute, and those resources are limited, psg also provides a way to model pools of resources as limits on the number of tasks allowed to run at the same time. Different classes of task, for instance compute-bound or I/O-bound, can be executed in the context of different pools and therefore subject to different concurrency constraints. Non-trivial tasks often involve different stages that use different kinds of resources, for instance I/O to retrieve a chunk of data followed by compute to process it. The psg package therefore allows gather functions to launch new tasks into the same or different pools, all within the context of the same job, creating incremental pipelines that are free of unnecessary synchronization barriers between stages and that neither over- nor under-utilize multiple distinct groups of resources. "Hello world" example that uses psg to run a couple of tasks and gather their results. Observable uses psg to run a few tasks and produce logging that demonstrates the sequence of events. Pipeline demonstrates the use of multiple psg pools to re-implement the MD5All function in errgroup's pipeline example, which itself is a re-implementation of the MD5All function described in Go Concurrency Patterns: Pipelines and cancellation.
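For contrast, the scatter/gather pattern psg implements can be approximated with golang.org/x/sync/errgroup; this sketch is not the psg API, and unlike psg it gathers only after a full synchronization barrier:

	package main

	import (
		"fmt"
		"log"

		"golang.org/x/sync/errgroup"
	)

	func main() {
		inputs := []int{1, 2, 3, 4, 5}
		results := make([]int, len(inputs))

		var g errgroup.Group
		g.SetLimit(2) // stand-in for a pool admitting two concurrent tasks

		for i, in := range inputs {
			i, in := i, in // pin loop variables for the closure
			g.Go(func() error {
				results[i] = in * in // scatter: runs concurrently, bounded by the limit
				return nil
			})
		}
		if err := g.Wait(); err != nil {
			log.Fatal(err)
		}

		// Gather: aggregate sequentially, but only after the Wait barrier.
		// psg instead lets gathering proceed as individual tasks finish.
		sum := 0
		for _, r := range results {
			sum += r
		}
		fmt.Println("sum of squares:", sum)
	}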
Package sentry gives you the ability to send events to a Sentry server. It provides a clean API with comprehensive support for Sentry's various interfaces and an easy-to-remember syntax.
Package gomcp provides a Go implementation of the Model Context Protocol (MCP). The Model Context Protocol (MCP) is a standardized communication protocol designed to facilitate interaction between applications and Large Language Models (LLMs). This library provides a complete Go implementation of the protocol with support for all specification versions (2024-11-05, 2025-03-26, and draft) with automatic version detection and negotiation. Starting with v1.5.0, all public APIs are locked and stable. This library is ready for production use with the following guarantees: - Full MCP protocol implementation - Client and server components - Multiple transport options (stdio, HTTP, WebSocket, Server-Sent Events) - Automatic protocol version negotiation - Comprehensive type safety - Support for all MCP operations: tools, resources, prompts, and sampling - Flexible configuration options - Process management for external MCP servers - Server configuration file support The library is organized into the following main packages: ## Client Example ## Server Configuration Example ## Server Example GOMCP includes robust functionality for managing external MCP server processes: For more information, see the docs/examples/server-config.md documentation. This library implements the Model Context Protocol as defined at: https://github.com/microsoft/modelcontextprotocol For detailed documentation, examples, and specifications, see: https://modelcontextprotocol.github.io/ For more examples, see the examples directory in this repository. gomcp follows semantic versioning. Starting with v1.5.0, the APIs are locked and stable, making this library ready for production use. The current version is available through the Version constant.