Package rod is a high-level driver directly based on DevTools Protocol. This example opens https://github.com/, searches for "git", and then gets the header element which gives the description for Git. Rod uses https://golang.org/pkg/context to handle cancellations for IO blocking operations; most of the time it's for timeouts. The context will be recursively passed to all sub-methods. For example, methods like Page.Context(ctx) return a clone of the page with the ctx, and all the methods of the returned page will use the ctx if they have IO blocking operations. Page.Timeout and Page.WithCancel are just shortcuts for Page.Context. Browser and Element work the same way. Shows how we can further customize the browser with the launcher library. Usually you use the launcher lib to set the browser's command line flags (switches). Doc for flags: https://peter.sh/experiments/chromium-command-line-switches Shows how to change the retry/polling options that are used to query elements. This is useful when you want to customize the element query retry logic. When rod doesn't have a feature that you need, you can easily call the cdp interface to achieve it. List of cdp API: https://github.com/go-rod/rod/tree/main/lib/proto Shows how to disable headless mode and debug. Rod provides a lot of debug options; you can set them with setter methods or use environment variables. Doc for environment variables: https://pkg.go.dev/github.com/go-rod/rod/lib/defaults We use "Must" prefixed functions to write example code, but in production you may want to use the no-prefix versions of them. About why we use "Must" as the prefix, it's similar to https://golang.org/pkg/regexp/#MustCompile Shows how to share a remote object reference between two Eval calls. Shows how to listen for events. Shows how to intercept requests and modify both the request and the response. In the entire process of hijacking one request, the --req-> and --res-> parts are the ones that can be modified. Shows how to handle multiple possible results of an action, such as when you log in to a page and the result can be success or wrong password. Example_search shows how to use Search to get elements inside nested iframes or shadow DOMs. It works the same as https://developers.google.com/web/tools/chrome-devtools/dom#search Shows how to update the state of the current page. In this example we enable the network domain. Rod uses a mouse cursor to simulate clicks, so if a button is moving because of an animation, the click may not work as expected. We usually use WaitStable to make sure the target isn't changing anymore. When you want to wait for an ajax request to complete, this example will be useful.
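A minimal sketch of the flow described above, using rod's Must-prefixed API and Page.Timeout as the context shortcut; the CSS selectors for GitHub's search box and result header are assumptions and may not match GitHub's current markup:

	package main

	import (
		"fmt"
		"time"

		"github.com/go-rod/rod"
		"github.com/go-rod/rod/lib/input"
	)

	func main() {
		// Launch and connect to a browser with the default options.
		browser := rod.New().MustConnect()
		defer browser.MustClose()

		// Timeout returns a clone of the page whose IO blocking operations
		// are bounded by a context with the given timeout.
		page := browser.MustPage("https://github.com").Timeout(30 * time.Second)

		// The selectors below are illustrative and may be outdated.
		page.MustElement("input").MustInput("git").MustType(input.Enter)

		// Wait for the result header that describes Git and print its text.
		fmt.Println(page.MustElement(".codesearch-results p").MustText())
	}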
Package personalizeevents provides the API client, operations, and parameter types for Amazon Personalize Events. Amazon Personalize can consume real-time user event data, such as stream or click data, and use it for model training either alone or combined with historical data. For more information see Recording item interaction events.
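A hedged sketch of recording a single item interaction event with the AWS SDK for Go v2; the tracking ID, session ID, user ID, and item ID below are placeholders, not values from the original documentation:

	package main

	import (
		"context"
		"log"
		"time"

		"github.com/aws/aws-sdk-go-v2/aws"
		"github.com/aws/aws-sdk-go-v2/config"
		"github.com/aws/aws-sdk-go-v2/service/personalizeevents"
		"github.com/aws/aws-sdk-go-v2/service/personalizeevents/types"
	)

	func main() {
		ctx := context.Background()

		cfg, err := config.LoadDefaultConfig(ctx)
		if err != nil {
			log.Fatal(err)
		}
		client := personalizeevents.NewFromConfig(cfg)

		// Send one click event for model training; identifiers are placeholders.
		_, err = client.PutEvents(ctx, &personalizeevents.PutEventsInput{
			TrackingId: aws.String("tracking-id"),
			SessionId:  aws.String("session-1"),
			UserId:     aws.String("user-1"),
			EventList: []types.Event{{
				EventType: aws.String("click"),
				ItemId:    aws.String("item-123"),
				SentAt:    aws.Time(time.Now()),
			}},
		})
		if err != nil {
			log.Fatal(err)
		}
	}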
Package cloudwatchlogs provides the client and types for making API requests to Amazon CloudWatch Logs. You can use Amazon CloudWatch Logs to monitor, store, and access your log files from Amazon EC2 instances, AWS CloudTrail, or other sources. You can then retrieve the associated log data from CloudWatch Logs using the CloudWatch console, CloudWatch Logs commands in the AWS CLI, the CloudWatch Logs API, or the CloudWatch Logs SDK. You can use CloudWatch Logs to: Monitor logs from EC2 instances in real time: You can use CloudWatch Logs to monitor applications and systems using log data. For example, CloudWatch Logs can track the number of errors that occur in your application logs and send you a notification whenever the rate of errors exceeds a threshold that you specify. CloudWatch Logs uses your log data for monitoring, so no code changes are required. For example, you can monitor application logs for specific literal terms (such as "NullReferenceException") or count the number of occurrences of a literal term at a particular position in log data (such as "404" status codes in an Apache access log). When the term you are searching for is found, CloudWatch Logs reports the data to a CloudWatch metric that you specify. Monitor AWS CloudTrail logged events: You can create alarms in CloudWatch and receive notifications of particular API activity as captured by CloudTrail and use the notification to perform troubleshooting. Archive log data: You can use CloudWatch Logs to store your log data in highly durable storage. You can change the log retention setting so that any log events older than this setting are automatically deleted. The CloudWatch Logs agent makes it easy to quickly send both rotated and non-rotated log data off of a host and into the log service. You can then access the raw log data when you need it. See https://docs.aws.amazon.com/goto/WebAPI/logs-2014-03-28 for more information on this service. See the cloudwatchlogs package documentation for more information. https://docs.aws.amazon.com/sdk-for-go/api/service/cloudwatchlogs/ To use Amazon CloudWatch Logs with the SDK, use the New function to create a new service client. With that client you can make API requests to the service. These clients are safe to use concurrently. See the SDK's documentation for more information on how to use the SDK. https://docs.aws.amazon.com/sdk-for-go/api/ See the aws.Config documentation for more information on configuring SDK clients. https://docs.aws.amazon.com/sdk-for-go/api/aws/#Config See the Amazon CloudWatch Logs client CloudWatchLogs for more information on creating a client for this service. https://docs.aws.amazon.com/sdk-for-go/api/service/cloudwatchlogs/#New
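A hedged illustration of the client setup described above using the AWS SDK for Go v1; the shared-config option and the log group prefix are assumptions made for the example:

	package main

	import (
		"fmt"
		"log"

		"github.com/aws/aws-sdk-go/aws"
		"github.com/aws/aws-sdk-go/aws/session"
		"github.com/aws/aws-sdk-go/service/cloudwatchlogs"
	)

	func main() {
		// Create a session from the shared config/credentials files.
		sess := session.Must(session.NewSessionWithOptions(session.Options{
			SharedConfigState: session.SharedConfigEnable,
		}))

		// New creates the CloudWatch Logs service client; it is safe for concurrent use.
		svc := cloudwatchlogs.New(sess)

		out, err := svc.DescribeLogGroups(&cloudwatchlogs.DescribeLogGroupsInput{
			LogGroupNamePrefix: aws.String("/my-app"), // placeholder prefix
		})
		if err != nil {
			log.Fatal(err)
		}
		for _, g := range out.LogGroups {
			fmt.Println(aws.StringValue(g.LogGroupName))
		}
	}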
Package devopsguru provides the API client, operations, and parameter types for Amazon DevOps Guru. Amazon DevOps Guru helps you identify anomalous behavior in business-critical operational applications. You specify the Amazon Web Services resources that you want DevOps Guru to cover, then the Amazon CloudWatch metrics and Amazon Web Services CloudTrail events related to those resources are analyzed. When anomalous behavior is detected, DevOps Guru creates an insight that includes recommendations, related events, and related metrics that can help you improve your operational applications. For more information, see What is Amazon DevOps Guru. You can specify 1 or 2 Amazon Simple Notification Service topics so you are notified every time a new insight is created. You can also enable DevOps Guru to generate an OpsItem in Amazon Web Services Systems Manager for each insight to help you manage and track your work addressing insights. To learn about the DevOps Guru workflow, see How DevOps Guru works. To learn about DevOps Guru concepts, see Concepts in DevOps Guru.
Package connectparticipant provides the API client, operations, and parameter types for Amazon Connect Participant Service. See also: Participant Service actions and Participant Service data types. Amazon Connect is an easy-to-use omnichannel cloud contact center service that enables companies of any size to deliver superior customer service at a lower cost. Amazon Connect communications capabilities make it easy for companies to deliver personalized interactions across communication channels, including chat. Use the Amazon Connect Participant Service to manage participants (for example, agents, customers, and managers listening in), and to send messages and events within a chat contact. The APIs in the service enable the following: sending chat messages, attachment sharing, managing a participant's connection state and message events, and retrieving chat transcripts.
Package greengrassv2 provides the API client, operations, and parameter types for AWS IoT Greengrass V2. IoT Greengrass brings local compute, messaging, data management, sync, and ML inference capabilities to edge devices. This enables devices to collect and analyze data closer to the source of information, react autonomously to local events, and communicate securely with each other on local networks. Local devices can also communicate securely with Amazon Web Services IoT Core and export IoT data to the Amazon Web Services Cloud. IoT Greengrass developers can use Lambda functions and components to create and deploy applications to fleets of edge devices for local operation. IoT Greengrass Version 2 provides a new major version of the IoT Greengrass Core software, new APIs, and a new console. Use this API reference to learn how to use the IoT Greengrass V2 API operations to manage components, deployments, and core devices. For more information, see What is IoT Greengrass? in the IoT Greengrass V2 Developer Guide.
Package ioteventsdata provides the API client, operations, and parameter types for AWS IoT Events Data. IoT Events monitors your equipment or device fleets for failures or changes in operation, and triggers actions when such events occur. You can use IoT Events Data API commands to send inputs to detectors, list detectors, and view or update a detector's status. For more information, see What is IoT Events? in the IoT Events Developer Guide.
Package opsworkscm provides the API client, operations, and parameter types for AWS OpsWorks CM. AWS OpsWorks for configuration management (CM) is a service that runs and manages configuration management servers. You can use AWS OpsWorks CM to create and manage AWS OpsWorks for Chef Automate and AWS OpsWorks for Puppet Enterprise servers, and add or remove nodes for the servers to manage. Glossary of terms Server: A configuration management server that can be highly-available. The configuration management server runs on an Amazon Elastic Compute Cloud (EC2) instance, and may use various other AWS services, such as Amazon Relational Database Service (RDS) and Elastic Load Balancing. A server is a generic abstraction over the configuration manager that you want to use, much like Amazon RDS. In AWS OpsWorks CM, you do not start or stop servers. After you create servers, they continue to run until they are deleted. Engine: The engine is the specific configuration manager that you want to use. Valid values in this release include ChefAutomate and Puppet . Backup: This is an application-level backup of the data that the configuration manager stores. AWS OpsWorks CM creates an S3 bucket for backups when you launch the first server. A backup maintains a snapshot of a server's configuration-related attributes at the time the backup starts. Events: Events are always related to a server. Events are written during server creation, when health checks run, when backups are created, when system maintenance is performed, etc. When you delete a server, the server's events are also deleted. Account attributes: Every account has attributes that are assigned in the AWS OpsWorks CM database. These attributes store information about configuration limits (servers, backups, etc.) and your customer account. AWS OpsWorks CM supports the following endpoints, all HTTPS. You must connect to one of the following endpoints. Your servers can only be accessed or managed within the endpoint in which they are created. opsworks-cm.us-east-1.amazonaws.com opsworks-cm.us-east-2.amazonaws.com opsworks-cm.us-west-1.amazonaws.com opsworks-cm.us-west-2.amazonaws.com opsworks-cm.ap-northeast-1.amazonaws.com opsworks-cm.ap-southeast-1.amazonaws.com opsworks-cm.ap-southeast-2.amazonaws.com opsworks-cm.eu-central-1.amazonaws.com opsworks-cm.eu-west-1.amazonaws.com For more information, see AWS OpsWorks endpoints and quotas in the AWS General Reference. All API operations allow for five requests per second with a burst of 10 requests per second.
Package iotevents provides the API client, operations, and parameter types for AWS IoT Events. AWS IoT Events monitors your equipment or device fleets for failures or changes in operation, and triggers actions when such events occur. You can use AWS IoT Events API operations to create, read, update, and delete inputs and detector models, and to list their versions.
Package arczonalshift provides the API client, operations, and parameter types for AWS ARC - Zonal Shift. Welcome to the API Reference Guide for zonal shift and zonal autoshift in Amazon Route 53 Application Recovery Controller (Route 53 ARC). You can start a zonal shift to move traffic for a load balancer resource away from an Availability Zone to help your application recover quickly from an impairment in an Availability Zone. For example, you can recover your application from a developer's bad code deployment or from an Amazon Web Services infrastructure failure in a single Availability Zone. You can also configure zonal autoshift for supported load balancer resources. Zonal autoshift is a capability in Route 53 ARC where you authorize Amazon Web Services to shift away application resource traffic from an Availability Zone during events, on your behalf, to help reduce your time to recovery. Amazon Web Services starts an autoshift when internal telemetry indicates that there is an Availability Zone impairment that could potentially impact customers. To help make sure that zonal autoshift is safe for your application, you must also configure practice runs when you enable zonal autoshift for a resource. Practice runs start weekly zonal shifts for a resource, to shift traffic for the resource away from an Availability Zone. Practice runs help you to make sure, on a regular basis, that you have enough capacity in all the Availability Zones in an Amazon Web Services Region for your application to continue to operate normally when traffic for a resource is shifted away from one Availability Zone. Before you configure practice runs or enable zonal autoshift, we strongly recommend that you prescale your application resource capacity in all Availability Zones in the Region where your application resources are deployed. You should not rely on scaling on demand when an autoshift or practice run starts. Zonal autoshift, including practice runs, works independently, and does not wait for auto scaling actions to complete. Relying on auto scaling, instead of pre-scaling, can result in loss of availability. If you use auto scaling to handle regular cycles of traffic, we strongly recommend that you configure the minimum capacity of your auto scaling to continue operating normally with the loss of an Availability Zone. Be aware that Route 53 ARC does not inspect the health of individual resources. Amazon Web Services only starts an autoshift when Amazon Web Services telemetry detects that there is an Availability Zone impairment that could potentially impact customers. In some cases, resources might be shifted away that are not experiencing impact. For more information about using zonal shift and zonal autoshift, see the Amazon Route 53 Application Recovery Controller Developer Guide.
Package eventsource implements a client and server to allow streaming data one-way over an HTTP connection using the Server-Sent Events API http://dev.w3.org/html5/eventsource/ The client and server respect the Last-Event-ID header. If the Repository interface is implemented on the server, events can be replayed in case of a network disconnection.
Package secretsmanager provides the client and types for making API requests to AWS Secrets Manager. AWS Secrets Manager is a web service that enables you to store, manage, and retrieve secrets. This guide provides descriptions of the Secrets Manager API. For more information about using this service, see the AWS Secrets Manager User Guide (http://docs.aws.amazon.com/secretsmanager/latest/userguide/introduction.html). This version of the Secrets Manager API Reference documents the Secrets Manager API version 2017-10-17. As an alternative to using the API directly, you can use one of the AWS SDKs, which consist of libraries and sample code for various programming languages and platforms (such as Java, Ruby, .NET, iOS, and Android). The SDKs provide a convenient way to create programmatic access to AWS Secrets Manager. For example, the SDKs take care of cryptographically signing requests, managing errors, and retrying requests automatically. For more information about the AWS SDKs, including how to download and install them, see Tools for Amazon Web Services (http://aws.amazon.com/tools/). We recommend that you use the AWS SDKs to make programmatic API calls to Secrets Manager. However, you also can use the Secrets Manager HTTP Query API to make direct calls to the Secrets Manager web service. To learn more about the Secrets Manager HTTP Query API, see Making Query Requests (http://docs.aws.amazon.com/secretsmanager/latest/userguide/query-requests.html) in the AWS Secrets Manager User Guide. Secrets Manager supports GET and POST requests for all actions. That is, the API doesn't require you to use GET for some actions and POST for others. However, GET requests are subject to the size limitation of a URL. Therefore, for operations that require larger sizes, use a POST request. We welcome your feedback. Send your comments to awssecretsmanager-feedback@amazon.com (mailto:awssecretsmanager-feedback@amazon.com), or post your feedback and questions in the AWS Secrets Manager Discussion Forum (http://forums.aws.amazon.com/forum.jspa?forumID=296). For more information about the AWS Discussion Forums, see Forums Help (http://forums.aws.amazon.com/help.jspa). The JSON that AWS Secrets Manager expects as your request parameters and that the service returns as a response to HTTP query requests are single, long strings without line breaks or white space formatting. The JSON shown in the examples is formatted with both line breaks and white space to improve readability. When example input parameters would also result in long strings that extend beyond the screen, we insert line breaks to enhance readability. You should always submit the input as a single JSON text string. AWS Secrets Manager supports AWS CloudTrail, a service that records AWS API calls for your AWS account and delivers log files to an Amazon S3 bucket. By using information that's collected by AWS CloudTrail, you can determine which requests were successfully made to Secrets Manager, who made the request, when it was made, and so on. For more about AWS Secrets Manager and its support for AWS CloudTrail, see Logging AWS Secrets Manager Events with AWS CloudTrail (http://docs.aws.amazon.com/secretsmanager/latest/userguide/monitoring.html#monitoring_cloudtrail) in the AWS Secrets Manager User Guide. To learn more about CloudTrail, including how to turn it on and find your log files, see the AWS CloudTrail User Guide (http://docs.aws.amazon.com/awscloudtrail/latest/userguide/what_is_cloud_trail_top_level.html).
See https://docs.aws.amazon.com/goto/WebAPI/secretsmanager-2017-10-17 for more information on this service. See the secretsmanager package documentation for more information. https://docs.aws.amazon.com/sdk-for-go/api/service/secretsmanager/ To use AWS Secrets Manager with the SDK, use the New function to create a new service client. With that client you can make API requests to the service. These clients are safe to use concurrently. See the SDK's documentation for more information on how to use the SDK. https://docs.aws.amazon.com/sdk-for-go/api/ See the aws.Config documentation for more information on configuring SDK clients. https://docs.aws.amazon.com/sdk-for-go/api/aws/#Config See the AWS Secrets Manager client SecretsManager for more information on creating a client for this service. https://docs.aws.amazon.com/sdk-for-go/api/service/secretsmanager/#New
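A minimal sketch of creating the client with New and fetching a secret value with the AWS SDK for Go v1; the secret name is a placeholder:

	package main

	import (
		"fmt"
		"log"

		"github.com/aws/aws-sdk-go/aws"
		"github.com/aws/aws-sdk-go/aws/session"
		"github.com/aws/aws-sdk-go/service/secretsmanager"
	)

	func main() {
		// Create a session, then the Secrets Manager client with New.
		sess := session.Must(session.NewSession())
		svc := secretsmanager.New(sess)

		// Retrieve the current value of a secret; the name is a placeholder.
		out, err := svc.GetSecretValue(&secretsmanager.GetSecretValueInput{
			SecretId: aws.String("my-app/db-password"),
		})
		if err != nil {
			log.Fatal(err)
		}
		fmt.Println(aws.StringValue(out.SecretString))
	}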
Package gopoet is a library to assist with generating Go code. It includes a model of the Go language that is simpler, and thus easier to work with, than those provided by the "go/ast" and "go/types" packages. It also provides adapter methods to allow simple interoperability with elements from the "go/types" and "reflect" packages. The Go Poet API and functionality is strongly influenced by a similar library for Java (for generating Java code) named Java Poet (https://github.com/square/javapoet). TypeName is the way Go Poet represents Go types. There is API in this package for constructing TypeName instances and for converting type representations from the "go/types" and "reflect" packages to TypeName values. It includes related types for representing function signatures, struct fields, and interface methods (Signature, FieldSpec, and MethodSpec respectively). GoFile is the root type in Go Poet for building a representation of Go language elements. The GoFile represents the file itself. The FileElement (and its various concrete implementations) represent top-level declarations in the file. And types like FieldSpec, InterfaceEmbed, and InterfaceMethod represent the elements that comprise struct and interface type definitions. Statements and expressions are not modeled by the Go Poet API, so function bodies and const and var initializers are represented with a type named CodeBlock. Usage of Go Poet involves constructing a GoFile, filling it with elements, and then using the various WriteGoFile* methods to then translate these models into Go source code. Import statements need not be defined manually. GoFile embeds a type named Imports which assists with managing import statements. It tracks all packages that are referenced, generating import aliases as necessary in the event of conflicts. After all referenced packages have been resolved, gopoet.Imports can then generate the import statements necessary. It also provides API for re-writing various references, to adjust their package qualifier so that references to elements or types in other packages are interpolated into Go source code with the correct qualifiers. The lowest level building blocks for the above API are representations of packages, symbols (references to named package-level elements, like consts, vars, types, and funcs), and method references (like a func symbol, but also includes a type qualifier, not just a package qualifier). Various parts of the API provide methods for accessing/converting to these types. Under the hood, it is packages and symbols that are re-written by an Imports instance to ensure all referenced elements are rendered with the package qualifier (e.g. the package name or associated import alias). Go Poet does not attempt to model Go statements and expressions or provide any way to create structured representations of function and method bodies. This is very similar to Java Poet *except* that Go Poet does not provide a custom mechanism for printing and formatting code. It instead relies on the existing facilities in the "fmt" and "text/template" packages. This package provides several types for modeling elements of the Go language that can then be referenced in code blocks (via "%s" or "%v" format specifiers or as elements of a data value rendered by a template). The CodeBlock type and related methods include API that resembles the various Print* functions in the "fmt" package. 
Before these are rendered to source code, references to Go elements and types are translated to account for the import statements (and any associated aliases) for the file context into which they are being rendered. Format arguments can also include instances of reflect.Type or even items from the "go/types" package: types.Type, types.Object, and *types.Package. These types of values will result in proper references to these elements when the code is actually rendered. Similarly, templates can be rendered, and the data value supplied to the template will be reconstructed, with any elements therein being first translated to have the right package qualifiers. As described above, code blocks (which represent function and method bodies and initializer expressions) can be rendered from templates and provided data values that the template renders. It is also possible to completely eschew modeling generated code with various elements and to generate a file completely from a template. In this case, you can still get value from Go Poet by using a *gopoet.Imports type to track imported packages and assign aliases, and then render the resulting []gopoet.ImportSpec from your template. Furthermore, the value that the template renders can contain instances of gopoet.TypeName, gopoet.Package, and gopoet.Symbol, just like when rendering code blocks for function bodies. Calling imports.QualifyTemplateData(data) will re-write the values in the data value so they are properly qualified per the imported packages. Do this before rendering the template. One limitation of re-writing template data is that it cannot change the *types* of elements except in limited circumstances. For example, a Type from the "go/types" package cannot be converted to a gopoet.TypeName if the reference is a struct field whose type is types.Type (since gopoet.TypeName does not implement types.Type). Because of this, not all referenced types and elements can be re-written, and so they may not be rendered correctly. For this reason, it is recommended to use gopoet.TypeName as the means of referring to types in a template data value, not types.Type or reflect.Type.
Package mailgun provides methods for interacting with the Mailgun API. It automates the HTTP request/response cycle, encodings, and other details needed by the API. This SDK lets you do everything the API lets you, in a more Go-friendly way. For further information please see the Mailgun documentation at http://documentation.mailgun.com/ This document includes a number of examples that illustrate some aspects of the GUI which might be misleading or confusing. All examples included are derived from an acceptance test. Note that every SDK function has a corresponding acceptance test, so if you don't find an example for a function you'd like to know more about, please check the acceptance sub-package for a corresponding test. Of course, contributions to the documentation are always welcome as well. Feel free to submit a pull request or open a GitHub issue if you cannot find an example to suit your needs. Many SDK functions consume a pair of parameters called limit and skip. These help control how much data Mailgun sends over the wire. Limit, as you'd expect, gives a count of the number of records you want to receive. Note that, at present, Mailgun imposes its own cap of 100, for all API endpoints. Skip indicates where in the data set you want to start receiving from. Mailgun defaults to the very beginning of the dataset if not specified explicitly. If you don't particularly care how much data you receive, you may specify DefaultLimit. If you similarly don't care about where the data starts, you may specify DefaultSkip. Functions which accept a limit and skip setting, in general, will also return a total count of the items returned. Note that this total count is not the total in the bundle returned by the call. You can determine that easily enough with Go's len() function. The total that you receive actually refers to the complete set of data on the server. This total may well exceed the size returned from the API. If this happens, you may find yourself needing to iterate over the dataset of interest. For example: Copyright (c) 2013-2014, Michael Banzon. All rights reserved. Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met: * Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer. * Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution. * Neither the names of Mailgun, Michael Banzon, nor the names of their contributors may be used to endorse or promote products derived from this software without specific prior written permission. THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
Package log implements a simple structured logging API designed with few assumptions. Designed for centralized logging solutions such as Kinesis which require encoding and decoding before fanning-out to handlers. You may use this package with inline handlers, much like Logrus, however a centralized solution is recommended so that apps do not need to be re-deployed to add or remove logging service providers. Errors are passed to WithError(), populating the "error" field. Multiple fields can be set, via chaining, or WithFields(). Structured logging is supported with fields, and is recommended over the formatted message variants. Trace can be used to simplify logging of start and completion events, for example an upload which may fail. Unstructured logging is supported, but not recommended since it is hard to query.
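A short sketch of the patterns described above (WithFields, WithError, and Trace) using the package's default handler; the field names and the simulated failure are illustrative:

	package main

	import (
		"errors"
		"time"

		"github.com/apex/log"
	)

	func upload(name string) (err error) {
		// Trace logs the start event now and, via Stop, the completion
		// event (with duration and any error) when the function returns.
		defer log.WithField("file", name).Trace("upload").Stop(&err)
		time.Sleep(10 * time.Millisecond)
		return errors.New("boom") // simulate a failure
	}

	func main() {
		// Structured fields via WithFields, or chained WithField calls.
		ctx := log.WithFields(log.Fields{
			"app": "images",
			"env": "prod",
		})
		ctx.Info("starting upload")

		if err := upload("avatar.png"); err != nil {
			// WithError populates the "error" field.
			ctx.WithError(err).Error("upload failed")
		}
	}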
Package logr defines abstract interfaces for logging. Packages can depend on these interfaces and callers can implement logging in whatever way is appropriate. This design derives from Dave Cheney's blog. This is a BETA grade API. Until there is a significant 2nd implementation, I don't really know how it will change. The logging specifically makes it non-trivial to use format strings, to encourage attaching structured information instead of unstructured format strings. Logging is done using a Logger. Loggers can have name prefixes and named values attached, so that all log messages logged with that Logger have some base context associated. The term "key" is used to refer to the name associated with a particular value, to disambiguate it from the general Logger name. For instance, suppose we're trying to reconcile the state of an object, and we want to log that we've made some decision. With the traditional log package, we might write With logr's structured logging, we'd write Depending on our logging implementation, we could then make logging decisions based on field values (like only logging such events for objects in a certain namespace), or copy the structured information into a structured log store. For logging errors, Logger has a method called Error. Suppose we wanted to log an error while reconciling. With the traditional log package, we might write With logr, we'd instead write This functions similarly to: However, it ensures that a standard key for the error value ("error") is used across all error logging. Furthermore, certain implementations may choose to attach additional information (such as stack traces) on calls to Error, so it's preferred to use Error to log errors. Each log message from a Logger has four types of context: logger name, log verbosity, log message, and the named values. The Logger name consists of a series of name "segments" added by successive calls to WithName. These name segments will be joined in some way by the underlying implementation. It is strongly recommended that name segments contain simple identifiers (letters, digits, and hyphen), and do not contain characters that could muddle the log output or confuse the joining operation (e.g. whitespace, commas, periods, slashes, brackets, quotes, etc). Log verbosity represents how little a log matters. Level zero, the default, matters most. Increasing levels matter less and less. Try to avoid lots of different verbosity levels, and instead provide useful keys, logger names, and log messages for users to filter on. It's illegal to pass a log level below zero. The log message consists of a constant message attached to the log line. This should generally be a simple description of what's occurring, and should never be a format string. Variable information can then be attached using named values (key/value pairs). Keys are arbitrary strings, while values may be any Go value. While users are generally free to use key names of their choice, it's generally best to avoid using the following keys, as they're frequently used by implementations: Implementations are encouraged to make use of these keys to represent the above concepts, when necessary (for example, in a pure-JSON output form, it would be necessary to represent at least message and timestamp as ordinary named values).
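A small sketch of the structured style described above, contrasting it with a format-string call; it assumes a recent go-logr release that ships the funcr adapter, and the key names are illustrative:

	package main

	import (
		"errors"
		"fmt"

		"github.com/go-logr/logr"
		"github.com/go-logr/logr/funcr"
	)

	func reconcile(log logr.Logger) {
		// Traditional: log.Printf("reconciling pod %s/%s", namespace, name)
		// With logr, the message stays constant and variable data becomes key/value pairs.
		log = log.WithName("reconciler").WithValues("namespace", "default", "pod", "web-1")

		log.V(0).Info("starting reconciliation")

		if err := errors.New("pod not found"); err != nil {
			// Error always records the error under the standard "error" key.
			log.Error(err, "unable to reconcile", "retries", 3)
		}
	}

	func main() {
		// funcr is a minimal logr implementation that formats each entry and
		// hands it to a function; here we just print to stdout.
		logger := funcr.New(func(prefix, args string) {
			fmt.Println(prefix, args)
		}, funcr.Options{})

		reconcile(logger)
	}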
Package pagerduty is a Go API client for both the PagerDuty v2 REST and Events API. Most methods should be implemented, and it's recommended to use the WithContext variant of each method and to specify a context with a timeout. To debug responses from the API, you can instruct the client to capture the last response from the API. Please see the documentation for the SetDebugFlag() and LastAPIResponse() methods for more details.
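A hedged sketch of the recommended usage: create a client, enable capturing the last API response for debugging, and call a WithContext method under a timeout. The token is a placeholder, and the DebugCaptureLastResponse flag name is an assumption to be checked against the package docs:

	package main

	import (
		"context"
		"fmt"
		"log"
		"time"

		"github.com/PagerDuty/go-pagerduty"
	)

	func main() {
		// The auth token is a placeholder; real tokens come from your PagerDuty account.
		client := pagerduty.NewClient("REST-API-TOKEN")

		// Capture the last API response so it can be inspected via LastAPIResponse().
		client.SetDebugFlag(pagerduty.DebugCaptureLastResponse)

		// Prefer the WithContext variants with a timeout, as the package recommends.
		ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
		defer cancel()

		resp, err := client.ListIncidentsWithContext(ctx, pagerduty.ListIncidentsOptions{})
		if err != nil {
			log.Fatal(err)
		}
		for _, inc := range resp.Incidents {
			fmt.Println(inc.ID, inc.Title)
		}
	}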
Package XGB provides the X Go Binding, which is a low-level API to communicate with the core X protocol and many of the X extensions. It is *very* closely modeled on XCB, so that experience with XCB (or xpyb) is easily translatable to XGB. That is, it uses the same cookie/reply model and is thread safe. There are otherwise no major differences (in the API). Most uses of XGB typically fall under the realm of window manager and GUI kit development, but other applications (like pagers, panels, tilers, etc.) may also require XGB. Moreover, it is a near certainty that if you need to work with X, xgbutil will be of great use to you as well: https://github.com/BurntSushi/xgbutil This is an extremely terse example that demonstrates how to connect to X, create a window, listen to StructureNotify events and Key{Press,Release} events, map the window, and print out all events received. An example with accompanying documentation can be found in examples/create-window. This is another small example that shows how to query Xinerama for geometry information of each active head. Accompanying documentation for this example can be found in examples/xinerama. XGB can benefit greatly from parallelism due to its concurrent design. For evidence of this claim, please see the benchmarks in xproto/xproto_test.go. xproto/xproto_test.go contains a number of contrived tests that stress particular corners of XGB that I presume could be problem areas. Namely: requests with no replies, requests with replies, checked errors, unchecked errors, sequence number wrapping, cookie buffer flushing (i.e., forcing a round trip every N requests made that don't have a reply), getting/setting properties and creating a window and listening to StructureNotify events. Both XCB and xpyb use the same Python module (xcbgen) for a code generator. XGB (before this fork) used the same code generator as well, but in my attempt to add support for more extensions, I found the code generator extremely difficult to work with. Therefore, I re-wrote the code generator in Go. It can be found in its own sub-package, xgbgen, of xgb. My design of xgbgen includes a rough consideration that it could be used for other languages. I am reasonably confident that the core X protocol is in full working form. I've also tested the Xinerama and RandR extensions sparingly. Many of the other existing extensions have Go source generated (and are compilable) and are included in this package, but I am currently unsure of their status. They *should* work. XKB is the only extension that intentionally does not work, although I suspect that GLX also does not work (however, there is Go source code for GLX that compiles, unlike XKB). I don't currently have any intention of getting XKB working, due to its complexity and my current mental incapacity to test it.
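A condensed version of the terse create-window example described above (most error handling trimmed; the window size is arbitrary):

	package main

	import (
		"fmt"
		"log"

		"github.com/BurntSushi/xgb"
		"github.com/BurntSushi/xgb/xproto"
	)

	func main() {
		// Connect to the X server named by $DISPLAY.
		X, err := xgb.NewConn()
		if err != nil {
			log.Fatal(err)
		}

		screen := xproto.Setup(X).DefaultScreen(X)

		// Allocate an id and create a 500x500 window that reports
		// StructureNotify and Key{Press,Release} events.
		wid, _ := xproto.NewWindowId(X)
		xproto.CreateWindow(X, screen.RootDepth, wid, screen.Root,
			0, 0, 500, 500, 0,
			xproto.WindowClassInputOutput, screen.RootVisual,
			xproto.CwEventMask,
			[]uint32{
				xproto.EventMaskStructureNotify |
					xproto.EventMaskKeyPress |
					xproto.EventMaskKeyRelease,
			})

		xproto.MapWindow(X, wid)

		// Print every event or error received.
		for {
			ev, xerr := X.WaitForEvent()
			if ev == nil && xerr == nil {
				return // connection closed
			}
			if ev != nil {
				fmt.Printf("Event: %s\n", ev)
			}
			if xerr != nil {
				fmt.Printf("Error: %s\n", xerr)
			}
		}
	}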
The socketio package is a simple abstraction layer for different web browser- supported transport mechanisms. It is fully compatible with the Socket.IO client side JavaScript socket API library by LearnBoost Labs (http://socket.io/), but through custom codecs it might fit other client implementations too. It (together with LearnBoost's client-side libraries) provides an easy way for developers to access the most popular browser transport mechanisms today: multipart- and long-polling XMLHttpRequests, HTML5 WebSockets and forever-frames. The socketio package works hand-in-hand with the standard http package by plugging itself into a configurable ServeMux. It has a callback-style API for handling connection events. The callbacks are: - SocketIO.OnConnect - SocketIO.OnDisconnect - SocketIO.OnMessage Other utility-methods include: - SocketIO.ServeMux - SocketIO.Broadcast - SocketIO.BroadcastExcept - SocketIO.GetConn - Conn.Send Each new connection will be automatically assigned a unique session id and using those the clients can reconnect without losing messages: the server persists clients' pending messages (until some configurable point) if they can't be immediately delivered. All writes through Conn.Send are by design asynchronous. Finally, the actual format on the wire is described by a separate Codec. The default codecs (SIOCodec and SIOStreamingCodec) are compatible with LearnBoost's Socket.IO client. For example, here is a simple chat server:
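A reconstructed sketch of the chat server mentioned above, built from the listed callbacks; the import path, the NewSocketIO config argument, and Message.Data are assumptions about this package's API rather than verified signatures:

	package main

	import (
		"fmt"
		"log"
		"net/http"

		"socketio" // placeholder import path; adjust to wherever this package lives
	)

	func main() {
		// A nil config is assumed to use the package defaults, including the default codecs.
		sio := socketio.NewSocketIO(nil)

		sio.OnConnect(func(c *socketio.Conn) {
			sio.Broadcast(fmt.Sprintf("connected: %s", c))
		})
		sio.OnDisconnect(func(c *socketio.Conn) {
			sio.BroadcastExcept(c, fmt.Sprintf("disconnected: %s", c))
		})
		sio.OnMessage(func(c *socketio.Conn, msg socketio.Message) {
			sio.BroadcastExcept(c, fmt.Sprintf("%s: %s", c, msg.Data()))
		})

		// Plug the handlers into a ServeMux and serve the client files alongside.
		mux := sio.ServeMux()
		mux.Handle("/", http.FileServer(http.Dir("www/")))

		if err := http.ListenAndServe(":8080", mux); err != nil {
			log.Fatal("ListenAndServe: ", err)
		}
	}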
Package duit is a pure go, cross-platform, MIT-licensed, UI toolkit for developers. The examples/ directory has small code examples for working with duit and its UIs. Examples are the recommended starting point. Start with NewDUI to create a DUI: essentially a window and all the UI state. The user interface consists of a hierarchy of "UIs" like Box, Scroll, Button, Label, etc. They are called UIs, after the interface UI they all implement. The zero structs for UIs have sane default behaviour so you only have to fill in the fields you need. UIs are kept/wrapped in a Kid, to track their layout/draw state. Use NewKids() to build up the UIs for your application. You won't see much of the Kid-types/functions otherwise, unless you implement a new UI. You are in charge of the main event loop, receiving mouse/keyboard/window events from the dui.Inputs channel, and typically passing them on unchanged to dui.Input. All callbacks and functions on UIs are called from inside dui.Input. From there you can also safely change the UIs, no locking required. After changing a UI you are responsible for calling MarkLayout or MarkDraw to tell duit the UI needs a new layout or draw. This may sound like more work, but this tradeoff keeps the API small and easy to use. If you need to change the UI from a goroutine outside of the main loop, e.g. for blocking calls, you can send a function that makes those modifications on the dui.Call channel, which will be run on the main loop through dui.Inputs. After handling an input, duit will layout or draw as necessary, no need to render explicitly. Embedding a UI into your own data structure is often an easy way to build up UI hierarchies. Scroll and Edit show a scrollbar. Use button 1 on the scrollbar to scroll up, button 3 to scroll down. If you click more near the top, you scroll less. More near the bottom, more. Button 2 scrolls to the absolute place, where you clicked. Button 4 and 5 are wheel up and wheel down, and also scroll less/more depending on position in the UI.
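A minimal sketch of the NewDUI/NewKids/main-loop flow described above; the second NewDUI argument is passed as nil here on the assumption that the default options are acceptable:

	package main

	import (
		"log"

		"github.com/mjl-/duit"
	)

	func main() {
		// NewDUI creates the window and all UI state.
		dui, err := duit.NewDUI("example", nil)
		if err != nil {
			log.Fatalf("new dui: %v", err)
		}

		// Build the UI hierarchy; zero values have sane defaults.
		dui.Top.UI = &duit.Box{
			Kids: duit.NewKids(
				&duit.Label{Text: "hello from duit"},
			),
		}
		dui.Render()

		// The main event loop: pass inputs on to dui.Input and watch for errors.
		for {
			select {
			case e := <-dui.Inputs:
				dui.Input(e)
			case warn, ok := <-dui.Error:
				if !ok {
					return
				}
				log.Printf("duit: %s", warn)
			}
		}
	}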
Package ivsrealtime provides the API client, operations, and parameter types for Amazon Interactive Video Service RealTime. The Amazon Interactive Video Service (IVS) real-time API is REST compatible, using a standard HTTP API and an AWS EventBridge event stream for responses. JSON is used for both requests and responses, including errors. Key Concepts Stage — A virtual space where participants can exchange video in real time. Participant token — A token that authenticates a participant when they join a stage. Participant object — Represents participants (people) in the stage and contains information about them. When a token is created, it includes a participant ID; when a participant uses that token to join a stage, the participant is associated with that participant ID. There is a 1:1 mapping between participant tokens and participants. For server-side composition: Composition process — Composites participants of a stage into a single video and forwards it to a set of outputs (e.g., IVS channels). Composition operations support this process. Composition — Controls the look of the outputs, including how participants are positioned in the video. For more information about your IVS live stream, also see Getting Started with Amazon IVS Real-Time Streaming. A tag is a metadata label that you assign to an AWS resource. A tag comprises a key and a value, both set by you. For example, you might set a tag as topic:nature to label a particular video category. See Best practices and strategies in Tagging AWS Resources and Tag Editor for details, including restrictions that apply to tags and "Tag naming limits and requirements"; Amazon IVS stages have no service-specific constraints beyond what is documented there. Tags can help you identify and organize your AWS resources. For example, you can use the same tag for different resources to indicate that they are related. You can also use tags to manage access (see Access Tags). The Amazon IVS real-time API has these tag-related operations: TagResource, UntagResource, and ListTagsForResource. The following resource supports tagging: Stage. At most 50 tags can be applied to a resource.
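A hedged sketch of creating a stage with a tag using the AWS SDK for Go v2; the stage name and tag below are placeholders echoing the topic:nature example above:

	package main

	import (
		"context"
		"fmt"
		"log"

		"github.com/aws/aws-sdk-go-v2/aws"
		"github.com/aws/aws-sdk-go-v2/config"
		"github.com/aws/aws-sdk-go-v2/service/ivsrealtime"
	)

	func main() {
		ctx := context.Background()

		cfg, err := config.LoadDefaultConfig(ctx)
		if err != nil {
			log.Fatal(err)
		}
		client := ivsrealtime.NewFromConfig(cfg)

		// Create a stage with a descriptive tag; name and tag are placeholders.
		out, err := client.CreateStage(ctx, &ivsrealtime.CreateStageInput{
			Name: aws.String("nature-cam"),
			Tags: map[string]string{"topic": "nature"},
		})
		if err != nil {
			log.Fatal(err)
		}
		fmt.Println("stage ARN:", aws.ToString(out.Stage.Arn))
	}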
Package arikawa contains a set of modular packages that allows you to make a Discord bot or any type of session (OAuth unsupported). Package session is the simplest abstraction, which combines the API package and the Gateway websocket package together into one. This could be used for minimal bots that only use gateway events and such. Package state abstracts on top of session and provides a local cache of API calls and events. Bots that either don't need a command router or already have their own should use this package. Package bot abstracts on top of state and provides a command router based on Go code. This is similar to discord.py's API, only it's Go and there are no optional arguments (yet, although it could be worked around). Most bots are recommended to use this package, as it's the easiest way to make a bot. Package voice provides an abstraction on top of State and adds voice support. This allows bots to join voice channels and talk. The package uses an io.Writer approach rather than a channel, contrary to other Discord libraries.
Package bugsnag captures errors in real-time and reports them to BugSnag (http://bugsnag.com). Using bugsnag-go is a three-step process. 1. As early as possible in your program configure the notifier with your APIKey. This sets up handling of panics that would otherwise crash your app. 2. Add bugsnag to places that already catch panics. For example, you should add it to the HTTP server when you call ListenAndServe: If that's not possible, you can also wrap each HTTP handler manually: 3. To notify BugSnag of an error that is not a panic, pass it to bugsnag.Notify. This will also log the error message using the configured Logger. For detailed integration instructions see https://docs.bugsnag.com/platforms/go. The only required configuration is the BugSnag API key which can be obtained by clicking "Project Settings" on the top of your BugSnag dashboard after signing up. We also recommend you set the ReleaseStage, AppType, and AppVersion if these make sense for your deployment workflow. If you need to attach extra data to BugSnag events, you can do that using the rawData mechanism. Most of the functions that send errors to BugSnag allow you to pass in any number of interface{} values as rawData. The rawData can consist of the Severity, Context, User or MetaData types listed below, and there is also builtin support for *http.Requests. If you want to add custom tabs to your bugsnag dashboard you can pass any value in as rawData, and then process it into the event's metadata using a bugsnag.OnBeforeNotify() hook. If necessary you can pass Configuration in as rawData, or modify the Configuration object passed into OnBeforeNotify hooks. Configuration passed in this way only affects the current notification.
Package bugsnag captures errors in real-time and reports them to Bugsnag (http://bugsnag.com). Using bugsnag-go is a three-step process. 1. As early as possible in your program configure the notifier with your APIKey. This sets up handling of panics that would otherwise crash your app. 2. Add bugsnag to places that already catch panics. For example, you should add it to the HTTP server when you call ListenAndServe: If that's not possible, you can also wrap each HTTP handler manually: 3. To notify Bugsnag of an error that is not a panic, pass it to bugsnag.Notify. This will also log the error message using the configured Logger. For detailed integration instructions see https://bugsnag.com/docs/notifiers/go. The only required configuration is the Bugsnag API key which can be obtained by clicking "Settings" on the top of https://bugsnag.com/ after signing up. We also recommend you set the ReleaseStage, AppType, and AppVersion if these make sense for your deployment workflow. If you need to attach extra data to Bugsnag notifications you can do that using the rawData mechanism. Most of the functions that send errors to Bugsnag allow you to pass in any number of interface{} values as rawData. The rawData can consist of the Severity, Context, User or MetaData types listed below, and there is also built-in support for *http.Requests. If you want to add custom tabs to your bugsnag dashboard you can pass any value in as rawData, and then process it into the event's metadata using a bugsnag.OnBeforeNotify() hook. If necessary you can pass Configuration in as rawData, or modify the Configuration object passed into OnBeforeNotify hooks. Configuration passed in this way only affects the current notification.
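The three steps might look roughly like the sketch below. The import path assumes the v2 module, and Configure, Handler, and Notify are the package's documented entry points; adapt the path and configuration fields to your setup.

	package main

	import (
		"log"
		"net/http"

		"github.com/bugsnag/bugsnag-go/v2"
	)

	func main() {
		// Step 1: configure the notifier as early as possible.
		bugsnag.Configure(bugsnag.Configuration{
			APIKey:       "YOUR-API-KEY",
			ReleaseStage: "production",
			AppVersion:   "1.2.3",
		})

		http.HandleFunc("/work", func(w http.ResponseWriter, r *http.Request) {
			if err := doWork(); err != nil {
				// Step 3: report a non-panic error, attaching the request as rawData.
				bugsnag.Notify(err, r)
				http.Error(w, "failed", http.StatusInternalServerError)
				return
			}
			w.Write([]byte("ok"))
		})

		// Step 2: wrap the default mux so panics in handlers are reported.
		log.Fatal(http.ListenAndServe(":8080", bugsnag.Handler(nil)))
	}

	func doWork() error { return nil }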
Package harmony provides an interface to the Discord API (https://discord.com/developers/docs/intro). The first thing you do with Harmony is to create a Client. NewClient does just that by returning a new Client pre-configured with sane defaults which should work fine in most cases. However, should you need a more specific configuration, you can always tweak it with optional `ClientOption`s. See the documentation of NewClient and the ClientOption type for more information on how to do so. Once you have a Client, you can start interacting with the Discord API, but some methods (such as event handlers) won't be available until you connect to Discord's Gateway. You can do so by simply calling the Connect method of the Client: It is only when successfully connected to the Gateway that your bot will appear as online and your Client will be able to receive events and send messages. Harmony's HTTP API is organized by resource. A resource maps to a core concept in the Discord world, such as a User or a Channel. Here is the list of resources you can interact with: Every interaction you can have with a resource can be accessed via methods attached to it. For example, if you wish to send a message to a channel, first access the desired channel resource, then send the message: Endpoints that do not fall into one of those resources (creating a Guild for example, or getting valid Voice Regions) are directly available on the Client. To receive messages, use the OnMessageCreate method and give it your handler. It will be called each time a message is sent to a channel your bot is in, with the message as a parameter. To register handlers for other types of events, see Client.On* methods. Note that your handlers are called in their own goroutine, meaning whatever you do inside of them won't block future events. When connecting to Discord, a session state is created with initial data sent by Discord's Gateway. As events are received by the client, this state is constantly updated so it always has the newest data available. This session state acts as a cache to avoid making requests over the HTTP API each time. If you need to get information about the current user, you can simply query the current state like so: Because this state might become memory hungry for bots that are in a very large number of servers, you can fine-tune events you want to track with the WithGatewayIntents option. State can also be completely disabled using the WithStateTracking option while creating the harmony client.
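As a rough sketch of that flow, the example below creates a client, registers a message handler, connects, and sends a message through a channel resource. The exact constructor and resource method names here are assumptions based on the description above, not confirmed signatures, so verify them against the package's godoc.

	package main

	import (
		"context"
		"log"
		"os"

		"github.com/skwair/harmony"
	)

	func main() {
		client, err := harmony.NewClient(os.Getenv("BOT_TOKEN"))
		if err != nil {
			log.Fatal(err)
		}

		// Handlers run in their own goroutines and receive the message as a parameter.
		client.OnMessageCreate(func(m *harmony.Message) {
			log.Println("message received:", m.Content)
		})

		ctx := context.Background()
		if err := client.Connect(ctx); err != nil {
			log.Fatal(err)
		}
		defer client.Disconnect()

		// Access the desired channel resource, then send the message.
		if _, err := client.Channel("channel-id").SendMessage(ctx, "Hello from Harmony!"); err != nil {
			log.Println("send failed:", err)
		}

		select {}
	}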
Package metrics provides atomic measures and Prometheus exposition. Counter, Integer, Real and Histogram are live representations of events. Value updates should be part of the respective implementation. Otherwise, use Sample for captures with a timestamp. The Must functions deal with registration. Their use is intended for setup during application launch only. All metrics are permanent; the API offers no deletion by design.
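The intended flow, register once at launch with a Must constructor, then update atomically at runtime, might look like the sketch below. The MustCounter signature and the ServeHTTP exposition function are assumptions about this package's API rather than verified calls; confirm them against its godoc before relying on them.

	package main

	import (
		"net/http"

		"github.com/pascaldekloe/metrics"
	)

	// Register metrics during application launch only (assumed constructor signature).
	var requestCount = metrics.MustCounter("app_requests_total", "Number of requests handled.")

	func main() {
		http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
			requestCount.Add(1) // atomic live update
			w.Write([]byte("ok"))
		})

		// Expose the registered metrics in Prometheus text format (assumed helper).
		http.HandleFunc("/metrics", metrics.ServeHTTP)

		http.ListenAndServe(":8080", nil)
	}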
Package res is used to create REST, real time, and RPC APIs, where all your reactive web clients are synchronized seamlessly through Resgate: https://github.com/resgateio/resgate The implementation provides structs and methods for creating services that listen to requests and send events over a NATS server. Requests are handled concurrently for multiple resources, but the package guarantees that only one goroutine is executing handlers for any unique resource at any one time. This allows handlers to modify models and collections without additional synchronization such as mutexes. Create a new service: Add handlers for a model resource: Add handlers for a collection resource: Add handlers for parameterized resources: Add handlers for method calls: Send change event on model update: Send add event on collection update: Add handlers for authentication: Add handlers for access control: Start service:
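A condensed sketch of several of these steps (service creation, a model handler, a call handler, access control, and serving) follows; it is based on the go-res API as commonly shown in its examples, so check the package documentation for the exact option names.

	package main

	import res "github.com/jirenius/go-res"

	func main() {
		// Create a new service.
		s := res.NewService("example")

		// Add handlers for a model resource, with access control and a call method.
		s.Handle("mymodel",
			res.Access(res.AccessGranted),
			res.GetModel(func(r res.ModelRequest) {
				r.Model(map[string]string{"message": "Hello, world"})
			}),
			// Method call handler, e.g. "example.mymodel.set".
			res.Call("set", func(r res.CallRequest) {
				// Apply the change here (and send a change event), then acknowledge.
				r.OK(nil)
			}),
		)

		// Start the service by connecting to the NATS server.
		s.ListenAndServe("nats://localhost:4222")
	}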
Package log is a drop-in replacement for the standard Go logging library "log" which is fully source code compatible, supporting all of the standard library API while at the same time offering advanced logging features through an extended API. The design goals of gonelog were: Out of the box the default logger with package level methods works like the standard library *log.Logger with all the standard flags and methods: Under the hood the default *log.Logger is however a log context object which can have key/value data and which generates log events with a syslog level and hands them off to a log Handler for formatting and output. The default Logger supports all this as well, using log level constants source code compatible with the "log/syslog" package through the github.com/One-com/gone/log/syslog package: Logging with key/value data is (in its most simple form) done by calling level specific functions. The first argument is the message, subsequent arguments are key/value data: Each *log.Logger object has a current "log level" which determines the maximum log level for which events are actually generated. Logging above that level will be ignored. This log level can be controlled: Calling Fatal*() and Panic*() will, in addition to exiting/panicking, log at level ALERT. The Print*() methods will log events with a configurable "default" log level - which defaults to INFO. Per default the Logger *will* generate log events for Print*() calls even though the log level is lower. The Logger can be set to respect the actual log level also for Print*() statements by the second argument to SetPrintLevel(). A new custom Logger with its own behavior and formatting handler can be created: A custom Logger will not per default spend time timestamping events or registering file/line information. You have to enable that explicitly (it's not enabled by setting the flags on a formatting handler). When you have key/value data which you need logged in all log events, but don't want to remember to put into every log statement, you can create a "child" Logger: To simply set the standard logger in a minimal mode where it only outputs <level>message to STDOUT and lets an external daemon supervisor/log system do the rest (including timestamping), just do: Having many log statements can be expensive, especially if the arguments to be logged are resource intensive to compute and no log events are generated anyway. There are two ways to get around that. The first is to do lazy evaluation of arguments: The other is to pick a comma-ok style log function: Sometimes it can be repetitive to write a lot of log statements logging many attributes of the same kind of object by explicitly accessing every attribute. To make that simpler, every object can implement the Logable interface by creating a LogValues() function returning the attributes to be logged (with keys). The object can then be logged by directly providing it as an argument to a log function: Loggers can have names, placing them in a global "/" separated hierarchy. It's recommended to create a Logger by mentioning it by name using GetLogger("logger/name") - instead of creating unnamed Loggers with NewLogger(). If such a logger exists you will get it returned, so you can configure it and set the formatter/output. Otherwise a new logger by that name is created. Libraries are encouraged to publish the names of their Loggers and to name Loggers after their Go package.
This works exactly like the Python "logging" library - with one exception: When logging an event at a Logger, the tree of Loggers by name is only traversed toward the root to find the first Logger having a Handler attached, not returning an error. The log event is then sent to that handler. If that handler returns an error, the parent Logger and its Handler is tried. This allows you to construct a "last resort" parent for errors in the default log Handler. The Python behaviour is to send the event to all Handlers found in the Logger tree. That is not the way it's done here. Only one Handler will be given the event to log. If you want more Handlers getting the event, use a MultiHandler. Happy logging.
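For illustration, leveled key/value logging and named child loggers might be used roughly as in the sketch below. The exact function set shown (INFO/ERROR/NOTICE level functions, GetLogger, and With) is inferred from the description above rather than taken from the package's godoc, so treat the signatures as assumptions.

	package main

	import (
		"errors"

		log "github.com/One-com/gone/log"
	)

	func main() {
		// Level-specific functions: first the message, then key/value pairs.
		log.INFO("service starting", "port", 8080)

		if err := connect(); err != nil {
			log.ERROR("connect failed", "error", err)
		}

		// A named logger placed in the "/"-separated hierarchy.
		dbLog := log.GetLogger("myapp/db")

		// A child logger carrying fixed key/value data in every event (assumed helper).
		reqLog := dbLog.With("request_id", "abc123")
		reqLog.NOTICE("query executed", "rows", 42)
	}

	func connect() error { return errors.New("connection refused") }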
Package rpcclient implements a websocket-enabled Decred JSON-RPC client. This client provides a robust and easy-to-use client for interfacing with a Decred RPC server that uses a mostly btcd/bitcoin core style Decred JSON-RPC API. This client has been tested with dcrd (https://github.com/Decred-Next/dcrnd) and dcrwallet (https://github.com/decred/dcrwallet). In addition to the compatible standard HTTP POST JSON-RPC API, dcrd and dcrwallet provide a websocket interface that is more efficient than the standard HTTP POST method of accessing RPC. The section below discusses the differences between HTTP POST and websockets. By default, this client assumes the RPC server supports websockets and has TLS enabled. In practice, this currently means it assumes you are talking to dcrd or dcrwallet by default. However, configuration options are provided to fall back to HTTP POST and disable TLS to support talking with inferior bitcoin core style RPC servers. In HTTP POST-based JSON-RPC, every request creates a new HTTP connection, issues the call, waits for the response, and closes the connection. This adds quite a bit of overhead to every call and lacks flexibility for features such as notifications. In contrast, the websocket-based JSON-RPC interface provided by dcrd and dcrwallet only uses a single connection that remains open and allows asynchronous bi-directional communication. The websocket interface supports all of the same commands as HTTP POST, but they can be invoked without having to go through a connect/disconnect cycle for every call. In addition, the websocket interface provides other nice features such as the ability to register for asynchronous notifications of various events. The client provides both a synchronous (blocking) and asynchronous API. The synchronous (blocking) API is typically sufficient for most use cases. It works by issuing the RPC and blocking until the response is received. This allows straightforward code where you have the response as soon as the function returns. The asynchronous API works on the concept of futures. When you invoke the async version of a command, it will quickly return an instance of a type that promises to provide the result of the RPC at some future time. In the background, the RPC call is issued and the result is stored in the returned instance. Invoking the Receive method on the returned instance will either return the result immediately if it has already arrived, or block until it has. This is useful since it provides the caller with greater control over concurrency. The first important part of notifications is to realize that they will only work when connected via websockets. This should intuitively make sense because HTTP POST mode does not keep a connection open! All notifications provided by dcrd require registration to opt-in. For example, if you want to be notified when funds are received by a set of addresses, you register the addresses via the NotifyReceived (or NotifyReceivedAsync) function. Notifications are exposed by the client through the use of callback handlers which are set up via a NotificationHandlers instance that is specified by the caller when creating the client. It is important that these notification handlers complete quickly since they are intentionally in the main read loop and will block further reads until they complete. This provides the caller with the flexibility to decide what to do when notifications are coming in faster than they are being handled.
In particular this means issuing a blocking RPC call from a callback handler will cause a deadlock as more server responses won't be read until the callback returns, but the callback would be waiting for a response. Thus, any additional RPCs must be issued in a completely decoupled manner. By default, when running in websockets mode, this client will automatically keep trying to reconnect to the RPC server should the connection be lost. There is a back-off in between each connection attempt until it reaches one try per minute. Once a connection is re-established, all previously registered notifications are automatically re-registered and any in-flight commands are re-issued. This means from the caller's perspective, the request simply takes longer to complete. The caller may invoke the Shutdown method on the client to force the client to cease reconnect attempts and return ErrClientShutdown for all outstanding commands. The automatic reconnection can be disabled by setting the DisableAutoReconnect flag to true in the connection config when creating the client. Minor RPC Server Differences and Chain/Wallet Separation Some of the commands are extensions specific to a particular RPC server. For example, the DebugLevel call is an extension only provided by dcrd (and dcrwallet passthrough). Therefore if you call one of these commands against an RPC server that doesn't provide them, you will get an unimplemented error from the server. An effort has been made to call out which commands are extensions in their documentation. Also, it is important to realize that dcrd intentionally separates the wallet functionality into a separate process named dcrwallet. This means if you are connected to the dcrd RPC server directly, only the RPCs which are related to chain services will be available. Depending on your application, you might only need chain-related RPCs. In contrast, dcrwallet provides pass through treatment for chain-related RPCs, so it supports them in addition to wallet-related RPCs. There are 3 categories of errors that will be returned throughout this package: The first category of errors are typically one of ErrInvalidAuth, ErrInvalidEndpoint, ErrClientDisconnect, or ErrClientShutdown. NOTE: The ErrClientDisconnect will not be returned unless the DisableAutoReconnect flag is set since the client automatically handles reconnect by default as previously described. The second category of errors typically indicates a programmer error and as such the type can vary, but usually will be best handled by simply showing/logging it. The third category of errors, that is errors returned by the server, can be detected by type asserting the error in a *dcrjson.RPCError. For example, to detect if a command is unimplemented by the remote RPC server: The following full-blown client examples are in the examples directory:
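A sketch of creating a websocket client with notification handlers, making a blocking call, and inspecting a server-returned error follows. The versioned import paths and the OnBlockConnected callback signature are assumptions about the dcrd module layout; consult the rpcclient and dcrjson godoc for the versions you depend on.

	package main

	import (
		"errors"
		"log"
		"os"

		"github.com/decred/dcrd/dcrjson/v4"
		"github.com/decred/dcrd/rpcclient/v8"
	)

	func main() {
		// Callback handlers run in the main read loop, so keep them quick and never
		// issue blocking RPCs from inside them.
		ntfnHandlers := rpcclient.NotificationHandlers{
			OnBlockConnected: func(blockHeader []byte, transactions [][]byte) {
				log.Printf("block connected with %d transactions", len(transactions))
			},
		}

		certs, err := os.ReadFile("rpc.cert")
		if err != nil {
			log.Fatal(err)
		}

		connCfg := &rpcclient.ConnConfig{
			Host:         "localhost:9109",
			Endpoint:     "ws",
			User:         "rpcuser",
			Pass:         "rpcpass",
			Certificates: certs,
		}

		client, err := rpcclient.New(connCfg, &ntfnHandlers)
		if err != nil {
			log.Fatal(err)
		}
		defer client.Shutdown()

		// Synchronous (blocking) call.
		count, err := client.GetBlockCount()
		if err != nil {
			// Server-returned errors can be inspected as *dcrjson.RPCError, for
			// example to detect an unimplemented command.
			var rpcErr *dcrjson.RPCError
			if errors.As(err, &rpcErr) {
				log.Printf("server error %d: %s", rpcErr.Code, rpcErr.Message)
			}
			log.Fatal(err)
		}
		log.Println("block count:", count)
	}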
Package dcrrpcclient implements a websocket-enabled Decred JSON-RPC client. This client provides a robust and easy-to-use client for interfacing with a Decred RPC server that uses a mostly btcd/bitcoin core style Decred JSON-RPC API. This client has been tested with dcrd (https://github.com/decred/dcrd) and dcrwallet (https://github.com/decred/dcrwallet). In addition to the compatible standard HTTP POST JSON-RPC API, dcrd and dcrwallet provide a websocket interface that is more efficient than the standard HTTP POST method of accessing RPC. The section below discusses the differences between HTTP POST and websockets. By default, this client assumes the RPC server supports websockets and has TLS enabled. In practice, this currently means it assumes you are talking to dcrd or dcrwallet by default. However, configuration options are provided to fall back to HTTP POST and disable TLS to support talking with inferior bitcoin core style RPC servers. In HTTP POST-based JSON-RPC, every request creates a new HTTP connection, issues the call, waits for the response, and closes the connection. This adds quite a bit of overhead to every call and lacks flexibility for features such as notifications. In contrast, the websocket-based JSON-RPC interface provided by dcrd and dcrwallet only uses a single connection that remains open and allows asynchronous bi-directional communication. The websocket interface supports all of the same commands as HTTP POST, but they can be invoked without having to go through a connect/disconnect cycle for every call. In addition, the websocket interface provides other nice features such as the ability to register for asynchronous notifications of various events. The client provides both a synchronous (blocking) and asynchronous API. The synchronous (blocking) API is typically sufficient for most use cases. It works by issuing the RPC and blocking until the response is received. This allows straightforward code where you have the response as soon as the function returns. The asynchronous API works on the concept of futures. When you invoke the async version of a command, it will quickly return an instance of a type that promises to provide the result of the RPC at some future time. In the background, the RPC call is issued and the result is stored in the returned instance. Invoking the Receive method on the returned instance will either return the result immediately if it has already arrived, or block until it has. This is useful since it provides the caller with greater control over concurrency. The first important part of notifications is to realize that they will only work when connected via websockets. This should intuitively make sense because HTTP POST mode does not keep a connection open! All notifications provided by dcrd require registration to opt-in. For example, if you want to be notified when funds are received by a set of addresses, you register the addresses via the NotifyReceived (or NotifyReceivedAsync) function. Notifications are exposed by the client through the use of callback handlers which are set up via a NotificationHandlers instance that is specified by the caller when creating the client. It is important that these notification handlers complete quickly since they are intentionally in the main read loop and will block further reads until they complete. This provides the caller with the flexibility to decide what to do when notifications are coming in faster than they are being handled.
In particular this means issuing a blocking RPC call from a callback handler will cause a deadlock as more server responses won't be read until the callback returns, but the callback would be waiting for a response. Thus, any additional RPCs must be issued in a completely decoupled manner. By default, when running in websockets mode, this client will automatically keep trying to reconnect to the RPC server should the connection be lost. There is a back-off in between each connection attempt until it reaches one try per minute. Once a connection is re-established, all previously registered notifications are automatically re-registered and any in-flight commands are re-issued. This means from the caller's perspective, the request simply takes longer to complete. The caller may invoke the Shutdown method on the client to force the client to cease reconnect attempts and return ErrClientShutdown for all outstanding commands. The automatic reconnection can be disabled by setting the DisableAutoReconnect flag to true in the connection config when creating the client. Minor RPC Server Differences and Chain/Wallet Separation Some of the commands are extensions specific to a particular RPC server. For example, the DebugLevel call is an extension only provided by dcrd (and dcrwallet passthrough). Therefore if you call one of these commands against an RPC server that doesn't provide them, you will get an unimplemented error from the server. An effort has been made to call out which commands are extensions in their documentation. Also, it is important to realize that dcrd intentionally separates the wallet functionality into a separate process named dcrwallet. This means if you are connected to the dcrd RPC server directly, only the RPCs which are related to chain services will be available. Depending on your application, you might only need chain-related RPCs. In contrast, dcrwallet provides pass through treatment for chain-related RPCs, so it supports them in addition to wallet-related RPCs. There are 3 categories of errors that will be returned throughout this package: The first category of errors are typically one of ErrInvalidAuth, ErrInvalidEndpoint, ErrClientDisconnect, or ErrClientShutdown. NOTE: The ErrClientDisconnect will not be returned unless the DisableAutoReconnect flag is set since the client automatically handles reconnect by default as previously described. The second category of errors typically indicates a programmer error and as such the type can vary, but usually will be best handled by simply showing/logging it. The third category of errors, that is errors returned by the server, can be detected by type asserting the error in a *dcrjson.RPCError. For example, to detect if a command is unimplemented by the remote RPC server: The following full-blown client examples are in the examples directory:
Package dataset includes the operations needed for processing collections of JSON documents and their attachments. Authors R. S. Doiel, <rsdoiel@library.caltech.edu> and Tom Morrel, <tmorrell@library.caltech.edu> Copyright (c) 2021, Caltech All rights not granted herein are expressly reserved by Caltech. Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met: 1. Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer. 2. Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution. 3. Neither the name of the copyright holder nor the names of its contributors may be used to endorse or promote products derived from this software without specific prior written permission. THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. Package dataset provides a common approach for storing JSON object documents on local disc.
It is intended as a single-user system for intermediate processing of JSON content for analysis or batch processing. It is not a database management system (if you need a JSON database system I would suggest looking at CouchDB, MongoDB and Redis as a starting point). The approach dataset takes is to store JSON documents in a pairtree structure under the collection folder. The keys are the JSON document names. JSON documents (and possibly their attachments) are then stored based on that assignment in the pairtree. Conversely the collection.json document is used to find and retrieve documents from the collection. The layout of the metadata is as follows + Collection - a directory A key feature of dataset is to be Posix shell friendly. This has led to storing the JSON documents in a directory structure that standard Posix tooling can traverse. It has also meant that the JSON documents themselves remain on "disc" as plain text. This has facilitated integration with many other applications, programming languages and systems. Attachments are non-JSON documents explicitly "attached" that share the same pairtree path but are placed in a sub directory called "_". If the document name is "Jane.Doe.json" and the attachment is photo.jpg, the JSON document is "pairtree/Ja/ne/.D/e./Jane.Doe.json" and the photo is in "pairtree/Ja/ne/.D/e./_/photo.jpg". Additional operations besides storing and reading JSON documents are also supported. These include creating lists (arrays) of JSON documents from a list of keys, listing keys in the collection, counting documents in the collection, and indexing and searching by indexes. The primary use case driving the development of dataset is harvesting API content for library systems (e.g. EPrints, Invenio, ArchivesSpace, ORCID, CrossRef, OCLC). The harvesting needed to be done in such a way as to leverage existing Posix tooling (e.g. grep, sed, etc.) for processing and analysis. Initial use case: Caltech Library has many repository, catalog and record management systems (e.g. EPrints, Invenio, ArchivesSpace, Islandora). It is common practice to harvest data from these systems for analysis or processing. Harvested records typically come in XML or JSON format. JSON has proven a flexible way for working with the data and in our more modern tools is the common format we use to move data around. We needed a way to standardize how we stored these JSON records for intermediate processing to allow us to use the growing ecosystem of JSON related tooling available under Posix/Unix compatible systems.
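As a rough illustration of the key-to-path idea, the self-contained sketch below splits a key into two-character segments and builds the document and attachment paths. It is a simplified approximation only; the exact encoding dataset applies when constructing pairtree paths may differ in detail.

	package main

	import (
		"fmt"
		"path"
	)

	// pairtreePath splits a key into two-character segments, approximating how a
	// pairtree maps a document key to a nested directory path.
	func pairtreePath(key string) string {
		var parts []string
		runes := []rune(key)
		for i := 0; i < len(runes); i += 2 {
			end := i + 2
			if end > len(runes) {
				end = len(runes)
			}
			parts = append(parts, string(runes[i:end]))
		}
		return path.Join(parts...)
	}

	func main() {
		key := "Jane.Doe"
		dir := pairtreePath(key)

		// The JSON document lives inside the pairtree directory, and any
		// attachments share the same path under a "_" sub directory.
		fmt.Println(path.Join("pairtree", dir, key+".json"))
		fmt.Println(path.Join("pairtree", dir, "_", "photo.jpg"))
	}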
Package libaudit is a client library used for interfacing with the Linux kernel auditing framework. It provides an API for executing audit related tasks such as setting audit rules, changing the auditing configuration, and processing incoming audit events. The intent of this package is to provide a means for an application to take the role of auditd, for consumption and analysis of audit events in your Go program.
Package instalambda provides Instana tracing instrumentation for AWS Lambda functions. This example demonstrates how to instrument a handler function with Instana. This example demonstrates how to instrument a handler function invoked with an API Gateway event.
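A minimal handler instrumentation might look like the sketch below. It assumes the instalambda.NewHandler wrapper and the go-sensor import paths; adjust them to the SDK version you actually use.

	package main

	import (
		"context"

		"github.com/aws/aws-lambda-go/lambda"
		instana "github.com/instana/go-sensor"
		"github.com/instana/go-sensor/instrumentation/instalambda"
	)

	func handle(ctx context.Context) (string, error) {
		// Business logic goes here; the wrapper takes care of creating and
		// finishing the Instana entry span for each invocation.
		return "Hello from an instrumented Lambda!", nil
	}

	func main() {
		sensor := instana.NewSensor("my-lambda")

		// Wrap the handler so every invocation (including API Gateway events) is traced.
		lambda.StartHandler(instalambda.NewHandler(handle, sensor))
	}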
Package scylla implements an efficient shard-aware driver for ScyllaDB. Pass a keyspace and a list of initial node IP addresses to DefaultSessionConfig to create a new cluster configuration: Port can be specified as part of the address, the above is equivalent to: It is recommended to use the value set in the Scylla config for broadcast_address or listen_address, an IP address, not a domain name. This is because events from Scylla will use the configured IP address, which is used to index connected hosts. Then you can customize more options (see SessionConfig): When ready, create a session from the configuration and a context.Context; once the context is done the session will close automatically, stopping requests from being sent and new connections from being made. Don't forget to Close the session once you are done with it and not sure the context will be done: CQL protocol uses a SASL-based authentication mechanism and so consists of an exchange of server challenges and client response pairs. The details of the exchanged messages depend on the authenticator used. Currently the driver supports only the default password authenticator, which can be used like this: It is possible to secure traffic between the client and server with TLS; to do so just pass your tls.Config to the session config. For example: The driver by default will route prepared queries to nodes that hold data replicas based on partition key, and non-prepared queries in a round-robin fashion. To route queries to the local DC first, use TokenAwareDCAwarePolicy. For example, if the datacenter you want to primarily connect to is called dc1 (as configured in the database): The driver can only use token-aware routing for queries where all partition key columns are query parameters. For example, instead of use Create queries with Session.Query. Query values can be reused between different executions but must not be modified during execution of the query. To execute a query use Query.Exec: Result rows can be read like this See Example for a complete example. The driver can prepare DML queries (SELECT/INSERT/UPDATE/DELETE/BATCH statements). CQL protocol does not support preparing other query types. Session is safe to use from multiple goroutines, so to execute multiple concurrent queries, just execute them from several worker goroutines. The driver provides a synchronous-looking API (as recommended for Go APIs) and the queries are executed asynchronously at the protocol level. The driver supports paging of results with automatic prefetch of one page, see Query.PageSize and Query.Iter. It is also possible to control the paging manually with Query.PageState. Manual paging is useful if you want to store the page state externally, for example in a URL to allow users to browse pages in a result. You might want to sign/encrypt the paging state when exposing it externally since it contains data from primary keys. Paging state is specific to the CQL protocol version and the exact query used. It is meant as opaque state that should not be modified. If you send paging state from a different query or protocol version, then the behaviour is not defined (you might get unexpected results or an error from the server). For example, do not send paging state returned by a node using protocol version 3 to a node using protocol version 4. Also, when using protocol version 4, paging state between Cassandra 2.2 and 3.0 is incompatible (https://issues.apache.org/jira/browse/CASSANDRA-10880). The driver does not check whether the paging state is from the same protocol version/statement.
You might want to validate this yourself, as it could be a problem if you store paging state externally. For example, if you store paging state in a URL, the URLs might become broken when you upgrade your cluster. Call Query.PageState(nil) to fetch just the first page of the query results. Pass the page state returned in Result.PageState by Query.Exec to Query.PageState of a subsequent query to get the next page. If the length of the slice in Result.PageState is zero, there are no more pages available (or an error occurred). Using too low values of PageSize will negatively affect performance; a value below 100 is probably too low. While Scylla currently returns exactly PageSize items (except for the last page) in a page, the protocol authors explicitly reserved the right to return a smaller or larger number of items in a page for performance reasons, so don't rely on the page having the exact count of items. Queries can be marked as idempotent. Marking the query as idempotent tells the driver that the query can be executed multiple times without affecting its result. Non-idempotent queries are not eligible for retries or speculative execution. Idempotent queries are retried in case of errors based on the configured RetryPolicy. If you need to use a custom Retry or HostSelectionPolicy please see the transport package documentation.
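Pulling these pieces together, a session-and-query sketch might look roughly like the following. The constructor and query-creation calls are assumptions based on the names mentioned above (DefaultSessionConfig, Session.Query, Query.Exec, Result.PageState), so check the driver's godoc for the exact signatures before using them.

	package main

	import (
		"context"
		"log"

		"github.com/scylladb/scylla-go-driver"
	)

	func main() {
		ctx := context.Background()

		// Keyspace plus initial node addresses; ports may be given as part of the address.
		cfg := scylla.DefaultSessionConfig("mykeyspace", "192.168.100.100:9042")

		// The session closes automatically when the context is done.
		session, err := scylla.NewSession(ctx, cfg)
		if err != nil {
			log.Fatal(err)
		}
		defer session.Close()

		// Create queries with Session.Query and execute them with Query.Exec.
		q := session.Query("SELECT id, name FROM mykeyspace.users")
		res, err := q.Exec(ctx)
		if err != nil {
			log.Fatal(err)
		}
		_ = res // read rows and Result.PageState here; see the package Example
	}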
Package eventbus provides a flexible event-driven messaging system for the modular framework. This module enables decoupled communication between application components through an event bus pattern. It supports both synchronous and asynchronous event processing, multiple event bus engines, and configurable event handling strategies. The eventbus module offers the following capabilities: The module can be configured through the EventBusConfig structure: The module registers itself as a service for dependency injection: Basic event publishing: Event subscription patterns: Subscription management: The module supports different event processing patterns: **Synchronous Processing**: Events are processed immediately in the same goroutine that published them. Best for lightweight operations and when ordering is important. **Asynchronous Processing**: Events are queued and processed by worker goroutines. Best for heavy operations, external API calls, or when you don't want to block the publisher. Currently supported engines:
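To make the publish/subscribe flow concrete, here is a purely illustrative sketch. The service interface below (Publish, Subscribe, SubscribeAsync and the Event type) is hypothetical and stands in for whatever API the eventbus module actually exposes; it only demonstrates the synchronous versus asynchronous handling pattern described above.

	package main

	import (
		"context"
		"fmt"
	)

	// Event is a hypothetical event envelope: a topic plus an arbitrary payload.
	type Event struct {
		Topic   string
		Payload interface{}
	}

	// EventBus is a hypothetical stand-in for the service the module registers
	// for dependency injection.
	type EventBus interface {
		// Publish delivers the event to all matching subscribers.
		Publish(ctx context.Context, event Event) error
		// Subscribe registers a synchronous handler, run in the publisher's goroutine.
		Subscribe(topic string, handler func(ctx context.Context, event Event) error) error
		// SubscribeAsync registers a handler run by worker goroutines from a queue.
		SubscribeAsync(topic string, handler func(ctx context.Context, event Event) error) error
	}

	func run(ctx context.Context, bus EventBus) error {
		// Synchronous: lightweight work where ordering matters.
		if err := bus.Subscribe("user.created", func(ctx context.Context, e Event) error {
			fmt.Println("welcome email queued for", e.Payload)
			return nil
		}); err != nil {
			return err
		}

		// Asynchronous: heavier work (e.g. external API calls) that should not block publishers.
		if err := bus.SubscribeAsync("user.created", func(ctx context.Context, e Event) error {
			fmt.Println("syncing", e.Payload, "to CRM")
			return nil
		}); err != nil {
			return err
		}

		return bus.Publish(ctx, Event{Topic: "user.created", Payload: "user-42"})
	}

	func main() {
		// Wiring in the module's real bus implementation is framework-specific
		// and omitted from this sketch.
		_ = run
	}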