Package pusher is the Golang library for interacting with the Pusher HTTP API. This package lets you trigger events to your clients and query the state of your Pusher channels. When used with a server, you can validate Pusher webhooks and authenticate private or presence channels. In order to use this library, you need a free account on http://pusher.com. After registering, you will need the application credentials for your app. To create a new client, pass your application credentials to a `pusher.Client` struct; to start triggering events on a channel, call `pusherClient.Trigger` (a sketch of both steps follows below). Read on to see what more you can do with this library, such as authenticating private and presence channels, validating Pusher webhooks, and querying the HTTP API to get information about your channels. Author: Jamie Patel, Pusher
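A minimal sketch of both steps, assuming the github.com/pusher/pusher-http-go import path and field names from the library's README; the credentials are placeholders:

	package main

	import pusher "github.com/pusher/pusher-http-go"

	func main() {
		// Application credentials from your Pusher dashboard (placeholders).
		pusherClient := pusher.Client{
			AppID:   "app-id",
			Key:     "app-key",
			Secret:  "app-secret",
			Cluster: "eu",
		}

		data := map[string]string{"message": "hello world"}
		// Trigger an event named "my-event" on channel "my-channel".
		if err := pusherClient.Trigger("my-channel", "my-event", data); err != nil {
			panic(err)
		}
	}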
Package cron implements a cron spec parser and job runner. Callers may register Funcs to be invoked on a given schedule. Cron will run them in their own goroutines. A cron expression represents a set of times, using 6 space-separated fields. Note: Month and Day-of-week field values are case insensitive. "SUN", "Sun", and "sun" are equally accepted. Asterisk ( * ) The asterisk indicates that the cron expression will match all values of the field; e.g., using an asterisk in the 5th field (month) would indicate every month. Slash ( / ) Slashes are used to describe increments of ranges. For example, 3-59/15 in the 2nd field (minutes) would indicate the 3rd minute of the hour and every 15 minutes thereafter. The form "*\/..." is equivalent to the form "first-last/...", that is, an increment over the largest possible range of the field. The form "N/..." is accepted as meaning "N-MAX/...", that is, starting at N, use the increment until the end of that specific range. It does not wrap around. Comma ( , ) Commas are used to separate items of a list. For example, using "MON,WED,FRI" in the 6th field (day of week) would mean Mondays, Wednesdays and Fridays. Hyphen ( - ) Hyphens are used to define ranges. For example, 9-17 in the 3rd field (hours) would indicate every hour between 9am and 5pm inclusive. Question mark ( ? ) Question mark may be used instead of '*' to leave either day-of-month or day-of-week blank. You may use one of several pre-defined schedules in place of a cron expression. You may also schedule a job to execute at fixed intervals, starting at the time it's added or cron is run. This is supported by formatting the cron spec like this: "@every <duration>", where "duration" is a string accepted by time.ParseDuration (http://golang.org/pkg/time/#ParseDuration). For example, "@every 1h30m10s" would indicate a schedule that activates after 1 hour, 30 minutes, 10 seconds, and then every interval after that. Note: The interval does not take the job runtime into account. For example, if a job takes 3 minutes to run, and it is scheduled to run every 5 minutes, it will have only 2 minutes of idle time between each run. All interpretation and scheduling is done in the machine's local time zone (as provided by the Go time package, http://www.golang.org/pkg/time). Be aware that jobs scheduled during daylight-savings leap-ahead transitions will not be run! Since the Cron service runs concurrently with the calling code, some amount of care must be taken to ensure proper synchronization. All cron methods are designed to be correctly synchronized as long as the caller ensures that invocations have a clear happens-before ordering between them. Cron entries are stored in an array, sorted by their next activation time. Cron sleeps until the next job is due to be run. Upon waking, it runs each entry that is active on that second, calculates the next run times for the entries that were run, re-sorts the array of entries by next activation time, and goes back to sleep until the soonest job is due.
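For illustration, a short usage sketch (assuming the github.com/robfig/cron import path):

	package main

	import (
		"fmt"

		"github.com/robfig/cron"
	)

	func main() {
		c := cron.New()
		// 6-field spec: second 0, minute 30 of every hour.
		c.AddFunc("0 30 * * * *", func() { fmt.Println("every hour on the half hour") })
		// A pre-defined schedule and a fixed-interval schedule.
		c.AddFunc("@hourly", func() { fmt.Println("every hour") })
		c.AddFunc("@every 1h30m10s", func() { fmt.Println("every 1h30m10s") })
		c.Start()
		select {} // block so the scheduled jobs keep running
	}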
Package cloud is the root of the packages used to access Google Cloud Services. See https://godoc.org/cloud.google.com/go for a full list of sub-packages. All clients in sub-packages are configurable via client options. These options are described here: https://godoc.org/google.golang.org/api/option. All the clients in sub-packages support authentication via Google Application Default Credentials (see https://cloud.google.com/docs/authentication/production), or by providing a JSON key file for a Service Account. See the authentication examples in this package for details. By default, all requests in sub-packages will run indefinitely, retrying on transient errors when correctness allows. To set timeouts or arrange for cancellation, use contexts. See the examples for details. Do not attempt to control the initial connection (dialing) of a service by setting a timeout on the context passed to NewClient. Dialing is non-blocking, so timeouts would be ineffective and would only interfere with credential refreshing, which uses the same context. Connection pooling differs in clients based on their transport. Cloud clients either rely on HTTP or gRPC transports to communicate with Google Cloud. Cloud clients that use HTTP (bigquery, compute, storage, and translate) rely on the underlying HTTP transport to cache connections for later re-use. These are cached to the default http.MaxIdleConns and http.MaxIdleConnsPerHost settings in http.DefaultTransport. For gRPC clients (all others in this repo), connection pooling is configurable. Users of cloud client libraries may specify option.WithGRPCConnectionPool(n) as a client option to NewClient calls. This configures the underlying gRPC connections to be pooled and addressed in a round robin fashion. Minimal docker images like Alpine lack CA certificates. This causes RPCs to appear to hang, because gRPC retries indefinitely. See https://github.com/GoogleCloudPlatform/google-cloud-go/issues/928 for more information. To see gRPC logs, set the environment variable GRPC_GO_LOG_SEVERITY_LEVEL. See https://godoc.org/google.golang.org/grpc/grpclog for more information. For HTTP logging, set the GODEBUG environment variable to "http2debug=1" or "http2debug=2". Google Application Default Credentials is the recommended way to authorize and authenticate clients. For information on how to create and obtain Application Default Credentials, see https://developers.google.com/identity/protocols/application-default-credentials. To arrange for an RPC to be canceled, use context.WithCancel. You can use a file with credentials to authenticate and authorize, such as a JSON key file associated with a Google service account. Service Account keys can be created and downloaded from https://console.developers.google.com/permissions/serviceaccounts. This example uses the Datastore client, but the same steps apply to the other client libraries underneath this package. In some cases (for instance, you don't want to store secrets on disk), you can create credentials from in-memory JSON and use the WithCredentials option. The google package in this example is at golang.org/x/oauth2/google. This example uses the PubSub client, but the same steps apply to the other client libraries underneath this package. To set a timeout for an RPC, use context.WithTimeout.
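A sketch combining the key-file and timeout patterns described above, assuming the cloud.google.com/go/datastore client; the project ID and key-file path are placeholders:

	package main

	import (
		"context"
		"log"
		"time"

		"cloud.google.com/go/datastore"
		"google.golang.org/api/option"
	)

	func main() {
		ctx := context.Background()
		// Authenticate with a service account JSON key file instead of
		// Application Default Credentials.
		client, err := datastore.NewClient(ctx, "my-project-id",
			option.WithCredentialsFile("/path/to/service-account-key.json"))
		if err != nil {
			log.Fatal(err)
		}

		// Bound an individual RPC with context.WithTimeout.
		ctx, cancel := context.WithTimeout(ctx, 10*time.Second)
		defer cancel()

		var task struct{ Description string }
		key := datastore.NameKey("Task", "sample-task", nil)
		if err := client.Get(ctx, key, &task); err != nil {
			log.Println(err)
		}
	}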
Package namecheap implements a client for the Namecheap API. In order to use this package you will need a Namecheap account and your API Token.
Package gotransip implements a client for the TransIP REST API. This package is a complete implementation for communicating with the TransIP REST API. It covers the resource calls available in the TransIP REST API docs and allows your project(s) to connect to the TransIP REST API easily. Using this package you can order, update and remove products from your TransIP account. As of version 6.0 this package is no longer compatible with the TransIP SOAP API, because the library is now organized around REST. The SOAP API library versions 5.* are deprecated and will no longer receive updates. The following example uses the TransIP demo token in order to call the API against the test repository. For more information about authenticating with your own credentials, see the Authentication section. If you want to tinker with the API first, without setting up your own authentication, we defined a static DemoClientConfiguration, which can be used to create a new client. You can also create a new client using a token. As tokens have a limited expiry time, you can also request new tokens using the private key acquired from your TransIP control panel. We also implemented a PrivateKeyReader option for users who want to store their key elsewhere, not on a filesystem but in a datastore of their choosing. If you would like to keep a token between multiple client instantiations, you can provide the client with a token cache; if the file does not exist, it creates one for you. As long as a provided TokenCache adheres to the TokenCache interface, the client's authenticator is able to use it. This means you can also provide your own token cacher, for example one that caches to etcd. All resource calls, as seen at https://api.transip.nl/rest/docs.html, have been grouped into repositories, which are subpackages under the gotransip package. Such a repository can be initialised with a client, and each repository has a number of methods you can use to get/modify/update resources in that specific subpackage. For example, the sketch below gets a list of domains from a TransIP account.
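A condensed sketch of that flow, assuming the v6 import paths and the demo configuration:

	package main

	import (
		"fmt"
		"log"

		"github.com/transip/gotransip/v6"
		"github.com/transip/gotransip/v6/domain"
	)

	func main() {
		// The demo configuration lets you try the API without credentials.
		client, err := gotransip.NewClient(gotransip.DemoClientConfiguration)
		if err != nil {
			log.Fatal(err)
		}

		// Initialise the domain repository with the client and list domains.
		domainRepo := domain.Repository{Client: client}
		domains, err := domainRepo.GetAll()
		if err != nil {
			log.Fatal(err)
		}
		for _, d := range domains {
			fmt.Println(d.Name)
		}
	}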
Package ivs provides the API client, operations, and parameter types for Amazon Interactive Video Service. The Amazon Interactive Video Service (IVS) API is REST compatible, using a standard HTTP API and an Amazon Web Services EventBridge event stream for responses. JSON is used for both requests and responses, including errors. The API is an Amazon Web Services regional service. For a list of supported regions and Amazon IVS HTTPS service endpoints, see the Amazon IVS page in the Amazon Web Services General Reference. All API request parameters and URLs are case sensitive. For a summary of notable documentation changes in each release, see Document History. Allowed Header Values Accept: application/json Accept-Encoding: gzip, deflate Content-Type: application/json Key Concepts Channel — Stores configuration data related to your live stream. You first create a channel and then use the channel’s stream key to start your live stream. Stream key — An identifier assigned by Amazon IVS when you create a channel, which is then used to authorize streaming. Treat the stream key like a secret, since it allows anyone to stream to the channel. Playback key pair — Video playback may be restricted using playback-authorization tokens, which use public-key encryption. A playback key pair is the public-private pair of keys used to sign and validate the playback-authorization token. Recording configuration — Stores configuration related to recording a live stream and where to store the recorded content. Multiple channels can reference the same recording configuration. Playback restriction policy — Restricts playback by countries and/or origin sites. For more information about your IVS live stream, also see Getting Started with IVS Low-Latency Streaming. A tag is a metadata label that you assign to an Amazon Web Services resource. A tag comprises a key and a value, both set by you. For example, you might set a tag as topic:nature to label a particular video category. See Best practices and strategies in Tagging Amazon Web Services Resources and Tag Editor for details, including restrictions that apply to tags and "Tag naming limits and requirements"; Amazon IVS has no service-specific constraints beyond what is documented there. Tags can help you identify and organize your Amazon Web Services resources. For example, you can use the same tag for different resources to indicate that they are related. You can also use tags to manage access (see Access Tags). The Amazon IVS API has these tag-related operations: TagResource, UntagResource, and ListTagsForResource. The following resources support tagging: Channels, Stream Keys, Playback Key Pairs, and Recording Configurations. At most 50 tags can be applied to a resource. Note the differences between these concepts: Authentication is about verifying identity. You need to be authenticated to sign Amazon IVS API requests. Authorization is about granting permissions. Your IAM roles need to have permissions for Amazon IVS API requests. In addition, authorization is needed to view Amazon IVS private channels. (Private channels are channels that are enabled for "playback authorization.") All Amazon IVS API requests must be authenticated with a signature. The Amazon Web Services Command-Line Interface (CLI) and Amazon IVS Player SDKs take care of signing the underlying API calls for you. However, if your application calls the Amazon IVS API directly, it’s your responsibility to sign the requests.
You generate a signature using valid Amazon Web Services credentials that have permission to perform the requested action. For example, you must sign PutMetadata requests with a signature generated from a user account that has the ivs:PutMetadata permission. For more information: Authentication and generating signatures — See Authenticating Requests (Amazon Web Services Signature Version 4) in the Amazon Web Services General Reference. Managing Amazon IVS permissions — See Identity and Access Management on the Security page of the Amazon IVS User Guide. Amazon Resource Names (ARNs) ARNs uniquely identify AWS resources. An ARN is required when you need to specify a resource unambiguously across all of AWS, such as in IAM policies and API calls. For more information, see Amazon Resource Names in the AWS General Reference.
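A minimal client sketch using the usual aws-sdk-go-v2 pattern; the region and the ListChannels call are illustrative choices:

	package main

	import (
		"context"
		"fmt"
		"log"

		"github.com/aws/aws-sdk-go-v2/config"
		"github.com/aws/aws-sdk-go-v2/service/ivs"
	)

	func main() {
		ctx := context.Background()
		// The SDK resolves credentials and signs requests (SigV4) for you.
		cfg, err := config.LoadDefaultConfig(ctx, config.WithRegion("us-west-2"))
		if err != nil {
			log.Fatal(err)
		}

		client := ivs.NewFromConfig(cfg)
		out, err := client.ListChannels(ctx, &ivs.ListChannelsInput{})
		if err != nil {
			log.Fatal(err)
		}
		for _, ch := range out.Channels {
			fmt.Println(*ch.Arn)
		}
	}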
Package fees provides decred-specific methods for tracking and estimating fee rates for new transactions to be mined into the network. Fee rate estimation has two main goals: Although it was primarily written for dcrd, this package has intentionally been designed so it can be used as a standalone package for any projects needing the functionality provided. There are two main regimes against which fee estimation needs to be evaluated, according to how full the blocks being mined are (and consequently how important fee rates are): _low contention_ and _high contention_. In a low contention regime, the mempool sits mostly empty, transactions are usually mined very soon after being published, and transaction fees are mostly sent using the minimum relay fee. In a high contention regime, the mempool is usually filled with unmined transactions, there is active dispute for space in a block (by transactions using higher fees), and blocks are usually full. The exact point where these two regimes intersect is arbitrary, but it should be clear in the examples and simulations which of these is being discussed. Note: a very high contention scenario (> 90% of blocks being full and transactions remaining in the mempool indefinitely) is one in which stakeholders should be discussing alternative solutions (increase block size, provide other second-layer alternatives, etc.). Also, the current fill rate of blocks in decred is low, so while we try to account for this regime, I personally expect that the implementation will need more tweaks as it approaches this. The current approach to implementing this estimation is based on bitcoin core's algorithm. References [1] and [2] provide a high-level description of how it works there. Actual code is linked in references [3] and [4]. The algorithm is currently based on the fee estimation used in v0.14 of bitcoin core (which is also the basis for the v0.15+ method). A more comprehensive overview is available in reference [1]. This particular version was chosen because it's simpler to implement and should be sufficient for low contention regimes. It probably overestimates fees in higher contention regimes and longer target confirmation windows, but as pointed out earlier should be sufficient for the current fill rate of decred's network. The basic algorithm is as follows (as executed by a single full node): Stats building stage: Estimation stage: Development of the estimator was originally performed and simulated using the code in [5]. Simulation of the current code can be performed by using the dcrfeesim tool available in [6]. Thanks to @davecgh for providing the initial review of the results and the original developers of the bitcoin core code (the brunt of which seems to have been made by @morcos). ## References [1] Introduction to Bitcoin Core Estimation: https://bitcointechtalk.com/an-introduction-to-bitcoin-core-fee-estimation-27920880ad0 [2] Proposed Changes to Fee Estimation in version 0.15: https://gist.github.com/morcos/d3637f015bc4e607e1fd10d8351e9f41 [3] Source for fee estimation in v0.14: https://github.com/bitcoin/bitcoin/blob/v0.14.2/src/policy/fees.cpp [4] Source for fee estimation in version 0.16.2: https://github.com/bitcoin/bitcoin/blob/v0.16.2/src/policy/fees.cpp [5] Source for the original dcrfeesim and estimator work: https://github.com/matheusd/dcrfeesim_dev [6] Source for the current dcrfeesim, using this module: https://github.com/matheusd/dcrfeesim
Package vpclattice provides the API client, operations, and parameter types for Amazon VPC Lattice. Amazon VPC Lattice is a fully managed application networking service that you use to connect, secure, and monitor all of your services across multiple accounts and virtual private clouds (VPCs). Amazon VPC Lattice interconnects your microservices and legacy services within a logical boundary, so that you can discover and manage them more efficiently. For more information, see the Amazon VPC Lattice User Guide.
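Client construction follows the usual aws-sdk-go-v2 pattern; a brief sketch (the ListServices call is an illustrative choice):

	package main

	import (
		"context"
		"fmt"
		"log"

		"github.com/aws/aws-sdk-go-v2/config"
		"github.com/aws/aws-sdk-go-v2/service/vpclattice"
	)

	func main() {
		ctx := context.Background()
		cfg, err := config.LoadDefaultConfig(ctx)
		if err != nil {
			log.Fatal(err)
		}

		client := vpclattice.NewFromConfig(cfg)
		// List the services visible to the calling account.
		out, err := client.ListServices(ctx, &vpclattice.ListServicesInput{})
		if err != nil {
			log.Fatal(err)
		}
		fmt.Println("services:", len(out.Items))
	}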
Package dnspod implements a client for the dnspod API. In order to use this package you will need a dnspod account and your API Token.
Open github.com and search for "git" (a sketch follows below). The launcher lib comes with many default switches (flags) used to launch the browser; this example shows how to add or delete switches. Useful when you want to customize the element query retry logic. Rod provides many debug options; you can use the set methods to enable them, or use the environment variables listed at "lib/defaults". Useful when rod doesn't have the function you want: you can easily call the cdp interface directly. Shows how to subscribe to events. A request interception example, for modifying a request or response. If, say, you are logged in to your github account and want to reuse the login session, you may want to launch the browser as in this example. If a button is moving too fast, you cannot click it the way a human can; to simulate human input faithfully, clicks triggered by Rod are based on the mouse pointer's location, so you usually need to wait until a button is stable before you can click it. Some page interactions finish only after certain network requests; WaitRequestIdle is designed for this.
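A sketch of the first example, assuming rod's Must-style API as shown in its README; the search-box selector is an assumption about GitHub's markup:

	package main

	import (
		"fmt"

		"github.com/go-rod/rod"
		"github.com/go-rod/rod/lib/input"
	)

	func main() {
		// Launch a browser with the default options and open GitHub.
		page := rod.New().MustConnect().MustPage("https://github.com")

		// Type "git" into the search box and submit with Enter. The
		// selector is an assumption about GitHub's current markup.
		page.MustElement(`[name="q"]`).MustInput("git").MustPress(input.Enter)

		fmt.Println(page.MustInfo().Title)
	}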
Package metadata provides access to Google Compute Engine (GCE) metadata and API service accounts. This package is a wrapper around the GCE metadata service, as documented at https://developers.google.com/compute/docs/metadata.
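A brief sketch; OnGCE guards the calls because the metadata service is only reachable from within GCE:

	package main

	import (
		"fmt"
		"log"

		"cloud.google.com/go/compute/metadata"
	)

	func main() {
		if !metadata.OnGCE() {
			log.Fatal("not running on GCE")
		}
		projectID, err := metadata.ProjectID()
		if err != nil {
			log.Fatal(err)
		}
		fmt.Println("project:", projectID)
	}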
Package ofxgo seeks to provide a library to make it easier to query and/or parse financial information with OFX from the comfort of Golang, without having to deal with marshalling/unmarshalling the SGML or XML. The library does *not* intend to abstract away all of the details of the OFX specification, which would be difficult to do well. Instead, it exposes the OFX SGML/XML hierarchy as structs which mostly resemble it. For more information on OFX and to read the specification, see http://ofx.net. There are three main top-level objects defined in ofxgo. These are Client, Request, and Response. The Request and Response objects represent OFX requests and responses as Golang structs. Client contains settings which control how requests and responses are marshalled and unmarshalled (the OFX version used, client id and version, whether to indent SGML/XML tags, etc.), and provides helper methods for making requests and optionally parsing the response using those settings. Every Request object contains a SignonRequest element, called Signon. This element contains the username, password (or key), and the ORG and FID fields particular to the financial institution being queried, and an optional ClientUID field (required by some FIs). Likewise, each Response contains a SignonResponse object which contains, among other things, the Status of the request. Any status with a nonzero Code should be inspected for a possible error (using the Severity and Message fields populated by the server, or the CodeMeaning() and CodeConditions() functions which return information about a particular code as specified by the OFX specification). Each top-level Request or Response object may contain zero or more messages, sorted into named slices by message set, just as the OFX specification groups them. Here are the supported types of Request/Response objects (along with the name of the slice of Messages they belong to in parentheses): Requests: Responses: When constructing a Request, simply append the desired message to the message set it belongs to. For Responses, it is the user's responsibility to make type assertions on objects found inside one of these message sets before using them. For example, the following code would request a bank statement for a checking account and print the balance: More usage examples may be found in the example command-line client provided with this library, in the cmd/ofx directory of the source.
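A rough sketch of that example, assuming field and type names as used in the library's own examples (the endpoint, credentials, and account numbers are placeholders; check the package documentation for exact signatures):

	package main

	import (
		"fmt"
		"log"

		"github.com/aclindsa/ofxgo"
	)

	func main() {
		var client ofxgo.BasicClient // recent versions; older ones expose a Client struct
		query := ofxgo.Request{
			URL: "https://ofx.example.com",
			Signon: ofxgo.SignonRequest{
				Org:      "BNK",
				Fid:      "1987",
				UserID:   "myusername",
				UserPass: "mypassword",
			},
		}

		uid, _ := ofxgo.RandomUID()
		query.Bank = append(query.Bank, &ofxgo.StatementRequest{
			TrnUID: *uid,
			BankAcctFrom: ofxgo.BankAcct{
				BankID:   "123456789",
				AcctID:   "11111111",
				AcctType: ofxgo.AcctTypeChecking,
			},
		})

		response, err := client.Request(&query)
		if err != nil {
			log.Fatal(err)
		}
		// Message sets hold interface values; type-assert before use.
		if stmt, ok := response.Bank[0].(*ofxgo.StatementResponse); ok {
			fmt.Println("Balance:", stmt.BalAmt)
		}
	}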
Package loggregator provides clients to send data to the Loggregator v1 and v2 API. The v2 API distinguishes itself from the v1 API on three counts: 1) it uses gRPC, 2) it uses a streaming connection, and 3) it supports batching to improve performance. The code here provides a generic interface into the two APIs. Clients who prefer more fine-grained control may generate their own code using the protobuf and gRPC service definitions found at: github.com/cloudfoundry/loggregator-api. Note that because the client batches, sending multiple messages at once, there is no meaningful error return value available. Each of the methods below makes a best-effort attempt at message delivery. Even in the event of a failed send, the client will not block callers. In general, use IngressClient for communicating with Loggregator's v2 API. For Loggregator's v1 API, see v1/client.go.
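A sketch of v2 ingress usage, assuming the code.cloudfoundry.org/go-loggregator client; the certificate paths, address, and app metadata are placeholders:

	package main

	import (
		"log"

		loggregator "code.cloudfoundry.org/go-loggregator"
	)

	func main() {
		// The v2 ingress API uses mutual TLS over gRPC.
		tlsConfig, err := loggregator.NewIngressTLSConfig("ca.crt", "client.crt", "client.key")
		if err != nil {
			log.Fatal(err)
		}

		client, err := loggregator.NewIngressClient(
			tlsConfig,
			loggregator.WithAddr("localhost:3458"),
		)
		if err != nil {
			log.Fatal(err)
		}

		// Delivery is batched and best-effort: no per-message error is returned.
		client.EmitLog("hello from loggregator",
			loggregator.WithAppInfo("app-id", "source-type", "0"))
	}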
This library implements a cron spec parser and runner. See the README for more details. Package cron implements a cron spec parser and job runner. Callers may register Funcs to be invoked on a given schedule. Cron will run them in their own goroutines. A cron expression represents a set of times, using 6 space-separated fields. Note: Month and Day-of-week field values are case insensitive. "SUN", "Sun", and "sun" are equally accepted. Asterisk ( * ) The asterisk indicates that the cron expression will match all values of the field; e.g., using an asterisk in the 5th field (month) would indicate every month. Slash ( / ) Slashes are used to describe increments of ranges. For example, 3-59/15 in the 2nd field (minutes) would indicate the 3rd minute of the hour and every 15 minutes thereafter. The form "*\/..." is equivalent to the form "first-last/...", that is, an increment over the largest possible range of the field. The form "N/..." is accepted as meaning "N-MAX/...", that is, starting at N, use the increment until the end of that specific range. It does not wrap around. Comma ( , ) Commas are used to separate items of a list. For example, using "MON,WED,FRI" in the 6th field (day of week) would mean Mondays, Wednesdays and Fridays. Hyphen ( - ) Hyphens are used to define ranges. For example, 9-17 in the 3rd field (hours) would indicate every hour between 9am and 5pm inclusive. Question mark ( ? ) Question mark may be used instead of '*' to leave either day-of-month or day-of-week blank. You may use one of several pre-defined schedules in place of a cron expression. You may also schedule a job to execute at fixed intervals. This is supported by formatting the cron spec like this: "@every <duration>", where "duration" is a string accepted by time.ParseDuration (http://golang.org/pkg/time/#ParseDuration). For example, "@every 1h30m10s" would indicate a schedule that activates every 1 hour, 30 minutes, 10 seconds. Note: The interval does not take the job runtime into account. For example, if a job takes 3 minutes to run, and it is scheduled to run every 5 minutes, it will have only 2 minutes of idle time between each run. All interpretation and scheduling is done in the machine's local time zone (as provided by the Go time package, http://www.golang.org/pkg/time). Be aware that jobs scheduled during daylight-savings leap-ahead transitions will not be run! Since the Cron service runs concurrently with the calling code, some amount of care must be taken to ensure proper synchronization. All cron methods are designed to be correctly synchronized as long as the caller ensures that invocations have a clear happens-before ordering between them. Cron entries are stored in an array, sorted by their next activation time. Cron sleeps until the next job is due to be run. Upon waking, it runs each entry that is active on that second, calculates the next run times for the entries that were run, re-sorts the array of entries by next activation time, and goes back to sleep until the soonest job is due.
Package detective provides the API client, operations, and parameter types for Amazon Detective. Detective uses machine learning and purpose-built visualizations to help you analyze and investigate security issues across your Amazon Web Services (AWS) workloads. Detective automatically extracts time-based events such as login attempts, API calls, and network traffic from CloudTrail and Amazon Virtual Private Cloud (Amazon VPC) flow logs. It also extracts findings detected by Amazon GuardDuty. The Detective API primarily supports the creation and management of behavior graphs. A behavior graph contains the extracted data from a set of member accounts, and is created and managed by an administrator account. To add a member account to the behavior graph, the administrator account sends an invitation to the account. When the account accepts the invitation, it becomes a member account in the behavior graph. Detective is also integrated with Organizations. The organization management account designates the Detective administrator account for the organization. That account becomes the administrator account for the organization behavior graph. The Detective administrator account is also the delegated administrator account for Detective in Organizations. The Detective administrator account can enable any organization account as a member account in the organization behavior graph. The organization accounts do not receive invitations. The Detective administrator account can also invite other accounts to the organization behavior graph. Every behavior graph is specific to a Region. You can only use the API to manage behavior graphs that belong to the Region that is associated with the currently selected endpoint. The administrator account for a behavior graph can use the Detective API to do the following: Enable and disable Detective. Enabling Detective creates a new behavior graph. View the list of member accounts in a behavior graph. Add member accounts to a behavior graph. Remove member accounts from a behavior graph. Apply tags to a behavior graph. The organization management account can use the Detective API to select the delegated administrator for Detective. The Detective administrator account for an organization can use the Detective API to do the following: Perform all of the functions of an administrator account. Determine whether to automatically enable new organization accounts as member accounts in the organization behavior graph. An invited member account can use the Detective API to do the following: View the list of behavior graphs that they are invited to. Accept an invitation to contribute to a behavior graph. Decline an invitation to contribute to a behavior graph. Remove their account from a behavior graph. All API actions are logged as CloudTrail events. See Logging Detective API Calls with CloudTrail. We replaced the term "master account" with the term "administrator account". An administrator account is used to centrally manage multiple accounts. In the case of Detective, the administrator account manages the accounts in their behavior graph.
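A brief sketch of calling the API with aws-sdk-go-v2 (the ListGraphs call is an illustrative choice):

	package main

	import (
		"context"
		"fmt"
		"log"

		"github.com/aws/aws-sdk-go-v2/config"
		"github.com/aws/aws-sdk-go-v2/service/detective"
	)

	func main() {
		ctx := context.Background()
		cfg, err := config.LoadDefaultConfig(ctx)
		if err != nil {
			log.Fatal(err)
		}

		client := detective.NewFromConfig(cfg)
		// List the behavior graphs the calling account administers.
		out, err := client.ListGraphs(ctx, &detective.ListGraphsInput{})
		if err != nil {
			log.Fatal(err)
		}
		fmt.Println("graphs:", len(out.GraphList))
	}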
Package anaconda provides structs and functions for accessing version 1.1 of the Twitter API. Successful API queries return native Go structs that can be used immediately, with no need for type assertions. If you already have the access token (and secret) for your user (Twitter provides this for your own account on the developer portal), creating the client is simple, as the sketch below shows. Executing queries on an authenticated TwitterApi struct is simple. Certain endpoints accept additional optional parameters; if desired, these can be passed as the final parameter. Anaconda implements most of the endpoints defined in the Twitter API documentation: https://dev.twitter.com/docs/api/1.1. For clarity, in most cases, the function name is simply the name of the HTTP method and the endpoint (e.g., the endpoint `GET /friendships/incoming` is provided by the function `GetFriendshipsIncoming`). In a few cases, a shortened form has been chosen to make life easier (for example, retweeting is simply the function `Retweet`). More detailed information about the behavior of each particular endpoint can be found at the official Twitter API documentation.
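A sketch of client creation and a query; the credentials are placeholders:

	package main

	import (
		"fmt"
		"net/url"

		"github.com/ChimeraCoder/anaconda"
	)

	func main() {
		api := anaconda.NewTwitterApiWithCredentials(
			"access-token", "access-token-secret",
			"consumer-key", "consumer-secret",
		)

		// Optional parameters are passed as url.Values in the final argument.
		v := url.Values{}
		v.Set("count", "10")
		results, err := api.GetSearch("golang", v)
		if err != nil {
			panic(err)
		}
		for _, tweet := range results.Statuses {
			fmt.Println(tweet.Text)
		}
	}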
Package cron implements a cron spec parser and job runner. Callers may register Funcs to be invoked on a given schedule. Cron will run them in their own goroutines. A cron expression represents a set of times, using 6 space-separated fields. Note: Month and Day-of-week field values are case insensitive. "SUN", "Sun", and "sun" are equally accepted. Asterisk ( * ) The asterisk indicates that the cron expression will match all values of the field; e.g., using an asterisk in the 5th field (month) would indicate every month. Slash ( / ) Slashes are used to describe increments of ranges. For example, 3-59/15 in the 2nd field (minutes) would indicate the 3rd minute of the hour and every 15 minutes thereafter. The form "*\/..." is equivalent to the form "first-last/...", that is, an increment over the largest possible range of the field. The form "N/..." is accepted as meaning "N-MAX/...", that is, starting at N, use the increment until the end of that specific range. It does not wrap around. Comma ( , ) Commas are used to separate items of a list. For example, using "MON,WED,FRI" in the 6th field (day of week) would mean Mondays, Wednesdays and Fridays. Hyphen ( - ) Hyphens are used to define ranges. For example, 9-17 in the 3rd field (hours) would indicate every hour between 9am and 5pm inclusive. Question mark ( ? ) Question mark may be used instead of '*' to leave either day-of-month or day-of-week blank. You may use one of several pre-defined schedules in place of a cron expression. You may also schedule a job to execute at fixed intervals, starting at the time it's added or cron is run. This is supported by formatting the cron spec like this: "@every <duration>", where "duration" is a string accepted by time.ParseDuration (http://golang.org/pkg/time/#ParseDuration). For example, "@every 1h30m10s" would indicate a schedule that activates immediately, and then every 1 hour, 30 minutes, 10 seconds. Note: The interval does not take the job runtime into account. For example, if a job takes 3 minutes to run, and it is scheduled to run every 5 minutes, it will have only 2 minutes of idle time between each run. All interpretation and scheduling is done in the machine's local time zone (as provided by the Go time package, http://www.golang.org/pkg/time). Be aware that jobs scheduled during daylight-savings leap-ahead transitions will not be run! Since the Cron service runs concurrently with the calling code, some amount of care must be taken to ensure proper synchronization. All cron methods are designed to be correctly synchronized as long as the caller ensures that invocations have a clear happens-before ordering between them. Cron entries are stored in an array, sorted by their next activation time. Cron sleeps until the next job is due to be run. Upon waking, it runs each entry that is active on that second, calculates the next run times for the entries that were run, re-sorts the array of entries by next activation time, and goes back to sleep until the soonest job is due.
Package sdk is the official Flipt Go SDK. The SDK exposes the various RPCs for interfacing with a remote Flipt instance. Both HTTP and gRPC protocols are supported via this unified Go API. The main entrypoint within this package is New, which takes an instance of Transport and returns an instance of SDK. Transport implementations can be found in the following sub-packages: The following is an example of creating an instance of the SDK using the gRPC transport (a sketch follows below). The following is an example of creating an instance of the SDK using the HTTP transport. The remote procedure calls made by this SDK are authenticated via a ClientAuthenticationProvider implementation. This can be supplied to New via the WithAuthenticationProvider option. Note that each of these methods will only work if the target Flipt server instance has the authentication method enabled. Currently, there are three implementations: - StaticTokenAuthenticationProvider (https://www.flipt.io/docs/authentication/methods#static-token): This provider sets a static Flipt client token via the Authentication header with the Bearer scheme. - JWTAuthenticationProvider (https://www.flipt.io/docs/authentication/methods#json-web-tokens): This provider sets a pre-generated JSON web token via the Authentication header with the JWT scheme. - KubernetesAuthenticationProvider (https://www.flipt.io/docs/authentication/methods#kubernetes): This automatically uses the service account token on the host and exchanges it with Flipt for a Flipt client token credential. The credential is then used to authenticate requests, again via the Authentication header and the Bearer scheme. It ensures that the client token is not expired and requests fresh tokens automatically without intervention. Use this method to automatically authenticate your application with a Flipt deployed into the same Kubernetes cluster. The Flipt SDK is split into four sections: Flipt, Auth, Meta, and Evaluation. Each provides access to a different part of the Flipt system. The Flipt service is the core Flipt API service. This service provides access to the Flipt resource CRUD APIs. Flipt resources can be accessed and managed directly. The Evaluation service provides access to the Flipt evaluation APIs. The Evaluation service provides three methods for evaluating a flag for a given entity: Boolean, Variant, and Batch. The Boolean method returns a response containing a boolean value indicating whether or not the flag is enabled for the given entity. Learn more about the Boolean flag type: <https://www.flipt.io/docs/concepts#boolean-flags> The Variant method returns a response containing the variant key for the given entity. Learn more about the Variant flag type: <https://www.flipt.io/docs/concepts#variant-flags> The Batch method returns a response containing the evaluation results for a batch of requests. These requests can be for a mix of boolean and variant flags.
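For illustration, a sketch of the gRPC-transport path with a static client token; the address, token, and sub-package import paths are assumptions to verify against the SDK's documentation:

	package main

	import (
		"log"

		sdk "go.flipt.io/flipt/sdk/go"
		sdkgrpc "go.flipt.io/flipt/sdk/go/grpc"
		"google.golang.org/grpc"
		"google.golang.org/grpc/credentials/insecure"
	)

	func main() {
		conn, err := grpc.Dial("localhost:9000",
			grpc.WithTransportCredentials(insecure.NewCredentials()))
		if err != nil {
			log.Fatal(err)
		}

		transport := sdkgrpc.NewTransport(conn)
		client := sdk.New(transport, sdk.WithAuthenticationProvider(
			sdk.StaticTokenAuthenticationProvider("a-flipt-client-token"),
		))
		_ = client // e.g. client.Flipt(), client.Evaluation()
	}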
The stathat package makes it easy to post any values to your StatHat account.
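A minimal sketch using the EZ API from github.com/stathat/go; the stat names and email key are placeholders:

	package main

	import (
		"time"

		stathat "github.com/stathat/go"
	)

	func main() {
		// EZ API calls are keyed by your account email.
		stathat.PostEZCount("messages sent", "user@example.com", 1)
		stathat.PostEZValue("ws0 load average", "user@example.com", 0.75)

		// Posts happen on a background goroutine; wait for the queue to drain.
		stathat.WaitUntilFinished(5 * time.Second)
	}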
Package feegrant provides functionality for authorizing the payment of transaction fees from one account (key) to another account (key). Effectively, this allows for a user to pay fees using the balance of an account different from their own. Example use cases would be allowing a key on a device to pay for fees using a master wallet, or a third party service allowing users to pay for transactions without ever really holding their own coins. This package provides ways for specifying fee allowances such that authorizing fee payment to another account can be done with clear and safe restrictions. A user would authorize granting fee payment to another user using MsgGrantAllowance and revoke that delegation using MsgRevokeAllowance. In both cases, Granter is the one who is authorizing fee payment and Grantee is the one who is receiving the fee payment authorization. So grantee would correspond to the one who is signing a transaction and the granter would be the address that pays the fees. The fee allowance that a grantee receives is specified by an implementation of the FeeAllowance interface. Two FeeAllowance implementations are provided in this package: BasicAllowance and PeriodicAllowance.
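For illustration, a sketch of granting a basic allowance with the Cosmos SDK x/feegrant types; the addresses and amounts are placeholders:

	package main

	import (
		"log"
		"time"

		sdk "github.com/cosmos/cosmos-sdk/types"
		"github.com/cosmos/cosmos-sdk/x/feegrant"
	)

	func main() {
		// Placeholder addresses; real code would parse Bech32 strings.
		granter := sdk.AccAddress([]byte("granter_address_____"))
		grantee := sdk.AccAddress([]byte("grantee_address_____"))

		// A BasicAllowance caps total spend and can optionally expire.
		expiration := time.Now().AddDate(0, 1, 0)
		allowance := &feegrant.BasicAllowance{
			SpendLimit: sdk.NewCoins(sdk.NewInt64Coin("stake", 1000)),
			Expiration: &expiration,
		}

		msg, err := feegrant.NewMsgGrantAllowance(allowance, granter, grantee)
		if err != nil {
			log.Fatal(err)
		}
		_ = msg // sign and broadcast with a client of your choice
	}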