gokaf is an in-memory pubsub engine built to deliver near real-time data streams.
Overview: Monresql is a specialized library for efficient data replication, transfer, and synchronization from MongoDB to PostgreSQL databases. Inspired by similar tools such as Moresql, Monresql focuses on unidirectional data movement, providing seamless integration and synchronization between MongoDB documents and PostgreSQL tables. Key features: data replication (efficiently replicate data from MongoDB collections to corresponding PostgreSQL tables); incremental updates (keep PostgreSQL data up to date without full data reloads); performance optimization (minimal latency and efficient resource use). API reference: LoadFieldsMap() loads a mapping file that defines how MongoDB documents map to PostgreSQL tables. ValidateOrCreatePostgresTable() validates that a PostgreSQL table exists and is ready for data replication. Replicate() initiates the replication process from MongoDB to PostgreSQL based on the loaded mapping. Sync() starts the synchronization process, ensuring that changes in MongoDB are reflected in PostgreSQL in real-time; it also saves a checkpoint marker so that a stopped service can resume syncing from where it left off. NewSyncOptions() returns a pointer to a syncOptions struct with the defaults &syncOptions{checkpoint: true, checkPointPeriod: time.Minute * 1, lastEpoch: 0, reportPeriod: time.Minute * 1}; the values can then be changed via its setter methods.
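The reference above names the functions but not their signatures, so the following is a hypothetical usage sketch: the import path, argument lists, and the SetCheckPointPeriod setter name are all assumptions, not Monresql's documented surface.

```go
package main

import (
	"log"
	"time"

	"github.com/example/monresql" // import path is a placeholder
)

const (
	mongoURI = "mongodb://localhost:27017/app"
	pgURL    = "postgres://user:pass@localhost:5432/app"
)

func main() {
	// Load the mapping that ties MongoDB documents to PostgreSQL tables.
	m, err := monresql.LoadFieldsMap("mapping.json")
	if err != nil {
		log.Fatal(err)
	}

	// Ensure the target PostgreSQL table exists before replicating.
	if err := monresql.ValidateOrCreatePostgresTable(m, pgURL); err != nil {
		log.Fatal(err)
	}

	// One-shot replication of the existing collection data.
	if err := monresql.Replicate(m, mongoURI, pgURL); err != nil {
		log.Fatal(err)
	}

	// Defaults: checkpoint=true, checkPointPeriod=1m, lastEpoch=0, reportPeriod=1m.
	opts := monresql.NewSyncOptions()
	opts.SetCheckPointPeriod(30 * time.Second) // setter name is an assumption

	// Tail changes continuously; the checkpoint marker lets a restarted
	// service resume syncing from where it stopped.
	if err := monresql.Sync(m, mongoURI, pgURL, opts); err != nil {
		log.Fatal(err)
	}
}
```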
Package ecpush is a package for subscribing to real-time meteorological data feeds from Environment Canada. The main goal of ecpush is to provide a simple and lightweight client that can be used for receiving real-time data events directly from Environment Canada's meteorological product feed. The client can directly fetch the published products, or it can simply provide a notification channel containing the product location (an HTTP URL to Environment Canada's Datamart). The client has also been designed to automatically recover from any connection or channel interruptions. To create a new client, create a Client struct. The only required field is the Subtopics array. Default values for the other fields are listed in the struct definition. An example configuration is shown below (subscribing to text bulletins, citypage XML, and CAP alert files). Please see https://github.com/MetPX/sarracenia/blob/master/doc/sr_subscribe.1.rst#subtopic-amqp-pattern-subtopic-need-to-be-set for formatting subtopics. Calling Connect(ctx) will return an error if no subtopics are provided. The function will block until the initial connection with the remote server is established. When the client is provisioned, an internal goroutine is created to consume the feed. To consume the events, call Consume() on the client. This returns an Event and an indicator of whether the client is still actively consuming from the remote server. To close the client, call the cancel function on the context provided to the client. This will gracefully close the active channels and the connection to the remote server. A fully functioning client can be found in the example directory. I would like to thank Sean Treadway for his Go RabbitMQ client package. I would also like to thank Environment Canada and the awesome people at Shared Services Canada for their developments and "openness" of MetPX and sarracenia. Copyright (c) 2019 Tanner Ryan. All rights reserved. Use of this source code is governed by a BSD-style license that can be found in the LICENSE file. Sean Treadway's Go RabbitMQ client package is under a BSD 2-clause license. Cenk Alti's Go exponential backoff package is under an MIT license. Once again, all rights reserved.
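A minimal sketch of the configuration the doc describes (Subtopics, Connect(ctx), Consume()). The import path, the subtopic strings, and the Event handling are assumptions; only the Client/Connect/Consume shape comes from the text above.

```go
package main

import (
	"context"
	"log"

	"github.com/TannerRyan/ecpush" // import path is an assumption
)

func main() {
	client := &ecpush.Client{
		// Subtopics is the only required field; these patterns are
		// illustrative (text bulletins, citypage XML, CAP alerts).
		Subtopics: []string{
			"bulletins.alphanumeric.#",
			"citypage_weather.xml.#",
			"alerts.cap.#",
		},
	}

	ctx, cancel := context.WithCancel(context.Background())
	defer cancel() // cancelling gracefully closes channels and the connection

	// Blocks until the initial connection is established; errors if no
	// subtopics were provided.
	if err := client.Connect(ctx); err != nil {
		log.Fatal(err)
	}

	for {
		event, ok := client.Consume()
		if !ok {
			return // client is no longer actively consuming
		}
		log.Printf("received event: %+v", event)
	}
}
```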
Package rtcp implements encoding and decoding of RTCP packets according to RFCs 3550 and 5506. RTCP is a sister protocol of the Real-time Transport Protocol (RTP). Its basic functionality and packet structure is defined in RFC 3550. RTCP provides out-of-band statistics and control information for an RTP session. It partners with RTP in the delivery and packaging of multimedia data, but does not transport any media data itself. The primary function of RTCP is to provide feedback on the quality of service (QoS) in media distribution by periodically sending statistics information such as transmitted octet and packet counts, packet loss, packet delay variation, and round-trip delay time to participants in a streaming multimedia session. An application may use this information to control quality of service parameters, perhaps by limiting flow, or using a different codec. Decoding and encoding RTCP packets:
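A minimal round trip in the style of the pion/rtcp API, where Marshal turns a []rtcp.Packet into a compound payload and Unmarshal reverses it; treat the particular packet types used here as illustrative.

```go
package main

import (
	"fmt"
	"log"

	"github.com/pion/rtcp"
)

func main() {
	// Encoding: marshal one or more packets into a compound payload.
	raw, err := rtcp.Marshal([]rtcp.Packet{
		&rtcp.Goodbye{Sources: []uint32{0x01020304}},
	})
	if err != nil {
		log.Fatal(err)
	}

	// Decoding: a compound RTCP datagram may contain several packets.
	packets, err := rtcp.Unmarshal(raw)
	if err != nil {
		log.Fatal(err)
	}
	for _, p := range packets {
		fmt.Printf("%T\n", p) // e.g. *rtcp.Goodbye, *rtcp.SenderReport
	}
}
```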
Package elite provides real-time data from Elite Dangerous through files written to disk by the game.
Package movingminmax provides an efficient O(1) moving minimum-maximum filter that can be used in real-time contexts. It uses the algorithm from: Daniel Lemire, Streaming Maximum-Minimum Filter Using No More than Three Comparisons per Element. Nordic Journal of Computing, 13 (4), pages 328-339, 2006. http://arxiv.org/abs/cs/0610046 This implementation uses a fixed amount of memory and makes no dynamic allocations during updates.
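A compact sketch of the monotonic-deque idea such a filter is built on; this is illustrative only and is not the movingminmax API. For clarity it keeps every sample in a growing slice, whereas the package (and the paper) use fixed buffers with no allocation per update.

```go
package main

import "fmt"

// MovingMinMax reports the min and max of the last w samples.
type MovingMinMax struct {
	w    int       // window width
	i    int       // index of the next sample
	vals []float64 // all samples seen (a fixed ring buffer in a real version)
	lo   []int     // index deque; front holds the window minimum
	hi   []int     // index deque; front holds the window maximum
}

func New(w int) *MovingMinMax { return &MovingMinMax{w: w} }

// Update pushes one sample and returns the current window min and max.
func (m *MovingMinMax) Update(x float64) (min, max float64) {
	m.vals = append(m.vals, x)
	// Drop trailing indices that can no longer be the extreme.
	for len(m.hi) > 0 && m.vals[m.hi[len(m.hi)-1]] <= x {
		m.hi = m.hi[:len(m.hi)-1]
	}
	for len(m.lo) > 0 && m.vals[m.lo[len(m.lo)-1]] >= x {
		m.lo = m.lo[:len(m.lo)-1]
	}
	m.hi = append(m.hi, m.i)
	m.lo = append(m.lo, m.i)
	// Expire front indices that slid out of the window.
	if m.hi[0] <= m.i-m.w {
		m.hi = m.hi[1:]
	}
	if m.lo[0] <= m.i-m.w {
		m.lo = m.lo[1:]
	}
	m.i++
	return m.vals[m.lo[0]], m.vals[m.hi[0]]
}

func main() {
	f := New(3)
	for _, x := range []float64{5, 1, 3, 2, 8} {
		min, max := f.Update(x)
		fmt.Println(min, max) // (5,5) (1,5) (1,5) (1,3) (2,8)
	}
}
```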
Package rtcp implements encoding and decoding of RTCP packets according to RFCs 3550 and 5506. RTCP is a sister protocol of the Real-time Transport Protocol (RTP). Its basic functionality and packet structure is defined in RFC 3550. RTCP provides out-of-band statistics and control information for an RTP session. It partners with RTP in the delivery and packaging of multimedia data, but does not transport any media data itself. The primary function of RTCP is to provide feedback on the quality of service (QoS) in media distribution by periodically sending statistics information such as transmitted octet and packet counts, packet loss, packet delay variation, and round-trip delay time to participants in a streaming multimedia session. An application may use this information to control quality of service parameters, perhaps by limiting flow, or using a different codec. Decoding RTCP packets: Encoding RTCP packets:
Package binarysocket is a real-time bidirectional binary socket library for the web. It offers a clean, robust, and efficient way to connect web browsers with a Go backend. It automatically detects the supported socket layers and chooses the most suitable one. The library exposes a net.Conn interface on the Go backend side and a similar interface on the JavaScript client side. If you already have a Go application using a TCP connection, you can drop in the BinarySocket package, fire up an HTTP server, and that's it: no further adaptations are required in the backend. Instead of writing one backend to communicate with the web application and another to communicate with other Go programs, BinarySocket eliminates this duplication.
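A purely hypothetical sketch of the drop-in idea described above. The constructor and hook names (NewServer, OnNewSocket) and the import path are placeholders, not the library's documented API; the point is that each accepted socket satisfies net.Conn, so existing TCP-oriented code keeps working unchanged.

```go
package main

import (
	"io"
	"log"
	"net"
	"net/http"

	"github.com/example/binarysocket" // hypothetical import path
)

// echo is ordinary net.Conn code; nothing in it knows about the browser.
func echo(conn net.Conn) {
	defer conn.Close()
	io.Copy(conn, conn)
}

func main() {
	// NewServer and OnNewSocket stand in for whatever the package exposes.
	server := binarysocket.NewServer()
	server.OnNewSocket(func(conn net.Conn) {
		go echo(conn)
	})

	// The socket endpoint is served by a plain HTTP server.
	http.Handle("/bs", server)
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```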
Package gocent is a Go language API client for the Centrifugo real-time messaging server. In the example below we initialize a new client with the server URL, project secret, and request timeout; then publish data into a channel, call presence and history for that channel, and finally show how to publish several messages in one POST request to the API endpoint using the internal command buffer.
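A reconstruction of the example the doc refers to. The exact signatures, including the buffered AddPublish/Send pair, are assumptions inferred from the description above.

```go
package main

import (
	"fmt"
	"log"
	"time"

	"github.com/centrifugal/gocent"
)

func main() {
	// Client from server URL, project secret, and request timeout.
	c := gocent.NewClient("http://localhost:8000", "project-secret", 5*time.Second)

	// Publish a single message into a channel.
	if _, err := c.Publish("news", []byte(`{"text": "hello"}`)); err != nil {
		log.Fatal(err)
	}

	// Presence and history for the same channel.
	presence, err := c.Presence("news")
	if err != nil {
		log.Fatal(err)
	}
	history, err := c.History("news")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(presence, history)

	// Buffer several publishes and flush them in one POST to the API endpoint.
	c.AddPublish("news", []byte(`{"text": "one"}`))
	c.AddPublish("news", []byte(`{"text": "two"}`))
	if _, err := c.Send(); err != nil {
		log.Fatal(err)
	}
}
```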
Package voiceid provides the API client, operations, and parameter types for Amazon Voice ID. Amazon Connect Voice ID provides real-time caller authentication and fraud risk detection, which make voice interactions in contact centers more secure and efficient.
Package srtp implements Secure Real-time Transport Protocol
Banshee is a real-time anomaly (outlier) detection system for periodic metrics. We use it to monitor our website and RPC service interfaces, including call frequency, response time, and exception calls. Our services send statistics to statsd; statsd aggregates them every 10 seconds and broadcasts the results to its backends, including banshee. Banshee analyzes current metrics against history data, calculates the trending, and alerts us if the trending behaves anomalously. For example, we have an API named get_user whose response time (in milliseconds) is reported to banshee from statsd every 10 seconds: banshee will catch the latest metric 300 and report it as an anomaly. Why don't we just set a fixed threshold instead (i.e. 200ms)? That would also work, but maintaining a lot of thresholds is tedious; banshee analyzes metric trendings automatically and finds the "thresholds" by itself. 1. Designed for periodic metrics: real-world metrics are usually periodic, and banshee only compares metrics with the same "phase" when detecting. 2. Multiple alerting rule configuration options, to alert via fixed thresholds or via anomalous trendings. 3. Comes with an anomaly-visualization webapp and alerting-rule admin panels. 4. Requires no extra storage services; banshee handles storage on disk by itself. Requirements: 1. Go >= 1.5. 2. Node and gulp. Installation: 1. Clone the repo. 2. Build the binary via `make`. 3. Build static files via `make static`. Usage: Flags: see package config. In order to forward metrics to banshee from statsd, we need to add the npm module statsd-banshee to statsd's backends: 1. Install statsd-banshee on your statsd servers. 2. Add module statsd-banshee to statsd's backends in config.js. Requires bell.js v2.0+ and banshee v0.0.7+. Banshee has four components running in the same process: 1. Detector detects incoming metrics against history data and stores the results. 2. Webapp visualizes the detection results and provides panels to manage alerting rules, projects, and users. 3. Alerter sends SMS and emails once anomalies are found. 4. Cleaner removes outdated metrics from storage. See package alerter and alerter/exampleCommand. Deployment via fabric (http://www.fabfile.org/): see the deploy.py docs for more. To upgrade, just pull the latest code; note that the admin storage sqlite3 schema will be auto-migrated. Internals: 1. Detection algorithms, see package detector. 2. Detector input net protocol, see package detector. 3. Storage, see package storage. 4. Filter, see package filter. MIT (c) eleme, inc.
Package influxdb is the root package of InfluxDB, the scalable datastore for metrics, events, and real-time analytics. If you're looking for the Go HTTP client for InfluxDB, see package github.com/influxdata/influxdb/client/v2.
Package influxdb is the root package of InfluxDB, the scalable datastore for metrics, events, and real-time analytics. If you're looking for the Go HTTP client for InfluxDB, see package github.com/influxdata/influxdb/client/v2.
Package gocent is a Go language client for Centrifugo real-time messaging server HTTP API.
Package gocent is a Go language client for Centrifugo real-time messaging server HTTP API.
Package adp and its sub-packages implement the backend services to interact with multiple blockchains or other types of similar networks. adp provides you with two microservices: the wallet and explorer services, which communicate via a message broker. The user can ask the explorer to monitor blockchain addresses or accounts by channeling requests through the message broker. The explorer service consumes requests and monitors addresses. When an address is involved in a transaction, the explorer will send an event to the message broker. Wallet services can then listen to the broker to notify their users about these events in real-time. The message broker is implemented as a product-agnostic layer (package lib/msg) and is configured via a JSON config file at service startup. Both wallet and explorer have their own database used for persistence. Each microservice's database can be standalone or shared by the microservices. Its layered implementation (package lib/store) provides a database-product-agnostic interface. A blockchain layer (package lib/block) is implemented so new blockchain interfaces can be developed and added. The layer provides basic functionality to request account balances, send and get transactions, etc. Both the wallet and explorer services will connect to the blockchains or networks indicated in the JSON config file provided at startup. Depending on workload and resources, one or more instances of the microservices can be orchestrated in order to provide the required service level to the users. The microservices can also be monitored via a Prometheus API by setting the flag "-m" at startup. The wallet microservice (package wallet) can be started by running cmd/wallet/main.go or using Dockerfile.wallet. The wallet exposes an HTTP RESTful API that can be used by multiple clients. The API provides basic functionality to get the available networks, request account balances, set accounts for monitoring, and send transactions to the blockchains. It also provides a hierarchical deterministic wallet (HD wallet), which comes in quite handy in a single-user configuration. Transaction events sent by the explorer service are logged and can be read by clients. Any client front-end can also get the events by consuming the appropriate queues of the message broker. The explorer microservice (package explorer) can be started by running cmd/explorer/main.go or using Dockerfile.explorer. The explorer scans mined blocks of the configured networks and sends transaction events to the message broker when an account or address being monitored is involved. Wallet services can send requests for the explorer to start or stop monitoring addresses so that real-time eventing can be provided to the clients or front-end.
Package influxdb is the root package of InfluxDB, the scalable datastore for metrics, events, and real-time analytics. If you're looking for the Go HTTP client for InfluxDB, see package github.com/influxdata/influxdb/client/v2.
Weather API

# Introduction
WeatherAPI.com provides access to weather and geo data via a JSON/XML RESTful API. It allows developers to create desktop, web, and mobile applications using this data very easily. We provide the following data through our API:

- Real-time weather
- 14 day weather forecast
- Astronomy
- Time zone
- Location data
- Search or Autocomplete API
- NEW: Historical weather
- NEW: Future weather (up to 300 days ahead)
- Weather alerts
- Air quality data

# Getting Started
You need to [signup](https://www.weatherapi.com/signup.aspx); you can then find your API key under [your account](https://www.weatherapi.com/login.aspx) and start using the API right away! We have [code libraries](https://www.weatherapi.com/docs/code-libraries.aspx) for different programming languages such as PHP, .NET, and Java. If you find any features missing or have any suggestions, please [contact us](https://www.weatherapi.com/contact.aspx).

# Authentication
API access to the data is protected by an API key. If at any time you find the API key has become vulnerable, please regenerate it using the Regenerate button next to the API key. Authentication to the WeatherAPI.com API is provided by passing your API key as a request parameter.

## key parameter
key=<YOUR API KEY>

API version: 1.0.0-oas3. Generated by: Swagger Codegen (https://github.com/swagger-api/swagger-codegen.git)
Package gorbl lets you perform RBL (Real-time Blackhole List - https://en.wikipedia.org/wiki/DNSBL) lookups using Go. This package takes inspiration from a similar module that I wrote in Python (https://github.com/polera/rblwatch). gorbl takes a simpler approach: basic lookup capability is provided by the lib; unlike in rblwatch, concurrent lookups and the choice of lists to search are left to those using the lib. JSON annotations on the types are provided as a convenience. The lookup mechanics are sketched below.
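The mechanics behind an RBL lookup, sketched with only the standard library (this is not gorbl's API): reverse the IPv4 octets, append the list's zone, and resolve the resulting name; any answer (conventionally in 127.0.0.0/8) means the address is listed, while NXDOMAIN means it is not. The zone name below is illustrative.

```go
package main

import (
	"fmt"
	"net"
	"strings"
)

// lookup reports whether ip is listed in the DNSBL zone.
func lookup(ip, zone string) (listed bool, err error) {
	// 192.0.2.99 against zone example.rbl becomes 99.2.0.192.example.rbl.
	octets := strings.Split(ip, ".")
	for i, j := 0, len(octets)-1; i < j; i, j = i+1, j-1 {
		octets[i], octets[j] = octets[j], octets[i]
	}
	query := strings.Join(octets, ".") + "." + zone

	addrs, err := net.LookupHost(query)
	if dnsErr, ok := err.(*net.DNSError); ok && dnsErr.IsNotFound {
		return false, nil // NXDOMAIN: not listed
	}
	if err != nil {
		return false, err
	}
	return len(addrs) > 0, nil
}

func main() {
	// 127.0.0.2 is the conventional "always listed" test address.
	listed, err := lookup("127.0.0.2", "zen.spamhaus.org")
	fmt.Println(listed, err)
}
```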
Package influxdb is the root package of InfluxDB, the scalable datastore for metrics, events, and real-time analytics. If you're looking for the Go HTTP client for InfluxDB, see package github.com/influxdata/influxdb/client/v2.
Package influxdb is the root package of InfluxDB, the scalable datastore for metrics, events, and real-time analytics. If you're looking for the Go HTTP client for InfluxDB, see package github.com/influxdata/influxdb/client/v2.
Package influxdb is the root package of InfluxDB, the scalable datastore for metrics, events, and real-time analytics. If you're looking for the Go HTTP client for InfluxDB, see package github.com/ivopetiz/influxdb/client/v2.
Package influxdb is the root package of InfluxDB, the scalable datastore for metrics, events, and real-time analytics. If you're looking for the Go HTTP client for InfluxDB, see package github.com/influxdata/influxdb/client/v2.
Package influxdb is the root package of InfluxDB, the scalable datastore for metrics, events, and real-time analytics. If you're looking for the Go HTTP client for InfluxDB, see package github.com/influxdata/influxdb/client/v2.
Package godnsbl lets you perform RBL lookups. RBL = Real-time Blackhole List (https://en.wikipedia.org/wiki/DNSBL).
Package rtcp implements encoding and decoding of RTCP packets according to RFCs 3550 and 5506. RTCP is a sister protocol of the Real-time Transport Protocol (RTP). Its basic functionality and packet structure is defined in RFC 3550. RTCP provides out-of-band statistics and control information for an RTP session. It partners with RTP in the delivery and packaging of multimedia data, but does not transport any media data itself. The primary function of RTCP is to provide feedback on the quality of service (QoS) in media distribution by periodically sending statistics information such as transmitted octet and packet counts, packet loss, packet delay variation, and round-trip delay time to participants in a streaming multimedia session. An application may use this information to control quality of service parameters, perhaps by limiting flow, or using a different codec. Decoding RTCP packets: Encoding RTCP packets:
Package srtp implements Secure Real-time Transport Protocol
Package godnsbl lets you perform RBL (Real-time Blackhole List - https://en.wikipedia.org/wiki/DNSBL) lookups using Go. JSON annotations on the types are provided as a convenience.
Package bugsnag captures errors in real-time and reports them to Bugsnag (http://bugsnag.com). Using bugsnag-go is a three-step process. 1. As early as possible in your program, configure the notifier with your APIKey. This sets up handling of panics that would otherwise crash your app. 2. Add bugsnag to places that already catch panics. For example, you should add it to the HTTP server when you call ListenAndServe: If that's not possible, you can also wrap each HTTP handler manually: 3. To notify Bugsnag of an error that is not a panic, pass it to bugsnag.Notify. This will also log the error message using the configured Logger. For detailed integration instructions see https://bugsnag.com/docs/notifiers/go. The only required configuration is the Bugsnag API key, which can be obtained by clicking "Settings" at the top of https://bugsnag.com/ after signing up. We also recommend you set the ReleaseStage, AppType, and AppVersion if these make sense for your deployment workflow. If you need to attach extra data to Bugsnag notifications, you can do that using the rawData mechanism. Most of the functions that send errors to Bugsnag allow you to pass in any number of interface{} values as rawData. The rawData can consist of the Severity, Context, User or MetaData types listed below, and there is also built-in support for *http.Request. If you want to add custom tabs to your bugsnag dashboard, you can pass any value in as rawData, and then process it into the event's metadata using a bugsnag.OnBeforeNotify() hook. If necessary you can pass Configuration in as rawData, or modify the Configuration object passed into OnBeforeNotify hooks. Configuration passed in this way only affects the current notification.
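The three steps above in code, as a minimal sketch; the API key and release stage values are placeholders.

```go
package main

import (
	"errors"
	"net/http"

	"github.com/bugsnag/bugsnag-go"
)

func main() {
	// Step 1: configure as early as possible; this installs panic handling.
	bugsnag.Configure(bugsnag.Configuration{
		APIKey:       "YOUR-API-KEY-HERE",
		ReleaseStage: "production",
	})

	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		// Step 3: report a non-panic error, attaching the request as rawData.
		bugsnag.Notify(errors.New("something went wrong"), r)
	})

	// Step 2: wrap the HTTP server so handler panics are reported.
	http.ListenAndServe(":8080", bugsnag.Handler(nil))
}
```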
Package srtp implements Secure Real-time Transport Protocol
Package cowbull provides means to assemble a cows & bulls online real-time game.
Package metrik is a small library that takes the hassle out of creating HTTP/JSON APIs for real-time metrics.
Package gosoundio is a Go wrapper for libsoundio, a cross-platform library for real-time audio input and output.
Package main is a stub for wr's command line interface, with the actual implementation in the cmd package. wr is a workflow runner. You use it to run the commands in your workflow easily, automatically, reliably, with repeatability, and while making optimal use of your available computing resources. wr is implemented as a polling-free in-memory job queue with an on-disk ACID transactional embedded database, written in Go. Its main benefits over other software workflow management systems are its very low latency and overhead, its high performance at scale, its real-time status updates with a view on all your workflows on one screen, its permanent searchable history of all the commands you have ever run, and its "live" dependencies enabling easy automation of on-going projects. Start up the manager daemon, which gives you a URL you can view the web interface on: In addition to the "local" scheduler, which will run your commands on all available cores of the local machine, you can also have it run your commands on your LSF cluster or in your OpenStack environment (where it will scale the number of servers needed up and down automatically). Now, stick the commands you want to run in a text file and: Arbitrarily complex workflows can be formed by specifying command dependencies. Use the --help option of `wr add` for details. wr's core is implemented in the queue package. This is the in-memory job queue that holds commands that still need to be run. Its multiple sub-queues enable certain guarantees: a given command will only get run by a single client at any one time; if a client dies, the command will get run by another client instead; if a command cannot be run, it is buried until the user takes action; if a command has a dependency, it won't run until its dependencies are complete. The jobqueue package provides client+server code for interacting with the in-memory queue from the queue package, and by storing all new commands in an on-disk database, provides an additional guarantee: that (dynamic) workflows won't break because a job that was added got "lost" before it got run. It also retains all completed jobs, enabling searching through past workflows and allowing for "live" dependencies, triggering the rerunning of previously completed commands if their dependencies change. The jobqueue package is also what actually does the main "work" of the system: the server component knows how many commands need to be run and what their resource requirements (memory, time, CPUs, etc.) are, and submits the appropriate number of jobqueue runner clients to the job scheduler. The jobqueue/scheduler package has the scheduler-specific code that ensures that these runner clients get run on the configured system in the most efficient way possible. E.g. for LSF, if we have 10 commands that need 2GB of memory to run, we will submit a job array of size 10 with 2GB of memory reservation to LSF. The most limited (and therefore potentially least contended) queue capable of running the commands will be chosen. For OpenStack, the cheapest server (in terms of cores and memory) that can run the commands will be spawned, and once there is no more work to do on those servers, they get terminated to free up resources. The cloud package implements methods for interacting with cloud environments such as OpenStack. The corresponding jobqueue/scheduler package uses these methods to do their work. The static subdirectory contains the HTML, CSS and JavaScript needed for the web interface.
See jobqueue/serverWebI.go for how the web interface backend is implemented. The internal package contains general utility functions, and most notably config.go holds the code for how the command line interface deals with config options.
Package cloudwatchlogs provides the client and types for making API requests to Amazon CloudWatch Logs. You can use Amazon CloudWatch Logs to monitor, store, and access your log files from Amazon EC2 instances, AWS CloudTrail, or other sources. You can then retrieve the associated log data from CloudWatch Logs using the CloudWatch console, CloudWatch Logs commands in the AWS CLI, the CloudWatch Logs API, or the CloudWatch Logs SDK. You can use CloudWatch Logs to: Monitor logs from EC2 instances in real-time: You can use CloudWatch Logs to monitor applications and systems using log data. For example, CloudWatch Logs can track the number of errors that occur in your application logs and send you a notification whenever the rate of errors exceeds a threshold that you specify. CloudWatch Logs uses your log data for monitoring, so no code changes are required. For example, you can monitor application logs for specific literal terms (such as "NullReferenceException") or count the number of occurrences of a literal term at a particular position in log data (such as "404" status codes in an Apache access log). When the term you are searching for is found, CloudWatch Logs reports the data to a CloudWatch metric that you specify. Monitor AWS CloudTrail logged events: You can create alarms in CloudWatch and receive notifications of particular API activity as captured by CloudTrail and use the notification to perform troubleshooting. Archive log data: You can use CloudWatch Logs to store your log data in highly durable storage. You can change the log retention setting so that any log events older than this setting are automatically deleted. The CloudWatch Logs agent makes it easy to quickly send both rotated and non-rotated log data off of a host and into the log service. You can then access the raw log data when you need it. See https://docs.aws.amazon.com/goto/WebAPI/logs-2014-03-28 for more information on this service. See the cloudwatchlogs package documentation for more information. https://docs.aws.amazon.com/sdk-for-go/api/service/cloudwatchlogs/ To use Amazon CloudWatch Logs with the SDK, use the New function to create a new service client. With that client you can make API requests to the service. These clients are safe to use concurrently. See the SDK's documentation for more information on how to use the SDK. https://docs.aws.amazon.com/sdk-for-go/api/ See aws.Config documentation for more information on configuring SDK clients. https://docs.aws.amazon.com/sdk-for-go/api/aws/#Config See the Amazon CloudWatch Logs client CloudWatchLogs for more information on creating a client for this service. https://docs.aws.amazon.com/sdk-for-go/api/service/cloudwatchlogs/#New
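Creating the client with aws-sdk-go's New function, as the doc describes, and making one call with it; the region is a placeholder.

```go
package main

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/cloudwatchlogs"
)

func main() {
	sess := session.Must(session.NewSession(&aws.Config{
		Region: aws.String("us-east-1"), // placeholder region
	}))

	// The client is safe for concurrent use across goroutines.
	svc := cloudwatchlogs.New(sess)

	out, err := svc.DescribeLogGroups(&cloudwatchlogs.DescribeLogGroupsInput{})
	if err != nil {
		log.Fatal(err)
	}
	for _, g := range out.LogGroups {
		fmt.Println(aws.StringValue(g.LogGroupName))
	}
}
```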
Package esalert is a simple framework for real-time alerts on data in Elasticsearch.
Command slackbridge connects Slack channels to system I/O streams using Slack's real-time messaging API. Three modes of execution are supported: The first runs a child process and connects its standard streams to a Slack channel. Within the child process, the text of individual messages in the channel is received on stdin. Text emitted on stdout and stderr is sent back to the channel as individual messages. The second is similar to the first, but automatically starts a new child process for each Slack channel from which a message is received. The third connects to Slack and streams message text to stdout. Input is ignored. During its operation, slackbridge needs to convert Slack messages to and from plain text. When reading, individual messages are delimited by newlines. Multi-line messages are equivalent to multiple single-line messages in succession. This is not configurable. When writing, lines of output written within a 0.1 second interval are batched into a single Slack message. This is not configurable through the slackbridge CLI (though the underlying slackio package allows customization of this "batching" scheme). Users, reactions, threads, and other Slack features are not represented in any way. Only the text in the main body of the channel is available. Received messages are formatted per Slack's "Basic message formatting" as described at https://api.slack.com/docs/message-formatting. Sent messages should be formatted in this manner as well. slackbridge does not handle this automatically. Run "slackbridge help" to view full usage information. Before using slackbridge, the SLACK_TOKEN environment variable must be set to a valid Slack API token. slackbridge is designed for long-running programs. Extremely short programs (e.g. a single echo statement in exec mode) are not guaranteed to work as expected, and issues encountered with slackbridge while running such programs will not be considered bugs. Excessive runs of short programs with slackbridge will likely trigger Slack's rate limiting.
Package spiker is a real-time computing package.
Package influxdb is the root package of InfluxDB, the scalable datastore for metrics, events, and real-time analytics. If you're looking for the Go HTTP client for InfluxDB, see package github.com/influxdata/influxdb/client/v2.
Package srtp implements Secure Real-time Transport Protocol
Package godnsbl lets you perform RBL (Real-time Blackhole List - https://en.wikipedia.org/wiki/DNSBL) lookups using Go. JSON annotations on the types are provided as a convenience.
Package track provides a beep.Streamer with real-time stream insertion.
Banshee is a real-time anomaly (outlier) detection system for periodic metrics. We use it to monitor our website and RPC service interfaces, including call frequency, response time, and exception calls. Our services send statistics to statsd; statsd aggregates them every 10 seconds and broadcasts the results to its backends, including banshee. Banshee analyzes current metrics against history data, calculates the trending, and alerts us if the trending behaves anomalously. For example, we have an API named get_user whose response time (in milliseconds) is reported to banshee from statsd every 10 seconds: banshee will catch the latest metric 300 and report it as an anomaly. Why don't we just set a fixed threshold instead (i.e. 200ms)? That would also work, but maintaining a lot of thresholds is tedious; banshee analyzes metric trendings automatically and finds the "thresholds" by itself. 1. Designed for periodic metrics: real-world metrics are usually periodic, and banshee only compares metrics with the same "phase" when detecting. 2. Multiple alerting rule configuration options, to alert via fixed thresholds or via anomalous trendings. 3. Comes with an anomaly-visualization webapp and alerting-rule admin panels. 4. Requires no extra storage services; banshee handles storage on disk by itself. Requirements: 1. Go >= 1.4 and godep. 2. Node and gulp. Installation: 1. Clone the repo. 2. Build the binary via `make`. 3. Build static files via `make static`. Usage: Flags: see package config. In order to forward metrics to banshee from statsd, we need to add the npm module statsd-banshee to statsd's backends: 1. Install statsd-banshee on your statsd servers. 2. Add module statsd-banshee to statsd's backends in config.js. Requires bell.js v2.0+ and banshee v0.0.7+. Banshee has four components running in the same process: 1. Detector detects incoming metrics against history data and stores the results. 2. Webapp visualizes the detection results and provides panels to manage alerting rules, projects, and users. 3. Alerter sends SMS and emails once anomalies are found. 4. Cleaner removes outdated metrics from storage. See package alerter and alerter/exampleCommand. Internals: 1. Detection algorithms, see package detector. 2. Detector input net protocol, see package detector. 3. Storage, see package storage. 4. Filter, see package filter. MIT (c) eleme, inc.
Package influxdb is the root package of InfluxDB, the scalable datastore for metrics, events, and real-time analytics. If you're looking for the Go HTTP client for InfluxDB, see package github.com/influxdata/influxdb/client/v2.