gokaf is an in-memory pubsub engine built to provide near real-time data streams.
Package ecpush is a package for subscribing to real-time meteorological data feeds from Environment Canada. The main goal of ecpush is to provide a simple and lightweight client that can be used for receiving real-time data events directly from Environment Canada's meteorological product feed. The client can directly fetch the published products, or it can just provide a notification channel containing the product location (an HTTP URL to Environment Canada's Datamart). The client has also been designed to automatically recover from any connection or channel interruptions.

To create a new client, create a Client struct. The only required field is the Subtopics array. Default values for the other fields are listed in the struct definition. An example configuration is shown below (subscribing to text bulletins, citypage XML and CAP alert files). Please see https://github.com/MetPX/sarracenia/blob/master/doc/sr_subscribe.1.rst#subtopic-amqp-pattern-subtopic-need-to-be-set for formatting subtopics.

Calling Connect(ctx) will return an error if no subtopics are provided. The function will block until the initial connection with the remote server is established. When the client is provisioned, an internal goroutine is created to consume the feed. To consume the events, call Consume() on the client. This returns an Event and an indicator of whether the client is still actively consuming from the remote server. To close the client, call the cancel function on the context provided to the client. This will gracefully close the active channels and the connection to the remote server. A fully functioning client can be found in the example directory.

I would like to thank Sean Treadway for his Go RabbitMQ client package. I would also like to thank Environment Canada and the awesome people at Shared Services Canada for their developments and "openness" of MetPX and sarracenia.

Copyright (c) 2019 Tanner Ryan. All rights reserved.
Use of this source code is governed by a BSD-style license that can be found in the LICENSE file. Sean Treadway's Go RabbitMQ client package is under a BSD 2-clause license. Cenk Alti's Go exponential backoff package is under an MIT license. Once again, all rights reserved.
Package influxdb is the root package of InfluxDB, the scalable datastore for metrics, events, and real-time analytics. If you're looking for the Go HTTP client for InfluxDB, see package github.com/influxdata/influxdb/client/v2.
Package relayr provides high level functionality for real-time web communication. It allows communication between front-end clients and back-end servers via easy-to-use constructs.
Package spiker is a real-time computing package.
Package godnsbl lets you perform RBL (Real-time Blackhole List - https://en.wikipedia.org/wiki/DNSBL) lookups using Golang. JSON annotations on the types are provided as a convenience.
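An RBL lookup is just a DNS query against a specially constructed name: the IPv4 octets are reversed and the list's zone is appended. A minimal sketch of the name construction (the `rblName` helper and the Spamhaus zone are illustrative, not part of this package's API):

```go
package main

import (
	"fmt"
	"strings"
)

// rblName builds the DNS name used for an RBL lookup: the IPv4
// octets are reversed and the list's zone is appended, so checking
// 127.0.0.2 against zen.spamhaus.org queries 2.0.0.127.zen.spamhaus.org.
func rblName(ipv4, zone string) string {
	o := strings.Split(ipv4, ".")
	return fmt.Sprintf("%s.%s.%s.%s.%s", o[3], o[2], o[1], o[0], zone)
}

func main() {
	// A listed address answers with an A record (often 127.0.0.x);
	// net.LookupHost(rblName(...)) would perform the actual query.
	fmt.Println(rblName("127.0.0.2", "zen.spamhaus.org")) // prints "2.0.0.127.zen.spamhaus.org"
}
```

127.0.0.2 is the conventional "always listed" test address most RBLs publish for exactly this kind of smoke test.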
Banshee is a real-time anomaly (outlier) detection system for periodic metrics.
Package centrifuge is a real-time messaging library that abstracts several bidirectional transports (WebSocket, SockJS) and provides primitives to build real-time applications with Go. It is also used as the core of the Centrifugo server. The API of this library is almost entirely goroutine-safe, except for one-time operations such as setting callback handlers.
Package rtcp implements encoding and decoding of RTCP packets according to RFCs 3550 and 5506. RTCP is a sister protocol of the Real-time Transport Protocol (RTP). Its basic functionality and packet structure is defined in RFC 3550. RTCP provides out-of-band statistics and control information for an RTP session. It partners with RTP in the delivery and packaging of multimedia data, but does not transport any media data itself. The primary function of RTCP is to provide feedback on the quality of service (QoS) in media distribution by periodically sending statistics information such as transmitted octet and packet counts, packet loss, packet delay variation, and round-trip delay time to participants in a streaming multimedia session. An application may use this information to control quality of service parameters, perhaps by limiting flow, or using a different codec.
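As a standalone illustration of what RTCP decoding involves, here is a parse of the 4-byte common header that every RTCP packet begins with, per RFC 3550 (`parseHeader` and the `header` type are hypothetical helpers for this sketch, not this package's API):

```go
package main

import (
	"encoding/binary"
	"errors"
	"fmt"
)

// header mirrors the 4-byte RTCP common header from RFC 3550:
// 2-bit version, padding flag, 5-bit report count, packet type,
// and a length field counted in 32-bit words minus one.
type header struct {
	Version byte
	Padding bool
	Count   byte
	Type    byte // 200 = Sender Report, 201 = Receiver Report, ...
	Length  uint16
}

func parseHeader(b []byte) (header, error) {
	if len(b) < 4 {
		return header{}, errors.New("rtcp: header too short")
	}
	return header{
		Version: b[0] >> 6,          // top two bits
		Padding: b[0]&0x20 != 0,     // one padding bit
		Count:   b[0] & 0x1f,        // low five bits
		Type:    b[1],               // packet type
		Length:  binary.BigEndian.Uint16(b[2:4]),
	}, nil
}

func main() {
	// An empty Receiver Report: V=2, no padding, 0 reports, PT=201, length=1.
	h, _ := parseHeader([]byte{0x80, 0xc9, 0x00, 0x01})
	fmt.Printf("%+v\n", h)
}
```

Encoding is the mirror image: pack the same fields back into four bytes followed by the packet body.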
Banshee is a real-time anomaly (outlier) detection system for periodic metrics. We use it to monitor our website and RPC service interfaces, including call frequency, response time and exception calls. Our services send statistics to statsd; statsd aggregates them every 10 seconds and broadcasts the results to its backends, including banshee. Banshee analyzes the current metrics against history data, calculates the trend, and alerts us if the trend behaves anomalously.

For example, we have an API named get_user, and this API's response time (in milliseconds) is reported to banshee from statsd every 10 seconds. Banshee will catch the latest metric 300 and report it as an anomaly. Why don't we just set a fixed threshold instead (e.g. 200ms)? That may also work, but it is tedious and hard to maintain a lot of thresholds. Banshee analyzes metric trends automatically and finds the "thresholds" by itself.

Features: 1. Designed for periodic metrics. Real-world metrics are almost always periodic, so banshee only compares metrics with the same "phase" when detecting. 2. Multiple alerting rule configuration options, to alert via fixed thresholds or via anomalous trends. 3. Comes with an anomaly visualization webapp and alerting rule admin panels. 4. Requires no extra storage services; banshee handles storage on disk by itself.

Requirements: 1. Go >= 1.5. 2. Node and gulp.

Installation: 1. Clone the repo. 2. Build the binary via `make`. 3. Build static files via `make static`. Usage: Flags: See package config.

In order to forward metrics to banshee from statsd, we need to add the npm module statsd-banshee to statsd's backends: 1. Install statsd-banshee on your statsd servers. 2. Add module statsd-banshee to statsd's backends in config.js. Requires bell.js v2.0+ and banshee v0.0.7+.

Banshee has 4 components running in the same process: 1. Detector detects incoming metrics against history data and stores the results. 2. Webapp visualizes the detection results and provides panels to manage alerting rules, projects and users. 3. Alerter sends SMS and emails once anomalies are found. 4. Cleaner cleans outdated metrics from storage. See package alerter and alerter/exampleCommand.

Deployment via fabric (http://www.fabfile.org/): see the deploy.py docs for more. To upgrade, just pull the latest code; note that the admin storage sqlite3 schema will be auto-migrated.

Implementation notes: 1. Detection algorithms, see package detector. 2. Detector input net protocol, see package detector. 3. Storage, see package storage. 4. Filter, see package filter.

MIT (c) eleme, inc.
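The trend-based detection described above can be illustrated with a toy detector. This sketch applies the plain 3-sigma rule to a get_user-style response-time history; `anomalous` is a hypothetical helper, and banshee's actual detector additionally weights history and accounts for periodicity:

```go
package main

import (
	"fmt"
	"math"
)

// anomalous applies the 3-sigma rule: a value is flagged when it
// deviates from the historical mean by more than three standard
// deviations. This illustrates how "thresholds" can be derived from
// history instead of being configured by hand.
func anomalous(history []float64, value float64) bool {
	var sum float64
	for _, v := range history {
		sum += v
	}
	mean := sum / float64(len(history))
	var sq float64
	for _, v := range history {
		sq += (v - mean) * (v - mean)
	}
	std := math.Sqrt(sq / float64(len(history)))
	return math.Abs(value-mean) > 3*std
}

func main() {
	// get_user response times hover around 100ms; 300ms stands out.
	history := []float64{98, 102, 100, 99, 101, 100, 97, 103}
	fmt.Println(anomalous(history, 300)) // the spike is flagged
	fmt.Println(anomalous(history, 101)) // a normal reading passes
}
```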
Package gorbl lets you perform RBL (Real-time Blackhole List - https://en.wikipedia.org/wiki/DNSBL) lookups using Golang. This package takes inspiration from a similar module that I wrote in Python (https://github.com/polera/rblwatch). gorbl takes a simpler approach: basic lookup capability is provided by the lib. Unlike in rblwatch, concurrent lookups and the lists to search are left to those using the lib. JSON annotations on the types are provided as a convenience.
Package gafka is a full ecosystem built for real-time cloud PubSub system.
Package ecpush is a library for subscribing to real-time meteorological data feeds from Environment Canada. The main goal of ecpush is to provide a simple and lightweight client that can be used for receiving real-time data events directly from Environment Canada's meteorological product feed. The client can directly fetch the published products, or it can just provide a notification channel containing the product location (HTTP URL to Environment Canada's Datamart). The client has also been designed to automatically recover from any connection or channel interruptions. The interface is very minimal. To create a new client, simply create a Client struct. The only required field in the struct is the Subtopics array. Default values for the other fields are listed in the struct definition. An example configuration is shown below (subscribing to text bulletins, citypage XML and CAP alert files). Please see https://github.com/MetPX/sarracenia/blob/master/doc/sr_subscribe.1.rst#subtopic-amqp-pattern-subtopic-need-to-be-set for formatting subtopics. When calling `Connect()` on the newly created client, two channels will be returned. To receive the Events, create a goroutine to range over the Event channel. The done channel may be used to block the goroutine. To close the ecpush Client, simply call Close() on the client. This will close all active channels and connections to the messaging broker. It will also signal the done channel which will close the holding goroutine previously created above. A fully functioning client can be found in the example directory. I would like to thank Sean Treadway for his Go RabbitMQ client library. I would also like to thank Environment Canada and the awesome people at Shared Services Canada for their developments and "openness" of MetPX and sarracenia. Copyright (c) 2019 Tanner Ryan. All rights reserved. Use of this source code is governed by a BSD-style license that can be found in the LICENSE file. 
Sean Treadway's Go RabbitMQ client library is under a BSD 2-clause license. Once again, all rights reserved.
Peach is a web server for multi-language, real-time synchronization and searchable documentation.
Package esalert is a simple framework for real-time alerts on data in Elasticsearch.
Package proton is a powerful platform for your real-time web applications.
Package voiceid provides the API client, operations, and parameter types for Amazon Voice ID. Amazon Connect Voice ID provides real-time caller authentication and fraud risk detection, which make voice interactions in contact centers more secure and efficient.
`zkclient` is an encapsulation utility for zookeeper based on [go-zookeeper](github.com/samuel/go-zookeeper), supporting the following features: ## Features: - auto reconnect - set/get/delete value - string/json codecs built in, and you can implement your own - real-time synchronization of data from zookeeper into memory ## Modules - `Codec`: value encode/decode - `Watcher`: loop watch control - `Handler`: includes `valueHandler` and `mapHandler`; set/get/delete value, handle events, synchronize values, trigger listeners - `Listener`: includes `ValueListener` and `ChildListener`; listen for value updated/deleted ## API - `Sync*`: synchronize data into memory - `SyncWatch*`: synchronize data into memory, and listen for value changes - `Encode*`: set value into zookeeper - `Decode*`: get value from zookeeper
Package cooperate is an Operational Transformation library for real-time collaborative applications.
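The core of Operational Transformation is a transform function that rewrites one operation against another concurrent one so both clients converge. A minimal sketch of the classic insert-vs-insert rule (the types and names here are illustrative, not this package's operation model):

```go
package main

import "fmt"

// insert is a minimal OT operation: put text at a byte offset.
type insert struct {
	Pos  int
	Text string
}

// transform adjusts op a so it can be applied after a concurrent
// insert b has already landed: positions at or past b's offset
// shift right by b's length.
func transform(a, b insert) insert {
	if a.Pos >= b.Pos {
		a.Pos += len(b.Text)
	}
	return a
}

// apply performs an insert on a plain string document.
func apply(s string, op insert) string {
	return s[:op.Pos] + op.Text + s[op.Pos:]
}

func main() {
	b := insert{Pos: 0, Text: "oh, "} // one client prepends a greeting
	a := insert{Pos: 5, Text: " "}    // another splits the words
	// Apply b first, then a transformed against b; both edits survive.
	fmt.Println(apply(apply("helloworld", b), transform(a, b))) // prints "oh, hello world"
}
```

Real OT systems also handle deletes, tie-breaking for equal positions, and server-side reordering, but this is the shape of the core idea.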
Package hekaanom implements anomaly detection in time series data for the Heka data processing tool (hekad.readthedocs.org). It does this by providing a filter plugin that contains a set of simple tools for turning streamed data into suitable time series, an anomaly detection algorithm, and tools for post-processing the anomalies. Detected anomalies are then injected back into the Heka pipeline as messages.

Anomaly detection in this package is designed around three stages: making incoming data into regular time series (referred to here as "windowing"), detecting whether or not data points in the resulting time series are anomalous ("detecting"), and gathering consecutive (or roughly consecutive) anomalous data points together into anomalous events ("gathering"). Internally, each stage also has an associated data structure. Respectively, these are: window, ruling (i.e. each point is "ruled" anomalous or non-anomalous), and span (i.e. this time span is anomalous). Rulings and spans are injected into the Heka pipeline for use by subsequent stages with the types "anom.ruling" and "anom.span". The plugin itself has a general configuration section in the Heka configuration file, and each stage also has its own configuration section.

Turning incoming data into time series consists of a number of steps. First, Heka's message matcher syntax (http://hekad.readthedocs.io/en/v0.10.0/message_matcher.html) is used to indicate which messages in the stream contain the intended data. Second, the time and value components of each data point are extracted from the filtered messages. The time component is pulled from the message's timestamp, while the `value_field`, specified in the plugin's configuration, is used as the data point's numeric value. Third, data points are split into independent time series based on the related message's `series_fields`. This is in place so that data pertaining to a number of time series may be fed into Heka at the same time. For example, if one wants to detect anomalies in the number of hourly visits to 10 different web pages, but all the page requests are coming in in real time and are interspersed, the message field that contains the web page URL could be indicated as the `series_field`, and a time series for each unique URL would be created. Finally, data points in each time series are bundled together into windows of regular and configurable "width". This is effectively downsampling to a regular interval. Right now, all value fields of the data points that fall within a window are added together to determine the window's value. Time series, which now consist of a sequence of windows, are passed on to the detect stage.

The detect stage uses a configurable anomaly detection algorithm to determine which windows are anomalous, and by how much. Right now, the only algorithm included in this package is Robust Principal Component Analysis ("RPCA"). The anomaly detection algorithm creates a ruling for every window it receives, and injects these rulings into Heka's message pipeline.

The gather stage listens to the stream of rulings and gathers consecutive anomalous rulings together into anomalous events. Anomalous rulings need not be strictly consecutive; instead, a configurable parameter (`span_width`) can be used to indicate how many consecutive seconds of non-anomalous rulings must pass before an anomalous event is considered over. Like the windowing stage, the gather stage collects the numeric values of rulings (determined again by the `value_field`) together into a single value. Unlike the windowing stage, the gather stage can use a number of different operators when combining the constituent ruling values (outlined in the config struct documentation). The gather stage injects the generated anomalous spans into the Heka pipeline for any further processing or output the user might wish to perform.
Package gocent is a Go language API client for the Centrifugo real-time messaging server. In the example below we initialize a new client with the server URL, project secret and request timeout. Then we publish data into a channel, call presence and history for the channel, and finally show how to publish several messages in one POST request to the API endpoint using the internal command buffer.
Package binarysocket is a real-time bidirectional binary socket library for the web. It offers a clean, robust and efficient way to connect web browsers with a Go backend in a simple way. It automatically detects supported socket layers and chooses the most suitable one. This library offers a net.Conn interface on the Go backend side and a similar interface on the client JavaScript side. That's awesome, right? You already have a Go application using a TCP connection? Just drop in the BinarySocket package, fire up an HTTP server and that's it. No further adaptations are required in the backend. Instead of writing one backend that communicates with the web application and another backend that communicates with other Go programs, BinarySocket eliminates this duplication.
Package bugsnag captures errors in real-time and reports them to Bugsnag (http://bugsnag.com). Using bugsnag-go is a three-step process. 1. As early as possible in your program, configure the notifier with your APIKey. This sets up handling of panics that would otherwise crash your app. 2. Add bugsnag to places that already catch panics. For example, you should add it to the HTTP server when you call ListenAndServe: If that's not possible, for example because you're using Google App Engine, you can also wrap each HTTP handler manually: 3. To notify Bugsnag of an error that is not a panic, pass it to bugsnag.Notify. This will also log the error message using the configured Logger. For detailed integration instructions see https://bugsnag.com/docs/notifiers/go. The only required configuration is the Bugsnag API key, which can be obtained by clicking "Settings" at the top of https://bugsnag.com/ after signing up. We also recommend you set the ReleaseStage, AppType, and AppVersion if these make sense for your deployment workflow. If you need to attach extra data to Bugsnag notifications, you can do that using the rawData mechanism. Most of the functions that send errors to Bugsnag allow you to pass in any number of interface{} values as rawData. The rawData can consist of the Severity, Context, User or MetaData types listed below, and there is also built-in support for *http.Request. If you want to add custom tabs to your Bugsnag dashboard, you can pass any value in as rawData, and then process it into the event's metadata using a bugsnag.OnBeforeNotify() hook. If necessary, you can pass Configuration in as rawData, or modify the Configuration object passed into OnBeforeNotify hooks. Configuration passed in this way only affects the current notification.
Package movingminmax provides an efficient O(1) moving minimum-maximum filter that can be used in real-time contexts. It uses the algorithm from: Daniel Lemire, Streaming Maximum-Minimum Filter Using No More than Three Comparisons per Element. Nordic Journal of Computing, 13 (4), pages 328-339, 2006. http://arxiv.org/abs/cs/0610046 This implementation uses a fixed amount of memory and makes no dynamic allocations during updates.
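Lemire's algorithm keeps a monotonic "wedge" of candidate indices so each element is pushed and popped at most once, giving amortized O(1) updates with no allocation in the loop. A self-contained sketch of the moving-minimum half of the filter (`movingMin` is an illustration of the algorithm, not this package's actual API):

```go
package main

import "fmt"

// movingMin returns the minimum over a trailing window of size w for
// each element. A deque of candidate indices is kept increasing by
// value: anything larger than the incoming element can never be the
// minimum again and is dropped from the back, while the front is
// evicted once it falls out of the window.
func movingMin(xs []float64, w int) []float64 {
	out := make([]float64, len(xs))
	var dq []int // indices whose values are strictly increasing
	for i, x := range xs {
		// drop candidates that can never be the minimum again
		for len(dq) > 0 && xs[dq[len(dq)-1]] >= x {
			dq = dq[:len(dq)-1]
		}
		dq = append(dq, i)
		// evict the front once it leaves the trailing window
		if dq[0] <= i-w {
			dq = dq[1:]
		}
		out[i] = xs[dq[0]]
	}
	return out
}

func main() {
	fmt.Println(movingMin([]float64{3, 1, 4, 1, 5, 9, 2, 6}, 3)) // prints "[3 1 1 1 1 1 2 2]"
}
```

The moving maximum is symmetric (flip the comparison), and the paper's "no more than three comparisons per element" bound comes from maintaining both wedges together.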
Weather API

# Introduction

WeatherAPI.com provides access to weather and geo data via a JSON/XML RESTful API. It allows developers to create desktop, web and mobile applications using this data very easily. We provide the following data through our API:

- Real-time weather
- 14 day weather forecast
- Astronomy
- Time zone
- Location data
- Search or Autocomplete API
- NEW: Historical weather
- NEW: Future weather (up to 300 days ahead)
- Weather alerts
- Air quality data

# Getting Started

You need to [signup](https://www.weatherapi.com/signup.aspx) and then you can find your API key under [your account](https://www.weatherapi.com/login.aspx), and start using the API right away! We have [code libraries](https://www.weatherapi.com/docs/code-libraries.aspx) for different programming languages like PHP, .NET, Java, etc. If you find any features missing or have any suggestions, please [contact us](https://www.weatherapi.com/contact.aspx).

# Authentication

API access to the data is protected by an API key. If at any time you find the API key has become vulnerable, please regenerate the key using the Regenerate button next to the API key. Authentication to the WeatherAPI.com API is provided by passing your API key as a request parameter.

## key parameter

key=<YOUR API KEY>

API version: 1.0.0-oas3. Generated by: Swagger Codegen (https://github.com/swagger-api/swagger-codegen.git)
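Attaching the key as a request parameter can be done with Go's standard library. A minimal sketch (the current-conditions path follows the service's documented v1 endpoints, and `buildURL` is an illustrative helper, not an official client):

```go
package main

import (
	"fmt"
	"net/url"
)

// buildURL attaches the API key and location query to a
// WeatherAPI.com endpoint as request parameters; swap the path
// for forecast.json, history.json, etc.
func buildURL(apiKey, q string) string {
	u, _ := url.Parse("https://api.weatherapi.com/v1/current.json")
	v := url.Values{}
	v.Set("key", apiKey)
	v.Set("q", q)
	u.RawQuery = v.Encode()
	return u.String()
}

func main() {
	// http.Get(buildURL(...)) would fetch the JSON response.
	fmt.Println(buildURL("YOUR_API_KEY", "London"))
}
```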