monasca-notification
====================
Reads alarms from Kafka and then notifies the customer using their configured notification method.
.. image:: https://governance.openstack.org/tc/badges/monasca-notification.svg
   :target: https://governance.openstack.org/tc/reference/tags/index.html
This engine reads alarms from Kafka and then notifies the customer using the configured notification method. Multiple notification and retry engines can run in parallel, up to one per available Kafka partition. Zookeeper is used to negotiate access to the Kafka partitions whenever a new process joins or leaves the working set.
The notification engine generates notifications using the following steps:

1. Read an alarm from the alarm topic.
2. Determine the notification types configured for the alarm by reading
   from the database.
3. Send the notification.
4. Add successfully sent notifications to the sent notification topic.
5. Add failed notifications to the retry topic.

The notification engine uses three Kafka topics:

1. alarm topic: alarms inbound to the notification engine.
2. notification topic: successfully sent notifications.
3. retry topic: failed notifications.
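One pass through these steps can be pictured with a short Python sketch.
This is illustrative only; ``db.get_notification_methods`` and
``send_notification`` are hypothetical helpers, not the project's actual API:

::

    def process_alarm(alarm, db, producer):
        # Step 2: look up the notification methods configured for this
        # alarm definition (hypothetical query helper).
        for method in db.get_notification_methods(alarm['alarm_definition_id']):
            if send_notification(method, alarm):      # step 3 (hypothetical)
                producer.send('notification', alarm)  # step 4: sent topic
            else:
                producer.send('retry', alarm)         # step 5: retry topic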
A retry engine runs in parallel with the notification engine and gives any failed notification a configurable number of extra chances at success.
The retry engine generates notifications using the following steps:

1. Read a failed notification from the retry topic.
2. Rebuild and resend the notification.
3. Add successfully sent notifications to the sent notification topic.
4. Re-add failed notifications that have not hit the retry limit to the
   retry topic.
5. Discard failed notifications that have hit the retry limit.

The retry engine uses two Kafka topics:

1. notification topic: successfully sent notifications.
2. retry topic: failed notifications.
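The retry decision reduces to a small amount of logic; a minimal sketch,
where the retry limit, the ``retry_count`` field, and the
``send_notification`` callable are all illustrative assumptions (the real
limit is read from configuration):

::

    MAX_RETRIES = 5  # illustrative; the real limit comes from the config file

    def handle_failed_notification(notification, producer, send_notification):
        # One pass of the retry engine for a notification read from the
        # retry topic; `send_notification` is a hypothetical callable.
        if send_notification(notification):
            producer.send('notification', notification)  # success: sent topic
        elif notification['retry_count'] < MAX_RETRIES:
            notification['retry_count'] += 1
            producer.send('retry', notification)         # re-queue for another try
        # else: retry limit hit, the notification is discarded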
Offsets are not committed when alarms are read from the alarm topic; they are committed only after processing completes. This allows processing to continue even when some notifications are slow to send. In the event of a catastrophic failure, some notifications may have been sent even though their alarms were never acknowledged. This is an acceptable failure mode: it is better to send a notification twice than not at all.
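This commit-after-processing pattern is straightforward to express with a
Kafka client. A minimal sketch assuming the ``kafka-python`` package; the
topic and group names are illustrative:

::

    from kafka import KafkaConsumer  # assumes the kafka-python package

    def process_alarm_message(value):
        """Hypothetical stand-in for the real notification processing."""

    consumer = KafkaConsumer(
        'alarm',                       # illustrative topic name
        group_id='notification-engine',
        enable_auto_commit=False,      # never commit just because we read
    )

    for message in consumer:
        process_alarm_message(message.value)
        # Commit only after processing finishes. A crash between sending
        # and committing re-delivers the alarm: at-least-once delivery.
        consumer.commit()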
The general process when a major error is encountered is to exit the daemon, which allows the other processes to renegotiate access to the Kafka partitions. It is also assumed that the notification engine will be run by a process supervisor which will restart it in case of failure. In this way, any errors which are not easy to recover from are handled automatically: the service restarts and another instance takes over as the active daemon.
Though this should cover all errors, there is a risk that an alarm, or a set of alarms, is processed more than once and notifications are sent out multiple times. To minimize this risk a number of techniques are used:

- Timeouts are implemented for all notification types.
- An alarm time to live (TTL) is applied: any alarm older than the TTL is
  not processed.
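The TTL check amounts to comparing the alarm timestamp against the current
time. A minimal sketch; the TTL value here is illustrative (the real value
is configurable), while ``alarm_timestamp`` is the epoch-seconds field shown
in the raw message example later in this document:

::

    import time

    ALARM_TTL_SECONDS = 4 * 3600  # illustrative; the real TTL is configurable

    def is_expired(alarm):
        # Skip alarms older than the TTL, e.g. alarms replayed from
        # Kafka after an engine restart.
        return time.time() - alarm['alarm_timestamp'] > ALARM_TTL_SECONDS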
``oslo.config`` is used for handling configuration options. A sample
configuration file ``etc/monasca/notification.conf.sample`` can be
generated by running:

::

    tox -e genconfig
To run the service using the default config file location of
``/etc/monasca/notification.conf``:

::

    monasca-notification
To run the service and explicitly specify the config file:
::

    monasca-notification --config-file /etc/monasca/monasca-notification.conf
StatsD is incorporated into the daemon and will send all stats to the StatsD server launched by monasca-agent. The default host and port point to ``localhost:8125``.
The stats sent include both counters and timers.
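For reference, emitting a counter and a timer to a StatsD server on
``localhost:8125`` looks roughly like this with the ``statsd`` PyPI package;
the metric names here are illustrative, not the daemon's actual metric names:

::

    from statsd import StatsClient  # assumes the `statsd` PyPI package

    statsd = StatsClient(host='localhost', port=8125)

    statsd.incr('notifications.sent')              # illustrative counter
    with statsd.timer('notifications.send_time'):  # illustrative timer
        pass  # the timed work (sending a notification) would go here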
The following notification plugins are available:

- Email
- HipChat
- Jira
- PagerDuty
- Slack
- Webhook
The plugins can be configured via the Monasca Notification config file. In general you will need to follow these steps to enable a plugin:

1. Add the plugin to the ``enabled`` list in the ``[notification_types]``
   section of the config file.
2. Set any plugin-specific options in the plugin's own config section
   (see the Slack example below).
PagerDuty plugin
----------------

The PagerDuty plugin supports the PagerDuty v1 Events API. The first step
is to `configure`_ a service in PagerDuty which uses this API. Once
configured, the service will be assigned an integration key. This key should
be used as the ``ADDRESS`` field when creating the notification type, for
example:
::

    monasca notification-create pd_notification pagerduty a30d5560c5ce4239a6f52a01a15850ca
The default settings for the plugin, including the v1 Events API URL, should be sufficient to get started, but it is worth checking that the PagerDuty Events v1 API URL matches the one provided in the example Monasca Notification config file.
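For context, a trigger event sent to the v1 Events API looks roughly like
the following hand-rolled sketch using ``requests``. The plugin performs the
equivalent call internally; the URL shown is the legacy v1 endpoint and
should be verified against your config, and the description text is made up:

::

    import requests

    payload = {
        # The integration key stored in the notification's ADDRESS field.
        "service_key": "a30d5560c5ce4239a6f52a01a15850ca",
        "event_type": "trigger",
        "description": "dummy_alarm has triggered",  # illustrative text
    }
    resp = requests.post(
        "https://events.pagerduty.com/generic/2010-04-15/create_event.json",
        json=payload,
        timeout=10,
    )
    resp.raise_for_status()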
Slack plugin
------------
To use the Slack plugin you must first configure an incoming `webhook`_
for the Slack channel you wish to post notifications to. The notification can
then be created as follows:
::

    monasca notification-create slack_notification slack https://hooks.slack.com/services/MY/SECRET/WEBHOOK/URL
Note that whilst it is also possible to use a token instead of a webhook,
this approach is now `deprecated`_.
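An incoming webhook simply accepts an HTTP POST with a JSON body; a minimal
sketch with ``requests``, reusing the placeholder URL from the example above
and an illustrative message text:

::

    import requests

    requests.post(
        "https://hooks.slack.com/services/MY/SECRET/WEBHOOK/URL",
        json={"text": "dummy_alarm has triggered"},  # illustrative text
        timeout=10,
    )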
By default the Slack notification will dump all available information into
the alert. For example, a notification may be posted to Slack which looks
like this:
::

    {
       "metrics":[
          {
             "dimensions":{
                "hostname":"operator"
             },
             "id":null,
             "name":"cpu.user_perc"
          }
       ],
       "alarm_id":"20a54a65-44b8-4ac9-a398-1f2d888827d2",
       "state":"ALARM",
       "alarm_timestamp":1556703552,
       "tenant_id":"62f7a7a314904aa3ab137d569d6b4fde",
       "old_state":"OK",
       "alarm_description":"Dummy alarm",
       "message":"Thresholds were exceeded for the sub-alarms: count(cpu.user_perc, deterministic) >= 1.0 with the values: [1.0]",
       "alarm_definition_id":"78ce7b53-f7e6-4b51-88d0-cb741e7dc906",
       "alarm_name":"dummy_alarm"
    }
The format of the above message can be customised with a Jinja template. All fields
from the raw Slack message are available in the template. For example, you may
configure the plugin as follows:
::

    [notification_types]
    enabled = slack

    [slack_notifier]
    message_template = /etc/monasca/slack_template.j2
    timeout = 10
    ca_certs = /etc/ssl/certs/ca-bundle.crt
    insecure = False
With the following contents of ``/etc/monasca/slack_template.j2``:

::

    {{ alarm_name }} has triggered on {% for item in metrics %}host {{ item.dimensions.hostname }}{% if not loop.last %}, {% endif %}{% endfor %}.
With this configuration, the raw Slack message above would be transformed
into:

::

    dummy_alarm has triggered on host operator.
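A template can be checked outside the running service by rendering it
directly with Jinja2. A minimal sketch, using only the fields of the raw
Slack message above that this particular template references:

::

    from jinja2 import Template

    with open('/etc/monasca/slack_template.j2') as f:
        template = Template(f.read())

    # All fields of the raw Slack message are exposed as top-level
    # template variables; only the ones this template uses are needed here.
    message = {
        "alarm_name": "dummy_alarm",
        "metrics": [{"dimensions": {"hostname": "operator"}}],
    }
    print(template.render(**message))
    # -> dummy_alarm has triggered on host operator.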
Future Considerations
=====================
- More extensive load testing is needed:

  - How fast is the MySQL DB? How much load do we put on it? Initially it
    makes most sense to read notification details for each alarm, but
    eventually that information may need to be cached.
  - How expensive are commits to Kafka for every message we read? Should
    we commit every N messages?
  - How efficient is the default Kafka consumer batch size?
  - Currently we can get ~200 notifications per second per
    NotificationEngine instance using webhooks to a local HTTP server. Is
    that fast enough?
  - Are we putting too much load on Kafka at ~200 commits per second?
.. _webhook: https://api.slack.com/incoming-webhooks
.. _deprecated: https://api.slack.com/custom-integrations/legacy-tokens
.. _configure: https://support.pagerduty.com/docs/services-and-integrations#section-events-api-v1