sentinelblue-logstash-output-azure-loganalytics
Sentinel Blue provides an updated output plugin for Logstash that sends logs from Logstash to an Azure Sentinel/Log Analytics workspace using dynamic custom table names.
This allows you to set your destination table in your filtering process and reference it in the output plugin, as sketched below. The original plugin functionality has been preserved as well.
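For illustration only, a minimal sketch of that pattern might look like the following; the [target_table] field name, the routing condition, and the table names are assumptions for the example, not part of the plugin's documented configuration:
input {
  beats {
    port => "5044"
  }
}
filter {
  # Hypothetical routing: choose a destination table per event.
  # The field name [target_table] is an assumption for this sketch.
  if "firewall" in [tags] {
    mutate { add_field => { "target_table" => "firewallLogs" } }
  } else {
    mutate { add_field => { "target_table" => "generalLogs" } }
  }
}
output {
  sentinelblue-logstash-output-azure-loganalytics {
    workspace_id => "<your workspace id>"
    workspace_key => "<your workspace key>"
    # Reference the field set in the filter block as the custom table name.
    custom_log_table_name => "%{target_table}"
  }
}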
The Azure Sentinel output plugin uses the Log Analytics REST API integration to ingest logs into custom log tables (see "What are custom logs tables" in the Azure documentation).
This plugin is based on the original provided by the Azure Sentinel team. View the original plugin here: https://github.com/Azure/Azure-Sentinel/tree/master/DataConnectors/microsoft-logstash-output-azure-loganalytics
Plugin version: v1.1.2.rc1
Released on: 2022-10-28
This plugin is currently in development and is free to use. We welcome contributions from the open source community on this project, and we request and appreciate feedback from users.
https://rubygems.org/gems/sentinelblue-logstash-output-azure-loganalytics
For issues regarding the output plugin, please open a support issue on the GitHub repository describing the problem so that we can assist you.
To install the sentinelblue-logstash-output-azure-loganalytics plugin, see the Logstash Working with plugins document. For offline setup, follow the Logstash Offline Plugin Management instructions.
logstash-plugin install sentinelblue-logstash-output-azure-loganalytics
Required Logstash version: 7.0+
In your Logstash configuration file, add the Azure Sentinel output plugin with the following values. The custom table name can reference event fields dynamically, for example %{field_name} or %{[nested][field]}. Note: see the GitHub repository to learn more about the sent message's configuration, performance settings, and mechanism.
Security notice: We recommend not stating the workspace_id and workspace_key explicitly in your Logstash configuration for security reasons. It is best to store this sensitive information in a Logstash keystore as described here: https://www.elastic.co/guide/en/elasticsearch/reference/current/get-started-logstash-user.html
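As a sketch of that approach, assuming you have added entries named WS_ID and WS_KEY to the keystore (for example with bin/logstash-keystore add WS_ID), the output block could reference them like this:
output {
  sentinelblue-logstash-output-azure-loganalytics {
    # ${WS_ID} and ${WS_KEY} are resolved from the Logstash keystore at startup.
    # The entry names are assumptions; use whichever names you stored.
    workspace_id => "${WS_ID}"
    workspace_key => "${WS_KEY}"
    custom_log_table_name => "tableName"
  }
}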
Here is an example configuration that parses incoming Syslog data into a custom table named "logstashCustomTableName".
input {
beats {
port => "5044"
}
}
filter {
}
output {
sentinelblue-logstash-output-azure-loganalytics {
workspace_id => "4g5tad2b-a4u4-147v-a4r7-23148a5f2c21" # <your workspace id>
workspace_key => "u/saRtY0JGHJ4Ce93g5WQ3Lk50ZnZ8ugfd74nk78RPLPP/KgfnjU5478Ndh64sNfdrsMni975HJP6lp==" # <your workspace key>
custom_log_table_name => "tableName"
}
}
Or, using the TCP input:
input {
tcp {
port => "514"
type => syslog # optional, will affect the log type in the table
}
}
filter {
}
output {
sentinelblue-logstash-output-azure-loganalytics {
workspace_id => "4g5tad2b-a4u4-147v-a4r7-23148a5f2c21" # <your workspace id>
workspace_key => "u/saRtY0JGHJ4Ce93g5WQ3Lk50ZnZ8ugfd74nk78RPLPP/KgfnjU5478Ndh64sNfdrsMni975HJP6lp==" # <your workspace key>
custom_log_table_name => "tableName"
}
}
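The following advanced example parses incoming Syslog messages with a grok filter and sends only the extracted fields, listed in key_names, to a static custom table: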
input {
tcp {
port => 514
type => syslog
}
}
filter {
grok {
match => { "message" => "<%{NUMBER:PRI}>1 (?<TIME_TAG>[0-9]{4}-[0-9]{1,2}-[0-9]{1,2}T[0-9]{1,2}:[0-9]{1,2}:[0-9]{1,2})[^ ]* (?<HOSTNAME>[^ ]*) %{GREEDYDATA:MSG}" }
}
}
output {
sentinelblue-logstash-output-azure-loganalytics {
workspace_id => "<WS_ID>"
workspace_key => "${WS_KEY}"
custom_log_table_name => "logstashCustomTableName"
key_names => ['PRI','TIME_TAG','HOSTNAME','MSG']
plugin_flush_interval => 5
}
}
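This variation sends the same parsed fields to a dynamic table whose name is taken from the [event][name] field of each event: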
filter {
grok {
match => { "message" => "<%{NUMBER:PRI}>1 (?<TIME_TAG>[0-9]{4}-[0-9]{1,2}-[0-9]{1,2}T[0-9]{1,2}:[0-9]{1,2}:[0-9]{1,2})[^ ]* (?<HOSTNAME>[^ ]*) %{GREEDYDATA:MSG}" }
}
}
output {
sentinelblue-logstash-output-azure-loganalytics {
workspace_id => "<WS_ID>"
workspace_key => "${WS_KEY}"
custom_log_table_name => "%{[event][name]}"
key_names => ['PRI','TIME_TAG','HOSTNAME','MSG']
plugin_flush_interval => 5
}
}
Now you can run Logstash with the example configuration and send mock data using the 'logger' command.
For example:
logger -p local4.warn -t CEF: "0|Microsoft|Device|cef-test|example|data|1|here is some more data for the example" -P 514 -d -n 127.0.0.1
Note: this format of pushing logs is not tested. You can tail a file for similar results; a minimal file-input sketch is shown below.
logger -p local4.warn -t JSON: '{"event":{"name":"logstashCustomTableName"},"purpose":"testplugin"}'
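As a rough sketch of tailing a file instead (the path below is a hypothetical placeholder):
input {
  file {
    # Hypothetical path; point this at the log file you want to ship.
    path => "/var/log/example.log"
    start_position => "beginning"
  }
}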
Alternatively, you can use netcat to test your configuration:
echo "test string" | netcat localhost 514
FAQs
Unknown package
We found that sentinelblue-logstash-output-azure-loganalytics demonstrates an unhealthy version release cadence and project activity because the last version was released a year ago. It has one open source maintainer collaborating on the project.