
microsoft-sentinel-logstash-output
Microsoft Sentinel provides a new output plugin for Logstash. Use this output plugin to send any log via Logstash to the Microsoft Sentinel/Log Analytics workspace. This is done with the Log Analytics DCR-based API. You may send logs to custom or standard tables.
Plugin version: v1.2.0 (released 2024-02-23)
This plugin is currently in development and is free to use. We welcome contributions from the open source community on this project, and we request and appreciate feedback from users.
Microsoft Sentinel's Logstash output plugin sends logs to a Log Analytics workspace using the DCR-based Logs Ingestion API. It supports the following Logstash versions:
wget -qO - https://artifacts.elastic.co/GPG-KEY-elasticsearch | gpg --dearmor | sudo tee /etc/apt/trusted.gpg.d/elastic.gpg >/dev/null
echo "deb https://artifacts.elastic.co/packages/8.x/apt stable main" | sudo tee -a /etc/apt/sources.list.d/elastic-8.x.list >/dev/null
sudo apt-get update && sudo apt-get install logstash=1:8.8.1-1
To prevent Logstash from being automatically upgraded to a newer version, put its package on hold:
sudo apt-mark hold logstash
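To confirm the hold took effect, you can list the held packages (assuming a Debian/Ubuntu system, as in the commands above):

```shell
# List packages currently on hold; logstash should appear in the output
apt-mark showhold
```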
Please note that when using Logstash 8, it is recommended to disable ECS compatibility in the pipeline. For more information, refer to the Logstash documentation.
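As a sketch, one way to do this is to disable ECS compatibility for all pipelines in the Logstash settings file (the path below is the default for package installs; adjust for your environment):

```yaml
# /etc/logstash/logstash.yml — applies to every pipeline on this node
pipeline.ecs_compatibility: disabled
```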
To install microsoft-sentinel-logstash-output, you can use the gem published on rubygems.org:
sudo /usr/share/logstash/bin/logstash-plugin install microsoft-sentinel-logstash-output
If your machine doesn't have an active Internet connection, or you want to install the plugin manually, you can download the plugin files and perform an 'offline' installation; see the Logstash Offline Plugin Management instructions.
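As a sketch, the offline flow uses Logstash's prepare-offline-pack command; the archive path below is an example:

```shell
# On a machine with Internet access: bundle the plugin into an offline pack
sudo /usr/share/logstash/bin/logstash-plugin prepare-offline-pack \
  --output /tmp/microsoft-sentinel-logstash-output.zip microsoft-sentinel-logstash-output

# On the target machine: install from the copied archive
sudo /usr/share/logstash/bin/logstash-plugin install file:///tmp/microsoft-sentinel-logstash-output.zip
```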
If you already have the plugin installed, you can check which version you have by running:
sudo /usr/share/logstash/bin/logstash-plugin list --verbose microsoft-sentinel-logstash-output
To create a sample file, follow these steps:
1) Configure the output plugin to write a sample file:
output {
microsoft-sentinel-logstash-output {
create_sample_file => true
sample_file_path => "<enter the path to the file in which the sample data will be written>" # for example: "c:\\temp" (Windows) or "/var/log" (Linux)
}
}
Note: make sure the path exists before creating the sample file.
2) Start Logstash. The plugin collects up to 10 records for a sample.
3) A file named "sampleFile<epoch timestamp>.json" (for example, "c:\temp\sampleFile1648453501.json") is created in the configured path once there are 10 events to sample or when the Logstash process exits gracefully.
Alternatively, the following configuration uses the generator input plugin to produce test events for the sample file:
input {
generator {
lines => [ "This is a test log message"]
count => 10
}
}
output {
microsoft-sentinel-logstash-output {
create_sample_file => true
sample_file_path => "<enter the path to the file in which the sample data will be written>" # for example: "c:\\temp" (Windows) or "/var/log" (Linux)
}
}
[
{
"host": "logstashMachine",
"sequence": 0,
"message": "This is a test log message",
"ls_timestamp": "2022-10-29T13:19:28.116Z",
"ls_version": "1"
},
...
]
To configure the Microsoft Sentinel Logstash plugin, you first need to create the DCR-related resources. To create these resources, follow one of the following tutorials:
Use the tutorial from the previous section to retrieve the following attributes:
After retrieving the required values, replace the output section of the Logstash configuration file created in the previous steps with the example below, substituting your values for the placeholder strings in angle brackets. Make sure you change the create_sample_file attribute to false.
Here is an example for the output plugin configuration section:
output {
microsoft-sentinel-logstash-output {
client_app_Id => "<enter your client_app_id value here>"
client_app_secret => "<enter your client_app_secret value here>"
tenant_id => "<enter your tenant id here>"
data_collection_endpoint => "<enter your DCE logsIngestion URI here>"
dcr_immutable_id => "<enter your DCR immutableId here>"
dcr_stream_name => "<enter your stream name here>"
create_sample_file=> false
sample_file_path => "c:\\temp"
}
}
managed_identity - Boolean, false by default. Set to true if you wish to authenticate using a Managed Identity. Managed Identities provide a "passwordless" authentication solution, so providing client_app_Id, client_app_secret, and tenant_id is no longer required. Learn more about using Managed Identities. Using Managed Identities over app registrations is highly recommended!
If your machine resides outside of Azure, make sure it is onboarded into Azure Arc. Learn more about Azure Arc.
key_names – Array of strings. Use this if you wish to send only a subset of the columns to Log Analytics.
plugin_flush_interval – Number, 5 by default. Defines the maximal time difference (in seconds) between sending two messages to Log Analytics.
retransmission_time - Number, 10 by default. Sets the amount of time, in seconds, allowed for retransmitting messages after a send failure.
compress_data - Boolean, false by default. When true, the event data is compressed before calling the API. Recommended for high-throughput pipelines.
proxy - String, empty by default. Specifies which proxy URL to use for all communications with Azure.
proxy_aad - String, empty by default. Specifies which proxy URL to use for API calls to the Microsoft Entra ID service. Overrides the proxy setting.
proxy_endpoint - String, empty by default. Specifies which proxy URL to use when sending log data to the endpoint. Overrides the proxy setting.
azure_cloud - String, empty by default. Specifies the name of the Azure cloud in use; AzureCloud is the default. Available values: AzureCloud, AzureChinaCloud, and AzureUSGovernment.
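To illustrate, the optional parameters slot into the same output block. This is a sketch only: the values and the column names in key_names are placeholders, not recommendations.

```ruby
output {
  microsoft-sentinel-logstash-output {
    client_app_Id => "<enter your client_app_id value here>"
    client_app_secret => "<enter your client_app_secret value here>"
    tenant_id => "<enter your tenant id here>"
    data_collection_endpoint => "<enter your DCE logsIngestion URI here>"
    dcr_immutable_id => "<enter your DCR immutableId here>"
    dcr_stream_name => "<enter your stream name here>"
    key_names => ['Computer','TimeGenerated','Message']  # example column names - send only these
    plugin_flush_interval => 10    # wait up to 10 seconds between sends
    compress_data => true          # compress payloads; useful for high-throughput pipelines
    proxy => "http://proxy.example.com:8080"  # example proxy URL
  }
}
```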
Here is an example for the output plugin configuration section using a Managed Identity:
output {
microsoft-sentinel-logstash-output {
managed_identity => true
data_collection_endpoint => "<enter your DCE logsIngestion URI here>"
dcr_immutable_id => "<enter your DCR immutableId here>"
dcr_stream_name => "<enter your stream name here>"
}
}
Security notice: For security reasons, we recommend not stating client_app_Id, client_app_secret, tenant_id, data_collection_endpoint, and dcr_immutable_id explicitly in your Logstash configuration. It is best to store this sensitive information in a Logstash keystore, as described in the Logstash 'Secrets keystore' documentation.
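As a sketch, secrets can be added with the logstash-keystore tool (the key name CLIENT_APP_SECRET below is an example, not a required name):

```shell
# Create the keystore once (it is stored alongside logstash.yml)
sudo /usr/share/logstash/bin/logstash-keystore create

# Add a secret; the tool prompts for its value
sudo /usr/share/logstash/bin/logstash-keystore add CLIENT_APP_SECRET
```

In the pipeline configuration, client_app_secret => "${CLIENT_APP_SECRET}" then resolves from the keystore when Logstash starts.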
Here are two example configurations that send incoming data to a custom stream named "Custom-MyTableRawData": the first uses the beats input, and the second parses incoming Syslog data from a tcp input.
input {
beats {
port => "5044"
}
}
filter {
}
output {
microsoft-sentinel-logstash-output {
client_app_Id => "619c1731-15ca-4403-9c61-xxxxxxxxxxxx"
client_app_secret => "xxxxxxxxxxxxxxxx"
tenant_id => "72f988bf-86f1-41af-91ab-xxxxxxxxxxxx"
data_collection_endpoint => "https://my-customlogsv2-test-jz2a.eastus2-1.ingest.monitor.azure.com"
dcr_immutable_id => "dcr-xxxxxxxxxxxxxxxxac23b8978251433a"
dcr_stream_name => "Custom-MyTableRawData"
proxy_aad => "http://proxy.example.com"
}
}
input {
tcp {
port => "514"
type => syslog # optional, will affect the log type in the table
}
}
filter {
}
output {
microsoft-sentinel-logstash-output {
client_app_Id => "619c1731-15ca-4403-9c61-xxxxxxxxxxxx"
client_app_secret => "xxxxxxxxxxxxxxxx"
tenant_id => "72f988bf-86f1-41af-91ab-xxxxxxxxxxxx"
data_collection_endpoint => "https://my-customlogsv2-test-jz2a.eastus2-1.ingest.monitor.azure.com"
dcr_immutable_id => "dcr-xxxxxxxxxxxxxxxxac23b8978251433a"
dcr_stream_name => "Custom-MyTableRawData"
}
}
Advanced Configuration
The following example reads Syslog events, references credentials stored in the Logstash keystore or environment (the ${...} variables), and sends only the fields listed in key_names:
input {
syslog {
port => 514
}
}
output {
microsoft-sentinel-logstash-output {
client_app_Id => "${CLIENT_APP_ID}"
client_app_secret => "${CLIENT_APP_SECRET}"
tenant_id => "${TENANT_ID}"
data_collection_endpoint => "${DATA_COLLECTION_ENDPOINT}"
dcr_immutable_id => "${DCR_IMMUTABLE_ID}"
dcr_stream_name => "Custom-MyTableRawData"
key_names => ['PRI','TIME_TAG','HOSTNAME','MSG']
}
}
You can now run Logstash with the example configuration and send mock data using the logger command. For example:
logger -p local4.warn --rfc3164 --tcp -t CEF "0|Microsoft|Device|cef-test|example|data|1|here is some more data for the example" -P 514 -d -n 127.0.0.1
This produces the following content in the sample file:
[
{
"logsource": "logstashMachine",
"facility": 20,
"severity_label": "Warning",
"severity": 4,
"timestamp": "Apr 7 08:26:04",
"program": "CEF:",
"host": "127.0.0.1",
"facility_label": "local4",
"priority": 164,
"message": "0|Microsoft|Device|cef-test|example|data|1|here is some more data for the example",
"ls_timestamp": "2022-04-07T08:26:04.000Z",
"ls_version": "1"
}
]