Drain3 is an online log template miner that can extract templates (clusters) from a stream of log messages in a timely manner. It employs a parse tree with fixed depth to guide the log group search process, which effectively avoids constructing a very deep and unbalanced tree.
Drain3 continuously learns on-the-fly and extracts log templates from raw log entries.
For the input:
connected to 10.0.0.1
connected to 192.168.0.1
Hex number 0xDEADBEAF
user davidoh logged in
user eranr logged in
Drain3 extracts the following templates:
ID=1 : size=2 : connected to <:IP:>
ID=2 : size=1 : Hex number <:HEX:>
ID=3 : size=2 : user <:*:> logged in
Full sample program output:
Starting Drain3 template miner
Checking for saved state
Saved state not found
Drain3 started with 'FILE' persistence
Starting training mode. Reading from std-in ('q' to finish)
> connected to 10.0.0.1
Saving state of 1 clusters with 1 messages, 528 bytes, reason: cluster_created (1)
{"change_type": "cluster_created", "cluster_id": 1, "cluster_size": 1, "template_mined": "connected to <:IP:>", "cluster_count": 1}
Parameters: [ExtractedParameter(value='10.0.0.1', mask_name='IP')]
> connected to 192.168.0.1
{"change_type": "none", "cluster_id": 1, "cluster_size": 2, "template_mined": "connected to <:IP:>", "cluster_count": 1}
Parameters: [ExtractedParameter(value='192.168.0.1', mask_name='IP')]
> Hex number 0xDEADBEAF
Saving state of 2 clusters with 3 messages, 584 bytes, reason: cluster_created (2)
{"change_type": "cluster_created", "cluster_id": 2, "cluster_size": 1, "template_mined": "Hex number <:HEX:>", "cluster_count": 2}
Parameters: [ExtractedParameter(value='0xDEADBEAF', mask_name='HEX')]
> user davidoh logged in
Saving state of 3 clusters with 4 messages, 648 bytes, reason: cluster_created (3)
{"change_type": "cluster_created", "cluster_id": 3, "cluster_size": 1, "template_mined": "user davidoh logged in", "cluster_count": 3}
Parameters: []
> user eranr logged in
Saving state of 3 clusters with 5 messages, 644 bytes, reason: cluster_template_changed (3)
{"change_type": "cluster_template_changed", "cluster_id": 3, "cluster_size": 2, "template_mined": "user <:*:> logged in", "cluster_count": 3}
Parameters: [ExtractedParameter(value='eranr', mask_name='*')]
> q
Training done. Mined clusters:
ID=1 : size=2 : connected to <:IP:>
ID=2 : size=1 : Hex number <:HEX:>
ID=3 : size=2 : user <:*:> logged in
This project is an upgrade of the original Drain project by LogPAI from Python 2.7 to Python 3.6 or later with additional features and bug-fixes.
Read more about the Drain algorithm in the paper: Pinjia He, Jieming Zhu, Zibin Zheng, and Michael R. Lyu. "Drain: An Online Log Parsing Approach with Fixed Depth Tree", IEEE International Conference on Web Services (ICWS), 2017.
A Drain3 use case is presented in this blog post: Use open source Drain3 log-template mining project to monitor for network outages.
Drain3 can be configured via an .ini file or a configuration object. Although Drain3 can be fed full raw log messages, template mining accuracy can be improved if you feed it only the unstructured free-text portion of log messages, by first removing structured parts like timestamp, hostname, severity, etc.
The output is a dictionary with the following fields:

change_type - indicates whether a new template was identified, an existing template was changed, or the message was added to an existing cluster.
cluster_id - sequential ID of the cluster that the log belongs to.
cluster_size - the size (message count) of the cluster that the log belongs to.
cluster_count - count of clusters seen so far.
template_mined - the latest template of the above cluster_id.

Drain3 is configured using configparser. By default, the config filename is drain3.ini in the working directory. It can also be configured by passing a TemplateMinerConfig object to the TemplateMiner constructor.
Primary configuration parameters:

[DRAIN]/sim_th - similarity threshold. If the percentage of similar tokens for a log message is below this number, a new log cluster will be created (default 0.4).
[DRAIN]/depth - max depth levels of log clusters. Minimum is 2 (default 4).
[DRAIN]/max_children - max number of children of an internal node (default 100).
[DRAIN]/max_clusters - max number of tracked clusters (unlimited by default). When this number is reached, the model starts replacing old clusters with new ones according to the LRU cache eviction policy.
[DRAIN]/extra_delimiters - delimiters to apply when splitting a log message into words, in addition to whitespace (default none). Format is a Python list, e.g. ['_', ':'].
[MASKING]/masking - parameter masking, in JSON format (default "").
[MASKING]/mask_prefix & [MASKING]/mask_suffix - the wrapping of identified parameters in templates. By default, < and > respectively.
[SNAPSHOT]/snapshot_interval_minutes - time interval for new snapshots (default 1).
[SNAPSHOT]/compress_state - whether to compress the state before saving it. This can be useful when using Kafka persistence.

This feature allows masking of specific variable parts in log messages with keywords, prior to passing them to Drain. Well-defined masking can improve template mining accuracy.
Template parameters that do not match any custom mask in the preliminary masking phase are replaced with <*> by the Drain core.
To set custom masking, use a list of regular expressions in the configuration file with the format {'regex_pattern', 'mask_with'}.
For example, the following masking instructions in drain3.ini will mask IP addresses and integers:
[MASKING]
masking = [
{"regex_pattern":"((?<=[^A-Za-z0-9])|^)(\\d{1,3}\\.\\d{1,3}\\.\\d{1,3}\\.\\d{1,3})((?=[^A-Za-z0-9])|$)", "mask_with": "IP"},
{"regex_pattern":"((?<=[^A-Za-z0-9])|^)([\\-\\+]?\\d+)((?=[^A-Za-z0-9])|$)", "mask_with": "NUM"}
]
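To see what the first masking instruction above matches, here is a quick check of that IP regex with Python's re module (the literal <IP> replacement stands in for Drain3's mask_prefix/mask_suffix wrapping):

```python
import re

# The IP-masking pattern from the drain3.ini example above.
ip_pattern = re.compile(
    r"((?<=[^A-Za-z0-9])|^)"                  # boundary: start or non-alphanumeric
    r"(\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3})"   # the IPv4 address itself
    r"((?=[^A-Za-z0-9])|$)")                  # boundary: end or non-alphanumeric

masked = ip_pattern.sub("<IP>", "connected to 10.0.0.1")
print(masked)  # connected to <IP>
```

The lookbehind/lookahead groups are zero-width, so only the address itself is consumed and replaced.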
The persistence feature saves and loads a snapshot of Drain3 state in a (compressed) JSON format. It adds restart resiliency to Drain3, allowing it to continue operating and maintain learned knowledge across restarts.
Drain3 state includes the search tree and all the clusters that were identified up until snapshot time.
The snapshot also persists the number of log messages matched by each cluster, and its cluster_id.
An example of a snapshot:
{
"clusters": [
{
"cluster_id": 1,
"log_template_tokens": [
"aa",
"aa",
"<*>"
],
"py/object": "drain3_core.LogCluster",
"size": 2
},
{
"cluster_id": 2,
"log_template_tokens": [
"My",
"IP",
"is",
"<IP>"
],
"py/object": "drain3_core.LogCluster",
"size": 1
}
]
}
This example snapshot persists two clusters with the templates:
["aa", "aa", "<*>"]
- occurs twice
["My", "IP", "is", "<IP>"]
- occurs once
Snapshots are created on the following events:

cluster_created - on any new template.
cluster_template_changed - on any update of a template.
periodic - after n minutes from the last snapshot. This is intended to save cluster sizes even if no new template was identified.

Drain3 currently supports the following persistence modes:
Kafka - The snapshot is saved in a dedicated topic used only for snapshots; the last message in this topic is the last snapshot that will be loaded after restart. For Kafka persistence, you need to provide topic_name. You may also provide other kwargs that are supported by kafka.KafkaConsumer and kafka.Producer, e.g. bootstrap_servers to change the Kafka endpoint (default is localhost:9092).
Redis - The snapshot is saved to a key in Redis database (contributed by @matabares).
File - The snapshot is saved to a file.
Memory - The snapshot is saved to an in-memory object.
None - No persistence.
Drain3 persistence modes can be easily extended to another medium / database by inheriting the PersistenceHandler class.
In some use cases, it is required to separate the training and inference phases.
In the training phase you should call template_miner.add_log_message(log_line). This will match the log line against an existing cluster (if similarity is above the threshold) or create a new cluster. It may also change the template of an existing cluster.
In inference mode you should call template_miner.match(log_line). This will match the log line against previously learned clusters only. No new clusters are created and templates of existing clusters are not changed. The match to an existing cluster has to be perfect, otherwise None is returned. You can use the persistence option to load previously trained clusters before inference.
This feature limits the max memory used by the model. It is particularly important for large and possibly unbounded log streams. It is controlled by the max_clusters parameter, which sets the max number of clusters/templates tracked by the model. When the limit is reached, new templates start to replace the old ones according to the Least Recently Used (LRU) eviction policy. This makes the model adapt quickly to the most recent templates in the log stream.
Drain3 supports retrieving an ordered list of variables in a log message, after its template was mined. Each parameter
is accompanied by the name of the mask that was matched, or *
for the catch-all mask.
Parameter extraction is performed by generating a regular expression that matches the template and then applying it to the log message. When exact_matching is enabled (the default), the generated regex includes the regular expressions defined in the relevant masking instructions. If there are multiple masking instructions with the same name, either match can satisfy the regex. It is possible to disable exact matching so that every variable is matched against a non-whitespace character sequence. This may improve performance at the expense of accuracy.
Parameter extraction regexes generated per template are cached by default, to improve performance. You can control the cache size with the MASKING/parameter_extraction_cache_capacity configuration parameter.
Sample usage:
result = template_miner.add_log_message(log_line)
params = template_miner.extract_parameters(
result["template_mined"], log_line, exact_matching=True)
For the input "user johndoe logged in 11 minutes ago", the template would be:
"user <:*:> logged in <:NUM:> minutes ago"
... and the extracted parameters:
[
ExtractedParameter(value='johndoe', mask_name='*'),
ExtractedParameter(value='11', mask_name='NUM')
]
Drain3 is available from PyPI. To install use pip
:
pip3 install drain3
Note: If you decide to use Kafka or Redis persistence, you should install relevant client library explicitly, since it is declared as an extra (optional) dependency, by either:
pip3 install kafka-python
-- or --
pip3 install redis
In order to run the examples directly from the repository, you need to install dependencies. You can do that using pipenv by executing the following command (assuming pipenv is already installed):
python3 -m pipenv sync
drain_stdin_demo
Run examples/drain_stdin_demo.py from the root folder of the repository by:
python3 -m pipenv run python -m examples.drain_stdin_demo
This example uses Drain3 on input from stdin and persists to either Kafka, a file, or no persistence.
Change persistence_type
variable in the example to change persistence mode.
Enter several log lines using the command line. Press q
to end online learn-and-match mode.
Next, the demo goes into match (inference) only mode, in which no new clusters are trained and input is matched against previously trained clusters only. Press q again to finish execution.
drain_bigfile_demo
Run examples/drain_bigfile_demo.py from the root folder of the repository by:
python3 -m pipenv run python -m examples.drain_bigfile_demo
This example downloads a real-world log file (of an SSH server), processes all its lines, then prints the resulting clusters, the prefix tree, and performance statistics.
An example drain3.ini
file with masking instructions can be found in the examples folder as well.
Our project welcomes external contributions. Please refer to CONTRIBUTING.md for further details.
Change Log:

- Added the extract_parameters() function; the function get_parameter_list() is deprecated (thanks @Impelon).
- Added AbstractMaskingInstruction as a base class for RegexMaskingInstruction, allowing other types of masking mechanisms to be introduced.
- Added the full_search_strategy option in TemplateMiner.match() and Drain.match(). See more info at Issue #48.
- Added the TemplateMinerConfig.parametrize_numeric_tokens option.
- Renamed the depth property to the more descriptive name max_node_depth, as Drain always subtracts 2 from the depth argument value. Also added a log_cluster_depth property to reflect the original value of the depth argument (breaking change).
- Restricted the depth param to the minimum sensible value of 3.
- Changes to Drain.print_tree().
- Fixed a bug when max_clusters is used (thanks @StanislawSwierc).
- Added the TemplateMiner.match() function for fast matching against existing clusters only.
- Added the TemplateMiner.get_parameter_list() function to extract template parameters from a raw log message (thanks to @cwyalpha).
- Instead of <*>, <NUM> etc., you can select any wrapper prefix or suffix by overriding TemplateMinerConfig.mask_prefix and TemplateMinerConfig.mask_suffix.
- Fixed: the .ini file is always read from the same folder as the source file in demos and tests (thanks @RobinMaas95).
- Fixed a bug in add_seq_to_prefix_tree #28 (bug introduced at v0.9.1).
- Keys of the id_to_cluster dict are now persisted by jsonpickle as int instead of str, to avoid key type conversion on snapshot load, which caused some issues.
- Added the ability to configure TemplateMiner using a configuration object (without an .ini file).
- Added an option to redirect print_tree() to a file/stream.
- Added MemoryBufferPersistence.
- Tree node keys are now str also for the 1st level (was int before), for type consistency.
- Added the max_clusters option to limit the number of tracked clusters.
- Added the extra_delimiters configuration option to Drain.
- KafkaPersistence now also accepts bootstrap_servers as kwargs.
- Using the kafka-python package instead of kafka (newer).
- Support kwargs in the Kafka persistence handler.