File Log Receiver

Status

Stability: beta: logs
Distributions: contrib, k8s
Issues: open issues | closed issues
Code coverage: codecov
Code owners: @andrzej-stencel | Seeking more code owners!
Emeritus: @djaglowski

Tails and parses logs from files.

Configuration

| Field | Default | Description |
| --- | --- | --- |
| include | required | A list of file glob patterns that match the file paths to be read. |
| exclude | [] | A list of file glob patterns to exclude from reading. This is applied against the paths matched by include. |
| exclude_older_than | | Exclude files whose modification time is older than the specified age. |
| start_at | end | At startup, where to start reading logs from the file. Options are beginning or end. |
| multiline | | A multiline configuration block. See below for more details. |
| force_flush_period | 500ms | Time since new data was last found in the file, after which a partial log at the end of the file may be emitted. |
| encoding | utf-8 | The encoding of the file being read. See the list of supported encodings below for available options. |
| preserve_leading_whitespaces | false | Whether to preserve leading whitespace. |
| preserve_trailing_whitespaces | false | Whether to preserve trailing whitespace. |
| include_file_name | true | Whether to add the file name as the attribute log.file.name. |
| include_file_path | false | Whether to add the file path as the attribute log.file.path. |
| include_file_name_resolved | false | Whether to add the file name after symlink resolution as the attribute log.file.name_resolved. |
| include_file_path_resolved | false | Whether to add the file path after symlink resolution as the attribute log.file.path_resolved. |
| include_file_owner_name | false | Whether to add the file owner name as the attribute log.file.owner.name. Not supported on Windows. |
| include_file_owner_group_name | false | Whether to add the file group name as the attribute log.file.owner.group.name. Not supported on Windows. |
| include_file_record_number | false | Whether to add the record number in the file as the attribute log.file.record_number. |
| include_file_record_offset | false | Whether to add the record offset in the file as the attribute log.file.record_offset. |
| poll_interval | 200ms | The duration between filesystem polls. |
| fingerprint_size | 1kb | The number of bytes with which to identify a file. The first bytes in the file are used as the fingerprint. Decreasing this value at any point will cause existing fingerprints to be forgotten, meaning that all files will be read from the beginning (one time). |
| initial_buffer_size | 16KiB | The initial size of the read buffer for headers and logs; the buffer is grown as necessary. Larger values may lead to unnecessarily large buffer allocations, while smaller values may lead to many copies while growing the buffer. |
| max_log_size | 1MiB | The maximum size of a log entry to read. A log entry will be truncated if it is larger than max_log_size. Protects against reading large amounts of data into memory. |
| max_concurrent_files | 1024 | The maximum number of log files from which logs will be read concurrently. If the number of files matched by the include pattern exceeds this number, files will be processed in batches. |
| max_batches | 0 | Only applicable when files must be batched in order to respect max_concurrent_files. This value limits the number of batches that will be processed during a single poll interval. A value of 0 indicates no limit. |
| delete_after_read | false | If true, each log file will be read and then immediately deleted. Requires that the filelog.allowFileDeletion feature gate is enabled. Must be false when start_at is set to end. |
| acquire_fs_lock | false | Whether to attempt to acquire a filesystem lock before reading a file (Unix only). |
| attributes | {} | A map of key: value pairs to add to the entry's attributes. |
| resource | {} | A map of key: value pairs to add to the entry's resource. |
| operators | [] | An array of operators. See below for more details. |
| storage | none | The ID of a storage extension to be used to store file offsets. File offsets allow the receiver to pick up where it left off in the case of a collector restart. If no storage extension is used, the receiver will manage offsets in memory only. |
| header | nil | Specifies options for parsing header metadata. Requires that the filelog.allowHeaderMetadataParsing feature gate is enabled. See below for details. Must not be set when start_at is set to end. |
| header.pattern | required for header metadata parsing | A regex that matches every header line. |
| header.metadata_operators | required for header metadata parsing | A list of operators used to parse metadata from the header. |
| retry_on_failure.enabled | false | If true, the receiver will pause reading a file and attempt to resend the current batch of logs if it encounters an error from downstream components. |
| retry_on_failure.initial_interval | 1s | Time to wait after the first failure before retrying. |
| retry_on_failure.max_interval | 30s | Upper bound on the retry backoff interval. Once this value is reached, the delay between consecutive retries remains constant at the specified value. |
| retry_on_failure.max_elapsed_time | 5m | Maximum amount of time (including retries) spent trying to send a logs batch to a downstream consumer. Once this value is reached, the data is discarded. If set to 0, retrying never stops. |
| ordering_criteria.regex | | Regular expression used for sorting; should contain the named capture groups that are used in sort_by.regex_key. |
| ordering_criteria.group_by | | Regular expression used for grouping, which is applied before sorting. Should contain named capture groups. |
| ordering_criteria.top_n | 1 | The number of files to track when using file ordering. The top N files are tracked after applying the ordering criteria. |
| ordering_criteria.sort_by.regex_key | | The named capture group, defined in ordering_criteria.regex, to use for sorting. |
| ordering_criteria.sort_by.sort_type | | Type of sorting to be performed (e.g., numeric, alphabetical, timestamp, mtime). |
| ordering_criteria.sort_by.location | | Relevant if sort_type is set to timestamp. Defines the location of the timestamp of the file. |
| ordering_criteria.sort_by.format | | Relevant if sort_type is set to timestamp. Defines the strptime format of the timestamp being sorted. |
| ordering_criteria.sort_by.ascending | | Sort direction. |
| compression | | The compression format of input files. If set, files will be read using a reader that decompresses the file before scanning its content. Options are empty (no compression), gzip, or auto. auto detects the file compression type; currently, gzip files are the only compressed files auto-detected, based on the .gz filename extension. The auto option is useful when ingesting a mix of compressed and uncompressed files with the same filelog receiver. |
| polls_to_archive | 0 | This setting controls the number of poll cycles whose offsets are stored on disk rather than being discarded. By default, the receiver purges the record of readers that have existed for 3 generations. See the Archiving section for more details. Note: this feature is experimental. |

Note that by default, no logs will be read from a file that is not actively being written to because start_at defaults to end.
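For illustration, here is a sketch of a configuration that overrides several of these defaults; the paths, regex, and capture group name are hypothetical:

receivers:
  filelog:
    include: [ /var/log/myservice/*.log ]   # hypothetical paths
    exclude: [ /var/log/myservice/*.tmp ]
    start_at: beginning                     # also read content already present in the files
    poll_interval: 200ms
    ordering_criteria:                      # track only the newest file per naming scheme
      regex: '.*-(?P<ts>\d{8})\.log'
      top_n: 1
      sort_by:
        - regex_key: ts
          sort_type: numeric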

Operators

Each operator performs a simple responsibility, such as parsing a timestamp or JSON. Chain together operators to process logs into a desired format.

  • Every operator has a type.
  • Every operator can be given a unique id. If you use the same type of operator more than once in a pipeline, you must specify an id. Otherwise, the id defaults to the value of type (see the example below this list).
  • Operators will output to the next operator in the pipeline. The last operator in the pipeline will emit from the receiver. Optionally, the output parameter can be used to specify the id of another operator to which logs will be passed directly.
  • Only parsers and general purpose operators should be used.
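For example, here is a sketch of a pipeline that uses the json_parser operator twice, so each instance needs a unique id; the path and parsed field names are illustrative:

receivers:
  filelog:
    include: [ /var/log/example/nested.json ]   # hypothetical path
    operators:
      - type: json_parser
        id: parse_body          # explicit ids are required because the type appears twice
        output: parse_message   # optionally route output directly to another operator's id
      - type: json_parser
        id: parse_message
        parse_from: attributes.message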

Multiline configuration

If set, the multiline configuration block instructs the file_input operator to split log entries on a pattern other than newlines.

The multiline configuration block must contain exactly one of line_start_pattern or line_end_pattern. These are regex patterns that match either the beginning of a new log entry, or the end of a log entry.

The omit_pattern setting can be used to omit the start/end pattern from each entry.
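For instance, here is a sketch of a multiline block that starts a new entry at each ISO-style date and drops the matched date from each entry; the path and pattern are illustrative:

receivers:
  filelog:
    include: [ /var/log/example/app.log ]    # hypothetical path
    multiline:
      line_start_pattern: '^\d{4}-\d{2}-\d{2}'
      omit_pattern: true                     # remove the matched start pattern from each entry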

Supported encodings

| Key | Description |
| --- | --- |
| nop | No encoding validation. Treats the file as a stream of raw bytes. |
| utf-8 | UTF-8 encoding. |
| utf-8-raw | UTF-8 encoding without replacing invalid UTF-8 bytes. |
| utf-16le | UTF-16 encoding with little-endian byte order. |
| utf-16be | UTF-16 encoding with big-endian byte order. |
| ascii | ASCII encoding. |
| big5 | The Big5 Chinese character encoding. |

Other less common encodings are supported on a best-effort basis. See https://www.iana.org/assignments/character-sets/character-sets.xhtml for other available encodings.
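For example, to read files written as UTF-16 with little-endian byte order (the path is hypothetical):

receivers:
  filelog:
    include: [ /var/log/legacy/*.log ]
    encoding: utf-16le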

Header Metadata Parsing

To enable header metadata parsing, the filelog.allowHeaderMetadataParsing feature gate must be set, and start_at must be beginning.

If set, the file input operator will attempt to read a header from the start of the file. Each header line must match the header.pattern pattern. Each line is emitted into a pipeline defined by header.metadata_operators. Any attributes on the resultant entry from the embedded pipeline will be merged with the attributes from previous lines (attribute collisions will be resolved with an upsert strategy). After all header lines are read, the final merged header attributes will be present on every log line that is emitted for the file.

The header lines are not emitted by the receiver.
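Here is a sketch of a header-parsing configuration, assuming the filelog.allowHeaderMetadataParsing feature gate is enabled; the path, header pattern, and capture group name are illustrative:

receivers:
  filelog:
    include: [ /var/log/example/with_header.log ]   # hypothetical path
    start_at: beginning                             # required for header parsing
    header:
      pattern: '^#'                                 # every header line begins with '#'
      metadata_operators:
        - type: regex_parser
          regex: '^#\s*env=(?P<env>\S+)'            # e.g. parse an environment name into an attribute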

Additional Terminology and Features

  • An entry is the base representation of log data as it moves through a pipeline. All operators either create, modify, or consume entries.
  • A field is used to reference values in an entry.
  • A common expression syntax is used in several operators. For example, expressions can be used to filter or route entries.

Parsers with Embedded Operations

Many parser operators can be configured to embed certain follow-up operations, such as timestamp and severity parsing. For more information, see complex parsers.

Time parameters

All time parameters must have the unit of time specified, e.g. 200ms, 1s, 1m.

Log Rotation

The File Log Receiver can read files that are being rotated.

Example - Tailing a simple json file

Receiver Configuration

receivers:
  filelog:
    include: [ /var/log/myservice/*.json ]
    operators:
      - type: json_parser
        timestamp:
          parse_from: attributes.time
          layout: '%Y-%m-%d %H:%M:%S'

Example - Tailing a plaintext file

Receiver Configuration

receivers:
  filelog:
    include: [ /simple.log ]
    operators:
      - type: regex_parser
        regex: '^(?P<time>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}) (?P<sev>[A-Z]*) (?P<msg>.*)$'
        timestamp:
          parse_from: attributes.time
          layout: '%Y-%m-%d %H:%M:%S'
        severity:
          parse_from: attributes.sev

The above configuration will read logs from the "simple.log" file. Some examples of logs that it will read:

2023-06-19 05:20:50 ERROR This is a test error message
2023-06-20 12:50:00 DEBUG This is a test debug message

Example - Multiline logs parsing

Receiver Configuration

receivers:
  filelog:
    include:
    - /var/log/example/multiline.log
    multiline:
      line_start_pattern: ^Exception

The above configuration will be able to parse multiline logs, starting a new log entry every time the ^Exception pattern is matched:

Exception in thread 1 "main" java.lang.NullPointerException
        at com.example.myproject.Book.getTitle(Book.java:16)
        at com.example.myproject.Author.getBookTitles(Author.java:25)
        at com.example.myproject.Bootstrap.main(Bootstrap.java:14)
Exception in thread 2 "main" java.lang.NullPointerException
        at com.example.myproject.Book.getTitle(Book.java:16)
        at com.example.myproject.Author.getBookTitles(Author.java:25)
        at com.example.myproject.Bootstrap.main(Bootstrap.java:44)

Example - Reading compressed log files

Receiver Configuration

receivers:
  filelog:
    include:
    - /var/log/example/compressed.log.gz
    compression: gzip

The above configuration will be able to read gzip-compressed log files by setting the compression option to gzip. When this option is set, all files ending with that suffix are scanned using a gzip reader that decompresses the file content before scanning through it. Note that if a compressed file is expected to be updated, additional compressed logs must be appended to the compressed file, rather than recompressing the whole content and overwriting the previous file.

Offset tracking

The storage setting allows you to define a storage extension for persisting file offsets. While the storage parameter can ensure that log files are consumed accurately, it is possible that logs are dropped while moving downstream through other components in the collector. For additional resiliency, see the fault tolerant log collection example.

Here is some of the information the file log receiver stores:

  • The number of files it is currently tracking (knownFiles).
  • For each file being tracked:
    • The fingerprint of the file (Fingerprint.first_bytes).
    • The byte offset from the start of the file, indicating the position in the file from where the file log receiver continues reading the file (Offset).
    • An arbitrary set of file attributes, such as the name of the file (FileAttributes).

Exactly how this information is serialized depends on the type of storage being used.
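As a sketch, the receiver can be paired with the file_storage extension from the contrib distribution so that offsets survive restarts; the directory is hypothetical:

extensions:
  file_storage:
    directory: /var/lib/otelcol/file_storage

receivers:
  filelog:
    include: [ /var/log/myservice/*.log ]
    storage: file_storage      # reference the extension by its ID

service:
  extensions: [ file_storage ]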

Archiving

If the polls_to_archive setting is used in conjunction with the storage setting, file offsets older than three poll cycles are stored on disk rather than being discarded. This enables the receiver to remember files for a longer period while keeping memory usage bounded.

This is useful when the exclude_older_than setting is in use and the user wants the receiver to remember file offsets for a longer period of time. It helps prevent duplication if a file is modified after the exclude_older_than duration has passed.

Note that if the polls_to_archive setting is used without specifying storage, the receiver reverts to the default behavior, i.e. purging the record of readers that have existed for 3 generations.
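For example, here is a sketch that archives offsets for 100 extra poll cycles while excluding files untouched for a day; the path and values are illustrative:

receivers:
  filelog:
    include: [ /var/log/example/*.log ]
    exclude_older_than: 24h
    polls_to_archive: 100
    storage: file_storage      # archiving requires a storage extension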

Troubleshooting

Tracking symlinked files

If the receiver is being used to track a symlinked file and the symlink target is expected to change frequently, make sure to set the value of the poll_interval setting to something lower than the symlink update frequency.

Telemetry metrics

Enabling Collector metrics will also provide telemetry metrics for the state of the receiver's file consumption. Specifically, the otelcol_fileconsumer_open_files and otelcol_fileconsumer_reading_files metrics are provided.
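These metrics come from the Collector's internal telemetry, which is configured under the service section. A minimal sketch (the level required to expose these metrics may vary by Collector version):

service:
  telemetry:
    metrics:
      level: detailed   # enable detailed internal metrics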

Feature Gates

filelog.decompressFingerprint

When this feature gate is enabled, the fingerprint of a compressed file is computed by first decompressing its data. Note that compression must be set to a non-empty value for this to take effect.

This can cause existing gzip files to be re-ingested because of changes in how fingerprints are computed.

The schedule for this feature gate is:

  • Introduce as Alpha (disabled by default) in v0.128.0
  • Move to Beta (enabled by default) in v0.133.0
