
github.com/frzifus/opentelemetry-collector-contrib/receiver/hostmetricsreceiver
| Status | |
|---|---|
| Stability | beta | 
| Supported pipeline types | metrics | 
| Distributions | core, contrib | 
The Host Metrics receiver generates metrics about the host system scraped from various sources. This is intended to be used when the collector is deployed as an agent.
The collection interval and the categories of metrics to be scraped can be configured:
hostmetrics:
  collection_interval: <duration> # default = 1m
  scrapers:
    <scraper1>:
    <scraper2>:
    ...
The available scrapers are:
| Scraper | Supported OSs | Description | 
|---|---|---|
| cpu | All except Mac[1] | CPU utilization metrics | 
| disk | All except Mac[1] | Disk I/O metrics | 
| load | All | CPU load metrics | 
| filesystem | All | File System utilization metrics | 
| memory | All | Memory utilization metrics | 
| network | All | Network interface I/O metrics & TCP connection metrics | 
| paging | All | Paging/Swap space utilization and I/O metrics | 
| processes | Linux | Process count metrics | 
| process | Linux & Windows | Per process CPU, Memory, and Disk I/O metrics | 
[1] Not supported on Mac when compiled without cgo, which is the default.
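For example, a collector agent on a Linux host could enable a typical set of these scrapers (a minimal sketch; the scraper names come from the table above, and scrapers that need no extra settings are simply listed with empty values):

receivers:
  hostmetrics:
    collection_interval: 30s
    scrapers:
      cpu:        # CPU utilization
      load:       # CPU load averages
      memory:     # memory utilization
      network:    # network interface I/O and TCP connections
      paging:     # paging/swap utilization and I/O
      processes:  # process count metrics (Linux only)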
Several scrapers support additional configuration:
disk:
  <include|exclude>:
    devices: [ <device name>, ... ]
    match_type: <strict|regexp>
filesystem:
  <include_devices|exclude_devices>:
    devices: [ <device name>, ... ]
    match_type: <strict|regexp>
  <include_fs_types|exclude_fs_types>:
    fs_types: [ <filesystem type>, ... ]
    match_type: <strict|regexp>
  <include_mount_points|exclude_mount_points>:
    mount_points: [ <mount point>, ... ]
    match_type: <strict|regexp>
load:
  cpu_average: <false|true> # whether to divide the average load by the reported number of logical CPUs (default: false)
network:
  <include|exclude>:
    interfaces: [ <interface name>, ... ]
    match_type: <strict|regexp>
process:
  <include|exclude>:
    names: [ <process name>, ... ]
    match_type: <strict|regexp>
  mute_process_name_error: <true|false>
  scrape_process_delay: <time>
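As a concrete sketch of the options above (the device, filesystem type, interface, and process names below are purely illustrative, not recommendations):

receivers:
  hostmetrics:
    scrapers:
      disk:
        exclude:
          devices: [ "loop0" ]              # skip a pseudo block device
          match_type: strict
      filesystem:
        exclude_fs_types:
          fs_types: [ "tmpfs", "squashfs" ] # ignore in-memory/read-only mounts
          match_type: strict
      network:
        exclude:
          interfaces: [ "lo" ]              # skip the loopback interface
          match_type: strict
      process:
        include:
          names: [ "otelcol.*" ]            # only scrape processes matching this pattern
          match_type: regexp
        mute_process_name_error: true       # don't log errors for unreadable process names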
If you are only interested in a subset of metrics from a particular source, it is recommended you use this receiver with the Filter Processor.
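For example, a sketch that keeps only a couple of metrics, assuming the Filter Processor's include/exclude configuration with metric_names and a logging exporter (adjust to the processor and exporter versions you actually run):

processors:
  filter/hostmetrics:
    metrics:
      include:
        match_type: strict
        metric_names:
          - system.cpu.time
          - system.memory.usage

exporters:
  logging:

service:
  pipelines:
    metrics:
      receivers: [hostmetrics]
      processors: [filter/hostmetrics]
      exporters: [logging]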
If you would like to scrape some metrics at a different frequency than others,
you can configure multiple hostmetrics receivers with different
collection_interval values. For example:
receivers:
  hostmetrics:
    collection_interval: 30s
    scrapers:
      cpu:
      memory:
  hostmetrics/disk:
    collection_interval: 1m
    scrapers:
      disk:
      filesystem:
service:
  pipelines:
    metrics:
      receivers: [hostmetrics, hostmetrics/disk]
Some host metrics are transitioning from being reported with a direction attribute to being reported with the direction included in the metric name, in order to adhere to the OpenTelemetry specification
(https://github.com/open-telemetry/opentelemetry-specification/pull/2617):
disk scraper metrics:
- system.disk.io will become:
  - system.disk.io.read
  - system.disk.io.write
- system.disk.operations will become:
  - system.disk.operations.read
  - system.disk.operations.write
- system.disk.operation_time will become:
  - system.disk.operation_time.read
  - system.disk.operation_time.write
- system.disk.merged will become:
  - system.disk.merged.read
  - system.disk.merged.write

network scraper metrics:
- system.network.dropped will become:
  - system.network.dropped.receive
  - system.network.dropped.transmit
- system.network.errors will become:
  - system.network.errors.receive
  - system.network.errors.transmit
- system.network.io will become:
  - system.network.io.receive
  - system.network.io.transmit
- system.network.packets will become:
  - system.network.packets.receive
  - system.network.packets.transmit

paging scraper metrics:
- system.paging.operations will become:
  - system.paging.operations.page_in
  - system.paging.operations.page_out

process scraper metrics:
- process.disk.io will become:
  - process.disk.io.read
  - process.disk.io.write

The following feature gates control the transition process:
- receiver.hostmetricsreceiver.emitMetricsWithDirectionAttribute controls whether the deprecated metrics with the direction attribute are emitted by the receiver.
- receiver.hostmetricsreceiver.emitMetricsWithoutDirectionAttribute controls whether the new metrics without the direction attribute are emitted by the receiver.

The transition happens in stages:
1. The disk scraper can emit the new metrics without the direction attribute if the feature gates are enabled; receiver.hostmetricsreceiver.emitMetricsWithDirectionAttribute is enabled by default and receiver.hostmetricsreceiver.emitMetricsWithoutDirectionAttribute is disabled by default.
2. The metrics with the direction attribute are deprecated with a warning; receiver.hostmetricsreceiver.emitMetricsWithDirectionAttribute remains enabled by default and receiver.hostmetricsreceiver.emitMetricsWithoutDirectionAttribute remains disabled by default.
3. receiver.hostmetricsreceiver.emitMetricsWithDirectionAttribute becomes disabled by default and receiver.hostmetricsreceiver.emitMetricsWithoutDirectionAttribute becomes enabled by default.
4. The new metrics without the direction attribute are always emitted, and the metrics with the direction attribute are no longer available.

To enable the new metrics without the direction attribute and disable the deprecated metrics, run the OTel Collector with the
following arguments:
otelcol --feature-gates=-receiver.hostmetricsreceiver.emitMetricsWithDirectionAttribute,+receiver.hostmetricsreceiver.emitMetricsWithoutDirectionAttribute
It's also possible to emit both the deprecated and the new metrics:
otelcol --feature-gates=+receiver.hostmetricsreceiver.emitMetricsWithDirectionAttribute,+receiver.hostmetricsreceiver.emitMetricsWithoutDirectionAttribute
If both feature gates are enabled, individual metrics can be disabled in the receiver settings, for example:
receivers:
  hostmetrics:
    scrapers:
      paging:
        metrics:
          system.paging.operations:
            enabled: false