Getting Started • Migrating from Smart Agent • Migrating from Splunk Connect for Kubernetes • Configuration • Components • Monitoring • Security • Sizing • Troubleshooting
Splunk OpenTelemetry Collector for Kubernetes
The Splunk OpenTelemetry Collector for Kubernetes is a Helm chart for the Splunk Distribution
of OpenTelemetry Collector.
This chart creates a Kubernetes DaemonSet along with other Kubernetes objects
in a Kubernetes cluster and provides a unified way to receive, process, and
export metric, trace, and log data to Splunk Enterprise, Splunk Cloud Platform, and Splunk Observability Cloud.
Current Status
- The Splunk OpenTelemetry Collector for Kubernetes Helm chart is production tested; it is in use by a number of customers in their production environments.
- Customers using the Helm chart can receive direct help from official Splunk support within SLAs.
- Customers can use or migrate to the Splunk OpenTelemetry Collector for Kubernetes Helm chart without worrying about future breaking changes to its core configuration experience for metrics and traces collection (OpenTelemetry logs collection configuration is in beta). There may be breaking changes to the Collector's own metrics.
Installations that use this distribution can receive direct help from
Splunk's support teams. Customers are free to use the core OpenTelemetry OSS
components (several do!). We will provide best effort guidance for using these components;
however, only the Splunk distributions are in scope for official Splunk support and support-related SLAs.
This distribution currently supports sending data to:
- Splunk Enterprise 8.0 or later
- Splunk Cloud Platform
- Splunk Observability Cloud
The Fluentd logs engine is now deprecated and will reach End of Support in October 2025. Migrating to the native OpenTelemetry logs engine before this date is strongly recommended.
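If you are still on Fluentd, a minimal values.yaml sketch for switching to the native OpenTelemetry logs engine could look like the following; it assumes the chart's logsEngine value is what selects the engine, so confirm the key against your chart version's values.yaml:

  # values.yaml sketch (assumed key): select the native OpenTelemetry logs
  # engine instead of the deprecated Fluentd engine.
  logsEngine: otel

The updated values can then be rolled out with helm upgrade as described under "How to upgrade" below.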
Supported Kubernetes distributions
The Helm chart works with default configurations of the main Kubernetes distributions; use actively supported versions.
While this Helm chart should work for other Kubernetes distributions, it may
require additional configuration applied to values.yaml.
Getting Started
Prerequisites
The following prerequisites are required to use the helm chart:
- Helm 3
- Administrator access to your Kubernetes cluster and familiarity with your Kubernetes configuration. You must know where your log information is being collected in your Kubernetes deployment.
To send data to Splunk Enterprise or Splunk Cloud
- Splunk Enterprise 8.0 or later.
- A minimum of one Splunk platform index ready to collect the log data. This index will be used for ingesting logs.
- An HTTP Event Collector (HEC) token and endpoint. See the Splunk HEC documentation for more information.
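Once these prerequisites are in place, they map onto the chart's splunkPlatform values roughly as in this sketch (the endpoint, token, and index names are placeholders taken from the install examples below):

  # values.yaml sketch: Splunk Enterprise / Splunk Cloud destination.
  # Replace the placeholder endpoint, token, and index names with your own.
  splunkPlatform:
    endpoint: https://127.0.0.1:8088/services/collector   # HEC endpoint
    token: xxxxxx                                          # HEC token
    index: main                                            # index for logs
    metricsIndex: k8s-metrics                              # index for metrics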
To send data to Splunk Observability Cloud
- A Splunk Observability Cloud account, your realm, and an org access token. These map to the splunkObservability.realm and splunkObservability.accessToken parameters described under "How to install" below.
How to install
In order to install Splunk OpenTelemetry Collector in a Kubernetes cluster, at least one of the destinations (splunkPlatform or splunkObservability) has to be configured.
For Splunk Enterprise/Cloud the following parameters are required:
- splunkPlatform.endpoint: URL of your Splunk HTTP Event Collector (HEC) endpoint, for example https://127.0.0.1:8088/services/collector.
- splunkPlatform.token: your Splunk HEC token.
For Splunk Observability Cloud the following parameters are required:
- splunkObservability.realm: Splunk realm to send telemetry data to.
- splunkObservability.accessToken: Your Splunk Observability org access token.
The following parameter is required or optional depending on the Kubernetes distribution:
- clusterName: arbitrary value that identifies your Kubernetes cluster. The value will be associated with every trace, metric, and log as the "k8s.cluster.name" attribute.
  - Optional: if distribution is set to EKS, EKS/fargate, GKE, or GKE/autopilot, the cluster name is detected automatically; if clusterName is specified, it overrides the detected value (see the sketch below).
  - Required: for all other distributions.
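As a rough illustration of the clusterName rules, a values.yaml fragment might look like this (the distribution and cluster names are placeholders; consult the chart's values.yaml for the accepted distribution values):

  # values.yaml sketch: distribution and clusterName.
  # On EKS, EKS/Fargate, GKE, and GKE Autopilot the cluster name can be
  # detected, so clusterName is optional and only overrides the detected value:
  distribution: eks
  # clusterName: my-eks-cluster   # optional override of the detected name

  # On any other distribution, clusterName must be set explicitly, for example:
  # distribution: openshift
  # clusterName: my-cluster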
Run the following commands, replacing the parameters above with their appropriate values.
Add Helm repo
helm repo add splunk-otel-collector-chart https://signalfx.github.io/splunk-otel-collector-chart
Sending data to Splunk Observability Cloud
helm install my-splunk-otel-collector --set="splunkObservability.realm=us0,splunkObservability.accessToken=xxxxxx,clusterName=my-cluster" splunk-otel-collector-chart/splunk-otel-collector
Sending data to Splunk Enterprise or Splunk Cloud
helm install my-splunk-otel-collector --set="splunkPlatform.endpoint=https://127.0.0.1:8088/services/collector,splunkPlatform.token=xxxxxx,splunkPlatform.metricsIndex=k8s-metrics,splunkPlatform.index=main,clusterName=my-cluster" splunk-otel-collector-chart/splunk-otel-collector
Sending data to both Splunk Observability Cloud and Splunk Enterprise or Splunk Cloud
helm install my-splunk-otel-collector --set="splunkPlatform.endpoint=https://127.0.0.1:8088/services/collector,splunkPlatform.token=xxxxxx,splunkPlatform.metricsIndex=k8s-metrics,splunkPlatform.index=main,splunkObservability.realm=us0,splunkObservability.accessToken=xxxxxx,clusterName=my-cluster" splunk-otel-collector-chart/splunk-otel-collector
You can specify a namespace to deploy the chart to with the -n
argument. Here is an example showing how to deploy in the otel
namespace:
helm -n otel install my-splunk-otel-collector -f values.yaml splunk-otel-collector-chart/splunk-otel-collector
Instead of setting Helm values as arguments, a YAML file can be provided:
helm install my-splunk-otel-collector --values my_values.yaml splunk-otel-collector-chart/splunk-otel-collector
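For reference, a minimal my_values.yaml that mirrors the combined --set example above might look like this sketch (all endpoints, tokens, and indexes are placeholders):

  # my_values.yaml sketch: send data to both Splunk Platform and
  # Splunk Observability Cloud. Replace the placeholder values.
  clusterName: my-cluster
  splunkPlatform:
    endpoint: https://127.0.0.1:8088/services/collector
    token: xxxxxx
    index: main
    metricsIndex: k8s-metrics
  splunkObservability:
    realm: us0
    accessToken: xxxxxx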
The examples directory contains examples of typical use cases with pre-rendered Kubernetes resource manifests for each example.
How to upgrade
Make sure you run helm repo update before you upgrade.
To upgrade a deployment, follow the instructions for installing but use upgrade instead of install, for example:
helm upgrade my-splunk-otel-collector splunk-otel-collector-chart/splunk-otel-collector --values my_values.yaml
How to uninstall
To uninstall/delete a deployment with name my-splunk-otel-collector
:
helm delete my-splunk-otel-collector
Advanced Configuration
To fully configure the Helm chart, see the advanced configuration documentation.
Auto-instrumentation
For setting up auto-instrumentation, see the auto-instrumentation-introduction.md.
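As a hedged sketch, enabling auto-instrumentation usually starts by turning on the chart's operator-related values; the key names below (operator.enabled, certmanager.enabled) are assumptions, so verify them against auto-instrumentation-introduction.md and your chart version:

  # values.yaml sketch (assumed keys): enable the OpenTelemetry operator that
  # injects auto-instrumentation. Verify the key names for your chart version.
  operator:
    enabled: true
  certmanager:
    enabled: true   # the operator's webhooks typically require cert-manager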
Contributing
We welcome feedback and contributions from the community! Please see our contribution guidelines for more information on how to get involved.
License
Apache Software License version 2.0.
ℹ️ SignalFx was acquired by Splunk in October 2019. See Splunk SignalFx for more information.