github.com/open-telemetry/opentelemetry-collector-contrib/extension/observer/k8sobserver
v0.114.0 (Go)
Kubernetes Observer

Status
Stability: alpha
Distributions: contrib, k8s
Code Owners: @dmitryax, @ChrsMark
Emeritus: @rmfitzpatrick

The k8s_observer is a Receiver Creator-compatible "watch observer" that will detect and report Kubernetes pod, port, container, service, ingress and node endpoints via the Kubernetes API.

Example Config

extensions:
  k8s_observer:
    auth_type: serviceAccount
    node: ${env:K8S_NODE_NAME}
    observe_pods: true
    observe_nodes: true
    observe_services: true
    observe_ingresses: true

receivers:
  receiver_creator:
    watch_observers: [k8s_observer]
    receivers:
      redis:
        rule: type == "port" && pod.name matches "redis"
        config:
          password: '`pod.labels["SECRET"]`'
      kubeletstats:
        rule: type == "k8s.node"
        config:
          auth_type: serviceAccount
          collection_interval: 10s
          endpoint: "`endpoint`:`kubelet_endpoint_port`"
          extra_metadata_labels:
            - container.id
          metric_groups:
            - container
            - pod
            - node
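In the redis rule above, matches performs a regular-expression match rather than an exact string comparison, so `pod.name matches "redis"` also fires for pod names like redis-cache-0. A minimal Go sketch of that semantics (matchesRule is a hypothetical helper for illustration, not part of the observer):

```go
package main

import (
	"fmt"
	"regexp"
)

// matchesRule mimics the `matches` operator in receiver_creator rules:
// the right-hand side is treated as a regular expression and matched
// anywhere in the value, not compared for equality.
func matchesRule(value, pattern string) bool {
	ok, err := regexp.MatchString(pattern, value)
	return err == nil && ok
}

func main() {
	fmt.Println(matchesRule("redis-cache-0", "redis")) // true: "redis" occurs in the name
	fmt.Println(matchesRule("postgres-0", "redis"))    // false: no match
}
```

Because the pattern is a regex, anchors such as `^redis` can be used in a rule to avoid accidentally matching pods that merely contain the substring.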

The node field can be set to a node name to limit discovered endpoints to that node. For example, the node name can be obtained inside a Collector pod spec using the downward API as follows:

env:
  - name: K8S_NODE_NAME
    valueFrom:
      fieldRef:
        fieldPath: spec.nodeName

This spec-determined value would then be available via the ${env:K8S_NODE_NAME} usage in the observer configuration.

Config

All fields are optional.

| Name | Type | Default | Docs |
| ---- | ---- | ------- | ---- |
| auth_type | string | serviceAccount | How to authenticate to the K8s API server. This can be one of none (for no auth), serviceAccount (to use the standard service account token provided to the agent pod), or kubeConfig (to use credentials from ~/.kube/config). |
| node | string | | The node name to limit the discovery of pod, port, and node endpoints. Providing no value (the default) results in discovering endpoints for all available nodes. |
| observe_pods | bool | true | Whether to report observed pod and port endpoints. If true and node is specified, only pod and port endpoints whose spec.nodeName matches the provided node name are discovered. If true and node isn't specified, all available pod and port endpoints are discovered. Note that Collector connectivity to pods on other nodes depends on your cluster configuration and isn't guaranteed. |
| observe_nodes | bool | false | Whether to report observed k8s.node endpoints. If true and node is specified, only node endpoints whose metadata.name matches the provided node name are discovered. If true and node isn't specified, all available node endpoints are discovered. Note that Collector connectivity to nodes depends on your cluster configuration and isn't guaranteed. |
| observe_services | bool | false | Whether to report observed k8s.service endpoints. |
| observe_ingresses | bool | false | Whether to report observed k8s.ingress endpoints. |
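The defaults in the table interact: observe_pods defaults to true, so a deployment that should watch only services must disable it explicitly. A minimal fragment consistent with the table above (a sketch, not a complete Collector config):

```yaml
extensions:
  k8s_observer:
    auth_type: serviceAccount
    observe_pods: false     # defaults to true, so disable explicitly
    observe_services: true  # defaults to false
```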

More complete configuration examples showing how to use this observer together with the receiver_creator can be found in the Receiver Creator's documentation.

Setting up RBAC permissions

When using the serviceAccount auth_type, the service account of the pod running the agent needs permission to read the K8s resources it should observe (i.e. pods, nodes, services, and ingresses). This requires a ClusterRole granting read access to those resources, bound to that service account via a ClusterRoleBinding. Below is an example of how to set this up:

  1. Create a ServiceAccount that the collector should use:
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    app: otelcontribcol
  name: otelcontribcol
EOF
  2. Create a ClusterRole/ClusterRoleBinding that grants permission to read pods, nodes, services, and ingresses.

Note: If you do not plan to observe all of these resources (e.g. if you are only interested in services), it is recommended to remove the resources you do not intend to observe from the configuration below:

cat <<EOF | kubectl apply -f -
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: otelcontribcol
  labels:
    app: otelcontribcol
rules:
- apiGroups:
  - ""
  resources:
  - nodes
  - services
  - pods
  verbs:
  - get
  - list
  - watch
- apiGroups: 
  - "networking.k8s.io"
  resources:
  - ingresses
  verbs:
  - get
  - watch
  - list
EOF
cat <<EOF | kubectl apply -f -
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: otelcontribcol
  labels:
    app: otelcontribcol
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: otelcontribcol
subjects:
- kind: ServiceAccount
  name: otelcontribcol
  namespace: default
EOF
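Once the role and binding are applied, the grants can be sanity-checked with kubectl auth can-i, impersonating the service account (this assumes the default namespace and the otelcontribcol names used in the manifests above, and requires an active cluster context):

```shell
# Verify the service account can read each observed resource type.
kubectl auth can-i list pods --as=system:serviceaccount:default:otelcontribcol
kubectl auth can-i watch nodes --as=system:serviceaccount:default:otelcontribcol
kubectl auth can-i get services --as=system:serviceaccount:default:otelcontribcol
kubectl auth can-i list ingresses.networking.k8s.io --as=system:serviceaccount:default:otelcontribcol
```

Each command should print "yes"; a "no" points at a missing rule in the ClusterRole or a binding to the wrong namespace.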
  3. Create a ConfigMap containing the configuration for the collector:
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: ConfigMap
metadata:
  name: otelcontribcol
  labels:
    app: otelcontribcol
data:
  config.yaml: |
    extensions:
      k8s_observer:
        auth_type: serviceAccount
        node: ${env:K8S_NODE_NAME}
        observe_pods: true
        observe_nodes: true
        observe_services: true
        observe_ingresses: true
    
    receivers:
      receiver_creator:
        watch_observers: [k8s_observer]
        receivers:
          redis:
            rule: type == "port" && pod.name matches "redis"
            config:
              password: '`pod.labels["SECRET"]`'
          kubeletstats:
            rule: type == "k8s.node"
            config:
              auth_type: serviceAccount
              collection_interval: 10s
              endpoint: "`endpoint`:`kubelet_endpoint_port`"
              extra_metadata_labels:
                - container.id
              metric_groups:
                - container
                - pod
                - node
    
    exporters:
      otlp:
        endpoint: <OTLP_ENDPOINT>

    service:
      pipelines:
        metrics:
          receivers: [receiver_creator]
          exporters: [otlp]
EOF
  4. Create the collector deployment, referring to the service account created earlier:
cat <<EOF | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: otelcontribcol
  labels:
    app: otelcontribcol
spec:
  replicas: 1
  selector:
    matchLabels:
      app: otelcontribcol
  template:
    metadata:
      labels:
        app: otelcontribcol
    spec:
      serviceAccountName: otelcontribcol
      containers:
      - name: otelcontribcol
        # This image is created by running `make docker-otelcontribcol`.
        # If you are not building the collector locally, specify a published image: `otel/opentelemetry-collector-contrib`
        image: otelcontribcol:latest
        args: ["--config", "/etc/config/config.yaml"]
        volumeMounts:
        - name: config
          mountPath: /etc/config
        imagePullPolicy: IfNotPresent
      volumes:
        - name: config
          configMap:
            name: otelcontribcol
EOF

Package last updated on 18 Nov 2024
