Heartbeat container stuck in CrashLoopBackOff

Hey, I'm trying to deploy Heartbeat on Kubernetes using the configuration below. After deploying it, the Heartbeat pod gets stuck in CrashLoopBackOff with exit code 126.
From what I found while searching, it could be a resource problem, but the node Heartbeat is scheduled on is working fine.

Here's the deployment configuration:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: heartbeat
  namespace: kube-system
  labels:
    k8s-app: heartbeat
spec:
  selector:
    matchLabels:
      k8s-app: heartbeat
  template:
    metadata:
      labels:
        k8s-app: heartbeat
    spec:
      serviceAccountName: heartbeat
      hostNetwork: true
      dnsPolicy: ClusterFirstWithHostNet
      containers:
      - name: heartbeat
        image: docker.elastic.co/beats/heartbeat:7.17.6
        args: [
          "-c", "/etc/heartbeat.yml",
          "-e",
        ]
        env:
        - name: NODE_NAME
          valueFrom:
            fieldRef:
              fieldPath: spec.nodeName
        securityContext:
          runAsUser: 1000
        resources:
          limits:
            memory: 2000Mi
            cpu: 2500m
          requests:
            cpu: 2000m
            memory: 1536Mi 
        volumeMounts:
        - name: config
          mountPath: /etc/heartbeat.yml
          readOnly: true
          subPath: heartbeat.yml
        - name: data
          mountPath: /usr/share/heartbeat/data
      volumes:
      - name: config
        configMap:
          defaultMode: 0600
          name: heartbeat-deployment-config
      - name: data
        hostPath:
          path: /var/lib/heartbeat-data
          type: DirectoryOrCreate
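
In case it matters: the heartbeat service account referenced by serviceAccountName exists, with RBAC along the lines of the stock manifest (sketching it here from that manifest, trimmed to the resources cluster-scoped autodiscover has to watch):

apiVersion: v1
kind: ServiceAccount
metadata:
  name: heartbeat
  namespace: kube-system
  labels:
    k8s-app: heartbeat
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: heartbeat
  labels:
    k8s-app: heartbeat
rules:
- apiGroups: [""]
  # autodiscover needs read access to these resource types cluster-wide
  resources: ["nodes", "namespaces", "pods", "services"]
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: heartbeat
subjects:
- kind: ServiceAccount
  name: heartbeat
  namespace: kube-system
roleRef:
  kind: ClusterRole
  name: heartbeat
  apiGroup: rbac.authorization.k8s.io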

Since I'm trying to monitor Kubernetes services and pods, this is the default ConfigMap I'm using:

apiVersion: v1
kind: ConfigMap
metadata:
  name: heartbeat-deployment-config
  namespace: kube-system
  labels:
    k8s-app: heartbeat
data:
  heartbeat.yml: |-
    heartbeat.autodiscover:
      providers:
        # Autodiscover pods
        - type: kubernetes
          resource: pod
          scope: cluster
          node: ${NODE_NAME}
          hints.enabled: true

        # Autodiscover services
        - type: kubernetes
          resource: service
          scope: cluster
          node: ${NODE_NAME}
          hints.enabled: true

        # Autodiscover nodes
        - type: kubernetes
          resource: node
          node: ${NODE_NAME}
          scope: cluster
          templates:
            # Example: check the SSH port of all cluster nodes
            - condition: ~
              config:
                - hosts:
                    - ${data.host}:22
                  name: ${data.kubernetes.node.name}
                  schedule: '@every 10s'
                  timeout: 5s
                  type: tcp

    processors:
      - add_cloud_metadata:

    cloud.id: ${ELASTIC_CLOUD_ID}
    cloud.auth: ${ELASTIC_CLOUD_AUTH}

    output.elasticsearch:
      hosts: ['https://10.112.100.121:30883']
      username: "elastic"
      password: "*****************"
      ssl.verification_mode: none
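
With hints enabled, the intent is that pods and services opt in to monitoring through annotations, e.g. something roughly like this on a pod (illustrative values; the port is just an example):

metadata:
  annotations:
    co.elastic.monitor/type: http
    co.elastic.monitor/hosts: '${data.host}:80'
    co.elastic.monitor/schedule: '@every 10s'

One thing I noticed while writing this up: the cloud.id and cloud.auth lines are left over from the default manifest, and the deployment above never sets ELASTIC_CLOUD_ID or ELASTIC_CLOUD_AUTH. Since the output goes straight to Elasticsearch they could just be removed, or (if I'm reading the Beats docs right) given empty defaults with the ${VAR:default} syntax:

cloud.id: ${ELASTIC_CLOUD_ID:}
cloud.auth: ${ELASTIC_CLOUD_AUTH:}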

Any help would be appreciated.

What do your logs show?
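
Something like this should pull them (namespace and label taken from your manifest):

kubectl -n kube-system describe pod -l k8s-app=heartbeat
kubectl -n kube-system logs -l k8s-app=heartbeat --previous

describe shows the container state and the exact termination reason, and --previous returns the logs of the crashed container rather than the freshly restarted one.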
