Error initializing publisher: 2 errors: open /etc/filebeat/cert.pem: no such file or directory

Hi,
When deploying Filebeat 7.6.2 to a Kubernetes 1.13 cluster via a manifest file, the containers fail to start correctly and produce the following error:

2020-04-08T09:59:12.044Z        ERROR   instance/beat.go:933    Exiting: error initializing publisher: 2 errors: open /etc/filebeat/cert.pem: no such file or directory /etc/filebeat/cert.pem; open /etc/filebeat/ca.pem: no such file or directory reading /etc/filebeat/ca.pem
Exiting: error initializing publisher: 2 errors: open /etc/filebeat/cert.pem: no such file or directory /etc/filebeat/cert.pem; open /etc/filebeat/ca.pem: no such file or directory reading /etc/filebeat/ca.pem

We deploy Filebeat with the kubectl apply -f filebeat76-kubernetes.yaml command, as described in the official Filebeat deployment documentation. The pods are created successfully, but as the error above shows, they cannot find an SSL certificate.

Here is the manifest file:

---
apiVersion: v1
kind: ConfigMap
metadata:
  name: filebeat76-config
  namespace: kube-system
  labels:
    k8s-app: filebeat76
data:
  filebeat.yml: |-
    filebeat.inputs:
    - type: container
      paths:
        - /var/log/containers/*.log
      processors:
        - add_kubernetes_metadata:
            host: ${NODE_NAME}
            matchers:
            - logs_path:
                logs_path: "/var/log/containers/"
    # To enable hints based autodiscover, remove `filebeat.inputs` configuration and uncomment this:
    #filebeat.autodiscover:
    #  providers:
    #    - type: kubernetes
    #      node: ${NODE_NAME}
    #      hints.enabled: true
    #      hints.default_config:
    #        type: container
    #        paths:
    #          - /var/log/containers/*${data.kubernetes.container.id}.log

    processors:
      - add_cloud_metadata:
      - add_host_metadata:

    cloud.id: ${ELASTIC_CLOUD_ID}
    cloud.auth: ${ELASTIC_CLOUD_AUTH}

    output.elasticsearch:
      hosts: ['${ELASTICSEARCH_HOST}:${ELASTICSEARCH_PORT}']
      index: "filebeat-%{[agent.version]}-%{+yyyy.MM.dd}"
      username: ${ELASTICSEARCH_USERNAME}
      password: ${ELASTICSEARCH_PASSWORD}
      ssl.enabled: false
      ssl.verification_mode: none
      tls.enabled: false
    path.data: /var/lib/beats76 # separate data path to avoid overlap with the first beat (path.data is a top-level setting, not an output option)
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: filebeat76
  namespace: kube-system
  labels:
    k8s-app: filebeat76
spec:
  selector:
    matchLabels:
      k8s-app: filebeat76
  template:
    metadata:
      labels:
        k8s-app: filebeat76
    spec:
      serviceAccountName: filebeat76
      terminationGracePeriodSeconds: 30
      hostNetwork: true
      dnsPolicy: ClusterFirstWithHostNet
      containers:
      - name: filebeat76
        image: docker.elastic.co/beats/filebeat:7.6.2
        args: [
          "-c", "/etc/filebeat.yml",
          "-e",
        ]
        env:
        - name: ELASTICSEARCH_HOST
          value: "MYELKSERVERIP"
        - name: ELASTICSEARCH_PORT
          value: "9200"
        - name: ELASTICSEARCH_USERNAME
          value: elastic
        - name: ELASTICSEARCH_PASSWORD
          value: changeme
        - name: ELASTIC_CLOUD_ID
          value:
        - name: ELASTIC_CLOUD_AUTH
          value:
        - name: NODE_NAME
          valueFrom:
            fieldRef:
              fieldPath: spec.nodeName
        securityContext:
          runAsUser: 0
          # If using Red Hat OpenShift uncomment this:
          #privileged: true
        resources:
          limits:
            memory: 200Mi
          requests:
            cpu: 100m
            memory: 100Mi
        volumeMounts:
        - name: config
          mountPath: /etc/filebeat.yml
          readOnly: true
          subPath: filebeat.yml
        - name: data
          mountPath: /usr/share/filebeat/data
        - name: varlibdockercontainers
          mountPath: /var/lib/docker/containers
          readOnly: true
        - name: varlog
          mountPath: /var/log
          readOnly: true
        - name: certificate
          mountPath: /etc/certificate/
          readOnly: true
      volumes:
      - name: config
        configMap:
          defaultMode: 0600
          name: filebeat76-config
      - name: varlibdockercontainers
        hostPath:
          path: /var/lib/docker/containers
      - name: varlog
        hostPath:
          path: /var/log
      # data folder stores a registry of read status for all files, so we don't send everything again on a Filebeat pod restart
      - name: data
        hostPath:
          path: /var/lib/filebeat-data
          type: DirectoryOrCreate
      - name: certificate
        secret:
          secretName: alphaclustersecret
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: filebeat76
subjects:
- kind: ServiceAccount
  name: filebeat76
  namespace: kube-system
roleRef:
  kind: ClusterRole
  name: filebeat76
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: filebeat76
  labels:
    k8s-app: filebeat76
rules:
- apiGroups: [""] # "" indicates the core API group
  resources:
  - namespaces
  - pods
  verbs:
  - get
  - watch
  - list
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: filebeat76
  namespace: kube-system
  labels:
    k8s-app: filebeat76
---

As you can see, we tried to disable the SSL requirement with the following lines:

ssl.enabled: false
ssl.verification_mode: none
tls.enabled: false

Why does the Filebeat daemon look for a local certificate when we tell it that SSL is not needed?
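As a side note, Beats configure TLS for outputs under the `ssl.*` namespace only; there is no `tls.*` namespace in Filebeat, so the `tls.enabled` line has no effect. A minimal sketch of an output with TLS turned off (assuming the Elasticsearch endpoint really is plain HTTP) would be:

```yaml
output.elasticsearch:
  hosts: ['${ELASTICSEARCH_HOST}:${ELASTICSEARCH_PORT}']
  username: ${ELASTICSEARCH_USERNAME}
  password: ${ELASTICSEARCH_PASSWORD}
  # With a plain-HTTP endpoint, ssl.enabled: false is sufficient;
  # tls.enabled is not a recognized Filebeat setting
  ssl.enabled: false
```

If the endpoint were HTTPS with a self-signed certificate, the relevant combination would instead be keeping SSL enabled and setting `ssl.verification_mode: none`.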
We also generated cert.pem and key.pem files and converted them into a Kubernetes secret, which the pod seems to mount correctly:

kubectl describe pod filebeat76-424mg --namespace=kube-system
Name:               filebeat76-424mg
Namespace:          kube-system
Priority:           0
PriorityClassName:  <none>
Node:               kube-node05/10.54.59.108
Start Time:         Wed, 08 Apr 2020 09:38:16 +0000
Labels:             controller-revision-hash=57897749bc
                    k8s-app=filebeat76
                    pod-template-generation=1
Annotations:        <none>
Status:             Running
IP:                 THEKUBENODEIP
Controlled By:      DaemonSet/filebeat76
Containers:
  filebeat76:
    Container ID:  docker://731efe61be7a5b4797f1fdcc3063d4a7b19f9f04aa2c3ff422a0e76f0647e09c
    Image:         docker.elastic.co/beats/filebeat:7.6.2
    Image ID:      docker-pullable://docker.elastic.co/beats/filebeat@sha256:24211654fbe1ce3866583d7ae385feffbfaa77d4598d189fdec46111133811a9
    Port:          <none>
    Host Port:     <none>
    Args:
      -c
      /etc/filebeat.yml
      -e
    State:          Waiting
      Reason:       CrashLoopBackOff
    Last State:     Terminated
      Reason:       Error
      Exit Code:    1
      Started:      Wed, 08 Apr 2020 10:14:37 +0000
      Finished:     Wed, 08 Apr 2020 10:14:37 +0000
    Ready:          False
    Restart Count:  12
    Limits:
      memory:  200Mi
    Requests:
      cpu:     100m
      memory:  100Mi
    Environment:
      ELASTICSEARCH_HOST:      ELKSERVERIP
      ELASTICSEARCH_PORT:      9200
      ELASTICSEARCH_USERNAME:  elastic
      ELASTICSEARCH_PASSWORD:  changeme
      ELASTIC_CLOUD_ID:        
      ELASTIC_CLOUD_AUTH:      
      NODE_NAME:                (v1:spec.nodeName)
    Mounts:
      /etc/certificate/ from certificate (ro)
      /etc/filebeat.yml from config (ro)
      /usr/share/filebeat/data from data (rw)
      /var/lib/docker/containers from varlibdockercontainers (ro)
      /var/log from varlog (ro)
      /var/run/secrets/kubernetes.io/serviceaccount from filebeat76-token-8jqn8 (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             False 
  ContainersReady   False 
  PodScheduled      True 
Volumes:
  config:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      filebeat-config
    Optional:  false
  varlibdockercontainers:
    Type:          HostPath (bare host directory volume)
    Path:          /var/lib/docker/containers
    HostPathType:  
  varlog:
    Type:          HostPath (bare host directory volume)
    Path:          /var/log
    HostPathType:  
  data:
    Type:          HostPath (bare host directory volume)
    Path:          /var/lib/filebeat-data
    HostPathType:  DirectoryOrCreate
  certificate:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  alphaclustersecret
    Optional:    false
  filebeat76-token-8jqn8:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  filebeat76-token-8jqn8
    Optional:    false
QoS Class:       Burstable
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/disk-pressure:NoSchedule
                 node.kubernetes.io/memory-pressure:NoSchedule
                 node.kubernetes.io/network-unavailable:NoSchedule
                 node.kubernetes.io/not-ready:NoExecute
                 node.kubernetes.io/unreachable:NoExecute
                 node.kubernetes.io/unschedulable:NoSchedule
Events:
  Type     Reason     Age                  From                  Message
  ----     ------     ----                 ----                  -------
  Normal   Scheduled  40m                  default-scheduler     Successfully assigned kube-system/filebeat76-424mg to kube-node05
  Normal   Pulled     38m (x5 over 40m)    kubelet, kube-node05  Container image "docker.elastic.co/beats/filebeat:7.6.2" already present on machine
  Normal   Created    38m (x5 over 40m)    kubelet, kube-node05  Created container
  Normal   Started    38m (x5 over 40m)    kubelet, kube-node05  Started container
  Warning  BackOff    12s (x184 over 40m)  kubelet, kube-node05  Back-off restarting failed container
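For completeness, the secret was built from locally generated PEM files. A reproducible sketch of that step (the openssl subject and key size are arbitrary, and ca.pem here simply reuses the self-signed cert for illustration):

```shell
# Generate a throwaway self-signed certificate and key (CN is arbitrary)
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
  -keyout key.pem -out cert.pem -subj "/CN=filebeat-test"

# For illustration only: reuse the self-signed cert as the CA file
cp cert.pem ca.pem

# Load all three files into the secret the DaemonSet mounts
# (requires cluster access, so shown here as a comment):
# kubectl create secret generic alphaclustersecret \
#   --from-file=cert.pem --from-file=key.pem --from-file=ca.pem \
#   --namespace=kube-system
```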

Thank you in advance for support.

It turned out the secret was not exposing the certificates correctly in the mounted volume. I solved it by using ConfigMaps instead and specifying the certificate paths manually, making sure to explicitly mount cert.pem, key.pem and ca.pem.
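A sketch of that working approach, with an assumed ConfigMap name (`filebeat76-certs`) and elided PEM bodies, is a ConfigMap that carries the certificate files directly:

```yaml
# Sketch only: the ConfigMap name is an assumption and the PEM bodies are elided
apiVersion: v1
kind: ConfigMap
metadata:
  name: filebeat76-certs
  namespace: kube-system
data:
  ca.pem: |
    -----BEGIN CERTIFICATE-----
    ...
    -----END CERTIFICATE-----
  cert.pem: |
    -----BEGIN CERTIFICATE-----
    ...
    -----END CERTIFICATE-----
  key.pem: |
    -----BEGIN PRIVATE KEY-----
    ...
    -----END PRIVATE KEY-----
```

The DaemonSet then mounts this ConfigMap at /etc/filebeat/ (for example via one subPath entry per file), and filebeat.yml references the files explicitly with `ssl.certificate_authorities: ["/etc/filebeat/ca.pem"]`, `ssl.certificate: "/etc/filebeat/cert.pem"` and `ssl.key: "/etc/filebeat/key.pem"`. A Secret is the more conventional place for key.pem, but this mirrors what ended up working here.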
