Killing container with id docker://filebeat:Need to kill Pod

Running Filebeat v6.0.1 on Kubernetes v1.9.8, the beats die after a few minutes. We don't have a quota set for this namespace:

---
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: filebeat
  namespace: kube-system
  labels:
    daemon-set: filebeat
    kubernetes.io/cluster-service: "true"
spec:
  template:
    metadata:
      labels:
        daemon-set: filebeat
        kubernetes.io/cluster-service: "true"
    spec:
      serviceAccountName: filebeat
      imagePullSecrets:
      - name: kube-system-nexus
      volumes:
        - name: filebeat-volume
          configMap:
            name: filebeat-conf
            defaultMode: 0600
        - name: ca-volume
          configMap:
            name: ca-cert
            defaultMode: 0600
        - name: cert-volume
          configMap:
            name: client-cert
            defaultMode: 0600
        - name: key-volume
          secret:
            secretName: filebeat-key
            defaultMode: 0600
        - name: docker-volume
          hostPath:
            path: /var/lib/docker/containers
      hostAliases:
      - ip: XXXXXXXXXXXXXX
        hostnames:
        - XXXXXXXXXXXXXX
      - ip: XXXXXXXXXXXXXX
        hostnames:
        - XXXXXXXXXXXXXX
      containers:
      - name: filebeat
        image: XXXXXXXXXXXXXX:9082/kube-system/filebeat:6.0.1
        args: ["-c", "/etc/filebeat.yml", "-e"]
        securityContext:
          runAsUser: 0
        volumeMounts:
        - name: filebeat-volume
          mountPath: /etc/filebeat.yml
          readOnly: true
          subPath: filebeat.yml
        - name: ca-volume
          mountPath: /etc/pki/beat/ca.pem
          readOnly: true
          subPath: ca.pem
        - name: cert-volume
          mountPath: /etc/pki/beat/client-crt.pem
          readOnly: true
          subPath: client-crt.pem
        - name: key-volume
          mountPath: /etc/pki/beat/client-key.pem
          readOnly: true
          subPath: client-key.pem
        - name: docker-volume
          mountPath: /var/lib/docker/containers
          readOnly: true

Pod events:

Events:
  Type    Reason                 Age   From                     Message
  ----    ------                 ----  ----                     -------
  Normal  SuccessfulMountVolume  1m    kubelet, XXXXXXXXXXXXXXXXXXXX  MountVolume.SetUp succeeded for volume "docker-volume"
  Normal  SuccessfulMountVolume  1m    kubelet, XXXXXXXXXXXXXXXXXXXX  MountVolume.SetUp succeeded for volume "cert-volume"
  Normal  SuccessfulMountVolume  1m    kubelet, XXXXXXXXXXXXXXXXXXXX  MountVolume.SetUp succeeded for volume "filebeat-volume"
  Normal  SuccessfulMountVolume  1m    kubelet, XXXXXXXXXXXXXXXXXXXX  MountVolume.SetUp succeeded for volume "ca-volume"
  Normal  SuccessfulMountVolume  1m    kubelet, XXXXXXXXXXXXXXXXXXXX  MountVolume.SetUp succeeded for volume "key-volume"
  Normal  SuccessfulMountVolume  1m    kubelet, XXXXXXXXXXXXXXXXXXXX  MountVolume.SetUp succeeded for volume "filebeat-token-54pvl"
  Normal  Pulled                 1m    kubelet, XXXXXXXXXXXXXXXXXXXX  Container image "XXXXXXXXXXXXXXXXXXXX:9082/kube-system/filebeat:6.0.1" already present on machine
  Normal  Created                1m    kubelet, XXXXXXXXXXXXXXXXXXXX  Created container
  Normal  Started                1m    kubelet, XXXXXXXXXXXXXXXXXXXX  Started container
  Normal  Killing                3s    kubelet, XXXXXXXXXXXXXXXXXXXX  Killing container with id docker://filebeat:Need to kill Pod

Pod log:

2018/06/28 13:12:15.490964 filebeat.go:311: INFO Stopping filebeat
2018/06/28 13:12:15.490998 crawler.go:105: INFO Stopping Crawler
2018/06/28 13:12:15.491008 crawler.go:115: INFO Stopping 1 prospectors
2018/06/28 13:12:15.491022 reload.go:223: INFO Dynamic config reloader stopped
2018/06/28 13:12:15.491039 prospector.go:137: INFO Prospector ticker stopped
2018/06/28 13:12:15.491055 prospector.go:159: INFO Stopping Prospector: 19304543930541122
2018/06/28 13:12:15.491218 harvester.go:228: INFO Reader was closed: /var/lib/docker/containers/dcfa0612eac1c84a5a66102311d95a6bbef17ceda761c618d989bbb9abaebf04/dcfa0612eac1c84a5a66102311d95a6bbef17ceda761c618d989bbb9abaebf04-json.log. Closing.
2018/06/28 13:12:15.492228 crawler.go:131: INFO Crawler stopped
2018/06/28 13:12:15.492245 registrar.go:210: INFO Stopping Registrar
2018/06/28 13:12:15.493050 registrar.go:165: INFO Ending Registrar
2018/06/28 13:12:15.494381 metrics.go:51: INFO Total non-zero values:  beat.memstats.gc_next=17902656 beat.memstats.memory_alloc=15237184 beat.memstats.memory_total=3092571792 filebeat.events.active=2 filebeat.events.added=205751 filebeat.events.done=205749 filebeat.harvester.closed=29 filebeat.harvester.open_files=0 filebeat.harvester.running=0 filebeat.harvester.started=29 libbeat.config.module.running=0 libbeat.config.reloads=1 libbeat.output.events.acked=18512 libbeat.output.events.batches=11 libbeat.output.events.total=18512 libbeat.output.read.bytes=6561 libbeat.output.type=logstash libbeat.output.write.bytes=6074828 libbeat.pipeline.clients=0 libbeat.pipeline.events.active=0 libbeat.pipeline.events.failed=1 libbeat.pipeline.events.filtered=187238 libbeat.pipeline.events.published=18512 libbeat.pipeline.events.retry=8192 libbeat.pipeline.events.total=205751 libbeat.pipeline.queue.acked=18512 registrar.states.current=29 registrar.states.update=205748 registrar.writes=84196
2018/06/28 13:12:15.494412 metrics.go:52: INFO Uptime: 1m22.271421676s
2018/06/28 13:12:15.494420 beat.go:268: INFO filebeat stopped.

Hi @sszabo,

This sounds like a bug in the manifests that we fixed a few weeks ago: https://github.com/elastic/beats/pull/7284

I guess you are deploying your manifests with kubectl apply. If that's the case, removing all the lines containing

kubernetes.io/cluster-service: "true"

should fix your issue. That label marks a resource as managed by the cluster's addon manager, which can delete objects carrying it that it doesn't recognize as one of its own addons.
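For reference, a sketch of what the trimmed label sections of the DaemonSet above would look like (the kubernetes.io/cluster-service line removed in both places it appears, everything else unchanged):

```yaml
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: filebeat
  namespace: kube-system
  labels:
    daemon-set: filebeat
    # kubernetes.io/cluster-service: "true"  <- removed
spec:
  template:
    metadata:
      labels:
        daemon-set: filebeat
        # kubernetes.io/cluster-service: "true"  <- removed
```

After editing, re-running kubectl apply on the updated manifest should roll the DaemonSet pods without the addon manager interfering.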

Let us know if that helps!

Thank you very much, the pods are running:

sszabo@tor976568e1 [/home/sszabo] $ kubectl -n kube-system get pods -l daemon-set=filebeat
NAME             READY     STATUS    RESTARTS   AGE
filebeat-22mvn   1/1       Running   0          12h
filebeat-6twts   1/1       Running   0          12h
filebeat-dff5f   1/1       Running   0          12h
filebeat-schbk   1/1       Running   0          12h

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.