Frequent restarts - kubernetes_metadata

We used the DaemonSet YAML with the following added to the filebeat-prospectors ConfigMap (version 6.2.4; also tried 6.3):

```yaml
metadata:
  name: filebeat-prospectors
  namespace: kube-system
  labels:
    k8s-app: filebeat
    kubernetes.io/cluster-service: "true"
data:
  kubernetes.yml: |-
    - type: docker
      multiline.pattern: '^[[:digit:]]+'
      multiline.negate: true
      multiline.match: after
      exclude_lines: ['kafka']
      containers.ids:
        - "*"
      processors:
        - add_kubernetes_metadata:
            in_cluster: true
```

One of the error messages from a previously failed Filebeat container (the package paths in the trace were lost when pasting):

[signal SIGSEGV: segmentation violation code=0x1 addr=0x158 pc=0x13dd6fa]

goroutine 147 [running]:
(*ContainerIndexer).GetIndexes(0xc420141d90, 0x0, 0xc420213230, 0xc420204120, 0xc420204060)
	/go/src/ +0x3a
(*Indexers).GetIndexes(0xc420213230, 0x0, 0x0, 0x0, 0x0)
	/go/src/ +0x230
(*kubernetesAnnotator).removePod(0xc4201541c0, 0x0)
	/go/src/ +0x3d
(*kubernetesAnnotator).(
	/go/src/ +0x34, 0xc424673e90)
	/go/src/ +0x78
(*kubernetesAnnotator).worker(0xc4201541c0)
	/go/src/ +0x26f
created by
	/go/src/ +0x9f7

Sometimes we also saw the Filebeat container consuming high CPU, with its log file containing only "\\" repeated over and over.
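For what it's worth, the trace suggests a nil pod pointer reaching `(*ContainerIndexer).GetIndexes` via `removePod(0xc4201541c0, 0x0)` (note the `0x0` second argument). A minimal, hypothetical Go sketch of that failure mode follows; the types and fields here are invented for illustration and are not Filebeat's actual code:

```go
package main

import "fmt"

// Pod stands in for the Kubernetes pod object passed through the
// annotator; the field is invented for this sketch.
type Pod struct {
	ContainerNames []string
}

type ContainerIndexer struct{}

// GetIndexes dereferences pod without a nil check — the kind of bug
// that would produce the SIGSEGV at a small addr (0x158) in the trace.
func (c *ContainerIndexer) GetIndexes(pod *Pod) []string {
	return pod.ContainerNames // panics when pod == nil
}

// callSafely reproduces the crash and converts the panic into a string.
func callSafely(idx *ContainerIndexer, pod *Pod) (msg string) {
	defer func() {
		if r := recover(); r != nil {
			msg = fmt.Sprint(r)
		}
	}()
	idx.GetIndexes(pod)
	return "ok"
}

func main() {
	idx := &ContainerIndexer{}
	fmt.Println(callSafely(idx, &Pod{ContainerNames: []string{"filebeat"}})) // ok
	fmt.Println(callSafely(idx, nil)) // runtime error: invalid memory address or nil pointer dereference
}
```

In other words, a delete event that hands the worker a nil pod would crash exactly one goroutine deep in the indexer, matching the frames above.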

Any chance you could open a GitHub issue in this repo? Can you reproduce this reliably? If so, please include as many details as possible about how to reproduce it in the GitHub issue (yes, you have already provided quite a bit of data, thanks for that).

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.