Assorted Filebeat problems since upgrade to 6.6.2

I upgraded to Filebeat 6.6.2 to get the multiline flag. It runs on Kubernetes as a daemonset, using autodiscover. Previously I was running, I think, 6.2.something.

Since then, a new problem (the one I care most about) is that logs from a particular set of pods (Kafka, running in containers named "confluent") are no longer collected.

Other problems, at least some of which pre-existed, appear to include the Filebeat pods getting killed, sometimes several times a day; in the past this has been because they apparently leaked memory and were OOMKilled.

On deleting and re-creating the daemonset, large numbers of old log messages were ingested. Whether these were messages that Filebeat had failed to ingest over previous days, or duplicates of messages already ingested, I don't know.
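
If the daemonset mounts Filebeat's data directory (which holds the registry) on the host in the usual way, roughly as in the stock example manifest sketched below (paths taken from that manifest, not copied from my cluster), I would have expected the registry to survive deleting and re-creating the daemonset:

    # Approximate fragment of the Filebeat DaemonSet pod spec, based on the
    # standard Elastic example manifest rather than copied from my cluster.
    # The registry lives under /usr/share/filebeat/data, backed by a hostPath
    # volume, so state should persist across pod restarts and re-creation.
    spec:
      containers:
        - name: filebeat
          volumeMounts:
            - name: data
              mountPath: /usr/share/filebeat/data
      volumes:
        - name: data
          hostPath:
            path: /var/lib/filebeat-data
            type: DirectoryOrCreate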

Repeated error messages in the logs that I can't make sense of include the following (I think there are others, but I don't have any to hand):

    ERROR kubernetes/watcher.go:258 kubernetes: Watching API error EOF

    ERROR [autodiscover] cfgfile/list.go:96 Error creating runner from config: Can only start an input when all related states are finished: {Id:5913833-51713 Finished:false Fileinfo:0xc4342fb930 Source:/var/lib/docker/containers/b70e5b11370f6a1879181b7d44e1cc2f607d0d71e80439329eb1d3547bca6b43/b70e5b11370f6a1879181b7d44e1cc2f607d0d71e80439329eb1d3547bca6b43-json.log Offset:48380 Timestamp:2019-04-08 10:57:23.797817386 +0000 UTC m=+2077.123540778 TTL:-1ns Type:docker Meta:map FileStateOS:5913833-51713}

The part of the autodiscover configuration for the logs that are no longer collected is as follows. An essentially similar configuration for nginx logs has continued to work since the upgrade to 6.6.2.

        #   Kafka logs are in containers called "confluent"

        - condition:

            or:
              - equals:
                  kubernetes.container.name: confluent

          config:

            - type: docker
              containers.ids:
                - "${data.kubernetes.container.id}"

              processors:
                - add_kubernetes_metadata:
                    in_cluster: true

              fields_under_root: true

              fields:
                log_type: kafka_log

              #   For Kafka multiline handling it looks like anything that doesn't start with an open square bracket
              #   is a continuation line (see the example log lines after this config)

              multiline:
                pattern: '^\['
                negate: true
                match: after
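
To illustrate what the multiline settings are meant to achieve, a typical stretch of the Kafka container log looks roughly like the invented example below (not copied from the real logs). With the pattern above, the three stack-trace lines that don't start with an open square bracket should be appended to the preceding bracket-prefixed line, giving one event for the exception and a separate event for the following INFO line:

    [2019-04-08 10:57:23,797] ERROR Error while appending records to topic-0 (kafka.server.ReplicaManager)
    java.io.IOException: No space left on device
        at java.io.RandomAccessFile.writeBytes(Native Method)
        at java.io.RandomAccessFile.write(RandomAccessFile.java:512)
    [2019-04-08 10:57:24,001] INFO Truncating topic-0 to offset 0 (kafka.log.Log)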

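For context, the fragment above sits inside the autodiscover section of filebeat.yml roughly as sketched below. This is reproduced from memory rather than pasted verbatim, so treat the surrounding details as approximate; only the confluent condition/config pair shown above is exact:

    filebeat.autodiscover:
      providers:
        - type: kubernetes
          templates:
            # ... an essentially similar condition/config pair for nginx,
            #     which still works after the upgrade ...
            - condition:
                or:
                  - equals:
                      kubernetes.container.name: confluent
              config:
                - type: docker
                  containers.ids:
                    - "${data.kubernetes.container.id}"
                  # ... processors, fields and multiline exactly as in the
                  #     fragment above ...
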
Note that I don't actually know where the Kafka logs get lost: in Filebeat, in Logstash, or in Elasticsearch. All I know is that they stopped appearing at the same time as the Filebeat upgrade, and there is nothing relevant that I can see in the Filebeat, Logstash, or Elasticsearch logs.
