Receiving harvester errors and invalid CRI log format in Filebeat

Hi,

I have an ELK setup with a Filebeat (7.14) DaemonSet collecting logs from 31 Kubernetes nodes, shipping them to Logstash and then to Elasticsearch.

For the past couple of days we have been seeing dropped logs, and Filebeat is reporting the errors below.


2023-03-09T10:35:21.010Z ERROR [reader_docker_json] readjson/docker_json.go:231 Parse line error: parsing CRI timestamp: parsing time "e7." as "2006-01-02T15:04:05.999999999Z07:00": cannot parse "e7." as "2006"
2023-03-09T10:39:10.257Z ERROR [reader_docker_json] readjson/docker_json.go:231 Parse line error: invalid CRI log format

2023-03-07T08:27:26.750Z ERROR [reader_docker_json] readjson/docker_json.go:231 Parse line error: parsing CRI timestamp: parsing time "berrys_M-AHMDABD2," as "2006-01-02T15:04:05.999999999Z07:00": cannot parse "berrys_M-AHMDABD2," as "2006"
2023-03-07T08:28:53.094Z ERROR [input] log/input.go:550 Harvester could not be started on new file: /var/log/containers/jordan-6b56bdd679-ps824_default_jordan-b632638a85a59e1f2887a1fffa763c26a3cc9acd92bf47a437e0341ba67e9642.log, Err: error setting up harvester: Harvester setup failed. Unexpected file opening error: file info is not identical with opened file. Aborting harvesting and retrying file later again {"input_id": "75139dd5-83de-446e-81b1-5c4d05bb1005", "source": "/var/log/containers/jordan-6b56bdd679-ps824_default_jordan-b632638a85a59e1f2887a1fffa763c26a3cc9acd92bf47a437e0341ba67e9642.log", "state_id": "native::14422778-66304", "finished": false, "os_id": "14422778-66304"}
2023-03-07T08:29:10.998Z ERROR [input] log/input.go:550 Harvester could not be started on new file: /var/log/containers/alpha-5f774b84d9-xgvpw_default_alpha-b1242a34e1185a957ed88285f5e2bdfee627855b5f6a6fa787c115b948d0dd7d.log, Err: error setting up harvester: Harvester setup failed. Unexpected file opening error: file info is not identical with opened file. Aborting harvesting and retrying file later again {"input_id": "75139dd5-83de-446e-81b1-5c4d05bb1005", "source": "/var/log/containers/alpha-5f774b84d9-xgvpw_default_alpha-b1242a34e1185a957ed88285f5e2bdfee627855b5f6a6fa787c115b948d0dd7d.log", "state_id": "native::14549028-66304", "finished": false, "os_id": "14549028-66304"}
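
Looking at the first set of errors: the string 2006-01-02T15:04:05.999999999Z07:00 is Go's reference time layout, so Filebeat apparently expects every CRI log line to begin with an RFC3339 timestamp, and lines starting with fragments like "e7." or "berrys_M-AHMDABD2," fail the parse, which to me suggests lines are being split mid-record (perhaps during rotation). Here is a minimal Go sketch of my understanding; parseCRITimestamp is my own helper name, not Filebeat's actual function (the real parsing lives in readjson/docker_json.go):

package main

import (
    "fmt"
    "strings"
    "time"
)

// parseCRITimestamp is a hypothetical helper mirroring my understanding of
// the CRI check in readjson/docker_json.go: the first space-separated field
// of each line must parse as an RFC3339Nano timestamp.
func parseCRITimestamp(line string) (time.Time, error) {
    first := strings.SplitN(line, " ", 2)[0]
    // "2006-01-02T15:04:05.999999999Z07:00" is Go's reference layout,
    // which is why it appears verbatim in the Filebeat error message.
    return time.Parse("2006-01-02T15:04:05.999999999Z07:00", first)
}

func main() {
    // A well-formed CRI line parses cleanly.
    _, err := parseCRITimestamp("2023-03-09T10:35:20.123456789Z stdout F hello")
    fmt.Println(err) // <nil>

    // A line that does not begin with a timestamp (e.g. the tail of a record
    // split across a rotation) reproduces the exact error from my logs.
    _, err = parseCRITimestamp("e7. stdout F leftover bytes")
    fmt.Println(err) // parsing time "e7." as "2006-01-02...": cannot parse "e7." as "2006"
}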
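
The second pair of errors looks like a different issue: the harvester opens the file and then seems to re-check that the path still points at the same inode. A hedged sketch of that kind of check, with checkOpenedFile as my own name (not Filebeat's), would be:

package main

import (
    "errors"
    "fmt"
    "os"
)

// checkOpenedFile sketches the consistency check I believe is behind
// "file info is not identical with opened file": after opening, stat the
// path again and verify both refer to the same file. If the file was
// rotated or recreated in between, the check fails and the harvester
// retries the file later.
func checkOpenedFile(path string, f *os.File) error {
    pathInfo, err := os.Stat(path) // whatever the path points at now
    if err != nil {
        return err
    }
    openInfo, err := f.Stat() // the handle we actually opened
    if err != nil {
        return err
    }
    if !os.SameFile(pathInfo, openInfo) {
        return errors.New("file info is not identical with opened file")
    }
    return nil
}

func main() {
    // Hypothetical path for illustration only.
    f, err := os.Open("/var/log/containers/example.log")
    if err != nil {
        fmt.Println(err)
        return
    }
    defer f.Close()
    fmt.Println(checkOpenedFile(f.Name(), f))
}

If that is what is happening, the same rapid rotation that is splitting lines mid-record could also explain these harvester failures.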

My Filebeat configuration is:
filebeat.inputs:
- type: container
  paths:
    - /var/log/containers/*.log
  exclude_files: ['.kube-system.','.istio-system.','.kube-public.','.monitoring.','.kube-node-lease.','._default_istio-proxy-.']
  harvester_limit: 40000
  processors:
    - add_kubernetes_metadata:
        host: ${NODE_NAME}
        matchers:
          - logs_path:
              logs_path: "/var/log/containers/"
    - drop_fields:
        fields: ["agent.ephemeral_id", "agent.hostname", "agent.id", "agent.type", "agent.version", "ecs.version", "input.type", "log.offset", "version", "kubernetes.labels.pod-template-hash", "kubernetes.pod.uid", "kubernetes.replicaset.name", "log.file.path", "kubernetes.node.name", "kubernetes.namespace", "kubernetes.labels.tier"]

setup.ilm.enabled: false
multiline.pattern: '^['
multiline.negate: true
multiline.match: after

output.logstash:
  hosts:

Can someone help me resolve this? It is affecting my prod environment.

Regards,
Narendra
