Parse line error: invalid CRI log format when registry offset points to middle of the log event

Filebeat:
Version: 8.12.1
Autodiscover enabled.

We observe that at times Filebeat reports the following error:
{"log.level":"error","@timestamp":"2024-08-13T10:25:42.693Z","log.logger":"reader_docker_json","log.origin":{"function":"github.com/elastic/beats/v7/libbeat/reader/readjson.(*DockerJSONReader).Next","file.name":"readjson/docker_json.go","file.line":231},"message":"Parse line error: invalid CRI log format","service.name":"filebeat","ecs.version":"1.6.0"}

On further investigation we observed that the registry offset for the log file points to the middle of a log event, which causes this issue.
Is it expected for Filebeat to set the offset to the middle of a log event?
In which scenarios does the offset point to the middle of a log event?

The offset of a file should never point to the middle of a line, that's really odd. The only case where I can see this happening is some sort of inode reuse, so Filebeat ends up using the same registry entry/offset for two different files. The most common situation for this scenario is an environment with a high rate of file creation/deletion; Kubernetes is one of the most common cases.

@sasikiranvaddi could you share the whole configuration you're using (don't forget to redact any sensitive information like credentials, URLs, etc)?

Could you also provide debug logs and the registry?

Could you give more information about your environment? Is it Kubernetes? What are the distro and filesystem you're using? Do you have any idea of the throughput/log rotation rate?

Aside from that, have you noticed:

  • Data duplication?
  • Missing data?
  • Filebeat logging about file truncation when files were not truncated?

Hi @TiagoQueiroz
Thank you for acknowledging the ticket. The system has been upgraded; here is the information I could get from the environment.

The Filebeat logs were collected over an interval of 5 minutes, as close_timeout is configured to 5m.
Unfortunately we couldn't collect logs with debug enabled, as the reporter has already upgraded the system.

The "Parse line error" has obtained for the following pod
/var/log/pods/cbev_xxxx-tm-ingress-controller-cr-trf-envoy-xtzrz_a5ed5ae0-5148-4b40-9d7f-7029ee0433ce/initconfig/1.log
The offset points to the middle of the log event, i.e. up to "version":

Log Producer log reference:
2024-08-07T19:52:56.570922597Z stdout F {"message":"msg="maxprocs: Updating GOMAXPROCS=1: using minimum allowed GOMAXPROCS"","metadata":{"container_name":"initconfig","namespace":"xxxx","pod_name":"xxxx-tm-ingress-controller-cr-trf-envoy-xtzrz"},"service_id":"xxxx-tm-ingress-controller-cr-trf","severity":"info","timestamp":"2024-08-07T19:52:56.000+00:00","version":"1.2.0"}

Registry Ref:
{"k":"filebeat::logs::native::529530981-66305","v":{"id":"native::529530981-66305","identifier_name":"native","type":"container","FileStateOS":{"inode":529530981,"device":66305},"prev_id":"","source":"/var/log/pods/cbev_xxxx-tm-ingress-controller-cr-trf-envoy-xtzrz_a5ed5ae0-5148-4b40-9d7f-7029ee0433ce/initconfig/1.log","offset":3035,"timestamp":[675471967,1724076615],"ttl":-1}}


-> Is it Kubernetes? -> It is a Kubernetes environment deployed on AWS
-> Data duplication? -> No data duplication has been reported
-> Missing data? -> Yes
-> Filebeat logging about file truncation when files were not truncated? -> No

Hi @TiagoQueiroz ,

Any thoughts on it?

Not really :confused:

Unfortunately those few lines of info logs are not enough. Debug logs will help.
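
If the issue reappears, debug logging can be enabled in filebeat.yml; a minimal sketch:

logging.level: debug
# Selectors narrow debug output to specific subsystems; "*" enables all of them.
logging.selectors: ["*"]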

Which input are you using? Is it the container input?

"The system has been upgraded"

Is it now running the latest version of Filebeat (v8.15.0)?

The Container input uses the Log input under the hood; however, the Log input has been deprecated, and the Filestream input has better debug logs.

The Filestream input can be used to ingest container logs. Our docs for Filebeat on Kubernetes link to a manifest that contains an example of how to configure autodiscover with the Filestream input:

filebeat.autodiscover:
  providers:
    - type: kubernetes
      node: ${NODE_NAME}
      hints.enabled: true
      hints.default_config:
        type: filestream
        id: kubernetes-container-logs-${data.kubernetes.pod.name}-${data.kubernetes.container.id}
        paths:
          - /var/log/containers/*-${data.kubernetes.container.id}.log
        parsers:
          - container: ~
        prospector:
          scanner:
            fingerprint.enabled: true
            symlinks: true
        file_identity.fingerprint: ~

Another advantage of the Filestream input is that it supports the fingerprint file identity (used in the example above). The fingerprint file identity solves the problem of inode reuse because it calculates a hash of the first 1024 bytes of the file (the length is configurable) and uses that as the file's identity, instead of relying on the inode assigned by the OS.
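
For reference, the hash offset and length can be tuned under the scanner settings; a minimal sketch (the values shown are the defaults):

prospector:
  scanner:
    fingerprint:
      enabled: true
      offset: 0     # byte position where hashing starts (default)
      length: 1024  # number of bytes hashed to build the identity (default)
file_identity.fingerprint: ~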

One very important thing to keep in mind: if you change from the Container input to Filestream, some files will be re-ingested, because the state of a file from one input cannot be accessed by another.

By "upgraded" I mean that we develop a microservice on top of Filebeat and release it periodically. The upgraded version is still on Filebeat 8.12.1.

We have not configured the filebeat.inputs section; we configured autodiscover with the container type, along the lines of the sketch below.
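
A simplified sketch of that setup (the real configuration is redacted; paths and values here are illustrative, with the close_timeout mentioned earlier included):

filebeat.autodiscover:
  providers:
    - type: kubernetes
      node: ${NODE_NAME}
      hints.enabled: true
      hints.default_config:
        type: container
        paths:
          - /var/log/containers/*-${data.kubernetes.container.id}.log
        close_timeout: 5m  # harvester is closed after 5 minutes, as noted above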

We have not configured the Filestream input yet; work is in progress to move to it.