Filebeat 7.17.3 invalid CRI log format error

I am getting various CRI parsing errors on Filebeat 7.17.3. I have cleared all Filebeat state and restarted Filebeat, but these errors keep occurring. FYI: I am running the containers with Nomad. Here are a couple of examples:

 ERROR   [reader_docker_json]    readjson/docker_json.go:231     Parse line error: parsing CRI timestamp: parsing time "\x06stdout\x10\xf1\x9a\x92\xbb\xa2\xa5\x9d\xfc\x16\x1a\x98\x03{\"error\":{\"root_cause\":[{\"type\":\"index_not_found_exception\",\"reason\":\"no" as "2006-01-02T15:04:05.999999999Z07:00": cannot parse "\x06stdout\x10\xf1\x9a\x92\xbb\xa2\xa5\x9d\xfc\x16\x1a\x98\x03{\"error\":{\"root_cause\":[{\"type\":\"index_not_found_exception\",\"reason\":\"no" as "2006"

and

2022-06-14T10:16:39.568Z        ERROR   [reader_docker_json]    readjson/docker_json.go:231     Parse line error: invalid CRI log format
2022-06-14T10:16:39.568Z        ERROR   [reader_docker_json]    readjson/docker_json.go:231     Parse line error: invalid CRI log format

All the log lines from the input files referenced appear to be valid JSON, one JSON object per line.

Here is my config:

filebeat:
  inputs:
    - paths:
        - "/var/lib/docker/containers/*/*.log"
      type: container
      fields:
        type: api-error
        environment: development
        datacenter: ptc1
      tags:
        - nomad
        - access
        - api
      fields_under_root: true
      json.keys_under_root: true
      json.overwrite_keys: true

Can someone please help me out?

Hi @krish0608,

It seems that the log lines in your files are not single JSON documents; the error says that Filebeat is trying to parse

\x06stdout\x10\xf1\x9a\x92\xbb\xa2\xa5\x9d\xfc\x16\x1a\x98\x03{\"error\":{\"root_cause\":[{\"type\":\"index_not_found_exception\",\"reason\":\"no

as the year of a date.

Maybe the Nomad containers are logging in a format different from what Filebeat expects. I know the container input works for Docker containers.

Also, if you have special characters in your logs, you might need to configure the encoding; see the docs for more details: Container input | Filebeat Reference [8.2] | Elastic.
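As a rough illustration, setting the encoding on the container input looks like this (the `utf-8` value here is just an assumption for the sketch; pick whatever encoding your containers actually write, from the list in the docs):

```yaml
filebeat:
  inputs:
    - type: container
      paths:
        - "/var/lib/docker/containers/*/*.log"
      # Assumed value for illustration only; see the container input
      # docs for the full list of supported encodings.
      encoding: utf-8
```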

If you don't manage to find out why you're getting this parse error, could you share some of your log files (at least some lines)?

Hi @TiagoQueiroz ,

Thanks for your response. Here is a sample of the log format:

stdoutۋ▒▒▒ް▒X/backup/elasticsearch/nodes/0/indices/I3CO-E70SESNejf1Y7t2xg/0/translog/translog-24.tloglk
stdoutޚ▒▒▒ް▒W/backup/elasticsearch/nodes/0/indices/I3CO-E70SESNejf1Y7t2xg/0/translog/translog-24.ckpkl
stdout▒▒▒▒▒ް▒X/backup/elasticsearch/nodes/0/indices/I3CO-E70SESNejf1Y7t2xg/0/translog/translog-25.tloglk
stdout▒▒▒▒▒ް▒W/backup/elasticsearch/nodes/0/indices/I3CO-E70SESNejf1Y7t2xg/0/translog/translog-25.ckpkl
stdout▒▒▒▒▒ް▒X/backup/elasticsearch/nodes/0/indices/I3CO-E70SESNejf1Y7t2xg/0/translog/translog-26.tloglk
stdout▒▒▒▒▒ް▒W/backup/elasticsearch/nodes/0/indices/I3CO-E70SESNejf1Y7t2xg/0/translog/translog-26.ckpkl
stdout▒▒▒▒▒ް▒X/backup/elasticsearch/nodes/0/indices/I3CO-E70SESNejf1Y7t2xg/0/translog/translog-27.tloglk
stdout▒▒▒▒▒ް▒W/backup/elasticsearch/nodes/0/indices/I3CO-E70SESNejf1Y7t2xg/0/translog/translog-27.ckpkl
stdout▒▒▒▒▒ް▒X/backup/elasticsearch/nodes/0/indices/I3CO-E70SESNejf1Y7t2xg/0/translog/translog-28.tlogl

Attaching Logs from different containers:

V▒dout▒▒▒▒▒▒▒B[taskScheduler-4] ERROR com.octoperf.hn - Error in scheduled task
▒odout٦▒▒▒▒▒▒[octoperf_user] ElasticsearchStatusException[Elasticsearch exception [type=index_not_found_exception, reason=no such index [octoperf_user]]]
opdout▒▒▒▒▒▒▒[  at org.elasticsearch.rest.BytesRestResponse.errorFromXContent(BytesRestResponse.java:177)
p{dout▒▒▒▒▒▒▒\  at org.elasticsearch.client.RestHighLevelClient.parseEntity(RestHighLevelClient.java:1727)
{{doutտ▒▒▒▒▒g   at org.elasticsearch.client.RestHighLevelClient.parseResponseException(RestHighLevelClient.java:1704)
{sdout▒Ę▒▒▒▒g   at org.elasticsearch.client.RestHighLevelClient.internalPerformRequest(RestHighLevelClient.java:1467)
s▒dout▒͘▒▒▒▒_    at org.elasticsearch.client.RestHighLevelClient.performRequest(RestHighLevelClient.java:1424)
▒jdout▒ؘ▒▒▒▒m    at org.elasticsearch.client.RestHighLevelClient.performRequestAndParseEntity(RestHighLevelClient.java:1394)
jJdout▒㘰▒▒▒V   at org.elasticsearch.client.RestHighLevelClient.search(RestHighLevelClient.java:930)
JAdout▒蘰▒▒▒6   at com.octoperf.fz.lambda$search$d3a35df6$1(fz.java)
A3dout▒혰▒▒▒-   at io.vavr.control.Try.mapTry(Try.java:634)
35dout▒▒▒▒▒▒▒   at com.octoperf.fz.a(fz.java)
55dout▒▒▒▒▒▒▒!  at com.octoperf.a6o.a(a6o.java)
55dout▒▒▒▒▒▒▒!  at com.octoperf.at9.a(at9.java)
55dout▒▒▒▒▒▒▒!  at com.octoperf.at9.a(at9.java)
57dout▒▒▒▒▒▒▒!  at com.octoperf.a4z.d(a4z.java)
7▒dout▒▒▒▒▒▒▒#  at com.octoperf.a4o.run(a4o.java)
▒ydout㪙▒▒▒▒x   at org.springframework.scheduling.support.DelegatingErrorHandlingRunnable.run(DelegatingErrorHandlingRunnable.java:54)
yjdoutŰ▒▒▒▒▒e   at org.springframework.scheduling.concurrent.ReschedulingRunnable.run(ReschedulingRunnable.java:93)
j[dout▒▒▒▒▒▒▒V  at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
[▒dout▒▒▒▒▒▒▒G  at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
▒rdout▒▒▒▒▒▒▒}  at java.base/java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:304)
rrdout▒ř▒▒▒▒^   at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
rHdout▒Ι▒▒▒▒^   at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
H▒dout▒ԙ▒▒▒▒4   at java.base/java.lang.Thread.run(Thread.java:834)
stdout▒ٙ▒▒▒▒▒    Suppressed: org.elasticsearch.client.ResponseException: method [POST], host [http://10.124.4.51:9200], URI [/octoperf_user/_search?typed_keys=true&ignore_unavailable=false&expand_wildcards=open&allow_no_indices=true&ignore_throttled=true&scroll=30000ms&search_type=query_then_fetch&batched_reduce_size=512&ccs_minimize_roundtrip▒▒true], status line [HTTP/1.1 404 Not Found]
stdout▒䙰▒▒▒▒{"error":{"root_cause":[{"type":"index_not_found_exception","reason":"no such index [octoperf_user]","resource.type":"index_or_alias","resource.id":"octoperf_user","index_uuid":"_na_","index":"octoperf_user"}],"type":"index_not_found_exception","reason":"no such index [octoperf_user]","resource.type":"index_or_alias","resource.id▒b"octoperf_user","index_uuid":"_na_","index":"octoperf_user"},"status":404}
badout▒꙰▒▒▒N             at org.elasticsearch.client.RestClient.convertResponse(RestClient.java:253)
aadout▒▒▒▒M            at org.elasticsearch.client.RestClient.performRequest(RestClient.java:231)
a|dout▒򙰃▒▒M             at org.elasticsearch.client.RestClient.performRequest(RestClient.java:205)
|"dout▒▒▒▒▒▒▒h          at org.elasticsearch.client.RestHighLevelClient.internalPerformRequest(RestHighLevelClient.java:1454)
"ddout▒▒▒▒▒▒▒           ... 19 more

Container 3 logs:

derr▒䰬覲▒KFATAL:  unsupported frontend protocol 1234.5680: server supports 1.0 to 3.0__
stderr೨▒즲▒KFATAL:  unsupported frontend protocol 1234.5680: server supports 1.0 to 3.0_3
stderr▒▒▒▒▒▒▒▒LOG:  incomplete startup packet33
stderr▒▒ؿ▒▒▒▒LOG:  incomplete startup packet33
stderr▒▒▒▒Ƨ▒▒LOG:  incomplete startup packet33
stderr▒▒▒짲▒LOG:  incomplete startup packet33
stderr▒ۢ▒▒▒▒▒LOG:  incomplete startup packet3_
stderr▒▒ӂ▒▒▒▒KFATAL:  unsupported frontend protocol 1234.5680: server supports 1.0 to 3.0_3
stderr߳▒▒▒▒▒▒LOG:  incomplete startup packet3_
stderrۼ▒▒▒▒▒▒KFATAL:  unsupported frontend protocol 1234.5680: server supports 1.0 to 3.0_3
stderrղ▒▒ۨ▒▒LOG:  incomplete startup packet33
stderr▒▒킁▒▒▒LOG:  incomplete startup packet33
stderr▒▒▒▒▒▒▒▒LOG:  incomplete startup packet33
stderrȀ▒▒˩▒▒LOG:  incomplete startup packet3_
stderr▒▒թ▒▒KFATAL:  unsupported frontend protocol 1234.5680: server supports 1.0 to 3.0_3
stderr▒▒▒▒▒▒LOG:  incomplete startup packet33
stderr▒Ą▒▒▒▒▒LOG:  incomplete startup packet33
stderr▒▒ɥ▒▒▒▒LOG:  incomplete startup packet33
stderrԀ▒▒લ▒LOG:  incomplete startup packet33
stderr▒▒▒慫▒▒LOG:  incomplete startup packet3_
stderr▒ڞԘ▒▒▒KFATAL:  unsupported frontend protocol 1234.5680: server supports 1.0 to 3.0_3
stderr▒ᕇ▒▒▒▒LOG:  incomplete startup packet33
stderr▒▒ЧЫ▒▒LOG:  incomplete startup packet33
stderr▒▒▒▒▒▒▒▒LOG:  incomplete startup packet33
stderrƴ▒蚬▒▒LOG:  incomplete startup packet33

Container 4 logs:

stdout▒▒▒▒▒▒▒▒▒{"type": "server", "timestamp": "2022-06-14T09:15:13,774Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "docker-cluster", "node.name": "md-octolgd701.dev1.mdgapp.net", "message": "loaded module [ingest-user-agent]" }
stdout¿▒▒▒▒▒▒▒{"type": "server", "timestamp": "2022-06-14T09:15:13,774Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "docker-cluster", "node.name": "md-octolgd701.dev1.mdgapp.net", "message": "loaded module [lang-expression]" }
stdout▒գ▒▒▒▒▒▒{"type": "server", "timestamp": "2022-06-14T09:15:13,774Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "docker-cluster", "node.name": "md-octolgd701.dev1.mdgapp.net", "message": "loaded module [lang-mustache]" }
stdout▒ڣ▒▒▒▒▒▒{"type": "server", "timestamp": "2022-06-14T09:15:13,775Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "docker-cluster", "node.name": "md-octolgd701.dev1.mdgapp.net", "message": "loaded module [lang-painless]" }
stdout▒䣤▒▒▒▒▒{"type": "server", "timestamp": "2022-06-14T09:15:13,775Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "docker-cluster", "node.name": "md-octolgd701.dev1.mdgapp.net", "message": "loaded module [mapper-extras]" }
stdout▒裤▒▒▒▒▒{"type": "server", "timestamp": "2022-06-14T09:15:13,775Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "docker-cluster", "node.name": "md-octolgd701.dev1.mdgapp.net", "message": "loaded module [parent-join]" }
stdout▒죤▒▒▒▒▒{"type": "server", "timestamp": "2022-06-14T09:15:13,775Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "docker-cluster", "node.name": "md-octolgd701.dev1.mdgapp.net", "message": "loaded module [percolator]" }
stdout▒▒▒▒▒▒{"type": "server", "timestamp": "2022-06-14T09:15:13,775Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "docker-cluster", "node.name"▒ "md-octolgd701.dev1.mdgapp.net", "message": "loaded module [rank-eval]" }
stdout▒򣤣▒▒▒▒{"type": "server", "timestamp": "2022-06-14T09:15:13,775Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "docker-cluster", "node.name":▒"md-octolgd701.dev1.mdgapp.net", "message": "loaded module [reindex]" }
stdout▒▒▒▒▒▒▒▒▒{"type": "server", "timestamp": "2022-06-14T09:15:13,775Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "docker-cluster", "node.name": "md-octolgd701.dev1.mdgapp.net", "message": "loaded module [repository-url]" }
stdout▒▒▒▒▒▒▒▒▒{"type": "server", "timestamp": "2022-06-14T09:15:13,775Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "docker-cluster", "node.nam▒": "md-octolgd701.dev1.mdgapp.net", "message": "loaded module [transport-netty4]" }
stdout▒▒̪▒▒▒▒▒{"type": "server", "timestamp": "2022-06-14T09:15:13,785Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "docker-cluster", "node.name"▒."md-octolgd701.dev1.mdgapp.net", "message": "no plugins loaded" }
stdoutӫƔ▒▒▒▒▒{"type": "server", "timestamp": "2022-06-14T09:15:21,257Z", "level": "INFO", "component": "o.e.d.DiscoveryModule", "cluster.name": "docker-cluster", "node.name.▒ "md-octolgd701.dev1.mdgapp.net", "message": "using discovery type [single-node] and seed hosts providers [settings]" }
stdout▒亯½▒▒▒{"type": "server", "timestamp": "2022-06-14T09:15:22,116Z", "level": "INFO", "component": "o.e.n.Node", "cluster.name": "docker-cluster", "node.name": "md-octo▒▒d701.dev1.mdgapp.net", "message": "initialized" }
stdout▒▒▒▒½▒▒▒{"type": "server", "timestamp": "2022-06-14T09:15:22,116Z", "level": "INFO", "component": "o.e.n.Node", "cluster.name": "docker-cluster", "node.name": "md-oct▒(gd701.dev1.mdgapp.net", "message": "starting ..." }
stdout▒ո▒ý▒▒▒{"type": "server", "timestamp": "2022-06-14T09:15:22,505Z", "level": "INFO", "component": "o.e.t.TransportService", "cluster.name": "docker-cluster", "node.nam(C: "md-octolgd701.dev1.mdgapp.net", "message": "publish_address {10.124.4.51:9300}, bound_addresses {[::]:9300}" }
stdout▒▒▒▒ý▒▒▒{"type": "server", "timestamp": "2022-06-14T09:15:22,533Z", "level": "WARN", "component": "o.e.b.BootstrapChecks", "cluster.name": "docker-cluster", "node.namC": "md-octolgd701.dev1.mdgapp.net", "message": "max virtual memory areas vm.max_map_count [65530] is too low, increase to at least [262144]" }

stdout▒▒▒▒ý▒▒▒{"type": "server", "timestamp": "2022-06-14T09:15:22,538Z", "level": "INFO", "component": "o.e.c.c.Coordinator", "cluster.name": "docker-cluster", "node.name": "md-octolgd701.dev1.mdgapp.net", "message": "cluster UUID [-fMt4ofDTWeqAVJkz8h5Vg]" }

Thanks for the quick reply!

Those logs are not in the correct format. A quick question: how did you get those logs? Did you fetch them directly from the same log file Filebeat is reading, or did you use some command/tool to print them in your terminal?

Anyway, none of those logs are compatible with the CRI format. To give you an example, this is what Filebeat expects as a CRI log:

2017-09-12T22:32:21.212861448Z stdout F 2017-09-12 22:32:21.212 [INFO][88] table.go 710: Invalidating dataplane cache

This comes from one of our test files.

I did some quick research but did not find a formal specification of the CRI format; however, a few different places all reference the Kubernetes code that parses it: kubernetes/logs.go at master · kubernetes/kubernetes · GitHub, where you can also find a couple more examples:

2016-10-06T00:17:09.669794202Z stdout P log content 1
2016-10-06T00:17:09.669794203Z stderr F log content 2

To me it looks like your log examples contain binary data at the beginning of each line instead of a timestamp printed in ASCII characters.
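To make the expected format concrete, here is a minimal sketch (in Python, not Filebeat's actual Go implementation) of the parsing that the Kubernetes/Filebeat CRI readers perform: a CRI line is four space-separated parts, an RFC3339 timestamp, the stream name, a partial/full tag, and the message. A line with a binary prefix like the samples above fails at the very first step, because the leading bytes are not an ASCII timestamp.

```python
from datetime import datetime, timezone


def parse_rfc3339_nano(ts: str) -> datetime:
    """Parse an RFC3339 UTC timestamp with up to nanosecond precision."""
    if not ts.endswith("Z"):
        raise ValueError("invalid CRI timestamp: " + repr(ts))
    base, _, frac = ts[:-1].partition(".")
    micro = (frac + "000000")[:6]  # truncate nanoseconds to microseconds
    return datetime.strptime(base + "." + micro,
                             "%Y-%m-%dT%H:%M:%S.%f").replace(tzinfo=timezone.utc)


def parse_cri_line(line: str) -> dict:
    """Split a CRI log line: '<timestamp> <stream> <P|F> <message>'.

    Simplified sketch of the format parsed by kubernetes/logs.go;
    not Filebeat's actual code.
    """
    parts = line.split(" ", 3)
    if (len(parts) != 4
            or parts[1] not in ("stdout", "stderr")
            or parts[2] not in ("P", "F")):
        raise ValueError("invalid CRI log format")
    return {
        "timestamp": parse_rfc3339_nano(parts[0]),
        "stream": parts[1],
        "partial": parts[2] == "P",  # P = partial line, F = full line
        "message": parts[3],
    }


# A valid CRI line parses cleanly:
ok = parse_cri_line("2016-10-06T00:17:09.669794202Z stdout P log content 1")

# A line starting with binary data (like the samples above) raises
# immediately, mirroring the "invalid CRI log format" error.
```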

Yes, I did fetch them directly from the same log file Filebeat is reading. One thing I'd like to check: is there any way to make Filebeat read these different log formats?

There are a few different ways to process/transform the data Filebeat is ingesting before it's available in Elasticsearch/Kibana:

  1. Filebeat processors
    There are a number of processors that can manipulate the ingested data: Define processors | Filebeat Reference [8.2] | Elastic. Among those, the script processor can be quite powerful because it allows you to write some JavaScript code to transform the ingested data: Script Processor | Filebeat Reference [8.2] | Elastic

  2. Ingest pipelines on Elasticsearch
    Documentation: Ingest pipelines | Elasticsearch Guide [master] | Elastic
    Most (if not all) of Filebeat's modules use ingest pipelines to transform the data shipped by Filebeat into ECS. You can see an example from Nginx access logs here: beats/pipeline.yml at main · elastic/beats · GitHub.
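As a hypothetical sketch of option 1 (the field manipulation here is invented for illustration, not derived from your logs), a script processor entry in filebeat.yml looks roughly like this:

```yaml
processors:
  - script:
      lang: javascript
      source: >
        function process(event) {
          // Hypothetical example: strip a known "stdout" prefix
          // from the message field if it is present.
          var msg = event.Get("message");
          if (msg && msg.indexOf("stdout") === 0) {
            event.Put("message", msg.substring(6));
          }
        }
```

Note that a script processor can only rewrite events Filebeat has already read as lines; it cannot fix binary framing in the source file itself, so it may not be enough here.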

You could also try to have your logs in JSON format instead of CRI and see if they can be parsed.