Hi,
I'm trying to ingest logs with Filebeat on our Kubernetes cluster, which runs the Elastic Stack via ECK. We are migrating our applications to JSON-formatted logs, but many legacy and third-party containers still emit plain-text logs, so we need to support both formats in the long run.
I came across another topic covering this exact challenge, but I couldn't get the suggested solution to work:
I'm pretty new to Filebeat and the Elastic Stack as a whole, so I may be missing some nuances in the configuration. The topic suggests using "include_lines"/"exclude_lines" to split JSON and non-JSON logs.
Filebeat config:
config:
  filebeat.inputs:
    - include_lines:
        - '^{'
      json.add_error_key: 'true'
      json.expand_keys: 'true'
      json.keys_under_root: 'true'
      json.overwrite_keys: 'true'
      paths:
        - /var/log/containers/*.log
      type: container
    - exclude_lines:
        - '^{'
      paths:
        - /var/log/containers/*.log
      type: container
This config throws an error saying that I need to set a "message_key" when using exclude_lines/include_lines together with the JSON options.
Based on my understanding I set it to "log", since the Docker container log files are JSON themselves and the "log" field contains the application's log string.
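For reference, the first input looks roughly like this after adding the key (a sketch; everything else in the config is unchanged):

    - include_lines:
        - '^{'
      json.add_error_key: 'true'
      json.expand_keys: 'true'
      json.keys_under_root: 'true'
      json.message_key: 'log'
      json.overwrite_keys: 'true'
      paths:
        - /var/log/containers/*.log
      type: container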
The result is that I don't see any of my JSON logs in Elasticsearch, just a bunch of JSON parse errors:
ERROR [reader_json] readjson/json.go:74 Error decoding JSON: json: cannot unmarshal number into Go value of type map[string]interface {}
If I set json.add_error_key: 'false' instead, I can see the non-JSON logs, but still not my JSON logs.
So in short:
- exclude_lines / include_lines don't seem to work the way they should.
- I don't really understand why the "message_key" is needed to exclude lines. Isn't type: container already unwrapping the "log" field? If so, why do I need it at all? (See the sample raw log line below for what I mean.)
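For context, my understanding is that a raw line in /var/log/containers/*.log written by Docker's json-file log driver looks something like this (a simplified, made-up example), with the application's own JSON nested inside the "log" field:

    {"log":"{\"level\":\"info\",\"msg\":\"request served\"}\n","stream":"stdout","time":"2021-01-01T00:00:00.000000000Z"}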