I want to use Filebeat (current version) to collect logs from our Kubernetes cluster, following this manual: Run Filebeat on Kubernetes | Filebeat Reference [8.6] | Elastic
I want to control whether the message of a container is parsed as JSON or not, for two reasons:
1.) Not every Pod in our cluster logs in JSON
2.) I want to make sure that only ECS-compliant JSON is parsed
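Ideally, I would opt individual Pods in to JSON parsing via hints annotations, roughly like this (a sketch only; the Pod name and image are placeholders, and I took the co.elastic.logs/json.* keys from the hints-based autodiscover docs, so they may need adjusting):

apiVersion: v1
kind: Pod
metadata:
  name: ecs-json-logger                 # placeholder name
  annotations:
    # only this Pod's logs should go through the JSON parser
    co.elastic.logs/json.add_error_key: "true"
    co.elastic.logs/json.expand_keys: "true"
spec:
  containers:
    - name: app
      image: example/app:latest         # placeholder image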
My preferred solution is to configure Filebeat to send its output to a remote Logstash elastic_agent input (one reason being that I can easily manipulate events in Logstash if necessary; a sketch of the output side follows the config below). Sending the events to a Logstash instance works without a problem, BUT if I use the configuration below in Filebeat to gather the log files on the Kubernetes node, the message field is automatically parsed (NOT using annotations):
filebeat.autodiscover:
  providers:
    - type: kubernetes
      node: ${NODE_NAME}
      hints.enabled: true
      hints.default_config:
        type: container
        paths:
          - /var/log/containers/*${data.kubernetes.container.id}.log
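For reference, the output side of the Filebeat configuration that ships these events to the Logstash elastic_agent input looks roughly like this (a minimal sketch; host and port are placeholders for our environment):

output.logstash:
  # remote Logstash instance running the elastic_agent input on this port
  hosts: ["logstash.example.internal:5044"]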
If I remove this part:
${data.kubernetes.container.id}
so that the input path is just:
- /var/log/containers/*.log
The message is not being parsed anymore, but Filebeat seems to mix up the messages one container emits with information from other containers running in the cluster. So overall it is not usable, and I wonder what I am doing wrong.
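For completeness, that second variant of the hints.default_config (identical to the one above, only the path changed):

      hints.default_config:
        type: container
        paths:
          - /var/log/containers/*.log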
In short: to me it looks like adding ${data.kubernetes.container.id} to the path automatically activates the JSON parser.
Additionally, it is strange that error.message and error.type fields are added with the following values when the message field is automatically parsed: