Hello
Could you please help us with this issue: we found that some of our pod logs are missing Kubernetes metadata.
We are running Kubernetes 1.24 with Karpenter provisioning the nodes, and Filebeat 8.5.1.
When analyzing the logs, we found that each time a new node is scaled up, the first logs shipped by the Filebeat DaemonSet on that node carry no Kubernetes metadata; Filebeat takes roughly two minutes before it starts adding the kubernetes.* fields. This is a problem for us because some of our cron jobs run for less time than Filebeat needs to fetch the metadata, so their logs are never enriched. This behavior is impacting our monitoring and log analysis.
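A minimal, hypothetical example of the kind of short-lived job affected (the name, schedule, and image are invented for illustration): the pod logs one line and exits well before the ~2 min window, so its output arrives without kubernetes.* fields.

    apiVersion: batch/v1
    kind: CronJob
    metadata:
      name: metadata-repro          # hypothetical name, for illustration only
    spec:
      schedule: "*/5 * * * *"       # frequent enough to land on freshly scaled nodes
      jobTemplate:
        spec:
          template:
            spec:
              restartPolicy: Never
              containers:
                - name: echo
                  image: busybox:1.36
                  # Logs a single JSON line and exits immediately,
                  # i.e. long before the metadata is available.
                  command: ["sh", "-c", "echo '{\"msg\":\"hello\"}'"]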
And this is not the only odd case: we also see individual events that are missing the Kubernetes fields while the preceding and following events have them all, even though all of these events are processed by the same Filebeat agent and come from the same pod. How can we explain that?
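A hypothetical illustration of that second pattern, with invented values, showing three consecutive events from the same pod:

    {"@timestamp":"2023-01-01T00:00:00Z","message":"step 1","kubernetes":{"namespace":"jobs","pod":{"name":"report-abcde"}}}
    {"@timestamp":"2023-01-01T00:00:01Z","message":"step 2"}
    {"@timestamp":"2023-01-01T00:00:02Z","message":"step 3","kubernetes":{"namespace":"jobs","pod":{"name":"report-abcde"}}}

The middle event has no kubernetes.* fields at all, while its neighbors do.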
The snippet below is our filebeat.yml config (from our Helm chart values):
filebeatConfig:
  filebeat.yml: |
    filebeat.inputs:
      - type: container
        paths:
          - /var/log/containers/*.log
        json:
          keys_under_root: true
          expand_keys: true
          ignore_decoding_error: true
          overwrite_keys: true
    processors:
      - add_kubernetes_metadata:
          host: ${NODE_NAME}
          matchers:
            - logs_path:
                logs_path: "/var/log/containers/"
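For completeness, ${NODE_NAME} is injected into the DaemonSet pods via the downward API. A minimal sketch of that wiring, assuming the standard Filebeat DaemonSet env setup:

    env:
      - name: NODE_NAME
        valueFrom:
          fieldRef:
            # the node the pod is scheduled on; consumed as ${NODE_NAME} in filebeat.yml
            fieldPath: spec.nodeName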