Filebeat isn't collecting logs from short-lived containers such as cronjobs.
We are using Filebeat 7.9.3 with Kubernetes autodiscover.
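For context, our autodiscover setup looks roughly like this (a minimal sketch, not our exact config; the log path assumes the standard container log location on the node):

```yaml
# Sketch of a Kubernetes autodiscover setup (illustrative).
filebeat.autodiscover:
  providers:
    - type: kubernetes
      templates:
        - config:
            - type: container
              paths:
                # Standard location of container logs on the node
                - /var/log/containers/*${data.kubernetes.container.id}.log
```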
I've created a CronJob that prints just one line and then exits.
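For reproduction, a minimal sketch of such a CronJob (name, schedule, and image are just examples; batch/v1beta1 matches clusters from that era):

```yaml
# Minimal reproduction sketch (names/schedule are illustrative).
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: one-line-logger
spec:
  schedule: "*/1 * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
            - name: logger
              image: busybox
              # Print one line, then exit immediately
              command: ["sh", "-c", "echo 'hello from a short-lived pod'"]
          restartPolicy: Never
```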
My assumption:
It looks like Filebeat first receives a Kubernetes pending event (no action is taken when a pending event comes in, based on the source code and the logs), and then receives the PodSucceeded event (which emits a stop event, again based on the source code and the logs).
As a result, the registry is cleaned up before Filebeat has read the file, and the log entry never appears in Kibana.
If I pause the container for two seconds before it shuts down, the log does appear in Kibana.
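Concretely, the only change is appending a short sleep to the container command from the sketch above:

```yaml
# Same container as above, but pausing briefly before exit so
# Filebeat has time to pick the file up (workaround sketch).
command: ["sh", "-c", "echo 'hello from a short-lived pod' && sleep 2"]
```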
Do you have any workaround for this kind of issue?
Thank you, Marcin.
I won't use autodiscover any longer and will instead try Filebeat inputs with the add_kubernetes_metadata processor, as sketched below.
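Roughly, the configuration I'm switching to looks like this (a sketch; the matcher settings follow the documented defaults for container logs, and NODE_NAME is assumed to be injected via the downward API):

```yaml
# Sketch of the workaround: a plain container input instead of
# autodiscover, with add_kubernetes_metadata enriching each event.
filebeat.inputs:
  - type: container
    paths:
      - /var/log/containers/*.log

processors:
  - add_kubernetes_metadata:
      # NODE_NAME is assumed to be set via the Kubernetes downward API
      host: ${NODE_NAME}
      matchers:
        - logs_path:
            logs_path: "/var/log/containers/"
```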
So far this looks like a workable solution.
I also don't think the right fix is to add a sleep command at the end.
I will update this thread in a few days with feedback on whether the workaround is working properly.
I'm getting all logs now. What still happens is that add_kubernetes_metadata is sometimes unable to add the Kubernetes metadata, so our developers have to add the application name to their logs.
When logs don't appear, it can also be caused by a mapping conflict or something similar.
In Kibana I found mapping errors with the following query: kubernetes.namespace: "filebeat" and log.level: "warn"
In my environment it still doesn't work. I am using autodiscover, and I think dropping autodiscover is a bad idea because it otherwise works well.
I noticed that CronJob output is sometimes logged to Elasticsearch, but not for all pods of the CronJob.