We collect pod logs on Azure Kubernetes Service (AKS) from the worker nodes.
When a pod restarts and is rescheduled onto a different worker node, log collection for it does not resume.
Is there a good workaround?
Hi!
Can you please provide more information about your case?
I don't understand how the Pod's restart is related to node removal. What exactly do you mean by "moves the worker node"?
C.
Hi!
I'm using Filebeat 6.8.
Three worker nodes are running in an Azure Kubernetes Service (AKS) cluster, with one Filebeat instance per worker node.
The pod whose logs I want to collect went down and was restarted on another worker node.
After that, collection of its log files stopped.
When I checked inside the Filebeat pod on the new node, the restarted pod's log file was not there.
It seems the logs are not collected because the file is not mounted.
Is there a setting that keeps collecting logs even when the pod is restarted on a different worker node?
I set `file_identity`, but it didn't help.
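For reference, a minimal sketch of what a Filebeat 6.8 configuration for container logs typically looks like (the `docker` input and `add_kubernetes_metadata` processor follow the standard 6.x Kubernetes example; the output host is an assumption, and the container path is the Docker default, which may differ on your nodes). Also note that, if I remember correctly, the `file_identity` setting was only introduced in the 7.x series, so Filebeat 6.8 would simply ignore it:

```yaml
# filebeat.yml -- illustrative sketch based on the standard 6.x Kubernetes example
filebeat.inputs:
  - type: docker
    containers.ids:
      - "*"                       # tail every container running on this node

processors:
  - add_kubernetes_metadata:      # enrich events with pod/namespace metadata
      in_cluster: true

output.elasticsearch:
  hosts: ["elasticsearch:9200"]   # assumed output; replace with your cluster
```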
Hey!
How do you deploy Filebeat on the k8s nodes? Could you share your k8s manifests?
You should figure out why the log file is not mounted inside Filebeat's Pod. That is the first thing we need to resolve.
C.
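For comparison, the usual pattern is to run Filebeat as a DaemonSet that mounts the host's container log directory via a hostPath volume. That way there is one Filebeat per node, and whichever node a pod is rescheduled onto, the local Filebeat can see its log files. A trimmed sketch (image tag and paths are assumptions based on the standard 6.8 example manifest, not your actual deployment):

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: filebeat
  namespace: kube-system
spec:
  selector:
    matchLabels:
      app: filebeat
  template:
    metadata:
      labels:
        app: filebeat
    spec:
      containers:
        - name: filebeat
          image: docker.elastic.co/beats/filebeat:6.8.0
          args: ["-c", "/etc/filebeat.yml", "-e"]
          volumeMounts:
            # Without this hostPath mount, Filebeat cannot see any
            # container log files -- the symptom described above.
            - name: varlibdockercontainers
              mountPath: /var/lib/docker/containers
              readOnly: true
      volumes:
        - name: varlibdockercontainers
          hostPath:
            path: /var/lib/docker/containers
```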
This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.