I tried the config settings for autodiscover described here:
I confirmed that the pod whose logs I want to capture does produce output when I run `kubectl logs [podName]`.
I also confirmed the existence of the logs (flat files) by executing `kubectl exec`.
However, the logs do not appear to be shipped to Elasticsearch. When I try to create an index pattern, nothing matches in Kibana Stack Management, nor do any results appear in Kibana Discover under the `filebeat*` index pattern.
Here are the relevant sections of the filebeat-kubernetes.yaml manifest file:
We actually have 6 nodes running in our Rancher/K8s setup. Would I enter all 6 node names into the NODE_NAME config in the manifest file if we needed to monitor all of them?
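For reference, the stock filebeat-kubernetes.yaml runs Filebeat as a DaemonSet and fills NODE_NAME for each pod from the Kubernetes Downward API, so the node names would not normally be listed by hand. A minimal sketch of that wiring:

```yaml
# Sketch of the usual DaemonSet env wiring: each Filebeat pod
# automatically receives the name of the node it is scheduled on.
env:
  - name: NODE_NAME
    valueFrom:
      fieldRef:
        fieldPath: spec.nodeName
```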
You should look at this cool solution I just showed another user. Or you will need to set up all your custom templates, naming, etc., which is fine, but it is a lot of work.
In our pre-Kubernetes/Rancher days, we collected logs from 6 different sources, so we had 6 input entries in our old filebeat.yml file.
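Roughly like this sketch, with hypothetical paths and names, just to show the shape of it:

```yaml
# Sketch of a classic multi-input filebeat.yml (paths and app names are hypothetical).
filebeat.inputs:
  - type: log
    paths:
      - /var/log/app-one/*.log
    fields:
      app: app-one
  - type: log
    paths:
      - /var/log/app-two/*.log
    fields:
      app: app-two
```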
What you advise for multiple input configuration looks similar enough, yet cleaner and more maintainable. I'll look into implementing this in our filebeat-kubernetes.yaml.
This resulted in no logs being collected and no index pattern matching "logs-*" in Kibana.
I then tried hardcoding the `index` property under `output.elasticsearch`, but that did not work either.
I then commented out the `index` line and was once again able to see logs in Kibana under the "filebeat-*" index pattern.
Do I need to put the `setup.template` lines back into my filebeat-kubernetes.yaml? That's the only thing I can think of that is breaking log collection from Kubernetes.
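For reference, a custom index under output.elasticsearch is only applied when setup.template.name and setup.template.pattern are also set, and on 7.x the default ILM policy overrides a custom index unless it is disabled. A rough sketch of the settings that usually go together, with hypothetical names:

```yaml
# Sketch: a custom index normally needs the matching template settings
# (the "mycustomname" values are hypothetical).
output.elasticsearch:
  hosts: ["elasticsearch:9200"]
  index: "mycustomname-%{[agent.version]}-%{+yyyy.MM.dd}"

setup.template.name: "mycustomname"
setup.template.pattern: "mycustomname-*"

# On 7.x, ILM is enabled by default and takes precedence over a custom index.
setup.ilm.enabled: false
```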
I would like to configure a separate Filebeat input per container so that we can have a separate dashboard in Kibana for each container.
What is the correct way to set up one input per container?
I also tried adding a line to the fields section instead of the processors section, like this, but log collection still did not resume.
Do we need to switch to the autodiscover configuration? All the examples of `kubernetes.container.name` being used in a manifest file only show this property under autodiscover.
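For what it's worth, those autodiscover examples typically use a templates block with a condition on kubernetes.container.name, roughly like this sketch (the container name is hypothetical):

```yaml
# Sketch of autodiscover with a per-container condition
# ("my-app" is a hypothetical container name).
filebeat.autodiscover:
  providers:
    - type: kubernetes
      node: ${NODE_NAME}
      templates:
        - condition:
            equals:
              kubernetes.container.name: "my-app"
          config:
            - type: container
              paths:
                - /var/log/containers/*${data.kubernetes.container.id}.log
```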
I literally just now figured out how to get log collection going again. This turned out to be the way: do the matching for the desired log source in paths, not in fields.
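As a rough sketch of that approach, with a hypothetical container name in the path glob, it looks something like this:

```yaml
# Sketch of path-based matching with the container input
# (the "my-app" name in the glob and field is hypothetical).
filebeat.inputs:
  - type: container
    paths:
      - /var/log/containers/*my-app*.log
    fields:
      app: my-app
    fields_under_root: true
```

One input like this per container then gives each container its own field to filter a separate Kibana dashboard on.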