Is there a complete(!) working example of a Filebeat configuration on Kubernetes where I only want the logs of a specific set of containers? Just dumping everything to ES is trivial, but that's not what I want or need.
I have various configs working - at first glance - but then something always turns out to be wrong, e.g. I notice that only one of the pods on the node matching the criteria is shipping logs, and so on.
@Elastic: The documentation for this use case (Hints, Autodiscover) is a disgrace. Sorry. I've been trying to get this up and running for several days now. Do yourselves a favor and test your documentation on an average developer who doesn't know ELK and see how far he/she gets.
@The rest of the world: is there a working example out there? Please....
Thanks for the reply. I'll try it, but @Elastic: is this really the way to go? Collect the logs of ALL containers, even those that don't interest us, and then drop them? Really? There must be a better way.
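For reference, the approach I mean looks roughly like this (only a sketch, not a complete manifest; the namespace name is a placeholder and NODE_NAME is assumed to be injected via the downward API):

filebeat.inputs:
  - type: container
    paths:
      - /var/log/containers/*.log
    processors:
      - add_kubernetes_metadata:
          host: ${NODE_NAME}
          matchers:
            - logs_path:
                logs_path: "/var/log/containers/"
      - drop_event:
          when:
            not:
              equals:
                kubernetes.namespace: "my-namespace"   # keep only this namespace, drop everything else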
@Rainer_Alfoldi This used to be the case, but as of the 7.2 release you can change the default behaviour so that nothing is scraped by default; only pods with co.elastic.logs/enabled set to true are picked up.
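If it helps, a minimal sketch of that setup (Filebeat 7.2+, paths as in the standard DaemonSet deployment; treat it as a starting point rather than a complete manifest):

filebeat.autodiscover:
  providers:
    - type: kubernetes
      hints.enabled: true
      # With the default config disabled, nothing is collected unless a pod opts in.
      hints.default_config.enabled: false
      hints.default_config:
        type: container
        paths:
          - /var/log/containers/*${data.kubernetes.container.id}.log

# In the pod template of each Deployment you actually want logs from:
metadata:
  annotations:
    co.elastic.logs/enabled: "true"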
I've given up on the Kubernetes way of logging. After being frustrated by a) the way Filebeat worked - or rather didn't - b) the fact that I was missing 1.2 million log entries out of 8 million, and c) the fact that ES was using a ridiculous amount of CPU for a minimal amount of logs, I started digging.
Logging to stdout and relying on Kubernetes / Docker to write the logs to /var/lib/docker/containers caused the log files to be rotated every 30 seconds:
{"log":"2019-06-29 14:08:30 +0000 [info]: #0 following tail of /var/log/containers/oi-tau-outgoing-85d669dc5c-s7mx9_default_oi-tau-outgoing-d6dbf3352aff1d2e1270f2f13cd10a29f1337ea5b545eb167b84406c6e7477bd.log\n","stream":"stdout","time":"2019-06-29T14:08:30.855603611Z"}
{"log":"2019-06-29 14:09:00 +0000 [info]: #0 detected rotation of /var/log/containers/oi-tau-outgoing-85d669dc5c-s7mx9_default_oi-tau-outgoing-d6dbf3352aff1d2e1270f2f13cd10a29f1337ea5b545eb167b84406c6e7477bd.log; waiting 5 seconds\n","stream":"stdout","time":"2019-06-29T14:09:00.672009074Z"}
{"log":"2019-06-29 14:09:00 +0000 [info]: #0 detected rotation of /var/log/containers/oi-tau-outgoing-85d669dc5c-s7mx9_default_oi-tau-outgoing-d6dbf3352aff1d2e1270f2f13cd10a29f1337ea5b545eb167b84406c6e7477bd.log; waiting 5 seconds\n","stream":"stdout","time":"2019-06-29T14:09:00.782732153Z"}
{"log":"2019-06-29 14:09:01 +0000 [info]: #0 detected rotation of /var/log/containers/oi-tau-outgoing-85d669dc5c-s7mx9_default_oi-tau-outgoing-d6dbf3352aff1d2e1270f2f13cd10a29f1337ea5b545eb167b84406c6e7477bd.log; waiting 5 seconds\n","stream":"stdout","time":"2019-06-29T14:09:01.643500629Z"}
{"log":"2019-06-29 14:09:01 +0000 [info]: #0 following tail of /var/log/containers/oi-tau-outgoing-85d669dc5c-s7mx9_default_oi-tau-outgoing-d6dbf3352aff1d2e1270f2f13cd10a29f1337ea5b545eb167b84406c6e7477bd.log\n","stream":"stdout","time":"2019-06-29T14:09:01.643734902Z"}
{"log":"2019-06-29 14:09:32 +0000 [info]: #0 detected rotation of /var/log/containers/oi-tau-outgoing-85d669dc5c-s7mx9_default_oi-tau-outgoing-d6dbf3352aff1d2e1270f2f13cd10a29f1337ea5b545eb167b84406c6e7477bd.log; waiting 5 seconds\n","stream":"stdout","time":"2019-06-29T14:09:32.326208521Z"}
{"log":"2019-06-29 14:09:32 +0000 [info]: #0 detected rotation of /var/log/containers/oi-tau-outgoing-85d669dc5c-s7mx9_default_oi-tau-outgoing-d6dbf3352aff1d2e1270f2f13cd10a29f1337ea5b545eb167b84406c6e7477bd.log; waiting 5 seconds\n","stream":"stdout","time":"2019-06-29T14:09:32.389130924Z"}
{"log":"2019-06-29 14:09:33 +0000 [info]: #0 detected rotation of /var/log/containers/oi-tau-outgoing-85d669dc5c-s7mx9_default_oi-tau-outgoing-d6dbf3352aff1d2e1270f2f13cd10a29f1337ea5b545eb167b84406c6e7477bd.log; waiting 5 seconds\n","stream":"stdout","time":"2019-06-29T14:09:33.2547072Z"}
{"log":"2019-06-29 14:09:33 +0000 [info]: #0 following tail of /var/log/containers/oi-tau-outgoing-85d669dc5c-s7mx9_default_oi-tau-outgoing-d6dbf3352aff1d2e1270f2f13cd10a29f1337ea5b545eb167b84406c6e7477bd.log\n","stream":"stdout","time":"2019-06-29T14:09:33.254760373Z"}
The simplest solution - at least for me - is to go back to basics. Every Deployment mounts a hostPath volume and logs into that directory, and Filebeat reads the same directory. Naming conflicts between namespaces are prevented by including the hostname in the log4j filename, and everything just works.
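In case it helps anyone, a rough sketch of that setup (all names and paths are placeholders for my actual values):

# Deployment fragment: the app writes its log4j file into a hostPath directory,
# with the node name pulled in via the downward API and used in the file name.
spec:
  template:
    spec:
      containers:
        - name: oi-tau-outgoing
          image: registry.example.com/oi-tau-outgoing:latest
          env:
            - name: NODE_NAME
              valueFrom:
                fieldRef:
                  fieldPath: spec.nodeName
          volumeMounts:
            - name: app-logs
              mountPath: /app/logs
      volumes:
        - name: app-logs
          hostPath:
            path: /var/log/app-logs
            type: DirectoryOrCreate

# Filebeat (DaemonSet) simply tails the same hostPath on every node:
filebeat.inputs:
  - type: log
    paths:
      - /var/log/app-logs/*.log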
@All: Thanks for the answers and greetings from Bern