Hi, as you probably know this is quite a new feature, and we are continually adding to it to improve the user experience. Thank you for your feedback; it helps us shape future features.
In the case of NGINX, it sends access logs to stdout and error logs to stderr. I recently implemented a way to filter on the stream, so you can route the correct output to each fileset of the module. This is how it will look in Filebeat:
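Roughly like this (a sketch of the upcoming setup; it assumes the docker prospector's containers.stream option and the per-fileset prospector override, so treat it as illustrative rather than final):

```yaml
filebeat.autodiscover:
  providers:
    - type: docker
      templates:
        - condition:
            contains:
              docker.container.image: nginx
          config:
            - module: nginx
              access:
                prospector:
                  type: docker
                  containers.ids:
                    - "${data.docker.container.id}"
                  containers.stream: stdout   # access logs go to stdout
              error:
                prospector:
                  type: docker
                  containers.ids:
                    - "${data.docker.container.id}"
                  containers.stream: stderr   # error logs go to stderr
```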
It was not working otherwise... I had to put the path of the exact container (see my config).
Only the 'access' fileset is working. If I put both (like in your config), all messages end up as errors in Elasticsearch (because there is actually only one log file in the container for both streams).
Kubernetes metadata fields are ignored in the condition part (when launched by Kubernetes).
What about my own app's logs (not nginx, apache, etc.)? How do I define those?
I see now why it wasn't working for you; there is a typo: container.ids -> containers.ids.
It depends on how nginx is configured; it must output the error log to stderr. Also, if you see parsing errors, it's because the stream filter is not working yet (unreleased).
You are using the docker autodiscover provider; we plan to release a kubernetes autodiscover provider soon. It will give you access to Kubernetes metadata.
You can add more conditions for them; the templates setting is a list, so you can put in as many as you need.
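For example, a template for your own application containers could sit next to the nginx one (a sketch; the my-app image name is a placeholder):

```yaml
filebeat.autodiscover:
  providers:
    - type: docker
      templates:
        # NGINX containers keep using the nginx module, as in the earlier example
        - condition:
            contains:
              docker.container.image: nginx
          config:
            - module: nginx
        # Your own application containers: collect their docker logs as-is
        - condition:
            contains:
              docker.container.image: my-app   # placeholder image name
          config:
            - type: docker
              containers.ids:
                - "${data.docker.container.id}"
```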
Hi,
I am also trying to do something similar to what Asher is doing. I currently have a prospector that adds Docker metadata to my logs and ships them to Logstash. It looks like this:
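Roughly like this (the log path and the Logstash host below are placeholders):

```yaml
filebeat.prospectors:
  - type: log
    paths:
      - /var/lib/docker/containers/*/*.log   # placeholder path to the docker JSON logs
    json.message_key: log
    json.keys_under_root: true

processors:
  - add_docker_metadata: ~

output.logstash:
  hosts: ["logstash:5044"]   # placeholder Logstash host
```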
What I want to understand is how Autodiscover works. Should I replace the prospector with the Autodiscover settings, or does Autodiscover apply to what my prospectors are generating?
When you use filebeat.prospectors, you define a static configuration; it stays the same while Filebeat is running.
Autodiscover allows you to define conditions and launch different configurations live, based on Autodiscover events from the provider (Docker).
If that configuration works for you, you don't need to use autodiscover. If, on the other hand, you want to apply different patterns depending on the container, you may want to define autodiscover rules for that.
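For instance, one template could join stack traces for a Java app while other containers stay line-by-line (a sketch; my-java-app is a placeholder image name):

```yaml
filebeat.autodiscover:
  providers:
    - type: docker
      templates:
        # Java containers: join indented stack trace lines into single events
        - condition:
            contains:
              docker.container.image: my-java-app   # placeholder image name
          config:
            - type: docker
              containers.ids:
                - "${data.docker.container.id}"
              multiline.pattern: '^\s'
              multiline.negate: false
              multiline.match: after
        # Any other container: one event per log line
        - condition:
            not:
              contains:
                docker.container.image: my-java-app
          config:
            - type: docker
              containers.ids:
                - "${data.docker.container.id}"
```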
Hi,
I meant, log lines were transferred to Elasticsearch but not parsed.
If I leave only 'access', then it's OK (but only those lines).
In the nginx container they make a symlink from the error log to stderr and from the access log to stdout. In terms of Docker, you see only one file, <container-id>-json.log, in /var/lib/docker/containers/<container-id>/.
Asher