I was wondering: when Filebeat reads a log file, is there a way to have an extra config file in that "folder" that tells Filebeat how to ingest the files?
Currently the config is global in Filebeat...
But I was thinking I could drop a properties file in the path where the log files are, and Filebeat could read those properties and, for instance, tag the log so it ends up in a monthly or maybe yearly index.
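Something like this hypothetical file is what I have in mind (the filename and keys are made up just to illustrate the idea — as far as I know Filebeat doesn't support this out of the box):

```properties
# hypothetical ingest.properties dropped next to the log files
# (filename and keys are invented for illustration)
app=my-web-service
index_period=monthly
```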
Currently I'm using DC/OS and loosely following the 12-factor app methodology. My applications log to stdout and stderr, and those streams are managed by DC/OS.
Filebeat is configured to read the path where DC/OS puts the logs for all containerized apps.
The big plus is that when I deploy a Marathon application, its logs automatically get shipped to Kafka, and then I use various consumer technologies to read the log topic.
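My current setup is roughly this (the glob path and broker address are placeholders, not my real values):

```yaml
filebeat.prospectors:
- input_type: log
  paths:
    # where the Mesos agent writes stdout/stderr for every task (placeholder path)
    - /var/lib/mesos/slave/slaves/*/frameworks/*/executors/*/runs/latest/std*
output.kafka:
  hosts: ["kafka-broker:9092"]   # placeholder broker
  topic: "logs"                  # one topic for everything
```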
The problem is that some applications generate a lot more logs than others. So it would be nice, at ingest time and without having to reconfigure Filebeat every time, to somehow pre-determine the index, or to tag the log so the consumer side can make a decision when inserting into Elastic.
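The closest thing I've found in Filebeat itself is declaring multiple prospectors with per-prospector `fields`, but that still means editing the global config for every new app (the paths below are placeholders):

```yaml
filebeat.prospectors:
- input_type: log
  paths:
    - /var/log/apps/chatty-service/*.log   # placeholder path
  fields:
    index_period: daily      # custom field the consumer can key on
- input_type: log
  paths:
    - /var/log/apps/quiet-service/*.log    # placeholder path
  fields:
    index_period: monthly
```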
So when I deploy my Marathon app, I could maybe copy a little properties file somewhere.
I know I can run Filebeat as a sidecar, but I think that's too resource-intensive, i.e. every web service would need its own Filebeat instance running...
I guess the other option is to do a lookup in Logstash or in the consumer and determine how to index the log there, but that requires an external system which can potentially be down...
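For example, in Logstash the `translate` filter can map an app name to an index period from a local dictionary file, which at least avoids a live external lookup (the field names and dictionary path below are my assumptions):

```
filter {
  translate {
    # assumed field carrying the app name, e.g. set by Filebeat
    field           => "[fields][app]"
    destination     => "[@metadata][index_period]"
    dictionary_path => "/etc/logstash/app_index_map.yml"  # local file, no network call
    fallback        => "monthly"   # default when the app isn't in the map
  }
}
```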