Is there a way to read extra config during ingest of logs?

So I was wondering if there's a way that, when Filebeat reads a log file, it could also pick up an extra config in that "folder" that tells it how to ingest the files?

Currently the config is global in Filebeat...

But I was thinking I could drop a properties file in the path where the log files are, and Filebeat could maybe read those properties and, for instance, tag the log for a monthly or yearly index.

Currently, I'm using DC/OS and loosely following the 12-factor app methodology. My applications log to stdout and stderr, and those are managed by DC/OS.

Filebeat is configured to read the path where DC/OS puts the logs for all containerized apps.
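A rough sketch of what that input looks like (the sandbox paths here are just placeholders for wherever DC/OS actually writes task stdout/stderr on the agents):

```yaml
filebeat.inputs:        # "filebeat.prospectors" on older Filebeat versions
  - type: log
    paths:
      # Placeholder Mesos sandbox paths; adjust to the real DC/OS task log location
      - /var/lib/mesos/slave/slaves/*/frameworks/*/executors/*/runs/latest/stdout*
      - /var/lib/mesos/slave/slaves/*/frameworks/*/executors/*/runs/latest/stderr*
```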

The big plus to this is that when I deploy a Marathon application, its logs automatically get shipped to Kafka, and then I use various consumer technologies to read the log topic.
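The output side is just the stock Kafka output, something along these lines (broker addresses and topic name are placeholders):

```yaml
output.kafka:
  # Placeholder brokers and topic; the consumers read from this topic downstream
  hosts: ["kafka-1:9092", "kafka-2:9092"]
  topic: "app-logs"
```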

The problem is that some applications generate a lot more logs than others. So it would be nice, at ingest time and without having to reconfigure Filebeat every time, to somehow predetermine the index or tag the log so the consumer side can make a decision when inserting into Elasticsearch.

So when I deploy my Marathon app, I could maybe copy a little properties file somewhere.

I know I can do Filebeat sidecars, but I think that's too resource intensive, i.e. every web service would need its own Filebeat instance running...

I guess the other way would be to do a lookup in Logstash or in the consumer to determine how to index the log, but that requires an external system which could potentially be down...

Hey!

Since you have containerized applications, maybe Autodiscover could fit your needs.
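With the Docker provider you can match on container metadata and attach per-application settings, roughly like this (the image name and the extra field are only examples):

```yaml
filebeat.autodiscover:
  providers:
    - type: docker
      templates:
        - condition:
            contains:
              docker.container.image: "my-web-service"   # example image name
          config:
            - type: docker
              containers.ids:
                - "${data.docker.container.id}"
              fields:
                index_period: monthly   # example field a consumer could act on
```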

I was thinking, how about an ingest pipeline? Maybe I can configure an ingest pipeline that catches the log, runs some "filter" rules, and then inserts it?
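Something like this is what I have in mind, a pipeline that just stamps a field the consumer side can look at (the name and processor are only an example):

```
PUT _ingest/pipeline/app-logs
{
  "description": "Example pipeline that tags incoming log events",
  "processors": [
    {
      "set": {
        "field": "index_period",
        "value": "monthly"
      }
    }
  ]
}
```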

So Filebeat pushes all logs to a Kafka topic, and then Logstash reads the Kafka topic, does some extra parsing, and inserts into the data nodes.

Can ingest pipelines work directly on a data node, or do I need to specifically push to an ingest node?

Ingest pipelines run on any node with the ingest role, which is enabled by default, so a data node can usually execute them too. If you can solve it with just pipelines, then that's fine.

Filebeat can load pipelines into Elasticsearch, so it can work without having Logstash in the middle.
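If Filebeat writes straight to Elasticsearch, you can point the output at a pipeline, roughly like this (host and pipeline name are placeholders):

```yaml
output.elasticsearch:
  hosts: ["http://es-node:9200"]   # placeholder host
  pipeline: "app-logs"             # ingest pipeline to apply to incoming events
```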
