Several configuration files for Filebeat processors?

Hello, I'm using Filebeat to parse multiple log files and send them directly to Elasticsearch.

I use processors to parse those files, and I would like to separate those processors by file.
I first tried to create modules, but the only way to parse logs that I found in modules is ingest node pipelines.
Since I need to stay compatible with Elasticsearch 2, I can't use ingest nodes.

Is there a way to split the Filebeat configuration into several files, or to use processors in modules?


After reading the documentation more carefully, it turns out Filebeat modules require Elasticsearch 5.2 or later.

So the only thing that could help me is a way to split filebeat.yml.

Unfortunately, it's not possible to have separate processors for each input. Processors are global to the pipeline that forwards events to ES.

Filebeat can have multiple config files. Put your input configs under the folder set in `filebeat.config.inputs.path`:

    filebeat.config.inputs:
      enabled: true
      path: inputs.d/*.yml
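A minimal sketch of one such file, assuming the `log` input type and hypothetical paths and field names (files under this folder contain input definitions only):

```yaml
# inputs.d/myapp.yml -- one input definition per file (paths are hypothetical)
- type: log
  paths:
    - /var/log/myapp/*.log
  fields:
    app: myapp
```

Filebeat picks up every file matching the `path` glob, so adding a new log source is just a matter of dropping another small YAML file into `inputs.d/`.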

Also, I fail to see how config separation is going to solve your problem. Could you provide more info on that?


Thank you for your answer @kvch .
That's a workaround: I'll just add a `when` condition checking that `source` equals "myfile.log" in each processor configuration to get "separated" configurations.
By having separate files, it will be more readable and easier for me to generate them. My processors will just ignore events they don't have to modify, and I won't have a huge filebeat.yml file.

That is, assuming we can separate processors into different files.
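A sketch of that `when` workaround in filebeat.yml, assuming the pre-ECS `source` field (which holds the log file path) and hypothetical file and field names; each processor only touches events coming from its own file:

```yaml
processors:
  # applies only to events read from myfile.log
  - drop_fields:
      when:
        contains:
          source: "myfile.log"
      fields: ["field_a"]   # hypothetical field to strip
  # applies only to events read from otherfile.log
  - drop_fields:
      when:
        contains:
          source: "otherfile.log"
      fields: ["field_b"]   # hypothetical field to strip
```

The processor list itself still has to live in filebeat.yml, but the `when` guards keep each block scoped to a single source file.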

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.