Hello, I'm using Filebeat to parse multiple log files and send them directly to Elasticsearch.
I use processors to parse these files, and I would like to separate the processors by file.
I first tried to create modules, but the only way I found to parse logs in modules is ingest node pipelines.
Since I need to keep compatibility with Elasticsearch 2, I can't use ingest nodes.
Is there a way to split the Filebeat configuration into several files, or to use processors in modules?
Thank you for your answer @kvch.
That's a workaround: I'll add a `when` condition matching `source` against "myfile.log" to each processor configuration to keep them "separated".
With separate files, the configuration will be more readable and easier for me to generate. Each processor will simply ignore events it doesn't have to modify, and I won't end up with one huge filebeat.yml file.
That is, assuming processors can be split across different files.
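For reference, a minimal sketch of the `when` workaround described above, using the standard Beats processor conditions (the file paths and field values here are hypothetical examples, not from the original thread):

```yaml
processors:
  # Only applies to events read from myfile.log; other events pass through untouched.
  - drop_fields:
      when:
        equals:
          source: "/var/log/myfile.log"
      fields: ["offset"]
  # A second "separated" block, guarded the same way for a different file.
  - drop_event:
      when:
        contains:
          source: "otherfile.log"
```

Each processor block is guarded by its own `when` condition on the `source` field (the log file path), so distinct per-file blocks can coexist in one processors list even though they logically belong to different files.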