I am currently using the Apache module in Filebeat to process my Apache logs. My setup involves sending these logs to Logstash and then to Elasticsearch.
Additionally, if I enable two or more modules in Filebeat, how can I differentiate the indices in Elasticsearch and associate the respective pipelines with them?
Any guidance would be greatly appreciated. Thank you.
Unfortunately, the servers where Filebeat is installed cannot reach Elasticsearch directly. For this reason, I need to go through Logstash.
So, I was wondering how I can load the ingest pipelines. Would it be possible, in your opinion, to do this through the installation of the relevant integration assets?
Also, considering this configuration:
index => "%{[@metadata][beat]}-%{[@metadata][version]}"
taken from the documentation, how would I create custom indices, for example for Apache Access and Apache Error? From this configuration, it looks as if events from all modules would be grouped into a single index, is that correct?
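For reference, in the documentation that index line sits inside a Logstash output block roughly like the one below (the Elasticsearch host here is a placeholder):

output {
  if [@metadata][pipeline] {
    elasticsearch {
      hosts => ["http://localhost:9200"]
      manage_template => false
      index => "%{[@metadata][beat]}-%{[@metadata][version]}"
      action => "create"
      # run each event through the ingest pipeline Filebeat selected for it
      pipeline => "%{[@metadata][pipeline]}"
    }
  } else {
    elasticsearch {
      hosts => ["http://localhost:9200"]
      manage_template => false
      index => "%{[@metadata][beat]}-%{[@metadata][version]}"
      action => "create"
    }
  }
}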
You can run the Filebeat setup process from any server; you just need a Filebeat instance that is able to connect to Elasticsearch to load the ingest pipelines.
For example, you can install Filebeat on your Logstash server just to set up the ingest pipelines.
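Something like this, run from that host (the module names are just examples, and the filebeat.yml of that instance needs output.elasticsearch pointed at your cluster while you run it):

# load only the ingest pipelines for the listed modules into Elasticsearch
filebeat setup --pipelines --modules apache,system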
Yes, this configuration will store everything in a single data stream named filebeat-<version>.
You can change the index name in Logstash, but you will probably also need to change setup.template.name and setup.template.pattern in the filebeat.yml of the Filebeat instance you will use to load the ingest pipelines.
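A minimal sketch of those settings, assuming a hypothetical custom index prefix of filebeat-custom:

# filebeat.yml — the template name/pattern must match your custom index naming
setup.template.name: "filebeat-custom"
setup.template.pattern: "filebeat-custom-*"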
I'm not sure how this will work, as I do not use Filebeat with custom indices; you will need to test it.
For the second part, I would need some advice. In your opinion, if I leverage fileset.name and fileset.module, could I create custom indices for each module and fileset name?
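Something along these lines is what I have in mind (the host is a placeholder, and the exact field names depend on the Filebeat version; newer versions expose event.module and event.dataset, so fileset.module may need to be swapped accordingly):

output {
  elasticsearch {
    hosts => ["http://localhost:9200"]
    manage_template => false
    # would produce e.g. filebeat-8.12.0-apache-access and filebeat-8.12.0-apache-error
    index => "%{[@metadata][beat]}-%{[@metadata][version]}-%{[fileset][module]}-%{[fileset][name]}"
    # keep routing each event through its module's ingest pipeline
    pipeline => "%{[@metadata][pipeline]}"
  }
}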