Using Filebeat Modules with Logstash and Differentiating Indices in Elasticsearch


I am currently using the Apache module in Filebeat to process my Apache logs. My setup involves sending these logs to Logstash and then to Elasticsearch.

I understand from the documentation (Working with Filebeat Modules | Logstash Reference [8.12] | Elastic) that it's possible to use Filebeat modules with Logstash, but some extra setup is required. Could someone please clarify what this extra setup involves?

Additionally, if I enable two or more modules in Filebeat, how can I differentiate the indices in Elasticsearch and associate the respective pipelines with them?

Any guidance would be greatly appreciated. Thank you.

Did you check the follow-up section on that same documentation page? It explains how to configure this.

Basically, you need to load the ingest pipelines into Elasticsearch and configure your Logstash output to use them.
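The Logstash side of that setup looks roughly like the example in the documentation: Filebeat puts the pipeline name in `@metadata`, and the Elasticsearch output passes it through. A sketch (the `hosts` value is a placeholder):

```
output {
  elasticsearch {
    hosts => "http://localhost:9200"
    manage_template => false
    index => "%{[@metadata][beat]}-%{[@metadata][version]}"
    action => "create"
    pipeline => "%{[@metadata][pipeline]}"
  }
}
```

The `pipeline => "%{[@metadata][pipeline]}"` option is what routes each event to the ingest pipeline its module expects.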

Hi Leandro,

Thank you for your response.

Unfortunately, I cannot directly access the servers where Filebeat is installed to reach Elasticsearch. For this reason, I need to go through Logstash.

So, I was wondering how I can load the ingest pipelines. Would it be possible, in your opinion, to do this through the installation of the relevant integration assets?

Also, considering this configuration:

index => "%{[@metadata][beat]}-%{[@metadata][version]}"

taken from the documentation, how would I create custom indices, for example, for Apache Access and Apache Error? From this configuration, it seems that the indices would be grouped, is that correct?


You can run the Filebeat setup process from any server; you just need a Filebeat instance that is able to connect to Elasticsearch to load the ingest pipelines.

For example, you can install Filebeat on your Logstash server just to set up the ingest pipelines.
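As a sketch, the setup-only run from that host might look like this (using `apache` since that is the module discussed here):

```
# Run from any host that can reach Elasticsearch, e.g. the Logstash server.
# Enable the module(s) whose pipelines you need, then load only the pipelines.
filebeat modules enable apache
filebeat setup --pipelines --modules apache
```

The `--pipelines` flag limits the setup run to loading ingest pipelines, so nothing else (dashboards, etc.) is touched.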

Yes, this configuration will store everything in a single data stream named filebeat-&lt;version&gt;.

You can change the index name in Logstash, but you will probably also need to change setup.template.name and setup.template.pattern in the filebeat.yml of the Filebeat instance you will use to load the ingest pipelines.
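If you do rename the index in Logstash, the matching filebeat.yml changes for the setup instance might look like this (the names below are illustrative assumptions, not values from this thread):

```
# filebeat.yml of the instance used only for setup
setup.template.name: "apache"       # assumed custom template name
setup.template.pattern: "apache-*"  # must match the index names Logstash writes
setup.ilm.enabled: false            # overriding the template name/pattern requires disabling ILM defaults
```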

I'm not sure how this will work, as I do not use Filebeat with custom indices; you will need to test it.


Okay, perfect, excellent solution.

For the second part, I would need some advice. In your opinion, if I leverage fileset.module and fileset.name, could I create custom indices for each module and fileset name?

Example Logstash configuration:

input {
  beats {
    port => 5044
  }
}

output {
  elasticsearch {
    hosts => ""
    manage_template => false
    index => "%{[fileset][module]}-%{[fileset][name]}"
    action => "create"
    user => "elastic"
    password => "secret"
  }
}

In this example, Logstash will dynamically create indices based on the combination of fileset.module and fileset.name.


This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.