Create multiple Logstash configurations for different Filebeat instances

As of now, I am using Docker Compose to run a Logstash container. Here is my use case:

I have Filebeat installed on 5 machines, each with different log paths and log formats. Instead of having one logstash.conf file that processes everything (with [tags] and 'else if' statements), I would like to decompose that into multiple .conf files.

I did look into pipelines.yml, but I was confused about how Logstash would figure out which Filebeat instance should use which configuration file. Is pipeline.id something I need to specify in Filebeat? Any help would be much appreciated.

If you run five completely independent pipelines, then each one would listen on a different port, so the output.logstash section would be different in each Filebeat instance.
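For example, a minimal sketch of the Filebeat side, assuming Logstash is reachable as logstash-host and one of the pipelines listens on port 5045 (both are placeholders):

# filebeat.yml on one of the machines; each instance would point at the
# port of its own Logstash pipeline
output.logstash:
  hosts: ["logstash-host:5045"]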

Port availability would be an issue. I was thinking of implementing a single pipeline that redirects the logs to the right .conf file.

If I specified something like this:

- pipeline.id: main
  path.config: "/etc/logstash/conf.d/*.conf"

Would that merge all the configuration files?

This would merge all the configuration files and run just one pipeline.

What you could try is to combine multiple pipelines using pipeline-to-pipeline communication, more specifically the distributor pattern.

You would have one pipeline listening on a single port for your Beats, and then use a conditional in the output to route the events to the other pipelines.

Perfect! Is there anything wrong with merging the configuration files? I assume it would process the data as intended.

And just to make sure, if I set up a distributor pattern pipeline, would it look something like this:

- pipeline.id: main
  config.string: |
    input {beats { port => 5044 }}
    filter {
       if [fields][log_type] == "XYZ" {pipeline { send_to => xyz-pipeline }}
       else if [fields][log_type] == "ABC" {pipeline {send_to => abc-pipeline}}
    }
#output would be specified in config files

- pipeline.id: xyz-pipeline
  path.config: "/etc/path/to/p1.config"

- pipeline.id: abc-pipeline
  path.config: "/etc/different/path/p2.cfg"

Almost, but you need to use the conditional in the output block, not in the filter block.

You also do not need to have the config inside pipelines.yml; I think it is better to have it in a separate file, like your other configs.

So you can create a file named main.config with the following configuration:

main.config

input {
    beats { 
        port => 5044 
    }
}
output {
    if [fields][log_type] == "XYZ" {
        pipeline { send_to => "xyz-pipeline" }
    } else if [fields][log_type] == "ABC" {
        pipeline { send_to => "abc-pipeline" }
    }
}

Then your pipelines.yml would be:

- pipeline.id: main
  path.config: "/etc/path/to/main.config"

- pipeline.id: xyz-pipeline
  path.config: "/etc/path/to/p1.config"

- pipeline.id: abc-pipeline
  path.config: "/etc/different/path/p2.cfg"

This way you will have three pipelines: the first one just receives data from Beats and sends it to the other pipelines, where you will have your filters and your final outputs.
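Note that the conditional on [fields][log_type] assumes each Filebeat instance adds that field to its events. A minimal sketch of what that could look like in filebeat.yml (the path and value are just placeholders):

filebeat.inputs:
- type: log
  paths:
    - /var/log/xyz/*.log
  # custom field that the Logstash conditional uses to pick a pipeline
  fields:
    log_type: "XYZ"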

There is nothing wrong with having just one pipeline and pointing the config to a folder with multiple files, but since you have several different log formats, you would need conditionals around both your filters and your outputs; it is up to you which approach works best.
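For example, with the merged single pipeline, each file in the folder would wrap its own filter and output in a conditional, roughly like this sketch (the grok pattern and index name are only placeholders):

# xyz.conf inside /etc/logstash/conf.d, merged with the other files
filter {
    if [fields][log_type] == "XYZ" {
        grok { match => { "message" => "%{COMBINEDAPACHELOG}" } }
    }
}
output {
    if [fields][log_type] == "XYZ" {
        elasticsearch {
            hosts => ["http://localhost:9200"]
            index => "xyz-logs-%{+YYYY.MM.dd}"
        }
    }
}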

The multiple-pipeline approach has the advantage of completely isolating the pipelines: there is no risk of events from one pipeline ending up in the output of another, which can happen if you point to a folder with multiple files and have a misconfigured conditional.

Perfect thank you so much! This was very helpful.

Sorry, one last question: would I leave my other .config files (abc.config and xyz.config) in the same format (input, filter, then output)?

I assume I would be taking out the input portion of the configuration as that is being processed by main.config, but leave the filter and output configuration the same.

Every Logstash pipeline configuration needs an input and an output block; the filter is optional.

As the example in the documentation shows, if you have an output that is sending the events to another pipeline, then you need another pipeline with the pipeline input to receive the events.

For example, to receive the data that is being sent to the xyz-pipeline, you will need this input.

input {
    pipeline {
        address => "xyz-pipeline"
    }
}
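So a full config for the xyz-pipeline keeps the filter and output as they were and only swaps the beats input for the pipeline input. A rough sketch, where the filter and the Elasticsearch output are just placeholders:

input {
    pipeline {
        address => "xyz-pipeline"
    }
}
filter {
    # whatever parsing the XYZ logs need, for example a date filter
    date { match => ["timestamp", "ISO8601"] }
}
output {
    elasticsearch {
        hosts => ["http://localhost:9200"]
        index => "xyz-logs-%{+YYYY.MM.dd}"
    }
}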

Perfect, thank you! I think that's all the questions I had :grinning:

I guess another question popped up just now while I was configuring this. So by default Logstash will look at pipelines.yml.

Do I need to specify anything in my logstash.yml file regarding the main pipeline? I plan on using the default pipeline workers, batch size, etc.

