How to configure Filebeat and Logstash to read multiple input files and process them separately?

Hello,

I have looked into similar threads, but I did not find a conclusive solution.
Here is my situation:

  1. I have N log files, all with the same type of content.

  2. I already have a Logstash config file that works fine when Filebeat reads one of those logs. Since I need to process several lines from a log at once, the filter section in Logstash is a piece of Ruby code; the aggregate filter, for example, didn't work for my case.

  3. Now, if I try to read all N logs with Filebeat at once, the lines from the different files get interleaved, and my Ruby logic no longer works.

  4. I could have a separate Logstash pipeline for each input log if needed. I understand each of them would have to listen on a different port, correct? (See the sketch after this list.)

  5. What I don't see is how to configure Filebeat to send the data from each input log to a different port, so that each log can be processed by its own Logstash pipeline.
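For reference, here is how I picture the multiple-pipeline side; a minimal sketch of pipelines.yml assuming two logs, where the pipeline ids, paths, and ports are just placeholders I made up:

```
# pipelines.yml: one Logstash pipeline per input log (hypothetical layout)
- pipeline.id: log1
  path.config: "/etc/logstash/conf.d/log1.conf"
- pipeline.id: log2
  path.config: "/etc/logstash/conf.d/log2.conf"
```

```
# log1.conf, input section only; log2.conf would use port 5045
input {
  beats {
    port => 5044
  }
}
```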

In a nutshell:

  • Filebeat reads log1 and sends it to port #1 for Logstash pipeline #1
  • Filebeat reads log2 and sends it to port #2 for Logstash pipeline #2
  • etc.

Is that possible? Does it make sense?
I read that some people solved this by running multiple Filebeat instances at once (roughly as sketched below). Is that the only way?
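On the Filebeat side, the multiple-instances approach would presumably look something like this; the file names, paths, and ports below are placeholders:

```
# filebeat-log1.yml: config for the first Filebeat instance
filebeat.inputs:
  - type: log
    paths:
      - /var/log/myapp/log1.log

output.logstash:
  hosts: ["localhost:5044"]   # Logstash pipeline #1

# filebeat-log2.yml would be identical except for log2.log and port 5045.
```

As far as I know, a Filebeat instance only supports a single output, which is why it would take one instance per log, and each instance has to run with its own data directory, e.g. filebeat -c filebeat-log1.yml --path.data /var/lib/filebeat-log1.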

Thanks a lot in advance.
Regards,
Jose

Using Logstash or Filebeat that way really reduces performance, and it always becomes a bit tricky to get right if you rely on ordering.

Maybe you could do the following:

  • Have Filebeat read all the events.
  • Send all the events to a single Logstash input.
  • Use the source field of the events to route each event to an internal pipeline, using pipeline-to-pipeline communication (see the sketch below).

Maybe the above will work, but keeping events ordered is still a bit tricky.
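A minimal sketch of that routing, assuming two logs, that the events still carry the source field (newer Filebeat versions record the file path in [log][file][path] instead), and placeholder pipeline names, ports, and paths:

```
# pipelines.yml: one entry-point pipeline plus one internal pipeline per log
- pipeline.id: beats-entry
  config.string: |
    input { beats { port => 5044 } }
    output {
      if [source] =~ /log1/ {
        pipeline { send_to => ["log1"] }
      } else if [source] =~ /log2/ {
        pipeline { send_to => ["log2"] }
      }
    }
- pipeline.id: log1-processing
  config.string: |
    input { pipeline { address => "log1" } }
    # your existing Ruby filter logic goes here, one copy per pipeline
    filter { ruby { path => "/etc/logstash/ruby/log1.rb" } }
    output { elasticsearch { hosts => ["localhost:9200"] } }
- pipeline.id: log2-processing
  config.string: |
    input { pipeline { address => "log2" } }
    filter { ruby { path => "/etc/logstash/ruby/log2.rb" } }
    output { elasticsearch { hosts => ["localhost:9200"] } }
```

Since each internal pipeline only sees the events of one file, the interleaving problem within a single log goes away, but ordering across logs is still not guaranteed.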
