Parsing multiline logs of various formats

I have multiple multiline log files of various formats on the same machine. I ship them using one filebeat instance to logstash (deployed on another machine).
My main issue is where/how to use the multiline option.

Initially I was using the logstash multiline codec:
input {
  beats {
    port => "5044"
    codec => multiline {
      pattern => "^(expr1)|(expr2)|...|(exprn)"
      negate => true
      what => "previous"
    }
  }
}
Then I realized that the more file formats I add, the more regex expressions I have to pack into "^(expr1)|(expr2)|...|(exprn)", which is not easy to maintain and not very efficient.

So I moved the multiline handling to filebeat. But there is an issue here too: I cannot create a multiline expression per prospector, only one multiline expression per filebeat instance, which means the same long, complicated expression as in the previous example: "^(expr1)|(expr2)|...|(exprn)"
multiline.pattern: '^(expr1)|(expr2)|...|(exprn)'
multiline.negate: false
multiline.match: after

Then I came up with another solution: one filebeat instance per log file format, each with a simple multiline expression "^(exprX)". But the question is how to install multiple filebeat agents on the same machine? I install it via Puppet using the RPM for Red Hat.
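For what it's worth, what I had in mind for running more than one instance on a host is giving each instance its own config file and its own data/log directories so the registry files do not clash. A rough sketch (the config file names and paths below are just placeholders, not my actual setup):

# each instance points at its own config and keeps its own registry under --path.data
filebeat -c /etc/filebeat/filebeat-app1.yml --path.data /var/lib/filebeat-app1 --path.logs /var/log/filebeat-app1
filebeat -c /etc/filebeat/filebeat-app2.yml --path.data /var/lib/filebeat-app2 --path.logs /var/log/filebeat-app2

The RPM only sets up one service, so any additional instance would likely need its own init/systemd unit managed by Puppet, which is exactly the maintenance overhead I would rather avoid.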

Can you please advise what would be the right choice in my case?

I made a mistake in my previous post.
I can configure a multiline rule per prospector, so I do not need multiple filebeat instances.
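For anyone who lands here later, this is roughly what the per-prospector setup looks like in filebeat.yml. The paths, patterns, and host are placeholders, and the exact keys may differ between Filebeat versions (newer releases use filebeat.inputs / type instead of filebeat.prospectors / input_type):

filebeat.prospectors:
  - input_type: log
    paths:
      - /var/log/app1/*.log        # placeholder path for the first log format
    multiline.pattern: '^(expr1)'
    multiline.negate: true
    multiline.match: after
  - input_type: log
    paths:
      - /var/log/app2/*.log        # placeholder path for the second log format
    multiline.pattern: '^(expr2)'
    multiline.negate: true
    multiline.match: after

output.logstash:
  hosts: ["logstash-host:5044"]    # placeholder Logstash host

With the lines joined in filebeat, the multiline codec can then be dropped from the beats input on the Logstash side, since events arrive already assembled.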
