Process more than 3000 lines per record in filebeat

  When using multiline.max_lines and multiline.timeout, Filebeat does not pick up any record.
  Since the records are bigger than the default allows, I must set multiline.max_lines, correct? Please advise how to do that correctly.

Here is the prospector:
- type: log
  enabled: true
  paths:
    - /home/vpwrk1/ElasticDataForTest/test*.log
  fields:
    type: connectoromsmngsrv
  fields_under_root: true
  exclude_lines: ["-------------------"]
  multiline.pattern: '^[[:graph:]]{3}\s[a-zA-Z]+\s\d+[[:graph:]]\s\d{4}\s\d+[[:graph:]]\d+[[:graph:]]\d+\s[A-Z]+\s[A-Z]+\s[[:graph:]]{3}'
  multiline.negate: true
  multiline.match: after
  multiline.max_lines: 4000
  multiline.timeout: 4

thanks

Yes, the default for multiline.max_lines is 500, so for records with more lines than that you need to raise it. Can you see any related error in the Filebeat logs?
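For reference, a minimal sketch of the settings involved, assuming Filebeat 6.x where the options live under filebeat.prospectors (the path and header pattern below are placeholders, not your real values; max_bytes is the prospector's separate per-event size cap, whose documented default is 10 MB and which can also truncate very large events):

filebeat.prospectors:
  - type: log
    paths:
      - /path/to/large-records*.log   # placeholder path
    multiline.pattern: '^\*{3}\s'     # placeholder for the record-header pattern
    multiline.negate: true
    multiline.match: after
    multiline.max_lines: 4000         # default is 500; must exceed the largest record
    multiline.timeout: 5s             # how long to wait before flushing a pending event
    max_bytes: 10485760               # default 10 MB; raise it if a single event exceeds this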

Today when I set max_lines to 3500 (the record size is ~3350 lines) I got this in the log:

2018-05-16T16:31:33+03:00 DBG Drop line as it does match one of the exclude patterns
*** January 25, 2018 8:03:40 PM PHT ***
[1] Executing 'GetCustomerList' contract

However, when I set max_lines to 2500, the SAME record (dropped above) is passed to Logstash, and Filebeat has no issue with the date line. In that case Logstash of course receives the record cut down to 2500 lines. How can I set max_lines to 3500 and still receive the whole record in Logstash?

It's important to mention that exclude_lines is applied after multiline: once the lines are combined, the exclude patterns are matched against the whole multiline event, not against individual lines. So when the complete record is read in, it contains your exclude pattern somewhere inside and the entire event is dropped. That would also explain why the version truncated to 2500 lines passes: presumably the matching part falls beyond the cutoff. Try removing or modifying your exclude pattern.
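For example, a sketch of the prospector with exclude_lines removed (the header pattern is abbreviated here to match the "*** January 25, 2018 ... ***" lines shown in your log excerpt; substitute your full pattern from above). The separator lines then stay inside the event and can be stripped downstream, e.g. in Logstash, if they are unwanted:

- type: log
  enabled: true
  paths:
    - /home/vpwrk1/ElasticDataForTest/test*.log
  # exclude_lines removed: with multiline enabled the exclude pattern is
  # matched against the complete combined event, so any record containing
  # a "----" separator line would otherwise be dropped in its entirety
  multiline.pattern: '^\*{3}\s'   # abbreviated placeholder for the full header pattern
  multiline.negate: true
  multiline.match: after
  multiline.max_lines: 4000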
