Let's say that I have a cluster where I'm sending log lines from Filebeat to Logstash.
My problem is that Filebeat sends the file line by line, and since my Logstash input is configured with the json codec, Logstash throws a _jsonparsefailure: my JSON record spans two lines, so each line on its own is malformed JSON.
Any idea if it is possible to send the whole log entry at once and not line by line? The current behavior of Filebeat seems really odd to me: I was expecting that if I add several lines to my log at once, Filebeat would send them all together, not line by line.
Maybe I'm doing something wrong here.
Filebeat generally treats each line in the log file as a separate event, but recent versions let you configure it to assemble records spanning multiple lines using regular expressions.
Most log shippers work line by line. Filebeat has multiline support for joining multiple lines into one event.
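As a sketch of what that multiline configuration can look like (the path is a placeholder, and the pattern assumes each pretty-printed JSON record starts with `{` at the beginning of a line):

```yaml
filebeat.prospectors:
  - type: log
    paths:
      - /var/log/myapp/*.log   # placeholder path
    # Treat any line that does NOT start with '{' as a continuation
    # of the previous line, so one JSON record becomes one event.
    multiline.pattern: '^\{'
    multiline.negate: true
    multiline.match: after
```

With `negate: true` and `match: after`, lines that don't match the pattern are appended to the preceding matching line, so a multi-line JSON object reaches Logstash as a single event.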
Can you configure your application to not pretty-print the json?
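That is usually the simplest fix. As an illustration (in Python, assuming your application controls its own serialization), the same record can be written as one line instead of several:

```python
import json

record = {"level": "INFO", "message": "user logged in", "user_id": 42}

# Pretty-printed: spans multiple lines, so a line-oriented shipper
# like Filebeat emits one event per line and JSON parsing fails.
pretty = json.dumps(record, indent=2)

# Compact: one line per record, which the Logstash json codec
# can parse directly without any multiline handling.
compact = json.dumps(record)

print(compact)
```

One JSON object per line (often called "JSON Lines") is the format line-oriented shippers handle best.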
> I was expecting that if I add several lines to my log at once, Filebeat would send them all together, not line by line.
File systems don't work that way. Filebeat has no way of knowing that the application writing to the log wrote two lines in one "transaction".