Super-long lines are not processed

Hi everyone,

We have put the ELK stack in place as our monitoring and central logging mechanism, and we are happy with shipping and storing the usual Apache and nginx logs. For some of our services we also generate manual logs. In particular, for a series of APIs we generate a custom log which contains, among many other things, the actual queries our ORMs produce as a result of API calls.

Today I noticed that some events which are extremely long, because of gigantic queries, are not processed. I checked them in rubydebug and saw each one being split into two event lines, which ends up in exceptions because the resulting text is malformed. Finding these gigantic query cases is exactly one of the reasons we generate the manual logs in the first place, but unfortunately Logstash seems to have a problem parsing and shipping them, so I'm reaching out for help.
Note that I did not use the multiline codec, because these events are not multiline, neither semantically nor syntactically: each big event is saved in the log file as a single line, just like the others.
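
For reference, the way I inspected the split events was nothing special, just a plain stdout output with the rubydebug codec, something like:

output {
  # print each event in full, so a split line shows up as two separate events
  stdout { codec => rubydebug }
}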

Any ideas?

Thanks in advance.

How are you getting data into logstash? What does the input configuration look like?

This is it:

input {
  file {
    path => "/address/to/file"
    type => "json"
    codec => "json"
    start_position => "beginning"
  }
  stdin {}
}
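
If the oversized events are longer than the file input's read buffer, one thing worth trying is to raise the file_chunk_size option well above your longest line. This is a sketch, not a confirmed fix: it assumes a version of the logstash-input-file plugin recent enough to support the option (the default is 32768 bytes), and the 524288 value below is just an illustrative guess sized above a very long query.

input {
  file {
    path => "/address/to/file"
    type => "json"
    codec => "json"
    start_position => "beginning"
    # assumption: the longest events are a few hundred KB;
    # size the read chunk above the longest expected line
    file_chunk_size => 524288
  }
  stdin {}
}

If the option is not recognized, check the plugin version first; older releases of the file input used a fixed internal read buffer, so upgrading the plugin may be necessary before this setting is available.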
