File output plugin writing logs out of order

This is a continuation from logstash-users; moving here.

On Mon, Jun 1, 2015 at 9:50 AM, Magnus Bäck <magnus.back@sonymobile.com> wrote:
On Monday, June 01, 2015 at 16:38 CEST,
brandon.metcalf@logicmonitor.com wrote:

We are using a pretty trivial Logstash setup. Sender configs look
something like

[...]

output {
  tcp {
    host => "log02.us-east-1.logicmonitor.net"
    port => 2009
    mode => "client"
    codec => "json"
    workers => 10
  }
}

[...]

output {
  file {
    path => "/log/%{new-host}/%{path}"
    codec => "json"
    message_format => "%{message}"
    workers => 30
  }
}
We are finding that log entries are being written out of order
(seemingly at random) to the destination files on the receiver.

With multiple output workers for both the tcp and file outputs, that's
expected. There are no ordering guarantees.
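
If ordering matters more than throughput, the usual approach is a single worker per output. A minimal sketch of the sender side, reusing the host and port from your posted config (untested, and it will reduce throughput):

output {
  tcp {
    host => "log02.us-east-1.logicmonitor.net"
    port => 2009
    mode => "client"
    codec => "json"
    # A single worker serializes sends; assumes order matters more than throughput here.
    workers => 1
  }
}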

I tested with just one tcp output worker and got the same behavior. I did not change the file output workers. Anything else to look at? Thanks.

I wouldn't expect there to be any ordering guarantees for multithreaded file output workers either.
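
The same would apply on the receiving side. A sketch of the file output with a single worker, keeping the path, codec, and message_format settings from your posted config (again untested):

output {
  file {
    path => "/log/%{new-host}/%{path}"
    codec => "json"
    message_format => "%{message}"
    # One worker writes events in the order the pipeline delivers them.
    workers => 1
  }
}

Even then, events will only be as ordered as they arrive from the sender.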

OK, thanks for the additional reply. Maybe I should take a step back and describe our use case. We simply need a mechanism by which individuals who do not have access to our production systems (and therefore no direct access to logs) can tail logs in real time. I increased the workers on our receivers to handle the large number of incoming logs, which apparently introduced the issue at hand.

Any suggestions on how best to solve this with Logstash? We don't index into Elasticsearch or do anything more advanced, given our basic use case. We do feed our logs into another service for more advanced queries, but some folks prefer working with and viewing logs in real time.

Thanks again.