How do I process messages strictly in the order they arrive?

Hi everybody. I've got an input file that contains data in the following format (one JSON object per line):

{"id": "1", "status": "Running"}
{"id": "1", "status": "Finished"}

I'm shipping this via Filebeat to Logstash, which then pushes it on to Elasticsearch - essentially like this:

input {
  beats { port => 5044 }
}

filter {
  json {
    source => "message"
  }
}

output {
  elasticsearch {
    document_id => "%{[id]}"
  }
}
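For completeness, the Filebeat side is just a plain log input pointing at the file (a sketch - the path and the Logstash host are placeholders for my actual setup):

```yaml
filebeat.inputs:
  - type: log
    paths:
      - /var/log/app/status.log   # placeholder path

output.logstash:
  hosts: ["localhost:5044"]       # matches the beats input port above
```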

The problem I've got is that when related log entries (correlated by id) are written directly after each other (which happens all the time), the "Finished" entry sometimes seems to overtake the "Running" one.

I'm quite puzzled and have already done the following analysis:

-) Looked at the Filebeat logs, and it seems that Filebeat sends the lines in the correct order
-) Replaced the elasticsearch output with stdout, and everything appears to be in order
-) !! Added a stdout output in addition to elasticsearch and - voila! - the messages are shown in stdout in the wrong order

In reality, my Logstash configuration contains some more filters and a few conditionals, as well as some other inputs unrelated to the affected messages.

If I can't rely on the order of incoming messages, how do I solve this problem?

Thanks!
Peter

This is likely caused by a race between the pipeline workers. Start Logstash with -w 1 to run a single pipeline worker, and the problem should disappear.
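For example (paths are placeholders; adjust to your installation):

```shell
# Run Logstash with a single pipeline worker so events pass through
# the filter and output stages strictly in the order they arrived
bin/logstash -f /etc/logstash/conf.d/pipeline.conf -w 1
```

Equivalently, you can set pipeline.workers: 1 in logstash.yml. Note that a single worker trades throughput for ordering, so only do this on pipelines where ordering actually matters.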

Thanks! I'll give it a try. I've already played around with workers, but only with the property that can be set on the various plugins. All of them are unset in my configuration (which, according to the documentation, should default to 1). Is there a difference between that and setting -w 1 for the whole process?
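For reference, the plugin-level property I mean looks like this (a sketch - as I understand it, this workers option only controls the worker threads of that one output plugin, not the filter stage):

```
output {
  elasticsearch {
    document_id => "%{[id]}"
    workers => 1   # output-plugin workers only, unlike -w / pipeline.workers
  }
}
```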

I don't recall exactly how it works, and it may vary between Logstash releases. There was a blog post just a few days ago that should explain things: https://www.elastic.co/blog/a-history-of-logstash-output-workers

Thank you. It indeed looks like this solves my problem!