Filebeat -> Logstash ordering

I have a stream: Filebeat -> Logstash -> (S3, Kafka).
Sometimes event ordering is not preserved.
What should I check to fix this?

# logstash conf
input {
    beats {
        port => 5100
        type => "rhyme"
        congestion_threshold => 60
    }
}
filter {
    if [type] == "rhyme" {
        # split pipe-delimited lines into topic_id, message_id and body
        grok {
            match => ["message", "%{DATA:topic_id}\|%{DATA:message_id}\|%{GREEDYDATA:body}"]
        }
    }
}
output {
    if [type] == "rhyme" {
        s3 {
            access_key_id       => "???"
            secret_access_key   => "???"
            region              => "???"
            bucket              => "???"
            time_file           => 60
            prefix              => "rhyme-"
        }
        # topic and message key come from the fields extracted by grok above
        kafka {
            topic_id            => "%{topic_id}"
            message_key         => "%{message_id}"
            codec               => plain {
                format => "%{body}"
            }
            bootstrap_servers   => "internal.kafka:6667"
        }
    }
}
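
For reference, the grok pattern above assumes pipe-delimited lines of the form topic_id|message_id|body. A sample line (values are purely illustrative) would be parsed like this:

# hypothetical input line
rhyme-topic|42|some log payload

# fields extracted by grok
topic_id   = "rhyme-topic"
message_id = "42"
body       = "some log payload"

The Kafka output then uses topic_id as the destination topic and message_id as the message key.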
# filebeat conf
filebeat:
  # List of prospectors to fetch data.
  prospectors:
    -
      paths:
        - D:\*.log
      input_type: log
      document_type: rhyme
      ignore_older: 2h
output:
  logstash:
    # The Logstash hosts
    hosts: ["internal.logstash:5100"]

logging:
  level: debug

  # enable file rotation with default configuration
  to_files: true

  # do not log to syslog
  to_syslog: false

  files:
    path: C:\Program Files\Filebeat\log
    rotateeverybytes: 10485760 # = 10MB
    name: beat.log
    keepfiles: 7

The Logstash pipeline is multi-threaded by default for performance reasons, and this means there is no order guarantee. The only way to make sure that all events are processed in order is to reduce the number of worker threads to 1, but this will severely limit throughput.

Thanks for the answer.

How can I set the worker threads to 1?
Is it the -filterworkers option?
Isn't the default 1?

Ah... the default value was changed to the number of CPU cores.
I'll try the --pipeline-workers option.
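
A minimal sketch of what I'll run (the exact flag spelling depends on the Logstash version: -w / --pipeline-workers on 2.2+, -w / --pipeline.workers on 5.x+; the config path below is just a placeholder):

# start Logstash with a single worker so events keep their arrival order
bin/logstash -f conf.d/rhyme.conf -w 1

On 5.x and later the same setting can also be made persistent in logstash.yml:

# logstash.yml
pipeline.workers: 1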