Logstash write rate decreases when RabbitMQ is full

Hi all,

I use Logstash 2.4 to forward documents from RabbitMQ to Elasticsearch 2.4, and I see a correlation between how full the queue is and the Logstash write rate: when the queue fills up, the write rate drops significantly, and vice versa. Has anyone here encountered this issue, or do you have a suggestion as to why it might happen?

The Logstash servers are idle and Elasticsearch looks OK ...

input {
  rabbitmq {
    host => "host_name"
    queue => "queue_name"
    exchange => "exchange_name"
    codec => "json"
    durable => true
    prefetch_count => 1000
    threads => 8
  }
}

output {
  elasticsearch {
    hosts => [ "elasticsearch24-hostname" ]
    index => "events-%{+YYYY-MM-dd}-v%{schema_version}"
    routing => "%{org_pk}"
    document_id => "%{doc_id}"
    flush_size => 1000
    idle_flush_time => 2
    document_type => "c_levent"
  }
}

I would expect the indexing speed of Elasticsearch to determine the throughput here: if Elasticsearch is not able to keep up, the queue in RabbitMQ will grow. As you are assigning document IDs on the application side, it is possible that the indexing throughput in Elasticsearch deteriorates as the size of the index being written to increases, since each indexing request with an explicit ID forces a lookup for an existing document. How much this matters depends on how large the indices grow and how the IDs are generated. Are you seeing any correlation between the indexing throughput and the point in time when a new time-based index is created? Do you have monitoring installed?
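If you do not have monitoring installed, a quick way to check this is to sample the indexing stats straight from the Elasticsearch REST API and compare them against index size. A minimal sketch, assuming the cluster is reachable on port 9200 under the hostname from your output config and that the daily indices match events-*:

# Per-index indexing counters; sample index_total twice a minute apart
# and diff the values to get an approximate docs/sec per index
curl -s 'http://elasticsearch24-hostname:9200/events-*/_stats/indexing?pretty'

# Index document counts and on-disk sizes, to see whether throughput
# drops as the current day's index grows
curl -s 'http://elasticsearch24-hostname:9200/_cat/indices/events-*?v&h=index,docs.count,store.size'

If the per-index rate falls as store.size grows through the day and recovers when a new daily index is created, that points at the indexing overhead of application-assigned IDs rather than at Logstash or RabbitMQ.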
