Read latency for Elasticsearch


(Neeraj Gupta) #1

We have a use case where we are inserting data into an Elasticsearch cluster at 19-20K QPS. We are not getting any errors while inserting data using the bulk processor.

However, there are times when we observe a read latency of about 5-10 minutes in Kibana (installed on a separate single client node). I think the cluster is properly scaled, since writes are not giving any issues. Another behavior we observe is that when the inserting application is started, Kibana shows the latest data first and the older data gets filled in after some time. This essentially shows that the data is not immediately available for search.
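Elasticsearch only makes documents searchable after an index refresh, so one possible (unconfirmed) explanation for this behavior is refresh lag under heavy indexing. A quick way to check is to look at the index's refresh interval and its cumulative refresh stats; the index name `myindex` and the host are placeholders:

```shell
# Placeholder index name and host; adjust for your cluster.
# Show the index settings, including refresh_interval (defaults to 1s if unset):
curl -s 'localhost:9200/myindex/_settings?pretty'

# Show cumulative refresh counts and total time spent refreshing:
curl -s 'localhost:9200/myindex/_stats/refresh?pretty'
```

If refresh time keeps growing and refreshes fall behind, newly indexed documents will lag in search results even though the bulk writes themselves succeed.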

We are using Elasticsearch 2.4 and Kibana 4.6.2.
Any suggestions would be appreciated. Thanks in advance.


(Christian Dahlqvist) #2

The fact that the cluster is able to handle the write load does not necessarily mean that it is sized correctly. Searching and indexing use the same resources in the cluster in terms of CPU and disk I/O, so it could very well be that you do not currently have enough headroom available to be able to serve searches at an acceptable latency.

What do CPU and disk I/O look like during indexing? How does this change when you run a query through Kibana that is slow? What do the statistics for the different visualisations in the dashboard show?


(Neeraj Gupta) #3

@Christian_Dahlqvist Thanks for the info.

Below are the index stats obtained from the elasticsearch-hq plugin (http://www.elastichq.org/index.html)

This probably points to I/O being the bottleneck, even though we are using AWS gp2 SSD EBS volumes. Are there any parameters we can tweak to optimize this?
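Two settings commonly tuned for heavy bulk indexing can be sketched as a single settings update (the values are illustrative, not recommendations tested against this cluster, and `myindex` is a placeholder): raising `index.refresh_interval` reduces refresh I/O at the cost of search visibility lag, and `index.translog.durability: async` (available in the 2.x line) trades some durability for fewer fsyncs:

```shell
# Illustrative values only; index name and host are placeholders.
curl -s -XPUT 'localhost:9200/myindex/_settings' -d '{
  "index": {
    "refresh_interval": "30s",
    "translog": { "durability": "async" }
  }
}'
```

Note that a longer refresh interval means newly indexed documents take correspondingly longer to appear in Kibana, so this widens rather than removes the visibility delay described above.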


(Christian Dahlqvist) #4

What do CPU and disk I/O look like during indexing? You can get this via, e.g., top and iostat. How does this change when you run a query through Kibana that is slow?
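For example (the flags assume the sysstat version of iostat on Linux):

```shell
# Extended per-device statistics in MB, refreshed every 5 seconds:
iostat -xm 5

# One-shot CPU snapshot from top in batch mode:
top -b -n 1 | head -n 5
```

The extended (`-x`) output includes per-device utilization and queue depth, which make an I/O bottleneck much easier to spot than the summary columns alone.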


(Neeraj Gupta) #5

During the observed delay, CPU utilization is about 15-20%.

avg-cpu:  %user  %nice  %system  %iowait  %steal  %idle
          20.41   0.00     1.44     0.85    0.00  77.30

The iostat output is below:

Device:   tps       MB_read/s   MB_wrtn/s   MB_read   MB_wrtn
dm-0      9063.00   0.00        35.40       0         177
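One way to read the sample above (a back-of-the-envelope sketch using only the numbers from this thread): dividing write throughput by tps gives the average I/O size, and a very small average size suggests the gp2 volume's IOPS limit, rather than raw throughput, is the constraint:

```shell
# Numbers copied from the iostat sample above.
tps=9063
mb_wrtn_per_s=35.40

# Average I/O size in KB = (MB/s * 1024) / tps
awk -v t="$tps" -v m="$mb_wrtn_per_s" \
    'BEGIN { printf "average I/O size: %.1f KB\n", (m * 1024) / t }'
# prints "average I/O size: 4.0 KB"
```

At roughly 4 KB per operation, ~9,000 tps is at or beyond the baseline IOPS of many gp2 volumes (3 IOPS per GiB, bursting to 3,000 on smaller volumes), which would be consistent with the I/O-bottleneck hypothesis above despite the modest MB/s figure.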


(system) #6

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.