I am trying to send bulk requests of 10k documents (1.5 KB each) using the Python Elasticsearch client, from 20 separate client threads to a single ES node. My requests are timing out with the following error:
ConnectionTimeout caused by - ReadTimeoutError(HTTPConnectionPool(host=u'xx.xx.xx.xx', port=9200): Read timed out. (read timeout=30))
So I changed the client's default timeout value to 30s. Requests still time out, but now only after the run has been going for a long time.
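For reference, this is roughly how I am setting the timeout (host and index names below are placeholders, and the values are just what I am currently experimenting with):

```python
from elasticsearch import Elasticsearch, helpers

# Placeholder host; timeout is the client-wide default read timeout in seconds
# (elasticsearch-py 5.x/6.x style setup).
es = Elasticsearch(["xx.xx.xx.xx:9200"], timeout=30)

docs = [{"field": "x" * 1500} for _ in range(10000)]   # ~1.5 KB sample documents
actions = (
    # _type is required on 5.x/6.x; drop it on 7.x+
    {"_index": "my-index", "_type": "_doc", "_source": d} for d in docs
)

# The timeout can also be overridden per bulk call via request_timeout.
helpers.bulk(es, actions, request_timeout=30)
```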
a. Is there any deterministic way of calculating the optimal bulk size if my documents are the same size throughout?
b. Are there any metrics or stats that can indicate an upcoming bulk index failure? Can I query them via the API? (A sketch of what I have in mind follows these questions.) Which cluster parameters are correlated with this?
c. I checked the thread pool queue size when the bulk index request failed. The queue size was around 300 while my queue limit is set to 1000, so my assumption that the bulk request failed because the queue filled up turned out to be wrong. Does one bulk request (of X docs) occupy one spot in the queue?
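For question b, this is the kind of API call I have in mind for checking the bulk queue (the host is a placeholder, and the pool names are my assumption for this ES version):

```python
from elasticsearch import Elasticsearch

es = Elasticsearch(["xx.xx.xx.xx:9200"])  # placeholder host

# Per-node view of the bulk thread pool: active threads, queued requests and
# cumulative rejections. The pool is called "bulk" on 5.x/6.x and "write" on 6.3+.
print(es.cat.thread_pool(
    thread_pool_patterns="bulk,write",
    v=True,
    h="node_name,name,active,queue,rejected,completed",
))
```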
To improve search performance, I am planning to use routing. I don't see any way to set a routing value in Kibana. What is the best way to use routing with Kibana? Also, I noticed that Kibana always uses the search API. In that case, does routing actually matter (since all shards have to be queried for every request from Kibana)? Please correct me if I am wrong.
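Outside Kibana I would pass routing on the search request itself, something like this (index, field and routing values are placeholders):

```python
from elasticsearch import Elasticsearch

es = Elasticsearch(["xx.xx.xx.xx:9200"])  # placeholder host

# With a routing value the search is sent only to the shard(s) that value maps to;
# without it, every shard of the index is queried.
resp = es.search(
    index="my-index",                       # placeholder index
    routing="customer-42",                  # placeholder routing value
    body={"query": {"term": {"customer_id": "customer-42"}}},
)
print(resp["hits"]["total"])
```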
As each bulk request is around 15MB, I suspect you are overloading the cluster with that many concurrent requests as it is only a single node. I would recommend trying with smaller bulk sizes, e.g. 1000-2000 documents per request, and/or reducing the number of concurrent indexing threads. Once you have reached a level where you can index without timeouts you can start slowly increasing parameters until you find an optimum. There is no point throwing that much data at Elasticsearch if it is not able to process it in a timely manner. You can naturally also scale up and/or out your cluster to make it more performant.
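As a sketch of what I mean, something along these lines with the Python bulk helpers lets you control chunk size and concurrency in one place (the host, index name and numbers are just starting points to tune from):

```python
from elasticsearch import Elasticsearch, helpers

es = Elasticsearch(["xx.xx.xx.xx:9200"], timeout=60)  # placeholder host

def actions(docs):
    for doc in docs:
        # _type is required on 5.x/6.x; drop it on 7.x+
        yield {"_index": "my-index", "_type": "_doc", "_source": doc}

docs = ({"field": "x" * 1500} for _ in range(100000))  # ~1.5 KB sample documents

# parallel_bulk manages its own worker threads, so chunk size and the number of
# concurrent requests are tuned here rather than in 20 separate client threads.
for ok, item in helpers.parallel_bulk(
    es,
    actions(docs),
    thread_count=4,     # fewer concurrent requests than 20
    chunk_size=1000,    # ~1.5 MB per request at 1.5 KB per document
):
    if not ok:
        print("failed:", item)
```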
Not sure I understand what you are looking for. Can you please clarify?
Have you got X-Pack monitoring installed? This will allow you to see what goes on in the cluster.
Is there any deterministic way of calculating the optimal bulk size if my documents are the same size throughout?
I meant: can I calculate the optimal bulk size using some formula (with parameters like queue size etc.) if my request size is always the same? Since no parameters are changed dynamically during my benchmark run, I don't understand why requests time out after 1-2 hours.
Have you got X-Pack monitoring installed? This will allow you to see what goes on in the cluster.
Yes, I have X-Pack installed, but I couldn't find any correlated metrics in the displayed graphs that show exactly why requests are timing out. When I run my application, I want to detect beforehand whether a bulk request will time out or whether I am reaching the limits of ES. Can you help me find metrics or stats that can alert me to this situation so that I can take the necessary action (reduce the load)?
Are the index memory and segment count metrics important here?
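What I have in mind is something like polling the node stats and backing off once the queue or rejection counters start climbing (the host is a placeholder and the threshold below is a guess on my part):

```python
import time
from elasticsearch import Elasticsearch

es = Elasticsearch(["xx.xx.xx.xx:9200"])  # placeholder host

QUEUE_WARN = 800          # guess: warn well before my 1000-slot queue limit
last_rejected = {}

while True:
    stats = es.nodes.stats(metric="thread_pool")
    for node_id, node in stats["nodes"].items():
        # the pool is "bulk" on 5.x/6.x and "write" on 6.3+
        pool = node["thread_pool"].get("write") or node["thread_pool"]["bulk"]
        new_rejections = pool["rejected"] - last_rejected.get(node_id, 0)
        last_rejected[node_id] = pool["rejected"]
        if pool["queue"] > QUEUE_WARN or new_rejections > 0:
            print("back off:", node["name"], pool["queue"], new_rejections)
    time.sleep(10)
```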
Benchmarking is the best way to determine this. Is there anything in the logs, e.g. related to GC or merging activity, around the time it slows down? Are you supplying your own document IDs or are you allowing Elasticsearch to assign them?
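For example, a simple sweep over bulk sizes against a test index would give you the numbers to base that decision on (host, index name, document count and sizes below are placeholders):

```python
import time
from elasticsearch import Elasticsearch, helpers

es = Elasticsearch(["xx.xx.xx.xx:9200"], timeout=120)  # placeholder host

docs = [{"field": "x" * 1500} for _ in range(20000)]   # ~1.5 KB sample documents

def actions():
    for doc in docs:
        # _type is required on 5.x/6.x; drop it on 7.x+
        yield {"_index": "bulk-benchmark", "_type": "_doc", "_source": doc}

# Index the same batch at increasing bulk sizes and keep the fastest setting
# that stays stable over a longer run.
for chunk_size in (500, 1000, 2000, 5000):
    start = time.time()
    helpers.bulk(es, actions(), chunk_size=chunk_size, request_timeout=120)
    rate = len(docs) / (time.time() - start)
    print("chunk_size=%d -> %.0f docs/s" % (chunk_size, rate))
```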
@Christian_Dahlqvist
No, I couldn't find anything specific. In the monitoring display, total index memory is shown as 143 MB, while the total segment count per node is around 1000. The machine has a 32 GB heap and an 8-core CPU.