Elasticsearch Load Testing with Java Client

Hi,

I am using Elasticsearch 7.x to index my database data. I have written a geo-spatial query using the Java High Level REST Client to search Elasticsearch.

The Java client works fine with a small number of requests, but when I perform load testing with 1k requests the threads appear to deadlock and never get a search response.
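For context, the kind of geo-distance query I am running looks roughly like the sketch below; the index name, field name, radius, and page size here are placeholders rather than my actual mapping.

```java
import org.elasticsearch.action.search.SearchRequest;
import org.elasticsearch.action.search.SearchResponse;
import org.elasticsearch.client.RequestOptions;
import org.elasticsearch.client.RestHighLevelClient;
import org.elasticsearch.common.unit.DistanceUnit;
import org.elasticsearch.index.query.QueryBuilders;
import org.elasticsearch.search.builder.SearchSourceBuilder;

import java.io.IOException;

public class GeoSearchSketch {

    // Placeholder names: a "restaurants" index with a "location" geo_point field.
    public static SearchResponse nearestRestaurants(RestHighLevelClient client,
                                                    double lat, double lon) throws IOException {
        SearchSourceBuilder source = new SearchSourceBuilder()
                .query(QueryBuilders.geoDistanceQuery("location")
                        .point(lat, lon)
                        .distance(5, DistanceUnit.KILOMETERS))
                .size(20);

        SearchRequest request = new SearchRequest("restaurants").source(source);

        // Synchronous search - this is the blocking call that shows up in the thread dump below.
        return client.search(request, RequestOptions.DEFAULT);
    }
}
```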

AWS Elasticsearch Service configuration:
1 cluster
1 node
1 index with default configuration

When I took a thread dump I saw the stack trace below:

"http-nio-8080-exec-11735" #11918 daemon prio=5 os_prio=0 tid=0x00007f4902cc3000 nid=0x7624 in Object.wait() [0x00007f4463b10000]
java.lang.Thread.State: WAITING (on object monitor)
at java.lang.Object.wait(Native Method)
at java.lang.Object.wait(Object.java:502)
at org.apache.http.concurrent.BasicFuture.get(BasicFuture.java:82)

  • locked <0x00007f51e780c5b8> (a org.apache.http.concurrent.BasicFuture)
    at org.apache.http.impl.nio.client.FutureWrapper.get(FutureWrapper.java:70)
    at org.elasticsearch.client.RestClient.performRequest(RestClient.java:244)
    at org.elasticsearch.client.RestClient.performRequest(RestClient.java:235)
    at org.elasticsearch.client.RestHighLevelClient.internalPerformRequest(RestHighLevelClient.java:1611)
    at org.elasticsearch.client.RestHighLevelClient.performRequest(RestHighLevelClient.java:1581)
    at org.elasticsearch.client.RestHighLevelClient.performRequestAndParseEntity(RestHighLevelClient.java:1551)
    at org.elasticsearch.client.RestHighLevelClient.search(RestHighLevelClient.java:1067)
    at ws.sdev.jahez.elasticsearch.RestaurantFinderServiceImpl.getNearestRestaurants(RestaurantFinderServiceImpl.java:100)
    at ws.sdev.jahez.elasticsearch.RestaurantFinderServiceImpl.getNearestRestaurants(RestaurantFinderServiceImpl.java:107)
    at com.mapview.controller.MobileAPIController.m_getNearestRestaurantV3(MobileAPIController.java:1549)
    at sun.reflect.GeneratedMethodAccessor357.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)

What does the query look like?
What is happening on the nodes when you run this test? Are you using Monitoring in Kibana?

This doesn't look like a deadlock; the threads are just waiting for a response. All(*) requests eventually receive a response, it just might take a while.

(*) technically you need to enable TCP keepalives to guarantee this.
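For reference, a minimal sketch of how TCP keepalive and request timeouts can be enabled when building the client; the host, port, and timeout values here are placeholders, and the keepalive probe interval itself is controlled by the operating system, not the client.

```java
import org.apache.http.HttpHost;
import org.apache.http.impl.nio.reactor.IOReactorConfig;
import org.elasticsearch.client.RestClient;
import org.elasticsearch.client.RestClientBuilder;
import org.elasticsearch.client.RestHighLevelClient;

public class ClientFactory {

    public static RestHighLevelClient build(String host) {
        RestClientBuilder builder = RestClient.builder(new HttpHost(host, 443, "https"))
                // Ask the OS to send TCP keepalive probes so a silently dropped
                // connection is eventually detected instead of waiting forever.
                .setHttpClientConfigCallback(http -> http.setDefaultIOReactorConfig(
                        IOReactorConfig.custom().setSoKeepAlive(true).build()))
                // Bound how long a single request may wait (values are placeholders).
                .setRequestConfigCallback(req -> req
                        .setConnectTimeout(5_000)      // ms to establish a connection
                        .setSocketTimeout(60_000));    // ms to wait for the response
        return new RestHighLevelClient(builder);
    }
}
```

With a socket timeout in place, a thread blocked like the one in your dump should eventually get an exception instead of waiting indefinitely.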

Every cluster has a limit to what it can handle, and it seems you have a fairly small cluster. I would recommend starting with a small number of threads and then gradually increasing it until the cluster is no longer able to provide the response times you need to support. That way you will know how many concurrent queries your cluster can support for that combination of query and data. Also be aware that using a lot of threads can result in a lot of context switching and could introduce limits in the client you are using to benchmark the cluster. Please read this blog post and watch the linked talk for some guidance.
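As a purely illustrative sketch (the thread counts, request count, and runQuery() stand-in are all hypothetical), a ramp-up harness along these lines shows the idea of increasing concurrency step by step until latency degrades:

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.atomic.AtomicLong;

public class RampUpLoadTest {

    public static void main(String[] args) throws InterruptedException {
        // Double the concurrency at each step and watch average latency.
        for (int threads = 8; threads <= 256; threads *= 2) {
            ExecutorService pool = Executors.newFixedThreadPool(threads);
            int requests = 1_000;
            AtomicLong totalNanos = new AtomicLong();
            CountDownLatch done = new CountDownLatch(requests);

            for (int i = 0; i < requests; i++) {
                pool.submit(() -> {
                    long start = System.nanoTime();
                    runQuery();
                    totalNanos.addAndGet(System.nanoTime() - start);
                    done.countDown();
                });
            }
            done.await();
            pool.shutdown();

            System.out.printf("threads=%d  avg latency=%.1f ms%n",
                    threads, totalNanos.get() / (double) requests / 1_000_000);
        }
    }

    private static void runQuery() {
        // Hypothetical stand-in: replace with the real Elasticsearch search call.
    }
}
```

Once the average (or better, a high percentile) latency at a given step exceeds what your application can tolerate, the previous step is roughly the concurrency your cluster can sustain for that query and data set.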
