Slow response on Elasticsearch search


(Akhilesh Anb) #1

I have an index named "akhil". I'm searching it with:

GET localhost:9200/akhil/_search

It's taking 20.4 seconds to get a response.

I want to know why this is happening. How can I debug this?
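
A first step is to separate server-side time from network and client time: every search response includes a "took" field with the number of milliseconds Elasticsearch itself spent on the request. A minimal check with curl (host and index name as in the post above):

    # Server-side time is reported in the "took" field of the response
    curl -s 'localhost:9200/akhil/_search?pretty' | grep '"took"'

    # Wall-clock time including network overhead, for comparison
    curl -s -o /dev/null -w 'total: %{time_total}s\n' 'localhost:9200/akhil/_search'

If "took" accounts for most of the wall-clock time, the slowness is inside Elasticsearch; otherwise look at the network or the client.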


(Akhilesh Anb) #2

@warkolm please help me


(Christian Dahlqvist) #3

Please provide more details. What search are you running? How much data are you querying? How many shards/indices are you querying? What is the hardware spec of your Elasticsearch cluster? Which version of Elasticsearch are you using?
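
For reference, most of these questions can be answered with the cat and cluster APIs (all available in 2.x):

    GET /_cat/indices?v        # index sizes, doc counts, shard configuration
    GET /_cat/shards?v         # per-shard size and node placement
    GET /_cat/nodes?v          # heap, RAM and load per node
    GET /_cluster/health?pretty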


(Akhilesh Anb) #4

We are using ES 2.4.1.
It's a 3-node cluster.
We are querying one index which contains 200 GB of data.
Even when querying a simple index of only 4 MB, it takes about 2 seconds to get a response. Response times used to be in the range of a few milliseconds (1-20); now some responses take more than 5 seconds.
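
When even a 4 MB index is slow to respond, the cause is usually cluster-wide rather than query-specific, for example garbage-collection pauses or saturated disks. One way to check heap pressure and GC activity (standard node stats endpoint; field names as in 2.x):

    GET /_nodes/stats/jvm?pretty

In the output, a heap_used_percent that stays above ~75% (the default CMS trigger in these versions) and growing old-generation collection times under gc.collectors.old are typical signs of GC-induced latency.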


(Christian Dahlqvist) #5

What has changed since you were experiencing low response times? What is the hardware spec of your Elasticsearch cluster?


(Akhilesh Anb) #6

CentOS, 8 GB RAM, 500 GB disk.


(Akhilesh Anb) #7

I executed

GET /_nodes/hot_threads

Why am I seeing these errors? Is anything wrong here? The output also changes every time:

::: {NODE_2}{kweNa7fvRk2SjPyFoGR7PA}{3.3.87.246}{3.3.87.246:9300}{master=true}
   Hot threads at 2017-02-02T14:37:11.611Z, interval=500ms, busiestThreads=3, ignoreIdleThreads=true:
   
    0.0% (100.5micros out of 500ms) cpu usage by thread 'elasticsearch[NODE_2][transport_client_timer][T#1]{Hashed wheel timer #1}'
     10/10 snapshots sharing following 5 elements
       java.lang.Thread.sleep(Native Method)
       org.jboss.netty.util.HashedWheelTimer$Worker.waitForNextTick(HashedWheelTimer.java:445)
       org.jboss.netty.util.HashedWheelTimer$Worker.run(HashedWheelTimer.java:364)
       org.jboss.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
       java.lang.Thread.run(Unknown Source)

::: {NODE_3}{bwtnkXdsR3y6EpKBaNWaRw}{3.3.87.247}{3.3.87.247:9300}{master=true}
   Hot threads at 2017-02-02T14:37:11.981Z, interval=500ms, busiestThreads=3, ignoreIdleThreads=true:

::: {NODE_1}{MoO58zcVSQOJ1jVrYPtegg}{3.3.87.245}{3.3.87.245:9300}{master=true}
   Hot threads at 2017-02-02T14:37:11.985Z, interval=500ms, busiestThreads=3, ignoreIdleThreads=true:
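
For what it's worth, output like this is not an error: with ignoreIdleThreads=true (the default), hot_threads only prints stack traces for threads that were actually busy during the 500ms sampling window, so a node section with no entries simply means nothing was busy when it was sampled; that is also why the output changes on every call. To catch the slow searches in the act, the sampling can be widened (these are standard hot_threads parameters) and the call run while a slow query is in flight:

    GET /_nodes/hot_threads?threads=10&interval=1s&type=cpu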

(Akhilesh Anb) #8

After a few minutes I'm getting this output:

::: {NODE_2}{kweNa7fvRk2SjPyFoGR7PA}{3.3.87.246}{3.3.87.246:9300}{master=true}
   Hot threads at 2017-02-02T14:42:16.648Z, interval=500ms, busiestThreads=3, ignoreIdleThreads=true:

::: {NODE_3}{bwtnkXdsR3y6EpKBaNWaRw}{3.3.87.247}{3.3.87.247:9300}{master=true}
   Hot threads at 2017-02-02T14:42:16.972Z, interval=500ms, busiestThreads=3, ignoreIdleThreads=true:

::: {NODE_1}{MoO58zcVSQOJ1jVrYPtegg}{3.3.87.245}{3.3.87.245:9300}{master=true}
   Hot threads at 2017-02-02T14:42:16.977Z, interval=500ms, busiestThreads=3, ignoreIdleThreads=true:

(Akhilesh Anb) #9

I've updated the post above with the latest output.


(Christian Dahlqvist) #10

You said you previously experienced good performance. What has changed since you were experiencing low response times? Is it data volumes? Type of queries? The number of users?


(Akhilesh Anb) #11

We are facing a latency issue with a match_all query. Users are seeing slow responses of up to ~8 seconds from the application.
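
For reference, the query shape in question (index name from the thread). A match_all matches every document, so its cost is dominated by how many hits are fetched and serialized rather than by matching:

    GET /akhil/_search
    {
      "query": { "match_all": {} },
      "size": 10
    }

Keeping size small and avoiding deep pagination normally keeps a match_all in the millisecond range, so multi-second responses again point at the cluster rather than the query.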


(system) #12

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.