Elasticsearch Search Query Taking Time

Hi,

I am facing a problem with Elasticsearch in our production environment. We currently have 3 Elasticsearch nodes in production with 27GB of memory each, for a total of 81GB of memory for the entire cluster. Of that 81GB, about 4% is dedicated to the filter cache.
My search queries are taking a very long time.

Index Activity
Indexing - Index: 0ms 1.43ms 1.67ms 1.93ms
Indexing - Delete: 0ms 0ms 0ms 0ms
Search - Query: 0ms 4172.88ms 4018.64ms 5483.46ms
Search - Fetch: 0ms 6.36ms 8.09ms 8.69ms
Get - Total: 0ms 0.49ms 0.58ms 0.7ms
Get - Exists: 0ms 0.49ms 0.58ms 0.7ms
Get - Missing: 0ms 0.19ms 0.19ms 0.32ms
Refresh: 0ms 43.87ms 50.41ms 57.94ms
Flush: 0ms 91.95ms 97.73ms 109.56ms

Please help me out, guys.

Have you looked at hot_threads? Are you monitoring your cluster for things like load, cache use, GC, etc.?
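For reference, hot threads can be pulled straight from the nodes API; a minimal sketch, assuming the cluster is reachable on localhost:9200 (host, port, and parameters are illustrative):

```shell
# Sample the five busiest threads on each node over a short interval.
# Adjust threads/interval to taste; the defaults are 3 threads over 500ms.
curl -s "localhost:9200/_nodes/hot_threads?threads=5&interval=500ms&type=cpu"
```

The output shows stack traces for the hottest threads per node, which usually points directly at the query phase or merge activity eating the CPU.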

Thanks for the reply.

My field eviction count is high.

Cache Activity
Field Size: 0.0 0.0 0.0 0.0
Field Evictions: 0 45,309 50,100 52,767
Filter Cache Size: 0.0 1.4GB 1.4GB 1.4GB
Filter Evictions: 0 per query 0.2 per query 0.2 per query 0.2 per query
ID Cache Size:
% ID Cache: 0% 19.9% 21.4% 16.3%

And my GC looks fine.

Memory
Total Memory: 14GB 27GB 27GB 27GB
Heap Size: 7GB 14GB 14GB 14GB
Heap % of RAM: 50.9% 50.8% 50.8% 50.8%
% Heap Used: 13.9% 58.8% 50% 59.9%
GC MarkSweep Frequency: 0 s 0 s 0 s 0 s
GC MarkSweep Duration: 0ms 0ms 0ms 0ms
GC ParNew Frequency: 0 s 0 s 0 s 0 s
GC ParNew Duration: 0ms 0ms 0ms 0ms
G1 GC Young Generation Freq: 0 s 0 s 0 s 0 s
G1 GC Young Generation Duration: 0ms 0ms 0ms 0ms
G1 GC Old Generation Freq: 0 s 0 s 0 s 0 s
G1 GC Old Generation Duration: 0ms 0ms 0ms 0ms
Swap Space: 0.0000 MB 0.0000 MB 0.0000 MB 0.0000 MB

Check hot threads as well.
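Given the field evictions you posted, the fielddata cache may also be worth looking at. A sketch, assuming Elasticsearch 1.x and a cluster reachable on localhost:9200 (the 60% limit is illustrative, not a tuned recommendation):

```shell
# Inspect current fielddata usage per node and per field.
curl -s "localhost:9200/_nodes/stats/indices/fielddata?fields=*&pretty"

# Dynamically lower the fielddata circuit breaker so oversized
# sort/aggregation requests fail fast instead of filling the heap.
curl -s -XPUT "localhost:9200/_cluster/settings" -d '{
  "transient": { "indices.breaker.fielddata.limit": "60%" }
}'
```

Note that `indices.fielddata.cache.size` itself is a static setting in 1.x and would need to go in elasticsearch.yml before a restart.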

Which version of Elasticsearch are you using? How much data do you have in the cluster? What is the average and max shard size? What is the specification of your hardware?

I am using Elasticsearch version 1.7.4. I have 20 shards with 1 replica, 3 data nodes, and 1 master node.
I have 20 crore documents in my Elasticsearch cluster. We have 3 Elasticsearch nodes in production with 27GB of memory each, for a total of 81GB for the entire cluster; about 4% of that is dedicated to the filter cache.
My CPUs are 4-core and utilization is at 92%, yet 14GB of memory is still free.

I do not understand. How large are your shards in GB? How much data do you have in them?

I have a total of 20 crore documents in my Elasticsearch cluster, and each node holds 27GB of data.

Please use units that are well recognised worldwide; it will make it a lot easier for people to understand and help out. I now understand that you have 200 million documents indexed. Is it correct that you have 27GB of indexed data and 27GB of RAM on each node in the cluster?
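For anyone skimming the thread, the back-of-the-envelope math (assuming the 27GB per node is index data and that the single replica roughly doubles the on-disk footprint) works out like this:

```shell
# 20 crore = 200 million documents; 20 primaries + 1 replica = 40 shard copies.
docs_millions=$((20 * 10))               # 1 crore = 10 million
total_data_gb=$((27 * 3))                # 27GB on each of 3 nodes
primary_data_gb=$((total_data_gb / 2))   # strip the replica copies
per_shard_gb=$((primary_data_gb / 20))   # average primary shard size
echo "${docs_millions}M docs, ~${primary_data_gb}GB of primaries, ~${per_shard_gb}GB per shard"
```

Under those assumptions the shards come out around 2GB each, which is small enough that shard size alone is unlikely to explain multi-second queries.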

What type of queries are you running that are taking that long?