Kibana field_stats request takes 20000 ms when the number of indices and shards is large

Hi team, I am seeing high response times for the field_stats request when I have around 240 shards across daily indices. Can you please suggest how I can reduce these response times with that many open shards?

How much heap does ES have?

ES has 7 data nodes with 30 GB of heap on each node.

How are you measuring this latency? Is this just for the field_stats call or for the following aggregation as well?

It's just the field_stats call. I tested a few options: first I kept all 240 shards open and ran the request from Kibana for the default 15-minute range, then I closed certain indices, reloaded the Kibana UI, and checked the field_stats response time in the Network tab of Firefox's Developer Tools.
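For reference, the request I see in the Network tab is a field_stats call that looks roughly like the sketch below (the logstash-* pattern and @timestamp field are assumptions standing in for my actual index pattern and time field):

    POST /logstash-*/_field_stats?level=indices
    {
      "fields": ["@timestamp"],
      "index_constraints": {
        "@timestamp": {
          "max_value": { "gte": "now-15m" },
          "min_value": { "lte": "now" }
        }
      }
    }

Kibana uses this to work out which daily indices overlap the selected time range before it runs the actual search.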

That sounds odd. What is the load on the cluster? Are you seeing a lot of long GC? Which version of Elasticsearch are you on?

I am using ES 2.1 and Kibana 4.3.0. The load on the cluster is roughly 25-30 GB of data indexed daily. I have not yet analysed the GC runs; instead I tried to reduce the number of shards per index: earlier I was using 5 shards per index, and now I have reduced it to 1 shard per index and kept creating daily indices.
Should this issue occur even with 240 open shards? I should also mention that I am using NAS storage. Could that cause an issue with the field_stats response?
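(For when I do get to the GC analysis: I plan to pull the JVM stats from each node with something like the sketch below, where localhost:9200 is a placeholder for one of my nodes, and look at the old-generation collection counts and times, plus any [gc][old] warnings in the Elasticsearch logs.)

    curl -s 'http://localhost:9200/_nodes/stats/jvm?pretty'
    # check jvm.gc.collectors.old.collection_count and collection_time_in_millis per node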

Hi team,
Please suggest what the options would be to resolve the slowness of field_stats.

Hi team, please suggest what could cause field_stats to be slow with a larger number of shards.

What sort of NAS storage is this?

It is NFSv4 storage.

But are you mounting something locally that ES then runs on?

No, I have not mounted anything locally. When I reduced the shards from 240 to 50, the field_stats response time came down drastically, from 20000 ms to 2700 ms.
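In case it helps, the 1-shard-per-daily-index setup I mentioned earlier is applied through an index template, roughly like the sketch below (the template name and the logstash-* pattern are placeholders for my actual ones), and old daily indices are then closed to keep the open shard count down:

    curl -XPUT 'http://localhost:9200/_template/daily_logs' -d '
    {
      "template": "logstash-*",
      "settings": {
        "number_of_shards": 1
      }
    }'

    # closing an old daily index takes its shard out of the open set, e.g.
    curl -XPOST 'http://localhost:9200/logstash-2016.11.01/_close'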

Then how is it set up?

Oh sorry, I thought you were asking if I had mounted anything else apart from my NFS. I have mounted the NFS share on my hypervisors, my Elasticsearch nodes are deployed in VMs created by those hypervisors, and Elasticsearch uses that same NFS mount for storing the indices.
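To confirm from the Elasticsearch side that the data really lives on that NFS mount, and to see how the shards are spread across the nodes, I look at the per-node filesystem stats and allocation (a sketch, with localhost:9200 as a placeholder):

    curl -s 'http://localhost:9200/_nodes/stats/fs?pretty'    # data path, mount and free space per node
    curl -s 'http://localhost:9200/_cat/allocation?v'         # shard count and disk usage per node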

I've recently opened an issue on GitHub to dig into this (https://github.com/elastic/kibana/issues/9386). We're seeing the same issue, but it takes upwards of 35 seconds to return the query. I strongly suspect it's our "warm" data nodes, which hold the majority of our data in old "closed" indexes; I think they're slower to respond, and this query seems to require all the nodes in the cluster to respond.

Thanks for the reference, I will follow the GitHub issue.