I have a 795 GB index (5 shards, 1 replica) with 783 million documents, each with about 35 fields. When I query the whole thing through Kibana, I get results back in about 90 seconds. My query/Kibana node has 16 GB of memory allocated to Elasticsearch. Other than trying to reduce my index size, I would like to understand how index size on disk translates into query result size in memory on the query node before being passed on to Kibana, and what hardware adjustments, scaling, or caching I could implement to get faster query responses.
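For context, here are the cache-related settings I understand are tunable in ES 2.x; this is just a sketch of what I have found in the docs, and the percentage values below are hypothetical, not what I currently run:

```yaml
# elasticsearch.yml (ES 2.x) — cache settings relevant to query performance.
# Values shown are illustrative examples, not recommendations.

# Node-level query cache (caches results of filter clauses); default is 10% of heap.
indices.queries.cache.size: 10%

# Fielddata cache, used when aggregating/sorting on analyzed string fields.
# Unbounded by default in 2.x, so capping it can prevent heap pressure.
indices.fielddata.cache.size: 20%
```

Am I right that these, plus the OS filesystem cache for Lucene segments, are the main memory layers involved?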
I am in the process of building a cluster with ES 2.3.4, Kibana 4.5, and Logstash 2.4.
Thanks in advance