Memory utilization - predicting 'out of heap space' errors

I've noticed search performance on some of the larger indices takes
8-10 seconds. For example, on an index with 160 million documents at
220 GB (not counting replication) with 15 shards, a phrase search on a
single text field consistently took ~8500 milliseconds (I change the
actual phrase each time to prevent cached results). There are no
other clients on the cluster. The query is a single phrase query with
a range filter on a field called postdate (the date of the document).
No facets, no sorting. The search type is query_and_fetch.

Try using a numeric_range filter for postdate instead of a plain range
filter. numeric_range is good for fields with very many unique terms
(as a datetime field typically has).
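The suggested change only swaps the filter type; a sketch, again with
the hypothetical "body" field and dates, and with the caveat that the
numeric_range filter is version-dependent:

```json
{
  "query": {
    "filtered": {
      "query": {
        "match_phrase": { "body": "some example phrase" }
      },
      "filter": {
        "numeric_range": {
          "postdate": { "from": "2011-01-01", "to": "2011-06-30" }
        }
      }
    }
  }
}
```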

clint