Hello:
I have a cluster that's encountering memory pressure. It's predominantly
tuned for write performance (logs). Occasionally we get a query sent in
that sorts the entire dataset by timestamp, which explodes our fielddata cache size.
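For context, the offending query is basically a match-everything search
sorted on the time field. A rough sketch of its shape, where "logs-*" and
"@timestamp" are stand-ins for our actual index pattern and field name:

import json
import urllib.request

# Roughly the shape of the problem query: match everything, sort by time.
# "logs-*" and "@timestamp" are stand-ins for our real index pattern/field.
body = json.dumps({
    "query": {"match_all": {}},
    "sort": [{"@timestamp": {"order": "desc"}}],
}).encode("utf-8")
req = urllib.request.Request(
    "http://localhost:9200/logs-*/_search",
    data=body,
    headers={"Content-Type": "application/json"},
)
print(urllib.request.urlopen(req).read().decode())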
It doesn't lead to OOM errors, but it does cause a lot of GC churn, so we
have to go in and manually clear the fielddata caches.
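Our manual cleanup is essentially the clear-cache API restricted to
fielddata, something like this (assuming the cluster answers on
localhost:9200):

import urllib.request

# Manual cleanup: clear only the fielddata cache, across all indices.
req = urllib.request.Request(
    "http://localhost:9200/_cache/clear?fielddata=true",
    method="POST",
)
print(urllib.request.urlopen(req).read().decode())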
I want to set indices.fielddata.cache.size to cap it (let's say at 30% for
the sake of argument).
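That is, in elasticsearch.yml on each node:

indices.fielddata.cache.size: 30%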
My question is: what will happen if the fielddata that query needs is more
than the 30% I've allocated for the node? Initial tests seem to indicate
that things "just work" (I get a response that looks valid, the caches are
kept in check, etc.), but I can't really validate that the results are
properly sorted (too much data).
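For what it's worth, this is roughly how I've been spot-checking the cache
during those tests, reading per-node fielddata size (and evictions) out of
the node stats; host and port are assumed:

import json
import urllib.request

# Spot-check per-node fielddata usage (and evictions) via node stats.
stats = json.load(urllib.request.urlopen("http://localhost:9200/_nodes/stats"))
for node_id, node in stats["nodes"].items():
    fd = node["indices"]["fielddata"]
    print(node.get("name", node_id),
          fd["memory_size_in_bytes"], "bytes /",
          fd.get("evictions", 0), "evictions")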
I know we really need more memory and/or more nodes. Just thought I'd ping
the experts to see if anyone knows for sure what to expect...
Thanks!
Andy O