Chatted with dakrone in IRC and wanted to copy the notes here.
On Thursday, March 27, 2014 3:46:39 PM UTC-7, schmichael wrote:
I was surprised to find recently that
while indices.fielddata.breaker.limit defaults to 80% of the
heap, indices.fielddata.cache.size is unbounded.
Is there ever a case where you would want breaker.limit < cache.size?
Yes, if the breaker isn't estimating cache usage accurately. The breaker
makes an initial estimate of a field's size before loading it, so the
actual amount of memory the loaded fielddata takes can differ from that
estimate.
The other case depends on how you want ES to behave. If you prefer
slow-but-finishes over fast-but-can-fail, always set the cache size lower
than the breaker limit. If you'd rather have pathological queries trip the
breaker and keep everything else fast, don't set a cache size at all and
just set the breaker.
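As a concrete sketch, those two behaviors map onto elasticsearch.yml settings something like the following (the 75%/80% values are illustrative, not recommendations):

```yaml
# Option 1: slow-but-finishes — bound the cache below the breaker limit
# so old fielddata is evicted before the breaker would ever trip.
indices.fielddata.cache.size: 75%
indices.fielddata.breaker.limit: 80%

# Option 2: fast-but-can-fail — leave cache.size unset (unbounded) and
# rely on the breaker alone to reject pathological queries outright:
#indices.fielddata.breaker.limit: 80%
```

Note that the cache size is a node-level setting read from elasticsearch.yml at startup, while the breaker limit can also be changed dynamically via the cluster settings API.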
Is there any reason to set cache.expire?
Didn't really get a compelling reason for this one. Seems minor, if it
matters at all.
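For reference, the time-based eviction knob looks like this in elasticsearch.yml (the 10m value is just an example). If cache.size is already set, size-based eviction generally makes it unnecessary:

```yaml
# Evict fielddata entries that haven't been used in the last 10 minutes.
# Usually redundant when cache.size is set, and it adds bookkeeping cost.
indices.fielddata.cache.expire: 10m
```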
Under what circumstances would you adjust the breaker.overhead?
If actual fielddata cache usage exceeds your breaker limit without the
breaker ever tripping, the breaker is underestimating cache usage. You can
adjust the overhead multiplier to make the estimate more accurate.
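A sketch of tuning that multiplier, assuming the 1.03 default of the time (the breaker multiplies its initial estimate by this constant before checking against the limit):

```yaml
# If real fielddata usage runs ~10% over the breaker's initial estimate,
# raise the overhead so the estimate is padded accordingly.
indices.fielddata.breaker.overhead: 1.10
```

To see how far off the estimate is, you can compare the breaker's estimated size against actual fielddata memory in the node stats API (something like `/_nodes/stats/breaker` versus `/_nodes/stats/indices/fielddata`).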