What is the difference between evicting field data and triggering the breaker?

We recently ran into a field data issue at work and came across these two (I assume common) settings for managing it on a cluster:

• indices.breaker.fielddata.limit: 60% # The hard limit, past which new values can no longer be added to the field data cache

• indices.fielddata.cache.size: 20% # The limit at which the cluster intentionally starts evicting values from the field data cache to make room for new ones, keeping your memory footprint low enough that you won't trip the breaker limit above (both sketched in elasticsearch.yml below)
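
For concreteness, here's a sketch of how the two look in elasticsearch.yml, using the example values above (not recommendations for any particular cluster):

```yaml
# Sketch of the relevant elasticsearch.yml entries, using the example values
# from this post.

# Circuit breaker: ceiling on heap used for field data; loads that would
# exceed it are rejected. Can also be updated dynamically via cluster settings.
indices.breaker.fielddata.limit: 60%

# Cache size: once field data passes this, older entries are evicted to make
# room. Static setting, so it has to be set per node in elasticsearch.yml.
indices.fielddata.cache.size: 20%
```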

First of all, did I understand those two correctly?

My question is this: what are the performance implications of surpassing each of those limits?

Crossing the breaker limit means no new values get to go into the field data cache, which means queries will be slower because they can't check field data?

Crossing the cache size limit means old values start getting evicted, which means more I/O on the machine, which means queries will be slower?

Those implications are the piece I'm not perfectly clear on. Any more in-depth explanation would be greatly appreciated!
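
In case it helps frame an answer, here's roughly how we've been watching this while debugging (a sketch assuming a recent Elasticsearch version and a local node; adjust the host as needed):

```bash
# Per-node field data memory use and eviction counts
curl -s 'localhost:9200/_nodes/stats/indices/fielddata?human&pretty'

# Breaker stats, including how many times the fielddata breaker has tripped
curl -s 'localhost:9200/_nodes/stats/breaker?human&pretty'
```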
