Sorry for bumping this, but I'm a little stumped here.
We have some nodes that are evicting fielddata cache entries for no apparent reason:
- we've set indices.fielddata.cache.size to 10gb
- the metrics from the node stats endpoint show that
indices.fielddata.memory_size_in_bytes never exceeded 3.6GB on any node
- the eviction rate is normally 0, but occasionally rises even though
the fielddata cache size is nowhere near 10GB
Attached is a plot of the max(indices.fielddata.memory_size_in_bytes) (red
line) and sum(indices.fielddata.evictions) (green line) across all nodes in
the cluster. Note that we create a fresh index every day that replaces an
older one (which explains the change in profile around midnight).
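For context, the two series in the plot are derived from the node stats API (GET /_nodes/stats/indices on 1.x, if I recall correctly), roughly like this; the node names and numbers below are made up for illustration:

```python
import json

# Hypothetical sample of a node-stats response; only the fielddata
# section is shown, and the values are invented for this example.
sample = json.loads("""
{
  "nodes": {
    "node-a": {"indices": {"fielddata": {"memory_size_in_bytes": 3865470566, "evictions": 0}}},
    "node-b": {"indices": {"fielddata": {"memory_size_in_bytes": 2362232012, "evictions": 142}}}
  }
}
""")

fielddata = [n["indices"]["fielddata"] for n in sample["nodes"].values()]

# Red line in the plot: max fielddata size across nodes.
max_size = max(f["memory_size_in_bytes"] for f in fielddata)
# Green line in the plot: total evictions across nodes.
total_evictions = sum(f["evictions"] for f in fielddata)

print(max_size, total_evictions)
```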
As you can see, the size (on any given node) never exceeds 3.6GB, yet even
at a lower value (around 2.2GB), some nodes start evicting entries from the
cache. Also, starting around Tue 8AM, the max(field cache size) becomes
erratic and jumps up and down.
I can't explain this behaviour, especially since we've been operating at
this volume and rate of documents for a while; this was not happening before.
Though it's possible that we're getting a higher volume of data, it doesn't
look substantially different from the past.
Under what circumstances will an ES node evict entries from its fielddata
cache? We're also deleting documents from the index; can this have an
impact? What else should I be looking at to find a correlation (GC time
does not seem to be correlated)?
On Friday, September 12, 2014 9:33:16 AM UTC-4, Philippe Laflamme wrote:
Forgot to mention that we're using ES 1.1.1
On Friday, September 12, 2014 9:21:23 AM UTC-4, Philippe Laflamme wrote:
I have a cluster with nodes configured with an 18GB heap. We've noticed a
degradation in performance recently after increasing the volume of data.
I think the issue is due to the fielddata cache doing evictions. Some
nodes are doing lots of them, some aren't doing any. This is explained by
our routing strategy, which results in a non-uniform document distribution.
Maybe we can improve this eventually, but in the meantime, I'm trying to
understand why the nodes are evicting cached data.
The metrics show that the field data cache is only ~1.5GB in size, yet we
have this in our elasticsearch.yml:
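The relevant line (the fielddata cache size setting mentioned earlier in the thread):

```yaml
indices.fielddata.cache.size: 10gb
```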
Why would a node evict cache entries when it should still have plenty of
room to store more? Are we missing another setting? Is there a way to tell
what the actual fielddata cache size limit is at runtime (maybe it did not
pick up the configuration setting for some reason)?
You received this message because you are subscribed to the Google Groups "elasticsearch" group.
To view this discussion on the web visit https://groups.google.com/d/msgid/elasticsearch/eafa1b0a-dbd6-4127-94d5-3733a3067bc7%40googlegroups.com.