I have been running a cluster of five ES servers, each with 48 GB of RAM. They have 32 GB allocated to the ES process and are configured with a static maximum of 60% of that heap for the index field cache. They have been running with this configuration for weeks.
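For reference, the relevant bits of the setup look roughly like this (exact setting names vary across ES versions, so treat this as a sketch rather than a verbatim copy of my config):

    # environment for the ES process
    ES_HEAP_SIZE=32g

    # elasticsearch.yml: cap the field data cache at 60% of the heap
    indices.fielddata.cache.size: 60%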
I have a JMX Ganglia plugin that has shown the servers hovering right around 28 GB of JVM heap usage for the last few weeks.
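(If it helps anyone compare, the same heap numbers can be read straight from the node stats API; depending on the version, something like:

    curl -s 'http://localhost:9200/_nodes/stats?pretty'

and then look at jvm.mem.heap_used_in_bytes for each node.)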
This morning, the memory usage of the cluster leader spiked to fill the entire allocated JVM heap (32 GB) and then spiked further to 35 GB a few minutes later (I am guessing the second jump happened while it was writing the heap dump).
The odd thing is that the amount of data being indexed at the time was below the average we had been indexing over the last few weeks. As far as I can tell, there were no unconventional index or search requests at the time.
The logs don't show anything helpful - just a bunch of GCs starting at the time of the memory spike and then a bunch of out of memory exceptions (nothing before that).
Given that the index field cache size is capped (and I have verified this with various tests over the last few weeks), I am not sure what is causing this sudden spike in memory.
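(In case anyone wants to check the same thing on their own cluster: the field cache/fielddata numbers are exposed through the node stats API, though the exact endpoint differs between ES versions; on 1.x it is along the lines of:

    curl -s 'http://localhost:9200/_nodes/stats/indices/fielddata?pretty'

)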
Could optimizing or flushing cause such a spike?
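(By optimizing and flushing I mean the standard APIs, roughly:

    curl -XPOST 'http://localhost:9200/myindex/_flush'
    curl -XPOST 'http://localhost:9200/myindex/_optimize?max_num_segments=1'

where myindex stands in for our actual index names.)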
This is what the top few largest memory objects from Eclipse MAT look like:
Thanks in advance for any help,