I am facing a problem with my ES cluster, version 1.7.3.
I have 10 nodes, each with 30 GB of RAM.
Scattered across those nodes are 2,480 shards.
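(In case it helps, here's roughly how I counted them; localhost:9200 is just a placeholder for one of my data nodes:)

    # total shard count (each line of _cat/shards is one primary or replica)
    curl -s 'localhost:9200/_cat/shards' | wc -l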
When I start the cluster, heap usage skyrockets to 27-29 GB, with absolutely no cache in use. I'm not doing any indexing, and I don't search or aggregate on my data either.
When I look at my nodes' stats, I can see that the old gen pool alone uses more than 28 GB.
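For reference, this is more or less how I'm reading those numbers; the old gen figure is the jvm.mem.pools.old section of the node stats (placeholder host again):

    # per-node JVM stats; look at jvm.mem.pools.old.used_in_bytes
    curl -s 'localhost:9200/_nodes/stats/jvm?pretty'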
If I start indexing, my cluster ends up collapsing: it spends more and more time in garbage collections that don't free any memory anyway.
I do not understand why I use so much memory.
Any insight would be greatly appreciated.
PS: please be patient with my approximate English.
And yes... I know I should upgrade, but frankly, if I had any say in the matter, I'd upgrade all the way to 2.3.
Anyway, I have news: the memory usage dropped back to a nice, healthy level after the cluster sat idle for a while.
Since my nodes run on virtual servers, and since one of the physical servers crashed during some heavy indexing (making us lose a bunch of our nodes all at once), I suppose a lot of memory was used on startup to get back to a stable state (replaying translog files and the like).
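If it happens again, I'll watch the recovery progress while the nodes come back up, with something like this (placeholder host):

    # per-shard recovery stage and progress
    curl -s 'localhost:9200/_cat/recovery?v'
    # cluster-wide view: initializing/relocating/unassigned shard counts
    curl -s 'localhost:9200/_cluster/health?pretty'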
So the issue seems to have solved itself.
Anyway, thanks again!