Prior to this issue we had an Elasticsearch cluster of 3 master and 2 data nodes. During the setup of those nodes there was no problem setting ES_HEAP_SIZE and seeing it reflected in the Java min/max memory arguments on startup. However, this last week we added two more data nodes (new machines) and a client node (an existing machine) and immediately started seeing problems with the cluster. After some digging, I found that setting ES_HEAP_SIZE the same way we had for the original nodes was not changing the JVM arguments, so these nodes were running with a minimum heap of 256m and a maximum of 1g.

On the original nodes, we set ES_HEAP_SIZE in the /etc/init.d/elasticsearch script and that worked just fine. On the new nodes, the same setting had no effect. Doing some research, I found suggestions that the value should be set in /etc/default/elasticsearch instead. I tried that on the new nodes and it also had no effect. I finally had to modify the /usr/share/elasticsearch/bin/elasticsearch.in.sh script directly to set the JVM memory values.
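For reference, the heap handling in elasticsearch.in.sh on 2.x looks roughly like the sketch below (simplified from memory; exact variable names and defaults may differ between package builds). If ES_HEAP_SIZE never reaches this script's environment, the 256m/1g fallbacks stand, which matches what we're seeing:

```shell
# Simplified sketch of the heap logic in elasticsearch.in.sh (2.x-era);
# not the verbatim script — variable names/defaults may vary by package build.
ES_HEAP_SIZE=8g               # the value we are trying to get picked up

if [ "x$ES_MIN_MEM" = "x" ]; then
    ES_MIN_MEM=256m           # fallback minimum we observed on the new nodes
fi
if [ "x$ES_MAX_MEM" = "x" ]; then
    ES_MAX_MEM=1g             # fallback maximum we observed on the new nodes
fi
if [ "x$ES_HEAP_SIZE" != "x" ]; then
    ES_MIN_MEM=$ES_HEAP_SIZE  # ES_HEAP_SIZE overrides both bounds
    ES_MAX_MEM=$ES_HEAP_SIZE
fi

JAVA_OPTS="$JAVA_OPTS -Xms${ES_MIN_MEM} -Xmx${ES_MAX_MEM}"
echo "$JAVA_OPTS"
```

So the variable itself does the right thing once it is in the environment of this script; the question is why it isn't getting there on the new nodes.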
We are using Elasticsearch 2.3.3, installed via apt-get on Ubuntu. All the old nodes are Ubuntu 14; of the new nodes, two are Ubuntu 16 and one is Ubuntu 14 (the problem happens on both). I did notice that the ES install on the new nodes had slightly different shell scripts than on the old nodes. The differences don't look like anything that would cause this issue, but they do lead me to believe that the 2.3.3 package had changed between the first time I installed it and this most recent time, which was odd to me.
My question is: how do I get the ES_HEAP_SIZE variable to be picked up? I would prefer to set it in /etc/default/elasticsearch, since we are installing via Ansible and modifying init.d or any of the bin scripts is really not a great practice.
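Concretely, the setting I'm trying to make work is just this one line (8g is only an example value for our data nodes; the packaged init script is supposed to source this file before launching the JVM):

```shell
# /etc/default/elasticsearch
# Heap size for the node — example value only; the init script should
# export this before starting the JVM.
ES_HEAP_SIZE=8g
```

After changing it I restart the service and check the running process's -Xms/-Xmx flags with something like `ps aux | grep elasticsearch`, which is how I noticed the 256m/1g defaults were still in effect.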