I know the general advice is to keep hardware as homogeneous as possible. However...
I have an existing 5-node ES cluster (v1.7.5) where each node has 32GB of RAM. ES gets 14GB of that for heap.

I would like to (for various reasons) add 2 additional nodes, the only difference being that they have 64GB of RAM. I would plan on setting the heap to 30GB on the new nodes, and they would also run version 1.7.5. The cluster would then have 7 nodes total: 5 at 32GB and 2 at 64GB.
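For what it's worth, the heap bump on the new nodes would just be the usual environment variable (a sketch assuming the packaged install; the defaults file path varies by distro):

```sh
# /etc/default/elasticsearch (Debian) or /etc/sysconfig/elasticsearch (RPM)
# 30g stays under the ~30.5GB compressed-oops cutoff
ES_HEAP_SIZE=30g
```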
I am not currently seeing memory issues, but the data is growing and I want to get ahead of things. I would also like to soak test this new hardware configuration without committing to it fully. What kinds of problems could I run into by adding 2 nodes with double the RAM/heap? I'm fine letting ES balance the data across all nodes equally - I just want to make sure nothing terrible will happen.
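My plan for the soak test is mostly watching how shards and disk get spread per node with the cat API (this is just how I'd check it):

```sh
# How many shards and how much disk each node is carrying
curl -s 'localhost:9200/_cat/allocation?v'
```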
Ultimately we want to move all nodes to 64GB. However, we'd rather introduce the nodes slowly for a few reasons:

1. We are trying new firmware on our RAID cards and want to make sure it doesn't cause issues.
2. The way our datacenter provisioning works, if we buy all 5 nodes at once they are likely to end up on the same rack. We do not want that, and the only knob we can turn on rack location is the date on which we buy the machines (see the config sketch after this list for how we'd tell ES about the racks).
3. We are unsure, given our workloads, whether the 30GB heap will actually help as much as we think.

For those reasons we want to add the nodes slowly.
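On point 2: once we know which rack each machine landed on, I assume we'd record that as a node attribute and turn on allocation awareness so replicas avoid sharing a rack (a sketch; `rack_id` is just a name we'd pick):

```yaml
# elasticsearch.yml on each node (value differs per rack)
node.rack_id: rack_a

# tell the allocator to spread copies of a shard across rack_id values
cluster.routing.allocation.awareness.attributes: rack_id
```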
So basically, I am fine with the data being spread evenly, so long as the even spread doesn't assume all the boxes have 64GB of RAM. This will be a temporary situation lasting a month or two at most. What is the worst that can happen here?
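While the mixed setup is running, I figure we'd keep an eye on heap pressure per node with something like this (assuming these cat columns are available in 1.7.5; names may differ slightly):

```sh
# Per-node heap usage vs. heap ceiling and physical RAM
curl -s 'localhost:9200/_cat/nodes?v&h=name,heap.percent,heap.max,ram.max'
```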