Enlarge Cluster with new nodes

Hi all
We are planning to enlarge our cluster by 16 nodes in the end.
The first 16 nodes are 4 years old, but still healthy and good enough!
But as you can imagine, 4 years ago we added 4 TB of disk space to each node.
Now we could do the same - 4 TB per node - but from my point of view, this is not the way to go. We need current hardware.
I know from several postings that different disk sizes are not the way to go!
What is a good way?

Allocation filtering?
Or I could build the new nodes with, let's say, 8 TB and create partitions of only 4 TB. As long as the old nodes are fine, we could add 8 TB to those nodes as well and increase the storage server by server.

Thank you very much for your input!
BR A

Ultimately, Elasticsearch will balance by shard count and disk usage.

Allocation filtering will work; you could make the new ones cold nodes to store more data.

If I were you,
I would just use the new nodes, with faster disks, as data-only nodes. Add them to the cluster and slowly change the existing nodes' role from data to master-only or cold storage.

I would even put 4 x 1 TB SSDs in a node rather than one 4 TB disk (if the hardware allows that many disks).
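If you go that route, the node role is set per node in elasticsearch.yml; a minimal sketch, assuming Elasticsearch 7.9+ where node.roles is available (the master-only variant for the old nodes is shown only for comparison):

# new nodes: data only
node.roles: [ data ]

# old nodes, once drained: master only
node.roles: [ master ]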

Hi @warkolm
Thank you for your reply. I think I will use allocation filtering based on hostname, so I can define which index will be stored on which nodes.
If we plan to upgrade the storage of the "older" nodes, I can set the shard allocation (in the index settings) on a per-node basis to control where the shards are stored.
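For hostname-based filtering, the built-in _host attribute can be used in the index settings; a minimal sketch (the index name and hostname are only examples):

PUT my-index/_settings
{
  "index.routing.allocation.require._host": "new-node-01"
}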

You should probably abstract that to a different level, otherwise that's a lot of management.

What do you mean by "abstract that to a different level"?
Yes, with an allocation filter based on hostname, I have to set the right hostname in each index's settings... It can be scripted, but yes, a lot of work.

Why not just use tags like big_disk and small_disk and use it that way?

You mean on old nodes node.attr.size: small_disks and on new nodes node.attr.size: big_disks?
In the end, my node configuration (elasticsearch.yml) should look like this?

node.attr.rack: north
node.attr.size: small_disks
node.attr.box_type: hot

But in any case, I have to set these small_disks and big_disks settings on all indices, right?
According to the documentation, I have to set it like this, individually on each index:

PUT test/_settings
{
  "index.routing.allocation.require.size": "small_disks",
  "index.routing.allocation.require.rack": "north",
  "index.routing.allocation.require.box_type": "hot"
}

I hope I understood your suggestion correctly.
Thank you very much

You may want to merge the concept of small and hot, but that's up to you.

Yes, you need to add the allocation tags to the indices. Use index templates, or even better, ILM (see "ILM: Manage the index lifecycle" in the Elasticsearch Reference [7.11]) to do this for you.
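An index template can carry those allocation settings so new indices pick them up automatically; a minimal sketch using a composable index template (available since 7.8; the template name and index pattern are just examples):

PUT _index_template/big_disks_template
{
  "index_patterns": ["logs-*"],
  "template": {
    "settings": {
      "index.routing.allocation.require.size": "big_disks"
    }
  }
}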

ILM sounds interesting, I have to take a look at it!

As we have hot (NVMe) and warm (SATA) disks in the current cluster, I would have to merge the "box size" into that.
The idea behind using the node name for the index allocation filter was that I would only have to change the settings in each index if a "small_disks" node were upgraded to a "big_disks" node.
But anyway, if a disk upgrade occurs, I have to drain the node, replace the hardware and restart. In this process, the node attribute can be changed as well...
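Draining a node before the hardware swap can be done with cluster-level allocation filtering; a minimal sketch (the node name is only an example):

PUT _cluster/settings
{
  "transient": {
    "cluster.routing.allocation.exclude._name": "old-node-07"
  }
}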

Thank you very much for your inputs!
