Hello,
We are trying to replace our VM nodes with bare metal. We keep data for a maximum of 2 weeks. Is there a way to add new bare metal nodes to the cluster and use only those for new indices? Once the 2 weeks have passed, we can remove all indices on the VM nodes and shut those nodes down. From what I've read, excluding a node with "cluster.routing.allocation.exclude._ip" will cause shards to be moved off it, which we'd like to avoid (why bother moving a lot of data when it can all be removed 2 weeks later?).
Thanks
You can use shard allocation filtering: assign labels to your nodes (this requires restarting the nodes to take effect) and matching labels to your indices, so that e.g. old indices only go to old nodes and new indices only go to new nodes.
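For example, something along these lines (the attribute name and values are just placeholders, and on Elasticsearch 5.x and later the node setting is spelled node.attr.server_type instead of node.server_type):

```
# elasticsearch.yml on each bare metal node
node.server_type: baremetal

# elasticsearch.yml on each VM node
node.server_type: vm
```

and then a matching filter on the indices that should live on the new hardware:

```
PUT /my_new_index/_settings
{
  "index.routing.allocation.require.server_type": "baremetal"
}
```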
Cheers
Luca
Thanks Luca.
So I guess I'd set something like this in each node's elasticsearch.yml, for 2 VM nodes and 2 bare metal nodes:
node.label: vm1
node.label: vm2
node.label: bm1
node.label: bm2
Indices are created every hour. In my index template settings I'd add "index.routing.allocation.include.label": "bm1,bm2".
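Something like this is what I have in mind; the template name and index pattern are just placeholders (and on Elasticsearch 6.x+ the field is index_patterns instead of template):

```
PUT /_template/hourly_indices
{
  "template": "logs-*",
  "settings": {
    "index.routing.allocation.include.label": "bm1,bm2"
  }
}
```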
Now how would I prevent any allocation from going to the VM nodes?
I could create another attribute, let's say called server_type, that would have a value of vm or baremetal, and use
cluster.routing.allocation.awareness.attributes: server_type together with cluster.routing.allocation.awareness.force.server_type.values: baremetal
Or if I use allocation awareness like this, do I even need to label each node differently? I still don't know how to prevent any allocation during rebalance or recovery from going to the VM nodes.
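In elasticsearch.yml that would look something like this (server_type and its values are just names I picked, and I'm not sure it actually keeps shards off the VM nodes):

```
# on every node: tag it with its hardware type instead of a unique label
node.server_type: baremetal        # "vm" on the VM nodes

# cluster-wide awareness settings I was thinking of
cluster.routing.allocation.awareness.attributes: server_type
cluster.routing.allocation.awareness.force.server_type.values: baremetal
```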
I am VERY new to Elasticsearch, so I apologize if I am asking basic questions.
Thanks!
Why not just add the new nodes, then use the exclude IP routing to move everything off the VMs to the physicals, and then remove the VMs altogether? Seems a lot simpler.
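Roughly like this, with your VM nodes' addresses in place of the placeholder IPs:

```
PUT /_cluster/settings
{
  "transient": {
    "cluster.routing.allocation.exclude._ip": "10.0.0.1,10.0.0.2"
  }
}
```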
Because I want to avoid moving data (a LOT of it). I'd rather not move off VMs, but just leave what's there, and once it's not needed any more (2 weeks later), delete it.
This is all you need to do
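If you want to double-check it's working, you can see which nodes the new indices' shards land on, and once the 2 weeks are up just delete the old indices (index names below are placeholders):

```
# check where the shards of the new hourly indices were allocated
GET /_cat/shards/logs-*?v

# after 2 weeks, drop the old indices still sitting on the VM nodes
DELETE /logs-2015.11.*
```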