Hello,
I'm facing a problem I can't get rid of.
I'm running a two-node Elasticsearch cluster, 1 master + 1 data node. Everything is running smoothly and all the indices are green, up and running (though with no replicas right now).
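For reference, by "green" I mean the cluster status from the standard health endpoint (host and port are the defaults I use everywhere below):
curl -XGET 'http://localhost:9200/_cluster/health?pretty'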
My current elasticsearch.yml configuration is:
path.data: /path/to/data
However, I wanted to add an additional path (an LVM volume) to expand Elasticsearch's disk space. I shut down the ES data node as follows:
curl -XPUT 'http://localhost:9200/_cluster/settings' -d '{"transient" : {"cluster.routing.allocation.enable" : "none"}}'
curl -XPOST 'http://localhost:9200/_cluster/nodes/_local/_shutdown'
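As a sanity check, the transient settings can be read back at any point with, e.g.:
curl -XGET 'http://localhost:9200/_cluster/settings?pretty'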
Then I changed the elasticsearch.yml config file as follows:
path.data: ["/path/to/data", "/path/to/newdata"]
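As far as I understand, the same setting can also be written as a comma-separated string; I have not tested that form, so take it only as an alternative syntax:
path.data: /path/to/data,/path/to/newdata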
I restarted the data node and then re-enabled allocation:
curl -XPUT 'http://localhost:9200/_cluster/settings' -d '{"transient" : {"cluster.routing.allocation.enable" : "all"}}'
The cluster immediately turned red with all the shards unassigned.
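The unassigned shards are visible when listing shards through the cat API, e.g.:
curl -XGET 'http://localhost:9200/_cat/shards?v'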
I shut the node down again, removed the second path, restarted the cluster, and everything went green again. Note that Elasticsearch had correctly detected the new data path: the total disk space reported was indeed the sum of the two folders.
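The combined disk space shows up, for example, in the node filesystem stats:
curl -XGET 'http://localhost:9200/_nodes/stats/fs?pretty'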
How can I add a second data path to the ES data node to increase disk space and have Elasticsearch recognize it correctly?
Many thanks in advance for your help!