node.max_local_storage_nodes was set to 3, now we can't upgrade to 8.0

So our node.max_local_storage_nodes was set to 3 even though we only ever needed 1. Now we are trying to upgrade from 7.17.0 to 8.0 and it won't allow us, even after removing node.max_local_storage_nodes from elasticsearch.yml. Is there a way to resolve this without losing the data?

"It won't allow us" is unfortunately too vague for us to give much help. How is it not allowing you? Is it reporting an exception or some other error? If so, could you share the exact message you're seeing?

The error is

java.lang.IllegalStateException: data path /usr/share/elasticsearch/data/nodes cannot be upgraded automatically because it contains data from nodes with ordinals [0, 1], due to previous use of the now obsolete [node.max_local_storage_nodes] setting. Please check the breaking changes docs for the current version of Elasticsearch to find an upgrade path

node.max_local_storage_nodes is no longer in elasticsearch.yml.

Ah, OK, thanks, that's clearer. It's complaining that /usr/share/elasticsearch/data/nodes/1 exists, which means that at some point in the past you did have multiple nodes running. Elasticsearch won't delete this directory itself, since it can't tell that it is now obsolete. The safest fix would be to migrate all your nodes to a new data path (e.g. /usr/share/elasticsearch/data-2), which will ensure there's no data left over from older nodes.
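You can see those ordinals for yourself by listing the directories under the data path. The `mktemp` scaffolding below just recreates the layout for illustration; on a real install you would list your actual `path.data`, e.g. `ls /usr/share/elasticsearch/data/nodes`:

```shell
# Recreate the layout the error describes, purely for illustration;
# on a real node, inspect your actual path.data instead.
DATA=$(mktemp -d)
mkdir -p "$DATA/nodes/0/indices" "$DATA/nodes/1/indices"

# Each entry here is a node ordinal; seeing both 0 and 1 means two
# nodes shared this data path at some point.
ls "$DATA/nodes"
```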

Removing the /nodes/1 directory allowed me to upgrade our test region, but I didn't check whether we had any data loss from removing that directory. I saw that /0/ and /1/ both contained the same indices folders. I don't know if this structure is caused by having replicas enabled or by that node.max_local_storage_nodes setting. We never intentionally ran multiple nodes on the same machine, so I was thinking the two folders were due to replicas. Anyway, when I do the QA upgrade I will check whether we lose any data when I remove /nodes/1/.
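For that check, something like this comparison of the two ordinals' index folders might help (the layout and the `abc123` folder name below are made up for illustration; matching folder names only show that both nodes held shard copies of the same indices, e.g. primaries on one and replicas on the other, not that either copy is complete):

```shell
# Illustrative layout: both ordinals hold a folder for the same
# index UUID ("abc123" is a made-up name).
DATA=$(mktemp -d)
mkdir -p "$DATA/nodes/0/indices/abc123" "$DATA/nodes/1/indices/abc123"

# List the index folders each ordinal holds, then compare the lists.
ls "$DATA/nodes/0/indices" > "$DATA/ord0.txt"
ls "$DATA/nodes/1/indices" > "$DATA/ord1.txt"
diff "$DATA/ord0.txt" "$DATA/ord1.txt" && echo "same index folders in both ordinals"
```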

The manual strongly recommends against modifying anything within the data path. It might be fine, but it might also come back to haunt you later. I'd recommend migrating everything to a separate path instead.

The only reason for Elasticsearch to create a /nodes/1/ path is if there was already a node running against /nodes/0/.

Do you have a link to the proper way to migrate from one data path to a new one?
If I just change the setting to a different path, then it'll just be a new path with no data, right?

Yes, the safe way to do this is to add a new node at the new data path, then set a shard allocation filter to vacate the old node and wait for the shards to finish moving. Once that's done and everything is at green health, you can safely remove the old data path.
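That sequence can be sketched with the cluster settings and cat APIs. Here `old-node` is a placeholder for the old node's `node.name`, and the curl calls are shown commented out since they need a live cluster to run against:

```shell
# Exclude the old node from shard allocation so its shards are
# moved onto the new node. "old-node" is a placeholder name.
EXCLUDE='{"persistent":{"cluster.routing.allocation.exclude._name":"old-node"}}'

# Against a live cluster you would run:
# curl -X PUT "localhost:9200/_cluster/settings" \
#      -H 'Content-Type: application/json' -d "$EXCLUDE"
#
# Then watch the shards drain off the old node and wait for green:
# curl "localhost:9200/_cat/shards?v"
# curl "localhost:9200/_cluster/health?wait_for_status=green&timeout=60s"

echo "$EXCLUDE"
```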

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.