Elasticsearch - Kubernetes - EFS - AWS - Node lock

Hello Team,
I have deployed ES 6.6 on Kubernetes.
I have mounted an existing EFS-backed Persistent Volume for the data nodes.
I'm getting the following error:

org.elasticsearch.bootstrap.StartupException: java.lang.IllegalStateException: failed to obtain node locks, tried [[/usr/share/elasticsearch/data]] with lock id [0]; maybe these locations are not writable or multiple nodes were started without increasing [node.max_local_storage_nodes] (was [1])?

How do I resolve this?

node.max_local_storage_nodes defines how many nodes are allowed to share the same data path. It is set to the default of 1, but you are trying to run more than one node on this path.
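
For reference, that setting lives in elasticsearch.yml. A minimal sketch follows; note that it only governs multiple nodes sharing a path on the same host, so raising it is usually the wrong fix for shared storage like EFS:

```yaml
# elasticsearch.yml — a minimal sketch, not a recommended production config.
# Allows up to 2 nodes on the SAME host to share this data path; it does NOT
# make it safe for nodes on different machines to write to one EFS mount.
path.data: /usr/share/elasticsearch/data
node.max_local_storage_nodes: 2
```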

Is your EFS mount shared between multiple machines that are all writing to the same data path? Elasticsearch uses filesystem-based locks to prevent more than one node accessing the same path at once, but I don't know whether EFS implements locking strictly enough to guarantee this. It's probably best to give each node its own data path to avoid any issues with concurrent access.
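
If you do stay on a single shared EFS volume, one way to carve out a per-pod subdirectory is Kubernetes' subPathExpr. A rough sketch of the relevant pod-spec fragment — POD_NAME, es-data, and efs-claim are placeholder names here, and a Kubernetes version recent enough to support subPathExpr is assumed:

```yaml
# Pod-spec fragment (sketch): each pod mounts its own subdirectory, named
# after the pod, inside the shared EFS-backed volume.
containers:
  - name: elasticsearch
    env:
      - name: POD_NAME
        valueFrom:
          fieldRef:
            fieldPath: metadata.name
    volumeMounts:
      - name: es-data
        mountPath: /usr/share/elasticsearch/data
        subPathExpr: $(POD_NAME)   # per-pod directory on the shared volume
volumes:
  - name: es-data
    persistentVolumeClaim:
      claimName: efs-claim         # assumed pre-existing EFS-backed PVC
```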

Thanks, David.
I'm trying to set up a dynamically scalable ES cluster on Kubernetes.
Yes, the data directory is the same for all the data nodes. If each data node is given its own data path, how is data kept available across the cluster if a pod fails?

I can't speak to Kubernetes in particular, but in general the preferred model is for each node to store its data on a local disk, and to rely on Elasticsearch to ensure that the data is properly replicated. This means that if you lose a node then you just start up another one and Elasticsearch rebuilds the data from the other shard copies as needed.

The official Elasticsearch Helm chart uses volume claims to achieve this, I think.
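
Roughly, that pattern looks like the trimmed sketch below; the names, replica count, and storage size are placeholders, not the chart's actual values. Each replica gets its own PersistentVolumeClaim, so no two nodes ever share a data path, and Elasticsearch handles replication across nodes itself:

```yaml
# Trimmed StatefulSet sketch: volumeClaimTemplates gives every replica a
# dedicated PersistentVolumeClaim, i.e. its own data path.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: elasticsearch-data
spec:
  serviceName: elasticsearch-data
  replicas: 3
  selector:
    matchLabels:
      app: elasticsearch-data
  template:
    metadata:
      labels:
        app: elasticsearch-data
    spec:
      containers:
        - name: elasticsearch
          image: docker.elastic.co/elasticsearch/elasticsearch:6.6.0
          volumeMounts:
            - name: data
              mountPath: /usr/share/elasticsearch/data
  volumeClaimTemplates:
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 30Gi
```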

Hello David,
It worked when I configured each data node with its own data path.
Thank you.

