NAS / NFS mount - Failed to obtain lock

This ELK stack is built on Docker containers, with Elasticsearch, Logstash, and Kibana each running as a service in a docker-compose file.

There is an issue when mounting the Elasticsearch data volume on an external NAS.

When I bind the container's data volume to a local data folder, there is no issue; it works fine.

However, if I bind the volume to the /data folder, which is mounted on a NAS drive, I get the exception below.
I am using an external disk to store the data because of space limitations on the boot drive; in my case the /data directory is the mount point for that drive.
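For reference, the two volume bindings I compared look roughly like this (the local path is a placeholder for a directory on the boot drive):

volumes:
  # works: Elasticsearch data on a local disk (path is a placeholder)
  - /var/lib/es-data:/usr/share/elasticsearch/data
  # fails with the lock error: /data is the NFS mount from the NAS
  # - /data:/usr/share/elasticsearch/data:z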

[2017-12-25T23:01:43,373][INFO ][o.e.n.Node ] [] initializing ...
[2017-12-25T23:01:43,651][WARN ][o.e.b.ElasticsearchUncaughtExceptionHandler] [] uncaught exception in thread [main]
org.elasticsearch.bootstrap.StartupException: java.lang.IllegalStateException: failed to obtain node locks, tried [[/usr/share/elasticsearch/data/docker-cluster]] with lock id [0]; maybe these locations are not writable or multiple nodes were started without increasing [node.max_local_storage_nodes] (was [1])?
at org.elasticsearch.bootstrap.Elasticsearch.init(Elasticsearch.java:127) ~[elasticsearch-5.6.3.jar:5.6.3]
at org.elasticsearch.bootstrap.Elasticsearch.execute(Elasticsearch.java:114) ~[elasticsearch-5.6.3.jar:5.6.3]
at org.elasticsearch.cli.EnvironmentAwareCommand.execute(EnvironmentAwareCommand.java:58) ~[elasticsearch-5.6.3.jar:5.6.3]
at org.elasticsearch.cli.Command.mainWithoutErrorHandling(Command.java:122) ~[elasticsearch-5.6.3.jar:5.6.3]
at org.elasticsearch.cli.Command.main(Command.java:88) ~[elasticsearch-5.6.3.jar:5.6.3]
at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:91) ~[elasticsearch-5.6.3.jar:5.6.3]
at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:84) ~[elasticsearch-5.6.3.jar:5.6.3]
Caused by: java.lang.IllegalStateException: failed to obtain node locks, tried [[/usr/share/elasticsearch/data/docker-cluster]] with lock id [0]; maybe these locations are not writable or multiple nodes were started without increasing [node.max_local_storage_nodes] (was [1])?
at org.elasticsearch.env.NodeEnvironment.<init>(NodeEnvironment.java:260) ~[elasticsearch-5.6.3.jar:5.6.3]
at org.elasticsearch.node.Node.<init>(Node.java:262) ~[elasticsearch-5.6.3.jar:5.6.3]
at org.elasticsearch.node.Node.<init>(Node.java:242) ~[elasticsearch-5.6.3.jar:5.6.3]
at org.elasticsearch.bootstrap.Bootstrap$6.<init>(Bootstrap.java:242) ~[elasticsearch-5.6.3.jar:5.6.3]
at org.elasticsearch.bootstrap.Bootstrap.setup(Bootstrap.java:242) ~[elasticsearch-5.6.3.jar:5.6.3]
at org.elasticsearch.bootstrap.Bootstrap.init(Bootstrap.java:360) ~[elasticsearch-5.6.3.jar:5.6.3]
at org.elasticsearch.bootstrap.Elasticsearch.init(Elasticsearch.java:123) ~[elasticsearch-5.6.3.jar:5.6.3]
... 6 more
Caused by: java.io.IOException: failed to obtain lock on /usr/share/elasticsearch/data/nodes/0
at org.elasticsearch.env.NodeEnvironment.<init>(NodeEnvironment.java:239) ~[elasticsearch-5.6.3.jar:5.6.3]
at org.elasticsearch.node.Node.<init>(Node.java:262) ~[elasticsearch-5.6.3.jar:5.6.3]
at org.elasticsearch.node.Node.<init>(Node.java:242) ~[elasticsearch-5.6.3.jar:5.6.3]
at org.elasticsearch.bootstrap.Bootstrap$6.<init>(Bootstrap.java:242) ~[elasticsearch-5.6.3.jar:5.6.3]
at org.elasticsearch.bootstrap.Bootstrap.setup(Bootstrap.java:242) ~[elasticsearch-5.6.3.jar:5.6.3]
at org.elasticsearch.bootstrap.Bootstrap.init(Bootstrap.java:360) ~[elasticsearch-5.6.3.jar:5.6.3]
at org.elasticsearch.bootstrap.Elasticsearch.init(Elasticsearch.java:123) ~[elasticsearch-5.6.3.jar:5.6.3]
... 6 more
Caused by: java.io.IOException: No locks available
at sun.nio.ch.FileDispatcherImpl.lock0(Native Method) ~[?:?]
at sun.nio.ch.FileDispatcherImpl.lock(FileDispatcherImpl.java:90) ~[?:?]
at sun.nio.ch.FileChannelImpl.tryLock(FileChannelImpl.java:1115) ~[?:?]
at java.nio.channels.FileChannel.tryLock(FileChannel.java:1155) ~[?:1.8.0_131]
at org.apache.lucene.store.NativeFSLockFactory.obtainFSLock(NativeFSLockFactory.java:114) ~[lucene-core-6.4.2.jar:6.4.2 34a975ca3d4bd7fa121340e5bcbf165929e0542f - ishan - 2017-03-01 23:23:13]
at org.apache.lucene.store.FSLockFactory.obtainLock(FSLockFactory.java:41) ~[lucene-core-6.4.2.jar:6.4.2 34a975ca3d4bd7fa121340e5bcbf165929e0542f - ishan - 2017-03-01 23:23:13]
at org.apache.lucene.store.BaseDirectory.obtainLock(BaseDirectory.java:45) ~[lucene-core-6.4.2.jar:6.4.2 34a975ca3d4bd7fa121340e5bcbf165929e0542f - ishan - 2017-03-01 23:23:13]
at org.elasticsearch.env.NodeEnvironment.<init>(NodeEnvironment.java:226) ~[elasticsearch-5.6.3.jar:5.6.3]
at org.elasticsearch.node.Node.<init>(Node.java:262) ~[elasticsearch-5.6.3.jar:5.6.3]
at org.elasticsearch.node.Node.<init>(Node.java:242) ~[elasticsearch-5.6.3.jar:5.6.3]
at org.elasticsearch.bootstrap.Bootstrap$6.<init>(Bootstrap.java:242) ~[elasticsearch-5.6.3.jar:5.6.3]
at org.elasticsearch.bootstrap.Bootstrap.setup(Bootstrap.java:242) ~[elasticsearch-5.6.3.jar:5.6.3]
at org.elasticsearch.bootstrap.Bootstrap.init(Bootstrap.java:360) ~[elasticsearch-5.6.3.jar:5.6.3]
at org.elasticsearch.bootstrap.Elasticsearch.init(Elasticsearch.java:123) ~[elasticsearch-5.6.3.jar:5.6.3]
... 6 more

The Elasticsearch version is 5.6.3.
The cause is "java.io.IOException: No locks available"

The Elasticsearch segment of the docker-compose file is as follows:

version: "3"
services:
elasticsearch:
container_name: elasticsearch
image: docker.elastic.co/elasticsearch/elasticsearch:5.6.3
ports:
- "9999:9200"
expose:
- 9200
volumes:
#This NAS mountpoint
- /data:/usr/share/elasticsearch/data:z
environment:
- xpack.security.enabled=false
- xpack.watcher.enabled=false
networks:
br0:
ipv4_address: 192.168.200.4
restart: unless-stopped

The root cause of the issue is:
Caused by: java.io.IOException: failed to obtain lock on /usr/share/elasticsearch/data/nodes/0

The NAS (Network Attached Storage) is mounted from the server over NFS.
File locking is not available on this NFS mount, whereas Elasticsearch needs to take a file lock on its data directory in order to store its indices.
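For reference, if Docker made the NFS mount itself via a named volume, the mount options (where the NFS version and locking behaviour are controlled) would live in the compose file. A rough sketch, with the NAS hostname and export path as placeholders:

services:
  elasticsearch:
    # other settings as in the compose file above
    volumes:
      - esdata:/usr/share/elasticsearch/data

volumes:
  esdata:
    driver: local
    driver_opts:
      type: nfs
      # placeholders: NAS address and export path
      o: "addr=nas.local,rw,nfsvers=4,hard"
      device: ":/export/elasticsearch"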

Any thoughts on how to resolve this?

You can try setting node.max_local_storage_nodes to more than 1.
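In the compose file above, that could be passed the same way as the xpack settings, for example:

environment:
  - xpack.security.enabled=false
  - xpack.watcher.enabled=false
  # assumes dotted environment variables are picked up as Elasticsearch settings,
  # the same way the xpack settings above are
  - node.max_local_storage_nodes=3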

I tried setting node.max_local_storage_nodes to 3, but I still get the same error.
