We are migrating from an old Elasticsearch version (1.4) to the latest (2.3.2). While running some migration tests I've hit a problem.
Our deployment is a bit unconventional: we have several indices, each one on its own mount point. When we create a new index we also create a mount point with the same name inside the data folder, so that ES stores that index's data there. The mount points are local disks, not remote storage.
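For context, our provisioning step looks roughly like this (a minimal sketch; the data directory, index name, and device path are illustrative, not our real values):

```python
import subprocess

import requests

ES = "http://localhost:9200"
# Illustrative layout; ours follows the same nodes/0/indices structure.
INDICES_DIR = "/var/lib/elasticsearch/data/elasticsearch/nodes/0/indices"

def provision_index(name, device):
    """Mount a dedicated local volume where ES will keep the index's data,
    then create the index through the REST API."""
    mount_point = f"{INDICES_DIR}/{name}"
    subprocess.run(["mkdir", "-p", mount_point], check=True)
    subprocess.run(["mount", device, mount_point], check=True)
    requests.put(f"{ES}/{name}").raise_for_status()

# Example: provision_index("index1", "/dev/vg0/index1")
```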
This works quite well in ES 1.4, but in the latest version we run into trouble when we try to delete an index. When we send the delete call to Elasticsearch, the index folder is still mounted, so it cannot be removed (you get a "resource busy" error). The old version didn't print any errors: it just deleted the folder's contents, left the empty folder behind, and continued.
With the latest version we get several exceptions:
java.nio.file.FileSystemException: Device or resource busy (which was expected)
and:
LockObtainFailedException[Can't lock shard [index-1][0], timed out after 5000ms];
at org.elasticsearch.index.IndexService.createShard(IndexService.java:389)
at org.elasticsearch.indices.cluster.IndicesClusterStateService.applyInitializingShard(IndicesClusterStateService.java:601)
at org.elasticsearch.indices.cluster.IndicesClusterStateService.applyNewOrUpdatedShards(IndicesClusterStateService.java:501)
at org.elasticsearch.indices.cluster.IndicesClusterStateService.clusterChanged(IndicesClusterStateService.java:166)
at org.elasticsearch.cluster.service.InternalClusterService.runTasksForExecutor(InternalClusterService.java:610)
at org.elasticsearch.cluster.service.InternalClusterService$UpdateTask.run(InternalClusterService.java:772)
at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.runAndClean(PrioritizedEsThreadPoolExecutor.java:231)
at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.run(PrioritizedEsThreadPoolExecutor.java:194)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
After this the cluster goes into a Red state and everything stops working.
Is there anything we can do to solve this? Maybe an older version that keeps the old behaviour?
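One thing we may try as a workaround, sketched below and completely untested: close the index so ES releases its open files, unmount the volume, and only then issue the delete. The paths are illustrative.

```python
import subprocess

import requests

ES = "http://localhost:9200"
INDICES_DIR = "/var/lib/elasticsearch/data/elasticsearch/nodes/0/indices"

def delete_index(name):
    # Close the index first so ES releases its open files on the volume.
    requests.post(f"{ES}/{name}/_close").raise_for_status()
    # Release the mount so the folder itself is no longer busy...
    subprocess.run(["umount", f"{INDICES_DIR}/{name}"], check=True)
    # ...then let ES delete the now-empty index folder.
    requests.delete(f"{ES}/{name}").raise_for_status()
```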
Before creating the index we create a mount point in the data folder (.../data/index1), then we create the index in ES and it just writes to the mounted folder at that path...
It looks like it should write everything (excluding snapshots) to the same folder, data/elasticsearch/nodes/0/indices/foo.
I realize that this is not guaranteed officially, but it's been doing it in our cluster for quite some time too...
Still, the issue we have shouldn't be related to that; it never tries to write anywhere other than where we expect. The problem comes when we delete the index. After failing to delete the folder, the whole cluster breaks down and complains about not being able to obtain the lock.
That is a single index called foo.

[quote="Jose_A_Garcia, post:7, topic:50209"]
Still, the issue we have shouldn't be related to that; it never tries to write anywhere other than where we expect. The problem comes when we delete the index. After failing to delete the folder, the whole cluster breaks down and complains about not being able to obtain the lock.
[/quote]
You are using ES in a way it was never designed for. I don't know what the exact issue is, but I'd suggest you stop doing this and just use a disk (or multiple disks) for everything.
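For the multiple-disk case, ES can stripe data across several local disks natively via `path.data` in elasticsearch.yml, with no per-index mounts needed (paths below are illustrative):

```yaml
# elasticsearch.yml -- example paths, adjust to your own mounts
path.data: ["/mnt/disk1/elasticsearch", "/mnt/disk2/elasticsearch"]
```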
That's what I mean: everything for index "foo" goes into that folder. So when I create a mount at "data/elasticsearch/nodes/0/indices/foo" it works. Or it did; now it breaks when it tries to delete the folder.
Anyway, I realize that's not a common use case. I'll see if we can work around it.