Problem with index folders in ES 2.3.2

Hi,

We are migrating from an old Elasticsearch version (1.4) to the latest (2.3.2). We are running some tests and I'm hitting some problems.

Our deployment is a bit unconventional: we have several indexes, each one on a different mount point (when we create a new index we also create a mount point with the same name in the data folder, so that ES stores the data there). The mount points are local to the server, not remote storage.

This works quite well in ES 1.4, but in the latest version we have issues when we try to delete an index. When we send the delete call to Elasticsearch the index folder is still mounted, so it cannot be deleted (we get a "resource busy" error). The old version didn't print any errors; it just deleted the contents, left the folder behind, and continued.
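Roughly what we do, in case it isn't clear (the device and path names below are just examples, not our real ones):

```shell
# Illustrative sketch of our per-index mount setup. Device names,
# volume group, and the data path are made up for this example.
INDEX=index1
DATA=/var/lib/elasticsearch/data

mkdir -p "$DATA/$INDEX"
mount "/dev/mapper/vg0-$INDEX" "$DATA/$INDEX"   # one local volume per index

curl -XPUT "localhost:9200/$INDEX"              # ES then writes into the mount

# Later, with the volume still mounted:
curl -XDELETE "localhost:9200/$INDEX"
# In 2.3.2 the contents are removed but the directory itself cannot be,
# and we see: java.nio.file.FileSystemException: ... Device or resource busy
```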

With the latest we get several exceptions:
java.nio.file.FileSystemException: Device or resource busy (which was expected)

and:

LockObtainFailedException[Can't lock shard [index-1][0], timed out after 5000ms];
at org.elasticsearch.index.IndexService.createShard(IndexService.java:389)
at org.elasticsearch.indices.cluster.IndicesClusterStateService.applyInitializingShard(IndicesClusterStateService.java:601)
at org.elasticsearch.indices.cluster.IndicesClusterStateService.applyNewOrUpdatedShards(IndicesClusterStateService.java:501)
at org.elasticsearch.indices.cluster.IndicesClusterStateService.clusterChanged(IndicesClusterStateService.java:166)
at org.elasticsearch.cluster.service.InternalClusterService.runTasksForExecutor(InternalClusterService.java:610)
at org.elasticsearch.cluster.service.InternalClusterService$UpdateTask.run(InternalClusterService.java:772)
at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.runAndClean(PrioritizedEsThreadPoolExecutor.java:231)
at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.run(PrioritizedEsThreadPoolExecutor.java:194)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)

After this the cluster goes into a Red state and everything stops working.

Is there anything we can do to solve this? Maybe an older version that keeps the old behaviour?

Thanks!

How do you even do this? ES cannot store a specific index on a specific mount.

Before creating the index we create a mount in the data folder (.../data/index1), then we create the index in ES and it just writes to the mounted folder at that path...

Ok, this is a really bad idea. *Really, really* bad.

You cannot guarantee that ES will only write that index to that mount.

So that means that all the data for a particular index is not guaranteed to be under the same folder on a node?

From what I've seen, it creates a folder named after the index in the data directory and stores everything inside it. At least it did in the old versions.

Yes.

This has never been guaranteed behaviour.

Looking at this link:

It looks like it should write everything (excluding snapshots) to the same folder, data/elasticsearch/nodes/0/indices/foo.

I realize that this is not guaranteed officially, but it's been doing it in our cluster for quite some time too...

Still, the issue we have shouldn't be related to that; it never tries to write anywhere other than where we expect. The problem comes when we delete the index. After failing to delete the folder, the whole cluster breaks down and complains about not being able to obtain the lock.

That is a single index called foo.

[quote="Jose_A_Garcia, post:7, topic:50209"]
Still, the issue we have shouldn't be related to that; it never tries to write anywhere other than where we expect. The problem comes when we delete the index. After failing to delete the folder, the whole cluster breaks down and complains about not being able to obtain the lock.
[/quote]

You are using ES in a way it was never designed for. I don't know what the exact issue is, but I'd suggest you stop using this method and just use one disk (or several) for everything.

That's what I mean: everything for index "foo" goes into that folder. So when I create a mount at "data/elasticsearch/nodes/0/indices/foo" it works. Or it did; now it breaks when it tries to delete the folder.

Anyway, I realize that's not a common use case. I'll see if we can work around it.
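In case it helps anyone else who inherited a setup like this, the workaround we're thinking of trying is to make ES release the shard files before the volume goes away: close the index, unmount, then delete. Completely untested sketch, with made-up paths:

```shell
# Untested workaround sketch. The data path below is an example;
# whether closing the index releases the files cleanly on 2.3.2
# is an assumption we still need to verify.
INDEX=foo
MOUNT="/var/lib/elasticsearch/data/elasticsearch/nodes/0/indices/$INDEX"

curl -XPOST "localhost:9200/$INDEX/_close"   # stop ES from holding the files open
umount "$MOUNT"                              # volume can now be unmounted
curl -XDELETE "localhost:9200/$INDEX"        # directory is a plain folder again
```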