Hi all,
Has anyone tried using a hadoop-fuse mounted file system to store ElasticSearch index/meta data? Does ElasticSearch support this?
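For context, the data path points at the fuse mount, roughly like this in elasticsearch.yml (the mount point is just an example):

path:
    data: /mnt/hdfs/elasticsearch    # hadoop-fuse mount of HDFS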
With this setup I am getting the errors below:
org.elasticsearch.index.gateway.IndexShardGatewayRecoveryException: [order][0] failed recovery
at org.elasticsearch.index.gateway.IndexShardGatewayService$1.run(IndexShardGatewayService.java:228)
at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
at java.lang.Thread.run(Thread.java:619)
Caused by: org.elasticsearch.index.engine.EngineCreationFailureException: [order][0] Failed to open reader on writer
at org.elasticsearch.index.engine.robin.RobinEngine.start(RobinEngine.java:281)
at org.elasticsearch.index.shard.service.InternalIndexShard.start(InternalIndexShard.java:272)
at org.elasticsearch.index.gateway.local.LocalIndexShardGateway.recover(LocalIndexShardGateway.java:132)
at org.elasticsearch.index.gateway.IndexShardGatewayService$1.run(IndexShardGatewayService.java:177)
... 3 more
Caused by: java.io.IOException: Operation not supported
at java.io.RandomAccessFile.writeBytes(Native Method)
at java.io.RandomAccessFile.write(RandomAccessFile.java:466)
at org.apache.lucene.store.FSDirectory$FSIndexOutput.flushBuffer(FSDirectory.java:448)
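The bottom of the trace points at java.io.RandomAccessFile.writeBytes, which Lucene's FSDirectory uses to write index files. My understanding is that HDFS files are write-once/append-only, so a hadoop-fuse mount cannot support Lucene's write pattern. Here is a minimal sketch of what I think is going on; the class name, mount point, and file name are placeholders:

import java.io.IOException;
import java.io.RandomAccessFile;

/**
 * Minimal sketch of the write pattern Lucene's FSIndexOutput uses,
 * pointed at a hadoop-fuse mount. The path below is a placeholder.
 */
public class FuseWriteProbe {
    public static void main(String[] args) throws IOException {
        RandomAccessFile raf =
                new RandomAccessFile("/mnt/hdfs/es-test/probe.bin", "rw");
        try {
            raf.write(new byte[]{1, 2, 3}); // plain sequential write
            raf.seek(0);                    // rewind ...
            raf.write(new byte[]{4});       // ... and overwrite in place
            // On a hadoop-fuse mount I would expect one of these writes
            // to fail with "java.io.IOException: Operation not supported",
            // since HDFS files are write-once/append-only.
        } finally {
            raf.close();
        }
    }
}

If that is right, the mount would need full POSIX random-write semantics for Lucene to work on top of it.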
More generally, what are the guidelines for scaling to large data volumes in terms of disk space and memory? How does the hadoop gateway help here?
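For reference, this is how I understand the hdfs gateway from the elasticsearch-hadoop plugin is configured; the keys are my reading of the plugin docs, and the URI and path are placeholders:

# elasticsearch.yml -- hdfs gateway settings as I understand them
gateway:
    type: hdfs
    hdfs:
        uri: hdfs://namenode:8020        # assumed namenode address
        path: /elasticsearch/gateway     # assumed snapshot path in HDFS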
Thanks in advance for all your input.