I am creating an Elasticsearch index using Java with elasticsearch-2.2.0. I am stuck on an exception: "UnavailableShardsException".
I am using the Bulk API to create my index. Sometimes the index is created successfully; sometimes I get the above exception.
I am wondering why this exception occurs only sometimes. This is causing me a big problem.
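For context on what the Bulk API sends on each request, here is a minimal sketch of building a `_bulk` request body by hand (NDJSON: one action-metadata line followed by one document line). The index/type names and helper class are illustrative assumptions, not the poster's actual code:

```java
import java.util.ArrayList;
import java.util.List;

public class BulkBodyBuilder {
    // Builds the newline-delimited JSON body that the _bulk endpoint expects.
    // Index and type names here ("enduser"/"enduser") match the log excerpts
    // but the builder itself is a hypothetical sketch.
    public static String buildBulkBody(String index, String type, List<String> jsonDocs) {
        StringBuilder sb = new StringBuilder();
        for (String doc : jsonDocs) {
            // Action metadata line: tells Elasticsearch where the next line goes.
            sb.append("{\"index\":{\"_index\":\"").append(index)
              .append("\",\"_type\":\"").append(type).append("\"}}\n");
            // Document source line.
            sb.append(doc).append("\n");
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        List<String> docs = new ArrayList<>();
        docs.add("{\"name\":\"doc1\"}");
        docs.add("{\"name\":\"doc2\"}");
        System.out.print(buildBulkBody("enduser", "enduser", docs));
    }
}
```

Each document in the batch routes to one of the index's shards, which is why a single unallocated primary shard (here shard [3]) can fail part of an otherwise fine bulk request.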
Please be patient when waiting for an answer to your questions. This is a community forum and as such it may take some time before someone replies to your question. Not everyone on the forum is an expert in every area so you may need to wait for someone who knows about the area you are asking about to come online and have the time to look into your problem.
There are no SLAs on responses to questions posted on this forum. If you require help with an SLA on responses, you should look into purchasing a subscription package that includes support with an SLA, such as those offered by Elastic: https://www.elastic.co/subscriptions
Thank you! I checked the logs but I am not able to understand the exact problem. Please find the log file content below:
[2016-05-11 03:20:05,157][DEBUG][action.admin.indices.stats] [Crazy Eight] [indices:monitor/stats] failed to execute operation for shard [[enduser][3], node[g60DVR92SseMGM69vgbzXQ], [P], v[2], s[STARTED], a[id=Myk36WGeQ8S92exXvDcjcg]]
ElasticsearchException[failed to refresh store stats]; nested: AccessDeniedException[C:\elasticsearch-2.2.0\data\elasticsearch\nodes\0\indices\enduser\3\index_c_Lucene50_0.tim];
at org.elasticsearch.index.store.Store$StoreStatsCache.refresh(Store.java:1534)
at org.elasticsearch.index.store.Store$StoreStatsCache.refresh(Store.java:1519)
at org.elasticsearch.common.util.SingleObjectCache.getOrRefresh(SingleObjectCache.java:55)
at org.elasticsearch.index.store.Store.stats(Store.java:293)
at org.elasticsearch.index.shard.IndexShard.storeStats(IndexShard.java:665)
at org.elasticsearch.action.admin.indices.stats.CommonStats.&lt;init&gt;(CommonStats.java:134)
at org.elasticsearch.action.admin.indices.stats.TransportIndicesStatsAction.shardOperation(TransportIndicesStatsAction.java:165)
at org.elasticsearch.action.admin.indices.stats.TransportIndicesStatsAction.shardOperation(TransportIndicesStatsAction.java:47)
at org.elasticsearch.action.support.broadcast.node.TransportBroadcastByNodeAction$BroadcastByNodeTransportRequestHandler.onShardOperation(TransportBroadcastByNodeAction.java:409)
at org.elasticsearch.action.support.broadcast.node.TransportBroadcastByNodeAction$BroadcastByNodeTransportRequestHandler.messageReceived(TransportBroadcastByNodeAction.java:388)
at org.elasticsearch.action.support.broadcast.node.TransportBroadcastByNodeAction$BroadcastByNodeTransportRequestHandler.messageReceived(TransportBroadcastByNodeAction.java:375)
at org.elasticsearch.transport.TransportService$4.doRun(TransportService.java:350)
at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.nio.file.AccessDeniedException: C:\elasticsearch-2.2.0\data\elasticsearch\nodes\0\indices\enduser\3\index_c_Lucene50_0.tim
at sun.nio.fs.WindowsException.translateToIOException(WindowsException.java:83)
... 15 more
[2016-05-11 03:46:25,196][INFO ][cluster.metadata ] [Crazy Eight] [enduser] creating index, cause [api], templates , shards [5]/[1], mappings
[2016-05-11 03:46:25,222][WARN ][indices.cluster ] [Crazy Eight] [[enduser][2]] marking and sending shard failed due to [failed recovery]
[enduser][[enduser][2]] IndexShardRecoveryException[failed to recovery from gateway]; nested: EngineCreationFailureException[failed to create engine]; nested: AccessDeniedException[C:\elasticsearch-2.2.0\data\elasticsearch\nodes\0\indices\enduser\2\translog];
at org.elasticsearch.index.shard.StoreRecoveryService.recoverFromStore(StoreRecoveryService.java:254)
at org.elasticsearch.index.shard.StoreRecoveryService.access$100(StoreRecoveryService.java:56)
at org.elasticsearch.index.shard.StoreRecoveryService$1.run(StoreRecoveryService.java:129)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: [enduser][[enduser][2]] EngineCreationFailureException[failed to create engine]; nested: AccessDeniedException[C:\elasticsearch-2.2.0\data\elasticsearch\nodes\0\indices\enduser\2\translog];
at org.elasticsearch.index.engine.InternalEngine.&lt;init&gt;(InternalEngine.java:156)
at org.elasticsearch.index.engine.InternalEngineFactory.newReadWriteEngine(InternalEngineFactory.java:25)
at org.elasticsearch.index.shard.IndexShard.newEngine(IndexShard.java:1450)
at org.elasticsearch.index.shard.IndexShard.createNewEngine(IndexShard.java:1434)
at org.elasticsearch.index.shard.IndexShard.internalPerformTranslogRecovery(IndexShard.java:925)
at org.elasticsearch.index.shard.IndexShard.performTranslogRecovery(IndexShard.java:897)
at org.elasticsearch.index.shard.StoreRecoveryService.recoverFromStore(StoreRecoveryService.java:245)
... 5 more
One thing I do not understand: sometimes the data loads successfully, and sometimes it fails. I am wondering how that is possible. I am using the Bulk API to load the data.
Client-side exception:
[172]: index [enduser], type [enduser], id [AVS4BSH5OtU25o4f73iq], message [UnavailableShardsException[[enduser][3] primary shard is not active Timeout: [1m], request: [shard bulk {[enduser][3]}]]]
Is there any configuration that needs to be done?
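The client error says "primary shard is not active Timeout: [1m]", which is often transient: the shard is still recovering (here, failing because of the AccessDeniedException in the server log). While the root cause is fixed, one common client-side mitigation is to retry the bulk call with backoff. This is a generic sketch under that assumption; the method names are illustrative and the actual bulk call is passed in as a `Callable`:

```java
import java.util.concurrent.Callable;

public class RetryingBulk {
    // Retries a call (e.g. a bulk index request) up to maxAttempts times,
    // doubling the delay between attempts. Rethrows the last failure.
    public static <T> T retryWithBackoff(Callable<T> call, int maxAttempts, long initialDelayMs)
            throws Exception {
        long delay = initialDelayMs;
        Exception last = null;
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                return call.call();
            } catch (Exception e) {
                last = e;
                if (attempt < maxAttempts) {
                    Thread.sleep(delay);
                    delay *= 2; // exponential backoff between attempts
                }
            }
        }
        throw last;
    }
}
```

Retrying only papers over the symptom, though; if shards repeatedly fail recovery, the underlying file-access problem on the data directory still needs fixing.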
I'm seeing the same thing you are. An index will be fine for weeks and then one day we get access denied errors. Frustrating.
This post indicates it can be an issue on Windows where you open Explorer on a different machine and navigate to the network drive housing your ES data. I closed all my connections from other workstations to that network drive. Maybe it will work.
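That matches the server log: `AccessDeniedException` on files under the data directory (a `.tim` Lucene file and a translog) usually means another process on Windows, such as Explorer, a virus scanner, or a backup tool, is holding a handle. As a hedged diagnostic sketch, you can probe whether a file can be opened for writing, the same kind of access Elasticsearch needs (class and method names here are illustrative):

```java
import java.io.IOException;
import java.nio.channels.FileChannel;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public class WriteAccessProbe {
    // Returns true if the file can currently be opened for writing.
    // On Windows, a file held open by Explorer, antivirus, or a backup
    // tool typically fails here with AccessDeniedException, mirroring
    // the errors in the log excerpts above.
    public static boolean canOpenForWrite(Path file) {
        try (FileChannel ch = FileChannel.open(file, StandardOpenOption.WRITE)) {
            return true;
        } catch (IOException e) {
            return false; // AccessDeniedException lands here
        }
    }
}
```

Running a probe like this against the shard directories while the failures occur can help confirm whether an external process, rather than Elasticsearch itself, is locking the files.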