Elasticsearch is unable to recover

I once had a problem with Elasticsearch exceeding the open file limit. I have since raised the ulimit, but now the engine is unable to start. It keeps throwing this continuously, without end:

[2015-11-17 19:21:08,468][WARN ][cluster.action.shard     ] [elastic1] [logstash-retailash-webserver-2015.11.05][4] received shard failed for [logstash-retailash-webserver-2015.11.05][4], node[E5AgYhAXQ96MTxp2as07UA], [P], v[5935], s[INITIALIZING], a[id=rLkDMZWSSpCXPLPQCJDetg], unassigned_info[[reason=ALLOCATION_FAILED], at[2015-11-17T19:21:08.437Z], details[failed recovery, failure IndexShardRecoveryException[failed to recovery from gateway]; nested: EngineCreationFailureException[failed to create engine]; nested: FileAlreadyExistsException[/var/elasticsearch-2.0.0/data/elasticsearch/nodes/0/indices/logstash-retailash-webserver-2015.11.05/4/translog/translog-2.ckp]; ]], indexUUID [vSsc9xChQPizFakSCuBNrA], message [failed recovery], failure [IndexShardRecoveryException[failed to recovery from gateway]; nested: EngineCreationFailureException[failed to create engine]; nested: FileAlreadyExistsException[/var/elasticsearch-2.0.0/data/elasticsearch/nodes/0/indices/logstash-retailash-webserver-2015.11.05/4/translog/translog-2.ckp]; ]
[logstash-retailash-webserver-2015.11.05][[logstash-retailash-webserver-2015.11.05][4]] IndexShardRecoveryException[failed to recovery from gateway]; nested: EngineCreationFailureException[failed to create engine]; nested: FileAlreadyExistsException[/var/elasticsearch-2.0.0/data/elasticsearch/nodes/0/indices/logstash-retailash-webserver-2015.11.05/4/translog/translog-2.ckp];
        at org.elasticsearch.index.shard.StoreRecoveryService.recoverFromStore(StoreRecoveryService.java:258)
        at org.elasticsearch.index.shard.StoreRecoveryService.access$100(StoreRecoveryService.java:60)
        at org.elasticsearch.index.shard.StoreRecoveryService$1.run(StoreRecoveryService.java:133)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:745)
Caused by: [logstash-retailash-webserver-2015.11.05][[logstash-retailash-webserver-2015.11.05][4]] EngineCreationFailureException[failed to create engine]; nested: FileAlreadyExistsException[/var/elasticsearch-2.0.0/data/elasticsearch/nodes/0/indices/logstash-retailash-webserver-2015.11.05/4/translog/translog-2.ckp];
        at org.elasticsearch.index.engine.InternalEngine.<init>(InternalEngine.java:135)
        at org.elasticsearch.index.engine.InternalEngineFactory.newReadWriteEngine(InternalEngineFactory.java:25)
        at org.elasticsearch.index.shard.IndexShard.newEngine(IndexShard.java:1349)
        at org.elasticsearch.index.shard.IndexShard.createNewEngine(IndexShard.java:1344)
        at org.elasticsearch.index.shard.IndexShard.internalPerformTranslogRecovery(IndexShard.java:889)
        at org.elasticsearch.index.shard.IndexShard.performTranslogRecovery(IndexShard.java:866)
        at org.elasticsearch.index.shard.StoreRecoveryService.recoverFromStore(StoreRecoveryService.java:249)
        ... 5 more
Caused by: java.nio.file.FileAlreadyExistsException: /var/elasticsearch-2.0.0/data/elasticsearch/nodes/0/indices/logstash-retailash-webserver-2015.11.05/4/translog/translog-2.ckp
        at sun.nio.fs.UnixCopyFile.copy(UnixCopyFile.java:548)
        at sun.nio.fs.UnixFileSystemProvider.copy(UnixFileSystemProvider.java:253)
        at java.nio.file.Files.copy(Files.java:1227)
        at org.elasticsearch.index.translog.Translog.recoverFromFiles(Translog.java:304)
        at org.elasticsearch.index.translog.Translog.<init>(Translog.java:166)
        at org.elasticsearch.index.engine.InternalEngine.openTranslog(InternalEngine.java:188)
        at org.elasticsearch.index.engine.InternalEngine.<init>(InternalEngine.java:131)
        ... 11 more

You may need to delete this file, but if you do you may also lose data.
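If you go that route, a safer variant is to move the file aside rather than delete it outright, with the node stopped, so you keep the option of putting it back. A minimal sketch (the path is taken from the FileAlreadyExistsException in the log above; the systemd service name is an assumption and may differ on your install):

```shell
# Stop the node first so nothing is writing to the translog directory.
# (Service name assumed; use whatever manages your node.)
sudo systemctl stop elasticsearch

# Path copied from the FileAlreadyExistsException in the log above.
CKP=/var/elasticsearch-2.0.0/data/elasticsearch/nodes/0/indices/logstash-retailash-webserver-2015.11.05/4/translog/translog-2.ckp

# Move the stale checkpoint aside instead of deleting it,
# so it can be restored if recovery gets worse rather than better.
mv "$CKP" "$CKP.bak"

sudo systemctl start elasticsearch
```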


Thank you. Is there a way to know what data will be lost?

You can try looking in the file.
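The .ckp checkpoint itself is just a small binary header; the buffered operations live in the neighbouring translog-N.tlog generation files in the same directory, and since document sources are stored inline there, a crude peek with `strings` can surface recoverable JSON fragments. A rough sketch, assuming binutils `strings` is installed and reusing the directory from the log above:

```shell
# Translog directory taken from the exception path in the log above.
TRANSLOG_DIR=/var/elasticsearch-2.0.0/data/elasticsearch/nodes/0/indices/logstash-retailash-webserver-2015.11.05/4/translog

# The .tlog files are binary, but indexed document sources are embedded
# as plain JSON, so printable-string extraction shows roughly what is in them.
strings "$TRANSLOG_DIR"/*.tlog | less
```

This only gives a rough picture of which documents were still in the translog (i.e. not yet flushed to a Lucene segment); anything already flushed is safe regardless.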