Translog files corrupted, cluster failing to recover

After a system crash (disk issues) on a 2-node cluster (shard replication set to 0), some of the translog (tlog) files were corrupted or missing in many of my indices. Is there a way to recover those indices even with some tlog files missing?

At the moment, I'm getting the following message in the logs when restarting the nodes:

```
[2016-08-19 14:06:04,511][WARN ][indices.cluster ] [Hyperion] [[myindex][3]] marking and sending shard failed due to [failed recovery]
[myindex][[myindex][3]] IndexShardRecoveryException[failed recovery]; nested: IllegalStateException[translog file doesn't exist with generation: 2 lastCommitted: -1 checkpoint: 8 - translog ids must be consecutive];
at org.elasticsearch.index.shard.StoreRecoveryService$
at java.util.concurrent.ThreadPoolExecutor.runWorker(
at java.util.concurrent.ThreadPoolExecutor$
Caused by: java.lang.IllegalStateException: translog file doesn't exist with generation: 2 lastCommitted: -1 checkpoint: 8 - translog ids must be consecutive
at org.elasticsearch.index.translog.Translog.recoverFromFiles(
at org.elasticsearch.index.translog.Translog.(
at org.elasticsearch.index.engine.InternalEngine.openTranslog(
at org.elasticsearch.index.engine.InternalEngine.(
at org.elasticsearch.index.engine.InternalEngineFactory.newReadWriteEngine(
at org.elasticsearch.index.shard.IndexShard.newEngine(
at org.elasticsearch.index.shard.IndexShard.createNewEngine(
at org.elasticsearch.index.shard.IndexShard.internalPerformTranslogRecovery(
at org.elasticsearch.index.shard.IndexShard.performTranslogRecovery(
at org.elasticsearch.index.shard.StoreRecoveryService.recoverFromStore(
at org.elasticsearch.index.shard.StoreRecoveryService.access$100(
at org.elasticsearch.index.shard.StoreRecoveryService$
... 3 more
```

The problem occurred on Elasticsearch 2.1, and I tried to solve it by migrating to 2.3.5, but that didn't help.

Bad translogs are not really a version-specific issue. You can either process the original data again, or accept that this data is lost :frowning:
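
Before deciding what to reprocess, it can help to get a full list of which shards are stuck. One way is to pull the `_cat/shards` output and filter for unassigned shards. The snippet below is a sketch: the `localhost:9200` endpoint and the sample output are assumptions, not from your cluster.

```shell
# On a live cluster you would fetch the shard table, e.g. (hypothetical endpoint):
#   curl -s 'localhost:9200/_cat/shards?h=index,shard,prirep,state,unassigned.reason'
# Below we use a canned sample of that output to illustrate the filtering.
sample='myindex 3 p UNASSIGNED ALLOCATION_FAILED
myindex 2 p STARTED
otherindex 0 p STARTED'

# Keep only the shards that failed to come back up:
echo "$sample" | grep UNASSIGNED
```

This narrows the damage to specific `index`/`shard` pairs, so you only have to re-index the data covered by the failed shards rather than everything.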