Shard failing after a cluster restart

Today I restarted my cluster, and when it came back online I suddenly got a status:red from Marvel saying I had unassigned shards. I took a look at the logs and found the output pasted at the bottom of this post. You can see that the top of the log is where I restarted the server, and it immediately fails and throws an EOFException. I'm confused about how this happened. I only have one node in my cluster, so I always just run sudo service elasticsearch restart to reboot the server, and that usually works just fine. This time, however, something bad happened.
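For reference, here is roughly how I check the cluster state (a minimal sketch; it assumes the node is listening on localhost:9200, so adjust the host if yours differs):

```sh
# Overall cluster health -- this is where Marvel's status:red comes from
curl -s 'localhost:9200/_cluster/health?pretty'

# List every shard and its state; broken ones show up as UNASSIGNED
curl -s 'localhost:9200/_cat/shards?v' | grep UNASSIGNED
```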

Does anyone know why this occurred? Can I recover this shard? (I lost 20 million documents when it went missing.) And what can I do to prevent this from happening again?
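For the prevention part: I don't currently take snapshots. My understanding is that the snapshot API would look something like the sketch below (the repository name my_backup and the location path are made-up examples; the directory has to exist and be writable by the elasticsearch user, and newer releases also require it to be whitelisted via path.repo):

```sh
# Register a shared filesystem repository (name and path are hypothetical)
curl -XPUT 'localhost:9200/_snapshot/my_backup' -d '{
  "type": "fs",
  "settings": { "location": "/mount/backups/my_backup" }
}'

# Snapshot all indices and wait until it completes
curl -XPUT 'localhost:9200/_snapshot/my_backup/snapshot_1?wait_for_completion=true'
```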

UPDATE
I was able to find this issue, which is identical to what we are seeing (I am on 1.5.2 as well): https://github.com/elastic/elasticsearch/issues/11249

The only thing is, we have plenty of disk space, so I'm not sure how a full disk could be the real underlying exception here, which is what the developer who fixed that issue said it was.
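To rule disk space out, I checked it both at the OS level and as Elasticsearch reports it (same localhost:9200 assumption as above; the data path is the one from my log below):

```sh
# Free space on the data volume as the OS sees it
df -h /data/elasticsearch-data

# Per-node disk usage as Elasticsearch reports it
curl -s 'localhost:9200/_cat/allocation?v'
```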

[2016-05-19 17:58:07,892][INFO ][node ] [Death Adder] version[1.5.2], pid[51254], build[62ff986/2015-04-27T09:21:06Z]
[2016-05-19 17:58:07,892][INFO ][node ] [Death Adder] initializing ...
[2016-05-19 17:58:07,907][INFO ][plugins ] [Death Adder] loaded [marvel], sites [marvel]
[2016-05-19 17:58:11,705][INFO ][node ] [Death Adder] initialized
[2016-05-19 17:58:11,706][INFO ][node ] [Death Adder] starting ...
[2016-05-19 17:58:11,928][INFO ][transport ] [Death Adder] bound_address {inet[/0:0:0:0:0:0:0:0:9300]}, publish_address {inet[/10.1.0.4:9300]}
[2016-05-19 17:58:11,951][INFO ][discovery ] [Death Adder] skykicksearch/v2a0Um22RpyG4FQah3i_pg
[2016-05-19 17:58:15,725][INFO ][cluster.service ] [Death Adder] new_master [Death Adder][v2a0Um22RpyG4FQah3i_pg][skykicksearch][inet[/10.1.0.4:9300]], reason: zen-disco-join (elected_as_master)
[2016-05-19 17:58:15,852][INFO ][http ] [Death Adder] bound_address {inet[/0:0:0:0:0:0:0:0:9200]}, publish_address {inet[/10.1.0.4:9200]}
[2016-05-19 17:58:15,852][INFO ][node ] [Death Adder] started
[2016-05-19 17:58:16,958][INFO ][indices.breaker ] [Death Adder] Updating settings parent: [PARENT,type=PARENT,limit=15007979929/13.9gb,overhead=1.0], fielddata: [FIELDDATA,type=MEMORY,limit=16079978496/14.9gb,overhead=1.03], request: [REQUEST,type=MEMORY,limit=8575988531/7.9gb,overhead=1.0]
[2016-05-19 17:58:16,962][INFO ][indices.store ] [Death Adder] updating indices.store.throttle.max_bytes_per_sec from [20mb] to [200mb], note, type is [MERGE]
[2016-05-19 17:58:17,122][INFO ][gateway ] [Death Adder] recovered [34] indices into cluster_state
[2016-05-19 17:58:22,935][WARN ][indices.cluster ] [Death Adder] [[backup_v2][22]] marking and sending shard failed due to [failed recovery]
org.elasticsearch.index.gateway.IndexShardGatewayRecoveryException: [backup_v2][22] failed recovery
    at org.elasticsearch.index.gateway.IndexShardGatewayService$1.run(IndexShardGatewayService.java:162)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:745)
Caused by: org.elasticsearch.index.engine.EngineCreationFailureException: [backup_v2][22] failed to upgrade 3x segments
    at org.elasticsearch.index.engine.InternalEngine.<init>(InternalEngine.java:121)
    at org.elasticsearch.index.engine.InternalEngineFactory.newReadWriteEngine(InternalEngineFactory.java:32)
    at org.elasticsearch.index.shard.IndexShard.newEngine(IndexShard.java:1262)
    at org.elasticsearch.index.shard.IndexShard.createNewEngine(IndexShard.java:1257)
    at org.elasticsearch.index.shard.IndexShard.prepareForTranslogRecovery(IndexShard.java:784)
    at org.elasticsearch.index.gateway.local.LocalIndexShardGateway.recover(LocalIndexShardGateway.java:226)
    at org.elasticsearch.index.gateway.IndexShardGatewayService$1.run(IndexShardGatewayService.java:112)
    ... 3 more
Caused by: java.io.EOFException: read past EOF: NIOFSIndexInput(path="/data/elasticsearch-data/data/skykicksearch/nodes/0/indices/backup_v2/22/index/segments_7sg")
    at org.apache.lucene.store.BufferedIndexInput.refill(BufferedIndexInput.java:336)
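In case it helps anyone else in this situation: the last resort I'm considering is running Lucene's CheckIndex tool directly against the shard, since the EOFException points at a truncated segments_7sg file. To be clear, this is only a sketch, not an official recovery path: the jar location below is a guess for a package install of 1.5.2 (which ships Lucene 4.10.4), -fix permanently drops whatever it cannot read, and it may not help at all when the segments_N file itself is the damaged one. Copy the shard directory somewhere safe first.

```sh
# Stop the node before touching shard files on disk
sudo service elasticsearch stop

# Keep a copy of the broken shard before attempting anything destructive
cp -a /data/elasticsearch-data/data/skykicksearch/nodes/0/indices/backup_v2/22 \
      /tmp/backup_v2_22.bak

# Run CheckIndex without -fix first to see what it reports; add -fix only
# if losing the documents in any corrupt segments is acceptable
java -cp /usr/share/elasticsearch/lib/lucene-core-4.10.4.jar \
  org.apache.lucene.index.CheckIndex \
  /data/elasticsearch-data/data/skykicksearch/nodes/0/indices/backup_v2/22/index
```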