Frequently corrupted shards in index

Elasticsearch 1.6.2 with NEST lib 1.6.1

Index settings

{
   "suggestions": {
      "settings": {
         "index": {
            "creation_date": "1455208413594",
            "uuid": "GjqT1LSfT8Knx1GxISTvcg",
            "analysis": {
               "analyzer": {
                  "simple_analyzer": {
                     "type": "simple"
                  }
               }
            },
            "number_of_replicas": "1",
            "number_of_shards": "5",
            "refresh_interval": "30s",
            "version": {
               "created": "1060299"
            }
         }
      }
   }
}

Index mappings

{
   "suggestions": {
      "mappings": {
         "suggestion": {
            "properties": {
               "id": {
                  "type": "string",
                  "index": "not_analyzed",
                  "include_in_all": false
               },
               "searchedby": {
                  "type": "string",
                  "index": "not_analyzed",
                  "include_in_all": false
               },
               "suggest": {
                  "type": "completion",
                  "analyzer": "simple_analyzer",
                  "payloads": false,
                  "preserve_separators": true,
                  "preserve_position_increments": true,
                  "max_input_length": 50,
                  "context": {
                     "user": {
                        "type": "category",
                        "path": "searchedby",
                        "default": [
                           "orphans"
                        ]
                     }
                  }
               }
            }
         }
      }
   }
}

Scenario
I noticed timeouts and errors while indexing documents. Through the _segments endpoint I found that one shard was not responding. That shard was actually a primary.
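For reference, a quick way to see which copy of each shard is primary and on which node it lives (available in the 1.x cat API; the index name matches the settings above):

```
GET /_cat/shards/suggestions?v
```

The `state` column should show `STARTED` for healthy shards; a stuck shard typically stands out here before anything else does.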

In order to solve the issue, I stopped the node that held the unresponsive shard. Its replica on the other node was promoted to primary and a new replica was created. Then I restarted the stopped node, the shards were properly redistributed, and everything worked as it should.
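While the node is down and the replica is being promoted and rebuilt, the recovery progress can be followed per shard with the cluster health API:

```
GET /_cluster/health?level=shards
```

The cluster should go yellow while the new replica initializes and return to green once it is allocated.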

I must note that the same thing has happened 3-4 times on the same index. This index has been in production for more than 1.5 years. Let me also mention that we have more than one index (with different data and settings) on the same servers, and none of the others has encountered this issue.

An interesting detail about how I update documents in the problematic index is that I use the Bulk API with small Groovy scripts as part of the commands I send to Elasticsearch. Groovy scripts and bulk indexing are not used on any other index. Could this cause the issue?
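For context, a bulk scripted update of the kind described would look roughly like this in the 1.x bulk format (the field update and the document id are made up for illustration; the actual scripts differ):

```
POST /suggestions/suggestion/_bulk
{ "update": { "_id": "123" } }
{ "script": "ctx._source.searchedby += newuser", "params": { "newuser": "user42" }, "lang": "groovy" }
```

Each update action is a pair of lines: the action metadata, then the inline script with its parameters.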

Question
Any ideas on why this failure happens on this index? I'd be happy to provide additional information if necessary.

Infrastructure
3 servers in Azure - A5 size (2 cores, 14 GB RAM) - Ubuntu 14.04

Data in the index (approximate)
default routing - data uniformly distributed
size: 2.67Gi (5.36Gi)
docs: 8,436,794

Thank you in advance

Check your ES logs; there should be something in there about what is happening.
Also, consider upgrading. 1.6 is pretty old these days.