NoShardAvailableActionException after upgrading Elasticsearch to 2.3.4

After upgrading Elasticsearch, I tried to post some data into it, but I get the error message below and my cluster state went RED.
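For context, the kind of write being attempted is roughly like this (the index name matches the failing one; the type and document body are only placeholders):

$ curl -XPOST 'http://localhost:9200/prod1-2016.07.28/logs' -d '{
    "@timestamp" : "2016-07-28T11:38:00Z",
    "message" : "sample event"
}'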

Log:
[2016-07-28 11:38:09,641][DEBUG][action.search ] [hk-node-1] All shards failed for phase: [query]
[prod1-2016.07.28][[prod1-2016.07.28][4]] NoShardAvailableActionException[null]
at org.elasticsearch.action.search.AbstractSearchAsyncAction.start(AbstractSearchAsyncAction.java:129)
at org.elasticsearch.action.search.TransportSearchAction.doExecute(TransportSearchAction.java:115)
at org.elasticsearch.action.search.TransportSearchAction.doExecute(TransportSearchAction.java:47)
at org.elasticsearch.action.support.TransportAction.doExecute(TransportAction.java:149)
at org.elasticsearch.action.support.TransportAction.execute(TransportAction.java:137)
at org.elasticsearch.action.support.TransportAction.execute(TransportAction.java:85)
at org.elasticsearch.action.search.TransportMultiSearchAction.doExecute(TransportMultiSearchAction.java:63)
at org.elasticsearch.action.search.TransportMultiSearchAction.doExecute(TransportMultiSearchAction.java:39)
at org.elasticsearch.action.support.TransportAction.doExecute(TransportAction.java:149)
at org.elasticsearch.action.support.TransportAction.execute(TransportAction.java:137)
at org.elasticsearch.action.support.TransportAction.execute(TransportAction.java:85)
at org.elasticsearch.client.node.NodeClient.doExecute(NodeClient.java:58)
at org.elasticsearch.client.support.AbstractClient.execute(AbstractClient.java:359)
at org.elasticsearch.client.FilterClient.doExecute(FilterClient.java:52)
at org.elasticsearch.rest.BaseRestHandler$HeadersAndContextCopyClient.doExecute(BaseRestHandler.java:83)
at org.elasticsearch.client.support.AbstractClient.execute(AbstractClient.java:359)
at org.elasticsearch.client.support.AbstractClient.multiSearch(AbstractClient.java:612)
at org.elasticsearch.rest.action.search.RestMultiSearchAction.handleRequest(RestMultiSearchAction.java:74)
at org.elasticsearch.rest.BaseRestHandler.handleRequest(BaseRestHandler.java:54)
at org.elasticsearch.rest.RestController.executeHandler(RestController.java:205)
at org.elasticsearch.rest.RestController.dispatchRequest(RestController.java:166)
at org.elasticsearch.http.HttpServer.internalDispatchRequest(HttpServer.java:128)
at org.elasticsearch.http.HttpServer$Dispatcher.dispatchRequest(HttpServer.java:86)
at org.elasticsearch.http.netty.NettyHttpServerTransport.dispatchRequest(NettyHttpServerTransport.java:449)
at org.elasticsearch.http.netty.HttpRequestHandler.messageReceived(HttpRequestHandler.java:61)
at org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)
at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
at org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791)
at org.elasticsearch.http.netty.pipelining.HttpPipeliningHandler.messageReceived(HttpPipeliningHandler.java:60)
at org.jboss.netty.channel.SimpleChannelHandler.handleUpstream(SimpleChannelHandler.java:88)
at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
at org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791)
at org.jboss.netty.handler.codec.http.HttpChunkAggregator.messageReceived(HttpChunkAggregator.java:145)
at org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)
at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
at org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791)
at org.jboss.netty.handler.codec.http.HttpContentDecoder.messageReceived(HttpContentDecoder.java:108)
at org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)
at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
at org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791)
at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:296)
at org.jboss.netty.handler.codec.frame.FrameDecoder.unfoldAndFireMessageReceived(FrameDecoder.java:459)
at org.jboss.netty.handler.codec.replay.ReplayingDecoder.callDecode(ReplayingDecoder.java:536)
at org.jboss.netty.handler.codec.replay.ReplayingDecoder.messageReceived(ReplayingDecoder.java:485)

I upgraded Elasticsearch from 1.7.5 to 2.3.4, and there is no issue retrieving the old data that is already stored. I'm only unable to store new data because the cluster went into the RED state.

Can someone please help with this?

I removed the index that was in the "red" state and parsed the logs again. This time I got some more detail in the Elasticsearch logs.

Also, the new shards went into the unassigned state.
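To see exactly which shards are unassigned (and, where the column is available, the reason), something like this can be used:

$ curl 'http://localhost:9200/_cat/shards?v&h=index,shard,prirep,state,unassigned.reason'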

Indices:

[root@logstash conf.d]# curl http://localhost:9200/_cat/indices?pretty
yellow open prod2-2016.06.20 5 1   2 0 22.5kb 22.5kb
red    open prod1-2016.07.28 5 1
yellow open prod1-2016.06.20 5 1 347 0  2.3mb  2.3mb
yellow open .kibana          1 1   4 0 44.2kb 44.2kb
yellow open og-2016.07.09    5 1   1 0 41.1kb 41.1kb
yellow open og-2015.12.06    5 1   1 0   11kb   11kb
yellow open prod1-2016.06.17 5 1   2 0 58.4kb 58.4kb
yellow open prod1-2016.06.28 5 1   2 0 16.7kb 16.7kb
[root@logstash conf.d]#

Cluster health output (you can see that the newly created index's shards went unassigned; I'm not sure why):

[root@logstash conf.d]# curl http://localhost:9200/_cluster/health?pretty
{
"cluster_name" : "mycluster",
"status" : "red",
"timed_out" : false,
"number_of_nodes" : 1,
"number_of_data_nodes" : 1,
"active_primary_shards" : 31,
"active_shards" : 31,
"relocating_shards" : 0,
"initializing_shards" : 0,
"unassigned_shards" : 41,
"delayed_unassigned_shards" : 0,
"number_of_pending_tasks" : 0,
"number_of_in_flight_fetch" : 0,
"task_max_waiting_in_queue_millis" : 0,
"active_shards_percent_as_number" : 43.05555555555556
}
[root@logstash conf.d]#
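Worth noting (though it does not by itself explain the red status, which means primary shards are unassigned): with a single node and the default of 1 replica per index, the replica copies can never be allocated, so a certain number of unassigned shards is expected and those indices stay yellow. If that replica noise is unwanted, replicas can be dropped on the existing indices with something like:

$ curl -XPUT 'http://localhost:9200/_settings' -d '{
    "index" : { "number_of_replicas" : 0 }
}'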

Elasticsearch Log:
[2016-07-28 12:17:08,885][WARN ][action.bulk ] [hk-node-1] unexpected error during the primary phase for action [indices:data/write/bulk[s]], request [BulkShardRequest to [prod1-2016.07.28] containing [2] requests]
[prod1-2016.07.28] IndexNotFoundException[no such index]
at org.elasticsearch.cluster.routing.RoutingTable.shardRoutingTable(RoutingTable.java:108)
at org.elasticsearch.action.support.replication.TransportReplicationAction$ReroutePhase.doRun(TransportReplicationAction.java:461)
at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37)
at org.elasticsearch.action.support.replication.TransportReplicationAction$ReroutePhase$2.onNewClusterState(TransportReplicationAction.java:547)
at org.elasticsearch.cluster.ClusterStateObserver$ObserverClusterStateListener.clusterChanged(ClusterStateObserver.java:182)
at org.elasticsearch.cluster.service.InternalClusterService.runTasksForExecutor(InternalClusterService.java:628)
at org.elasticsearch.cluster.service.InternalClusterService$UpdateTask.run(InternalClusterService.java:772)
at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.runAndClean(PrioritizedEsThreadPoolExecutor.java:231)
at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.run(PrioritizedEsThreadPoolExecutor.java:194)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
[2016-07-28 12:17:38,997][INFO ][cluster.metadata ] [hk-node-1] [prod1-2016.07.28] creating index, cause [auto(bulk api)], templates [scalegdn_iprotecs], shards [5]/[1], mappings [service-prod1-api, default]
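The last line shows the index being auto-created from the template scalegdn_iprotecs, so it may be worth inspecting that template's settings and mappings:

$ curl 'http://localhost:9200/_template/scalegdn_iprotecs?pretty'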

Has no one hit this issue? I'm still unable to fix it. Can anyone help with this?

Any ideas?

Not sure if you ever got this figured out, but I had a very similar issue today. I'm pretty sure the problem was that the cluster setting cluster.routing.allocation.enable was set to none during the upgrade (as is usually done for rolling upgrades) and never set back. You can check with:

$ curl -XGET 'http://localhost:9200/_cluster/settings?pretty'
{
  "persistent" : {
    "cluster" : {
      "routing" : {
        "allocation" : {
          "disable_allocation" : "false",
          "enable" : "none"
        }
      }
    }
  },
  "transient" : { }
}

I changed it to "all" with:

# curl -XPUT localhost:9200/_cluster/settings -d '{
    "persistent" : {
        "cluster.routing.allocation.enable" : "all"
    }
}'

and everything started working again. -- Bud
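To confirm the recovery once allocation is re-enabled, re-checking the health and shard list (same endpoints as above) should show the unassigned count dropping and the status moving from red towards yellow/green:

$ curl 'http://localhost:9200/_cluster/health?pretty'
$ curl 'http://localhost:9200/_cat/shards?v' | grep UNASSIGNED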