Failed to process cluster event (put-mapping) within 30s

Currently I am running Elasticsearch in cluster mode with 6 nodes, and my logs are flooded with the error message below. We also create indexes on a daily basis, each with 10 shards (according to the template). If we change the shard count in the template, will that fix this issue for newly created indexes?
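For context, the shard count comes from a legacy index template roughly like the following (template name and values are illustrative, not our exact config); changing `number_of_shards` here would only apply to indexes created after the change, not existing ones:

```shell
# Hypothetical sketch: update the index template so that newly created
# daily indexes get fewer primary shards. Existing indexes keep their
# current shard count; only indexes matching the pattern that are
# created afterwards pick this up.
curl -X PUT "localhost:9200/_template/our_index_template" -H 'Content-Type: application/json' -d'
{
  "index_patterns": ["our_index_name-*"],
  "settings": {
    "number_of_shards": 1,
    "number_of_replicas": 1
  }
}'
```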

[2020-11-02T01:00:37,838][DEBUG][o.e.a.b.TransportShardBulkAction] [node-1-master] [our_index_name-2020.11.02][0] failed to execute bulk item (index) index {[our_index_name-2020.11.02][doc][xUJmh3UBIZmKIq_CnYeh], source[our_JSON]}
org.elasticsearch.cluster.metadata.ProcessClusterEventTimeoutException: failed to process cluster event (put-mapping) within 30s
        at org.elasticsearch.cluster.service.MasterService$Batcher.lambda$onTimeout$0( ~[elasticsearch-6.5.1.jar:6.5.1]
        at java.util.ArrayList.forEach( ~[?:1.8.0_161]
        at org.elasticsearch.cluster.service.MasterService$Batcher.lambda$onTimeout$1( ~[elasticsearch-6.5.1.jar:6.5.1]
        at org.elasticsearch.common.util.concurrent.ThreadContext$ ~[elasticsearch-6.5.1.jar:6.5.1]
        at java.util.concurrent.ThreadPoolExecutor.runWorker( [?:1.8.0_161]
        at java.util.concurrent.ThreadPoolExecutor$ [?:1.8.0_161]
        at [?:1.8.0_161]
[2020-11-02T01:22:42,481][ERROR][o.e.b.ElasticsearchUncaughtExceptionHandler] [node-1-master] fatal error in thread [elasticsearch[node-1-master][management][T#5]], exiting

Also, please refer to the cluster status:

#curl -X GET "localhost:9200/_cluster/health?pretty"
{
  "cluster_name" : "cluster_name",
  "status" : "red",
  "timed_out" : false,
  "number_of_nodes" : 6,
  "number_of_data_nodes" : 5,
  "active_primary_shards" : 17092,
  "active_shards" : 17426,
  "relocating_shards" : 0,
  "initializing_shards" : 12,
  "unassigned_shards" : 15854,
  "delayed_unassigned_shards" : 0,
  "number_of_pending_tasks" : 1758,
  "number_of_in_flight_fetch" : 0,
  "task_max_waiting_in_queue_millis" : 3709380,
  "active_shards_percent_as_number" : 52.34290520245104
}

Can I get some help on this?


Two things to have in mind:

  1. You have a very old version. You should at least upgrade to the latest 6.8 version.
  2. You probably have too many shards per node.

May I suggest you look at the following resources about sizing:
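With roughly 33,000 total shards (active plus unassigned, per your health output) spread over 5 data nodes, a quick way to see how shards are distributed is the `_cat` APIs; a minimal sketch:

```shell
# Show disk usage and shard count per data node.
curl -s "localhost:9200/_cat/allocation?v"

# Count how many shards are currently in the UNASSIGNED state.
curl -s "localhost:9200/_cat/shards?h=index,shard,prirep,state" | grep -c UNASSIGNED
```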


Hi dadoonet,

Thanks for the feedback.

This issue occurred suddenly; previously everything was functioning well. We are now seeing a large number of unassigned shards (more than 15,000). Is there any way to fix this by reducing or deleting those unassigned shards?

We also noticed that while the unassigned shards are initializing, the Elasticsearch cluster hangs and becomes unresponsive.

We are hoping to fix this without touching our existing configuration, since the issue appeared suddenly and everything was working well before.
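We can ask the cluster why a shard is stuck using the allocation-explain API (with an empty request body it reports on the first unassigned shard it finds):

```shell
# Explain why an unassigned shard is not being allocated.
curl -X GET "localhost:9200/_cluster/allocation/explain?pretty"
```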

That is often the problem with having an oversharded cluster - it works well for a while and at some point falls over. At that point it is often very hard to rectify the problem as the number of tasks and unassigned shards pile up and a lot of cluster state updates will be required to get it right, which often is a slow process. You should therefore look to dramatically reduce the number of shards and not continue with your current approach.
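As a sketch of what that reduction can look like (index names, the retention window, and the node name are illustrative assumptions, not instructions for your exact cluster): old daily indexes past retention can be deleted outright, and a daily index that no longer receives writes can be shrunk to a single primary shard:

```shell
# Delete daily indexes from a month that is past retention
# (assumes wildcard deletes are permitted on this cluster).
curl -X DELETE "localhost:9200/our_index_name-2020.09.*"

# Shrink a 10-shard index to 1 primary. The index must first be made
# read-only and have a copy of every shard on one node ("node-2" is a
# hypothetical node name).
curl -X PUT "localhost:9200/our_index_name-2020.11.01/_settings" -H 'Content-Type: application/json' -d'
{
  "settings": {
    "index.blocks.write": true,
    "index.routing.allocation.require._name": "node-2"
  }
}'
curl -X POST "localhost:9200/our_index_name-2020.11.01/_shrink/our_index_name-2020.11.01-shrunk" -H 'Content-Type: application/json' -d'
{
  "settings": {
    "index.number_of_shards": 1,
    "index.routing.allocation.require._name": null
  }
}'
```

Either way, the durable fix is the template change so that new daily indexes stop being created with 10 primaries each.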

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.