Elasticsearch timeouts when we restart Logstash

Recently we've been hitting a strange issue: when we restart some of our high-traffic Logstash servers, no new logs reach Elasticsearch for a significant amount of time. During that window, Elasticsearch produces a flood of log entries like this one:

[2016-08-22 20:35:12,043][DEBUG][action.admin.indices.mapping.put] [elastic-m-uscen-c-c001-n003] failed to put mappings on indices [[x-holding-logs-docker-2016.08.22]], type [logs-docker]
ProcessClusterEventTimeoutException[failed to process cluster event (put-mapping [logs-docker]) within 30s]
        at org.elasticsearch.cluster.service.InternalClusterService$2$1.run(InternalClusterService.java:343)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:745)

It keeps emitting these errors until, presumably, whatever request they represent finally goes through. The problem is that until it does, no logs make it into Elasticsearch.
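From what I understand, that exception means the master node could not apply the put-mapping update to the cluster state within the 30s timeout, which would suggest the cluster-state update queue on the master is backed up. If it helps with diagnosis, this is how we've been watching that queue while the errors occur (standard pending-tasks API; localhost:9200 stands in for one of our nodes):

    # Show cluster-state update tasks queued on the master;
    # put-mapping events triggered by incoming bulk requests show up here
    curl -s 'localhost:9200/_cluster/pending_tasks?pretty'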

If most of these templates and mappings already exist, why would restarting Logstash cause failures like this, and why would that block logs from going through?
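For reference, this is roughly how we've been checking that the template and mapping are already in place (index and type names are taken from the error above; the template name logstash is an assumption on my part, based on the default Logstash template name):

    # List the installed index template (template name is assumed here)
    curl -s 'localhost:9200/_template/logstash?pretty'

    # Confirm the mapping for the type in the error message already exists
    curl -s 'localhost:9200/x-holding-logs-docker-2016.08.22/_mapping/logs-docker?pretty'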

Thanks!