Why does logstash/elasticsearch re-put index mappings when I restart logstash?

Every time I restart Logstash, it appears to re-put a large number of index mappings. This turns out to be a problem for us because we have a fairly large cluster, so these calls take a while to process, which leads to two things: first, none of the logs processed through a Logstash server get indexed until the initial "put mapping" goes through (they get queued up in "pending tasks"); second, our cluster flickers from green to red until they have all been processed.
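(For reference, the "pending tasks" I mean are what the cluster pending tasks API reports; localhost:9200 below is just a placeholder for one of our nodes. During one of these episodes the output is dominated by the queued put-mapping tasks.)

    curl -s 'http://localhost:9200/_cluster/pending_tasks?pretty'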

If those mappings were already set on our indices, why would restarting Logstash kick off such a large batch of put-mapping calls?

Here is the error we see repeatedly for 10-30 minutes after a Logstash restart (the index name varies a lot):

[2016-08-26 02:03:57,558][DEBUG][action.admin.indices.mapping.put] [] failed to put mappings on indices [[]], type [logs-docker]
ProcessClusterEventTimeoutException[failed to process cluster event (put-mapping [logs-docker]) within 30s]
       	at org.elasticsearch.cluster.service.InternalClusterService$2$1.run(InternalClusterService.java:343)
       	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
       	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
       	at java.lang.Thread.run(Thread.java:745)

We ARE planning to break our cluster into smaller clusters connected by a tribe node in the future to reduce the latency/timeout issues; however, I'd still like to understand what's going on here.

Thanks!!

What version are you on?
How do you know it's LS (and if it is, then you should move this thread)?

I'm on Logstash 2.1.0 and ES 2.2.0. I'm not sure whether Logstash or ES is the cause; I just know that these issues directly correlate with restarting our Logstash instances.
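If it helps, our elasticsearch output looks roughly like this (hosts and index pattern below are placeholders, not our real values). I've been wondering whether setting manage_template => false would change the behaviour on restart, since, as I understand it, that skips the index-template install the plugin normally does at startup:

    output {
      elasticsearch {
        hosts           => ["es-node-1:9200"]      # placeholder for our ES nodes
        index           => "logs-%{+YYYY.MM.dd}"   # placeholder daily index pattern
        manage_template => false                   # assumption: skip template install at startup
      }
    }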