I am seeing the following errors when trying to reindex all of my 2016 indexes to get rid of fields with dots in them so I can upgrade to Elasticsearch 2.x. I get ArrayIndexOutOfBoundsException errors whether I target all of 2016 (logstash-2016.*) or a single month of indexes (logstash-2016.01.*), using Logstash against Elasticsearch 1.7.5.
Here are the two kinds of errors that get thrown:
A plugin had an unrecoverable error. Will restart this plugin.
  Plugin: <LogStash::Inputs::Elasticsearch hosts=>["lame-02"], index=>"logstash-2016.*", size=>10, scroll=>"5m", docinfo=>true, scan=>true, codec=><LogStash::Codecs::JSON charset=>"UTF-8">, query=>"{\"query\": { \"match_all\": {} } }", docinfo_target=>"@metadata", docinfo_fields=>["_index", "_type", "_id"], ssl=>false>
  Error: [500] {"error":"ArrayIndexOutOfBoundsException[null]","status":500} {:level=>:error}
A plugin had an unrecoverable error. Will restart this plugin.
  Plugin: <LogStash::Inputs::Elasticsearch hosts=>["lame-02"], index=>"logstash-2016.*", size=>10, scroll=>"5m", docinfo=>true, scan=>true, codec=><LogStash::Codecs::JSON charset=>"UTF-8">, query=>"{\"query\": { \"match_all\": {} } }", docinfo_target=>"@metadata", docinfo_fields=>["_index", "_type", "_id"], ssl=>false>
  Error: [500] {"error":"ArrayIndexOutOfBoundsException[-131071]","status":500} {:level=>:error}
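To narrow down whether the 500 comes from Elasticsearch itself rather than from the Logstash input, the same scan scroll can be driven by hand with curl. This is just a sketch of what I would try: the host and index pattern are placeholders from my setup, and the commands are echoed as a dry run (drop the echo to actually issue them).

```shell
#!/bin/sh
# Reproduce the scan scroll that the Logstash elasticsearch input performs,
# outside Logstash, to check whether Elasticsearch itself returns the 500.
# HOST and INDEX are placeholders for my setup.
HOST="client-02:9200"
INDEX="logstash-2016.01.*"

# Step 1: open a scan scroll (ES 1.x uses search_type=scan).
echo curl -s "http://${HOST}/${INDEX}/_search?search_type=scan&scroll=5m&size=10" \
  -d '{"query": {"match_all": {}}}'

# Step 2: fetch the first batch with the _scroll_id returned by step 1;
# in ES 1.x the raw scroll id is sent as the request body.
echo curl -s "http://${HOST}/_search/scroll?scroll=5m" -d "SCROLL_ID_FROM_STEP_1"
```

If the second request 500s with the same ArrayIndexOutOfBoundsException, the problem is on the Elasticsearch side and not in the plugin.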
Here is the config I am using with Logstash 2.2.2:
input {
  elasticsearch {
    hosts => "client-02"
    index => "logstash-2016.01.*"
    size => 1000
    scroll => "5m"
    docinfo => true
    scan => true
  }
}

filter {
  metrics {
    meter          => "events"
    add_tag        => "metric"
    flush_interval => 10
  }
}

output {
  if "metric" in [tags] {
    stdout {
      codec => line {
        format => "count: %{[events][count]} rate: %{[events][rate_1m]}"
      }
    }
  } else {
    elasticsearch {
      hosts           => "localhost"
      index           => "%{[@metadata][_index]}"
      document_type   => "%{[@metadata][_type]}"
      document_id     => "%{[@metadata][_id]}"
      flush_size      => 250
      idle_flush_time => 10
      workers         => 4
    }
  }
}
I was able to reindex logstash-2015.*, which spans 66 indexes and 179,323,407 documents, with this same config without issue. The logstash-2016.* total is 97 indexes and 262,633,593 documents, and a single month in 2016 (logstash-2016.01.*, 31 indexes and 81,231,923 documents) results in the same kind of errors. I also tried dropping the size from 1000 to 100 and the error persisted. I can reindex a single 2016 month at a time, but that is going to be a slow process.
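If it does come down to reindexing month by month, the runs can at least be scripted rather than launched by hand. A minimal dry-run sketch (the logstash invocation and config filename are hypothetical; in practice each run would use a copy of the config above with the input index pattern swapped in):

```shell
#!/bin/sh
# Kick off one reindex run per month so a failure only costs that month's run.
# "reindex.conf" is a placeholder name for a per-month copy of the pipeline
# above; this script only prints what it would run.
for month in 01 02 03 04 05 06 07 08 09 10 11 12; do
  pattern="logstash-2016.${month}.*"
  echo "would run: logstash -f reindex.conf  # with index => \"${pattern}\""
done
```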
Searching around, I see that this error might be related to https://github.com/elastic/elasticsearch/issues/7926, but it seems that was fixed already. Any help tracking down this issue is greatly appreciated.