That's because Elasticsearch expects all timestamps to be in UTC. Logstash simply mirrors this expectation, which is why daily index rollover happens at UTC midnight. If Elasticsearch receives timestamps that are not in UTC (and do not have a proper offset attached), it can and will cause problems: the rest of your time data will be in UTC, and the two will conflict.
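As a minimal sketch of what "a proper offset" means (the host, index name, and document are assumptions for illustration), here is a timezone-aware timestamp being indexed with the era-appropriate Python client. Because the offset is attached, Elasticsearch can normalize it to UTC on its own:

```python
from datetime import datetime, timedelta, timezone

from elasticsearch import Elasticsearch

es = Elasticsearch(['http://localhost:9200'])

# A local time in UTC-5; isoformat() emits "2016-03-01T09:30:00-05:00",
# which Elasticsearch normalizes and stores as 2016-03-01T14:30:00Z.
local = datetime(2016, 3, 1, 9, 30, tzinfo=timezone(timedelta(hours=-5)))

es.index(index='logstash-2016.03.01', doc_type='logs',
         body={'@timestamp': local.isoformat(), 'message': 'example event'})
```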
I understand the natural desire to have data compartmentalized into logical containers that make sense, and Elasticsearch provides ways to get around this particular UTC vs. local time conundrum. In the future, you may not even want date-stamped index names (though you're always free to use them if you choose). Have a look at the new Rollover API in the Elasticsearch documentation to see what I'm talking about. That approach does, however, depend on the field_stats API.
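A hedged sketch of the Rollover API in the same client (the alias name, index name, and conditions are illustrative, not prescriptive): you write through a fixed alias, and Elasticsearch swaps in a fresh index once a condition is met, so index boundaries no longer track UTC days at all.

```python
from elasticsearch import Elasticsearch

es = Elasticsearch(['http://localhost:9200'])

# Bootstrap: one concrete index behind a write alias.
es.indices.create(index='logs-000001',
                  body={'aliases': {'logs_write': {}}})

# Roll over when the current index is 7 days old or holds 10M documents.
es.indices.rollover(alias='logs_write', body={
    'conditions': {'max_age': '7d', 'max_docs': 10000000}
})
```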
The field_stats API helps isolate which indices contain the data you want to analyze. This is what Kibana does under the hood when you provide an index pattern: you select a time window in the picker, and it queries only those indices that have data within that window. After using field_stats to select the indices, you simply apply a range filter for the timeframe you want.
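Putting the two steps together, a sketch might look like the following (the window and index pattern are assumptions; field_stats is available through the Elasticsearch 5.x line). The `index_constraints` clause drops any index whose `@timestamp` range falls entirely outside the window, and the follow-up search hits only the survivors:

```python
from elasticsearch import Elasticsearch

es = Elasticsearch(['http://localhost:9200'])

window = {'gte': '2016-03-01T00:00:00Z', 'lt': '2016-03-08T00:00:00Z'}

# level=indices returns per-index stats instead of a cluster-wide rollup.
stats = es.field_stats(index='logstash-*', level='indices', body={
    'fields': ['@timestamp'],
    'index_constraints': {
        '@timestamp': {
            'max_value': {'gte': window['gte']},
            'min_value': {'lt': window['lt']},
        }
    }
})
indices = list(stats['indices'].keys())

# Query only the matching indices, with a range filter for the window.
results = es.search(index=','.join(indices), body={
    'query': {'range': {'@timestamp': window}}
})
```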
If index retention is your concern, Curator 4 has an age filter that can use the field_stats API, so nothing gets purged before you want it to be.
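Curator is normally driven by a YAML action file, but it also exposes a Python API. A hedged sketch of that age filter through the Python API (the method and argument names are my best reading of the curator module for this era; verify against your installed version, and treat the 30-day threshold as an example):

```python
import elasticsearch
import curator

client = elasticsearch.Elasticsearch(['http://localhost:9200'])

# Build the working list of indices, then filter by age using field_stats
# rather than the index name, keeping only indices whose newest document
# (max_value of @timestamp) is older than 30 days.
ilo = curator.IndexList(client)
ilo.filter_by_age(
    source='field_stats',      # derive age from the data, not the name
    direction='older',
    field='@timestamp',
    stats_result='max_value',  # newest document in each index
    unit='days',
    unit_count=30,
)

# Only the indices that survived the filter are deleted.
curator.DeleteIndices(ilo).do_action()
```

Because the filter looks at the actual document timestamps, an index whose name says it is 31 days old but which still holds recent data will not be purged early.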