I'm running it in what I consider a pretty standard way: Filebeat on our servers ships logs to Logstash, which then outputs to Elasticsearch. I'm using the logstash-* index pattern with daily indexes.
Recently I noticed that Logstash was mapping some fields of our HAProxy logs as text when they should be numeric. I'm using the default HAPROXYHTTP grok pattern, which seems like it should have set these fields to numeric, but that's another issue I suppose.
Anyway, I recently adjusted our Logstash config to mutate these fields to float. That part seems to be working, but I now have a mapping conflict in Kibana.
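For reference, the change I made looks roughly like this. The field names in the `convert` block are just examples; the actual fields come from whatever the HAPROXYHTTP pattern captures in your events, so verify them against your own documents:

```conf
filter {
  grok {
    match => { "message" => "%{HAPROXYHTTP}" }
  }
  # Convert the timing/byte fields that were being indexed as text.
  # Field names here are placeholders -- check your grok output.
  mutate {
    convert => {
      "time_request"  => "float"
      "time_duration" => "float"
    }
  }
}
```

Note that this only affects how new documents are indexed; the old daily indexes keep their original string mapping, which is where the conflict comes from.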
I've been doing some reading this morning on reindexing and, to be honest, it's not totally clear to me how it works. It sounds like I'd need to copy all my existing indexes to new ones? I'm not entirely sure how that would play out, because logstash-* seems "special" in some ways: Kibana and other tools expect and know how to use data in indexes named that way. Any help would be much appreciated.
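From what I've read, it would be something like the following per daily index, with the corrected mapping (or an index template) already in place for the destination index. The index names here are just examples:

```json
POST _reindex
{
  "source": { "index": "logstash-2017.10.01" },
  "dest":   { "index": "logstash-2017.10.01-fixed" }
}
```

The part I'm unsure about is what happens afterwards: presumably I'd delete the old index and then somehow get the new one picked up under the logstash-* pattern again.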
Another option I'm wondering about: if I just wait for the old indexes that have these fields as strings to age out, will that resolve my conflict? This is probably the lazy way of doing it, but I think it might also work? We only keep about 20 days of indexes, so it wouldn't be the end of the world to live with the mapping conflict for a little bit.
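In the meantime, I figure I can watch which indexes still carry the old string mapping with something like this (the field name is again just an example):

```json
GET logstash-*/_mapping/field/time_duration
```

Once every remaining daily index reports the float mapping, I'd expect the conflict in Kibana to clear after refreshing the index pattern.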