Old Elasticsearch Data & Upgrading Major Versions?

Hello,

This is the second time I'm attempting a major upgrade across my Elasticsearch, Logstash & Kibana stack.

Previously I did this around Elasticsearch 1.7.x and Logstash 1.5 -- I had to jettison all previous data (I'm not sure what version produced it, but it was older) to get Elasticsearch to start.

While this wasn't a huge deal, as the data was backed up and I had states to redeploy the older ELK stack, I am again attempting to move to the latest versions: Elasticsearch 2.x, Logstash 2.x, and Kibana 4.3.0. When testing on dev with the older data, Elasticsearch refuses to start:

Starting elasticsearch: Exception in thread "main" java.lang.IllegalStateException: unable to upgrade the mappings for the index [logstash-2015.11.29], reason: [Mapper for [timestamp] conflicts with existing mapping in other types:
[mapper [timestamp] cannot be changed from type [date] to [string]]]
Likely root cause: java.lang.IllegalArgumentException: Mapper for [timestamp] conflicts with existing mapping in other types:
[mapper [timestamp] cannot be changed from type [date] to [string]]
at org.elasticsearch.index.mapper.FieldTypeLookup.checkCompatibility(FieldTypeLookup.java:117)
at org.elasticsearch.index.mapper.MapperService.checkNewMappersCompatibility(MapperService.java:364)
at org.elasticsearch.index.mapper.MapperService.merge(MapperService.java:315)
at org.elasticsearch.index.mapper.MapperService.merge(MapperService.java:261)
at org.elasticsearch.cluster.metadata.MetaDataIndexUpgradeService.checkMappingsCompatibility(MetaDataIndexUpgradeService.java:329)
at org.elasticsearch.cluster.metadata.MetaDataIndexUpgradeService.upgradeIndexMetaData(MetaDataIndexUpgradeService.java:112)
at org.elasticsearch.gateway.GatewayMetaState.pre20Upgrade(GatewayMetaState.java:228)
at org.elasticsearch.gateway.GatewayMetaState.<init>(GatewayMetaState.java:87)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:422)
at <<>>
at org.elasticsearch.node.Node.<init>(Node.java:202)
at org.elasticsearch.node.Node.<init>(Node.java:129)
at org.elasticsearch.node.NodeBuilder.build(NodeBuilder.java:145)
at org.elasticsearch.bootstrap.Bootstrap.setup(Bootstrap.java:178)
at org.elasticsearch.bootstrap.Bootstrap.init(Bootstrap.java:285)
at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:35)
[FAILED]

Obviously this is some kind of mapping issue, but I'm not sure how to resolve it. I'm assuming it's a Logstash issue, i.e. the way it's mapping data? Just wondering: a) what exactly causes this, and is there a way to resolve it while keeping my old data? And b) if not, why not, and is it the fault of Logstash or Elasticsearch?

You should be running https://github.com/elastic/elasticsearch-migration before upgrading.
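
In case it's useful: the tool ships as a site plugin, so on a 1.x node it can be installed and opened roughly like this (a sketch only -- the plugin script path and exact install argument depend on your version and packaging):

    # assumes the stock ES 1.x plugin script; adjust paths for packaged installs
    ./bin/plugin -i elastic/elasticsearch-migration
    # then browse the report at:
    #   http://localhost:9200/_plugin/elasticsearch-migration/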

I've run the elasticsearch-migration tool, and it reported the following issues on every index:

Conflicting field mappings
Mapping for field nginx-access:timestamp conflicts with: nginx-error:timestamp. Check parameters: format, norms.enabled, type
Fields with dots
Dots in field names lead to ambiguous field resolution, in fields: logs:metrics.http.%{response}.count, logs:metrics.http.%{response}.rate_15m, logs:metrics.http.%{response}.rate_1m, logs:metrics.http.%{response}.rate_5m, logs:metrics.http.200.count,

I've resolved the issue with dots in the metrics field names. However, I'm stuck on the timestamp conflicts.
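
To see the conflict concretely, I've been pulling the per-type mapping for the field straight out of one of the affected daily indices with the get-field-mapping API (host and index name here are assumptions):

    curl -s 'http://localhost:9200/logstash-2015.11.29/_mapping/field/timestamp?pretty'

That returns the timestamp mapping once per type, so the date-vs-string mismatch between nginx-access and nginx-error shows up side by side.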

What is the correct way to set up a Logstash filter with respect to timestamping? Here is my basic config:

filter {
  if [type] == "nginx-access" {
    grok {
      match => { "message" => "%{NGINXACCESS}" }
    }
    # parse the grok-captured timestamp into @timestamp
    date {
      match => [ "timestamp", "dd/MMM/YYYY:HH:mm:ss Z", "YYYY-MM-dd HH:mm:ss,SSS", "dd-MM-YYYY HH:mm:ss" ]
    }
    geoip {
      source => "clientip"
    }
    if [agent] != "-" and [agent] != "" {
      useragent {
        source => "agent"
        target => "ua"
      }
    }
    metrics {
      meter => [ "http_%{response}" ]
      add_tag => "metric"
      flush_interval => "60"
    }
  }
}
filter {
  if [type] == "nginx-error" {
    grok {
      match => { "message" => "%{NGINXERROR}" }
    }
    # same timestamp handling as the access log
    date {
      match => [ "timestamp", "dd/MMM/YYYY:HH:mm:ss Z", "YYYY-MM-dd HH:mm:ss,SSS", "dd-MM-YYYY HH:mm:ss" ]
    }
    geoip {
      source => "clientip"
    }
    if [agent] != "-" and [agent] != "" {
      useragent {
        source => "agent"
        target => "ua"
      }
    }
    metrics {
      meter => [ "http_%{response}" ]
      add_tag => "metric"
      flush_interval => "60"
    }
  }
}

Seems I have a date entry per filter -- is this what's leading to the conflict?
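
One workaround I'm considering (a sketch, assuming the raw timestamp string isn't needed once the date filter has populated @timestamp) is dropping the captured field on a successful parse, so the two types stop writing different timestamp mappings into the same daily index:

    date {
      match => [ "timestamp", "dd/MMM/YYYY:HH:mm:ss Z", "YYYY-MM-dd HH:mm:ss,SSS", "dd-MM-YYYY HH:mm:ss" ]
      # remove_field is only applied when the parse succeeds,
      # so unparseable timestamps are kept for debugging
      remove_field => [ "timestamp" ]
    }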

And what are the mappings for those two fields? Do they have the same type?
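
You can pull them straight from one of the affected indices, for example (index name assumed):

    curl -s 'http://localhost:9200/logstash-2015.11.29/_mapping/nginx-access,nginx-error?pretty'

If the two types show different values for type, format, or norms on timestamp, that's exactly what the 2.0 upgrade check is rejecting.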