Hi guys,
I've just upgraded my Elasticsearch cluster with a full cluster restart.
I have two indices defined.
Now when I start my data nodes, I get an exception on each index: Field [_id] is defined twice in [myType].
How can I fix this?
I'm seeing this as well. This bulk indexing request now fails in 2.2.0:
{"index":{"_index":"test_5zogjdzthm_v2","_type":"test","_id":"obj_5"}}
{"id":"obj_5","created_at":"2016-02-11T11:32:43.184Z","updated_at":"2016-02-11T11:32:43.184Z","_type":"test"}
The fix is obvious: don't repeat the _type in the object.
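For reference, here is the same request with the stray "_type" removed from the source document (index, type, and id unchanged from the failing example above):

```json
{"index":{"_index":"test_5zogjdzthm_v2","_type":"test","_id":"obj_5"}}
{"id":"obj_5","created_at":"2016-02-11T11:32:43.184Z","updated_at":"2016-02-11T11:32:43.184Z"}
```

The document type already lives in the action line's "_type", so repeating it inside the document body is what collides with the metadata field.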
Yeah, I'm having the same error with mongoosastic; it's complaining about a duplicated "_id". https://github.com/mongoosastic/mongoosastic/issues/165
I have the same issue. I'm receiving a lot of logs like this:
[2016-03-02 15:23:56,427][WARN ][cluster.action.shard ] [esearch] [streetlib-dev3][2] received shard failed for [streetlib-dev3][2], node[6Hrw6K6mRKWbnl6L9YC8sw], [P], v[847], s[INITIALIZING], a[id=TK2N4o7yT-GCqkL_N_Rnew], unassigned_info[[reason=ALLOCATION_FAILED], at[2016-03-02T14:23:51.370Z], details[failed to update mappings, failure IllegalArgumentException[Field [_id] is defined twice in [catalog]]]], indexUUID [7EfzsOa_TwSuUBycDPyiqA], message [failed to update mappings], failure [IllegalArgumentException[Field [_id] is defined twice in [catalog]]]
java.lang.IllegalArgumentException: Field [_id] is defined twice in [catalog]
at org.elasticsearch.index.mapper.MapperService.checkFieldUniqueness(MapperService.java:377)
at org.elasticsearch.index.mapper.MapperService.checkMappersCompatibility(MapperService.java:385)
at org.elasticsearch.index.mapper.MapperService.checkMappersCompatibility(MapperService.java:411)
at org.elasticsearch.index.mapper.MapperService.merge(MapperService.java:314)
at org.elasticsearch.index.mapper.MapperService.merge(MapperService.java:272)
at org.elasticsearch.indices.cluster.IndicesClusterStateService.processMapping(IndicesClusterStateService.java:388)
at org.elasticsearch.indices.cluster.IndicesClusterStateService.applyMappings(IndicesClusterStateService.java:348)
at org.elasticsearch.indices.cluster.IndicesClusterStateService.clusterChanged(IndicesClusterStateService.java:164)
at org.elasticsearch.cluster.service.InternalClusterService.runTasksForExecutor(InternalClusterService.java:600)
at org.elasticsearch.cluster.service.InternalClusterService$UpdateTask.run(InternalClusterService.java:762)
at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.runAndClean(PrioritizedEsThreadPoolExecutor.java:231)
at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.run(PrioritizedEsThreadPoolExecutor.java:194)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
Search works again now, but how can I fix the indices so that I don't receive this warning anymore?
This is new validation that we added in 2.2 to assert that mappings cannot define the same field twice, since that could cause ambiguity at search time: Elasticsearch would not know which field to pick. For new indices (created on or after 2.2), Elasticsearch rejects any mapping update that tries to introduce such a broken mapping. However, if you are upgrading from an older release and have an old index with this problem, the only way to fix it is to reindex.
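If you do reindex, the conflicting documents can be cleaned up on the way through. Below is a minimal sketch in plain Python (no client library; the helper names and the shape of the scroll hit are my own assumptions, not an official API) of stripping accidental underscore-prefixed keys from each document's _source before writing it into a fresh index:

```python
# Sketch: clean document bodies while reindexing into a new index.
# Assumption: any top-level key starting with "_" in the body (e.g. the
# stray "_type" in the bulk example earlier in this thread) is accidental
# metadata leakage and should be dropped.

def clean_source(source):
    """Return a copy of the document body without underscore-prefixed keys."""
    return {k: v for k, v in source.items() if not k.startswith("_")}

def to_bulk_action(hit, new_index):
    """Turn a scroll/search hit into a bulk index action for the new index."""
    return {
        "_index": new_index,
        "_type": hit["_type"],
        "_id": hit["_id"],
        "_source": clean_source(hit["_source"]),
    }

# Example hit shaped like the failing request above:
hit = {
    "_index": "test_5zogjdzthm_v2",
    "_type": "test",
    "_id": "obj_5",
    "_source": {"id": "obj_5", "_type": "test"},
}
action = to_bulk_action(hit, "test_5zogjdzthm_v3")
print(action["_source"])  # the stray "_type" key is gone from the body
```

In practice you would scroll over the old index, feed actions like this to the bulk API against the new index, and then point an alias at the new index once the copy completes.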