Conflicts with existing mapping

Hi

While doing PutMapping I am getting the following error:

"Mapper for [uUserId] conflicts with existing mapping in other types:\n[mapper [uUserId] has different [analyzer], mapper [uUserId] is used by multiple types. Set update_all_types to true to update [search_analyzer] across all types., mapper [uUserId] is used by multiple types. Set update_all_types to true to update [search_quote_analyzer] across all types., mapper [uUserId] is used by multiple types. Set update_all_types to true to update [fielddata] across all types.]"

Settings added for the index:

ElasticsearchResponse<Stream> rRes = objClient.IndicesCreate<Stream>(strIndex, @"
                            {
                                ""settings"": {
                                    ""analysis"": {
                                        ""analyzer"": {
                                            ""Custom_Casing_Analyzer"": {
                                                ""type"": ""custom"",
                                                ""tokenizer"": ""keyword"",
                                                ""filter"": [
                                                    ""lowercase""
                                                ]
                                            }
                                        }
                                    }
                                }       
                            }");

Mapping for uUserId:

 ElasticsearchResponse<string> rMapping = objClient.IndicesPutMapping<string>(strIndex, strType, @"
                                        {
                                          ""properties"": {
                                                ""uUserId"": {
                                                    ""type"" : ""text"",
                                                    ""analyzer"": ""Custom_Casing_Analyzer"", 
                                                    ""search_analyzer"": ""Custom_Casing_Analyzer"",
                                                    ""fielddata"": true,
                                                    ""fields"" : {
                                                        ""keyword"" : {
                                                            ""type"" : ""keyword"",
                                                            ""ignore_above"" : 256
                                                        }
                                                    }
                                                }
                                            }
                                        }");

Thanks
Aneesh L

Probably because your index already exists.
Remove it first.

I deleted the node folders on my machine and copied them from another system.
There, the index was created with the Custom_Casing_Analyzer settings.

In Kibana, when I execute GET /Student/_settings, I get:

{
  "Student": {
    "settings": {
      "index": {
        "number_of_shards": "5",
        "provided_name": "Student",
        "creation_date": "1518524267463",
        "analysis": {
          "analyzer": {
            "Custom_Casing_Analyzer": {
              "filter": [
                "lowercase"
              ],
              "type": "custom",
              "tokenizer": "keyword"
            }
          }
        },
        "number_of_replicas": "1",
        "uuid": "SPo0vE6OTc-K6BcYQo04rw",
        "version": {
          "created": "6010199"
        }
      }
    }
  }
}

But when I then try PutMapping for the column (I need to avoid case-sensitive matching), I get the conflict error.


What does this give:

GET /Student/_mapping

It's giving the default mapping for all columns:

"uUserId": {
  "type": "text",
  "fields": {
    "keyword": {
      "type": "keyword",
      "ignore_above": 256
    }
  }
},

That's why your put mapping call is failing. You cannot update an existing field.


Is there any solution for adding an extra property to the mapping?

If you mean adding a new field, that will work.
If you mean changing an existing field, you need to reindex.
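For the reindex route, a minimal sketch in Kibana console syntax (the target index name student_v2 is hypothetical; the analyzer and field are taken from earlier in the thread): create a new index with the desired mapping up front, then copy the documents across with the Reindex API.

PUT student_v2
{
  "settings": {
    "analysis": {
      "analyzer": {
        "Custom_Casing_Analyzer": {
          "type": "custom",
          "tokenizer": "keyword",
          "filter": ["lowercase"]
        }
      }
    }
  },
  "mappings": {
    "doc": {
      "properties": {
        "uUserId": {
          "type": "text",
          "analyzer": "Custom_Casing_Analyzer",
          "fielddata": true
        }
      }
    }
  }
}

POST _reindex
{
  "source": { "index": "Student" },
  "dest": { "index": "student_v2" }
}

Once the reindex finishes, point your application (or an alias) at the new index.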

"analyzer": "Custom_Casing_Analyzer",
"search_analyzer": "Custom_Casing_Analyzer",

I need to add this analyzer (no need to change the existing mappings).
How can I add it?

You MUST reindex.

While creating an index, is it possible to set the mapping for all columns with that analyzer?

Yes.

How can we do that?

ElasticsearchResponse<Stream> rRes = objClient.IndicesCreate<Stream>(strIndex, @"
                            {
                                ""settings"": {
                                    ""analysis"": {
                                        ""analyzer"": {
                                            ""Custom_Casing_Analyzer"": {
                                                ""type"": ""custom"",
                                                ""tokenizer"": ""keyword"",
                                                ""filter"": [
                                                    ""lowercase""
                                                ]
                                            }
                                        }
                                    }
                                }       
                            }");

For this entity I have 10 columns; how do I set it for those 10 columns?

DELETE test 
PUT test
{
  "settings": {
    "analysis": {
      "analyzer": {
        "Custom_Casing_Analyzer": {
          "type": "custom",
          "tokenizer": "keyword",
          "filter": [
            "lowercase"
          ]
        }
      }
    }
  },
  "mappings": {
    "doc": {
      "properties": {
        "foo1": {
          "type": "text",
          "analyzer": "Custom_Casing_Analyzer"
        },
        "foo2": {
          "type": "text",
          "analyzer": "Custom_Casing_Analyzer"
        }
      }
    }
  }
}

foo1 and foo2 are the column names, right?
Is there any way to apply the analyzer without giving column names (apply to all)?
Because the number of columns for the index is large.

Have a look at https://www.elastic.co/guide/en/elasticsearch/reference/6.2/dynamic-templates.html
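A minimal sketch of such a dynamic template, applying Custom_Casing_Analyzer to every string field without listing column names (the mapping type name doc is an assumption, matching the earlier example):

PUT test
{
  "settings": {
    "analysis": {
      "analyzer": {
        "Custom_Casing_Analyzer": {
          "type": "custom",
          "tokenizer": "keyword",
          "filter": ["lowercase"]
        }
      }
    }
  },
  "mappings": {
    "doc": {
      "dynamic_templates": [
        {
          "strings_as_custom_cased": {
            "match_mapping_type": "string",
            "mapping": {
              "type": "text",
              "analyzer": "Custom_Casing_Analyzer"
            }
          }
        }
      ]
    }
  }
}

With this in place, any new string field picked up by dynamic mapping gets the custom analyzer automatically.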

Thanks, it worked.

But for one index, when adding data, I got the following issue:

Document contains at least one immense term in field="Template.Files.de.tCertificateTemplateHTML" (whose UTF8 encoding is longer than the max length 32766), all of which were skipped. Please correct the analyzer to not produce such terms. The prefix of the first immense term is: '[60, 109, 101, 116, 97, 32, 104, 116, 116, 112, 45, 101, 113, 117, 105, 118, 61, 34, 99, 111, 110, 116, 101, 110, 116, 45, 116, 121, 112, 101]...', original message: bytes can be at most 32766 in length; got 40669

How can I increase that limit?

Could you share a sample document on gist.github.com?

I have some template data as HTML; that string is too long.

Ok. So I'm going to answer without knowing a lot. My answer might be wrong.

Why do you want to use keyword tokenizer for html content?
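One way out, assuming the HTML field does not actually need the keyword tokenizer, is to put a more specific dynamic template before the catch-all one, so that field gets the standard analyzer (which splits the HTML into small tokens instead of one immense term). The index name test2 and the template names are hypothetical, and path_match here matches the field path reported in the error:

PUT test2
{
  "settings": {
    "analysis": {
      "analyzer": {
        "Custom_Casing_Analyzer": {
          "type": "custom",
          "tokenizer": "keyword",
          "filter": ["lowercase"]
        }
      }
    }
  },
  "mappings": {
    "doc": {
      "dynamic_templates": [
        {
          "html_as_standard": {
            "path_match": "Template.Files.*",
            "mapping": { "type": "text", "analyzer": "standard" }
          }
        },
        {
          "strings_as_custom_cased": {
            "match_mapping_type": "string",
            "mapping": { "type": "text", "analyzer": "Custom_Casing_Analyzer" }
          }
        }
      ]
    }
  }
}

Dynamic templates are checked in order, so the more specific path_match rule must come before the catch-all string rule.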