I believe the limit you mentioned only applies to the keyword datatype. Unless your text contains no spaces at all, with the text datatype the text is broken into several tokens, and each token is indexed separately.
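A rough illustration of why analysis sidesteps the limit (this uses a plain whitespace split as a stand-in for Elasticsearch's actual analyzer, and the 32766-byte figure is Lucene's maximum term length):

```python
# Keyword fields index the whole value as a single term, so the value's
# UTF-8 byte length must stay under Lucene's 32766-byte term limit.
# Text fields are analyzed into tokens first; each token is indexed
# separately, so only an individual token could ever hit the limit.
LUCENE_MAX_TERM_BYTES = 32766

def exceeds_keyword_limit(value: str) -> bool:
    # The limit is counted in bytes, not characters.
    return len(value.encode("utf-8")) > LUCENE_MAX_TERM_BYTES

def longest_token_bytes(value: str) -> int:
    # Crude stand-in for an analyzer: split on whitespace.
    return max((len(t.encode("utf-8")) for t in value.split()), default=0)

big = "word " * 10000  # ~50000 bytes as one keyword value
print(exceeds_keyword_limit(big))   # True: would be rejected as keyword
print(longest_token_bytes(big))     # 4: every token is fine as text
```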
Thanks for the reply. Does this mean that if I index a text value exceeding 32766 bytes, the raw keyword sub-field in the above mapping will be ignored, and only one indexing will happen instead of the two that are supposed to happen?
I am trying to migrate text fields to keyword, as I do not require any analysis on my data. While doing so, I get an error that the value of a keyword field cannot exceed 32766 bytes. But when a raw keyword multi-field is added, as in the mapping shown above, I don't get any errors during indexing.
Basically I want to migrate to the below format:
"properties": {
  "address": {
    "type": "keyword"
  }
}
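If the goal is a plain keyword field that does not fail on oversized values, one option is `ignore_above`, which skips indexing values longer than the threshold instead of raising an error (the value is still kept in `_source`). This is also the likely reason the raw multi-field did not error: such multi-fields are commonly created with `ignore_above` set. A sketch, with the threshold chosen as an assumption (8191 characters covers the worst case of 4-byte UTF-8 under the 32766-byte limit):

```json
"properties": {
  "address": {
    "type": "keyword",
    "ignore_above": 8191
  }
}
```

Note that values longer than the threshold become unsearchable on this field, since they are silently dropped from the index rather than rejected.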
Do you have an example which can reproduce the problem, as described in About the Elasticsearch category? It will help us better understand what you are doing. Please try to keep the example as simple as possible.