Hello,
The default behaviour of ES of tokenizing on slashes does not suit me, because my data is full of slashes.
Tokenizing on whitespace is more relevant in my case, but how do I copy the index with the right tokenizing settings?
You can use the whitespace tokenizer in a custom analyzer. Note that you cannot change the analyzer of an existing index: you have to create a new index with the desired analysis settings and then reindex your data into it.
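A minimal sketch of that approach, assuming hypothetical index names `my_index` (old) and `my_index_v2` (new) and a hypothetical `path` field that holds the slash-heavy data — adapt names and mappings to your own index:

```
PUT /my_index_v2
{
  "settings": {
    "analysis": {
      "analyzer": {
        "whitespace_only": {
          "type": "custom",
          "tokenizer": "whitespace"
        }
      }
    }
  },
  "mappings": {
    "properties": {
      "path": {
        "type": "text",
        "analyzer": "whitespace_only"
      }
    }
  }
}

POST /_reindex
{
  "source": { "index": "my_index" },
  "dest":   { "index": "my_index_v2" }
}
```

With the whitespace tokenizer, a value like `a/b/c d/e` is kept as the two tokens `a/b/c` and `d/e` instead of being split on the slashes. Once reindexing finishes, you can point an alias at `my_index_v2` (or delete the old index and reindex back) so clients keep using the original name.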
This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.