Hi,
I have an issue with the Japanese analyzer using `"tokenizer": "kuromoji_tokenizer"`.
I want special-character search to work along with this tokenizer. My analyzer is as below:
```json
"analyzer_jp_stemmer": {
  "filter": [
    "lowercase",
    "kuromoji_baseform",
    "kuromoji_part_of_speech",
    "cjk_width",
    "kuromoji_stemmer",
    "esc_stop"
  ],
  "tokenizer": "kuromoji_tokenizer"
}
```
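For context, here is the analyzer embedded in a full index-settings sketch. The index name, the `esc_stop` stop-filter definition, and the `content` mapping field are assumptions for illustration; the analyzer itself is the one shown above:

```json
PUT /my_jp_index
{
  "settings": {
    "analysis": {
      "filter": {
        "esc_stop": {
          "type": "stop",
          "stopwords": "_japanese_"
        }
      },
      "analyzer": {
        "analyzer_jp_stemmer": {
          "type": "custom",
          "tokenizer": "kuromoji_tokenizer",
          "filter": [
            "lowercase",
            "kuromoji_baseform",
            "kuromoji_part_of_speech",
            "cjk_width",
            "kuromoji_stemmer",
            "esc_stop"
          ]
        }
      }
    }
  },
  "mappings": {
    "properties": {
      "content": { "type": "text", "analyzer": "analyzer_jp_stemmer" }
    }
  }
}
```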
How can I include "whitespace" in this analyzer so that Japanese search also works with special characters?