I use the icu_tokenizer for my index; the data is Traditional Chinese.
Normally the keyword "珍珠奶茶" is tokenized into two tokens, "珍珠" and "奶茶".
This works well for almost all of my keywords, but there are some (~200) that I don't want tokenized at all, so I'd like to configure a tokenization dictionary containing those 200 keywords so they are kept as single tokens.
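For context, my current analyzer setup is roughly like this (index and analyzer names here are just placeholders):

```json
{
  "settings": {
    "analysis": {
      "analyzer": {
        "my_tc_analyzer": {
          "type": "custom",
          "tokenizer": "icu_tokenizer"
        }
      }
    }
  }
}
```

With this configuration, `GET /_analyze` on "珍珠奶茶" returns the two tokens "珍珠" and "奶茶".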
Could anyone help me with how to configure this?
Thank you and best regards