Word delimiter graph doesn't work with ICU tokenizer

Hi, I want a search for "wifi" to match a word like "Wi-Fi", and that works with the word_delimiter_graph filter on top of the whitespace tokenizer.
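
For reference, here is a minimal example of the behaviour that works for me: with the whitespace tokenizer, "wi-fi" comes out as "wi", "fi" plus the catenated "wifi" token.

GET /_analyze
{
  "tokenizer": "whitespace",
  "filter": [
    {
      "type": "word_delimiter_graph",
      "catenate_all": "true"
    }
  ],
  "text": "wi-fi"
}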

But I need the icu_tokenizer, and with it this doesn't work at all: only "wi" and "fi" get indexed.
Is there any way to make it work as expected? Is it a bug or a limitation?

GET /_analyze
{
  "filter": [
    {
      "type": "word_delimiter_graph",
      "catenate_all": "true"
    },
    "asciifolding", "icu_normalizer"
  ],
  "tokenizer": "icu_tokenizer",
  "text": "późno wi-fi  向日葵 สวัสดี ผมมาจากกรุงเ ทพฯ"
}
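
My guess at the cause: icu_tokenizer already splits "wi-fi" into "wi" and "fi" at the hyphen before any token filters run, so word_delimiter_graph never sees a delimiter and catenate_all has nothing to catenate. One workaround I've been experimenting with is stripping intra-word hyphens with a pattern_replace char filter before tokenization; a minimal sketch below, not a tested setup. Note it only produces "wifi" and drops the separate "wi"/"fi" tokens that catenate_all would otherwise keep.

# sketch only: join hyphenated word parts before icu_tokenizer runs
GET /_analyze
{
  "char_filter": [
    {
      "type": "pattern_replace",
      "pattern": "(\\w)-(\\w)",
      "replacement": "$1$2"
    }
  ],
  "tokenizer": "icu_tokenizer",
  "filter": [ "asciifolding", "icu_normalizer" ],
  "text": "późno wi-fi  向日葵 สวัสดี ผมมาจากกรุงเ ทพฯ"
}

Another option might be a multi-field where one sub-field keeps the whitespace tokenizer just for this case, but I'd prefer a single icu_tokenizer-based field if possible.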
