Standard analyzer

Hello, Elasticsearch Member,

I used the whitespace tokenizer with a lowercase filter, but it turned out to be slower than the standard analyzer. However, I need to split sentences only on whitespace, and I also need lowercasing. The standard analyzer would otherwise suit me well, but the problem is that it also removes special characters.
I need to search for data containing special characters.
The standard analyzer has a stop words option, but as far as I can tell that only removes terms matching the stop list, not special characters.
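For reference, the setup described above (whitespace tokenization plus lowercasing) can be sketched as a custom analyzer; the index and analyzer names here are placeholders, not from the original post:

```json
PUT my-index
{
  "settings": {
    "analysis": {
      "analyzer": {
        "whitespace_lowercase": {
          "type": "custom",
          "tokenizer": "whitespace",
          "filter": ["lowercase"]
        }
      }
    }
  }
}
```

Because the whitespace tokenizer splits only on whitespace, tokens produced this way keep characters like `@`, `#`, and `-` intact.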

I hope you can help me with this.
Thank you in advance!

Can you share more information about the comparison tests you ran (what were the two scenarios?) and what differences you saw?

I use the default standard analyzer with about 1 TB per index. When I search for data from the last five minutes, it takes about a minute. But when I changed to the whitespace analyzer, it took much longer (roughly 20 minutes, I guess), so I had to revert.

When I search for data from the last five minutes, it takes a minute.

WHAT? One minute for a search response?

What does your search query look like?

Actually, it is quite a long query, sorry. All I want is to use the standard analyzer while keeping special characters. Any advice?
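One way to see the difference being asked about is the _analyze API; this is a generic sketch, not the poster's actual data:

```json
POST _analyze
{
  "analyzer": "standard",
  "text": "user@example.com logged-in"
}

POST _analyze
{
  "analyzer": "whitespace",
  "text": "user@example.com logged-in"
}
```

The standard analyzer splits on punctuation such as `@` and `-`, so those characters never appear in the indexed tokens, while the whitespace analyzer keeps each whitespace-delimited token whole (though it does not lowercase on its own, which is why a lowercase filter is paired with it above).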

I can't help without knowing what you are doing.

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.