I wanted partial-matching functionality on a field, so I used the
nGram tokenizer in my index analyzer but just the standard tokenizer in my
search analyzer, which worked perfectly.
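
For reference, here is a minimal sketch of the kind of setup I mean (the
index, type, and analyzer names are made up, and the min_gram/max_gram
values are just examples, not a recommendation):

    PUT /articles
    {
      "settings": {
        "analysis": {
          "tokenizer": {
            "my_ngram_tokenizer": {
              "type": "nGram",
              "min_gram": 3,
              "max_gram": 8
            }
          },
          "analyzer": {
            "my_ngram_analyzer": {
              "type": "custom",
              "tokenizer": "my_ngram_tokenizer",
              "filter": ["lowercase"]
            }
          }
        }
      },
      "mappings": {
        "article": {
          "properties": {
            "title": {
              "type": "string",
              "index_analyzer": "my_ngram_analyzer",
              "search_analyzer": "standard"
            }
          }
        }
      }
    }

With this, "elasticsearch" is indexed as many overlapping 3- to 8-character
grams, while the query text is tokenized normally at search time.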
However, my question is how performance scales with a large amount of
data, because I would assume this will result in a HUGE number of tokens
in the index. Does anybody know if this is actually an issue in
Elasticsearch, or would the best idea be to not use it on fields with a
lot of text (e.g. an article body) but only on smaller ones (e.g. an
article title)?