Tokens outside the ngram size

I am in the process of setting up elasticsearch in our infrastructure. The initial setup includes a custom analyzer with an ngram filter (min_gram 3 / max_gram 20) set as the index_analyzer on two mappings. We have one free-form text field from which searches are entered (Google style). Currently that text is used to search the _all field with a query_string query.
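
To make that concrete, the setup looks roughly like the following (the index, type, field, and analyzer/filter names here are just placeholders, not our real config):

    # create the index with a custom ngram analyzer applied at index time
    curl -XPUT 'localhost:9200/my_index' -d '{
      "settings": {
        "analysis": {
          "filter": {
            "my_ngram_filter": { "type": "nGram", "min_gram": 3, "max_gram": 20 }
          },
          "analyzer": {
            "ngram_analyzer": {
              "type": "custom",
              "tokenizer": "standard",
              "filter": ["lowercase", "my_ngram_filter"]
            }
          }
        }
      },
      "mappings": {
        "doc": {
          "properties": {
            "description": { "type": "string", "index_analyzer": "ngram_analyzer" }
          }
        }
      }
    }'

    # the free-form search text goes against _all via query_string
    curl -XGET 'localhost:9200/my_index/_search' -d '{
      "query": { "query_string": { "query": "some user entered text" } }
    }'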

While testing I noticed that tokens shorter than the minimum ngram size are not matched during searches. My expectation was that exact matches would still cause the document to match.
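
To illustrate (again with placeholder names): running a term shorter than min_gram through the ngram analyzer produces no tokens at all, so it never makes it into the index and even an exact search for it finds nothing:

    # a 2-character term yields no grams when min_gram is 3
    curl -XGET 'localhost:9200/my_index/_analyze?analyzer=ngram_analyzer' -d 'ab'
    # => the response contains an empty "tokens" list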

Is there a way to configure the analyzer so that the _all field can be searched with ngrams and still find exact matches for terms outside the ngram size range?

--
John Downey

You mean you want the _all field to have two different analyzers? No, it can only have one.
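
The one analyzer _all uses can be set in the mapping, roughly like this (names are just examples); it then applies on both the index and the search side:

    # _all takes exactly one analyzer, configured in the mapping
    curl -XPUT 'localhost:9200/my_index' -d '{
      "mappings": {
        "doc": {
          "_all": { "analyzer": "ngram_analyzer" }
        }
      }
    }'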
