What is an efficient way to combine a standard tokenizer with autocomplete (type-ahead) functionality?

I'm trying to use the autocomplete functionality I have in place, using
the following mappings:

analysis:
  filter:
    placename_ngram:  max_gram = 15, min_gram = 2, type =
  analyzer:
    index:            tokenizer = keyword, filter = [lowercase, placename_ngram]
    placename_search: tokenizer = keyword, filter = [lowercase]
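Written out in full, the settings would look roughly like this (the filter type is cut off above; I'm assuming edge n-grams, since a plain nGram filter on a keyword token would have matched "Lake" anywhere in the value, which isn't the behavior I'm seeing; analyzer names are my reconstruction):

```json
{
  "settings": {
    "analysis": {
      "filter": {
        "placename_ngram": {
          "type": "edge_ngram",
          "min_gram": 2,
          "max_gram": 15
        }
      },
      "analyzer": {
        "placename_index": {
          "tokenizer": "keyword",
          "filter": ["lowercase", "placename_ngram"]
        },
        "placename_search": {
          "tokenizer": "keyword",
          "filter": ["lowercase"]
        }
      }
    }
  }
}
```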

This works great for type-ahead, but when I search for a term that is
contained inside a value (a "contains" match rather than a "starts with"
match), it doesn't return the record.

For example, if I do a text query on "Lake", I will only get

Lake Wood

but will not get

Smithtown Lake

I have the field set up as a multi-field and can use a wildcard query to
find the values, but I'm not sure that's efficient.
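For reference, the wildcard query I mean is along these lines (field name is mine). It does find both records, but a wildcard with a leading "*" can't use the term index and has to scan terms, which is why I'm worried about efficiency:

```json
{
  "query": {
    "wildcard": {
      "placename": "*lake*"
    }
  }
}
```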

I believe I could use nGrams for this, but that seems like a lot of
overhead considering I only need to index terms split on whitespace (i.e.
by word), not every permutation.

Any thoughts?

When I change the tokenizer on both analyzers to "standard", it will then
find these records, but my autocomplete gets messed up and brings back
Smithtown Lake when I type "Lak" (which in this case I don't want).
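To frame the question: the combination I'm imagining is a multi-field with one sub-field analyzed with the keyword tokenizer plus edge n-grams (for type-ahead) and another with the standard analyzer (for "contains a word" matches), querying whichever sub-field fits the use case. A rough, untested sketch (field and analyzer names are mine, and it assumes the analyzers from the settings above):

```json
{
  "mappings": {
    "place": {
      "properties": {
        "name": {
          "type": "multi_field",
          "fields": {
            "name": { "type": "string", "analyzer": "standard" },
            "autocomplete": {
              "type": "string",
              "index_analyzer": "placename_index",
              "search_analyzer": "placename_search"
            }
          }
        }
      }
    }
  }
}
```

Is this the right approach, or is there a cheaper way to get both behaviors from one field?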

Thanks for your help

You received this message because you are subscribed to the Google Groups "elasticsearch" group.