Hi,
I'm trying to build an index and query for address-search autocomplete on City, State, and Country.
I have these structured data fields:
- City
- State
- Country
- Full (a single field that concatenates City, State, and Country, separated by spaces)
I used an edge n-gram approach, and it worked fine on City + Country. But when the field is longer, the n-gram approach breaks down: for example, I get no results for the query term 'san fran', but I do get results for 'san francisco'.
This is the index template I'm using:
{
  "index_patterns": ["address_book*"],
  "settings": {
    "analysis": {
      "analyzer": {
        "autocomplete": {
          "tokenizer": "autocomplete",
          "filter": [
            "lowercase"
          ]
        },
        "autocomplete_search": {
          "tokenizer": "lowercase"
        }
      },
      "tokenizer": {
        "autocomplete": {
          "type": "edge_ngram",
          "min_gram": 2,
          "max_gram": 50,
          "token_chars": [
            "letter"
          ]
        }
      }
    }
  },
  "mappings": {
    "properties": {
      "full": {
        "type": "text",
        "analyzer": "autocomplete",
        "search_analyzer": "autocomplete_search"
      },
      "location": {
        "type": "geo_point"
      }
    }
  }
}
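To show what the index-time analyzer actually produces, the tokens can be inspected with the _analyze API (this assumes a concrete index, here called address_book, was created from the template above):

GET address_book/_analyze
{
  "analyzer": "autocomplete",
  "text": "San Francisco California USA"
}

With token_chars set to ["letter"], the edge_ngram tokenizer should split on the spaces and emit edge n-grams per word (sa, san, fr, fra, fran, ..., francisco, and so on), which is what I expected to make 'san fran' matchable.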
And the query is:
{
  "query": {
    "match": {
      "full": {
        "query": "san fran",
        "operator": "and"
      }
    }
  }
}
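My understanding is that, with "operator": "and", the match query first runs 'san fran' through the autocomplete_search analyzer (the lowercase tokenizer splits it into san and fran) and then requires every term to match, roughly equivalent to this bool query:

{
  "query": {
    "bool": {
      "must": [
        { "term": { "full": "san" } },
        { "term": { "full": "fran" } }
      ]
    }
  }
}

So if the index side really stores the edge n-grams of each word, I'd expect both san and fran to be present, which makes the missing results confusing.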
I'm guessing that the indexed field (full) needs to be tokenized differently, if that isn't happening already?
Any suggestions on how I can get autocomplete working on partial matches?
Thank you