I found the solution: I was using ngram as a token filter instead of as a tokenizer. The snippets below worked.
First, I created an index with the following settings and mappings:
curl -X PUT "localhost:9200/my_index" -H 'Content-Type: application/json' -d'
{
  "settings": {
    "analysis": {
      "analyzer": {
        "autocomplete": {
          "tokenizer": "autocomplete",
          "filter": [ "lowercase" ]
        },
        "autocomplete_search": {
          "tokenizer": "lowercase"
        }
      },
      "tokenizer": {
        "autocomplete": {
          "type": "ngram",
          "min_gram": 1,
          "max_gram": 10,
          "token_chars": [ "letter", "digit", "punctuation", "symbol" ]
        }
      }
    }
  },
  "mappings": {
    "_doc": {
      "properties": {
        "title": {
          "type": "text",
          "analyzer": "autocomplete",
          "search_analyzer": "autocomplete_search"
        }
      }
    }
  }
}
'
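To sanity-check the analyzer, the standard `_analyze` API can show which grams the `autocomplete` tokenizer emits (index and analyzer names are the ones defined above):

```shell
# Inspect the tokens produced at index time. With min_gram 1 and
# max_gram 10, "Fox" should be split into the grams
# f, fo, fox, o, ox, x (lowercased by the filter).
curl -X GET "localhost:9200/my_index/_analyze" -H 'Content-Type: application/json' -d'
{
  "analyzer": "autocomplete",
  "text": "Fox"
}
'
```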
Then I inserted two documents:
curl -X PUT "localhost:9200/my_index/_doc/1" -H 'Content-Type: application/json' -d'
{ "title": "Quick Foxes" }
'
curl -X PUT "localhost:9200/my_index/_doc/2" -H 'Content-Type: application/json' -d'
{ "title": "Quick Tiger" }
'
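One caveat when testing: newly indexed documents only become searchable after a refresh. If the search is run immediately after indexing, it may help to force one first (a standard Elasticsearch call, shown here against the same index):

```shell
# Make the two documents visible to search without waiting for the
# automatic refresh interval (1 second by default).
curl -X POST "localhost:9200/my_index/_refresh"
```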
Now if I search for "Quick Fo" with the query below, it returns only _doc 1, since both "Quick" and "Fo" match it.
curl -X GET "localhost:9200/my_index/_search" -H 'Content-Type: application/json' -d'
{
  "query": {
    "match": {
      "title": {
        "query": "Quick Fo",
        "operator": "and"
      }
    }
  }
}
'
This is exactly what I wanted. With ngram as a token filter, the search was returning results even when only "Quick" matched.
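For contrast, the earlier (problematic) setup looked roughly like this, with `ngram` as a token filter applied after a `standard` tokenizer. This is a sketch reconstructed from the description above, not my exact original settings; the filter name `autocomplete_filter` and the index name are placeholders:

```shell
# ngram as a *filter*: the standard tokenizer first splits the text
# into whole words, and each word is then expanded into grams. With
# this analyzer also applied at search time, "Quick Fo" is itself
# expanded into many small grams, which is what led to matches where
# only "Quick" was really present. Moving ngram into the tokenizer
# and using a plain lowercase tokenizer at search time fixed it.
curl -X PUT "localhost:9200/my_index_filter_version" -H 'Content-Type: application/json' -d'
{
  "settings": {
    "analysis": {
      "analyzer": {
        "autocomplete": {
          "tokenizer": "standard",
          "filter": [ "lowercase", "autocomplete_filter" ]
        }
      },
      "filter": {
        "autocomplete_filter": {
          "type": "ngram",
          "min_gram": 1,
          "max_gram": 10
        }
      }
    }
  }
}
'
```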