Need help with ES Query

Hi,

I'm using a custom analyzer with a standard tokenizer and lowercase,
asciifolding, suggestions_shingle, and edge-ngram token filters. The same
analyzer is used for both indexing and searching.
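The settings look roughly like this (the analyzer name and the gram/shingle
sizes below are reconstructed from memory, so treat it as a sketch rather
than my exact mapping):

{
  "settings" : {
    "analysis" : {
      "filter" : {
        "suggestions_shingle" : {
          "type" : "shingle",
          "min_shingle_size" : 2,
          "max_shingle_size" : 3,
          "output_unigrams" : true
        },
        "edgengrams" : {
          "type" : "edge_ngram",
          "min_gram" : 2,
          "max_gram" : 15
        }
      },
      "analyzer" : {
        "suggestions_analyzer" : {
          "type" : "custom",
          "tokenizer" : "standard",
          "filter" : [ "lowercase", "asciifolding", "suggestions_shingle", "edgengrams" ]
        }
      }
    }
  }
}

With this analyzer, a text like "delhi to goa" is analyzed as: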

{
  "tokens" : [
    { "token" : "de",           "start_offset" : 0, "end_offset" : 5,  "type" : "word", "position" : 1 },
    { "token" : "del",          "start_offset" : 0, "end_offset" : 5,  "type" : "word", "position" : 1 },
    { "token" : "delh",         "start_offset" : 0, "end_offset" : 5,  "type" : "word", "position" : 1 },
    { "token" : "delhi",        "start_offset" : 0, "end_offset" : 5,  "type" : "word", "position" : 1 },
    { "token" : "de",           "start_offset" : 0, "end_offset" : 8,  "type" : "word", "position" : 1 },
    { "token" : "del",          "start_offset" : 0, "end_offset" : 8,  "type" : "word", "position" : 1 },
    { "token" : "delh",         "start_offset" : 0, "end_offset" : 8,  "type" : "word", "position" : 1 },
    { "token" : "delhi",        "start_offset" : 0, "end_offset" : 8,  "type" : "word", "position" : 1 },
    { "token" : "delhi ",       "start_offset" : 0, "end_offset" : 8,  "type" : "word", "position" : 1 },
    { "token" : "delhi t",      "start_offset" : 0, "end_offset" : 8,  "type" : "word", "position" : 1 },
    { "token" : "delhi to",     "start_offset" : 0, "end_offset" : 8,  "type" : "word", "position" : 1 },
    { "token" : "de",           "start_offset" : 0, "end_offset" : 12, "type" : "word", "position" : 1 },
    { "token" : "del",          "start_offset" : 0, "end_offset" : 12, "type" : "word", "position" : 1 },
    { "token" : "delh",         "start_offset" : 0, "end_offset" : 12, "type" : "word", "position" : 1 },
    { "token" : "delhi",        "start_offset" : 0, "end_offset" : 12, "type" : "word", "position" : 1 },
    { "token" : "delhi ",       "start_offset" : 0, "end_offset" : 12, "type" : "word", "position" : 1 },
    { "token" : "delhi t",      "start_offset" : 0, "end_offset" : 12, "type" : "word", "position" : 1 },
    { "token" : "delhi to",     "start_offset" : 0, "end_offset" : 12, "type" : "word", "position" : 1 },
    { "token" : "delhi to ",    "start_offset" : 0, "end_offset" : 12, "type" : "word", "position" : 1 },
    { "token" : "delhi to g",   "start_offset" : 0, "end_offset" : 12, "type" : "word", "position" : 1 },
    { "token" : "delhi to go",  "start_offset" : 0, "end_offset" : 12, "type" : "word", "position" : 1 },
    { "token" : "delhi to goa", "start_offset" : 0, "end_offset" : 12, "type" : "word", "position" : 1 },
    { "token" : "to",           "start_offset" : 6, "end_offset" : 8,  "type" : "word", "position" : 2 },
    { "token" : "to",           "start_offset" : 6, "end_offset" : 12, "type" : "word", "position" : 2 },
    { "token" : "to ",          "start_offset" : 6, "end_offset" : 12, "type" : "word", "position" : 2 },
    { "token" : "to g",         "start_offset" : 6, "end_offset" : 12, "type" : "word", "position" : 2 },
    { "token" : "to go",        "start_offset" : 6, "end_offset" : 12, "type" : "word", "position" : 2 },
    { "token" : "to goa",       "start_offset" : 6, "end_offset" : 12, "type" : "word", "position" : 2 },
    { "token" : "go",           "start_offset" : 9, "end_offset" : 12, "type" : "word", "position" : 3 },
    { "token" : "goa",          "start_offset" : 9, "end_offset" : 12, "type" : "word", "position" : 3 }
  ]
}

Now, the problem I'm facing: when I query for "delhi t", the documents that
match the most of its analyzed tokens are not ranked at the top; instead,
documents that contain only "delhi" come first.

I thought ES would rank the documents with the most matches for the analyzed
search text highest, but that is not happening here. Can anyone tell me why
it isn't working? Is there another query type, such as a match or bool query,
that I should use instead?
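For reference, the query I'm running is essentially a plain match query along
these lines ("title" is just a placeholder for my actual field name):

{
  "query" : {
    "match" : {
      "title" : "delhi t"
    }
  }
}

I'm wondering whether something like "operator" : "and" or
"minimum_should_match" on the match query, or a bool query over the tokens,
is what I'm missing.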

Any help will be appreciated.

Thanks
