I am using the keyword tokenizer for one of my name fields.
"analyzer": {
"lowercase_keyword_analyzer": {
"filter": [
"lowercase"
],
"tokenizer": "keyword"
}
}
...
"name": {
"type": "text",
"analyzer":"lowercase_keyword_analyzer"
}
...
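For context, here is how I believe the two fragments above fit together in a single create-index request (the index name `my_index` is a placeholder, and this assumes a recent Elasticsearch version without mapping types; older versions need a type name under `mappings`):

```json
PUT /my_index
{
  "settings": {
    "analysis": {
      "analyzer": {
        "lowercase_keyword_analyzer": {
          "tokenizer": "keyword",
          "filter": ["lowercase"]
        }
      }
    }
  },
  "mappings": {
    "properties": {
      "name": {
        "type": "text",
        "analyzer": "lowercase_keyword_analyzer"
      }
    }
  }
}
```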
Unfortunately, when I run a query_string search, my search phrase gets broken into multiple terms even though I've explicitly specified the lowercase_keyword_analyzer analyzer.
Query:
{"query": {"query_string" : {"default_field": "name", "query": "some-long (specific),phrase with spaces and wild*", "analyzer": "lowercase_keyword_analyzer", "quote_analyzer": "lowercase_keyword_analyzer"}}}
Explain:
"explanation":"+(name:some-long name:specific name:,phrase name:with name:spaces name:and name:wild*) #(#_type:substance)"}
Debug analyze:
{
  "analyzer": "lowercase_keyword_analyzer",
  "text": "some-long (specific),phrase with spaces and wild*"
}
{"token":"some-long (specific),phrase with spaces and wild*","start_offset":0,"end_offset":49,"type":"word","position":0}
Why does query_string ignore the analyzer?
I've found this old topic from 2011 where someone asks about a similar problem, with no real answer: Query string not working with keyword tokenizer