index/_analyze
{
  "analyzer": "autocomplete",
  "field": "name",
  "text": "жаб"
}
gives me the correct tokens:
{
  "tokens": [
    {
      "token": "žab",
      "start_offset": 0,
      "end_offset": 3,
      "type": "<ALPHANUM>",
      "position": 0
    }
  ]
}
Now, when I plug the token žab directly into a search, it works and returns results.
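For reference, this is what I mean by plugging the token in directly (same match query, but with the already-folded token):

```json
GET index/_search
{
  "query": {
    "match": {
      "name": "žab"
    }
  }
}
```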
Validating the query with explain:
index/_validate/query?explain
{
  "query": {
    "match": {
      "name": "жаб"
    }
  }
}
returns:
{
  "_shards": {
    "total": 1,
    "successful": 1,
    "failed": 0
  },
  "valid": true,
  "explanations": [
    {
      "index": "places_for_search",
      "valid": true,
      "explanation": "name:žab"
    }
  ]
}
So the query term is indeed converted into žab. But when I actually run
{
  "query": {
    "match": {
      "name": "жаб"
    }
  }
}
I get no results, even though I should, since searching for žab does return results.
I even tried forcing the analyzer like this:
{
  "query": {
    "multi_match": {
      "query": "жаб",
      "type": "bool_prefix",
      "fields": ["name"],
      "analyzer": "autocomplete"
    }
  }
}
But still, no results.
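As another sanity check (assuming the index name is just index as in the examples above), a term query skips search-time analysis entirely, which should isolate whether the problem is in the stored tokens or in the query-time analysis:

```json
GET index/_search
{
  "query": {
    "term": {
      "name": "žab"
    }
  }
}
```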
For the sake of argument, my document looks like this:
{
  "id": "ChIJT3DV8zM5TRMRlVS4y79AH7A",
  "name": "Žabljak"
}
The name field is mapped with the autocomplete analyzer:
{
  "type": "search_as_you_type",
  "doc_values": false,
  "max_shingle_size": 3,
  "analyzer": "autocomplete"
}
My analyzer:
{
  "analyzer": {
    "autocomplete": {
      "type": "custom",
      "tokenizer": "standard",
      "filter": [
        "autocomplete",
        "trim",
        "asciifolding",
        "lowercase",
        "serbian_stemmer",
        "russian_stemmer"
      ]
    }
  }
}
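One debugging step I tried to reason about: _analyze accepts an explain flag that shows the token stream after each filter in the chain, which should reveal exactly at which stage жаб becomes žab (or doesn't):

```json
GET index/_analyze
{
  "analyzer": "autocomplete",
  "text": "жаб",
  "explain": true
}
```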
My filters:
{
  "filter": {
    "russian_stemmer": {
      "type": "stemmer",
      "language": "russian"
    },
    "autocomplete": {
      "type": "edge_ngram",
      "min_gram": "3",
      "max_gram": "15"
    },
    "serbian_stemmer": {
      "type": "stemmer",
      "language": "serbian"
    }
  }
}
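To see which tokens were actually indexed for the document above, I believe _termvectors can recompute them from _source on the fly even though term vectors are not stored (using the document id from my example):

```json
GET index/_termvectors/ChIJT3DV8zM5TRMRlVS4y79AH7A?fields=name
```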
So, bottom line: the analyzer itself works, but its tokens do not seem to be used by the query. Any ideas on how to debug this further? I am using Elasticsearch 8.13.3.