Issue with Edge NGram Tokenizer in Elasticsearch

I am using an edge n-gram tokenizer to provide partial matching.
My documents look like this:

Name
Labson Series LTD 2014
Labson PLO LTD 2014A
Labson PLO LTD 2014-I
Labson PLO LTD. 2014-II

My mapping is as follows:

PUT my_index
{
  "settings": {
    "analysis": {
      "analyzer": {
        "autocomplete": {
          "tokenizer": "autocomplete",
          "filter": [
            "lowercase"
          ]
        },
        "autocomplete_search": {
          "tokenizer": "lowercase"
        }
      },
      "tokenizer": {
        "autocomplete": {
          "type": "edge_ngram",
          "min_gram": 2,
          "max_gram": 40,
          "token_chars": [
            "letter",
            "digit"
          ]
        }
      }
    }
  },
  "mappings": {
    "doc": {
      "properties": {
        "title": {
          "type": "string",
          "analyzer": "autocomplete",
          "search_analyzer": "autocomplete_search"
        }
      }
    }
  }
}

PUT my_index/doc/1
{
  "title": "Labson Series LTD 2014" 
}

PUT my_index/doc/2
{
  "title": "Labson PLO LTD 2014A" 
}


PUT my_index/doc/3
{
  "title": "Labson PLO LTD 2014-I" 
}


PUT my_index/doc/4
{
  "title": "Labson PLO LTD. 2014-II" 
}
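
To double-check what actually gets indexed, the tokens emitted at index time can be inspected with the _analyze API (a sketch; I am assuming the request-body form of the API here):

GET my_index/_analyze
{
  "analyzer": "autocomplete",
  "text": "Labson PLO LTD 2014A"
}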

My query is as follows. This query gives me 3 documents, which is correct (Labson PLO LTD 2014A, Labson PLO LTD 2014-I, Labson PLO LTD. 2014-II):

GET my_index/_search
{
  "query": {
    "match": {
      "title": {
        "query": "labson plo", 
        "operator": "and"
      }
    }
  }
}

But when I type in "Labson PLO 2014A" it gives me 0 documents:
GET my_index/_search
{
  "query": {
    "match": {
      "title": {
        "query": "Labson PLO 2014A", 
        "operator": "and"
      }
    }
  }
}

I expect this to return 1 document (Labson PLO LTD 2014A); for some reason it seems like the digits are not being indexed in the tokens. Let me know if I am missing anything here. Thanks for the help.

The lowercase tokenizer, which you are using for searching, splits on non-letters: https://www.elastic.co/guide/en/elasticsearch/reference/2.3//analysis-lowercase-tokenizer.html
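
You can see this directly with the _analyze API:

GET _analyze
{
  "tokenizer": "lowercase",
  "text": "Labson PLO 2014A"
}

This should return only the tokens labson, plo and a: the digits are dropped, so the term a never matches any of the indexed edge n-grams, and the and-operator match finds nothing.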

Try instead using the keyword tokenizer with a lowercase token filter?
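
For example, a minimal sketch of that search analyzer (note that analysis settings cannot be changed on an open index, so the index would need to be recreated, or closed and reopened, with this definition):

PUT my_index
{
  "settings": {
    "analysis": {
      "analyzer": {
        "autocomplete_search": {
          "tokenizer": "keyword",
          "filter": [
            "lowercase"
          ]
        }
      }
    }
  }
}

The keyword tokenizer emits the entire query string as a single lowercased token instead of splitting it on non-letters.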

Mike McCandless
