What happens with analyzed fields if the indexed value is longer than the "max_gram" of an edge_ngram tokenizer?

What happens if an email is entered that's longer than 15 characters? In our tests, the edge_ngram tokenizer appears to stop producing grams beyond the "max_gram" length, which makes sense for a "starts with" analyzer, but we just wanted to confirm what actually happens.

Since we don't know how long emails can really get, is it better to use a different analyzer for the email field, or is it easier to just bump "max_gram" to a higher value?
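
A quick way to see what the tokenizer actually emits is the _analyze API with an inline edge_ngram tokenizer matching the settings below (the email address is just an illustrative example):

POST /_analyze
{
    "tokenizer": {
        "type": "edge_ngram",
        "min_gram": 1,
        "max_gram": 15,
        "token_chars": ["letter", "digit"]
    },
    "text": "annakaterinasmithson@example.com"
}

The longest gram emitted for the 20-character local part is its first 15 characters ("annakaterinasmi"); anything past "max_gram" is simply never indexed, so a search term longer than 15 characters has no gram to match.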

The mapping for the field

"email": {
    "type": "text",
    "index_options": "offsets",
    "analyzer": "min_1_max_15_prefix_string_partial_match_index_analyzer",
    "search_analyzer": "min_1_string_partial_match_search_analyzer"
}
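
Queries against the field are then plain match queries; the "starts with" behavior falls out of the analysis chain rather than the query itself. For example (index name hypothetical):

GET /my-index/_search
{
    "query": {
        "match": {
            "email": "annakaterina"
        }
    }
}

The search analyzer emits "annakaterina" as a single token, which matches the 12-character edge gram stored at index time.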

The edge_ngram tokenizer used by the index analyzer

"min_1_max_15_string_edgeNGram_tokenizer": {
    "token_chars": [
        "letter",
        "digit"
    ],
    "min_gram": "1",
    "type": "edge_ngram",
    "max_gram": "15"
}
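
The index analyzer referenced in the mapping ("min_1_max_15_prefix_string_partial_match_index_analyzer") isn't shown above; presumably it just wires this tokenizer together with a lowercase filter, roughly like the following (a guess at the original definition, since it wasn't included):

"min_1_max_15_prefix_string_partial_match_index_analyzer": {
    "type": "custom",
    "tokenizer": "min_1_max_15_string_edgeNGram_tokenizer",
    "filter": [
        "lowercase"
    ]
}

Note that no char_filter should be needed at index time, because "token_chars" of letter and digit already makes the tokenizer split on both "." and "@".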

The search analyzer used for that field

"min_1_string_partial_match_search_analyzer": {
        "filter": [
            "standard",
            "lowercase"
        ],
        "char_filter": [
            "period_mapping_character_filter"
        ],
        "type": "custom",
        "tokenizer": "standard"
    }

The char_filter for the above search analyzer

 "char_filter": {
        "period_mapping_character_filter": {
            "type": "mapping",
            "mappings": ".=>@"
        }
    }
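
Running _analyze with the search analyzer shows what that char_filter accomplishes (index name hypothetical):

GET /my-index/_analyze
{
    "analyzer": "min_1_string_partial_match_search_analyzer",
    "text": "john.doe@example.com"
}

Because the period is mapped to "@" before tokenization, the standard tokenizer should emit "john", "doe", "example" and "com" instead of keeping "john.doe" as one token, so each search token lines up with the grams the edge_ngram tokenizer produces at index time.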
