Multiple tokenizers inside one Custom Analyzer in Elasticsearch

I am using a custom NGRAM analyzer that has an ngram tokenizer, together with a lowercase filter. Queries work fine for searches without special characters, but when I search for terms containing certain symbols, the search fails. Since the tokenizer's token_chars only includes letters and digits, Elasticsearch never indexes the symbols. I know a whitespace tokenizer could help me solve the issue, but how can I use two tokenizers in a single analyzer? Below is the mapping:

    {
      "settings": {
        "analysis": {
          "analyzer": {
            "my_analyzer": {
              "tokenizer": "my_tokenizer",
              "filter": "lowercase"
            }
          },
          "tokenizer": {
            "my_tokenizer": {
              "type": "ngram",
              "min_gram": 3,
              "max_gram": 3,
              "token_chars": [
                "letter",
                "digit"
              ]
            }
          }
        }
      },
      "mappings": {
        "_doc": {
          "properties": {
            "title": {
              "type": "text",
              "analyzer": "my_analyzer"
            }
          }
        }
      }
    }
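
For example (my_index and the sample value abc+def here are just for illustration, not my real data), when I test the analyzer with the _analyze API I can see that the symbol never makes it into the token stream:

    POST my_index/_analyze
    {
      "analyzer": "my_analyzer",
      "text": "abc+def"
    }

This returns only the tokens abc and def; the + is stripped because token_chars only allows letters and digits, so the symbol itself is never indexed and cannot be matched.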

Is there a way I could solve this issue?
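
For reference, this is the kind of whitespace-based analyzer I have in mind (it would sit next to my_analyzer under analysis.analyzer; the name my_whitespace_analyzer is just a placeholder), which I think would keep the symbols, if only I could combine it with the ngram one on the same field:

    "my_whitespace_analyzer": {
      "type": "custom",
      "tokenizer": "whitespace",
      "filter": [
        "lowercase"
      ]
    }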
