In my old ES 2.x index mapping, I had a custom analyzer to support case-insensitive keyword search:
{
  "settings": {
    "analysis": {
      "analyzer": {
        "lowercase_keyword": {
          "type": "custom",
          "tokenizer": "keyword",
          "filter": ["lowercase"]
        }
      }
    }
  },
  "mappings": {
    "type": {
      "properties": {
        "city": {
          "type": "string",
          "analyzer": "lowercase_keyword"
        }
      }
    }
  }
}
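To confirm the analyzer behaves as expected, the _analyze API can be used (a sketch; "my_index" is a placeholder index name with the mapping above applied):

```json
POST my_index/_analyze
{
  "analyzer": "lowercase_keyword",
  "text": "New York"
}
```

This should return a single token "new york": the keyword tokenizer keeps the whole input as one token, and the lowercase filter normalizes its case.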
Now in ES 5.x, "string" has been replaced by "text" and "keyword", so I have two options to implement a case-insensitive mapping:

- Keep the same lowercase_keyword analyzer approach as in ES 2.x, but change "string" to "text" in the field mapping
- Use the new "keyword" type with the new normalizer concept, as follows:
 
{
  "settings": {
    "analysis": {
      "normalizer": {
        "lowercase_normalizer": {
          "type": "custom",
          "char_filter": [],
          "filter": ["lowercase"]
        }
      }
    }
  },
  "mappings": {
    "type": {
      "properties": {
        "city": {
          "type": "keyword",
          "normalizer": "lowercase_normalizer"
        }
      }
    }
  }
}
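With either mapping, a match query should behave the same way, since the query text is analyzed (or normalized) at search time just like the indexed value (a sketch; "my_index" is a placeholder index name):

```json
GET my_index/_search
{
  "query": {
    "match": {
      "city": "NEW YORK"
    }
  }
}
```

This should match a document indexed with "New York" under both approaches, because both the stored value and the query input are lowercased before comparison.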
Which one is better? From a functionality point of view, I think there is no difference. I am wondering whether there is a performance difference between the two approaches.