Case sensitive search in Kibana

I'm trying to do a case sensitive search in a Kibana watcher as below. However, this seems to pick up all messages with "error" as well. How best can this be handled?

    "body": {
      "query": {
        "bool": {
          "should": [
            {
              "match": {
                "message": "Exception Warn ERROR"
              }
            },

I assume that the message field has been mapped as type text. Without any further analyzer specified, the standard analyzer will lowercase the field values during indexing. If you don't want that to happen, you'll have to configure a custom analyzer on the message field. See the custom analyzer documentation for more details.
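You can see the lowercasing in action with the _analyze API, for example:

POST _analyze
{
  "analyzer": "standard",
  "text": "Exception Warn ERROR"
}

The standard analyzer emits the tokens exception, warn and error, which is why a query for any casing of "error" matches them all.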

Thank you so much. I'll give it a try now.

I started reading through the custom analyzer documentation from your post above. Rookie question: where exactly do I add the custom analyzer? The closest thing seemed to be Management > Elasticsearch > Index Management > one of the indices > Edit Settings, but I'm not sure whether that is correct.

  "settings": {
"analysis": {
  "analyzer": {
    "rebuilt_standard": {
      "tokenizer": "standard",
      "filter": [
        "standard"      
      ]
    }
  }
}

}

After a bit more playing around, this is where I've got to.

Under Management > Elasticsearch > Index Management > select the index > Edit Settings, I:

  1. Closed the index
  2. Added the following two entries:
    "index.analysis.analyzer.rebuilt_standard.filter": ["standard"],
    "index.analysis.analyzer.rebuilt_standard.tokenizer": "standard"
  3. Opened the index
  4. Tried various combinations of clearing the index cache, flushing, refreshing, and force merging the index
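In API terms, that amounts to the following (with my_index standing in for my actual index name):

POST my_index/_close

PUT my_index/_settings
{
  "analysis": {
    "analyzer": {
      "rebuilt_standard": {
        "tokenizer": "standard",
        "filter": [ "standard" ]
      }
    }
  }
}

POST my_index/_open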

Now when I simulate the watch, it still performs a case-insensitive search. Any pointers on what I could be missing?

Hi @Bargs, @Magnus_Kessler, Elastic team,
Could you help with this please?

Please don't use @name with forum members who haven't been part of a discussion thread yet.

To answer your question about where the custom analyzer needs to be configured:

You install the analyzer in the settings of a given index. From the documentation:

PUT my_index
{
  "settings": {
    "analysis": {
      "analyzer": {
        "my_custom_analyzer": {
          "type":      "custom",
          "tokenizer": "standard",
          "char_filter": [
            "html_strip"
          ],
          "filter": [
            "lowercase",
            "asciifolding"
          ]
        }
      }
    }
  }
}
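For the case-sensitive matching you're after, what matters is which token filters you include: leaving out the lowercase filter keeps the original case of the tokens. A minimal sketch (the analyzer name case_sensitive_standard is just an example):

PUT my_index
{
  "settings": {
    "analysis": {
      "analyzer": {
        "case_sensitive_standard": {
          "type": "custom",
          "tokenizer": "standard"
        }
      }
    }
  }
}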

Then, for each field that should be analyzed with the custom analyzer, you also have to configure the analyzer in the mapping:

PUT /my_index
{
  "mappings": {
    "_doc": {
      "properties": {
        "message": { 
          "type": "text",
          "analyzer": "my_custom_analyzer"
        }
      }
    }
  }
}

Make sure to replace the document type _doc above with the document type you actually use.
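Once the mapping is in place, you can check which analyzer a field actually uses with the _analyze API:

GET my_index/_analyze
{
  "field": "message",
  "text": "Exception Warn ERROR"
}

If your custom analyzer is applied and does not include the lowercase filter, the tokens come back with their original case.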


Thanks. Will give it a try now. Will this work against the existing entries as well, or only against those created after the custom analyzer was added?

If you want to change the mappings for existing fields, you will have to reindex your data. I suggest you create a new index with your custom analyzer and configure the mapping to use this analyzer as shown. Then you can start indexing new data into this index, or use the Reindex API to reindex existing data. You may want to look into creating an alias, so that your client can transparently use the new index.
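A minimal reindex call looks like this (my_old_index and my_new_index stand in for your actual index names):

POST _reindex
{
  "source": { "index": "my_old_index" },
  "dest":   { "index": "my_new_index" }
}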

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.