How to filter uppercase fields in Kibana

In my Discover view I have the following columns:

First Name
Last Name
Email
Phone

Now in the "First Name" field I have some records in uppercase and some in lowercase.
How can I filter records on the Discover page to find:

  1. all records with First Name in uppercase
  2. all records with First Name in lowercase

You can use regular expressions to accomplish this:

first_name:/[a-z]+/
first_name:/[A-Z]+/

But it is going to depend very much on your Elasticsearch mapping. By default, Elasticsearch lowercases the values it indexes, though you will still see them uppercased in _source. You may need to create a custom analyzer that doesn't lowercase your data.
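A minimal sketch of such a mapping, for the 1.x/2.x string type used elsewhere in this thread (the index, type, and analyzer names here are illustrative, not from the thread):

```shell
# Create an index whose "first_name" field uses a custom analyzer that
# tokenizes with the standard tokenizer but applies no lowercase filter,
# so indexed terms keep their original case.
curl -XPOST 'http://localhost:9200/people' -d '{
    "settings" : {
        "analysis" : {
            "analyzer" : {
                "case_sensitive" : {
                    "type" : "custom",
                    "tokenizer" : "standard",
                    "filter" : []
                }
            }
        }
    },
    "mappings" : {
        "person" : {
            "properties" : {
                "first_name" : { "type" : "string", "analyzer" : "case_sensitive" }
            }
        }
    }
}'
```

With a mapping like this, case-sensitive regexes such as first_name:/[A-Z]+/ run against the original-case terms instead of lowercased ones.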

Hi

I tried this on the Discover page, in the search panel at the top, and it gives "No results found":
first_name:/[a-z]+/
first_name:/[A-Z]+/

But I do have records with both uppercase and lowercase values.

Hi All

Since the community has grown in the past few months, I think someone must have faced this problem. How can I get the uppercase values for a given field from the Kibana search panel?

I tried what Rashid suggested, but it's not working, even in Kibana 4.1.1.

Please let me know if anyone has tried this out:

first_name:/[A-Z]+/

Regards
Ritesh

You should check how your data is indexed into Elasticsearch. Standard analyzer by default normalizes everything in lower-case for purposes of full-text search: https://www.elastic.co/guide/en/elasticsearch/reference/current/analysis-standard-analyzer.html
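You can see that normalization directly with the _analyze API (a quick check against a local node; host and port are assumptions):

```shell
# Ask the standard analyzer how it would tokenize "First Name".
curl -XGET 'http://localhost:9200/_analyze?analyzer=standard' -d 'First Name'
# The returned tokens are "first" and "name" - both lower-cased, which is
# why first_name:/[A-Z]+/ finds nothing in an index built this way.
```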

Hi

Even lowercase is not working; it shows no results. Do we have any other regular expression to retrieve the lowercase values for a given field? I am using this:

first_name:/[a-z]+/

Thanks!

Regards
Ritesh

Strange, using regex syntax works for me (see screenshot). I'm using the standard analyzer.

I see two problems here:

  1. We are using [a-z], so it should match only lowercase, whereas "China" is camel case/title case. Why are we adding the word "China" up front, when we need to dig out the lowercase records from the "geoip.country_name" field?

  2. Why do we need the wildcard * here?

It would be very helpful if you could clarify. Yesterday I read through and tried the whole Lucene query syntax, but nothing worked for me.

Regards
Ritesh

I would guess the problem is that the field on which you're trying to do the regular expression is mapped as "not_analyzed". It appears that regular expressions in Lucene only work on analyzed fields.

My test setup below works if the "message" field is set to analyzed (which is the default), but does not work if I set it to "not_analyzed".

curl -XPOST 'http://localhost:9200/test2' -d '{
    "mappings" : {
        "test" : {
            "properties" : {
				"post_date" : { "type" : "date"},
                "message" : { "type" : "string" }
            }
        }
    }
}'

curl -XPUT 'http://localhost:9200/test2/test/1' -d '{
    "post_date" : "2015-11-15T14:12:12",
    "message" : "ABC"
}'
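To complete the test, something like the following search (a sketch of the query that the Kibana search bar produces, expressed as a query_string query) matches the document when "message" is analyzed:

```shell
# The standard analyzer indexed "ABC" as the lower-cased term "abc",
# so the lowercase regex matches while the uppercase one does not.
curl -XGET 'http://localhost:9200/test2/test/_search' -d '{
    "query" : {
        "query_string" : { "query" : "message:/[a-z]+/" }
    }
}'
```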


Yes, you are correct; all my string fields are not_analyzed.

Very strange. What's the point of matching regular expressions against analyzed fields, when analysis breaks a phrase like "foo bar" down into "foo" and "bar"? Then there is no need for a regular expression.

The whole concept of Elasticsearch relies on it being a search engine, and if we cannot use regular expressions on a set of string fields, then I think the whole concept of ES has gone for a toss.

I am just hitting the basics here...

Anyway, thanks for your help.

Regards
Ritesh

Agreed - there is some complexity here. Some users use the default Logstash index template to index two versions of key fields, one analyzed and one raw, for that reason.
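A sketch of that kind of multi-field mapping (the index, type, and field names are illustrative, following the Logstash convention of a "raw" sub-field):

```shell
# "first_name" is analyzed for full-text search; "first_name.raw" stores
# the original value untouched as a single not_analyzed term.
curl -XPOST 'http://localhost:9200/people' -d '{
    "mappings" : {
        "person" : {
            "properties" : {
                "first_name" : {
                    "type" : "string",
                    "fields" : {
                        "raw" : { "type" : "string", "index" : "not_analyzed" }
                    }
                }
            }
        }
    }
}'
```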