Field turns into text in Elasticsearch after being matched as INT in grok and converted to integer with mutate

Hi everyone!
I have a Logstash grok filter in which I matched certain phrases as INT. When I open the mapping in Kibana, I see that Elasticsearch classified the INT fields as text. Why is that?
How do I change it? I want to run number-based aggregations (like average), so these values have to be stored as numbers.

Log example:
13/11/2017 10:31:15:664 - [logReaderThread] WARN LogReader - Parser has a lag of [1984] seconds above a pre-defined threshold.

Grok filter:

%{DATE:date} %{TIME:time} - \[%{DATA:stream_name}\]  %{LOGLEVEL:log_level}%{GREEDYDATA:msg}(?<=lag\sof\s\[)%{INT:lag_sec}

And the mapping:

"lag_sec": {
            "type": "text",
            "fields": {
              "keyword": {
                "type": "keyword",
                "ignore_above": 256
              }
            }
          }

And it's the same for every INT in the grok filter.
Any ideas?

You are just telling grok that the field you are expecting looks like an INT, but grok extracts strings by default.

You can use a mutate filter to change your field to an integer:

filter {
  mutate {
    convert => { "lag_sec" => "integer" }
  }
}
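
If you want to double-check what Logstash actually produces before anything reaches Elasticsearch, a stdout output with the rubydebug codec prints each event to the console; a converted field shows up as a bare number, while a string field shows up in quotes:

output {
  # Print the full event so the field types are visible:
  # "lag_sec" => 1984 (integer) vs. "lag_sec" => "1984" (string).
  stdout { codec => rubydebug }
}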

Hi, thanks for the answer!
I did what you suggested and used mutate on every field I want to convert from text to integer (not only "lag_sec"). I tested the new Logstash config and the test was OK, then I started Kibana and refreshed the index, but when I check the mapping:

      "insertsNum": {
        "type": "text",
        "fields": {
          "keyword": {
            "type": "keyword",
            "ignore_above": 256
          }
        }
      },
      "lag_sec": {
        "type": "text",
        "fields": {
          "keyword": {
            "type": "keyword",
            "ignore_above": 256
          }
        }
      },
      "documents_loaded__time_ms": {
        "type": "text",
        "fields": {
          "keyword": {
            "type": "keyword",
            "ignore_above": 256
          }
        }
      },
      "fetchTime_ms": {
        "type": "text",
        "fields": {
          "keyword": {
            "type": "keyword",
            "ignore_above": 256
          }
        }
      }

etc...

The mutate filter:

mutate {
  convert => {
    "lag_sec" => "integer"
    "parsedNum" => "integer"
    "total_parsed_time_ms" => "integer"
    ...
  }
}
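
(The test was just Logstash's built-in config check, something like the line below, with pipeline.conf standing in for my actual file name; it only validates the pipeline syntax, not how Elasticsearch will map the fields.)

bin/logstash -f pipeline.conf --config.test_and_exit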

Also, I should add that I'm running the 6.1 Elastic Stack.
One more thing: weirdly enough, there is one field that does get saved in Elasticsearch as long.

So what am I doing wrong?

Anyone?

I solved it, and this is what I did, for anyone who encounters the same thing:
The mutate filter didn't work for me, but you can make grok save a field as something other than a string (its default) simply by appending the target data type (e.g. :int) after the semantic.
For example:

grok {
  match => { "message" => "%{INT:someNum}" }      # someNum is saved as a string
}

grok {
  match => { "message" => "%{INT:someNum:int}" }  # someNum is saved as an integer
}

That did the trick for me.
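
For example, here is roughly how the original filter from my first post looks with the lag field typed (only lag_sec is shown with the suffix here; the other INT captures get the same :int treatment):

grok {
  # The :int suffix tells grok to emit lag_sec as an integer.
  match => { "message" => "%{DATE:date} %{TIME:time} - \[%{DATA:stream_name}\]  %{LOGLEVEL:log_level}%{GREEDYDATA:msg}(?<=lag\sof\s\[)%{INT:lag_sec:int}" }
}

One caveat worth noting: if the index has already mapped these fields as text, newly typed values won't change that, since existing field mappings in Elasticsearch can't be changed in place. The integer mapping only shows up in a new index (or after a reindex).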
