Mutate plugin in Grok Filter

(Akarsha) #1

I want one of my fields to be converted to "integer". I tried using the mutate option in grok, but the change is not reflected in Kibana, and I am not getting any exception in the Logstash logs either.

config file:
filter {
  if [type] == "provider" {
    if "Outbound" in [message] {
      grok {
        match => { "message" => "%{TIMESTAMP_ISO8601:timestamp} \| %{DATA:api-status} \| %{DATA:transaction-id} \| %{DATA:server-name} \| %{DATA:bundle-name} \| %{DATA:workqueue} \| %{DATA:handler} \| %{DATA:service-name} \| %{DATA:api-name} \| %{DATA:application-id} \| %{DATA:system-id} \| %{DATA:username} \| %{TIMESTAMP_ISO8601:consumer-ref-timestamp} \| %{DATA:consumer-ref-id} \| %{DATA:csr-id} \| %{DATA:user-id} \| %{DATA:language-code} \| %{DATA:country-code} \| %{GREEDYDATA:outbound-msg}" }
      }
    }
    else if "Inbound" in [message] {
      grok {
        match => { "message" => "%{TIMESTAMP_ISO8601:timestamp} | %{DATA:api-status} | %{DATA:transaction-id} | %{DATA:server-name} | %{DATA:bundle-name} | %{DATA:workqueue} | %{DATA:handler} | %{DATA:service-name} | %{DATA:api-name} | %{DATA:application-id} | %{DATA:system-id} | %{DATA:username} | %{TIMESTAMP_ISO8601:consumer-ref-timestamp} | %{DATA:consumer-ref-id} | %{DATA:csr-id} | %{DATA:user-id} | %{DATA:language-code} | %{DATA:country-code} | %{DATA:in}.%{DATA:msg}.--->.%{DATA:id}(%{DATA:http}.%{POSINT:code}.-.%{DATA:state}).%{DATA:and}.%{DATA:took}.:.%{INT:response}" }
        convert => ["%{[response]}", "integer"]
      }
    }
  }
}


I also tried the below:

if [response] {
  convert => [ [response], "integer" ]
}
Here, the 'response' field appears as "string" in Kibana. I wish to convert it to a number but am failing to do so. Kindly help.
The ELK version used is 5.1.1. Thanks in advance.

(Guy Boertje) #2

Grok already has this functionality:

> Optionally you can add a data type conversion to your grok pattern. By default all semantics are saved as strings. If you wish to convert a semantic's data type, for example change a string to an integer, then suffix it with the target data type. For example %{NUMBER:num:int}, which converts the num semantic from a string to an integer. Currently the only supported conversions are int and float.
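Applied to the pattern in the original post, that means suffixing the final capture with `:int` (sketch only; the rest of the pattern is elided here and no separate mutate/convert is needed):

```
grok {
  match => { "message" => "... %{DATA:took}.:.%{INT:response:int}" }
}
```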


(Akarsha) #3

I tried that just now, but in Kibana my field is still showing as a string. I have also restarted Logstash and Elasticsearch, but no luck.

(Guy Boertje) #4

Does the index mapping convert it back to a string?

(Akarsha) #5

How can I check that?

(Guy Boertje) #6

If you don't supply a custom index mapping template, ES/Kibana uses a default one.
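You can inspect the mapping Elasticsearch actually applied with the get-mapping API. For example (the index name here is illustrative; substitute your own):

```
curl -XGET 'localhost:9200/logstash-2017.02.01/_mapping?pretty'
```

Look for the `response` field in the output and check whether its `type` is `text`/`keyword` or a numeric type. Note that once a field has been mapped as a string in an existing index, it stays that way; typed values only take effect in newly created indices.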

(Akarsha) #7

I tried the conversion for a new set of input and it worked! Thanks much! :slight_smile:

(Akarsha) #8

I need to ignore a few lines in my log file. How can I ignore/not include those lines? I think there is some 'ignore' option, but I'm not sure. Kindly suggest something.

(Guy Boertje) #9

The ignore config option on the file input is used to ignore files by pattern.

You can use the drop filter with an if conditional, and the events from the lines you don't want will be dropped before the output stages.

if [some_field] == "some value" {
  drop {}
}
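If the lines to skip are identified by a substring or pattern rather than an exact field value, a conditional on the raw message works too (the `DEBUG` pattern here is just an illustration):

```
if [message] =~ /DEBUG/ {
  drop {}
}
```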

(Akarsha) #10

Thanks for the help.
In my Kibana, I am not seeing hits for logs from the last hour. The log file pattern I have configured is *.log, so ideally Logstash should be processing the current logs. Is there any parameter in the Logstash, Elasticsearch, or Kibana config which I need to update to see the latest log hits? I am also using Filebeat as a data shipper. Thanks in advance.

(system) #11

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.