Logstash sending date instead of string

I am using the grok filter to parse logs. One of the parsed fields is "time", for which I am using the "TIMESTAMP_ISO8601" grok pattern.
By default Logstash should send this "time" field as a string, but it is being sent as a date.
I want Logstash to send this "time" field as a string.

The "time" field has the following format:--2017-11-08T12:27:21.000Z

Please help me with this.


Use the mutate filter's convert option to change the data type.
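
For example, a minimal sketch (assuming the field is called "time", as in your grok pattern):

    filter {
      mutate {
        # force the parsed "time" value to the string type
        convert => { "time" => "string" }
      }
    }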

Make sure your index mapping has the "time" field as text instead of date.
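
For example, something along these lines (the index and type names are placeholders; adjust them to match your output section):

    PUT bhavya-2017.11.08
    {
      "mappings": {
        "log": {
          "properties": {
            "time": { "type": "text" }
          }
        }
      }
    }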

It's not clear what you mean. When Logstash sends events to Elasticsearch (if that's what you're talking about), they're sent as JSON. There is no data type for timestamps in JSON, so timestamps can only be represented as strings or numbers.

Instead of describing the situation try giving a concrete example.


I tried using mutate to convert the "time" field into a "string", but that did not work.


These are the contents of my conf file:

input {
  beats {
    port => LOGSTASH_PORT
  }
}

filter {

        grok {
            patterns_dir => "/home/application/bhavya/patterns/"
            match => { "message" => "\<%{USER:hField1}\>%{SPACE}%{IPV4:hIp1}%{SPACE}%{WORD:hHostName}%{SPACE}%{TIMESTAMP_ISO8601:time},%{IPV4:clientIp}"}
        }

        mutate {
            # Original message has been fully parsed, so remove it.
            #remove_field => [ "message" ]
        }

        ruby {
            code => "event.set('gsi_ts', event.get('@timestamp').to_i)"
        }
}

output {
    if "_grokparsefailure" in [tags] {
        elasticsearch {
            hosts => ["localhost:9200"]
            index => "error-bhavya-%{+yyyy.MM.dd}"
            document_type => "parsing_failure"
        }
    } else {
        elasticsearch {
            hosts => ["localhost:9200"]
            index => "bhavya-%{+yyyy.MM.dd}"
            document_type => "log"
        }
    } 
}

And my input logs are of the following format:

<01910> xx.xx.xx.xx  xyz 2017-11-08T12:27:21.000Z,xx.xx.xx.xx

GET bh*/_mapping

Output in Kibana:

So instead of getting the "type" of the "time" field as "text", I am getting "date".
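
The relevant part of the mapping looks roughly like this (illustrative snippet reconstructed from the description, not the actual Kibana output):

    {
      "bhavya-2017.11.08": {
        "mappings": {
          "log": {
            "properties": {
              "time": {
                "type": "date"
              }
            }
          }
        }
      }
    }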

That's because Elasticsearch autodetects the field as a date field based on what the string looks like. If you don't want that you can use an index template to force the time field to be a string.
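
For example, a rough sketch of such a template (the template name and index pattern are placeholders, and on newer Elasticsearch versions the top-level key is index_patterns instead of template):

    PUT _template/bhavya
    {
      "template": "bhavya-*",
      "mappings": {
        "log": {
          "properties": {
            "time": { "type": "text" }
          }
        }
      }
    }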

Would turning off dynamic mapping help in this case?

If you disable dynamic mapping you have to define all fields upfront, before you index any documents. So yes, it solves your problem but it does a lot more too.
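
For illustration, a sketch of what that would look like (the field names are taken from your grok pattern; every field added by Logstash and Beats would also need an entry):

    PUT bhavya-2017.11.08
    {
      "mappings": {
        "log": {
          "dynamic": "strict",
          "properties": {
            "time":      { "type": "text" },
            "clientIp":  { "type": "ip" },
            "hHostName": { "type": "keyword" }
          }
        }
      }
    }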

If Logstash sends all the fields to Elasticsearch as strings, then I think disabling dynamic mapping would work, wouldn't it?

I don't think you quite understood what I wrote and I don't know how to explain it differently without just repeating myself. I suggest you try things out yourself and discover what works and what doesn't.

@jainbhavya53 Modify the grok pattern and incorporate mutate and date filters similar to the following:

    grok {
        patterns_dir => "/home/application/bhavya/patterns/"
        match => { "message" => "\<%{USER:hField1}\>%{SPACE}%{IPV4:hIp1}%{SPACE}%{WORD:hHostName}%{SPACE}%{YEAR:year}-%{MONTHNUM:month}-%{MONTHDAY:day}[T ]%{HOUR:hour}:?%{MINUTE:minute}(?::?%{SECOND:second})?%{ISO8601_TIMEZONE:tz}?,%{IPV4:clientIp}"}
    }
    mutate {
        # The timestamp components are captured with explicit names above so they
        # can be referenced here; add_field uses the "field" => "value" syntax.
        add_field => { "time" => "%{hour}:%{minute}:%{second}" }
        add_field => { "dttm" => "%{year}-%{month}-%{day}T%{hour}:%{minute}:%{second}%{tz}" }
    }
    date {
        match => [ "dttm", "ISO8601" ]
    }

This way you can retain 'time' as a string field and still have @timestamp replaced with the datetime value in the log.
