Date filter not doing what it said it did

I have a particularly vexing issue that is causing me a lot of confusion.

I have a number of Logstash instances that pull events out of Redis and insert them into Elasticsearch as part of a larger ELK stack.

Some events stored in Redis have a Timestamp field, which I would like to parse into the @timestamp field of the event (so it has the correct time), removing the now-redundant Timestamp field.

Using Logstash 2.4.0 on Linux, I have created a config file to do this:

filter {
    date {
        match => [ "Timestamp", "ISO8601" ]
        remove_field => [ "Timestamp" ]
    }
}

Straightforward. I can verify that the file is being loaded by Logstash and that the filter is present.
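To exercise the filter in isolation, a minimal standalone pipeline along these lines (stdin/stdout substituted for the real input and output) can be fed a sample event:

input {
    stdin {
        codec => json_lines
    }
}

filter {
    date {
        match => [ "Timestamp", "ISO8601" ]
        remove_field => [ "Timestamp" ]
    }
}

output {
    stdout {
        codec => rubydebug
    }
}

Running something like echo '{"Timestamp":"2016-12-09T17:12:57.8518336+10:00"}' | bin/logstash -f test.conf (where test.conf is whatever you save the snippet as) then shows exactly what the filter emits.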

However, the resulting events inserted into Elasticsearch retain the Timestamp field, and @timestamp is incorrect (it looks like the time at which the event was originally processed, so it's definitely not using the right field).

The debug logs show that the filter is being hit and that it should be removing the Timestamp field and overwriting the @timestamp field, but the end result in Elasticsearch disagrees.

{:timestamp=>"2016-12-09T07:13:11.674000+0000", :message=>"filter received", :event=>{"Timestamp"=>"2016-12-09T17:12:57.8518336+10:00", "Severity"=>"INFO", "Id"=>"yoqtJYrs8060hlaCHz8BjA", "@version"=>"1", "@timestamp"=>"2016-12-09T07:13:05.774Z", "host"=>"10.250.177.112", "port"=>1490, "type"=>"tcp-6379-broker"}, :level=>:debug, :file=>"(eval)", :line=>"74", :method=>"filter_func"}
{:timestamp=>"2016-12-09T07:13:11.833000+0000", :message=>"Date filter: received event", :type=>"tcp-6379-broker", :level=>:debug, :file=>"logstash/filters/date.rb", :line=>"311", :method=>"filter"}
{:timestamp=>"2016-12-09T07:13:11.834000+0000", :message=>"Date filter looking for field", :type=>"tcp-6379-broker", :field=>"Timestamp", :level=>:debug, :file=>"logstash/filters/date.rb", :line=>"314", :method=>"filter"}
{:timestamp=>"2016-12-09T07:13:07.209Z", :message=>"Date parsing done", :value=>"2016-12-09T17:13:07.209173+10:00", :level=>:debug, :file=>"logstash/filters/date.rb", :line=>"348", :method=>"filter"}
{:timestamp=>"2016-12-09T07:13:11.835000+0000", :message=>"filters/LogStash::Filters::Date: removing field", :field=>"Timestamp", :level=>:debug, :file=>"logstash/filters/base.rb", :line=>"175", :method=>"filter_matched"}

I am at a loss to explain this. The logs give every indication that it did the thing I expected it to do, but it definitely did not.

The Logstash configuration on the machine is split into 5 files:

10-input-elasticache.conf

input {
    redis {
        host => '#{redis}'
        data_type => 'list'
        key => 'logstash:redis'
        type => 'redis-input'
    }
}

20-filter-replace-dots.conf

filter {
    ruby {
        init => "
                # Elasticsearch 2.x rejects field names containing dots, so
                # recursively replace dots with underscores in all keys.
                def remove_dots hash
                    new = Hash.new
                    hash.each { |k,v|
                        if v.is_a? Hash
                            v = remove_dots(v)
                        elsif v.is_a? Array
                            # Clean hashes nested inside arrays, keeping the
                            # array itself intact.
                            v = v.map { |elem| elem.is_a?(Hash) ? remove_dots(elem) : elem }
                        end
                        new[ k.gsub('.','_') ] = v
                    } unless hash.nil?
                    return new
                end
            "
        code => "
                event.instance_variable_set(:@data, remove_dots(event.to_hash))
            "
    }
}

20-filter-timestamp.conf
This is the date filter shown at the top of this post.

90-output-elasticsearch.conf

output {
    elasticsearch {
        hosts => '#{elasticsearch-elb}'
    }
}

90-output-statsd.conf

output {
    statsd {
        host => "#{statsd}"
        namespace => "_t_host"
        count => {
            "_t_environment.%{Environment}._t_application.%{Application}._t_component.%{Component}.logstash.events" => "1"
        }
    }
    if ([TimeTaken]) {
        statsd {
            host => "#{statsd}"
            namespace => "_t_host"
            timing => {
                "_t_environment.%{Environment}._t_sourcemodule.%{SourceModuleName}.%{Application}.%{Component}.latency" => "%{TimeTaken}"
            }
        }
    }
}

Ignore the #{something} values, as they are just substituted during deployment via Octopus Deploy.

My guess would be that the ISO8601 pattern specified in your date filter is not matching your actual date string properly.

Try using an explicitly defined date pattern to check (the date filter takes Joda-Time format strings rather than regexes).
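Something along these lines, for example (the pattern here is only an illustration and would need adjusting to your actual field):

filter {
    date {
        # Illustrative explicit Joda-Time pattern; note that SSS accepts
        # exactly three fractional-second digits, so this particular pattern
        # would not match the seven-digit fraction shown in the logs above.
        match => [ "Timestamp", "yyyy-MM-dd'T'HH:mm:ss.SSSZZ" ]
    }
}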

Maybe not... your debug certainly sounds like it's trying to do the correct thing...

I've double-checked the format of the incoming Timestamp field, and it's definitely ISO8601-compatible, so I have absolutely no idea what is going on here.

I worked around the problem by ensuring that the events that go into Redis already have an @timestamp field with the appropriate date in it. Basically, I'm making sure that any events that enter the system are already formatted appropriately.
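So an event pushed to Redis now looks something like this (all field values here are illustrative):

{
    "@timestamp": "2016-12-09T07:12:57.851Z",
    "Severity": "INFO",
    "Application": "MyApp",
    "Component": "Broker",
    "message": "Connection established"
}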

As far as I can tell from the code, the date filter assumes an ISO8601 timestamp to have millisecond precision only. In your example, the one timestamp you supplied with only millisecond precision succeeded, while the ones with microsecond precision failed, which would seem to support this. You may therefore need to use grok to remove the microsecond part of the timestamp.
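A sketch of that idea, using mutate's gsub instead of grok to trim the fraction to three digits before the date filter runs (untested; it relies on the backreference behaviour of Ruby's String#gsub):

filter {
    mutate {
        # Keep only the first three fractional-second digits so the
        # ISO8601 parser can handle the value.
        gsub => [ "Timestamp", "(\.\d{3})\d+", "\1" ]
    }
    date {
        match => [ "Timestamp", "ISO8601" ]
        remove_field => [ "Timestamp" ]
    }
}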
