Why am I getting _dateparsefailure in Logstash?

Hi,

I am trying out ELK GA 6.3. Using the Logstash JDBC input plugin, I am fetching rows from my Oracle table. Below is my complete configuration:

input {
	jdbc {
		jdbc_driver_library => "/Logstash/ojdbc6.jar"
		jdbc_driver_class => "Java::oracle.jdbc.driver.OracleDriver"
		jdbc_connection_string => "jdbc:oracle:thin:@192.168.0.1:1521:user01"
		jdbc_user => "user01"
		jdbc_password => "user01"
		statement => "SELECT field_a,field_b,field_c,timefield FROM table_a where rownum <= 1"
		add_field => { "logtype" => "my_dblogs" }
	}	
}

filter {
	if [logtype] == "my_dblogs" {
		date {
			match => [ "timefield", "ISO8601" ]
			timezone => "GMT"
			target => "@timestamp"
		}	
	}
}

output {
	stdout { codec => json }
}

Below is my JSON output:

{
	"field_a": "dummy",
	"field_b": "dummy",
	"field_c": "dummy",
	"@version": "1",
	"tags": ["_dateparsefailure"],
	"logtype": "my_dblogs",
	"timefield": "2017-06-09T01:23:00.357Z",
	"@timestamp": "2018-06-19T11:48:25.333Z",
}

timefield is a date_time field and is mapped as such in Elasticsearch without any problem. As you can see, I am trying to assign it to @timestamp using the date filter, and it produces a _dateparsefailure. I have provided ISO8601 as described in the docs. I have also tried yyyy-MM-ddTHH:mm:ss.SSSZ, but that instead fails with [FATAL][logstash.runner] An unexpected error occurred!.

Why is this happening? How can I fix this?

Thank you.

@elasticheart, I don't see any problems with the configuration of your date filter. I set up a test case that eliminates the JDBC input, since that should not affect the date parsing:

input {
  generator { count => 1 }
}

filter {
    mutate {
        add_field => { "timefield" => "2017-06-09T01:23:00.357Z" }
    }
    date {
        match => [ "timefield", "ISO8601" ]
        timezone => "GMT"
        target => "@timestamp"
    }
}

output {
  stdout { codec => json }
}

I get this output:

{
    "message":"Hello world!",
    "host":"myhostname",
    "sequence":0,
    "timefield":"2017-06-09T01:23:00.357Z",
    "@version":"1",
    "@timestamp":"2017-06-09T01:23:00.357Z"
}

Does that work for you? If not, what OS are you on?

As you are using data from the JDBC input, that field is possibly already a date and not a string, which is what the date filter expects. If you output the event to stdout with a rubydebug codec, you should be able to see whether the value is printed surrounded by double quotes, which I believe would indicate it is a string.
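
As a minimal sketch, the stdout output could be swapped for a rubydebug codec just for inspection (this changes only how the event is printed, not its contents):

output {
	stdout { codec => rubydebug }
}

A plain string would then show up surrounded by double quotes, while a timestamp object typically prints without them.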

@Christian_Dahlqvist as you have said, the field is a date and not a string. I want to cast it to @timestamp. How can I do that?

@danhermann there won't be any problem with your example, since you are creating a string field. In my case, the field is already a date field, and I want to assign it to @timestamp.

@elasticheart, you (and Christian) are right. In order to copy the existing date value, you can use the mutate filter with the copy option.
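
A minimal sketch of that, reusing your existing conditional (the copy option requires a reasonably recent version of the mutate filter):

filter {
	if [logtype] == "my_dblogs" {
		mutate {
			copy => { "timefield" => "@timestamp" }
		}
	}
}

Since the source value is already a timestamp, no date parsing is involved.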

One option might be to modify your SQL to read it out as a correctly formatted string. If you need to process it using a date filter, e.g. to change the timezone, you may want to add it as a string field and then run the date filter based on that.
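
A sketch of that approach, assuming Oracle's TO_CHAR and a hypothetical timefield_str alias; the format model has to match your actual column type:

input {
	jdbc {
		# connection settings as in the original configuration
		statement => "SELECT field_a, field_b, field_c, TO_CHAR(timefield, 'YYYY-MM-DD HH24:MI:SS') AS timefield_str FROM table_a WHERE rownum <= 1"
	}
}

filter {
	date {
		# the pattern mirrors the TO_CHAR format model above
		match => [ "timefield_str", "yyyy-MM-dd HH:mm:ss" ]
		timezone => "GMT"
		target => "@timestamp"
	}
}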

@danhermann it works! But could you kindly tell me which one has better performance and uses less CPU: A) querying my timefield as a string (using Oracle's TO_CHAR) and using the date filter, or B) using the mutate filter?

@Christian_Dahlqvist I agree. But could you kindly tell me which one has better performance and uses less CPU: A) querying my timefield as a string (using Oracle's TO_CHAR) and using the date filter, or B) using the mutate filter? There is no need to change the timezone because everything is in GMT.

I do not know. From a Logstash perspective, reading it out as a correctly formatted string requires less processing, but that instead requires your database to do the work. All in all, I would expect this to have little impact on throughput.

@elasticheart, retrieving the date from the database as a date and just copying it to a new field is faster than converting from the string representation to a date, although the performance difference between the two is pretty small.

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.