Failed to parse @timestamp from UNIX_MS

Going crazy here. I've been searching all day, but no success.
I've got this data that I want to import into Elasticsearch with Logstash:

[code]1465309033156,84,http://test.com/purchase.php,200,OK,ThreadGroup 1-3,true,6,6,84,testPC,0
1465309033469,176,http://test.com/,200,OK,ThreadGroup 1-7,true,7,7,176,testPC,91
1465309034108,210,http://test.com/,200,OK,ThreadGroup 1-4,true,7,7,210,testPC,82
1465309054246,85,http://test.com/reserve.php,200,OK,ThreadGroup 1-10,true,11,11,85,testPC,0
1465309054611,90,http://test.com/purchase.php,200,OK,ThreadGroup 1-4,true,11,11,89,testPC,0
1465309055131,91,http://test.com/purchase.php,200,OK,ThreadGroup 1-2,true,11,11,91,testPC,0

[/code]

This is my filter in Logstash:

[code]
filter {
  csv {
    columns => ["@jmeter_timestamp", "elapsed", "url", "responseCode", "responseMessage",
                "threadName", "success", "grpThreads", "allThreads", "Latency", "Hostname", "Connect"]
    separator => ","
  }

  date {
    locale => "en"
    match => [ "@jmeter_timestamp", "UNIX_MS" ]
    remove_field => [ "@jmeter_timestamp" ]
    target => "@timestamp"
    timezone => "Europe/Amsterdam"
  }
}
[/code]
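For context, the input side isn't shown above, so this is only an assumed sketch of how the JMeter results CSV would be fed into that filter (the path and the sincedb setting are placeholders, not from my actual config):

[code]
input {
  file {
    # assumed location of the JMeter results CSV; adjust the path
    path => "/path/to/jmeter-results.csv"
    start_position => "beginning"
    # re-read the whole file on every run (handy while testing)
    sincedb_path => "/dev/null"
  }
}
[/code]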

Part of my json file:

[code]
"mappings": { "logs": { "properties": { "@timestamp": { "type": "date" },
[/code]

But the error message states:

"error"=>{"type"=>"mapper_parsing_exception",
 "reason"=>"failed to parse [@timestamp]", 
"caused_by"=>{"type"=>"illegal_argument_exception", 
"reason"=>"Invalid format: \"2016-06-07T14:17:34.611Z\" is malformed at \"-06-07T14:17:34.611Z\""}}}}, :level=>:warn}

I thought I had specified the match in my date filter correctly?
The stdout output looks like this: http://imgur.com/eWv3Vdw
And in Kopf I can see an index being added.

This is ES complaining that it isn't able to parse the timestamp value. What's the actual mapping of the @timestamp field in the index in question?

When I perform this action:

[code]
curl -XGET 'http://192.168.43.51:9200/_mapping?pretty'
[/code]

This is my result:

"piet-2016.06.07" : { "mappings" : { "csv" : { }, "logs" : { "properties" : { "@timestamp" : { "type" : "date", "format" : "yyyyMMdd'T'HHmmss.SSS" }, "Connect" : { "type" : "long" }, "ErrorCount" : { "type" : "long" }, "Hostname" : { "type" : "string" }, "Latency" : { "type" : "long" }, "allThreads" : { "type" : "long" }, "elapsed" : { "type" : "integer" }, "grpThreads" : { "type" : "long" }, "label" : { "type" : "string" }, "responseCode" : { "type" : "integer" }, "responseMessage" : { "type" : "string" }, "success" : { "type" : "boolean" }, "threadName" : { "type" : "string" } } } } } }

I think I solved my own question, thanks to you @magnusbaeck

I changed my .json to the following:

"mappings": { "logs": { "properties": { "@timestamp": { "type": "date", "format" : "strict_date_optional_time||epoch_millis"

Note the format.
Now the data is put into Elasticsearch correctly, and therefore visible in Kibana!
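One thing to watch out for: an existing index keeps the mapping it was created with, so the new format only applies to indices created after the change. If an old index with the yyyyMMdd'T'HHmmss.SSS format is still around, deleting it (only safe on test data!) lets Logstash recreate it with the corrected mapping, for example:

[code]
# remove the old index so it gets recreated with the new mapping
# (host and index name taken from the _mapping output above)
curl -XDELETE 'http://192.168.43.51:9200/piet-2016.06.07'
[/code]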
