_dateparsefailure error despite the date appearing correct


(JW) #1

Hi All,

I've been testing out the very basics of ELK, and I was able to get an index going by reading a simple CSV. As a test, I tried to create ANOTHER index with the same timestamp format but different columns, and it gave me a _dateparsefailure. I deleted the whole Logstash install and tried using the same CSV as the original, and it still gave the same complaint.

Any pointers would be appreciated. Below is my sample logstash.conf and CSV:

input {
  file {
    path => "PiT.csv"
    start_position => "beginning"
  }
}
filter {
  csv {
    separator => ","
    columns => ["DATE","QUEUE","MAX","JLU","PEND","RUN","SUSP"]
  }
  date {
    match => ["DATE","MM/dd/YYYY HH:mm:ss"]
    target => "@timestamp"
  }
  mutate { convert => ["MAX", "float"] }
  mutate { convert => ["JLU", "float"] }
  mutate { convert => ["PEND", "float"] }
  mutate { convert => ["RUN", "float"] }
  mutate { convert => ["SUSP", "float"] }
}
output {
  elasticsearch {
    action => "index"
    hosts => ["localhost:9201"]
    index => "queue2"
    workers => 1
  }
  stdout {}
}

CSV

DATE,QUEUE,MAX,JLU,PEND,RUN,SUSP
07/06/2017 10:05:16, x,3000,2000,0,500,0
07/06/2017 10:05:16, y,1100,1000,0,58,0
07/06/2017 10:05:16, z,4500,1600,92,1328,20
07/06/2017 10:10:19, a,3000,2000,0,478,0
07/06/2017 10:10:19, b,1100,1000,0,58,0
07/06/2017 10:10:19, c,4500,1600,84,1328,20
07/06/2017 10:15:32, a,3000,2000,0,440,0
07/06/2017 10:15:32, c,1100,1000,0,58,0
07/06/2017 10:15:32, d,4500,1600,76,1248,20


(Magnus Bäck) #2

When Logstash adds the _dateparsefailure tag, it also logs a message indicating why the parse failed.
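If the log message is hard to find, another way to see exactly what each event looks like, including its tags and the raw DATE field, is to print events with the rubydebug codec (this just replaces the bare stdout {} output already in the config above):

output {
  # Pretty-print every event so the one carrying the
  # _dateparsefailure tag, and its DATE value, stand out.
  stdout { codec => rubydebug }
}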


(JW) #3

Hi Magnus,

How do I find where that message is logged, and what it says?


(Christian Dahlqvist) #4

The first line, which contains the headers, will be the first event processed, and it should cause a _dateparsefailure, so you may need to drop it. You should be able to check whether the first column contains the text DATE and then use a drop filter.
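A minimal sketch of that check, assuming the column names from the config posted above (placed after the csv filter and before the date filter):

filter {
  # The header row is parsed like any other line, so its DATE field
  # holds the literal text "DATE", which the date filter cannot parse.
  # Drop that event before it reaches the date filter.
  if [DATE] == "DATE" {
    drop { }
  }
}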


(JW) #5

Hmm, well, as my initial post shows, the column is there. Also, this is the exact same conf I already have running against another index called "queue", and the format has not changed at all. The source CSV is different, though.


columns => ["DATE","QUEUE","MAX","JLU","PEND","RUN","SUSP"]
}
date {
match => ["DATE","MM/dd/YYYY HH:mm:ss"]

DATE,QUEUE,MAX,JLU,PEND,RUN,SUSP
07/06/2017 10:05:16, x,3000,2000,0,500,0


(system) #6

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.