CSV filtering misses the first 2 columns

I have a CSV file with n columns in the format:
1Col,2Col,.........,n-1Col,nCol

All of my data is parsed fine with the right header names, except for the first 2 columns, which show up in Kibana as "column1" and "column2".

I can't figure out the problem. I tried deleting all the previous data I had indexed with different header formats, but still no success.
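For example, a file with n = 4 columns would look like this (header and sample values are made up for illustration):

```
Id,Name,Price,Stock
1,apple,0.50,100
2,banana,0.30,250
```

The first line is the header row that I expect the csv filter to pick up as field names.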

my config file is:

input {
  beats {
    client_inactivity_timeout => 1200
    port => 5044
  }
}
filter {
  grok {
    break_on_match => false
    match => { "source" => "^.*?(?<filename>[^\/]*)\.csv$" } # filename parsing
  }
  csv {
    autodetect_column_names => true
    autogenerate_column_names => true
    skip_empty_columns => true
  }
  mutate { add_field => { "[@metadata][filename]" => "%{filename}" } }
  mutate { lowercase => [ "[@metadata][filename]" ] }
}
output {
  elasticsearch {
    hosts => ["http://localhost:9200"]
    manage_template => false
    index => "%{[@metadata][beat]}-%{[@metadata][filename]}"
    document_type => "%{[@metadata][type]}"
    user => "elastic"
    password => "1234"
  }
}
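In case it matters, I also tried sketching the csv filter with the columns listed explicitly instead of relying on autodetect_column_names, along these lines (the column names below are placeholders, not my real headers):

```
csv {
  # explicit header list instead of autodetect; replace with the real column names
  columns => [ "1Col", "2Col", "n-1Col", "nCol" ]
  skip_empty_columns => true
  # drop the header row itself so it is not indexed as data
  skip_header => true
}
```

I would prefer to keep autodetect, though, since the number of columns varies between files.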
