Hi,
I have this logstash .conf file:
input {
  file {
    path => "/eee/*.csv"
    start_position => "beginning"
    sincedb_path => "/dev/null"
  }
}
filter {
  csv {
    autodetect_column_names => true
  }
}
I have multiple .csv files containing different data, and all of them need to be placed in the same index.
For example, file1.csv contains:
Name,City,Date,Comment
Josh,city1,2022-01-02,active
John,city2,2022-04-29,passive
And file2.csv contains different headers:
Name,first_seen,last_seen,favorite
Josh,2020-04-05,2022-01-02,yes
John,2019-05-05,2022-04-29,no
When I open the index in Kibana after indexing into Elasticsearch, I see that all the field names were taken from the first file.
So for the rows from file2.csv I see:
Name: Josh, City: 2020-04-05, Date: 2022-01-02, Comment: yes
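What I expected instead is that each document keeps the field names from its own file's header, roughly:
Name: Josh, first_seen: 2020-04-05, last_seen: 2022-01-02, favorite: yes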
How can I avoid this problem? Big thanks in advance. I'm using Logstash 8.7.1, and pipeline.workers is set to 1.