Is it possible to skip columns from a csv file while uploading data into Elasticsearch?

I have around 50 columns in my csv, but only 10 are useful to me. Is there any way that, when I upload the data, only the data for those 10 columns is inserted into Elasticsearch, so that only those 10 fields appear in Kibana? Data for the remaining fields should not be uploaded.

Are you supplying the names of the columns in the columns option on the csv filter? If so, make the column names you want to delete match a pattern and then use a prune filter with a blacklist_names option.
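A minimal sketch of that approach, assuming a hypothetical file where only the first two columns are wanted (the column names here are illustrative, not taken from your data):

filter {
  csv {
    separator => ","
    # name the unwanted columns so they share a common prefix
    columns => ["c1", "c2", "drop_c3", "drop_c4", "drop_c5"]
  }
  prune {
    # remove every field whose name matches this pattern
    blacklist_names => [ "^drop_" ]
  }
}

blacklist_names takes regular expressions, so the pattern just has to match whatever throwaway names you assign to the columns you want gone.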

Thanks @Badger!! Will give it a try.
One more thing: can the column names in Logstash be different from those in the csv?
I am not using dynamic mapping, as I created the mapping for the required fields while creating the index.

Yes. If your csv file has a header row that contains the column names, you can use that to set them, but you do not have to; you can use the columns option to set the column names to other values.

Something like this:
csv {
    separator => ","
    columns => ["C1","C2","C3","C4"]
}
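Since you created the mapping yourself, just make sure the names you put in columns match the field names in your index mapping; the csv filter will happily create fields under whatever names you give it, and anything that doesn't line up with the mapping won't be the field you expect in Kibana.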
