How to process a large CSV file and add custom Python code that transforms a few columns before indexing into Elasticsearch?

Hi,

I have a large CSV file with a million rows and ~50 columns. I want to add two extra columns whose values are derived from a few of the existing columns using some kind of text processing. For example, if column 1 and column 2 both contain a particular keyword, I insert specific values into the two extra columns.
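To make the transformation concrete, this is roughly what I have in mind, written as plain Python over the CSV. The column names, the keyword, and the values written into the extra columns are just placeholders for illustration; my real file has different headers:

```python
import csv

KEYWORD = "error"  # placeholder keyword; the real check is more involved

with open("input.csv", newline="") as src, open("output.csv", "w", newline="") as dst:
    reader = csv.DictReader(src)
    # Append the two extra columns to the existing ~50.
    fieldnames = reader.fieldnames + ["extra_col_1", "extra_col_2"]
    writer = csv.DictWriter(dst, fieldnames=fieldnames)
    writer.writeheader()

    for row in reader:
        # Decision per row: if the keyword appears in both columns,
        # fill the extra columns with specific values.
        if KEYWORD in row["column1"] and KEYWORD in row["column2"]:
            row["extra_col_1"], row["extra_col_2"] = "matched", "high"
        else:
            row["extra_col_1"], row["extra_col_2"] = "no_match", "low"
        writer.writerow(row)
```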

Is this possible in Logstash? If so, how can I plug in custom Python code that decides, for every row, what the two extra columns should contain?
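For context, if Logstash can't run Python, I assume the fallback would be to script the indexing step myself with the official `elasticsearch` Python client, along these lines (the host URL and index name are placeholders, and it reads the output of the snippet above). I'd much rather keep this inside a Logstash pipeline if that's the recommended way:

```python
import csv
from elasticsearch import Elasticsearch, helpers

es = Elasticsearch("http://localhost:9200")  # assuming a local cluster

def actions():
    # Stream the transformed rows produced by the earlier snippet.
    with open("output.csv", newline="") as f:
        for row in csv.DictReader(f):
            yield {"_index": "my-csv-index", "_source": row}

# Bulk-index all rows instead of one request per row.
helpers.bulk(es, actions())
```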

Is there a tutorial somewhere please?

Thanks
Abhishek S
