I have a log file in CSV format and I need Filebeat to parse it into Elasticsearch, with fields like IP, Client.OS, url, datafield, etc. extracted from each line of the CSV file.
I have read that I have to use Logstash to parse and enrich the data fields, but I have not been able to get that working.
Could you please tell me the steps to achieve this?
I had a similar situation. I sorted it out for the time being using a combination of split and script processors in an Elasticsearch ingest pipeline.
Here is a sample ingest pipeline you can use as a starting point. It is written with a three-field CSV in mind and stores the values in three fields called ApplicationId, Level and Error.
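A minimal sketch of such a pipeline, assuming the raw CSV line arrives in the `message` field (the pipeline name `parse_csv`, the comma separator, and the temporary `csv_fields` field are my assumptions, adjust as needed):

```json
PUT _ingest/pipeline/parse_csv
{
  "description": "Sketch: split a three-field CSV line into ApplicationId, Level and Error",
  "processors": [
    {
      "split": {
        "field": "message",
        "separator": ",",
        "target_field": "csv_fields"
      }
    },
    {
      "script": {
        "lang": "painless",
        "source": "ctx.ApplicationId = ctx.csv_fields[0]; ctx.Level = ctx.csv_fields[1]; ctx.Error = ctx.csv_fields[2];"
      }
    },
    {
      "remove": {
        "field": "csv_fields"
      }
    }
  ]
}
```

The split processor turns the line into an array, the script processor copies each element into a named field, and the remove processor drops the temporary array. Note that a plain split on "," will break on quoted fields that contain embedded commas, which is the kind of edge case mentioned below.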
There are some edge cases you have to be careful of; I wrote a small blog entry about them. I am waiting for an official csv processor to be released for ingest pipelines. Once that happens we can avoid all this hacking around.
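To tie this back to the original question: Filebeat can send events through that pipeline directly, without Logstash. A sketch of the relevant filebeat.yml settings, where the paths and hosts values are placeholders you would need to change:

```yaml
filebeat.inputs:
  - type: log
    paths:
      - /var/log/myapp/*.csv   # placeholder path to your CSV logs

output.elasticsearch:
  hosts: ["localhost:9200"]    # placeholder Elasticsearch host
  pipeline: parse_csv          # run every event through the ingest pipeline above
```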