I have recently started using Logstash, so I'm still new at this. The use case I am working on is shipping a log file with Logstash that contains lines of different lengths. The length of a line (i.e. the number of attributes to be extracted) depends on a FLAG added at the start of every line. To elaborate, I have three flags, and the log file contains lines similar to the following:
If all types of rows share the same set of fields (with FLAG2/FLAG3 rows simply missing some of them), then you can use a single `csv` filter with `skip_empty_columns` set to true.
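A minimal sketch of that approach (the column names here are just placeholders for whatever your fields actually are):

```
filter {
  csv {
    separator => ","
    # superset of all columns; FLAG2/FLAG3 rows simply leave the trailing ones unset
    columns => ["flag", "field1", "field2", "field3", "field4"]
    skip_empty_columns => true
  }
}
```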
But it seems like the else block is not being run. Just a clarifying question: when we count the number of fields from the metadata, does the code count the number of fields in the whole CSV file or in the particular record?
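To show what I mean, this is roughly the shape of what I am trying; the `ruby` filter and the `[@metadata][num_fields]` name are just illustrative, and the field counts are placeholders:

```
filter {
  ruby {
    # count the comma-separated values in this particular event's line
    code => "event.set('[@metadata][num_fields]', event.get('message').split(',').length)"
  }

  if [@metadata][num_fields] == 5 {
    csv { columns => ["flag", "a1", "a2", "a3", "a4"] }
  } else {
    csv { columns => ["flag", "b1", "b2"] }
  }
}
```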
P.S. Rows with different flags contain entirely different fields, i.e. I will have to give them different column names after checking their flags or the number of columns.
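So I imagine I would end up with something along these lines, branching on the flag at the start of each line (the flag values and column names are placeholders):

```
filter {
  if [message] =~ /^FLAG1/ {
    csv { columns => ["flag", "a1", "a2", "a3", "a4"] }
  } else if [message] =~ /^FLAG2/ {
    csv { columns => ["flag", "b1", "b2"] }
  } else if [message] =~ /^FLAG3/ {
    csv { columns => ["flag", "c1", "c2", "c3"] }
  }
}
```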