I'm trying the csv filter (since the data originally comes from CSV), but it isn't working well: it doesn't understand that there are only 2 repeated features.
I know the grok filter can do a lot of things, but I don't know how to configure it so that Logstash understands there are only 2 repeated features.
You can try mutate split. It'll create an array [timestamp1, value1, timestamp2, ...]; then you can remove the even indexes, which correspond to the timestampX fields, and keep [value1, value2, ...]. But what about your output?
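A minimal sketch of that idea, assuming the raw pairs arrive in a field named `message` and are comma-separated (both assumptions), using a ruby filter to drop the even indexes:

```conf
filter {
  mutate {
    # "ts1,v1,ts2,v2" -> ["ts1", "v1", "ts2", "v2"]
    split => { "message" => "," }
  }
  ruby {
    # walk the array two at a time and keep only the values,
    # discarding each pair's timestamp (the even indexes)
    code => "
      parts = event.get('message')
      event.set('values', parts.each_slice(2).map { |_ts, v| v })
    "
  }
}
```

The `values` field name is made up here; rename it to whatever your mapping expects.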
It's already shaped like a CSV file. If you write that data to a .csv file with , as the separator, you'll have a valid CSV file, no? So you don't need the csv filter plugin, except if you want to name the columns.
What you want is to transform a string with a pattern into CSV, right? So your mutate clause should be sufficient, unless I'm missing something ^^'
I don't want to create a CSV file; I just want to parse the data as if it were CSV, because I have 2 repeated features.
If there is another way than csv, I'd be happy to try it.
But the output goes into Elasticsearch, and I need this to work for streaming events as they come into Logstash.
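For what it's worth, the output section is independent of however the parsing gets solved; a minimal Elasticsearch output sketch (the host and index name below are assumptions, not from the thread):

```conf
output {
  elasticsearch {
    hosts => ["localhost:9200"]            # assumed host
    index => "my-stream-%{+YYYY.MM.dd}"    # assumed daily index pattern
  }
}
```

Since Logstash processes events continuously, the same filter chain applies to streaming input without any extra configuration.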