I have a CSV file with multiple entries in the following format (the headers are not included in the actual data):
Time, FT Data (2048 entries), SNR, Frequency
430201865, 1000, 2000, 8500,... 4.50, 1266.255
280201865, 3500, 1400, 1750,... 12.75, 5548.127
It is a normal CSV file, except that the FT Data column itself contains 2048 comma-separated data points, which makes it difficult to parse.
Is there a way with Logstash to parse or split this so that it treats the first two columns and the last two columns as normal, but turns each value in between into its own document? I also need to apply a conversion to each of these data points (for example, multiply each value by 2).
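As a rough illustration of the per-line logic, here is a plain-Ruby sketch of what a Logstash ruby filter could do. The column layout and the multiply-by-2 conversion come from the description above; the field names are my own guesses, and four FT values stand in for the real 2048:

```ruby
# Sketch (plain Ruby) of the per-line logic a Logstash ruby filter
# could apply. Assumes: column 0 is time, columns 1..-3 are the FT
# data points, column -2 is SNR, column -1 is frequency. Four FT
# values stand in for the real 2048.
line = "430201865, 1000, 2000, 8500, 3000, 4.50, 1266.255"
cols = line.split(",").map(&:strip)

time      = cols.first.to_i
snr       = cols[-2].to_f
frequency = cols[-1].to_f
ft_data   = cols[1..-3].map { |v| v.to_f * 2 }  # apply the x2 conversion

# One document per FT value, carrying the shared fields along.
docs = ft_data.map do |ft|
  { "time" => time, "ft" => ft, "snr" => snr, "frequency" => frequency }
end

docs.each { |d| p d }
```

In an actual pipeline the same logic would live inside a ruby filter that sets an array field on the event, followed by a split filter on that field so each element becomes its own event.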
The expected output from the above in ES would be:
One final question: I've found out I need to do some additional calculations to work out a new field called frequency. This is determined by taking the first value in the ft array, performing a calculation on it, and then incrementing that value by another number 2048 times, once for each item in the array.
At the minute I've got it to a point where I now have two arrays, ft_data and frequencies. Logstash doesn't seem to let me split on both of these; is there a way to get each element of these arrays into its own event?
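The derived-frequency rule can be sketched in a few lines of Ruby. I'm assuming the pattern described later in the question: start from the original frequency value (1266.255) and cumulatively add a fixed step (1000 in the example) once per FT data point; a count of four stands in for the real 2048:

```ruby
# Sketch of the derived-frequency calculation. Assumes: base is the
# original frequency value, step is the fixed increment (1000 per
# the example later in the question), count stands in for 2048.
base_frequency = 1266.255
step           = 1000.0
count          = 4

# frequencies[i] = base + i * step, i.e. base, base+1000, base+2000, ...
frequencies = (0...count).map { |i| base_frequency + i * step }
```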
For example, given the previous data I gave, the new output would look like:
Here altered_frequency is just the original first value of the frequency field (1266.255) with 1000 cumulatively added to it. See the updated Ruby code below, which works, but I'm then not able to split on the two fields power and frequencies.
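One workaround for the two-array problem is to zip the parallel arrays into a single array of hashes inside the ruby filter, so that a single split filter can explode them together. A plain-Ruby sketch, assuming the arrays are the same length; the field name "samples" is my own invention, while "power" and "altered_frequency" follow the question:

```ruby
# Sketch (plain Ruby) of combining two parallel arrays into one
# field that a single Logstash split filter can explode. Short
# arrays stand in for the real 2048-element ones; "samples" is a
# hypothetical field name.
power       = [2000.0, 4000.0, 17000.0, 6000.0]
frequencies = [1266.255, 2266.255, 3266.255, 4266.255]

# Pair element i of each array into one hash; after splitting on
# "samples", each hash becomes its own event.
samples = power.zip(frequencies).map do |p, f|
  { "power" => p, "altered_frequency" => f }
end

samples.each { |s| p s }
```

Inside a real ruby filter this would be something like `event.set('samples', ...)` (the Logstash Event API exposes `event.get`/`event.set`), with the two source arrays removed afterwards, followed by `split { field => "samples" }`.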