Import double array data from a CSV file

Hello, I'm new to Kibana and I have two questions about visualizing double array data.
I have a CSV file containing time-series sensor data in which each row holds an array of double values.

timestamp | sensor_id | sensor_data
2019-10-17 10:00:00.030 | sensor_001 | [1.2310, 1.2230, 1.1334, ....1.4421]
2019-10-17 10:00:01.451 | sensor_001 | [1.2320, 1.1930, 1.1344, ....2.0003]
2019-10-17 10:00:02.003 | sensor_001 | [1.2220, 1.6543, 1.1356, ....1.8544]

Each timestamp has an array of approximately 250 double values.
I want to build a time-series scatter plot from this CSV file (comma-delimited) to show the distribution of the sensor data at each timestamp.
Is that possible? What is the most effective way to do this?

And if the size of the array is dynamic (different for each row), like this:
timestamp | sensor_id | sensor_data
2019-10-17 10:00:00.030 | sensor_001 | [1.2310] -> size : 1
2019-10-17 10:00:01.451 | sensor_001 | [1.2320, 1.1930, 1.1344, 2.0003] -> size : 4
2019-10-17 10:00:02.003 | sensor_001 | [1.2220, 1.6543, 1.8544] -> size : 3

Can we still visualize this data?

Yes, you can visualize this data in Kibana. Are you looking for help getting the data into Elasticsearch or visualizing it after it's in Elasticsearch?

I'm mainly looking for help getting the data into Elasticsearch via logstash.conf, but I'm also interested in importing the data with the 'Visualize data from a log file' option in the Machine Learning menu.

Back to the first option (using logstash.conf):

I've got CSV data like this:

UNIX_TIME,TENANT_ID,TAG_ID,TAG_VALUE
1560506401,T00,GTW01,[103.1542,103.8374,103.8374,103.8374,103.6148,103.6148,103.6148]
1560506402,T00,GTW01,[103.1542,103.8374,103.8374,103.8374,103.6148,103.6148,103.6149]
1560506403,T00,GTW01,[103.1542,103.8374,103.8374,103.8374,103.6148,103.6148,103.6150]
1560506404,T00,GTW01,[103.1542,103.8374,103.8374,103.8374,103.6148,103.6148,103.6151]
1560506405,T00,GTW01,[103.1542,103.8374,103.8374,103.8374,103.6148,103.6148,103.6152]

I've used a logstash.conf like this:

input {
  file {
    path => ["C:/Users/myfolder/Desktop/array_test_01.csv"]
    start_position => "beginning"
    sincedb_path => "nul"
    codec => json {
      charset => "ISO-8859-1"
    }
  }
}

filter {
  csv {
    autodetect_column_names => true
  }
  mutate {
    strip => ["TAG_VALUE"]
    gsub => ["TAG_VALUE", "\]|\[", ""]
    split => { "TAG_VALUE" => "," }
    convert => { "TAG_VALUE" => "float" }
  }
}

output {
  elasticsearch {
    hosts => "http://localhost:9200"
    index => "array_test"
  }
  stdout {}
}

What I want to do is:

  1. remove the square brackets at the start and end of TAG_VALUE,
  2. remove the line break at the end of each row,
  3. split the data by comma (,), and
  4. finally convert the values to float (see the sketch after this list).

That's all.
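
In other words, assuming TAG_VALUE already holds the whole bracketed list as a single string, I picture the four steps as separate mutate blocks, roughly like this (just a sketch; as far as I understand, a single mutate block applies its operations in its own fixed internal order, not in the order they are written, so separate blocks seem safer):

filter {
  # steps 1 + 2: remove the square brackets and the trailing carriage return
  mutate {
    gsub => ["TAG_VALUE", "\[|\]|\r", ""]
  }
  # step 3: split the remaining string into an array on commas
  mutate {
    split => { "TAG_VALUE" => "," }
  }
  # step 4: convert each array element to a float
  mutate {
    convert => { "TAG_VALUE" => "float" }
  }
}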

But when I execute Logstash, I get an error like this:
][logstash.filters.csv ][main] Error parsing csv {:field=>"message", :source=>"1560509901,T00,GTW01,[103.1542,103.8374,103.8374,103.8374,103.6148,103.6148,103.8555]\r", :exception=>#<RuntimeError: Invalid FieldReference: [103.1542>}

It seems like the square brackets were not removed before the csv filter parsed the line, but I'm not sure.
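
One workaround I'm considering (just a sketch, not tested) is to skip the csv filter for this file and pull the fields out of the raw message with the dissect filter instead, so the commas inside the brackets never reach a CSV parser, and then apply the same mutate steps as above:

filter {
  # the last field in the pattern keeps the rest of the line, brackets included
  dissect {
    mapping => { "message" => "%{UNIX_TIME},%{TENANT_ID},%{TAG_ID},%{TAG_VALUE}" }
  }
  # drop the CSV header line
  if [UNIX_TIME] == "UNIX_TIME" {
    drop {}
  }
  mutate {
    gsub => ["TAG_VALUE", "\[|\]|\r", ""]   # remove brackets and trailing carriage return
  }
  mutate {
    split => { "TAG_VALUE" => "," }         # split into an array on commas
  }
  mutate {
    convert => { "TAG_VALUE" => "float" }   # convert each element to a float
  }
}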

Can you give me any advice?
Also, can I import this data with the 'Visualize data from a log file' feature in the Machine Learning menu as well?
Thanks in advance.

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.