Hi All,
I'm a bit new to ELK and have been researching this for a while; most forums and links I found indicate this is not possible. Happy to be pointed at an existing thread (open or closed) for me to review as well.
Scenario:
Running Kibana 7.1.1 on RHEL7
I have a Python script that queries several databases, does counts, and then logs the results into a CSV.
Example CSV output (columns: timestamp, region, datatype, count of rows):
2020/03/01 10:00,US,clients,10
2020/03/01 10:00,US,vendors,12
2020/03/01 10:00,US,warehouses,3
2020/03/01 10:00,CA,vendors,10
2020/03/01 10:00,CA,clients,10
2020/03/01 10:00,CA,warehouses,10
2020/03/01 10:05,US,clients,10
2020/03/01 10:05,US,vendors,12
2020/03/01 10:05,US,warehouses,3
2020/03/01 10:05,CA,vendors,10
2020/03/01 10:05,CA,clients,10
2020/03/01 10:05,CA,warehouses,10
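For context, here's a simplified sketch of what the script does (db_row_count() is a hypothetical stand-in for the real per-database count queries):

```python
import csv
from datetime import datetime

REGIONS = ["US", "CA"]
DATATYPES = ["clients", "vendors", "warehouses"]

def db_row_count(region, datatype):
    # Hypothetical stand-in: the real script runs a COUNT(*)-style
    # query against the relevant database for this region/datatype.
    return 0

def log_counts(path="counts.csv"):
    # One row per (region, datatype) with the current timestamp,
    # appended to the CSV every run (the script runs every 5 minutes).
    ts = datetime.now().strftime("%Y/%m/%d %H:%M")
    with open(path, "a", newline="") as f:
        writer = csv.writer(f)
        for region in REGIONS:
            for datatype in DATATYPES:
                writer.writerow([ts, region, datatype,
                                 db_row_count(region, datatype)])
```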
What I'm trying to achieve is to let Kibana users plot the data that was already aggregated - the count of each datatype (Y-axis) over time (X-axis).
I can see my data correctly when querying via the Kibana UI; however, when I try to create the metric on the Kibana dashboard, it only offers the obvious aggregations, but I just want to plot the actual values as stored.
Any suggestions on how to do this? Should I change how I input the data?
I don't want to log all the raw data - we are talking billions of rows, and by the end of the day trillions, across all databases - which is why I aggregate during the data fetch as opposed to having Logstash fetch everything and letting Kibana do the aggregation. My hardware is 2x machines with 768 GB memory, 72 cores, and 2 TB SSDs, so I do have compute power, but I don't want to waste disk.
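In case it helps, one change of input I've been considering is to skip the CSV and index each pre-aggregated row straight into Elasticsearch from the script, so the count lands as a numeric field. A rough sketch using the official elasticsearch Python client (the host, index name, and field names here are just placeholders, not my actual setup):

```python
from datetime import datetime
from elasticsearch import Elasticsearch
from elasticsearch.helpers import bulk

es = Elasticsearch(["http://localhost:9200"])  # placeholder host

def index_counts(rows, index="db-counts"):
    # rows: iterable of (timestamp, region, datatype, count) tuples,
    # e.g. ("2020/03/01 10:00", "US", "clients", 10)
    actions = (
        {
            "_index": index,
            "_source": {
                "@timestamp": datetime.strptime(ts, "%Y/%m/%d %H:%M"),
                "region": region,
                "datatype": datatype,
                "count": int(count),
            },
        }
        for ts, region, datatype, count in rows
    )
    bulk(es, actions)
```

My understanding is that with one document per (timestamp, region, datatype), a date histogram on @timestamp split by datatype, with Max (or Sum) on count, would just reproduce the stored values - but I'm not sure that's the right approach, hence the question.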