Hi there, my team and I are having some trouble visualizing test run data. I think we just haven't found the right way to do it yet.
Here is the setup:
The system we test emits one JSON object per second, containing a timestamp and various run data.
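For illustration, each output line looks roughly like this (the field names here are made up; the real point is one timestamp plus several values per object):

```
{"timestamp": "2021-06-01T12:00:00Z", "temperature": 42.1, "pressure": 1013.2}
```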
Our first attempt was to upload every output individually, i.e. one request per line, which led to several thousand (small) documents per run.
These were easy to visualize: the timestamp in each object serves as the horizontal axis and everything else as the vertical axis. Since each data point exists exactly once, it did not even matter whether we picked median, last value, or any other aggregation.
This approach, however, created a lot of additional logs, and uploading a single line at a time, thousands of times over, did not really make sense.
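For reference, a minimal sketch of this first approach, assuming a local Elasticsearch 7+ instance; the index name `testruns` and the file `run.jsonl` are placeholders:

```bash
# One indexing request per output line (illustrative names).
while IFS= read -r line; do
  curl -s -X POST "http://localhost:9200/testruns/_doc" \
       -H 'Content-Type: application/json' \
       -d "$line"
done < run.jsonl
```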
The second approach was then to wrap all objects into a single array, so that we upload one JSON object: {"data": [array of all previous objects]}.
With that, however, the object structure is flattened and the relation between each timestamp and its test value(s) is lost. We tried to re-establish that relation with histograms, but no workflow seems to make sense.
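For comparison, the second approach boils down to a single upload roughly like this (same placeholder names as above; `jq -s` slurps all objects into one array before wrapping them):

```bash
# Wrap all objects into one array field and index a single document.
jq -s '{data: .}' run.jsonl \
  | curl -s -X POST "http://localhost:9200/testruns/_doc" \
         -H 'Content-Type: application/json' \
         --data-binary @-
```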
Are we overlooking "the easy way" to do this?
Tl;dr: We are looking for the most "elasticsearchy" way to visualize several data fields over time; they are always grouped together with their respective timestamps. The approaches we have tried are either unwieldy to upload with cURL or impossible to visualize, and either way we currently do a lot of preprocessing with jq and bash.