Visualization for Automation Test Reporting

Hello,

My team and I are very new to Kibana and Elasticsearch, but we have started using them for error reporting as well as automated test reporting.

What I'm having trouble with is getting a chart that accurately represents the last known status for our test cases.

I have data being sent in from the automation software, which gets serialized into strings (regardless of its original data type). The important values are the names of the scenarios (Scenario A, Scenario B, Scenario C, etc.) and their last reported status (pass, fail, error, and so on). I only want each scenario to be represented once in my chart, and I want the scenarios sorted into buckets by their last status.

The closest I've gotten is something like this:


This is supposed to represent 80 scenarios, but the data counts 81 because one scenario errored the first time it was run and then failed (correctly) the second time (and every time after).

How do I get this graph to only recognize the last record for each scenario?

Current Configuration:

  • Metric:
    • Aggregation = Unique Count
    • Field = scenario name "keyword"
  • Bucket:
    • Aggregation = Terms
    • Field = scenario status "keyword"
    • Order By = metric: scenario name
    • Order = Descending
    • Size = 4 (number of possible statuses)
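
For reference, this configuration corresponds roughly to the following Elasticsearch request body (the field names `scenario_status` and `scenario_name` are assumptions for illustration):

```json
{
  "size": 0,
  "aggs": {
    "by_status": {
      "terms": {
        "field": "scenario_status.keyword",
        "size": 4,
        "order": { "unique_scenarios": "desc" }
      },
      "aggs": {
        "unique_scenarios": {
          "cardinality": { "field": "scenario_name.keyword" }
        }
      }
    }
  }
}
```

This counts every scenario name that has ever reported a given status, which is why a scenario that changed status is counted in two buckets.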

EDIT:
Looks like this post is trying to do exactly the same thing.

EDIT 2:
I think what I want is to use the "Top Hit" aggregation in the metric, but regardless of which field I target, I don't get any options in the "Aggregate with" drop-down, which is a required field. This seems like a bug to me but if there are any other suggestions for this, that would be great as well.
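
In raw Elasticsearch DSL, I believe the "last status per scenario" query would be a `top_hits` sub-aggregation sorted by time under a terms aggregation on the scenario name (the field names and the `@timestamp` field are assumptions here):

```json
{
  "size": 0,
  "aggs": {
    "by_scenario": {
      "terms": { "field": "scenario_name.keyword", "size": 100 },
      "aggs": {
        "last_run": {
          "top_hits": {
            "size": 1,
            "sort": [ { "@timestamp": { "order": "desc" } } ],
            "_source": [ "scenario_status" ]
          }
        }
      }
    }
  }
}
```

The problem is getting a Kibana visualization to bucket on that result.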

Hi,

If you want something like this (one for each scenario):

Then use the following configuration:

Metric:
    Aggregation = Count
Bucket:
    Split Chart
        Aggregation = Terms
        Field = scenario name "keyword"
        Order By = metric: scenario name
        Size = 80 (number of scenarios)
    Split Slices
        Aggregation = Terms
        Field = scenario status "keyword"
        Order By = metric: scenario status
        Size = 4 (number of statuses)

Best Regards

Hi @CristianoFerreira,

Unfortunately this is not what I want...
I want a single chart where each scenario is represented once; there could be any number of scenarios, but I only care about the last time each scenario was run.

Maybe another example might help?
If Scenario A was run three times and errored the first time, failed the second time, and passed the third time, the most up-to-date graph would count Scenario A only once, as "passed". I should have no indication that Scenario A ever errored or failed, since that information no longer matters.

Hi,

Ok, so if the documents are ingested at regular intervals (say, every 5 minutes), then you could use a Filter Aggregation with a time range from now-5m. Another way is to add a time filter to the Dashboard itself, similar to this.
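
A minimal sketch of such a filter, assuming your documents carry a `@timestamp` field:

```json
{
  "query": {
    "range": {
      "@timestamp": { "gte": "now-5m" }
    }
  }
}
```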

If the documents are ingested at irregular intervals, then I would suggest building two indices: one to show in Kibana (which would have only one document per scenario) and another to store all the data.

When a document is ingested, it would be added to the "store the data" index and, at the same time, would update the corresponding document in the "show in Kibana" index.

With this, each scenario would have only one document, containing the most recent event, in the "show in Kibana" index.
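
A minimal sketch of that dual-index write, as plain Python building the two Elasticsearch requests per event (index names, field names, and the `_id` scheme are assumptions for illustration):

```python
def build_requests(event, store_index="test-results-store",
                   latest_index="test-results-latest"):
    """Return the two (method, path, body) Elasticsearch requests for one event."""
    # Append-only history: let Elasticsearch auto-generate the _id.
    store = ("POST", f"/{store_index}/_doc", event)
    # Deterministic _id derived from the scenario name: re-indexing the
    # same scenario overwrites the previous document in the latest index.
    doc_id = event["scenario"].replace(" ", "_")
    latest = ("PUT", f"/{latest_index}/_doc/{doc_id}", event)
    return store, latest

# Two runs of the same scenario map to the same _id in the latest index,
# so the second write replaces the first.
run1 = {"scenario": "Scenario A", "status": "error"}
run2 = {"scenario": "Scenario A", "status": "pass"}
_, latest1 = build_requests(run1)
_, latest2 = build_requests(run2)
```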

Best Regards

@CristianoFerreira - How would I go about overwriting the documents? As far as I'm aware, any data that comes into an index just gets added to it.

Hi,

Every document has a document ID field (_id). A document is overwritten if you index data while specifying an existing document_id.

Take a look at the API: https://www.elastic.co/guide/en/elasticsearch/reference/current/docs-index_.html
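
For example (the index name and fields here are made up), repeating a request with the same explicit _id replaces the old document instead of adding a second one:

```
PUT test-results-latest/_doc/scenario-a
{ "scenario": "Scenario A", "status": "error" }

PUT test-results-latest/_doc/scenario-a
{ "scenario": "Scenario A", "status": "pass" }
```

After the second request, the index holds a single document for scenario-a, with status "pass".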

Best Regards

@CristianoFerreira

Would you happen to know how to do this through Serilog? I looked through the documentation you provided and didn't see a way to set the _id field dynamically based on the document content.

Or even better, if you have an example of how to set up a mapping through Serilog, that might be preferable. We create new indexes daily so we can truncate data when needed, so being able to set up these options dynamically is also important.

Hi,

I don't know about Serilog, but as the documentation says, you can set the _id field as in the example PUT twitter/_doc/123, where the _id is 123.

If you use Logstash you can also set the document_id field, as described here: https://www.elastic.co/guide/en/logstash/current/plugins-outputs-elasticsearch.html#plugins-outputs-elasticsearch-document_id
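
A sketch of that output configuration, assuming an event field named scenario_name:

```
output {
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "test-results-latest"
    # One document per scenario; each new run overwrites the previous one.
    document_id => "%{scenario_name}"
  }
}
```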

Best Regards

Thanks @CristianoFerreira, but I will probably have to wait for another solution since that would mean re-working code that already does most of what we want.

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.