Using ES + Kibana to centralize and compare test runs

Hello,
I'm experimenting with the ELK Stack in our company to centralize and compare results from test runs, and I'm not sure if it is the right tool for us. I'll describe my vision, and it would help me to hear whether this is realistic with the ELK stack.
I want to import daily results from Snyk analysis, Greenbone penetration testing, and other recurring tests. In Kibana, I want to:

  • Easily compare the results of any test runs
  • Have an updatable list of ignored results (false positives...)
  • Get notified when there are new errors/vulnerabilities
  • Have a visualisation of results for each day
  • Have clickable links to external HTML reports

Everything preferably in a dashboard, so it is possible to work with the results without much technical knowledge.
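For the "new errors" and "ignore list" items, the core logic is just a set difference between two runs. A minimal sketch in plain Python (the finding keys and the ignore list below are made-up examples, not actual Snyk or Greenbone output; you would derive a stable key per finding, e.g. vulnerability ID plus affected component):

```python
# Compare two test runs: each finding is identified by a stable key
# such as "<vulnerability id>:<affected component>".

def new_findings(today, yesterday, ignored):
    """Return findings seen today but not yesterday, minus ignored ones."""
    return (set(today) - set(yesterday)) - set(ignored)

yesterday = {"CVE-2023-1111:libfoo", "CVE-2023-2222:libbar"}
today     = {"CVE-2023-1111:libfoo", "CVE-2024-3333:libbaz"}
ignored   = {"CVE-2024-3333:libbaz"}   # known false positive

print(new_findings(today, yesterday, ignored))  # set() -> nothing new to alert on
print(new_findings(today, yesterday, set()))    # {'CVE-2024-3333:libbaz'}
```

Whether you run this diff before ingestion or express it as a Kibana/ES query over two time ranges is a design choice; either way the data model needs a stable per-finding key for it to work.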


So far, I have managed to do this (the notes are in Czech, but it is basically a chart of vulnerabilities per day plus boxes showing today's new vulnerabilities). However, I'm starting to worry that this is not the right tool for the job: that the ELK stack is built for continuous data streams, and that comparing results of separate runs will always be awkward and inelegant.
Am I right? Are there any references for this use case? What approach or other tool should I use?
I know this is quite a general question, but thanks for any help.

@Dzmitry / @Tre_Seymour could this user please get some input?

Thanks,
Bhavya

Hi @eronidrian ,

Internally, we use the Elastic Stack to ingest test results, monitor them over time, send alerts via Slack notifications on failures, and more.

What you can do with your data depends on a few things:

  • what data your test runner reports
  • how you index the data

I will just give you a small example:

Assuming for each test run you have:

  • build id
  • test suite start/end time
  • test name
  • test start/end time
  • test status (pass/fail/skipped)
  • error logs

You can define two indices:

Suite level

  "_source": {
    "@timestamp": "2024-04-10T08:50:28.151Z",
    "buildId": "f253d313-55ea-4388-8245-aacc72d1afba",
    "groupType": "Functional Tests",
    "startTime": "2024-04-10T06:46:27.795Z",
    "endTime": "2024-04-10T06:52:14.095Z",
    "file": "my/test/suite/path",
    "suiteName": "Feature X should work for user with Role Y",
    "result": "fail"
  }

Test level

  "_source": {
    "@timestamp": "2024-04-10T08:50:28.151Z",
    "buildId": "f253d313-55ea-4388-8245-aacc72d1afba",
    "suiteName": "Feature X should work for user with Role Y",
    "testName": "Test 1",
    "startTime": "2024-04-10T06:46:27.795Z",
    "endTime": "2024-04-10T06:47:19.011Z",
    "result": "fail",
    "error": "Button Create should be disabled"
  }

Then you can build a dashboard to track suites over time and group failures by test name or error; just make sure to ingest the data in a way that lets you search it properly.
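For the ingestion step, one common option is the Elasticsearch _bulk API, which takes an NDJSON body of alternating action and document lines. A minimal sketch in Python using only the standard library to assemble that body (the index name test-suites is a made-up example; how you send the payload, e.g. via the official elasticsearch client or any HTTP library, is up to you):

```python
import json

# Build an NDJSON body for the Elasticsearch _bulk API:
# one action line followed by one document line per record.
def bulk_body(index, docs):
    lines = []
    for doc in docs:
        lines.append(json.dumps({"index": {"_index": index}}))
        lines.append(json.dumps(doc))
    return "\n".join(lines) + "\n"  # _bulk requires a trailing newline

suite_docs = [{
    "@timestamp": "2024-04-10T08:50:28.151Z",
    "buildId": "f253d313-55ea-4388-8245-aacc72d1afba",
    "groupType": "Functional Tests",
    "suiteName": "Feature X should work for user with Role Y",
    "result": "fail",
}]

payload = bulk_body("test-suites", suite_docs)  # hypothetical index name
print(payload)
```

The same helper works for the test-level index; only the index name and the document fields change.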

Best regards, Dima
