Why Might a Data Table Visualization Limit Itself to Only Seven Rows?

Hi Kibana Sages,

I have what is probably a difficult forum question… but I’ve gotta ask all the same. I have an ELK pipeline for monitoring network traffic. My Kibana (v7.4.0) is used to display data from my Elasticsearch (also v7.4.0). Specifically, I have a Kibana Data Table visualization that is supposed to display four columns:

  • Metrics: Sum( AdjBytes )
  • Buckets: HostA, HostB, App
  • And in the Data Table Editing screen, I’ve set Options --> Per Page = 10

Ex:

HostA          HostB          App     AdjBytes
10.10.10.10    20.20.20.20    HTTP    1000
10.10.10.11    20.20.20.20    HTTP    2000
10.10.10.12    20.20.20.21    FTP     5000

Pretty simple. ES and Kibana are working just fine, the data table is displaying accurate information, and everything looks great…

…except I’ve noticed that the Kibana data table displays at most seven rows of data.

I’m in development, and I can send as much or as little custom traffic as I wish. I’ve noticed the following:

  • If I send six or fewer kinds of traffic, I get exactly that many rows in my visualization
  • If I send seven kinds of traffic, I get exactly seven rows in my visualization
  • If I send eight or more kinds of traffic, I still get exactly seven rows in my visualization (the rows with the lowest “AdjBytes” values are dropped from the visualization)

I first thought this must be a Kibana problem. But when I ran an Inspect on a Kibana request and its ES response, it looks like Kibana is saying, “Hey Elasticsearch, send me every value of HostA/HostB/App/AdjBytes you have from Time X to Time Y.” And the ES response only has seven rows’ worth of data.

The full Kibana request is below, FWIW. I don’t know how to read it, but I don’t see anything that suggests “limit to only 7 rows.” And, like I said, ES sends back data for only seven rows, every time. I’ve hand-verified this in four separate tests.

So I’ve got to ask… Is there something I’m missing? Does the Data Table throttle to seven rows by default? If so, how do I compensate?

FULL DISCLOSURE: I’ve also posted a version of this question in the Elasticsearch forum, here.

{
  "aggs": {
    "2": {
      "terms": {
        "field": "HostA",
        "order": {
          "1": "desc"
        },
        "size": 5
      },
      "aggs": {
        "1": {
          "sum": {
            "field": "AdjBytes"
          }
        },
        "3": {
          "terms": {
            "field": "HostB",
            "order": {
              "1": "desc"
            },
            "size": 5
          },
          "aggs": {
            "1": {
              "sum": {
                "field": "AdjBytes"
              }
            },
            "4": {
              "terms": {
                "field": "Application.keyword",
                "order": {
                  "1": "desc"
                },
                "size": 5
              },
              "aggs": {
                "1": {
                  "sum": {
                    "field": "AdjBytes"
                  }
                }
              }
            }
          }
        }
      }
    }
  },
  "size": 0,
  "_source": {
    "excludes": []
  },
  "stored_fields": [
    "*"
  ],
  "script_fields": {
    "PDH_Sum_Flow": {
      "script": {
        "source": "doc['Sample.SamplingRate'].value * doc['Packet.L3.TotalLen'].value",
        "lang": "painless"
      }
    }
  },
  "docvalue_fields": [
    {
      "field": "@timestamp",
      "format": "date_time"
    }
  ],
  "query": {
    "bool": {
      "must": [],
      "filter": [
        {
          "match_all": {}
        },
        {
          "match_all": {}
        },
        {
          "range": {
            "@timestamp": {
              "format": "strict_date_optional_time",
              "gte": "2019-11-21T19:41:30.773Z",
              "lte": "2019-11-21T19:44:30.773Z"
            }
          }
        }
      ],
      "should": [],
      "must_not": []
    }
  }
}
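
For anyone who wants to reproduce this outside of Kibana: here is a stripped-down sketch of the same request that can be pasted into the Dev Tools Console. The index name (“my-netflow-index”) and the aggregation names (“hosts”, “total_adj_bytes”) are placeholders for whatever your setup actually uses; I’ve kept only the outer terms bucket and its sum:

GET my-netflow-index/_search
{
  "size": 0,
  "aggs": {
    "hosts": {
      "terms": {
        "field": "HostA",
        "size": 5
      },
      "aggs": {
        "total_adj_bytes": {
          "sum": {
            "field": "AdjBytes"
          }
        }
      }
    }
  }
}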

Hey @redapplesonly, each "split rows" bucket you specify has a Size parameter, and based on the ES query which is being executed, yours are all limited to 5. Each terms aggregation only returns its top 5 buckets, and the table rows are the combinations of nested buckets that actually come back, which is why the row count can land on a number like 7 rather than exactly 5. (The Per Page option only paginates whatever rows are returned; it doesn't change how many buckets ES sends back.) You can see the Size setting here:

[screenshot: the Size field in the bucket options of the Data Table editor]
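
If you raise the Size on each of those buckets, Kibana regenerates the request with a larger size on each terms aggregation. As a rough sketch (20 is just an arbitrary example value), the outer bucket would become:

"2": {
  "terms": {
    "field": "HostA",
    "order": {
      "1": "desc"
    },
    "size": 20
  },
  ...
}

with the nested HostB and Application.keyword terms aggregations getting the same treatment.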

Hi Brandon,

Yes! You nailed it exactly! I'm not sure how you knew, but I don't care. Expanding the Size parameter on each of the buckets fixed my problem. Thank you!!!
