Data too large, data for [<agg [1]>] would be larger than limit of [311387750/296.9mb]

I am using a data engine to send data about web traffic to my ELK stack (version 5.2). When I try to visualize the data in my dashboard I sometimes get the following error:

Error Visualize: [request] Data too large, data for [agg [1]] would be larger than limit of [311387750/296.9mb]

Error: Request to Elasticsearch failed: {"error":{"root_cause":[{"type":"circuit_breaking_exception","reason":"[request] Data too large, data for [<agg [1]>] would be larger than limit of [311387750/296.9mb]","bytes_wanted":311392312,"bytes_limit":311387750}],"type":"search_phase_execution_exception","reason":"all shards failed","phase":"query","grouped":true,"failed_shards":[{"shard":0,"index":"logstash-2017.06.22","node":"iRl6kRKrTSGWqxFYI9i0rQ","reason":{"type":"circuit_breaking_exception","reason":"[request] Data too large, data for [<agg [1]>] would be larger than limit of [311387750/296.9mb]","bytes_wanted":311392312,"bytes_limit":311387750}}],"caused_by":{"type":"circuit_breaking_exception","reason":"[request] Data too large, data for [<agg [1]>] would be larger than limit of [311387750/296.9mb]","bytes_wanted":311392312,"bytes_limit":311387750}},"status":503}
    at http://192.168.1.91:5601/bundles/kibana.bundle.js?v=14723:27:18931
    at Function.Promise.try (http://192.168.1.91:5601/bundles/commons.bundle.js?v=14723:75:22354)
    at http://192.168.1.91:5601/bundles/commons.bundle.js?v=14723:75:21724
    at Array.map (native)
    at Function.Promise.map (http://192.168.1.91:5601/bundles/commons.bundle.js?v=14723:75:21679)
    at callResponseHandlers (http://192.168.1.91:5601/bundles/kibana.bundle.js?v=14723:27:18543)
    at http://192.168.1.91:5601/bundles/kibana.bundle.js?v=14723:27:7044
    at processQueue (http://192.168.1.91:5601/bundles/commons.bundle.js?v=14723:38:23621)
    at http://192.168.1.91:5601/bundles/commons.bundle.js?v=14723:38:23888
    at Scope.$eval (http://192.168.1.91:5601/bundles/commons.bundle.js?v=14723:39:4619)

This error also happens with [agg [6]] and [agg [9]].

I think this error is caused by a data table I am using in my dashboard. Here is the elasticsearch query I am using in the data table:

{
  "query": {
    "bool": {
      "must": [
        {
          "query_string": {
            "analyze_wildcard": true,
            "query": "*"
          }
        },
        {
          "query_string": {
            "analyze_wildcard": true,
            "query": "*"
          }
        },
        {
          "range": {
            "timestamp": {
              "gte": 1498154367415,
              "lte": 1498155267415,
              "format": "epoch_millis"
            }
          }
        }
      ],
      "must_not": []
    }
  },
  "size": 0,
  "_source": {
    "excludes": []
  },
  "aggs": {
    "2": {
      "date_histogram": {
        "field": "timestamp",
        "interval": "30s",
        "time_zone": "America/Los_Angeles",
        "min_doc_count": 1
      },
      "aggs": {
        "3": {
          "terms": {
            "field": "appid_name.keyword",
            "size": 5,
            "order": {
              "1": "desc"
            }
          },
          "aggs": {
            "1": {
              "sum": {
                "script": {
                  "inline": "doc['fwdbytes'].value+doc['bwdbytes'].value",
                  "lang": "painless"
                }
              }
            },
            "5": {
              "terms": {
                "field": "srcip",
                "size": 5,
                "order": {
                  "1": "desc"
                }
              },
              "aggs": {
                "1": {
                  "sum": {
                    "script": {
                      "inline": "doc['fwdbytes'].value+doc['bwdbytes'].value",
                      "lang": "painless"
                    }
                  }
                },
                "6": {
                  "terms": {
                    "field": "dstip",
                    "size": 5,
                    "order": {
                      "1": "desc"
                    }
                  },
                  "aggs": {
                    "1": {
                      "sum": {
                        "script": {
                          "inline": "doc['fwdbytes'].value+doc['bwdbytes'].value",
                          "lang": "painless"
                        }
                      }
                    },
                    "7": {
                      "terms": {
                        "field": "srcport",
                        "size": 5,
                        "order": {
                          "1": "desc"
                        }
                      },
                      "aggs": {
                        "1": {
                          "sum": {
                            "script": {
                              "inline": "doc['fwdbytes'].value+doc['bwdbytes'].value",
                              "lang": "painless"
                            }
                          }
                        },
                        "8": {
                          "terms": {
                            "field": "dstport",
                            "size": 5,
                            "order": {
                              "1": "desc"
                            }
                          },
                          "aggs": {
                            "1": {
                              "sum": {
                                "script": {
                                  "inline": "doc['fwdbytes'].value+doc['bwdbytes'].value",
                                  "lang": "painless"
                                }
                              }
                            },
                            "9": {
                              "terms": {
                                "field": "proto",
                                "size": 5,
                                "order": {
                                  "1": "desc"
                                }
                              },
                              "aggs": {
                                "1": {
                                  "sum": {
                                    "script": {
                                      "inline": "doc['fwdbytes'].value+doc['bwdbytes'].value",
                                      "lang": "painless"
                                    }
                                  }
                                }
                              }
                            }
                          }
                        }
                      }
                    }
                  }
                }
              }
            }
          }
        }
      }
    }
  }
}

I also sometimes get this warning: "Courier Fetch: 4 of 5 shards failed." (sometimes 2 or 3 shards instead). Are these two warnings related? How can I fix this?

Elasticsearch has a built-in mechanism to abort a request if it needs too much memory: the circuit breaker you are seeing in the exception message above. That breaker kicked in, and your request was aborted.
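As a stopgap (not something discussed in this thread), the request breaker's threshold can be raised at runtime via the cluster settings API. This only buys headroom and does not fix the underlying memory use; giving the node more JVM heap, or trimming the aggregation, is the real fix. A sketch, assuming you are on 5.x where `indices.breaker.request.limit` defaults to 60% of heap:

```
PUT _cluster/settings
{
  "transient": {
    "indices.breaker.request.limit": "70%"
  }
}
```

The `transient` setting is lost on a full cluster restart; use `persistent` if you want it to survive one.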

With all the scripts in your aggregation, those could be a culprit, as could the number of buckets that have to be created while the request is processed. You could try to reduce your aggregations (or you could index your data with the number of bytes already summed up, which would also be much faster).
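To put a rough number on the bucket count (my own back-of-the-envelope arithmetic, not from the thread): the query's 15-minute time range with a 30s `date_histogram` interval yields 30 time buckets, and each of the six nested `terms` aggregations (appid_name, srcip, dstip, srcport, dstport, proto) multiplies that by up to its `size` of 5.

```python
# Worst-case leaf-bucket count for the nested aggregation in the query above.

# 15-minute range (from the timestamps in the query) at a 30s interval.
window_ms = 1498155267415 - 1498154367415   # 900,000 ms = 15 minutes
histogram_buckets = window_ms // 30_000      # 30 date_histogram buckets

# Six nested terms aggregations, each keeping up to 5 buckets.
terms_levels = 6
fanout = 5

leaf_buckets = histogram_buckets * fanout ** terms_levels
print(leaf_buckets)  # 468750 leaf buckets, each carrying a scripted sum
```

Note this is only the final, reduced result: during the query phase each shard typically collects more than `size` candidate terms per level (the `shard_size` default is larger than `size`), so per-shard memory use can be higher still.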

--Alex

Hi Alex,

Thanks for the response. I don't think I can reduce the number of buckets in my aggregations, so I will try to index the totbytes field instead. Can you please explain why indexing the summed field in the documents will prevent the error?

Thanks,
xwang1

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.