A memory-intensive query crashes an Elasticsearch node

Hi.

I have an Elasticsearch (v2.3.2) setup with 1 server, 1 node, 1 shard, and 1 index, running on very restricted hardware resources (ES heap: 1.5 GB).
The index stores about 100 million docs.

elasticsearch.yml contains the following indices settings:

indices.fielddata.cache.size: 25%
indices.breaker.fielddata.limit: 40%
indices.breaker.request.limit: 30%
indices.breaker.total.limit: 60%

The crashing query is:

{
  "aggs": {
    "filtered_documents": {
      "filter": {
        "bool": {
          "must": [
            {
              "query": {
                "query_string": {
                  "query": "*"
                }
              }
            }
          ]
        }
      },
      "aggs": {
        "tips": {
          "terms": {
            "field": <custom field>,
            "size": 1000,
            "collect_mode" : "breadth_first"
          },
          "aggs": {
            "last_record": {
              "top_hits": {
                "sort": [
                  {
                    "<timestamp field>": {
                      "order": "desc"
                    }
                  }
                ],
                "size": 1,
                "_source": <id field>
              }
            }
          }
        }
      }
    }
  },
  "size": 0
}

If "filter" aggregation result contains tens of millions records and there are many millions of buckets after "terms" aggregation, then for some types of "custom field" i can see exceptions in elasticsearch log relating to Circuit Breaker work.. and it is ok. But sometimes there are "No heap memory" errors there that, seems, are not from CB. In such cases elasticsearch service occasionally stops to respond on any query.

Is there a way to avoid this situation under these conditions (hardware, logical structure)? I mean some memory-consumption optimization of the query, query reorganization, configuration changes, etc.
Or do I have to limit the data selection?
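For example, one reorganization I have been considering (assuming the "*" query really does match every document, so the query_string parsing and the outer "filter" layer are both redundant) is to run the terms aggregation at the top level, with the same placeholder field names as above:

```json
{
  "size": 0,
  "aggs": {
    "tips": {
      "terms": {
        "field": "<custom field>",
        "size": 1000,
        "collect_mode": "breadth_first"
      },
      "aggs": {
        "last_record": {
          "top_hits": {
            "sort": [
              { "<timestamp field>": { "order": "desc" } }
            ],
            "size": 1,
            "_source": "<id field>"
          }
        }
      }
    }
  }
}
```

I am not sure whether removing the wrapping layers meaningfully changes memory use, or whether the heap pressure comes entirely from the millions of terms buckets and the per-bucket top_hits.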

Thank you in advance.
