Finding Heap Memory Circuit Breaker hard to predict

Hi folks,

I'm running into a lot of heap circuit breaker trips based on fielddata size. I've been reading through advice on how to improve performance, but I still don't have a good mental model for predicting when I'm going to have trouble.

Some numbers: I've got ~220M docs in Elasticsearch right now, and I'm adding about 14M a day. They're primarily Apache access logs. The nodes have a heap limit of ~10GB for fielddata.

I add a field called 'site' when I create the doc; it's hard-coded to the site that created the log entry. There's a very limited number of unique values (<50, let's say).

Here's what the _mapping looks like:

            "type" : "string",
            "norms" : {
              "enabled" : false
            },
            "fields" : {
              "raw" : {
                "type" : "string",
                "index" : "not_analyzed",
                "ignore_above" : 256
              }
            }
          }

So here's the thing: if I use Kibana 4 to visualize the top 5 sites with a "terms" aggregation on site.raw over the last 4 hours, I trip the circuit breaker on site.raw fielddata. (I get one warning per shard for the more recent shards.)

If I set the time window to a four-hour period ten days ago, I still get warnings that the more recent shards have failed, but the index holding that old data works fine and I get sensible results.

The only changes I've made to the indices are that I went down to 1-hour indices a few days ago to see if that helped, and very recently I set doc_values: true on @timestamp. All the 1-hour index shards are failing, despite having 1/24th the documents of the big daily indices from earlier.

(As an aside, doc_values on @timestamp has let me do basic time sorting again, but should I really be using doc_values on all my non-analyzed fields?)

So what determines when the circuit breaker is going to fire? It's not the number of docs in the index. It's not the number of distinct values. It's not the total docs in the system. So... a race condition? A misleading error message? I'm sort of stumped.

Any insight appreciated! I'd love to be able to put 1B docs in here, but performance has been steadily degrading since I hit 100M or so.

Jeff

Yes, or just upgrade to 2.x, where doc_values is the default for not_analyzed fields and it does it for you.

It's a % of heap - Circuit breaker settings | Elasticsearch Guide [8.11] | Elastic

Thanks for your reply.

I should have mentioned in the initial post that this is an AWS Elasticsearch Service cluster, so I don't have direct control of the servers or config files. It's running 1.5.2 right now, to the best of my knowledge.

So I either need to make do with 1.5.2, throw AWS ES under the bus, or start asking them when they're upgrading; I can't just go to 2.0.

I understand that the CB fires based on a % of heap; I feel confident that I've read all the easy-to-find material published about this. The particular problem I'm having is that the model presented is "If the estimated query size is larger than the limit, the circuit breaker is tripped and the query will be aborted and return an exception," and, in a different doc, "If the resulting fielddata size would exceed the specified size, other values would be evicted in order to make space."

But this isn't the behaviour I'm seeing, so I'm asking what the practical behaviour is.
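
As a sanity check on that model, I've been looking at what the breaker itself reports. A sketch, assuming AWS passes the stock 1.x node-stats API through (the hostname is a placeholder for my endpoint):

    # Per-node breaker state: estimated bytes vs. configured limit, plus trip count
    curl -XGET 'https://my-es-endpoint:9200/_nodes/stats/breaker?pretty'

If the documented model held, I'd expect the fielddata breaker's estimate to stay comfortably under its limit between queries.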

Let's say I have 60 indices (i1..i60) with 5 shards each. They're time-boxed, e.g. logs-2015-09-01. The recent indices (say i40..i60) are small, with ~1M docs each. The older ones are bigger, with ~20M docs each. The behaviour I'm seeing is this:

Query 1: Date histogram of the last 4 hours. This will only return docs in i56..i60.
Result: Exceptions on all shards of i48..i60; no docs returned as a consequence.

Query 2: Identical date histogram, except the time range is moved back 10 days. This will only return docs in i30.
Result: Exceptions on all shards of i48..i60; all expected docs returned, since none resided in the problem shards.

What seems to be happening is this: the shards of the first 47 indices have plenty of room to do their search, but they leave cruft behind in the JVM heap's fielddata allocation. When i48's turn comes, it compares its estimated new fielddata against the remaining fielddata heap and says "no". All subsequent shards fail too. Then garbage collection happens, or the estimate of heap used is corrected, and we're ready for the next query to fail. But this isn't the model that's presented.
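
For what it's worth, I should be able to watch that cruft accumulate per field, and evict it by hand for cold indices. A sketch, assuming the standard 1.x endpoints are exposed (index name is just an example):

    # How much fielddata each node is holding for site.raw
    curl -XGET 'https://my-es-endpoint:9200/_cat/fielddata?v&fields=site.raw'

    # Manually evict fielddata for an old index that shouldn't still be on the heap
    curl -XPOST 'https://my-es-endpoint:9200/logs-2015-09-01/_cache/clear?fielddata=true'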

In practical terms, I can put all my non-analyzed fields into doc_values, pushing those lookups onto disk. But that just kicks the can down the road: I'm using Elasticsearch partly because it can do full-text search, and I can't do that with this heap problem. The second I step away from number/time/raw fields, I'm cooked.

So is there anything I can do to work around this? Is this a misconfiguration of ES, or of my indices? Should I put a wrapper on my queries that guesses which indices apply and only searches those (sketched below)?
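
To be concrete, the wrapper I have in mind would exploit the time-boxed index names: work out which indices the time range can touch and list only those in the search URL, so the old shards never load fielddata at all. A rough sketch with placeholder names, using the 1.x filtered-query syntax:

    # Query only the indices a 4-hour window can touch, instead of logs-*
    curl -XGET 'https://my-es-endpoint:9200/logs-2015-12-10,logs-2015-12-11/_search' -d '{
      "query" : {
        "filtered" : {
          "filter" : {
            "range" : { "@timestamp" : { "gte" : "now-4h" } }
          }
        }
      }
    }'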

In short, is there any way I can put 1B Apache logs into ES 1.5.2 and still have useful search capability, without simply throwing ~128GB of heap per node at the problem?

Thanks,
Jeff

"In short, is there any way can I put 1B apache logs into ES 1.5.2 and still have useful search capability without simply throwing ~128GB of heap per node at the problem?"
Well with fielddata it seems difficult, using docvalues is the key here as most of the memory is allocated outside the heap and the values are compressed efficiently.
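
In 1.5 you can enable it per field in your mappings/templates; something like this on your existing site field should do it (a sketch, adjust to your template):

        "site" : {
          "type" : "string",
          "norms" : { "enabled" : false },
          "fields" : {
            "raw" : {
              "type" : "string",
              "index" : "not_analyzed",
              "ignore_above" : 256,
              "doc_values" : true
            }
          }
        }

Note that doc values only work on not_analyzed fields, which is why the analyzed "site" parent can't use them.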

Time to update my templates and re-map my old indices. Wish me luck!

Jeff

Unfortunately you are at the mercy of AWS here. There are heaps of improvements in 2.1 (the latest version) that would help in this situation. It may be that what you save by not managing your own cluster isn't worth it in this case.

Doc values! :slight_smile:

Yeah, I'm aware of the costs of going with the AWS packaged solution vs. hosting my own.

Bearing in mind this is my first experience with E, L and K, I figured it was worth it to reduce some of the complexity.

Given that this service is only ~2 months old, I expect they're having growing pains and are probably adding new things quickly. I wanted to collect some sound input from folks here that I could take to them. My experience is that, despite being a huge organization, they're very reasonable about feedback. Their RDS offering supports many versions of MySQL, for example; I'm hoping to see the same with ES.