Sustained 50%+ increase in heap after mapping update

TL;DR

After adding mappings for 217 new fields (72 of which were nested), heap usage increased ~50%, with most of the increase going to fixed_bit_set memory. Is there a way to reduce this heap growth while keeping the mapping update?

Update:
We have taken a number of measures to support our feature and mitigate the memory increase:

  1. Reindexed all of our indexes smaller than 100 GB (still working on the large ones) back to the prior mappings, and set better shard counts (shard count = index size in GB / 30); a rough sketch of this follows the list. This reduced our shard count from 4400 to 1500, heap dropped drastically, and the cluster became stable again.
  2. Upgraded to minor version 5.6; this didn't seem to have a significant impact.
  3. Added 8 more nodes to increase the cluster's total memory.
  4. Reduced the additional mappings needed for the feature to 78 extra fields (instead of 217), 25 of which are nested (instead of 72).
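
For reference, the reindex in step 1 looked roughly like the following. This is only a sketch: the index names are made up, and the shard count is just our size-in-GB / 30 rule applied to a ~90 GB index.

```
# Create the target index with the prior mappings (omitted here) and the new,
# smaller shard count: ~90 GB / 30 -> 3 primary shards.
curl -XPUT 'localhost:9200/events-v2' -H 'Content-Type: application/json' -d '
{
  "settings": { "number_of_shards": 3, "number_of_replicas": 1 }
}'

# Copy the data across as a background task.
curl -XPOST 'localhost:9200/_reindex?wait_for_completion=false' -H 'Content-Type: application/json' -d '
{
  "source": { "index": "events" },
  "dest":   { "index": "events-v2" }
}'
```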

When we rolled out these new, reduced mappings for the feature to all indexes smaller than 100 GB, fixed bit set memory only increased by ~100 MB per node, and there was no noticeable increase in heap across the cluster. We had expected a larger jump in heap and fixed bit set memory, since the original problem mappings caused a much larger increase.


Additional details:

We have 17 data nodes, each with:

  • 64 GB RAM, 31.9 GB heap
  • 1.6 TB SSD, ~800 GB - 1 TB of data each
  • 6-core 3.5 GHz CPU

Cluster stats (at the time of the incident):

  • ES version 5.4.3
  • ~550 indices, all with the same mappings and settings
  • Indexes range in size from <1 MB to 1 TB; we try to keep shards under 50 GB
  • 4400 shards
  • ~16.5 TB data
  • ~55 billion docs

Mapping information:

  • Prior to the migration we had 569 total fields, 83 of which were nested; the majority of fields sit under a single layer of nesting
  • The migration increased the total number of fields to 786, 155 of which were nested

After the heap spike, we inspected node stats and found that the majority of the memory appears to be taken by the fixed bit set, ranging from ~18 GB to 21+ GB per node. It's hard to find information on it, but it appears to be related to nested documents, so our suspicion is that the increase in nested docs is mostly responsible for the heap spike.
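
For anyone else digging into this, the check is roughly the following (a sketch; fixed_bit_set_memory_in_bytes is the field name I see in the 5.x segments stats, so double-check it against your version):

```
# Per-node fixed bit set memory (in bytes), from the segments section of node stats.
curl -s 'localhost:9200/_nodes/stats/indices/segments?pretty' | grep fixed_bit_set

# The same figure per index (add ?level=shards for per-shard detail) via index stats.
curl -s 'localhost:9200/_stats/segments?pretty' | grep fixed_bit_set
```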

We would love to support this mapping update if feasible. Are there settings or configurations that we can tweak to reduce the heap associated with these changes? Are there other things we can do to investigate and fix the issue?

I'm happy to provide additional info about the cluster and current settings.


You're correct: that memory is being used because of the increase in the number of nested fields. Here's some info about the bitset (from an old ticket, but the principle hasn't changed much since then):

The nested feature leans heavily on in-memory bitsets. Essentially, each parent nested field needs an in-memory bitset. If all the nested fields are at the root level, only one bitset needs to be in memory, but once you have multiple levels of nested fields, more bitsets need to be loaded into memory, and this can become expensive.
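
As a hypothetical illustration of where the extra bitsets come from, here is a two-level nested mapping sketch; the index and field names are made up:

```
# Made-up example. The root -> comments join needs one bitset; nesting replies
# inside comments adds another join level, and therefore more bitsets in memory.
curl -XPUT 'localhost:9200/nesting-example' -H 'Content-Type: application/json' -d '
{
  "mappings": {
    "doc": {
      "properties": {
        "comments": {
          "type": "nested",
          "properties": {
            "author":  { "type": "keyword" },
            "replies": {
              "type": "nested",
              "properties": { "author": { "type": "keyword" } }
            }
          }
        }
      }
    }
  }
}'
```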

Unfortunately, there's not much we can do about that memory overhead... it's just how nested docs work. To support the relational-style features that nested docs provide, we have to keep an in-memory join table (represented by the bitset) so we know which docs belong to which parents.

The biggest thing you can do is reduce the number of nested fields (as you did), and reduce the depth of nesting if possible, since multi-level nested docs become increasingly expensive.
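
For fields where you never need to correlate multiple sub-fields within the same array element, you can also drop nested entirely and use a plain object mapping, which needs no join bitset at all. A rough sketch (names are hypothetical):

```
# Plain object mapping instead of nested: sub-fields are flattened into the parent
# document, so no in-memory join bitset is required. The trade-off is that a query
# can no longer require that two sub-field values come from the same array element.
curl -XPUT 'localhost:9200/flattened-example' -H 'Content-Type: application/json' -d '
{
  "mappings": {
    "doc": {
      "properties": {
        "comments": {
          "type": "object",
          "properties": {
            "author": { "type": "keyword" },
            "text":   { "type": "text" }
          }
        }
      }
    }
  }
}'
```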

There's an open issue in Lucene to move the BlockJoinQuery (which is what nested uses the bitset for) over to doc values, which would drastically reduce the heap usage. But it's still a work-in-progress. Here's the issue: [LUCENE-7304] Doc values based block join implementation - ASF JIRA

1 Like
