I'm trying to understand why I'm seeing high resident heap usage for a small index on an idle node. Here is some information on my current state.
4-node ES cluster (all nodes set to master: true / data: true)
Node in question is not the master.
Heap used/committed = 5.9/7.9 GB
OS Memory = 16GB
Index size for primary shard on node: 205MB
Other shards on the node: only about 10 MB
Index information from HQ:
Index documents: 2.1 million
Max documents: 2.4 million
Deleted documents: 287K
Merge Total: 89
Merge Total Docs: 35 million
Merge Total Size: 2.9GB
The way I built up my index was by issuing a large number of index requests in a very short amount of time. I did this inefficiently (one document per bulk request) with upsert enabled. I'm using custom routing, with the same routing key for all inserts. The requests were a mix of standard inserts/updates and partial updates/upserts against the same documents. I originally hit an OOM during this period, but after recovering and restarting the node, memory usage stays at around 70% of heap. Originally all my nodes sat at around 20-30% heap used, but now they are stuck in the higher range for what I consider a small index.
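For context, the indexing pattern I used looked roughly like the sketch below (index name, routing key, and field names are placeholders, not my actual code). It builds `_bulk` NDJSON bodies of update-with-upsert actions, and contrasts what I was doing (one document per bulk body) with the batched form I should have used:

```python
import json

def bulk_upsert_body(docs, index="my_index", routing="shared-key"):
    """Build an Elasticsearch _bulk NDJSON body of update-with-upsert actions.

    Each document contributes two lines: an action line (with the shared
    routing key) and a source line; doc_as_upsert tells ES to create the
    document if it doesn't already exist.
    """
    lines = []
    for doc_id, fields in docs:
        lines.append(json.dumps(
            {"update": {"_index": index, "_id": doc_id, "routing": routing}}
        ))
        lines.append(json.dumps({"doc": fields, "doc_as_upsert": True}))
    return "\n".join(lines) + "\n"

docs = [("1", {"count": 1}), ("2", {"count": 2})]

# What I was doing: one tiny bulk body (a single 2-line action) per document.
per_doc_bodies = [bulk_upsert_body([d]) for d in docs]

# What I should have done: one body carrying the whole batch of actions.
batched_body = bulk_upsert_body(docs)

print(len(per_doc_bodies))                    # number of separate requests
print(len(batched_body.strip().split("\n")))  # NDJSON lines in the batched body
```

The upsert mix plus the single shared routing key means every write landed on the same primary shard, which matches the heavy merge activity (35M docs merged for a 2.1M-doc index) in the stats above.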
I know I probably need to take a heap dump, but first I'm trying to find some explanation for the state my node is in now.