- Elasticsearch v7.17.1 running on Kubernetes with the Elastic Docker image
- 6 data nodes: 4 CPUs / 14 GB heap / 28 GB memory each
- Index: 3 primary shards (+ 1 replica each), 60 GB of data
- nightly full index export with Apache Spark, using the elasticsearch-hadoop library
- scroll size: 1200
- 6 tasks in parallel
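For context, the read side of the export uses elasticsearch-hadoop settings roughly like these (the option names are the documented es-hadoop ones; the host and index values are illustrative placeholders, not our real names):

```properties
# elasticsearch-hadoop connector settings (sketch).
# es.nodes / es.resource values below are illustrative.
es.nodes = elasticsearch-data:9200
es.resource = my-index
# Number of documents returned per scroll request, matching the setup above.
es.scroll.size = 1200
```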
During the long scroll/search phase, every data node's memory usage grows by more than 10 GB, while the heap looks fine. How can I debug what is in this memory, please?
This is a problem for us because Spark runs on the same Kubernetes cluster and also consumes a lot of memory.