Cluster information
- ES Version: 5.1
- Nodes: 3
- Indices: 3
- Total Size: 100GB
- Total Docs: 55 million
- Shards: 5 primaries and 1 replica (for each index). Total: 40 shards
- Service: AWS ES (SaaS)
Background: Our system has a peak period in the morning when we receive a high volume of search, read, and write requests, which can last up to 10 hours.
Use case: I wanted to force a garbage collection before the peak time. To do this, I am updating the cluster setting for the parent circuit breaker:
indices.breaker.total.limit: "55%"
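This is roughly the request I am sending (shown here as a transient update via the cluster settings API; whether AWS ES accepts this particular setting is my assumption):

PUT /_cluster/settings
{
  "transient": {
    "indices.breaker.total.limit": "55%"
  }
}

My expectation was that once heap usage crossed this limit, the node would be pressured into running a garbage collection.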
Problem: Even after heap usage grew from 53% to 56%, above the new breaker limit, as you can see in the node stats below, the JVM was not garbage collected.
GET /_nodes/stats/jvm
"timestamp": 1554375659658,
"uptime_in_millis": 10960191290,
"mem": {
"heap_used_in_bytes": 601959488,
"heap_used_percent": 56,
"heap_committed_in_bytes": 1065025536,
"heap_max_in_bytes": 1065025536,
"non_heap_used_in_bytes": 243082744,
"non_heap_committed_in_bytes": 251510784,
"pools": {
"young": {
"used_in_bytes": 11959328,
"max_in_bytes": 69795840,
"peak_used_in_bytes": 69795840,
"peak_max_in_bytes": 69795840
},
"survivor": {
"used_in_bytes": 1260248,
"max_in_bytes": 8716288,
"peak_used_in_bytes": 8716288,
"peak_max_in_bytes": 8716288
},
"old": {
"used_in_bytes": 588739912,
"max_in_bytes": 986513408,
"peak_used_in_bytes": 739896688,
"peak_max_in_bytes": 986513408
}
}
}
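To double-check whether a collection actually ran, I am also looking at the GC counters from the same endpoint, filtering the response down to the heap and GC fields (field paths are from the 5.x node stats, so treat the exact names as my assumption):

GET /_nodes/stats/jvm?filter_path=nodes.*.jvm.mem.heap_used_percent,nodes.*.jvm.gc.collectors

If the old collector's collection_count and collection_time_in_millis do not change after the settings update, that should confirm that no old-gen collection ran.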
I don't know whether this is a problem on the AWS Elasticsearch Service side or in native Elasticsearch.
If there is any other way to force a garbage collection I could try that, but I was not able to find one.