Data Node runs out of memory (3.8 million IndexRequest)

The "Objects" column (2nd column) shows the total number of instances for that class/package, and the retained size shows the size of that class and all of its members.

There are thus ~70,000 TransportReplicationAction$ReroutePhase instances, with a total retained size of ~15.6 GB.
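
For scale, a quick back-of-the-envelope on those two numbers (a sketch only; the exact per-instance breakdown would come from the heap analyzer itself):

```python
# Rough per-instance cost implied by the heap dump numbers above.
reroute_phase_instances = 70_000
total_retained_bytes = 15.6 * 1024**3  # ~15.6 GB retained

avg_retained = total_retained_bytes / reroute_phase_instances
print(f"~{avg_retained / 1024:.0f} KiB retained per ReroutePhase instance")
# => roughly 230 KiB each, suggesting every ReroutePhase is keeping a
#    sizeable chunk of request data (bulk/index requests) alive on the heap.
```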

There are also 22,000 bulk requests:

3.8 million index requests:

Here is an example document:

However, my thread_pool.bulk.queue_size is only 50 for that node.
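
One way to sanity-check what the bulk pool on that node is actually doing is the `_cat/thread_pool` API. A minimal sketch, assuming a reachable node at a placeholder URL (on newer versions the pool may be exposed under the name `write` instead of `bulk`):

```python
import requests  # third-party: pip install requests

ES_URL = "http://localhost:9200"  # placeholder; point at the affected node

# Live view of the bulk thread pool: active threads, queued tasks, rejections.
# If the queue really is capped at 50, `queue` should never exceed that and
# `rejected` should start climbing once producers outpace the pool.
resp = requests.get(
    f"{ES_URL}/_cat/thread_pool/bulk",
    params={"v": "true", "h": "node_name,name,active,queue,queue_size,rejected"},
)
print(resp.text)
```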

Additional info (a rough cross-check of these numbers follows the list below):
46 ingest/hot nodes
46 Logstash instances
thread_pool.bulk.queue_size: 50
thread_pool.bulk.size: 17
Batch size for ES publish is 200 documents per request.
Each Logstash has 72 threads available for publishing.
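
Putting those figures together (a rough sketch only, using the numbers quoted above):

```python
# Back-of-the-envelope cross-check of the numbers above.
logstash_instances = 46
threads_per_logstash = 72
docs_per_bulk = 200            # Logstash batch size for ES publish

# Upper bound on client-side in-flight work, if every publisher thread
# has exactly one bulk request outstanding at a time.
max_inflight_bulks = logstash_instances * threads_per_logstash  # 3,312
max_inflight_docs = max_inflight_bulks * docs_per_bulk          # 662,400
print(f"max in-flight bulks from Logstash: {max_inflight_bulks}")
print(f"max in-flight docs from Logstash:  {max_inflight_docs}")

# What the heap dump actually shows on a single data node.
heap_bulk_requests = 22_000
heap_index_requests = 3_800_000
print(f"{heap_index_requests / heap_bulk_requests:.0f} docs per bulk on the heap")
# => ~173 docs per bulk, roughly consistent with the batch size of 200.

# 22,000 bulk requests held on one node is well beyond the queue cap of 50,
# and several times the ~3,312 bulks the whole Logstash fleet can even have
# in flight at once -- which suggests the requests piling up here are not
# sitting in the bulk thread-pool queue itself.
```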
