Elasticsearch going down on coordinator node because of JVM heap space issue

Hi @Armin_Braun,
We checked the tasks API on the coordinator node's IP and found that there are bulk write tasks running there, even though we have not configured this IP (the coordinator node) for sniffing for bulk ingestion.

We see both direct bulk write requests and rerouted write requests there. Could you please explain why bulk write requests and rerouted write requests are hitting this node when it is configured only as a coordinating node with no shards on it?

PS: We don't have a load balancer in place. We have configured only the data nodes for sniffing to get an ES connection, and that connection is used to ingest with the bulk API.
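
For reference, the ingest side is set up roughly like the sketch below (a minimal sketch, assuming the Python elasticsearch client; the hostnames, index name, and document are placeholders, not our real values):

from elasticsearch import Elasticsearch, helpers

# Only data node addresses are passed as seed hosts; the coordinator node IP is not listed.
# Hostnames below are placeholders.
es = Elasticsearch(
    hosts=["data-node-1:9200", "data-node-2:9200", "data-node-3:9200"],
    sniff_on_start=True,            # discover nodes from the cluster at startup
    sniff_on_connection_fail=True,  # re-sniff if a connection drops
    sniffer_timeout=60,             # periodically re-sniff the cluster
)

def gen_actions(docs):
    # Wrap each document as a bulk index action (index name is a placeholder).
    for doc in docs:
        yield {"_op_type": "index", "_index": "tgs_example-000001", "_source": doc}

# Bulk ingestion over the sniffed connection.
helpers.bulk(es, gen_actions([{"field": "value"}]))
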

PFB a snippet of the tasks API output. The full tasks API response contains many tasks similar to the ones in the snippet below.

GET _tasks?nodes=ip-10-xx-xx-xxx.ec2.internal&human&actions=indices:*&detailed

"gl4L2YENT8GE1kLbD9EF5Q:861742" : {
"node" : "gl4L2YENT8GE1kLbD9EF5Q",
"id" : 861742,
"type" : "transport",
"action" : "indices:data/write/bulk",
"description" : "requests[1000], indices[tgs_xxxxxx_xxxx_c2-5-000086]",
"start_time" : "2019-12-26T05:57:57.185Z",
"start_time_in_millis" : 1577339877185,
"running_time" : "2.1s",
"running_time_in_nanos" : 2145269865,
"cancellable" : false,
"parent_task_id" : "gl4L2YENT8GE1kLbD9EF5Q:566962",
"headers" : { }
},
"gl4L2YENT8GE1kLbD9EF5Q:861737" : {
"node" : "gl4L2YENT8GE1kLbD9EF5Q",
"id" : 861737,
"type" : "transport",
"action" : "indices:data/write/bulk[s]",
"status" : {
"phase" : "rerouted"
},
"description" : "requests[333], index[tgs_xxxxxx_xxxx_c2-5-000092]",
"start_time" : "2019-12-26T05:57:57.120Z",
"start_time_in_millis" : 1577339877120,
"running_time" : "2.2s",
"running_time_in_nanos" : 2209866093,
"cancellable" : false,
"parent_task_id" : "gl4L2YENT8GE1kLbD9EF5Q:861736",
"headers" : { }
},