Randomly out of memory exception when bulk importing data

Hi, we use the JavaScript bulk helper API to bulk-import data into ES.
Randomly, the import process crashes with an out-of-memory exception (OOME). Of course we check for errors from the bulk import, but there aren't any.
I'm familiar with backpressure issues in Node.js stream processing, but I can't see any backpressure-controlling mechanism here for the ES bulk import, even though the input comes from a readable stream.
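For context, the kind of backpressure I'd expect looks roughly like the sketch below: pull records from an async iterable, buffer them into batches, and stop pulling while all in-flight slots are busy, so memory stays bounded. This is only an illustration of the pattern; `bulkWithBackpressure`, `sendBatch`, and the option names are hypothetical, not the ES client's API (the real helper exposes knobs like batch size and concurrency, which is where I'd look first).

```javascript
// Illustrative backpressure sketch (NOT the ES client internals):
// at most `concurrency` batches are in flight; while every slot is
// busy we stop pulling from the source, so heap usage stays bounded.

async function* records(n) {
  // stand-in for the readable stream of import records
  for (let i = 0; i < n; i++) yield { id: i, body: 'x'.repeat(10) };
}

async function bulkWithBackpressure(source, { batchSize, concurrency, sendBatch }) {
  const inFlight = new Set();
  let batch = [];
  let peak = 0; // highest number of simultaneous in-flight batches observed

  const flush = () => {
    if (batch.length === 0) return;
    const p = sendBatch(batch).then(() => inFlight.delete(p));
    batch = [];
    inFlight.add(p);
    peak = Math.max(peak, inFlight.size);
  };

  for await (const rec of source) {
    batch.push(rec);
    if (batch.length >= batchSize) {
      flush();
      // Backpressure: when every slot is busy, wait for one to free up
      // before pulling more records from the source.
      if (inFlight.size >= concurrency) await Promise.race(inFlight);
    }
  }
  flush();
  await Promise.all(inFlight);
  return peak;
}
```

Without the `Promise.race` wait, a fast producer and a slow cluster let unsent batches pile up in memory, which is exactly the failure mode I suspect here.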

I have tried to analyze heap dumps taken shortly before the OOME, but so far I haven't found a clue. I'm not too familiar with Node.js heap dump analysis, however.

Interestingly, I never get these OOMEs from my dev console, only from deployments. Both address the same ES cluster, and the dev environment has memory limits similar to the deployment's.

Any ideas, what I can check?

I should add that I'm comparing behaviour using the same test data set. If it's a feature branch deployment, there are no other actions running in parallel. I have no insight into the cluster status because that is outside my dev scope.

What is the size of your bulk requests? What is the specification and configuration of your cluster? Which version of Elasticsearch are you using? Is there possibly a difference in the number of concurrent bulk requests between the two scenarios?

What is the size of your bulk requests?

All I can answer is: we bulk-import files with pretty much the same code as the example code from here.

The records of the import files contain, say, 5 to 20 fields with strings between roughly 5 and 50 characters each.
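To put a rough number on request size: taking the worst case of those figures and assuming the helper flushes at around 5 MB (check your configured flush-bytes setting; the overhead constants below are guesses, not measurements), a back-of-the-envelope estimate looks like this:

```javascript
// Rough estimate of bulk request size from the record shape described
// above. Overhead constants are assumptions for illustration only.
const fields = 20;           // worst case from the description
const charsPerField = 50;    // worst case from the description
const overheadPerField = 20; // quotes, key name, separators (rough guess)
const actionLine = 60;       // the {"index":{...}} metadata line (rough guess)

const perRecord = fields * (charsPerField + overheadPerField) + actionLine;
const flushBytes = 5 * 1000 * 1000; // assumed ~5 MB flush threshold

console.log(`~${perRecord} B/record, ~${Math.floor(flushBytes / perRecord)} records per flush`);
```

So each individual bulk request should stay in the low-megabyte range; the memory problem is more likely the number of batches held in memory at once than the size of any single one.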

Which version of Elasticsearch are you using?

Kibana "GET /" returns 8.11.3

What is the specification and configuration of your cluster?

As I wrote, I have no insight beyond my dev scope. Is this question specific enough to be forwarded to our ops team?

Is there possibly a difference in the number of concurrent bulk requests between the two scenarios?

No, there isn't.