Elasticsearch indexing_pressure.memory.limit

When I try to index large log files through bulk_index, I notice some logs go missing. I suspect it could be a memory issue on the Elasticsearch nodes, so I tried changing indexing_pressure.memory.limit to 40%, but the issue still persists. FYI, the server has 10GB RAM and runs two Elasticsearch nodes (Docker containers), both with mem_limit set to 6GB. Any advice on how to fix the missing-logs issue?
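For context, indexing_pressure.memory.limit is a static node setting, so a change like this has to go into each node's elasticsearch.yml (or an equivalent environment override) and needs a node restart to take effect. A minimal sketch of the change, assuming the default config layout:

```yaml
# elasticsearch.yml (per node) - static setting, restart required.
# Default is 10% of the heap; this raises the indexing-pressure budget to 40%.
indexing_pressure.memory.limit: 40%
```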

Welcome to the forum!

Why two? Why not just one Elasticsearch node, without any memory overcommitment?


Welcome!

How large are the bulk requests? Maybe consider reducing the bulk size?
And yes, as @RainTown noted, it's pointless to run two instances on the same machine unless you are testing something specific...
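One thing worth checking on the client side: when indexing pressure kicks in, Elasticsearch rejects individual bulk items with a 429 (es_rejected_execution_exception), and if the loader never inspects the per-item responses those documents silently disappear. A minimal sketch of a safer loader, assuming the official Python client and a hypothetical logs.ndjson input file:

```python
from elasticsearch import Elasticsearch
from elasticsearch.helpers import streaming_bulk

es = Elasticsearch("http://localhost:9200")

def read_actions(path):
    """Yield one bulk action per log line (hypothetical NDJSON input)."""
    with open(path) as f:
        for line in f:
            yield {"_index": "logs", "_source": {"message": line.rstrip("\n")}}

failed = 0
# Smaller chunks plus automatic retries on 429 rejections,
# instead of one huge request whose failures go unnoticed.
for ok, item in streaming_bulk(
    es,
    read_actions("logs.ndjson"),
    chunk_size=500,        # try lowering this if rejections persist
    max_retries=5,         # retries items rejected with 429
    initial_backoff=2,     # seconds; doubles on each retry
    raise_on_error=False,  # keep going, but count and log what failed
):
    if not ok:
        failed += 1
        print("failed item:", item)

print(f"done, {failed} failures")
```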

I’m a bit old school. Docker, containers, virtualization, k8s, all that modern wizardry - lovely stuff. But at the end of the day, a pint pot is still a pint pot. You can wrap it in all the YAML you want, it’s not magically turning into a bucket.

Is this “normal” now? “The server has 10GB RAM and has two Elasticsearch nodes … each with mem_limit set to 6GB.”

My hunch is the “server” may actually be a VM, because clearly what this setup needs is another layer or two. Also, it only dawned on me later that these two instances might be in different Elasticsearch clusters? :grinning_face: Or that there might be a 2GB swap partition on the host.
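To make the pint-pot point concrete: with 10GB of RAM, two containers each allowed 6GB can together commit 12GB, so under indexing load the host has to swap or OOM-kill something. A sketch of a single-node layout that actually fits the host, assuming Docker Compose and an 8.x image (tag and sizes illustrative):

```yaml
# docker-compose.yml - illustrative only; one node sized to fit a 10GB host.
services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:8.13.4
    environment:
      - discovery.type=single-node
      - xpack.security.enabled=false   # local testing only
      - ES_JAVA_OPTS=-Xms3g -Xmx3g     # heap at ~half the container limit
    mem_limit: 6g                      # leaves headroom for the OS and page cache
    ports:
      - "9200:9200"
```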


Actually, I was considering multiple ES nodes for load balancing, and to ensure high availability through failover and backup strategies.