Filebeat iowaits issue?

Hi @jsoriano
thanks for offering your help.
We normally just go into the google console and suspend the instance.
What we found after taking a deeper look is the following:

When the cluster resumes, the Filebeat pods need somewhere around 300Mi of RAM. This causes an OOM kill, as the memory limit for the pods is set to 200Mi. As soon as the OOM kill happens, the pods go crazy and cause very high iowait, which makes the node almost unusable. We are not sure what exactly is causing this; maybe a memory leak or something similar. The only way to fix it is to run, for example, `kubectl rollout restart daemonset filebeat`.
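In case it helps, a sketch of the workaround we are considering: raising the memory limit in the Filebeat DaemonSet above the ~300Mi observed at resume. The field path assumes a standard Filebeat DaemonSet manifest, and the 400Mi value is just our guess at a safe headroom, not a recommended number:

```yaml
# Excerpt from the filebeat DaemonSet spec (hypothetical values).
# Limit raised above the ~300Mi needed after a cluster resume
# so the kernel doesn't OOM-kill the pods.
containers:
  - name: filebeat
    resources:
      requests:
        memory: 200Mi
      limits:
        memory: 400Mi   # assumption: enough headroom over the ~300Mi spike
```

Applied with the usual `kubectl apply -f filebeat-daemonset.yaml` (or an equivalent edit), this would at least tell us whether the iowait spiral is purely a consequence of the OOM kill.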

Does this make any sense to you?