Logstash on Kubernetes pipeline terminated if resources.limits.cpu/memory is set

When running Logstash (as a StatefulSet) on EKS with resources.limits.cpu/memory set, the pods keep exiting with a "pipeline terminated" message, resulting in CrashLoopBackOff. The input is a Kafka stream. When I do a kubectl edit and delete the limits section of the manifest, it starts working fine again. The requests and limits were identical: cpu: "1" and memory: 512Mi. Not sure what the limits field has to do with this. Any idea?
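For reference, a minimal sketch of the kind of StatefulSet spec described above, with illustrative names and image tag (none of these are from the original post). One likely factor worth noting: the Logstash Docker image defaults to a 1g JVM heap (-Xms1g -Xmx1g), so with a 512Mi container memory limit the JVM alone can exceed the cgroup limit and the pod gets OOMKilled; removing the limit hides that. Capping the heap below the limit via LS_JAVA_OPTS (a documented env var on the official image) is one thing to try:

```yaml
# Hypothetical excerpt, not the poster's actual manifest.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: logstash
spec:
  serviceName: logstash
  replicas: 1
  selector:
    matchLabels:
      app: logstash
  template:
    metadata:
      labels:
        app: logstash
    spec:
      containers:
        - name: logstash
          image: docker.elastic.co/logstash/logstash:8.13.0  # illustrative tag
          env:
            # Assumption: the default 1g heap is what blows past the
            # 512Mi limit. Keep heap comfortably under the container
            # limit to leave headroom for off-heap memory.
            - name: LS_JAVA_OPTS
              value: "-Xms256m -Xmx256m"
          resources:
            requests:
              cpu: "1"
              memory: 512Mi
            limits:
              cpu: "1"
              memory: 512Mi
```

Checking `kubectl describe pod` for an OOMKilled last state (exit code 137) would confirm or rule out this theory.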
