Kibana Status is Yellow and the Pods Become Unavailable

Hello Everyone,

I am running the Elastic Stack on Kubernetes. Recently I started getting timeouts on my Kibana pods, along with alerts, and in the Kibana dashboard a few of the instances turn yellow. I have 4 pods (instances) running, and after 15-20 minutes they clear by themselves. Kibana overall is still available; I just get the alerts, and in the Kibana instances section (Clusters > myesdb > Kibana) one or sometimes several of the instances show a degraded status and the overall status changes to yellow. I added more RAM and more pods (we used to have 2, now 4) in case it was a resource-related timeout, but it keeps happening on a daily basis. What does a yellow status mean in Kibana? Does anyone know?

I also checked the logs, and this is the only error I have been able to gather from the Kibana pods:

{"type":"log","@timestamp":"2022-08-20T03:11:54+00:00","tags":["error","plugins","alerting"],"pid":8,"message":"Executing Alert infrastructure:xpack.uptime.alerts.monitorStatus:d9cef256-a033-485a-802c-6b8676afa2ae has resulted in Error: params invalid: [shouldCheckStatus]: expected value of type [boolean] but got [undefined]"}
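For what it's worth, the interesting parts of that message can be pulled out programmatically. This is a minimal sketch that assumes the Kibana 7.x JSON log format shown above, where the alerting plugin logs `Executing Alert <spaceId>:<ruleTypeId>:<ruleId>`; with those three pieces you can look up the failing rule in Stack Management:

```python
import json
import re

# Log line copied verbatim from the Kibana pod output above.
log_line = (
    '{"type":"log","@timestamp":"2022-08-20T03:11:54+00:00",'
    '"tags":["error","plugins","alerting"],"pid":8,'
    '"message":"Executing Alert infrastructure:xpack.uptime.alerts.monitorStatus:'
    'd9cef256-a033-485a-802c-6b8676afa2ae has resulted in Error: params invalid: '
    '[shouldCheckStatus]: expected value of type [boolean] but got [undefined]"}'
)

entry = json.loads(log_line)

# Split "Executing Alert <spaceId>:<ruleTypeId>:<ruleId>" into its pieces
# (assumed format; the rule ID is a 36-character UUID).
match = re.search(r"Executing Alert (\S+?):([\w.]+):([0-9a-f-]{36})", entry["message"])
if match:
    space_id, rule_type, rule_id = match.groups()
    print(space_id, rule_type, rule_id)
```

Here that would point at an `xpack.uptime.alerts.monitorStatus` rule in the `infrastructure` space, which suggests the error comes from a misconfigured uptime alert rather than from Kibana's health itself.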

I'm not sure if that says anything, but I do not know what else to do or where else to look. I have allocated these resources in my YAML:

resources:
  requests:
    memory: 4Gi
    cpu: 200m
  limits:
    memory: 4Gi
    cpu: 2
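One way to see *why* an instance is yellow is Kibana's status endpoint (`GET /api/status`, e.g. via `kubectl exec <kibana-pod> -- curl -s http://localhost:5601/api/status` against each pod). A minimal sketch of reading it, using an illustrative payload in the 7.x response shape (field names may differ in other Kibana versions, and the plugin IDs and messages here are made up for the example):

```python
import json

# Illustrative /api/status payload (Kibana 7.x shape, heavily trimmed;
# the plugin entries below are hypothetical examples, not real output).
sample = json.loads("""
{
  "status": {
    "overall": {"state": "yellow"},
    "statuses": [
      {"id": "core:elasticsearch", "state": "green", "message": "Elasticsearch is available"},
      {"id": "plugin:alerting", "state": "yellow", "message": "Example degraded message"}
    ]
  }
}
""")

# Yellow overall means at least one core service or plugin reports degraded.
overall = sample["status"]["overall"]["state"]
degraded = [s["id"] for s in sample["status"]["statuses"] if s["state"] != "green"]
print(overall, degraded)
```

Running this against each pod's real response should show which plugin or service is dragging the instance to yellow, which is more actionable than the overall color alone.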

Does anyone have any pointers, or has anyone run into a similar situation with Kibana? Any help would be greatly appreciated.

Thanks
