Elastic Cloud Console Showing Unhealthy Deployment

My cloud ELK stack (2 hot, 2 warm, 1 integration, 1 Kibana, and 1 master instance) is showing one of the warm instances and the master instance as unhealthy. Both are under 30% JVM memory pressure and under 10% disk allocation. On top of that, when I go into Kibana it reads as healthy and is logging as expected. I'm aware the hot nodes are still going, so that is fine, but outside of restarting the whole deployment, are there other steps I can take to get these two rogue unhealthy nodes back to a healthy state? I tried an edit-deployment-and-save to see if rescanning the instances would correct this, but it did not. The shards also all look like they're clearing through with no issues.
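For anyone cross-checking, the per-node JVM and disk numbers can also be pulled from Kibana -> Dev Tools with something like the following (a sketch using the _cat APIs; the h= column list and s= sort are optional parameters, and the column names are standard _cat fields rather than anything specific to this deployment):

GET _cat/nodes?v&h=name,node.role,heap.percent,disk.used_percent,cpu

GET _cat/shards?v&h=index,shard,prirep,state,node&s=state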

Hi @daniel.talbot Have you opened a support ticket? If not, you should.

Stephen, yes. I also sent in an email; I was just trying to be proactive by posting here too.

What do these commands show in Kibana -> Dev Tools?

GET _cat/health?v

GET _cluster/health

GET _cat/nodes?v
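For reference, on a healthy cluster the first command comes back with status green, roughly like this (illustrative values only, not your deployment's actual output):

epoch      timestamp cluster    status node.total node.data shards pri relo init unassign pending_tasks max_task_wait_time active_shards_percent
1700000000 12:00:00  my-cluster green  6          5         120    60  0    0    0        0             -                  100.0%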

Stephen,

Here are three screenshots:


[screenshots: output of the three commands above]

Cluster status is green, which means all primary and replica shards are allocated, so Elasticsearch itself considers the cluster fine.
What exactly is showing unhealthy?
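Since it is green, a bodyless allocation explain is a quick sanity check; when nothing is unassigned it should actually come back with an error saying there are no unassigned shards to explain (expected behavior here, not a problem):

GET _cluster/allocation/explain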

Yeah, that is where I am confused. It is showing as unhealthy in the Elastic Cloud console.

Did you actually X out of the message? It could be old / from the past; those messages sometimes hang around. Or try a Shift+Refresh of the page.

When you go to the Elastic Cloud Overview page does it show healthy or not?

Does each node show healthy?

Tried to hit the X, but it's unclickable. Refreshed the page and no change; the master and one warm node still show "Unhealthy". Restarting the deployment now that it's after hours for us.

The issue ended up being with the allocator our instances were running on. Support remapped us to a new allocator and is looking into it further. Thank you @stephenb for the responses though.


This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.