Node fails even after start/restart and is not joining the cluster

Hi, I have tried to diagnose this as much as I can. I have a three-node cluster, and one of the nodes went down and is not recovering from failed status. I have tried stopping and starting it, but it keeps returning to failed status. I have also tried killing the process, but nothing changes — whatever I do, the status stays failed.
I checked the cluster metrics in Kibana and found that cluster health is green, but the cluster has only 2 nodes — it is not recognizing the third node. If there were a problem with shard allocation I would expect health to be red or yellow, yet it is green, and every query I run in Kibana returns results.
Any suggestions here? Thanks!

Welcome to our community! :smiley:

What version are you on?
What does your config look like?
What do the Elasticsearch logs show?

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.