2 nodes with different cluster UUIDs causing an issue in Monitoring

hi,

We had a 1-node cluster and wanted to temporarily expand it to a 2-node cluster. I know that a minimum of 3 nodes is needed for a full-sized cluster, but keeping that aside, I have a different issue. We are running 5.5.1 with the X-Pack basic license used for monitoring.

During the install process, a new cluster UUID was somehow generated on the 2nd node, even though it was joining the same cluster with the same name. Now Monitoring is treating this as 2 separate clusters and was giving me an error that only 1 cluster can be monitored with the basic license.

To fix it, I uninstalled Elasticsearch from the 2nd node, updated the 1st node to remove the 2nd host name from the unicast hosts list, and restarted Elasticsearch and Kibana.
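For reference, the setting I reverted in the 1st node's elasticsearch.yml looked roughly like this (the host name is a placeholder for our actual host):

discovery.zen.ping.unicast.hosts: ["node1.example.com"]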

But Kibana is still showing 2 clusters with the same name, and the cluster is pointing to the new node (in Monitoring). It is also showing that node as green, even though I have uninstalled Elasticsearch from it. So I am not sure how to get rid of that 2nd cluster from Monitoring.

Please let me know what steps I should take to correct this.

Thanks!

I'm unsure if there is another method, but the easy way would probably be to DELETE the .monitoring* indices.
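For example, assuming Elasticsearch is listening on localhost:9200 and X-Pack security is not enabled (adjust the host and add credentials if your setup differs), something like:

curl -XDELETE 'http://localhost:9200/.monitoring-*'

Monitoring will recreate its indices on its own, so only the old stats history is lost.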

A stupid question maybe, but did you remember to stop the node before uninstalling Elasticsearch?

In my experience, a node is only shown as green in Kibana when it's actually running; when it stops, it turns gray. So if it's still shown as green in your Kibana, it is probably still running. If you log in to the server where you started Elasticsearch, you could try running this command:

ps -ef | grep elasticsearch

If you find a java process, then Elasticsearch is still running and you must terminate it with a kill signal.
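For example, if the ps output shows a process ID of 12345 (just a placeholder), you could run:

kill 12345        # ask the node to shut down gracefully
kill -9 12345     # only as a last resort, if it does not stop after a while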

That is what I ended up doing, as it was a dev environment. After some time, the proper node started appearing in the monitoring list, so the original issue is now resolved.

Maybe a dumb question: when expanding an existing cluster, is it always necessary to apply the unicast host entries on the existing nodes first, before bringing the new nodes up? I think the original problem was caused by doing this in the wrong sequence, as I installed and brought the 2nd node up first with both host names in its unicast hosts list. Since it was marked as master-eligible, I think that is what triggered the generation of a new UUID for the cluster.

Subsequently, I first updated the existing node's config and restarted it, then installed Elasticsearch on the second node, and was able to get the 2-node cluster up. For now, I have set minimum_master_nodes to 2 to avoid the split-brain issue. I just want to confirm that this is the expected sequence of steps.
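For completeness, the relevant settings in elasticsearch.yml on both nodes ended up roughly like this (cluster and host names are placeholders for our actual values):

cluster.name: my-cluster
discovery.zen.ping.unicast.hosts: ["node1.example.com", "node2.example.com"]
discovery.zen.minimum_master_nodes: 2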

We are on Windows, so when we uninstall Elasticsearch, it stops the Windows service and also removes the service entry. I have verified this.
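In case it helps anyone else, the bundled service script can also stop and remove the Windows service manually (run from the Elasticsearch installation directory), if the uninstaller ever leaves it behind:

bin\elasticsearch-service.bat stop
bin\elasticsearch-service.bat remove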
