Convert 2 Node Cluster Into Single Node

Hello -
I'm currently using Elastic 7.11 (open/community version) as the back-end storage for Graylog (also the open/community version). OS is Oracle Linux v8.

I have 2 Graylog servers, clustered, in front of 2 Elastic servers, clustered. All of this is sitting behind a load balancer.

I have a need to, effectively, break the clustering so that I only have one Graylog and one Elastic server. The Graylog part is easy enough. My concerns are around breaking the Elastic cluster and ensuring that I don't lose any of our log data in the process.

We don't really manage (for lack of a better term) Elastic - it's mostly just sitting there collecting/storing log data. So, my Elastic skills are pretty sparse. I've been trying to search for information about the process needed to make this change but, I'm coming up a bit short on helpful items (at least for someone with my level of comfort with Elasticsearch).

Anyway, does anyone have handy a good resource for changing a 2 node cluster into a single node? Whether it's something official from Elastic or something you've put together previously that you don't mind sharing.

Thanks for any help or pointers you can provide.


Welcome to our community! :smiley:

You can do this, but be aware if you lose the node you lose your data (unless you have a backup). The easiest way is to use an allocation filter to exclude allocation to one of the nodes, then once it is "empty" shut it down, and then disable replicas.
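The steps above can be sketched as the following requests (the host/port and the 10.0.0.2 address are placeholders for your actual setup):

```shell
# 1. Exclude the node you plan to retire from shard allocation,
#    so its shards relocate to the node you're keeping:
curl -X PUT "localhost:9200/_cluster/settings" -H 'Content-Type: application/json' -d'
{
  "persistent": {
    "cluster.routing.allocation.exclude._ip": "10.0.0.2"
  }
}'

# 2. Wait until the excluded node holds zero shards, then shut it down.

# 3. Drop replicas so the remaining single-node cluster can go green:
curl -X PUT "localhost:9200/_all/_settings" -H 'Content-Type: application/json' -d'
{
  "index": { "number_of_replicas": 0 }
}'
```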

Thanks Mark. That's very helpful.
I do have some follow up questions if you don't mind.

Presumably, I'd run/create that allocation filter on the node that I'm planning to keep, right?
You mentioned waiting for the node to become 'empty' - is there a command I can use to verify that has occurred?
Then, once the second node is shutdown, is there any need to remove that filter? I'm guessing it probably doesn't matter at that point.

Lastly, in the elasticsearch.yml file on the remaining node, I would want to update
discovery.seed_hosts and cluster.initial_master_nodes to remove the excluded node - correct? Or can I just leave them as is, or comment them out?

Thanks again for your help.

It doesn't matter which node you make the request to.

_cat/allocation is easiest.
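For example (host/port are assumptions), this shows the shard count per node; the excluded node should report 0 shards before you shut it down:

```shell
curl "localhost:9200/_cat/allocation?v"
```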

Replace the IP with "" and it'll remove it.
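Something like this, assuming the same host/port and setting name as above:

```shell
# Clearing the exclusion removes the filter:
curl -X PUT "localhost:9200/_cluster/settings" -H 'Content-Type: application/json' -d'
{
  "persistent": {
    "cluster.routing.allocation.exclude._ip": ""
  }
}'
```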


I wouldn't change that.

Thanks Mark.

That seemed to be working until the node I was planning to keep ran out of disk space, so I ended up removing the filter and letting it re-populate the other node.

To that point, maybe I’ve misunderstood how the Elastic cluster works. On each of my 2 nodes, I’ve got a little over 500 GB of data. Does that mean I have a total of 1 TB of Elastic data?

The assumption we had was that there was a total of 500 GB of Elastic data that was replicated across the 2 nodes, and that when they were combined, we’d end up with roughly 500 GB of data. It seems that may not be the case - or is there some process that hadn’t run to merge/dedupe the data because of the space issues?


If you have every index replicated, then your total store size will include both primary and replicas, yes.
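You can see this per index (host/port are assumptions): with one replica, store.size is roughly double pri.store.size, since it counts primary plus replica copies:

```shell
curl "localhost:9200/_cat/indices?v&h=index,pri,rep,store.size,pri.store.size"
```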

Did you remove the replicas before setting the filter?

No, I didn’t remove any replicas (didn’t realize that was needed).

Would that be doing something like this:
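Perhaps something along these lines (host/port are guesses on my part):

```shell
# Set replicas to 0 across all indices:
curl -X PUT "localhost:9200/_all/_settings" -H 'Content-Type: application/json' -d'
{
  "index": { "number_of_replicas": 0 }
}'
```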

In addition to relocating shards and eliminating replicas you probably need to manually remove one of the master eligible nodes from the voting configuration before removing it. If you just shut it down I suspect the cluster will go red and not function properly.
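A sketch of that step on 7.x, assuming the retiring node is named "node-2" (a placeholder):

```shell
# Remove the retiring node from the voting configuration before shutting it down:
curl -X POST "localhost:9200/_cluster/voting_config_exclusions?node_names=node-2"

# Once the node is permanently gone, clear the exclusion list:
curl -X DELETE "localhost:9200/_cluster/voting_config_exclusions?wait_for_removal=false"
```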


This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.