Scaling Elastic on Azure

I recently deployed an Elastic cluster to Azure using the template in the marketplace, but we need to add a couple more data nodes to the cluster.

Is there an easy way to do this? I attempted something similar on a previous cluster, though with a client node: I deployed a VM into the resource group, installed Elasticsearch, and copied over the .yml config, but it didn't work out for me.

Thanks in advance!

I have recently been exploring this approach as well. Did you update the elasticsearch.yml file on each of the nodes to include the newly added node?

When you start the elasticsearch service on the newly added node, do you see an exception in the log?

I did not update the other nodes, no. Our current setup is 5 data nodes, 3 dedicated masters, kibana, and 3 client nodes. I checked the yml file on the master and it has no references to the data nodes, just the masters.

I just finished setting up a new one. I used the same version that the other data nodes are running, and I copied the .yml file from one of the existing data nodes, modifying only the node name to reflect the name of the new server.
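Roughly, the copy-and-rename step amounts to something like this (a sketch only; the host name is hypothetical and the paths assume a standard package install):

```shell
# Pull the config from an existing data node (hypothetical host name)
scp existing-data-node:/etc/elasticsearch/elasticsearch.yml /tmp/elasticsearch.yml

# Point node.name at this VM instead of the node it was copied from
sudo sed -i "s/^node\.name:.*/node.name: $(hostname)/" /tmp/elasticsearch.yml

sudo mv /tmp/elasticsearch.yml /etc/elasticsearch/elasticsearch.yml
sudo systemctl restart elasticsearch
```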

After starting it up, I got some errors in the cluster log. First, it failed to send the ping because the handshake failed with one of the master nodes; the log said it was missing the authentication token.

I did some internet searching and decided it was because I didn't have X-Pack installed on the new node, so I installed the appropriate version to match what was already there and tried starting it again. This time, the service started up fine and stayed running, but the cluster log says that the monitoring execution failed, with an exception when closing the export bulk.

That's where I'm at right now.

The Azure Resource Manager (ARM) template that the Elasticsearch Azure Marketplace offering uses deploys in incremental deployment mode, meaning:

> Resource Manager leaves unchanged resources that exist in the resource group but are not specified in the template.
>
> If the resource already exists in the resource group and its settings are unchanged, the operation results in no change. If you change the settings for a resource, the resource is provisioned with those new settings.

It's possible to scale a cluster up by adding more data or client nodes: deploy the template to the same resource group with exactly the same parameters as the initial deployment, changing only the parameters for the number of data nodes or client nodes.
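With the Azure CLI, a redeployment along these lines would look roughly like the following. This is a sketch: the resource group name, template URI, and parameters file are illustrative, and you should reuse the exact parameters from your original deployment, bumping only the data node count (the template's parameter for this is, I believe, `vmDataNodeCount`, but verify against your template).

```shell
# Re-run the ARM template against the SAME resource group.
# Incremental mode (the default) leaves existing resources untouched.
az group deployment create \
  --resource-group my-elastic-rg \
  --template-uri https://example.com/elasticsearch/mainTemplate.json \
  --parameters @original-parameters.json \
  --parameters vmDataNodeCount=7
```

Later `--parameters` arguments override earlier ones, so the original parameters file can be passed as-is with only the node count changed.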

Some words of caution:

  1. Use this only if you have deployed dedicated master nodes. Whilst it should also work with master-eligible nodes, there is more that can potentially go wrong in that scenario.
  2. Try it first in a staging environment.
  3. Be sure to snapshot your data before trying it in production.
  4. Have some data redundancy with replicas.
  5. I would not recommend this as a robust long-term scaling solution; I would use it only for scaling up.
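For point 3 above, a snapshot can be taken with the snapshot API before redeploying. The repository and snapshot names below are illustrative, and an Azure repository assumes the `repository-azure` plugin is installed and configured:

```shell
# Register an Azure snapshot repository (assumes repository-azure is set up)
curl -X PUT 'http://localhost:9200/_snapshot/my_backup' \
  -H 'Content-Type: application/json' \
  -d '{"type": "azure", "settings": {"container": "es-snapshots"}}'

# Snapshot all indices and block until it completes
curl -X PUT 'http://localhost:9200/_snapshot/my_backup/pre_scale_1?wait_for_completion=true'
```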

I just deployed a cluster with dedicated master nodes and one data node, then ran a deployment with the same parameters, changing only the number of data nodes to 2. The cluster successfully deployed and scaled. I did, however, need to start the Kibana service by SSHing into the Kibana VM and running

sudo service kibana start

and then waiting about a minute for it to come back up.
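Once the deployment finishes, you can confirm the new node joined with the cat nodes and cluster health APIs (endpoint and credentials depend on your setup; with X-Pack security enabled you will need to authenticate):

```shell
# List nodes with their roles; the new data node should appear here
curl -u elastic 'http://localhost:9200/_cat/nodes?v&h=name,node.role,master'

# Cluster health should return to green once shards have relocated
curl -u elastic 'http://localhost:9200/_cluster/health?pretty'
```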

I ended up getting it figured out yesterday after a few more tries. The last error I was getting was because the node was missing a plugin that the deployment uses. I pretty much just deployed a VM into the existing resource group, installed Elasticsearch 5.6.5 and X-Pack 5.6.5, and then the other plugin that was needed.

Thanks for that information though forloop! I appreciate your input and help.

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.