Can we use more master nodes instead of 'coordinating only' nodes?

Hi,

We have an issue that, according to a blog post about bulk rejections, is supposed to be resolved by adding 'coordinating only' nodes.

But on AWS (it might be a special case, since we use an internal company system built on top of AWS for security reasons) it seems we can't add them, although we can add more master and data nodes. Would adding master nodes be as effective, or only slightly less effective? Would they take on any of the roles of the 'coordinating only' ones? What other reasonable solutions are there for this problem?

No, you should not add master-eligible nodes to solve a problem related to a lack of coordinating nodes. Instead, you should add coordinating nodes.

These docs recommend against using master-eligible nodes for coordinating client traffic:

While master nodes can also behave as coordinating nodes and route search and indexing requests from clients to data nodes, it is better not to use dedicated master nodes for this purpose.

These docs give advice on how many master-eligible nodes to have in your cluster:

However, it is good practice to limit the number of master-eligible nodes in the cluster to three. Master nodes do not scale like other node types since the cluster always elects just one of them as the master of the cluster. If there are too many master-eligible nodes then master elections may take a longer time to complete. In larger clusters, we recommend you configure some of your nodes as dedicated master-eligible nodes and avoid sending any client requests to these dedicated nodes. Your cluster may become unstable if the master-eligible nodes are overwhelmed with unnecessary extra work that could be handled by one of the other nodes.
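
For reference, on a self-managed cluster a coordinating-only node is just a node with every dedicated role switched off; with 6.x/7.x-style settings that is roughly the following in elasticsearch.yml (newer versions express the same thing with node.roles: [ ], and none of this is exposed by the AWS service):

node.master: false    # not master-eligible
node.data: false      # holds no shards
node.ingest: false    # runs no ingest pipelines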


I can't add 'coordinating only' nodes, so what else could I do?

There isn't really a substitute, sorry; you'll need to address whatever factors are blocking you from adding coordinating-only nodes. Hopefully the links to the docs I shared will help convince your organisation that this is something that needs to be done.


The blog post you linked to is about bulk rejections. If that is the problem you are looking to solve, adding coordinating-only nodes is not the only solution, nor necessarily the best one. It would help if you described your use case and how you are sharding your data. Are you using time-based indices? How many indices and shards are you actively indexing into? How many indices can a single bulk request target?

I have just a single index that I am writing to. There are 8 AWS m4.2xlarge nodes running the code in parallel, and each node writes to an AWS Elasticsearch cluster that has 3 master nodes and 3 data nodes. The index has 18 shards. The data file I am writing into ES is ~100 GB. I am not using time-based indices (I'm not sure what they are). Would setting es.batch.size.entries to a lower number help? Currently it is set to 5000.
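
For context, the write goes through elasticsearch-hadoop from Spark, roughly like this (a simplified sketch only; the endpoint, input path, and index name below are placeholders, not our real ones):

import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().appName("es-bulk-load").getOrCreate()

// read the ~100 GB input (placeholder path)
val df = spark.read.json("s3://some-bucket/input/")

// write everything into the single index via the es-hadoop Spark SQL data source
df.write
  .format("org.elasticsearch.spark.sql")
  .option("es.nodes", "our-aws-es-endpoint")      // placeholder endpoint
  .option("es.port", "443")
  .option("es.net.ssl", "true")
  .option("es.nodes.wan.only", "true")            // we go through the single AWS endpoint
  .option("es.batch.size.entries", "5000")        // the setting in question
  .mode("append")
  .save("myindex/doc")                            // placeholder index/type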

How many concurrent threads/connections do you have indexing data into Elasticsearch?

Based on your description I am not sure adding coordinating-only nodes would help much.

As I said, we are using 8 AWS m4.2xlarge nodes. There is no explicit setting for the number of cores per executor. m4.2xlarge has 8 vCPUs and 32 GB of memory. The following parameters might be relevant:

--conf spark.default.parallelism=1
--conf spark.yarn.executor.memoryOverhead=2g
--conf spark.executor.memory=15g

So, based on the memory setup, I would expect 2 cores running per executor, for a total of 16 cores. I am not sure how to figure out the precise number of threads, but that is what I know for now.
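
If I understand correctly, each Spark task that writes through es-hadoop keeps its own connection to the cluster, so the number of concurrent indexing connections should be roughly the number of tasks running at once. If that matters, one thing we could try is capping it explicitly by repartitioning just before the write, along these lines (just a sketch; the 8 is an arbitrary example value):

// limit the write stage to 8 tasks, and hence roughly 8 concurrent bulk writers
val capped = df.repartition(8)
// ...then write `capped` with the same es.* options as before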

We tried halving the number of AWS nodes doing the writes to ES, from 8 to 4, but it didn't help. We have also increased the CPU and RAM of the ES nodes, but that didn't help either (currently we have 128 GB RAM and 16 CPU threads per node).

What kind and size of storage do you have? Might storage be the bottleneck?

Currently what we see in the Elasticsearch cluster health is that CPU and memory go to 100%, and then the error follows: EsHadoopNoNodesLeftException ("all nodes failed").

We use 250 GB EBS volumes on each AWS node. Not sure whether it's storage or not, but if the ES nodes' utilization goes to 100%, perhaps it's because we are writing too much, too fast? We have now reduced es.batch.size.entries from 5000 to 1000 to see whether it helps; it is running right now.
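
Concretely, that is just the one option on the write; I gather es-hadoop also has retry settings for rejected bulk requests that we could raise if needed (the values below are only examples, not what we currently use):

df.write
  .format("org.elasticsearch.spark.sql")
  .option("es.batch.size.entries", "1000")        // reduced from 5000
  .option("es.batch.write.retry.count", "6")      // example only: retry rejected bulks more times
  .option("es.batch.write.retry.wait", "60s")     // example only: wait longer between retries
  .mode("append")
  .save("myindex/doc")                            // placeholder index/type, as before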

If you are using gp2 EBS, the available IOPS is proportional to the volume size (around 750 for your volumes?), and such small volumes may not be able to support very high indexing rates. If you have statistics on iowait and disk I/O, I would recommend looking at those. Otherwise I would probably try switching to EBS with a higher amount of provisioned IOPS and see if that makes any difference. Another option might be simply to increase the size of your EBS volumes, as that also gives you more IOPS.
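
If I remember the gp2 numbers correctly, the baseline is about 3 IOPS per GiB (with a minimum and a burst allowance for small volumes), so for example:

250 GiB x 3 IOPS/GiB = 750 IOPS baseline
500 GiB x 3 IOPS/GiB = 1500 IOPS baseline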

We tried switching from gp2 to io1 and increasing the EBS size on each of the worker nodes from 250 GB to 900 GB (the master stayed the same, at 100 GB). We ran it with 8 EMR nodes and found it worked a bit worse than before with gp2: with gp2 it ran for 5 hours and generated an index of about 1 TB before failing, while with io1 it ran for 3 hours and failed with an index of about 700 GB, so it failed a bit sooner. Right now we are trying to switch to i3.4xlarge instances (and quadruple the EBS) to see whether that helps. We also noticed that with 4 EMR nodes (all other parameters being the same) we get half or even less of the progress that we get with 8 EMR nodes, so after i3.4xlarge we are thinking of trying 16 EMR nodes, even though we don't really understand why that is.


We tried i3.4xlarge, but it failed with the same "no nodes left" exception. Then we tried raising the number of EMR nodes from 8 to 16, and that gave us the bulk rejection error. So we are stuck and not sure what to try next.

I believe the AWS Elasticsearch service is running a fork with their own plugins, which makes it hard to help. If you could spin up a similar cluster with the standard distribution on similar EC2 nodes, it would be interesting to see whether you still hit the same issue.
