Elastic Cluster Architecture Best Practices

Hi all,

I have an upcoming project to set up a small cluster and thought I would use the community's help to validate the design scenario that I have in mind.

A little background about available resources for that project:

4 nodes, which differ in terms of resources (e.g. memory, disk space, and CPU).

I have two ideas on how to assign roles to the nodes:

Scenario 1:

Node     CPU   Disk     RAM     Role
Node 1   16    800 GB   70 GB   Logstash
Node 2   16    3 TB     50 GB   master, data, Kibana
Node 3   8     700 GB   50 GB   data
Node 4   4     500 GB   16 GB   master (dedicated)

Scenario 2:

Node     CPU   Disk     RAM     Role
Node 1   16    800 GB   70 GB   Logstash
Node 2   16    3 TB     50 GB   master, data, Kibana
Node 3   8     700 GB   50 GB   master, data
Node 4   4     500 GB   16 GB   master, data

About log ingestion volume:
I'll have 2 Logstash pipelines that will ingest at most around 30 GB of data per day, and I plan to set retention to 30 days and then delete.

And for Logstash, which is better: one config file for all pipelines or one per pipeline?
And which ES node(s) should Logstash's output point to?

I think the main issue is that your nodes are uneven.

Another question is, do you want resilience or not?

If you want resilience then you will need at least 3 master-eligible nodes and replica shards, and this is where an uneven cluster can be a problem.

Elasticsearch balances the shards across the data nodes and tries to keep an equal number of shards on each node; it also has some watermark protections based on disk usage.

Since you would have 3 data nodes, each with a different amount of disk space, the following scenarios would happen as your smallest node fills up:

  • At 85% disk usage on the smaller node, Elasticsearch will stop allocating replica shards to it, but will still allocate primary shards. This can impact the resilience of the cluster.
  • At 90% disk usage on the smaller node, Elasticsearch will try to relocate shards away from this node and allocate them on other nodes in the cluster. This can also impact the resilience of the cluster.
  • If the node keeps filling up and reaches 95% disk usage, Elasticsearch will enforce a read-only block on every index that has one or more shards allocated on this node. This is the worst-case scenario, as it can lead to data loss depending on how you are ingesting your data.
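
Those three thresholds correspond to the default disk-based shard allocation watermarks. As a sketch, these are the out-of-the-box values (they can be set in elasticsearch.yml or as dynamic cluster settings):

```
# Default disk watermark settings (these are the shipped defaults):
cluster.routing.allocation.disk.watermark.low: "85%"          # stop allocating new shards to the node
cluster.routing.allocation.disk.watermark.high: "90%"         # start relocating shards away from the node
cluster.routing.allocation.disk.watermark.flood_stage: "95%"  # read-only block on affected indices
```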

With an estimated volume of 30 GB/day and a planned retention of 30 days, you would need at least 900 GB of disk space, and only one of your nodes has that much. Also, to have resilience you need replicas: with 1 replica for each shard you will need 1.8 TB of disk space, and again only one node in your cluster has that much.

The fact that the RAM is also uneven can cause issues when querying data.

So with these requirements and resources you will probably not be able to have real resilience, which leads to the second option.

If you do not care about resilience and are OK with having just one master node (you need three for resilience; two makes no difference), then the planning is easier.

You could use the following configuration:

  • Node 1: Logstash + Kibana
  • Node 2: Elasticsearch with the role of data_warm, this would store older data and you would be able to have more than 30 days of retention.
  • Node 3: Elasticsearch with the role of data_hot, this would store newer data.
  • Node 4: Elasticsearch with the role of master.
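
As a sketch, the role assignment above would look like this in each node's elasticsearch.yml (assuming a recent Elasticsearch version that uses the node.roles setting):

```
# Node 2 (3 TB disk) -- warm data node, keeps older data
node.roles: [ data_warm ]

# Node 3 (700 GB disk) -- hot data node, receives new writes
node.roles: [ data_hot ]

# Node 4 -- dedicated master node
node.roles: [ master ]
```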

For this to work you would also need an Index Lifecycle Policy to move your data from the data_hot node to the data_warm node after some specific time.
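
A minimal ILM policy along those lines might look like the following (the policy name and the 7-day/30-day timings are placeholders to illustrate the shape, not recommendations; the migrate action moves shards from the hot tier to the warm tier):

```
PUT _ilm/policy/logs-policy
{
  "policy": {
    "phases": {
      "hot": {
        "actions": {
          "rollover": { "max_age": "1d" }
        }
      },
      "warm": {
        "min_age": "7d",
        "actions": {
          "migrate": {}
        }
      },
      "delete": {
        "min_age": "30d",
        "actions": { "delete": {} }
      }
    }
  }
}
```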


Thank you for clarifying that. What if I assigned three of the nodes the "master+data" role? Won't this achieve high availability? I understand from your explanation that uneven disk space would still be a problem, but what if I found a way for the disk on the smaller node not to reach 85% usage?

Also, is there a way to enforce the shard type on a specific node? For example, could node 4 hold only replicas?

Yes, that will achieve high availability for your master nodes; with 3 master-eligible nodes you can lose one of them and your cluster will still be up.

But the uneven disk space would still be a problem. When your smaller nodes start running out of space, your cluster may not be able to keep replicas for some indices, which means that if a node holding a primary shard has an issue, that index may be unavailable until the issue is fixed.

You can change the watermark values, for example increasing the defaults to use more of the disk, but the watermarks will still exist. If you manage to never reach them you will have no problem, but this would limit the space used on all nodes to the space available on your smallest node, so your 3 TB node would use less than 500 GB of space and the rest would sit unused.
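
For example, raising the watermarks via the cluster settings API would look like this (a sketch; the percentages are illustrative, not recommendations):

```
PUT _cluster/settings
{
  "persistent": {
    "cluster.routing.allocation.disk.watermark.low": "90%",
    "cluster.routing.allocation.disk.watermark.high": "95%",
    "cluster.routing.allocation.disk.watermark.flood_stage": "97%"
  }
}
```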

Also, with 1 replica, the 30 GB per day will need 60 GB of space in your cluster, so your retention would be less than 7 days in this case.

And if you do not have a separate disk for the Elasticsearch data, the space used by the operating system and everything else on the system also counts toward the disk-usage calculation.

No, this is not possible.


Thank you so much for making this clear. Now I have only one remaining question, regarding Logstash:

Assuming that I have two different Logstash pipelines:

What is the default behavior of Logstash when more than one Elasticsearch host is configured in the output? Does it load balance requests between the different hosts? If so, is there any way to make this node-specific, i.e. to direct logs from each pipeline to a specific Elasticsearch data node?

It will load balance requests between the nodes configured in the hosts setting of the elasticsearch output.

You can only direct logs to a specific node by configuring just one node in the output, but if that node goes offline for some reason, it can lead to data loss.

Also, the node you configure in the output of Logstash is only the node to which Logstash sends the index request; this does not mean the data will be written to that node. Elasticsearch can choose to write the data on another node, since it balances the shards evenly between the nodes in the cluster.
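
As a sketch, an elasticsearch output with multiple hosts looks like this (hostnames and the index pattern are placeholders); Logstash will spread bulk requests across the listed nodes:

```
output {
  elasticsearch {
    # Logstash load balances bulk requests across these nodes;
    # where the data is ultimately stored is decided by Elasticsearch.
    hosts => ["http://node2:9200", "http://node3:9200"]
    index => "logs-%{+YYYY.MM.dd}"
  }
}
```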

