Best practices: CPU core count vs. no. active shards per node

I've been reading Amazon's ES best practices guide here. Crunching the numbers, they are suggesting 2.56 cores per active shard.

This reference recommends a 1:1 ratio.

What does Elastic say about this?

If I have, say, 10 shards on a node with only 2 cores, is this going to cause a major issue with thread contention?

It depends a lot on the use case. Is it search heavy or indexing heavy? How large a portion of the data is typically queried? What are the query latency requirements? How many concurrent queries need to be supported?

I can't yet give exact answers beyond the fact that it's search-heavy and that the query results will be a tiny percentage of the data.

What type of data? How are you querying it? Are you using routing, nested documents, and/or parent-child relationships?

Flat documents, no nesting or parent-child relationships. Data is arrays of integers (no text). Some fields are queried for equality, some for range. Routing is based on a custom number in each document.

Most queries would run against 4-6 of the fields.
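To make that concrete, here is a rough sketch of the kind of query I'd be running. The field names, values and endpoint are all placeholders, not my real schema:

```python
import requests

# Placeholder sketch of a typical query: equality filters on a couple of integer
# fields plus one range filter. Field names, values and endpoint are made up.
query = {
    "query": {
        "bool": {
            "filter": [
                {"term": {"field_a": 42}},                          # equality on an integer field
                {"term": {"field_b": 7}},                           # equality on another integer field
                {"range": {"field_c": {"gte": 100, "lte": 200}}},   # range on a third field
            ]
        }
    },
    "size": 100,
}

resp = requests.post("http://localhost:9200/my-index/_search", json=query)
print(resp.json()["hits"]["total"])
```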

@Christian_Dahlqvist, in your blog where you state a max of 600 shards per node, what machine specs are you working on?

That blog post assumes an index-heavy logging use case with time-based indices, so probably does not apply to your use case.

What are the query latency requirements? How many concurrent queries need to be supported? Are you using time-based indices?

@Christian_Dahlqvist, I'd like to keep latency to < 500 ms. Searches could scale to 1k/sec. I'm more concerned with my initial design. 1 node, 2 replicas, approx. 100M documents, say 50 searches per second.

I have 4 indices. I'm considering giving them 4 shards each, which would allow me to go up to 4 nodes before needing to reindex. 8 or 16 shards each would be better for scalability.
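As a rough sketch, the initial layout per index would look something like this (index name and endpoint are placeholders):

```python
import requests

# Placeholder sketch: one of the 4 indices with 4 primary shards and 2 replicas,
# matching the initial design above. Index name and endpoint are assumptions.
settings = {
    "settings": {
        "number_of_shards": 4,
        "number_of_replicas": 2,
    }
}

resp = requests.put("http://localhost:9200/index-one", json=settings)
print(resp.json())
```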

Your hosting price calculator doesn't give the number of vCPUs @ 15 GB RAM, but I'm assuming it's 2. That would be 2 CPUs for 16 shards, initially.

Do you think this would work?

If you are using routing for all searches, e.g. by customer number, it often makes sense to increase the number of shards per index as that results in less data being searched per query (as only one shard per index is searched).
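As an illustrative sketch (index name, customer number and endpoint are placeholders), indexing and searching with a routing value means only the single shard holding that routing value is involved in the search:

```python
import requests

BASE = "http://localhost:9200"  # placeholder endpoint

# Index a document with a routing value (e.g. the customer number) so it lands on one specific shard.
doc = {"customer_id": 12345, "values": [1, 2, 3]}
requests.put(f"{BASE}/my-index/_doc/1", params={"routing": "12345"}, json=doc)

# Search with the same routing value: only the shard holding that customer's data is queried.
query = {"query": {"term": {"customer_id": 12345}}}
resp = requests.post(f"{BASE}/my-index/_search", params={"routing": "12345"}, json=query)
print(resp.json()["hits"]["total"])
```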

Search latency typically depends on the number of shards searched, the data volume searched, shard size, the mappings, and naturally the type of query. Once you have a cluster that contains all your data and can serve queries within the agreed SLA, you can increase query throughput by adding nodes and replicas.

Let's assume you have a relatively small data set and load your 4 indices with e.g. 5 primary shards and 1 replica onto a 3-node cluster. During benchmarking you gradually increase the number of concurrent queries until the SLA is no longer met. If we assume you managed to serve X concurrent queries using this cluster and that you need to support 3X queries per second, you can scale out to 9 nodes and simply set the number of replicas to 5 (6 copies of each shard instead of 2) for a cluster that is three times as big; the replica change itself is just a settings update (rough sketch below). This type of sizing process is described in this Elastic{ON} talk. Does that make sense?
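A minimal sketch of that replica change, assuming an index named my-index on a local cluster:

```python
import requests

# Increase the number of replicas after adding nodes so the new nodes hold
# additional shard copies. Index name and endpoint are assumptions.
resp = requests.put(
    "http://localhost:9200/my-index/_settings",
    json={"index": {"number_of_replicas": 5}},  # 6 copies of each shard on the 9-node cluster
)
print(resp.json())
```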

@Christian_Dahlqvist I fully understand what you are saying and had designed my cluster as you describe. I only "hit a wall" when I saw the AWS ES guides stating 1.5 to 2.56 cores per shard. From what you're saying, the system will likely be fine with, say, 8 shards per core.

There is, unfortunately, nothing I can find in the documentation that gives a guide on this. However, given the configuration options on your hosting, systems likely start with only 2 vCPUs, so I am assuming all will be fine with around 20 shards per node.

I reckon the only way to really know is to load the data in various shard configurations and see which one is the fastest :slight_smile:

As this depends a lot on the use case, there is no way to know for sure apart from testing and benchmarking. Any set of general guidelines will make assumptions about the use case and access patterns that may not apply to you. This is especially true for search use cases, which can be very non-linear in nature. For common use cases like log analytics, where characteristics vary less, it is generally easier to provide general guidelines that have a good chance of working.

If you are using routing, each query will hit between 1 and 4 shards. Each shard requires one core for processing, so you can basically only process two shard-queries in parallel per host with 2 CPU cores. Whether this will allow you to meet your SLA and query throughput targets probably depends on how much of your data can be cached in memory, as disk I/O is often a limiting factor and small EBS volumes can be quite constrained.

@Christian_Dahlqvist I fully understand. For my reference (since I will also be doing logging), how many cores and how much RAM were on the 600-shard-per-node machine you reference in your blog post?

The blog post gives shard count as a function of heap size, so 600 shards corresponds to 30 GB of heap. As 50% of RAM should be assigned to heap, that means RAM was between 60 and 64 GB. The number of cores does not affect this. Note that this is a maximum and not necessarily a recommended level.

For a properly tuned system I would expect to be able to hold a lot of data with far fewer shards than that. In this webinar we uploaded close to 20 TB to a cold node, and as the average shard size was around 50 GB, that only used about 400 shards per node.

@Christian_Dahlqvist I'm deducing from your replies that you don't particularly worry about the core count. I am starting to think AWS's ES post might have some bias to it.

I assume that machine has upwards of 32 cores?

If your load is CPU bound, the core count will have an impact on performance. How much data and how many shards a node can hold is generally limited by heap. A typical AWS node used for cold storage, where shard counts generally are the highest, typically has 8 cores and 60-64 GB RAM.

OK, so for logging, that's about 75 shards per core (600:8). Thanks much!

The number of CPU cores relates to the expected workload, not necessarily the number of shards.

In that case, I could just as easily increase the total shards from 20 to 60. The workload would still be the same.

I still get the point of the AWS ES blog, though. If enough queries came in to hit all 60 shards at once and the server has 2 cores, that's 30 heavy threads per core.

The threadpool serving queries is fixed in size, so threads are not spawned per query. Instead, requests are queued up.
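You can see this for yourself with the _cat thread pool API (endpoint assumed to be a local cluster): size is the fixed number of threads, queue shows requests waiting, and rejected counts requests dropped once the queue is full.

```python
import requests

# Inspect the search thread pool per node. Endpoint is an assumed local cluster.
resp = requests.get(
    "http://localhost:9200/_cat/thread_pool/search",
    params={"v": "true", "h": "node_name,name,size,active,queue,rejected"},
)
print(resp.text)
```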

Ahhh... THAT is the datum I needed. In that case, AWS's post doesn't make much sense. I could have any number of shards so long as the RAM is sufficient (1 GB of heap, i.e. 2 GB of RAM, per 20 shards). The core count is related to workload, not shards. Thank you.
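For my own reference, here is a rough sketch of how I plan to keep an eye on that heap-to-shard ratio (endpoint assumed to be a local cluster; standard _cat APIs only):

```python
import requests

BASE = "http://localhost:9200"  # assumed local cluster

# Heap per node and shards per node, to sanity-check the ~20 shards per GB of heap rule of thumb.
heap = requests.get(f"{BASE}/_cat/nodes", params={"format": "json", "h": "name,heap.max"}).json()
shards = requests.get(f"{BASE}/_cat/allocation", params={"format": "json", "h": "node,shards"}).json()

shards_per_node = {row["node"]: row["shards"] for row in shards}
for node in heap:
    print(f"{node['name']}: heap.max={node['heap.max']}, shards={shards_per_node.get(node['name'])}")
```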
