Design indexes with big data

I am creating an Elasticsearch cluster for a 2.5TB dataset.

Questions:

  1. What is the best way to design shards for best performance?

  2. The only hard limit is about 2.1 billion documents per shard (a Lucene core limitation), while 20-40GB is a soft limit. Is that still the case in newer versions of Elasticsearch?

  3. What would be the optimal hardware configuration for this dataset? Is there any benchmarking provided by Elasticsearch?

The optimal shard size depends a lot on your use case and latency requirements. Each query fans out across the involved shards in parallel, but each shard is processed in a single thread, which means minimum query latency tends to increase with shard size. Exactly how much depends on your data and hardware, as well as how you query it. This is described in this old Elastic{ON} talk. Even though it is quicker to query a small shard than a big one, you generally want shards to be as large as possible, as splitting the data into a lot of very small shards can give very bad performance because there will be a lot of tasks to perform.
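To see how your existing shard sizes line up with that guidance, you can list them with the cat shards API. This is a minimal sketch using Python and the `requests` library; it assumes an unsecured cluster reachable at localhost:9200, so adjust the host and authentication for your environment.

```python
import requests

# List every shard with its on-disk size in GB (bytes=gb) so the sizes can be
# compared against the 20-40GB soft guideline discussed above.
resp = requests.get(
    "http://localhost:9200/_cat/shards",
    params={"format": "json", "bytes": "gb", "h": "index,shard,prirep,store,node"},
)
resp.raise_for_status()

for shard in resp.json():
    # 'store' can be null for unassigned shards.
    size = shard.get("store") or "?"
    print(f"{shard['index']} shard {shard['shard']} ({shard['prirep']}): {size}gb on {shard.get('node')}")
```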

The recommendation is therefore to benchmark based on realistic data and query types and volumes to get the right settings for your use case and hardware. Indexing also competes for the same resources, so make sure you always benchmark using a realistic combined load.
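Rally is the right tool for that, but just to illustrate the idea of measuring query latency under concurrency, here is a toy sketch in Python. The host, index name (`my-index`) and query are placeholders, and a realistic benchmark should apply the indexing load at the same time.

```python
import time
from concurrent.futures import ThreadPoolExecutor

import requests

SEARCH_URL = "http://localhost:9200/my-index/_search"  # placeholder index
QUERY = {"query": {"match": {"message": "error"}}, "size": 10}  # placeholder query

def timed_search(_):
    # Run one search and return its latency in milliseconds.
    start = time.perf_counter()
    requests.post(SEARCH_URL, json=QUERY).raise_for_status()
    return (time.perf_counter() - start) * 1000

# 100 searches, 10 at a time, to get a rough latency distribution.
with ThreadPoolExecutor(max_workers=10) as pool:
    latencies = sorted(pool.map(timed_search, range(100)))

print(f"p50={latencies[49]:.0f}ms p99={latencies[98]:.0f}ms")
```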

Thanks for your valuable comment. Can I find Elastic's hardware benchmarking somewhere to get an insight into how to choose hardware for larger datasets?

@Christian_Dahlqvist

Let's say I go for a shard size of 20GB (the maximum). That works out to 20GB * 100 shards = 2TB.
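As a quick check of that arithmetic (a trivial sketch; the 2TB figure below rounds the 2.5TB dataset down, as in the calculation above):

```python
import math

dataset_gb = 2000        # ~2TB, as used in the calculation above
target_shard_gb = 20     # chosen maximum shard size

primary_shards = math.ceil(dataset_gb / target_shard_gb)
print(f"{primary_shards} primary shards of up to {target_shard_gb}GB each")  # 100
```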

How many indices and nodes, and what CPU/RAM configuration, would be ideal for 100 shards?

How many nodes, with what CPU/RAM configuration, can give us good search performance for a load of 10,000 searches a day?

You will need to run benchmarks based on your data and queries. Have a look at the following resources:

https://www.elastic.co/webinars/using-rally-to-get-your-elasticsearch-cluster-size-right

https://www.elastic.co/elasticon/conf/2018/sf/the-seven-deadly-sins-of-elasticsearch-benchmarking


@Christian_Dahlqvist, A good rule-of-thumb is to keep the number of shards per node below 20 per GB heap. A node with a 30GB heap should, therefore, have a maximum of 600 shards. This will generally help the cluster stay in good health.

What this means is that if I spin up a 16C/64GB machine as one node and give 30GB to the heap, I can put a maximum of 600 shards on it.

In our case, we just need 100 shards.

Node 1: 16C/64GB: Master
Node 2: 16C/64GB: Data node 1, 25 shards = 25 * 20 = 500GB
Node 3: 16C/64GB: Data node 2, 25 shards = 25 * 20 = 500GB
Node 4: 16C/64GB: Data node 3, 25 shards = 25 * 20 = 500GB
Node 5: 16C/64GB: Data node 4, 25 shards = 25 * 20 = 500GB

So, technically 100 shards with a size of 20 GB each on 4 data nodes and 1 master node.
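As a sanity check of that layout against the rule of thumb quoted above (a back-of-the-envelope sketch, not a sizing tool; replicas are not counted):

```python
heap_gb_per_node = 30
max_shards_per_node = 20 * heap_gb_per_node   # rule of thumb: 600 shards per node

data_nodes = 4
total_primary_shards = 100
shard_size_gb = 20

shards_per_node = total_primary_shards / data_nodes    # 25
data_per_node_gb = shards_per_node * shard_size_gb     # 500GB of primary data

# Note: replicas would increase both the shard count and the data per node.
assert shards_per_node <= max_shards_per_node
print(f"{shards_per_node:.0f} shards/node (limit {max_shards_per_node}), "
      f"~{data_per_node_gb:.0f}GB of primary data per node")
```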

Question

  1. Is this design feasible? If yes, how many indices would be optimal for these 100 shards with this workload?

You have not told us anything about your use case, which is what ultimately will drive the answers to your questions.

What is the ratio between indexing and querying for your use case?

What type of queries do you intend to run? Do these target parts of or the full data set? What are the latency requirements?

How many concurrent queries do you need to support?

Another important consideration is what type of storage you are using as storage performance often is the limiting factor.

@Christian_Dahlqvist, below are the details you asked for regarding our use case.

  1. In our use case the ratio between indexing and querying is 30/70.
  2. We mostly run queries on the full dataset.
  3. We need to support 1000 concurrent queries.
  4. We are using NVMe SSD storage.

Are you running searches and/or aggregations? What are your latency requirements?

The blog post you linked to assumes a logging use case with limited querying, so it may not apply, as your use case is quite different.

Supporting 1000 concurrent queries that all hit the full dataset will put your cluster under serious load. I would recommend setting up 3 dedicated master nodes, which can be smaller than the data nodes. With respect to data nodes, I would expect you to need a considerably larger cluster than the one you outlined, as you will either need to increase the number of replicas to handle the query throughput, or hold quite limited amounts of data per node to ensure you benefit from caching.
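For reference, increasing the replica count of an existing index is a single settings call. This is a minimal sketch with a placeholder host, index name and replica count; dedicated master nodes, by contrast, are configured per node (for example via `node.roles` in `elasticsearch.yml` on recent versions), not through this API.

```python
import requests

# Raise the number of replicas so more nodes can serve copies of each shard.
resp = requests.put(
    "http://localhost:9200/my-index/_settings",   # placeholder index name
    json={"index": {"number_of_replicas": 2}},    # placeholder replica count
)
resp.raise_for_status()
print(resp.json())  # {'acknowledged': True} on success
```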


We mostly run searches, with a latency requirement of 400ms to 700ms.

I recommend you look at the links I provided. I do not think anyone can give you the answers you are looking for, so you need to test and benchmark your use case.

@Christian_Dahlqvist, thanks for your help, I really appreciate it. Could you please help me with another query?

I have an index (200GB) with 1 shard on an EC2 instance with 8 vCPUs and 32GB RAM. The search query time with this configuration is 5 seconds. Suppose I upgrade the instance to 16 vCPUs and 64GB RAM with the same shard configuration.

Will Elasticsearch perform better?

Each shard included in a query is processed in a single thread, so if you only have a single primary shard you will not be able to make use of the threads you already have available. Increasing CPU should therefore not improve the performance of an individual query, although you may be able to handle more concurrent queries.

Adding RAM might allow the OS to cache more efficiently, which could improve performance.

As your data set is larger than the OS page cache, it is likely that you are limited by disk I/O, so that is the first area I would check.

I would recommend splitting the index into a larger number of primary shards and ensuring you have the fastest storage possible.
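One way to do that is the split index API (reindexing into a new index created with more primary shards is an alternative). A hedged sketch with placeholder host and index names and an illustrative shard count; note that the source index must be blocked for writes first, and the target shard count must be a multiple of the source's.

```python
import requests

ES = "http://localhost:9200"

# 1. Block writes on the source index so it can be split.
requests.put(
    f"{ES}/my-index/_settings",
    json={"index.blocks.write": True},
).raise_for_status()

# 2. Split it into a new index with 4 primary shards.
requests.post(
    f"{ES}/my-index/_split/my-index-split",
    json={"settings": {"index.number_of_shards": 4}},
).raise_for_status()
```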


@Christian_Dahlqvist, thanks for your super fast reply! Could you please tell me what the impact is of the number of documents in a shard versus the storage size of the shard?

Let me simplify it.

  1. A shard with 20M documents and a size of 200GB.
  2. A shard with 5M documents and a size of 200GB.

How will Elasticsearch behave in this case?

I do not understand the question. I would expect the comparison to be between an index with 1 primary shard, 20M documents and 200GB in size, and another with 4 primary shards, 5M documents per shard and roughly the same total size.

The result of the comparison will depend on what is limiting performance. If it is CPU I would expect a significant improvement but if it is something else the difference may be small.

You will need to test/benchmark to find out.
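For example, such a comparison could be set up by creating two otherwise identical indices that differ only in primary shard count, indexing the same data into both, and comparing query latencies. A minimal sketch with placeholder host and index names:

```python
import requests

ES = "http://localhost:9200"

# One index with a single primary shard, one with four, replicas disabled
# so the comparison only measures the effect of the primary shard count.
for name, shards in [("test-1-shard", 1), ("test-4-shards", 4)]:
    resp = requests.put(
        f"{ES}/{name}",
        json={"settings": {"index": {"number_of_shards": shards,
                                     "number_of_replicas": 0}}},
    )
    resp.raise_for_status()
```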

