Elasticsearch Cluster Performance Tuning: Help Required

Hi All,
I have a 3-node Elasticsearch cluster; each node has a 1 TB hard disk, 15 GB RAM, and 4 CPU cores.
The Elasticsearch version is 6.3.0.
At the moment I have 3,304,624,313 documents using 2.1 TB of disk space.
This data was collected within a month.

The problem is:
A search on the cluster takes over 5 minutes.

  1. In order to optimize search performance, what can I do?

  2. What is the maximum data size a 3-node cluster can handle?

  3. Is it OK to split the indices vertically so that small fields are grouped in one index? Will it help improve performance?

To answer 1) and 2), I need to know how many indices and shards your cluster contains, and how much heap memory is allocated to Elasticsearch in total.

I am not sure I understand your question 3). Can you explain it with an example?

What is the output of the cluster health API?
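You can pull those numbers with the standard cat and cluster APIs, for example (localhost:9200 is a placeholder for your cluster address):

```bash
# Indices with their primary/replica counts, doc counts and sizes
curl -s 'localhost:9200/_cat/indices?v&h=index,pri,rep,docs.count,store.size'

# One line per shard
curl -s 'localhost:9200/_cat/shards?v'

# Cluster status and shard totals
curl -s 'localhost:9200/_cluster/health?pretty'

# Configured and used heap per node
curl -s 'localhost:9200/_cat/nodes?v&h=name,heap.max,heap.percent'
```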

Thank you Christian for the quick response.
Please find the cluster health status below:
(screenshot of the cluster health API output)

Thanks Junaid for the quick response.

Please find the requested information below.

Regarding my 3rd question: suppose my document has field1 and field2, where field2 is a long text field. Is it OK to split the index into two indices, where index1 contains only field1 and index2 contains only field2? Will it improve my search performance, since I will be searching on fewer fields?
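For illustration, what I mean is roughly the following (the field types and the source index name are just placeholders):

```bash
# Two narrow indices, one per field group (ES 6.x mappings need a type name, here _doc)
curl -s -XPUT 'localhost:9200/index1' -H 'Content-Type: application/json' -d '{
  "mappings": { "_doc": { "properties": { "field1": { "type": "keyword" } } } }
}'
curl -s -XPUT 'localhost:9200/index2' -H 'Content-Type: application/json' -d '{
  "mappings": { "_doc": { "properties": { "field2": { "type": "text" } } } }
}'

# Copy only the relevant field into each new index
curl -s -XPOST 'localhost:9200/_reindex' -H 'Content-Type: application/json' -d '{
  "source": { "index": "original_index", "_source": ["field1"] },
  "dest":   { "index": "index1" }
}'
```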

That gives an average shard size of just over 2GB, which is a bit on the small side.

Have you looked at monitoring to see what is limiting performance? Is it CPU or perhaps slow storage resulting in significant iowait?

Hi Christian,
I'm using Google Cloud's basic hard disks; hope they can do this job well :slight_smile:

When a search is made, CPU usage reaches almost 400%.
I have allocated 8 GB of the 15 GB of RAM to Elasticsearch, but heap usage doesn't pass 60%.
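For reference, the per-node heap and CPU figures can also be read from the cat nodes API (the endpoint is a placeholder):

```bash
# Per-node heap usage, OS memory usage, CPU and load average
curl -s 'localhost:9200/_cat/nodes?v&h=name,heap.percent,ram.percent,cpu,load_1m'
```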

Regarding shard size, is it worth increasing the shard size (to ~40 GB) by reindexing?

Elasticsearch is generally very I/O intensive, so having fast storage is very important. Run iostat -x to see how the storage is performing. I would not be surprised to see a lot of iowait, indicating that this is the bottleneck. If that is confirmed, I would recommend upgrading to more performant storage.
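For example, something like this (the interval and report count are arbitrary):

```bash
# Extended device statistics, sampled every 5 seconds, 3 reports;
# watch %iowait in the CPU summary and await / %util per device
iostat -x 5 3
```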

Please find the iostat -x output

That doesn't look too bad, assuming it was taken while a query was running. Then you may be limited by CPU, so you may need to scale the cluster out or up.

@Christian_Dahlqvist Thanks for the support. I will do a test after scaling.

Finally, I am planning to have 3 times more data in the future, as 3 months of retention is required (the data we are looking at now covers one month).
If I am to stick to the same hardware spec, will the following make any performance improvement?

  1. Splitting the index, putting field1 and field2 in one index and field3 and field4 in another index. Our search queries are mostly based on a single field which holds a JSON payload.
  2. Increasing the shard size to a larger value and reducing the number of shards, as at the moment I have 888 shards (see the sketch after this list).
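For point 2, what I have in mind is roughly reindexing into a new index created with a lower primary shard count (the index names and shard count below are just placeholders):

```bash
# New index with fewer primary shards, sized so each shard ends up in the tens of GB
curl -s -XPUT 'localhost:9200/logs-consolidated' -H 'Content-Type: application/json' -d '{
  "settings": { "index.number_of_shards": 10, "index.number_of_replicas": 1 }
}'

# Copy the documents over; the old index can be deleted once the reindex completes
curl -s -XPOST 'localhost:9200/_reindex?wait_for_completion=false' -H 'Content-Type: application/json' -d '{
  "source": { "index": "logs-old" },
  "dest":   { "index": "logs-consolidated" }
}'
```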

Btw hope this is you :slight_smile:

It is indeed.

In order to optimize search performance, what can I do?

You've got a ~2 TB dataset and ~50 GB of RAM. This means lots of I/O (the dataset does not fit in RAM). Two options for increasing performance without other changes:

  • More RAM (= more data in memory and/or file system caches). RAM is way, way faster than anything else, so everything coming from RAM is a big plus.
  • Faster disks (e.g. SSD, SSDs in RAID). If the total amount of RAM is < 2 TB, significant disk I/O is needed. Spinning disks = ~125 MB/s, a single SATA SSD = ~500 MB/s, SSD RAID sets or PCIe SSDs = way, way faster. That way everything NOT coming from RAM can still load reasonably fast.

Shard layout (changing it requires reindexing):

  • According to other posts (I do not know the reason): use a max shard size of ~50 GB.
  • With the rule above in mind: keep as close to 1 shard per CPU core as you can (1 shard = 1 process).
  • Rough estimate in your case: ~2 TB / ~50 GB per shard = ~40 shards is optimal (if the dataset will not grow); see the sketch after this list.
  • Since you've got 12 CPU cores in total, this is not the most efficient setup, so more cores will help as well.
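A minimal sketch of applying a lower shard count to newly created indices via an index template, assuming time-based indices matching a pattern like logs-* (the template name, pattern and per-index shard count are assumptions):

```bash
# New monthly indices get fewer, larger primary shards
curl -s -XPUT 'localhost:9200/_template/logs_shards' -H 'Content-Type: application/json' -d '{
  "index_patterns": ["logs-*"],
  "settings": { "index.number_of_shards": 4, "index.number_of_replicas": 1 }
}'
```

Existing indices keep their original shard count, so they would need a reindex to pick up the new layout.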

Thanks, @jeroen1 for the detailed answer. I will update my setup as per your instructions.
