Hello,
I'm migrating from an old cluster to a new one, and for some reason, after reindexing according to best practices (as far as I know), performance is slower on the new cluster.
The old index has 500 GB across 5 primary shards with 1 replica.
The new cluster is using ILM with rollover every 90 days / 44 GB. Each rolled-over index has 1 primary shard and 1 replica.
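For reference, a rollover setup like the one described could look roughly like this; the policy name, template name, index pattern, endpoint and credentials are placeholders, not my actual configuration:

```python
import requests

ES = "https://localhost:9200"      # assumption: cluster endpoint
AUTH = ("elastic", "changeme")     # assumption: credentials

# ILM policy: roll over at 90 days or 44 GB, as described above
policy = {
    "policy": {
        "phases": {
            "hot": {
                "actions": {
                    "rollover": {"max_age": "90d", "max_size": "44gb"}
                }
            }
        }
    }
}
requests.put(f"{ES}/_ilm/policy/my-rollover-policy", json=policy, auth=AUTH)

# Index template: 1 primary shard + 1 replica per rolled-over index
template = {
    "index_patterns": ["my-index-*"],
    "template": {
        "settings": {
            "number_of_shards": 1,
            "number_of_replicas": 1,
            "index.lifecycle.name": "my-rollover-policy",
            "index.lifecycle.rollover_alias": "my-index"
        }
    }
}
requests.put(f"{ES}/_index_template/my-index-template", json=template, auth=AUTH)
```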
I was expecting the new indices to be faster since the 5-shard layout was dropped, but it's the opposite.
The new nodes have 58 GB of RAM and 1.8 TB of disk, of which 50% is free.
Attached are profile results for the following example:
What is the specification of the old cluster compared to the new? Are they running on the same type of hardware? Which versions of Elasticsearch are you using? Can you show the output of the cat indices API for the queried indices?
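If it helps, something like the following pulls a compact cat indices view for the indices in question (index pattern, endpoint and credentials are placeholders):

```python
import requests

ES = "https://localhost:9200"      # assumption: cluster endpoint
AUTH = ("elastic", "changeme")     # assumption: credentials

# Show shard counts, doc counts and on-disk size per queried index
resp = requests.get(
    f"{ES}/_cat/indices/my-index-*",
    params={"v": "true", "h": "index,pri,rep,docs.count,pri.store.size,store.size"},
    auth=AUTH,
)
print(resp.text)
```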
I moved from the AWS managed service (6 r5.2xlarge.elasticsearch nodes, no dedicated masters, version 7.4) to Elastic Cloud on AWS (3 hot nodes with 58 GB each, 3 warm nodes, coordinating nodes, dedicated masters), version 7.12.
The entire data set is 5 TB.
Everything seems to run slower: any query on any index.
I've tried restoring an index from AWS without changing it and running the same query on both clusters; AWS always wins by far (it can be 1 s vs 8 s, for example).
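This is roughly how I compare them, just reading the server-side `took` time for the same query against both clusters (endpoints, index name, query and credentials here are placeholders):

```python
import requests

# Hypothetical endpoints and query, only to illustrate the comparison
CLUSTERS = {
    "old-aws": "https://old-cluster:9200",
    "new-cloud": "https://new-cluster:9200",
}
QUERY = {"query": {"match": {"message": "error"}}}   # placeholder query

for name, url in CLUSTERS.items():
    resp = requests.post(f"{url}/my-index/_search", json=QUERY,
                         auth=("elastic", "changeme")).json()
    # 'took' is the server-side search time in milliseconds
    print(name, resp.get("took"), "ms")
```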
Note also that you are comparing 7.4 with 7.12, with a different architecture (specifically warm nodes). So a lot of things can happen.
To compare things, you first need to check that you are only hitting indices that are located on the hot nodes. Querying the warm nodes might be slower IMHO.
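One way to check where the shards of the queried indices actually live (index pattern, endpoint and credentials are placeholders) is the cat shards API, which lists the node holding each shard:

```python
import requests

ES = "https://localhost:9200"      # assumption: new cluster endpoint
AUTH = ("elastic", "changeme")     # assumption: credentials

# List each shard of the queried indices with the node it is allocated to,
# so you can confirm the query only touches hot nodes
resp = requests.get(
    f"{ES}/_cat/shards/my-index-*",
    params={"v": "true", "h": "index,shard,prirep,node"},
    auth=AUTH,
)
print(resp.text)
```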