Why is indexing throughput higher with a 4 GB heap (Xmx) than with a 24 GB heap?

Hi,

I am doing some comparison load tests with esrally.
I tested indexing 250 million events with the elasticlogs-1bn-load challenge of the eventdata track from GitHub.

The big (!) green areas are the index-append-1000_elasticlogs_q_write operations.
The first area on the left is Elasticsearch running as a native Linux service; the middle one is Elasticsearch running in Docker with the data stored in a Docker volume. Both of these have a 24 GB heap configured.

The last test run was also configured to use a Docker volume, but with only a 4 GB heap defined.
I am using 1 primary shard and 0 replicas in all three test cases.
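
For reference, a minimal sketch of the index settings involved, assuming a local cluster on localhost:9200; the index name "elasticlogs" is an assumption here, since Rally actually creates the index from the track definition:

```python
import requests

# Create the test index with 1 primary shard and 0 replicas.
# Index name and endpoint are placeholders; Rally normally does this
# itself from the track definition.
settings = {
    "settings": {
        "number_of_shards": 1,
        "number_of_replicas": 0,
    }
}

resp = requests.put("http://localhost:9200/elasticlogs", json=settings)
resp.raise_for_status()
print(resp.json())
```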

Can anyone explain why indexing is faster with 4 GB than with 24 GB? The chart shows the throughput.

Thanks, Andreas

It is recommended to run with as small a heap as possible (as long as you do not run into problems). The difference does seem larger than expected, though. Was there anything else running on the host or using the storage that could differ between the runs?
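
One way to see whether the larger heap is simply spending more time in garbage collection is to compare the nodes stats between the two runs. A rough sketch, assuming an unsecured cluster reachable on localhost:9200:

```python
import requests

# Pull JVM heap usage and cumulative GC time from the nodes stats API.
stats = requests.get("http://localhost:9200/_nodes/stats/jvm").json()

for node in stats["nodes"].values():
    jvm = node["jvm"]
    heap_pct = jvm["mem"]["heap_used_percent"]
    collectors = jvm["gc"]["collectors"]
    young_ms = collectors["young"]["collection_time_in_millis"]
    old_ms = collectors["old"]["collection_time_in_millis"]
    print(f"{node['name']}: heap {heap_pct}% used, "
          f"young GC {young_ms} ms, old GC {old_ms} ms")
```

If the 24 GB run shows noticeably more GC time for the same amount of indexing, that would point at the heap size itself rather than the environment.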

The load generator is running as a VM on ESX infrastructure, so it is possible that something else is contending for the disk there.

But the Elasticsearch instance I am testing is running on bare metal with a local HDD. Nothing else is running on this server.
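
To rule out the local HDD as the variable, disk I/O on the server could be sampled during a run; a small sketch using psutil (an assumption on my part, any iostat-style tool would work just as well):

```python
import time
import psutil

# Sample aggregate disk throughput every 5 seconds while a run is active.
prev = psutil.disk_io_counters()
for _ in range(12):  # roughly one minute of samples
    time.sleep(5)
    cur = psutil.disk_io_counters()
    read_mb = (cur.read_bytes - prev.read_bytes) / 1e6
    write_mb = (cur.write_bytes - prev.write_bytes) / 1e6
    print(f"last 5 s: read {read_mb:.1f} MB, write {write_mb:.1f} MB")
    prev = cur
```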
