I am doing some comparison load tests with esrally.
I tested indexing 250 million events with the elasticlogs-1bn-load challenge of the eventdata track from GitHub.
The big (!) green areas are the index-append-1000_elasticlogs_q_write operations.
The first area on the left is Elasticsearch running as a native Linux service; the middle one is Elasticsearch running in Docker, storing its data in a Docker volume. Both of these have a 24 GB heap configured.
The last test run was also configured to use a Docker volume, but with only 4 GB of heap.
I am using 1 primary shard and 0 replicas in all three test cases.
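For context, this is roughly how the two Docker runs and the Rally invocation were set up. This is a sketch, not my exact commands: the image tag, container/volume names, and target host are placeholders, and the native-service run just had the equivalent -Xms/-Xmx values in jvm.options.

```shell
# Docker run with 24 GB heap, data in a named Docker volume
docker run -d --name es-24g \
  -v esdata-24g:/usr/share/elasticsearch/data \
  -e ES_JAVA_OPTS="-Xms24g -Xmx24g" \
  docker.elastic.co/elasticsearch/elasticsearch:<version>

# Same setup, but with only 4 GB heap
docker run -d --name es-4g \
  -v esdata-4g:/usr/share/elasticsearch/data \
  -e ES_JAVA_OPTS="-Xms4g -Xmx4g" \
  docker.elastic.co/elasticsearch/elasticsearch:<version>

# Rally run against each target, using the eventdata track repository
esrally --track-repository=eventdata \
  --track=eventdata \
  --challenge=elasticlogs-1bn-load \
  --pipeline=benchmark-only \
  --target-hosts=localhost:9200
```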
Can anyone explain why indexing is faster with 4 GB of heap than with 24 GB? The chart shows the throughput.