I have a scenario where I'm taking my server from a single ES instance to multiple ES instances (on the same box) to make better use of a large amount of memory (if you've ever sat through a [gc] pause on a 96 GB heap, you know this pain). One thing that happens is that the cluster rebalances the indices between the two instances. These instances (on the same server) are the only nodes in the cluster.
Depending on the amount of data, this rebalancing can take a long time. The server uses magnetic-disk RAID with high throughput; writes can reach 1.2 GB/s. By raising the
indices.recovery.max_bytes_per_sec setting, the process sped up by around 25%, and iostat shows the writes getting larger, climbing from 200 MB/s to 450 MB/s. Reads, however, consistently stay under 100 MB/s. It looks as though data is being read into a buffer and then flushed out. If so, what buffer is this, and are there any knobs to adjust it?
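For reference, this is roughly how I'm bumping the recovery throttle (a sketch; it assumes the node's HTTP endpoint is at localhost:9200, and the "500mb" value is just an example):

```shell
# Raise the recovery throttle cluster-wide.
# "transient" means the setting reverts after a full cluster restart.
curl -XPUT 'localhost:9200/_cluster/settings' -d '{
  "transient": {
    "indices.recovery.max_bytes_per_sec": "500mb"
  }
}'
```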
Why are the reads so low compared to the writes? Is there anything that can be done to increase the read rate?
Quick server info:
- 20 cores @ >2 GHz
- 128 GB RAM
- Magnetic RAID-5, several TB
Thanks for any helpful thoughts you can offer.