It's causing a lot of IO wait and affecting searches, mostly because of high IOPS. My system is limited to 1000 IOPS, and most of the time it nearly reaches that limit. Is it normal for Elasticsearch to hit 1000 IOPS?

My index is about 72 GB with 36 million docs, configured with 5 shards and 1 replica, across 5 nodes: 3 master+data and 2 data-only. The spikes mostly happen on the data-only nodes.

Is there any config I can use, maybe throttling the merges, or any other suggestion?
Concurrent indexing and searching can easily saturate 1000 IOPS. Why not use instance-attached storage instead?
You can reduce merging by increasing the indexing buffer (default 10% of heap), and by using only as many client-side indexing threads, and as many shards, as you need to achieve your required indexing throughput. In the extreme case of 1 shard on the node and 1 client-side indexing thread, ES/Lucene would only ever write a single segment to disk the size of your indexing buffer, greatly reducing the later merge load.
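To make that concrete, here is a rough sketch (the index name my_index and the 30% value are just placeholders, and the exact syntax depends on your ES version): the indexing buffer is a node-level setting in elasticsearch.yml, while the shard count is chosen when the index is created.

```yaml
# elasticsearch.yml -- raise the node-wide indexing buffer (default is 10% of heap)
indices.memory.index_buffer_size: 30%
```

```sh
# Create the index with fewer primary shards ("my_index" is a placeholder)
curl -XPUT 'http://localhost:9200/my_index' -d '{
  "settings": {
    "number_of_shards": 1,
    "number_of_replicas": 1
  }
}'
```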
You can also force ES to use only a single thread for merging. This may cause merges to fall behind in your use case, eventually forcing ES to throttle incoming indexing, but it should reduce the worst-case IOPS.
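For example, on a 1.x/2.x-era cluster something along these lines should do it (my_index is a placeholder; depending on your version the setting may have to go into elasticsearch.yml or be applied at index creation rather than updated dynamically):

```sh
# Limit the merge scheduler to a single thread for the index
curl -XPUT 'http://localhost:9200/my_index/_settings' -d '{
  "index.merge.scheduler.max_thread_count": 1
}'
```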
That said, searching is potentially much more IOPS-heavy than merging, since it requires random access to e.g. the terms dictionary and stored fields. Merging is actually the "best case" for the way AWS counts IOPS (see AWS IOPs), because it is a sequential read of all files being merged and a sequential write of the files for the merged segment.
But still, that's a 2X cost (reading then writing), so only about half of your 1000 IOPS go to writes. With SSD-backed EBS counting up to 256 KB per single I/O operation, ~500 write IOPS works out to roughly 128 MB/sec of write throughput, which is in fact not that much for fast-ish CPUs.
Also, make sure your OS has plenty of free RAM for IO caching: set your JVM heap to the smallest size you actually need and leave the rest of the RAM to the OS.
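As a rough illustration (the numbers here are made up; size them for your own hardware), on an 8 GB data node you might give ES half the RAM and leave the rest to the OS page cache:

```sh
# ES 1.x/2.x read the heap size from ES_HEAP_SIZE (newer versions use jvm.options)
export ES_HEAP_SIZE=4g
./bin/elasticsearch
```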
I tried c3.xlarge and compared it with c4.xlarge. In terms of load average, the c3.xlarge with instance-attached storage was much higher than the c4.xlarge. I will try again and monitor the IO.
Also, is there any other factor that could cause an IO spike? Sometimes when it spikes and I check /_stats/merge, it isn't actually merging, yet the IOPS are high, and it only happens on a few nodes.
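For example, these are the kinds of checks I mean (localhost:9200 is a placeholder for the node address; hot_threads and iostat are just extra things to look at):

```sh
curl -s 'http://localhost:9200/_stats/merge?pretty'      # per-index merge activity
curl -s 'http://localhost:9200/_nodes/stats/fs?pretty'   # per-node filesystem stats
curl -s 'http://localhost:9200/_nodes/hot_threads'       # what the busy nodes are doing
iostat -xm 5                                             # OS-level view of the IOPS
```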
How do you scale Elasticsearch when IOPS are the bottleneck? Add more nodes and shards?