A 500GB gp2 volume only gets 1,500 IOPS if I remember correctly. Given that these are the most I/O-intensive nodes, that seems a bit low. I would recommend using i3 instances with ephemeral storage for the hot tier if feasible.
So storage throughput really matters? I learned from a webinar that data nodes need extremely high throughput. For your information Christian, we are using Elastic Cloud on Kubernetes, so it is not possible to use i3 instances in Amazon EKS (managed Kubernetes). In my previous setup I attached EBS provisioned-IOPS SSD (io1) volumes to the hot data nodes and Elasticsearch ran seamlessly back then. FYI, the integration with our microservices was still only around 65% at the time. But the business owner asked me to use general-purpose SSD (gp2) instead of provisioned-IOPS SSD because the price was too costly. So you are suggesting that low IOPS on the data nodes (which are I/O intensive) is a serious bottleneck that might lead to in_flight_requests too large and circuit_breaking_exception in the production environment?
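If it helps anyone debugging the same errors: breaker usage per node can be watched via the node stats API (`GET /_nodes/stats/breaker`). A minimal sketch, assuming an unauthenticated cluster at `localhost:9200` (adjust the URL and credentials for your setup):

```python
# Sketch: poll circuit-breaker usage per node via the node stats API.
# Assumes a cluster at http://localhost:9200 with no auth; adjust as needed.
import requests

resp = requests.get("http://localhost:9200/_nodes/stats/breaker", timeout=10)
resp.raise_for_status()

for node_id, node in resp.json()["nodes"].items():
    for name, b in node["breakers"].items():
        used = b["estimated_size_in_bytes"]
        limit = b["limit_size_in_bytes"]
        pct = 100 * used / limit if limit else 0.0
        # A breaker near 100%, or a rising "tripped" count, points at the bottleneck.
        print(f"{node['name']}: {name} {pct:5.1f}% used, tripped={b['tripped']}")
```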
As the IOPS of gp2 EBS volumes is proportional to the volume size, you can increase IOPS by increasing the size of the volumes. This can be cheaper than provisioned IOPS (io1) even if you do not end up actually using all of that storage.
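For reference, gp2 baseline IOPS scale at roughly 3 IOPS per GiB, with a floor of 100 and a cap of 16,000 (figures per AWS documentation; worth double-checking for your region and date). A quick back-of-the-envelope calculator:

```python
# Rough gp2 baseline IOPS estimate: ~3 IOPS per GiB,
# floored at 100 and capped at 16,000 (per AWS docs; verify before relying on it).
def gp2_baseline_iops(size_gib: int) -> int:
    return max(100, min(3 * size_gib, 16_000))

for size in (100, 500, 1_000, 2_000):
    print(f"{size:>5} GiB -> {gp2_baseline_iops(size):>6} baseline IOPS")
# A 500 GiB volume gives 1,500 IOPS; doubling the size doubles the baseline.
```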
I see. I can reduce the hot data nodes to 100GB each using provisioned-IOPS SSD, but will that solve the circuit_breaking_exception issue? To do that I would have to rebuild Elasticsearch and restore the backup snapshot from S3. By the way, thank you for the insight, Christian.
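For the rebuild, the restore itself is a single call to the snapshot restore API once the S3 repository is registered on the new cluster. A minimal sketch, where `s3_backup` and `snapshot_1` are hypothetical repository and snapshot names (target indices must not already exist, or must be closed/deleted first):

```python
# Sketch: restore a snapshot from an already-registered S3 repository.
# "s3_backup" and "snapshot_1" are placeholder names; no auth assumed.
import requests

base = "http://localhost:9200"

resp = requests.post(
    f"{base}/_snapshot/s3_backup/snapshot_1/_restore",
    params={"wait_for_completion": "true"},  # block until the restore finishes
    json={"indices": "*", "include_global_state": False},
    timeout=None,
)
resp.raise_for_status()
print(resp.json())
```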
Changing the heap memory to 29GB solved the issue. Our Elasticsearch has not received a circuit_breaking_exception for two days.
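For what it's worth, 29GB is a sensible value: the usual guidance is to keep the heap at or below 50% of the node's RAM and under the compressed-oops cutoff (roughly 31GB, varying by JVM version and platform), which is likely why heaps in the high twenties behave better than 32GB ones. A rough sizing helper, treating the cutoff as an approximation:

```python
# Rough Elasticsearch heap sizing rule of thumb (not official numbers):
# heap <= 50% of node RAM and below the compressed-oops cutoff (~31 GB).
COMPRESSED_OOPS_CUTOFF_GB = 31  # approximate; varies by JVM version/platform

def recommended_heap_gb(node_ram_gb: float) -> float:
    # Subtract a small safety margin so we stay clearly under the cutoff.
    return min(node_ram_gb / 2, COMPRESSED_OOPS_CUTOFF_GB - 2)

for ram in (32, 64, 128):
    print(f"{ram} GB RAM -> ~{recommended_heap_gb(ram):.0f} GB heap")
# 64 GB RAM -> ~29 GB heap, matching the value that stopped the breaker errors.
```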
Hi @warkolm, I haven't solved my issue in this topic: Fail to send data to elastic unexpected EOF. I only solved the circuit_breaking_exception issue in this topic, not the issue of failing to send data to Elasticsearch. I think those are entirely different topics.