Disk I/O monitor alert from New Relic for Elasticsearch nodes

I am getting a disk I/O monitor alert from New Relic for my Elasticsearch nodes (2 nodes).
How can I resolve this issue?
Please help me out — I am new to Elasticsearch.

Elasticsearch can often be limited by disk I/O under load, so without further details about the cluster and use case I am not sure this alert indicates a problem. Can you please provide some additional details?

Please find the details of my cluster below:

One master and two data nodes (General Purpose EBS volumes), 400 GB each, 1200 IOPS per node.
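As an aside (not from the thread): on AWS, gp2 General Purpose volumes get a baseline of 3 IOPS per GiB, with a floor of 100 and a cap of 16,000, which is exactly where the 1200 IOPS figure for a 400 GB volume comes from. A quick sketch of that calculation:

```python
def gp2_baseline_iops(size_gib: int) -> int:
    # gp2 baseline throughput: 3 IOPS per GiB, minimum 100, maximum 16,000
    return min(max(3 * size_gib, 100), 16_000)

print(gp2_baseline_iops(400))  # 1200 -- matches the volumes in this cluster
```

So the 1200 IOPS is not a setting anyone chose; it is simply the gp2 baseline for a 400 GB volume, and growing the volume (or changing the volume type) is how you raise it.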

Why is this a problem? What is the load on the cluster? If you are seeing high I/O utilisation on the nodes, you may need faster storage or more IOPS to support your use case.


But on the node side, only 20% of the 400 GB is free and 80% is occupied, and the load average per minute is only 0.60 — so why is this alert firing? Could it be because of the EBS volume type? We currently have General Purpose volumes attached with 1200 IOPS.
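To see whether the nodes are actually hitting their I/O limits rather than trusting the alert alone, you can read the per-node disk counters Elasticsearch exposes under `_nodes/stats/fs` (the `io_stats` section is Linux-only and may be absent elsewhere). A minimal sketch, assuming the `read_operations`/`write_operations` fields present in recent versions; the URL and node names are placeholders:

```python
import json
import urllib.request

def total_io_counters(nodes_stats: dict) -> dict:
    """Sum cumulative read/write operation counts per node.

    Expects the parsed JSON body of GET /_nodes/stats/fs.
    """
    totals = {}
    for node_id, node in nodes_stats.get("nodes", {}).items():
        io = node.get("fs", {}).get("io_stats", {}).get("total", {})
        totals[node.get("name", node_id)] = {
            "read_ops": io.get("read_operations", 0),
            "write_ops": io.get("write_operations", 0),
        }
    return totals

# Against a live cluster (assumes the default HTTP port):
# stats = json.load(urllib.request.urlopen("http://localhost:9200/_nodes/stats/fs"))
# print(total_io_counters(stats))
```

Sampling these counters twice a minute apart and dividing the delta by the interval gives an IOPS estimate you can compare directly against the 1200 IOPS volume baseline.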

If we upgrade the EBS volumes from General Purpose to Provisioned IOPS and raise the IOPS, is that the correct solution? And does it require downtime?

The load average is low but the disk I/O alert still fires, which really means you need to upgrade to faster storage.
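On the downtime question (not answered in the thread): EBS Elastic Volumes can change a volume's type and IOPS in place while it stays attached, so on current-generation instances the modification itself normally needs no downtime, though performance can fluctuate while the volume is in the "optimizing" state. A minimal boto3 sketch, with a hypothetical volume ID and IOPS target:

```python
def modify_volume_request(volume_id: str, volume_type: str = "io1", iops: int = 3000) -> dict:
    """Build kwargs for boto3's ec2_client.modify_volume().

    The volume_id and the 3000 IOPS target are hypothetical examples;
    io1 is the Provisioned IOPS SSD volume type.
    """
    return {"VolumeId": volume_id, "VolumeType": volume_type, "Iops": iops}

# Usage with boto3 (assumes AWS credentials are configured):
# import boto3
# ec2 = boto3.client("ec2")
# ec2.modify_volume(**modify_volume_request("vol-0123456789abcdef0"))
# Progress can then be tracked with ec2.describe_volumes_modifications(...).
```

Before paying for io1, it is worth noting that simply growing a gp2 volume also raises its baseline IOPS (3 IOPS/GiB), which may be the cheaper fix if the shortfall is small.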


So that means I need to upgrade the volume type from General Purpose to Provisioned IOPS.

Please correct me if I am wrong.

And thank you for the response, @DeeeFOX.

You're welcome.

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.