Is there any way to determine what Elasticsearch is doing that causes abnormal IO?

I have an ES cluster with 3 master nodes and 5 data nodes, with rebalancing and allocation enabled. /_nodes/.../stats doesn't show anything running on those nodes, and there are no tasks in _tasks either.
But all the data nodes have nearly 500 write IOPS, and one of them has over 2,000 write IOPS.

So is there any way I can find out where this IO is coming from?
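For reference, this is roughly how I'm checking — the filter_path here is just an assumption to trim the output down to the write-related counters (merges, refresh, translog), which is where I'd expect unexplained write IO to show up:

GET /_nodes/stats/indices?filter_path=nodes.*.name,nodes.*.indices.merges,nodes.*.indices.refresh,nodes.*.indices.translog
GET /_tasks?detailed=true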


For example: I have 4 identical data nodes in a cluster serving an even 5k QPS, but one of them has nearly 2k write IOPS while the other three have only about 500. The nodes are hosted in identical Docker environments on different physical servers.


Welcome to our community! :smiley:

It could be merges happening automatically. Is it causing issues?
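You can check the merge counters directly with the node stats API, e.g.:

GET /_nodes/stats/indices/merge

merges.current shows merges running at that instant; since merges are bursty, it's worth sampling merges.total and merges.total_size_in_bytes twice a minute or so apart and comparing the deltas, rather than relying on a single snapshot.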

But I can't see any running merges in the stats, and it makes search queries slower on this node.

GET /_nodes/hot_threads

Seems like nothing is running on that node.

::: {*}{CdzrK2KETQWMAfimJvKCoQ}{T5M3t5JPQPSOz0ybok7yWA}{*}{*}
   Hot threads at 2022-03-28T03:13:10.030Z, interval=500ms, busiestThreads=3, ignoreIdleThreads=true:

At this time, this node still has 1.2k write IOPS.

What does iostat or something similar show?
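Something along these lines would show which device and which process the writes are going to (flags assumed from the Linux sysstat tools):

iostat -xd 1
pidstat -d 1

In the iostat output, w/s is the write IOPS per device; pidstat -d breaks the write throughput down per process.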


It shows that only the java process is causing the write IO.
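To get below the process level, per-thread IO can be mapped to Elasticsearch thread names, since ES names its threads. A sketch, assuming Linux sysstat and a JDK on the host (replace <pid> with the Elasticsearch process id; inside Docker you may need to run this in the container's PID namespace):

pidstat -dt -p <pid> 1
jstack <pid>

pidstat -dt lists per-thread write KB/s; note the busy TID, convert it to hex (printf '%x\n' <tid>), and match it against the nid= field in the jstack output to see which Elasticsearch thread (e.g. a write, refresh, or merge thread) is doing the writing.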

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.