I/O Scheduler type


I read in the documentation that changing the I/O scheduler may significantly boost the performance of my cluster.

When I checked /sys/block/nvme0n1/queue/scheduler, it was set to none, with mq-deadline and kyber listed as the other options.

Does anyone have an idea which of the two would be the better I/O scheduler algorithm, or should I just leave it as none?
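For reference, here is a small sketch of how I'm inspecting it. The kernel lists all available schedulers on one line and wraps the active one in square brackets (e.g. "[none] mq-deadline kyber"); `active_scheduler` is just a hypothetical helper for pulling out the bracketed name, not a standard tool, and the device path is the one from my system:

```shell
#!/bin/sh
# Scheduler file for the NVMe device from the post; adjust for your hardware.
SCHED=/sys/block/nvme0n1/queue/scheduler

# Hypothetical helper: print only the bracketed (active) scheduler name.
active_scheduler() {
    tr ' ' '\n' < "$1" | sed -n 's/^\[\(.*\)\]$/\1/p'
}

cat "$SCHED"                # prints something like "[none] mq-deadline kyber"
active_scheduler "$SCHED"   # prints just the active one, e.g. "none"

# Switching at runtime requires root and does not persist across reboots:
# echo mq-deadline > "$SCHED"
```

Note that an echo into the sysfs file only lasts until reboot; making it permanent would need a udev rule or a kernel boot parameter.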

I'm running Elasticsearch 6.6.2 on CentOS 7.6; my kernel version is 3.10.

Thank you

Hello Martin,
did you make any progress here?
I'm also about to run some tests on this in the near future.

As documented by Elastic, one of the major gains ought to come from matching the I/O scheduler to the kind of storage you have (spinning disks vs. SSDs), since the queueing algorithms are very different, so that's what I'm going to focus on.
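To tell which kind of storage the kernel thinks a device is, the per-device rotational flag in sysfs can be checked (1 = spinning disk, 0 = SSD/NVMe). A minimal sketch, where `storage_type` is a hypothetical helper and the device path is just an example:

```shell
#!/bin/sh
# The kernel exposes a rotational flag for every block device:
# 1 = spinning disk, 0 = non-rotational (SSD/NVMe).

# Hypothetical helper: classify a device from its rotational-flag file.
storage_type() {
    if [ "$(cat "$1")" = "0" ]; then
        echo "SSD"
    else
        echo "HDD"
    fi
}

# Example device path; adjust for your hardware.
storage_type /sys/block/nvme0n1/queue/rotational
```

Note the flag reflects what the kernel detected, so virtualized or RAID-backed devices may need a sanity check against the actual hardware.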

I'm on a Red Hat 7 server.


I didn't have enough info to make a change, so I'm just using the default setting.
But I'm still interested in how changing the I/O scheduler type would impact performance.


This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.