Increased Read IOPS usage after upgrade from 8.19.3 to 9.1.3

Hi all, since upgrading from 8.19.3 to 9.1.3 I'm seeing a massive increase in read IOPS on the hot nodes' disks.
I couldn't see anything obvious in the release notes that would explain this, apart from the new JDK version and the new Lucene version.
Here's a screenshot of it:

This is now triggering NodeMemoryMajorPagesFaults alerts from Prometheus. I have checked that no force merges are running on the hot nodes (via GET _tasks), and the machines have 64 GB of RAM with a 31 GB heap (I checked the JVM flags and confirmed that UseCompressedOops is enabled).
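For completeness, these are roughly the checks I ran (paraphrased from my console history, so the exact filter_path fields may need adjusting):

# any running force merge tasks on the cluster
GET _tasks?detailed=true&actions=indices:admin/forcemerge

# heap size and compressed oops per node
GET _nodes/jvm?filter_path=nodes.*.name,nodes.*.jvm.mem.heap_max_in_bytes,nodes.*.jvm.using_compressed_ordinary_object_pointers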

I also noticed that the warm nodes' average IO/s has increased from 80 to 120, while the cold nodes have not changed.
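If it helps to compare, the same counters should also be visible from Elasticsearch itself via the node stats API (on Linux the io_stats block is read from /proc/diskstats, so the values are cumulative operation counts rather than rates):

# per-node cumulative disk read/write operation counts
GET _nodes/stats/fs?filter_path=nodes.*.name,nodes.*.fs.io_stats.total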

Has anyone noticed similar behaviour? Nothing changed on our side except the version.

I am not sure if it is related, but I think in 9.x logsdb is enabled automatically; it is mentioned that logsdb saves storage, but there is a performance overhead.

I executed:

GET /*/_settings?filter_path=**.settings.index.mode

I saw that no index is using the logsdb index mode.
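If I understand the docs correctly, the cluster-wide default behind this behaviour is cluster.logsdb.enabled (setting name from memory, so treat it as an assumption); it should be visible with something like:

# show the logsdb-related cluster defaults
GET _cluster/settings?include_defaults=true&filter_path=defaults.cluster.logsdb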

We had a similar thread before, which sadly did not reach a conclusion:

I made some comments in that thread, which may or may not have been helpful there, and the same applies here.