Elasticsearch data nodes - pagefile/hard faults vs. heap usage

I have an Elasticsearch cluster, version 7.8.1, running on Windows Server 2019. There are 3 data nodes, 3 master nodes, and 1 coordinating node in the cluster. The data nodes each have 48 GB of RAM, with 24 GB dedicated to the JVM heap.
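For reference, here's how I'm confirming the configured heap per node; a minimal sketch using Python's requests library against the standard _nodes info API (the localhost endpoint is a placeholder for my coordinating node):

```python
import requests

# Placeholder endpoint; point this at any node in the cluster.
ES_URL = "http://localhost:9200"

# The _nodes info API reports each node's configured JVM heap.
resp = requests.get(f"{ES_URL}/_nodes/jvm")
resp.raise_for_status()

for node in resp.json()["nodes"].values():
    heap_max = node["jvm"]["mem"]["heap_max_in_bytes"]
    print(f"{node['name']}: heap_max = {heap_max / 2**30:.1f} GiB")
```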

I've noticed that with large queries in Kibana, hard faults (reads from virtual memory, i.e. disk) spike massively on my data nodes, but the JVM's used heap doesn't get much past 60%.
Here's a snip from one of my nodes comparing hard faults vs. heap usage:

[screenshots: hard faults; heap usage]

Thoughts on why Elasticsearch isn't using more of the JVM heap? The typical proposed solution is "throw more RAM at it", but since it's not using what it already has, I'm hesitant.
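To put numbers on that 60% figure, this is roughly how I'm pulling live heap usage per node; a sketch against the _nodes/stats API, with the same placeholder endpoint as above:

```python
import requests

ES_URL = "http://localhost:9200"  # placeholder; any node works

# Node stats expose current heap usage alongside OS memory counters.
stats = requests.get(f"{ES_URL}/_nodes/stats/jvm,os").json()

for node in stats["nodes"].values():
    jvm = node["jvm"]["mem"]
    os_mem = node["os"]["mem"]
    print(
        f"{node['name']}: "
        f"heap {jvm['heap_used_percent']}% used "
        f"({jvm['heap_used_in_bytes'] / 2**30:.1f} GiB of "
        f"{jvm['heap_max_in_bytes'] / 2**30:.1f} GiB), "
        f"OS free {os_mem['free_in_bytes'] / 2**30:.1f} GiB"
    )
```

Worth noting that the JVM heap and the OS filesystem cache are separate memory pools, so a healthy heap percentage here doesn't by itself rule out cache pressure on the OS side.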

This version is no longer supported and has reached EOL; please upgrade.

If Elasticsearch is constantly accessing files from disk, the OS will transparently cache them for you, which will also reduce these "hard faults". So the question would be: how often are you querying the data?
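To illustrate the mechanism (this is not Elasticsearch-specific): Lucene reads index files via memory-mapped I/O, so the first touch of a page is a hard fault served from disk, while repeat touches are served from the OS file cache. A minimal sketch, assuming a hypothetical local test file:

```python
import mmap
import time

PATH = "test.bin"  # hypothetical file, ideally a few hundred MB

with open(PATH, "rb") as f:
    with mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ) as mm:
        page = mmap.PAGESIZE
        # First pass may fault pages in from disk (if not already
        # cached, e.g. by a recent write); second pass should be
        # served from the OS file cache and run noticeably faster.
        for label in ("first pass (may hard-fault)", "second pass (cached)"):
            start = time.perf_counter()
            # Touch one byte per page to force each page resident.
            total = sum(mm[i] for i in range(0, len(mm), page))
            print(f"{label}: {time.perf_counter() - start:.3f}s")
```

This is also why the usual guidance is to give Elasticsearch at most half of a node's RAM as heap: the 24 GB you've left unallocated on each 48 GB data node is what the OS uses to keep the hot parts of the index cached.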

Thanks for the info. I'm wondering if the OS's transparent disk caching is any different on Windows/NTFS than on *nix? Is there any more documentation on this?

There probably is, but TBH I don't have any real knowledge of the differences; my focus is primarily Linux, sorry.
