Elasticsearch and hugepages


(Danielmotaleite) #1

Hi

Usually, Java apps that use a lot of RAM benefit from having hugepages set up in the OS and then enabled in the JVM.

Does this help Elasticsearch, or is it not recommended? Has anyone done any benchmarks?

Thanks!
Daniel


(Jörg Prante) #2

To notice positive effects from huge pages, you will need a machine with very large RAM, e.g. 128 GB or more. Huge pages reduce pressure from page-table scans when free memory gets low and the page table grows too large.

Check if your Linux has transparent huge pages (THP) enabled:

grep '^thp_' /proc/vmstat

If you see nonzero numbers in the split and collapse counters, you don't have to care much. Linux THP sees the "anonymous memory pages" (Java uses such pages for the heap) and is already doing the work for you, managing those pages automatically as if they were huge pages.
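A quick way to inspect this, sketched here under the assumption of a reasonably recent Linux kernel (the sysfs path below is the standard kernel interface; the exact output varies by distribution):

```shell
# System-wide THP mode; the value in brackets is the active one,
# e.g. "always [madvise] never".
if [ -r /sys/kernel/mm/transparent_hugepage/enabled ]; then
    cat /sys/kernel/mm/transparent_hugepage/enabled
fi

# THP event counters; nonzero split/collapse values mean the kernel
# is already merging and splitting huge pages behind the scenes.
grep '^thp_' /proc/vmstat || echo "no THP counters (kernel built without THP?)"
```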

The expected performance gain from huge pages is at most around 10 percent in general, but not for a workload like ES. For ES, with 50% of RAM reserved for the heap, I expect much smaller benefits. Why? Because even with huge pages enabled (which automatically locks pages, like mlockall does), ES allocates all heap memory pages statically at startup, and that does not change over the lifetime of the process.

Here you can find an example of a 1% performance gain on Cassandra, another JVM application with a memory pattern similar to ES:

https://tobert.github.io/tldr/cassandra-java-huge-pages.html


(Jörg Prante) #3

One mistake I must correct.

THP is only available to the JVM if the flag -XX:+UseTransparentHugePages is given at JVM startup. It is false by default, so nothing happens for Elasticsearch unless it is explicitly enabled.
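For completeness, a hypothetical jvm.options fragment showing how one might enable this for Elasticsearch (note the `+` in the flag spelling to turn it on; the flag is Linux-only, the file location varies by install, and the heap sizes here are illustrative only):

```
## Sketch of jvm.options entries; path is e.g.
## /etc/elasticsearch/jvm.options on deb/rpm installs.
## Equal -Xms/-Xmx avoids heap resizing after startup.
-Xms16g
-Xmx16g
-XX:+UseTransparentHugePages
```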


(system) #4

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.