24:1 ratio


Simple question: is the recommended 24:1 disk:memory ratio using the entire machine's memory or just the memory allocated to Elastic's JVM?

E.g. I have a 64GB machine and 30GB is for Elastic (rest for Lucene), does my memory count as 64GB or 30GB?


When we use disk-to-RAM ratios we typically base them on the total RAM the node has available, which includes the heap as well as the file system cache. This also usually assumes that the heap is assigned 50% of the RAM the node has available. So in your example the memory would count as 64GB, not 30GB.
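To put rough numbers on this, here is a back-of-envelope sketch (a sketch only, assuming the 24:1 ratio is applied to total node RAM and that the heap is 50% of it, as described above):

```python
# Back-of-envelope sizing, assuming the disk:RAM ratio counts
# total node RAM (heap + file system cache), not just the heap.
def sizing(total_ram_gb: float, disk_ram_ratio: float = 24.0):
    """Return (heap size, max indexed data) for a node at the given ratio."""
    heap_gb = total_ram_gb * 0.5              # usual guideline: heap = 50% of RAM
    max_data_gb = total_ram_gb * disk_ram_ratio
    return heap_gb, max_data_gb

heap, max_data = sizing(64)
print(heap)      # 32.0  -> GB of heap under the 50% guideline
print(max_data)  # 1536.0 -> GB of data a 64GB-RAM node serves at 24:1
```

So a 64GB machine would be sized for roughly 1.5TB of data at 24:1, regardless of whether the heap is set to 30GB or 32GB.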

The ideal ratio can however vary a lot depending on use case. What is the use case? Where does this recommendation come from?


Thank you!

My use case:
Using Logstash as a log parser (static logs which I keep on disk and then parse manually).
200+ folders
2000+ files in the folders
Each file ranges from a few hundred MB to a few GB
Each line in a file that matches the grok pattern typically has only ~2-5 fields stored (each field max 10-40 chars)
Each folder occupies 1 index, so 200+ indexes
Total folder size ~300 GB currently, and increasing
64 GB RAM, 30GB to Elastic (default 1GB to Logstash; since disk utilization is already at 100%, I don't think increasing it would help)
Elastic on HDD, Logstash parsing done on SSD
Very few queries currently (even in the future will not be much), mainly indexing at this point (manually running logstash on each folder)
Disk usage on the HDD is 0-1%; disk usage on the SSD is 100%
Logstash and Elastic on the same machine, although I'm planning to add another machine running Logstash (so 2 Logstash instances to 1 Elastic)

Sorry, just rambling here, but maybe you can find a few holes in this config :)
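As a quick sanity check of the setup described above against the 24:1 guideline (again assuming the ratio counts total node RAM):

```python
# Rough check of this node against the 24:1 disk:RAM guideline.
total_ram_gb = 64
data_gb = 300          # current total index size from the post above

ratio = data_gb / total_ram_gb
print(f"current disk:RAM ratio ~ {ratio:.1f}:1")        # ~4.7:1

# Headroom before this single node reaches 24:1:
print(f"room for up to {total_ram_gb * 24} GB of data")  # 1536 GB
```

On these numbers the node sits at roughly 4.7:1, well under 24:1, which is consistent with the HDD being nearly idle while the SSD doing the Logstash parsing is the bottleneck.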
