What determines the size of SSD cache to install?

I am running ES on a Synology NAS VM.

My data and indexes are also stored on the same NAS (same volume).

Drive bays in the NAS may be allocated to SSD cache for the filesystem.
The Elasticsearch docs say: "Elasticsearch heavily relies on the filesystem cache in order to make search fast."

Is there a way to calculate an optimum size for the SSDs? Is there a maximum size beyond which the benefit diminishes, or should I just install the largest SSDs I can afford?

The filesystem cache is managed by the OS and lives in RAM, which is why the usual guideline is to give Elasticsearch's heap no more than 50% of memory and leave the rest to the OS for caching.

The Synology approach is not directly analogous here, but you could install an SSD and then run Elasticsearch from it entirely.
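To see what that cache looks like in practice: on a Linux VM the filesystem cache is the kernel page cache, and you can watch how much RAM it is using at any moment. A minimal illustration (the figures will vary with your workload):

```sh
# The "buff/cache" column is RAM the kernel is currently using
# as filesystem (page) cache; it shrinks and grows automatically.
free -h
```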

Thank you for your advice.
I'm not familiar with a 50% guideline.
50% of what?

50% of total memory - https://www.elastic.co/guide/en/elasticsearch/reference/current/heap-size.html
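For example, on a node with 48 GB of RAM that guideline works out to roughly a 24 GB heap, leaving the other half of RAM free for the OS filesystem cache. A sketch of the relevant jvm.options entries (the 24g value is illustrative for a 48 GB node, not a recommendation for every setup):

```
# config/jvm.options - set min and max heap to the same value,
# at most ~50% of RAM and below the ~32 GB compressed-oops cutoff
-Xms24g
-Xmx24g
```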

So, if I put Elasticsearch on an SSD volume, would overall performance be better than creating an SSD cache, or would adding an SSD I/O cache speed up the system even more?

I didn't understand your 'heap' suggestion, but if I am searching multiple indexes totalling around 2 TB, would the 50% be 1 TB (which would obviously be impractical and unaffordable)?

Or, as I have 48 GB of RAM, do you mean a 28 GB SSD cache?

If Elasticsearch runs on an SSD then you will get better performance for sure.
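If you do run it from the SSD, the only Elasticsearch-side change is pointing the data path at the SSD-backed volume. A minimal sketch, assuming a hypothetical mount point /volume2/elasticsearch on the SSD volume:

```yaml
# config/elasticsearch.yml - keep index data on the SSD-backed volume
path.data: /volume2/elasticsearch
```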

"Memory" is the system RAM available to processes running on the system. "Heap" is the amount of memory you allocate to the JVM that Elasticsearch runs in. So the 50% guideline refers to your RAM, not your index size: on a 48 GB node that would be roughly a 24 GB heap, with the rest left free for the filesystem cache.
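One way to see the two numbers side by side on a running node is the cat nodes API (shown here with curl against a local node; adjust host and port for your setup):

```sh
# heap.max = heap allocated to the Elasticsearch JVM,
# ram.max  = total RAM on the machine
curl 'localhost:9200/_cat/nodes?v&h=name,heap.max,ram.max'
```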
