ELK 7.16.2 hardware requirements - suggestions?

Hi Guys, I am using the ELK stack in version 7.16.2 and I would like to ask for some suggestions about the hardware configuration.

My Elasticsearch will receive about 5 MB of logs every day and I would like to have a retention of 3 years. So:

  • 5 MB / day
  • 150 MB / month (5 MB × 30 days)
  • 1800 MB / year (150 MB × 12 months)
  • 5400 MB in 3 years (1800 MB × 3 years), so 5.4 GB in total.
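The arithmetic above can be sketched as a few lines of Python (this is raw log volume only; actual on-disk size in Elasticsearch will differ due to indexing overhead, replicas, and compression):

```python
# Rough storage estimate: 5 MB/day of logs retained for 3 years.
daily_mb = 5
monthly_mb = daily_mb * 30    # 150 MB / month
yearly_mb = monthly_mb * 12   # 1800 MB / year
total_mb = yearly_mb * 3      # 5400 MB over 3 years
total_gb = total_mb / 1000    # 5.4 GB
```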

I need some help with the hardware requirements, and reading the Elastic blog I found this:

For memory-intensive search workloads, more RAM (less storage) can improve performance. Use high-performance SSD drives and a RAM-to-storage ratio of 1:16 (or even 1:8). For example, if you use a ratio of 1:16, a cluster with 4GB of RAM will get 64GB of storage allocated to it.

For logging workloads, more storage space can be more cost effective. Use a RAM-to-storage ratio of 1:48 to 1:96, as data sizes are typically much larger compared to the RAM needed for logging. A cost effective solution might be to step down from SSDs to spinning media, such as high-performance server disks. For example, if you use a ratio of 1:96, a cluster with 4GB of RAM will get 384GB of storage allocated to it.
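The ratio rule quoted above is just a multiplication; a minimal sketch, assuming the ratio is expressed as 1:N:

```python
def storage_for_ram(ram_gb, ratio):
    """Storage allocation implied by a RAM-to-storage ratio of 1:ratio."""
    return ram_gb * ratio

# The two examples from the quoted guidance:
search_gb = storage_for_ram(4, 16)    # search workload, 1:16 -> 64 GB
logging_gb = storage_for_ram(4, 96)   # logging workload, 1:96 -> 384 GB
```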

So in my configuration I should size for a memory-intensive search workload, right? And if I configure a server with 1 node (if ELK uses 3 nodes only for maximum reliability, I don't need that), 10 GB of disk and 1 GB of RAM, should that be fine?

Thanks in advance!

1GB RAM is not enough. RAM is so cheap nowadays. I would not go below 16GB, because you need RAM for ingestion/visualization and search.

Hi @Ely_96

Are you going to do this in Elastic Cloud? If so, with which provider?

Or self managed?

Also, is there going to be a lot of searching etc.?

5 MB/day is a very, very small ingest volume.

If you really, really do not need high availability,

I would do a minimum of 1 x 8-16GB host with min 2-4 vCPUs and a 10GB SSD.

This will REALLY depend on how you manage your shards and indices: the guideline is a max of 20 shards per 1 GB of JVM heap space. If you do not manage your shards well, you might even need more RAM.
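The 20-shards-per-GB-of-heap guideline above translates into a simple cap; a sketch, where the node sizes are illustrative:

```python
def max_recommended_shards(heap_gb, shards_per_gb=20):
    """Shard cap implied by the guideline of ~20 shards per 1 GB of JVM heap."""
    return heap_gb * shards_per_gb

# e.g. a node with a 4 GB heap should stay under about 80 shards:
cap_4gb = max_recommended_shards(4)
```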

Hi, thanks a lot for your answers :slight_smile:

I could use Elastic Cloud on AWS or the self-managed version on my own host, but are the requirements different? Or do I need more than 8GB RAM in both cases?


Your data volumes are very low, so you will not need a very powerful cluster/node. 8GB of RAM IMHO seems very high. I would recommend starting with 4GB RAM with a 2GB heap, but you may be fine with half of that. As you have a long retention period I would recommend you use monthly indices in order to keep the shard count down.

@Ely_96 as you can see there are a number of variables to think about.

@Christian_Dahlqvist is right: if you are very good / careful about managing your shards you could probably use a very small node (many folks are not :slight_smile: ) ... if you are not, and have many shards and indices, you will need more memory.

If your indices are monthly you will have fewer indices and shards and can use a smaller memory footprint.. the tradeoff is that when you clean them up you will lose a whole month's worth of data ... perhaps that is OK ... perhaps it is not. Perhaps you need weekly ... or daily....
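To make the tradeoff concrete, here is a rough count of how many indices (and therefore shards, assuming a hypothetical simple setup of 1 primary shard per index and no replicas) a 3-year retention produces at each rollover interval:

```python
retention_days = 3 * 365  # 3-year retention, ignoring leap days

def index_count(days_per_index):
    """Number of time-based indices needed to cover the retention window."""
    return -(-retention_days // days_per_index)  # ceiling division

daily = index_count(1)     # one index per day
weekly = index_count(7)    # one index per week
monthly = index_count(30)  # one index per month (approximating 30 days)
```

With daily indices the shard count alone would blow past the 20-shards-per-GB-of-heap guideline on a small node, while monthly indices keep it trivially low.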

Think about that... In your case the actual volume of data is not the driving factor; it will be the number of shards.

The nice thing about deploying on Elastic Cloud is that you could start small and scale up if needed.


It is also worth noting that there have been a number of improvements in Elasticsearch 8.x that reduce memory usage and per-shard overhead, so I would recommend upgrading to the latest version.


Thanks a lot for all of your suggestions :slight_smile:

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.