Using huge NVMe disks with Elasticsearch

Hello everyone.

We're currently in the process of choosing new hardware for our Elasticsearch cluster.

Our current cluster consists of 32 nodes and holds 42TB of data across 2000 indices.

We're choosing hardware from the specs our provider offers.
One of the options we're considering has 256GB RAM and 4x2TB NVMe SSDs.
We're planning to join those in RAID0, which would give us 8TB of NVMe SSD per node.
My question is: is that maybe a bit too much, since some of our shards are pretty small and we may cross the recommended boundary of 20 shards or fewer per GB of heap memory?
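As a quick sanity check (just arithmetic, not an official tool), the 20-shards-per-GB-of-heap guideline works out like this; the 31GB heap figure is an assumption based on the usual compressed-oops heap ceiling:

```python
def max_recommended_shards(heap_gb: float, shards_per_gb: int = 20) -> int:
    """Upper bound on shards per node under the common
    '20 shards or fewer per GB of heap' guideline."""
    return int(heap_gb * shards_per_gb)

# A 64GB RAM node typically runs a ~31GB heap (compressed-oops limit):
print(max_recommended_shards(31))  # 620 shards per node
```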

And since this node arguably has too much RAM as well (although it can be used as filesystem cache), we were considering splitting each node into 4 LXC containers with 64GB RAM and 2TB each. Which option would be preferable from an Elasticsearch perspective?

Bare metal. But honestly, containerising things makes much more logical sense.

  • Why use RAID0? You should use single disks (/data01, /data02, /data03, etc.) and Elasticsearch will manage them. If you lose one disk, you only lose ~25% of the shards on that node. If you use 8TB with RAID0, then one dead disk means 100% of the shards on that node are lost.
  • 256GB RAM might be overkill, and in that sense your logic to split it is right.
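For reference, the multiple-data-path layout from the first bullet is just a list under `path.data` in `elasticsearch.yml` (mount points here are illustrative):

```yaml
# elasticsearch.yml — one entry per physical NVMe disk
path.data:
  - /data01
  - /data02
  - /data03
  - /data04
```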

I am also in the process of setting up a 20-node cluster with the same amount of NVMe, but 98GB RAM per node.

Performance vs. other benefits

I believe multiple data paths are being deprecated, so RAID0 is likely the better option. I recall seeing a discussion about that around here somewhere…

Hi! Thank you for your reply.

@elasticforme Your question mentions VMs, which implies a performance overhead much larger than that of LXC (which is not VMs but containers), so I'm not sure the reply to your question is applicable here.

Whoa, I didn't see that anywhere, and I do have all my systems set up with multiple data paths.

Timur, yes, a VM and a container are not the same, but they are close, because they use the same resources on the same hardware.

Christian is right: multiple data paths are deprecated as of 7.13, and 8.0 will require a node per data path instead.
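Under that node-per-data-path model, the same box would run several Elasticsearch nodes, each pointed at a single disk. A minimal sketch of one such node's `elasticsearch.yml` (node name and path are illustrative, not from the thread):

```yaml
# elasticsearch.yml for one of several nodes on the same host,
# each bound to its own physical disk
node.name: node-data01
path.data: /data01
```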


IMHO this is the wrong move.

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.