Raid 0 SSD?

I was considering setting up servers with the data disks in RAID 0. Since we have a replica on another server, I figured that would be a good way to save a lot of money (SSDs are by far the most expensive part of a new cluster setup).

I figured I'd use 1 TB disks, approx. 10 per server. That of course means the risk of failure is roughly 10×, since only one of the 10 disks needs to fail for the entire server's datastore to fail.
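(For what it's worth, the ~10× figure is the standard small-probability approximation for independent failures: if each disk fails within a given period with probability p, a 10-disk RAID 0 fails with probability 1 - (1 - p)^10, which is ≈ 10p for small p.)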

Does anyone have any experience with doing that, or is it just a stupid idea? :slight_smile:

Why not use multiple path.data entries and let ES "stripe" it?
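For reference, that's just a matter of listing one mount point per SSD in elasticsearch.yml; a minimal sketch (the mount points are made-up examples):

```yaml
# elasticsearch.yml - one entry per physically separate SSD
# (mount points are illustrative)
path.data: ["/mnt/ssd0", "/mnt/ssd1", "/mnt/ssd2", "/mnt/ssd3"]
```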

Interesting. So I should simply present each SSD on a separate mount point and let ES handle it, meaning I'd only lose the shards on one disk.

The only issue with that is if one shard ever grew above ~960 GB (the usable size of one SSD), but that should be very unlikely, since I'm using daily indices with at minimum 4 shards per index.

Can Elasticsearch handle distributing disk usage over multiple paths like that? That would be pretty sweet.

Anyone using something like that?

Someone asked the same thing here, with no real answer: Using RAID 0 vs multiple data paths after commit #10461

It seems no one is actually using the multiple-paths approach, even though at multi-TB cluster sizes that's a lot of money saved, if it's more stable than RAID 0 (which, with 10 disks backing it, has 10× the risk of failure).

@warkholm - it seems ES no longer stripes the files of a shard across paths, which avoids a single disk failure taking out every shard: https://github.com/elastic/elasticsearch/issues/9498. So from 2.0+ it should be fine to use multiple paths, and you'll only lose part of your data when a disk fails (instead of the entire RAID 0).

As for write performance compared to RAID 0: it should be worse, depending on how well ES spreads writes over the shards. With multiple paths per server, it would probably make sense to have more shards per index?

6 servers with 10 paths each = 60 disks. If we want to spread the write load as evenly as possible, we'd need a lot more shards than the usual 6, so write performance will suffer compared to RAID 0.
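If you go that route, the usual way to raise the shard count for daily indices is an index template; a sketch against the 2.x API (the template name, pattern, and shard count are just examples):

```sh
# Hypothetical template: every new logs-* index gets 12 primary shards
# (plus replicas), spreading the write load over more disks
curl -XPUT 'localhost:9200/_template/daily_logs' -d '{
  "template": "logs-*",
  "settings": {
    "number_of_shards": 12,
    "number_of_replicas": 1
  }
}'
```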

Personally I prefer RAID 0 to multiple data paths, even with 2.0's fixes. I see it as a performance vs. safety tradeoff, and usually I'm fine with the safety that comes from sharding. But I agree that it's a nice tradeoff to be able to make. I wouldn't, for example, RAID 0 four disks together. It is just too much bother.

If you have a hot/warm setup where you keep historical data in Elasticsearch, you could use multiple path.data entries pointing at spinning disks on the warm nodes (a dozen disks each) and RAID 0 across two or three SSDs for your hot nodes.
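Tagging the two classes of nodes is just a custom node attribute in each node's config; a sketch using the box_type convention from the hot/warm blog post, in 2.x syntax (the attribute name and values are conventions, not requirements):

```yaml
# elasticsearch.yml on an SSD-backed (hot) node;
# spinning-disk nodes get "node.box_type: warm" instead
node.box_type: hot
```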

I think one shard getting bigger than a whole SSD isn't a good argument. You have other problems if you let a shard get that big, like recovery time. I'd shoot for shards an order of magnitude smaller than your SSDs.

Why multiple paths, addressing each SSD as a single disk?

If you use RAID 0 on a hardware controller (not the OS-based crap), you can multiply the speed of each disk: e.g. 8 disks in RAID 0 give 8× the speed (assuming the HW controller can cope with that transport capacity, e.g. a 12 Gb/s SAS controller).

In my setup, indexing speed is crucial. RAID 0 pays you back every EUR you invest in SSDs.

I have one server with 24 × 240 GB disks in RAID 10, and I've had 2 disks fail within a few months. With 10× the disks, the aggregate failure rate is rather high, and with RAID 0 a single failed disk takes out the entire array. That would be avoided by addressing each disk separately, but then you don't spread the write load over many SSDs, as you also correctly note. It would be nice to know what people have had good experience with :slight_smile:

How do I make Elasticsearch migrate shards away from the hot area every night? If that's possible, it could be a good solution to split into RAID 0 for hot data plus long-term storage (spinning disks, perhaps with some SSD cache).

Take a read of https://www.elastic.co/blog/hot-warm-architecture

The only way to do that on a single node is to run multiple ES instances.

So what nik9000 suggested actually isn't possible?

You can do that, you just need the different storage attached to different nodes.

Hmm, can I tell ES to move an index from one host to another, or do I have to dump and restore (plus some routing to ensure it doesn't end up on the same node as before :))?

Yes, you can, by setting node attributes and routing indexes to certain nodes:
https://www.elastic.co/guide/en/elasticsearch/reference/2.3/shard-allocation-filtering.html
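For example, a nightly job could retag yesterday's index so its shards migrate over to the warm nodes (the index name and the box_type attribute follow the hot/warm convention above; both are illustrative):

```sh
# Tell ES to relocate this index's shards to nodes tagged box_type=warm
curl -XPUT 'localhost:9200/logs-2016.05.01/_settings' -d '{
  "index.routing.allocation.require.box_type": "warm"
}'
```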

Read that blog post I posted.

Link? Googling "Mark Warkom" blog gives no relevant hits.

I think he meant the link he posted here: Raid 0 SSD?
