Using Ceph with Elasticsearch

Does anyone have experience with using Ceph as storage for Elasticsearch?

I am looking for a way to make the storage part more fault tolerant at the OS level. I know you can use multiple replicas for this, but I am investigating a way to prevent shard failures caused by failing disks or RAID controllers.

It'd be really slow as you're running a distributed application on a distributed FS.

Why not just let ES handle it with shards and replicas?

Hi Mark,

Here goes... Currently we have servers with 7 disks of ES data. Each data node gets a data.path list of all 7 local disks. With ES 1.7, data for a single shard is spread over all disks using 'least used'. With 2.0, data for a single shard will be put on 1 disk, not all 7. Great for resiliency, because it solves 'partially failed shards' when 1 of the 7 disks dies.
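For anyone following along, the multi-disk setup described above looks roughly like this in elasticsearch.yml (the mount points are hypothetical; substitute your own):

```yaml
# elasticsearch.yml -- one entry per local data disk
# (mount points /data/disk1 .. /data/disk7 are illustrative, not our real paths)
path.data:
  - /data/disk1
  - /data/disk2
  - /data/disk3
  - /data/disk4
  - /data/disk5
  - /data/disk6
  - /data/disk7
```

The config is the same in 1.7 and 2.0; what changes is the behavior: 1.7 stripes a shard's files across all listed paths, while 2.0 places each shard entirely on a single path.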

But ... having all data for 1 shard on 1 disk makes that disk hot while indexing. Much hotter than with 1.7, because that version spreads write ops over all disks. So with ES 2.0 we are thinking spinning disks will no longer cut it.

And so we are investigating alternatives, other than just throwing in flash storage...

Regards!

Have you tested it?
What is your sharding strategy?
What is your load?

No, we are investigating options by reading documentation and talking to people.

Check https://www.elastic.co/guide/en/elasticsearch/reference/2.0/setup-dir-layout.html for the '1 shard on 1 data.path' feature.

Regarding your other questions:

  • daily indices
  • 5 shards for biggest index, 3 shards for others
  • all have 1 replica
  • 20-40k docs/sec indexing
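For context, that sharding setup would correspond to an index template along these lines (template name and index pattern are illustrative, not from our actual config):

```json
PUT _template/daily_logs
{
  "template": "logs-*",
  "settings": {
    "number_of_shards": 3,
    "number_of_replicas": 1
  }
}
```

The biggest index would get its own template (or explicit settings at creation time) with `number_of_shards: 5` instead.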

And you?

I don't really run clusters like this anymore unfortunately, but I do know ES a bit :smile:

I do think that you're overly worried, though. Ceph will likely hurt performance more than hitting a single disk does (don't forget you need to hit the network with Ceph), and without testing it's hard to say that the workload will actually kill a disk.

Why not just use hardware/software RAID?

I cannot add new disks to the existing servers, so RAID would require me to either replace all disks with larger ones (increasing total seek latency) or add more servers. RAID is on the table, but it has its pros and cons.