Does Elasticsearch support multiple non-RAID-0 data paths?

Hello,

we are currently running our Elasticsearch on a single node and ingest about
20 million logs per day (40 GB of daily indices). Since this is a lot to
handle, the indices take up a lot of disk space on our server.

What we like to implement:

  • 1 data directory stored on our SSDs, containing the indices of the
    last 7 days, for quick access.
  • 1 data directory stored on normal HDDs, containing the indices of the
    last 3 months, for normal-speed access.
  • 1 data directory stored on slow 5,400 rpm HDDs, containing the indices
    of the last 2 years, for access if needed.

Well, it's no problem to tell ES to use multiple data paths, but if you do,
ES will stripe (RAID-0 style) the indices across all 3 data directories.
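For reference, this is roughly what the multi-path setup looks like in elasticsearch.yml (the mount points below are made-up examples). ES treats all listed paths as one striped pool, not as tiers:

```yaml
# elasticsearch.yml -- example paths, adjust to your mounts.
# ES stripes shard data across all of these (RAID-0 style);
# it does NOT treat them as fast/slow tiers.
path.data:
  - /mnt/ssd/elasticsearch
  - /mnt/hdd/elasticsearch
  - /mnt/hdd-5400/elasticsearch
```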

But that's not what we want. We want to move the indices with a script to
the matching directories (an index that is older than 8 days gets
automatically moved to the normal HDDs, and so on).

Is there any way to make this work?

Thanks for your feedback.

--
You received this message because you are subscribed to the Google Groups "elasticsearch" group.
To unsubscribe from this group and stop receiving emails from it, send an email to elasticsearch+unsubscribe@googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msgid/elasticsearch/e8e34044-5895-4ccd-bac4-5ef11ea81204%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.

Assuming these are all in the same server: you can't do this unless you run
multiple instances and then tell each instance which directory (mount) to
store its data in.
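A sketch of what those per-instance configs could look like (the cluster name, tag values, and paths are assumptions, not something from this thread). Each instance gets its own data path plus a custom node attribute that can later steer shard allocation:

```yaml
# instance 1 (SSD tier) -- e.g. config/elasticsearch-ssd.yml
cluster.name: logcluster
node.name: node-ssd
node.disk_type: ssd               # custom node attribute for allocation filtering
path.data: /mnt/ssd/elasticsearch
http.port: 9200
transport.tcp.port: 9300

# instance 2 (HDD tier) -- e.g. config/elasticsearch-hdd.yml
cluster.name: logcluster
node.name: node-hdd
node.disk_type: hdd
path.data: /mnt/hdd/elasticsearch
http.port: 9201
transport.tcp.port: 9301
```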

You'd then need to use something like this
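One way to handle the "move old indices" part without copying files by hand is shard allocation filtering: update an index's settings so its shards are only allowed on nodes with a given attribute, and ES relocates the shards itself. A hypothetical example (the index name, attribute name, and host are assumptions):

```
# Pin an index older than 8 days to the HDD-tagged node;
# ES then relocates its shards automatically.
curl -XPUT 'http://localhost:9200/logstash-2014.05.01/_settings' -d '{
  "index.routing.allocation.require.disk_type": "hdd"
}'
```

A nightly cron job (or a tool like Elasticsearch Curator) can issue this for every index past the age threshold, instead of moving directories manually.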

Regards,
Mark Walkom

Infrastructure Engineer
Campaign Monitor
email: markw@campaignmonitor.com
web: www.campaignmonitor.com

On 15 May 2014 20:57, horst knete baduncle23@hotmail.de wrote:

