We are currently running Elasticsearch on a single node and ingest about
20 million logs per day (40 GB of daily indices). Since this is a lot of
data to handle, the indices take up a lot of disk space on our server.
What we would like to implement:
- 1 data directory stored on our SSDs, holding the indices of the last
7 days for quick access.
- 1 data directory stored on normal HDDs, holding the indices of the
last 3 months for normal-speed access.
- 1 data directory stored on slow 5400 rpm HDDs, holding the indices of
the last 2 years for access if needed.
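As far as I know, Elasticsearch can do this kind of tiering itself via shard allocation filtering: instead of one node with three data paths, you run (or tag) a node per storage class with a custom attribute in elasticsearch.yml. A sketch for the SSD node (the attribute name `box_type` is our own choice, not anything built in):

```yaml
# elasticsearch.yml on the SSD-backed node; "box_type" is an
# arbitrary custom attribute (on ES 5.x+ it would have to be
# written as node.attr.box_type).
node.box_type: hot
path.data: /mnt/ssd/elasticsearch
```

New daily indices would then be created with `index.routing.allocation.require.box_type: hot` so their shards are placed on the SSD node.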
Well, it's no problem to give ES multiple data paths, but if you do, ES
will stripe (RAID 0) the indices across all 3 data directories.
But that's not what we want. We want to move the indices with a script to
the matching directories (an index older than 8 days gets automatically
moved to the normal HDDs, and so on).
Is there any way to make this work?
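One way this could work without copying files by hand: tag the nodes with a custom attribute per storage tier and have a nightly script update `index.routing.allocation.require` on each index according to its age, so ES relocates the shards itself. A minimal sketch of the age-to-tier decision, assuming daily indices named like `logstash-YYYY.MM.DD` (the index naming, the `box_type` attribute, and the tier names are assumptions, not anything built into ES):

```python
from datetime import date

def tier_for(index_name, today):
    # Thresholds from the plan above: < 8 days on SSD ("hot"),
    # < ~3 months on normal HDDs ("warm"), otherwise slow HDDs ("cold").
    # Assumes daily indices named like "logstash-2014.06.01".
    day = date(*map(int, index_name.split("-", 1)[1].split(".")))
    age = (today - day).days
    if age < 8:
        return "hot"
    if age < 90:
        return "warm"
    return "cold"

# A cron job would then PUT the matching tier onto each index, e.g.:
#   curl -XPUT localhost:9200/logstash-2014.05.01/_settings \
#     -d '{"index.routing.allocation.require.box_type": "warm"}'
```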
Thanks for your feedback.
You received this message because you are subscribed to the Google Groups "elasticsearch" group.
To view this discussion on the web visit https://groups.google.com/d/msgid/elasticsearch/e8e34044-5895-4ccd-bac4-5ef11ea81204%40googlegroups.com.