I'm still pretty new to this stack and looking for pointers. We have a
production 8-node cluster that keeps 14 days' worth of indices open.
Keeping more indices open requires more memory than we have available,
so we use the elasticsearch-curator script to close indices older than
14 days.
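For context, a minimal curator action file for that kind of close policy might look like the sketch below. The index prefix (`logstash-`) and the timestring are assumptions about your naming scheme, so adjust them to match your indices:

```
actions:
  1:
    action: close
    description: Close indices older than 14 days
    options:
      ignore_empty_list: True
    filters:
    - filtertype: pattern
      kind: prefix
      value: logstash-      # assumed index prefix
    - filtertype: age
      source: name
      direction: older
      timestring: '%Y.%m.%d' # assumed date format in index names
      unit: days
      unit_count: 14
```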
People want to be able to search (via Kibana) data from several months
back. I've read about snapshots and was thinking of moving snapshots to
Amazon S3 storage and then spinning up a Kibana/Elasticsearch instance
pointing to the data living there.
Is this a good approach? What exactly is the procedure for doing this?
Can Elasticsearch read snapshots directly?
You cannot read data that is stored in a snapshot directly.
If you want the data to be searchable, you will need additional node(s).
You could add something smaller with a lot of disk, move the older
indices there, and then close them (reopening when needed), or you will
have to snapshot the indices and restore them when required.
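To sketch the snapshot-and-restore route: assuming the repository-s3 plugin is installed, you register an S3 repository once, snapshot indices before closing or deleting them, and restore a specific index when someone needs to search it. The repository name, bucket name, snapshot name, and index name below are all placeholders:

```
PUT /_snapshot/s3_backup
{
  "type": "s3",
  "settings": {
    "bucket": "my-es-snapshots"
  }
}

PUT /_snapshot/s3_backup/snapshot_2016.01.01?wait_for_completion=true

POST /_snapshot/s3_backup/snapshot_2016.01.01/_restore
{
  "indices": "logstash-2016.01.01"
}
```

Restores pull the data back into the live cluster, so you still need enough disk and heap on some node to hold whatever you restore while it is being searched.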