How to ingest old logs into slow storage and newer logs into fast storage

I am experimenting with the Elastic Stack on my laptop. I need to ingest a bunch of logs dating back a few years up to current time. I am using logstash to process the logs and send them to elasticsearch. I have limited space on my SSD but plenty of space on my HDD.

I would like to ingest the logs and allocate the indexes to either the SSD or HDD based on the timestamp of the logs. For example, if the logs are 2 weeks old or newer, they go to the SSD, and if they are older than 2 weeks, they go to the HDD. Is this possible?

It is. Check out the shard allocation filtering docs: https://www.elastic.co/guide/en/elasticsearch/reference/current/shard-allocation-filtering.html
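As a rough sketch of what that looks like: assuming each node is tagged with a custom attribute (I'm calling it `data_type` here, but the name is arbitrary, and the index name below is just an example), you can pin an existing index to a node type with a single settings update:

```json
PUT /logstash-2019.03.01/_settings
{
  "index.routing.allocation.require.data_type": "warm"
}
```

Elasticsearch will then relocate that index's shards onto nodes whose `node.attr.data_type` matches `warm`.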

Thank you for your help.

Is this a setting that can be set at the time of index creation in logstash? Or does this have to be done manually after index creation?

You will need to have at least 2 nodes:
1 node that uses SSD storage and acts as the hot node, receiving the newest logs
1 node that uses HDD storage and acts as the warm node, receiving the older logs
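To distinguish the two nodes, you can tag each one with a custom attribute in its elasticsearch.yml. A minimal sketch, assuming the attribute name `data_type` and the data paths are your own choices (neither is built in):

```yaml
# elasticsearch.yml on the SSD-backed (hot) node
node.attr.data_type: hot
path.data: /mnt/ssd/elasticsearch

# elasticsearch.yml on the HDD-backed (warm) node
node.attr.data_type: warm
path.data: /mnt/hdd/elasticsearch
```

The allocation filtering settings from the linked docs then match indices against this attribute.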

You can do it ahead of time using index templates.
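For example, a legacy index template like the one below (the template name, index pattern, and `data_type` attribute are placeholders for illustration) makes every new daily index that Logstash creates start out on the hot node. Indices older than two weeks can then be moved by flipping the same setting to `warm`, e.g. from a cron job or with Elasticsearch Curator:

```json
PUT _template/logs_hot
{
  "index_patterns": ["logstash-*"],
  "settings": {
    "index.routing.allocation.require.data_type": "hot"
  }
}
```

Since Logstash writes to time-based indices by default, no manual step is needed at index creation time; only the later hot-to-warm move has to be scheduled.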

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.