Hello, I am a self-funded student capturing honeypot logs in my homelab. My workstation has two SSDs: one holds the VMs' OS (Ubuntu Server 20.04.2 LTS), and the second contains a data partition, mounted in the VMs, where Elasticsearch stores its data. I am quickly running out of space and have to keep deleting older indices, even though that data would help my final-year dissertation. I cannot add or purchase higher-capacity SSDs due to technical and financial constraints, respectively. I have a three-node cluster with two data nodes and one voting-only node.
Is there a way to automatically move data older than one (or more) months to a NAS that I have? It has 7200 RPM HDDs, so its IOPS are a bottleneck for me. However, if I can mount a partition from it in the VMs and migrate the data there automatically, with compression, I may be able to finish the year without additional expenses.
I already know how to mount the partition from the HDDs (I use iSCSI to present the disks to ESXi and then mount them in the VMs).
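For reference, this is roughly how I picture configuring the data node that sits on the NAS-backed storage. The mount point `/mnt/nas-data` and the attribute name `data_type` are my own placeholders, and the `data_warm` role assumes a 7.10+ cluster with data tiers; on older 7.x I would rely on the custom attribute alone:

```yaml
# elasticsearch.yml on the NAS-backed data node (sketch, not tested)
node.roles: [ data_warm ]                  # dedicated warm-tier role (7.10+)
path.data: /mnt/nas-data/elasticsearch     # hypothetical iSCSI mount point

# custom attribute, usable with attribute-based allocation filtering
# if I end up not using the built-in data tiers
node.attr.data_type: warm
```

The SSD-backed node would keep `data_hot` (or `data_content`) so new writes stay on fast storage.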
I need help understanding whether it is possible to compress and store indices in a way where IOPS/retrieval time may suffer, but the data remains searchable and retained.
Here is the help I need:
- Will tiering help with compression?
- Can I have two independent storage paths for indices: 1) hot storage on the SSD and 2) warm/cold storage on the NAS HDD (mounted on the VM)?
- Can I use an automated mechanism (ILM) to perform this migration?
- Will I be able to "search" (I do not need writes) indices in warm/cold storage?
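To make the last two questions concrete, this is the kind of ILM policy I have in mind. It is only a sketch: the policy name, the 30-day age threshold, the rollover sizes, and the `data_type: warm` attribute are my assumptions, and I believe `index_codec: best_compression` in the forcemerge action needs a reasonably recent 7.x release:

```json
PUT _ilm/policy/honeypot-logs
{
  "policy": {
    "phases": {
      "hot": {
        "actions": {
          "rollover": { "max_age": "30d", "max_size": "50gb" }
        }
      },
      "warm": {
        "min_age": "30d",
        "actions": {
          "allocate": {
            "require": { "data_type": "warm" }
          },
          "shrink": { "number_of_shards": 1 },
          "forcemerge": {
            "max_num_segments": 1,
            "index_codec": "best_compression"
          }
        }
      }
    }
  }
}
```

My understanding is that the `allocate` action would move shards onto the NAS-backed node, and the shrink + forcemerge with `best_compression` would shrink the on-disk footprint while keeping the indices read-only but searchable. Corrections welcome.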