Here is the situation:
Cluster: 8 data nodes, each with ~24 TB of dedicated Elasticsearch storage
Daily data: ~250 GB of primary data, ~520,000,000 documents
Indices are created daily; the index template is currently set to 8 primary shards (~31.5 GB per shard) with 3 replicas for search performance.
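For reference, the template boils down to something like this (a sketch using the composable template API; the template name and the `logs-*` pattern are placeholders, not the real names, and on older Elasticsearch versions this would be the legacy `_template` API instead):

```json
PUT _index_template/daily-logs
{
  "index_patterns": ["logs-*"],
  "template": {
    "settings": {
      "number_of_shards": 8,
      "number_of_replicas": 3
    }
  }
}
```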
Current problem: If my calculations are correct, I can retain the data for roughly 192 days, but it needs to be kept longer. Closing indices is not an option, since there could be a need to search over, say, a whole year. Also, these logs make up most of the data but not all of it: other logs are written to a different index prefix (and are deleted after a certain time), so there should still be a little headroom for more data.
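The 192-day figure can be sanity-checked with the numbers above (a back-of-the-envelope sketch; 1 TB is approximated as 1,000 GB):

```python
# Back-of-the-envelope retention check from the figures in this post.
node_storage_tb = 24      # dedicated Elastic storage per data node
data_nodes = 8
daily_primary_gb = 250    # primary data written per day
replicas = 3              # copies in addition to the primary

total_capacity_gb = node_storage_tb * 1000 * data_nodes  # 192,000 GB
daily_total_gb = daily_primary_gb * (1 + replicas)       # 1,000 GB/day

retention_days = total_capacity_gb // daily_total_gb
print(retention_days)  # -> 192
```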
How would you (or do you) handle the indices to extend retention while still being able to search over a larger time range? A full-text search without specifying any fields over the last 24 hours currently takes ~40 seconds.
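The slow case is essentially an unscoped full-text query filtered to the last day, something like this (a sketch; the `logs-*` pattern, the `@timestamp` field name, and the search term are assumptions, not taken from the actual setup):

```json
GET logs-*/_search
{
  "query": {
    "bool": {
      "must": {
        "query_string": { "query": "some search term" }
      },
      "filter": {
        "range": { "@timestamp": { "gte": "now-24h" } }
      }
    }
  }
}
```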