If Logstash sends about 4 GB of logs per day into one index, for how many days will that data be stored in ES?
Elasticsearch does not automatically delete any data, so unless you use time-based indices (e.g. a data stream) with ILM, data will be retained forever (or until shard or disk limits are reached).
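As a minimal sketch (the policy name and thresholds here are hypothetical, adjust them to your own retention needs), an ILM policy that rolls indices over and deletes them 30 days after rollover could look like this in Kibana Dev Tools:

```json
# Hypothetical policy: roll over daily or at 50 GB per primary shard,
# then delete backing indices 30 days after they roll over.
PUT _ilm/policy/logs-retention
{
  "policy": {
    "phases": {
      "hot": {
        "actions": {
          "rollover": {
            "max_primary_shard_size": "50gb",
            "max_age": "1d"
          }
        }
      },
      "delete": {
        "min_age": "30d",
        "actions": {
          "delete": {}
        }
      }
    }
  }
}
```

Note that `min_age` in the delete phase is measured from rollover, so the effective retention is roughly the rollover interval plus 30 days.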
A single shard can hold at most around 2 billion documents, but the practical limit is often lower if you want queries to stay performant.
Thank you for your response. But if Logstash sends 4 GB of logs per day, how long can that data be stored on a server with a 1 TB disk? Is it possible to store those logs for 3 years in one index?
If you have time-based data coming into Elasticsearch, e.g. logs and/or metrics, you should use a data stream (not a single index) and manage retention with Index Lifecycle Management (ILM). How much space the data takes up on disk compared to the size of the raw data depends on mappings, index settings and the average shard size.
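As a rough sanity check (assuming the indexed size ends up close to the 4 GB/day of raw logs, which is not guaranteed in either direction): 4 GB/day × 365 days × 3 years ≈ 4.4 TB of primary data, before any replicas, so 3 years of retention will not fit on a 1 TB disk however you organise the indices. For the setup itself, a data stream is created from an index template that references an ILM policy; the names below (`logs-template`, `logs-myapp-*`, `logs-retention`) are hypothetical examples:

```json
# Hypothetical template: any index matching logs-myapp-* becomes a
# data stream whose backing indices are managed by the logs-retention policy.
PUT _index_template/logs-template
{
  "index_patterns": ["logs-myapp-*"],
  "data_stream": {},
  "template": {
    "settings": {
      "index.lifecycle.name": "logs-retention"
    }
  }
}
```

Once the template exists, the first document Logstash writes to a matching name (e.g. `logs-myapp-prod`) creates the data stream, and ILM then handles rollover and deletion of the backing indices.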
But I also want to create a dashboard; in that case, how do I choose the data stream index?
You query through an index pattern or read alias that matches all the indices in the data stream. This is the standard way to store this type of data, so it is well supported by Kibana.
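For example (the data stream name is hypothetical), you can also search a data stream directly by name, which fans out to all of its backing indices:

```json
# Search the whole data stream for the last 7 days of documents.
GET logs-myapp-prod/_search
{
  "query": {
    "range": {
      "@timestamp": { "gte": "now-7d" }
    }
  }
}
```

In Kibana you would create a data view (index pattern) such as `logs-myapp-*` and build the dashboard on top of that.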
If I create 12 indices through the data stream, one per month at about 120 GB each, could that slow down Elasticsearch or cause any issues in production?