I connected a few Logstash agents to Elasticsearch, and together they collect just under 1 GB of data per day. I want to keep the full data for one to three days, but aggregate anything older. I don't need per-minute resolution for old data; one value per hour, for example, would be enough.
The logstash_index_optimize.py script should help you reclaim a bit of disk space on your old, read-only indices.
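For illustration, here is a minimal sketch of the kind of work that script does, not the script itself. It assumes the elasticsearch-py client, a node at localhost:9200, and the default logstash-YYYY.MM.DD daily index naming; force-merging an old index down to one segment frees space held by deleted documents and small segments.

```python
from datetime import datetime, timedelta, timezone

from elasticsearch import Elasticsearch

# Sketch only: hostname, retention window, and index naming are assumptions.
es = Elasticsearch("http://localhost:9200")
cutoff = datetime.now(timezone.utc) - timedelta(days=2)

for name in es.indices.get(index="logstash-*"):
    try:
        # Logstash names daily indices like logstash-2014.05.21.
        day = datetime.strptime(name, "logstash-%Y.%m.%d").replace(tzinfo=timezone.utc)
    except ValueError:
        continue  # skip indices that don't follow the daily pattern
    if day < cutoff:
        # Merge down to one segment to reclaim space on read-only indices
        # (force merge was called "optimize" in older Elasticsearch versions).
        es.indices.forcemerge(index=name, max_num_segments=1)
```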
Otherwise, unfortunately, Logstash and Elasticsearch cannot aggregate your data into per-minute or per-hour summaries. If you are running into disk space issues, I'm afraid there is no other choice but to shorten your retention policy or to buy more disks.
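If you do go the retention route, here is a short sketch of what dropping old daily indices could look like, again assuming elasticsearch-py and the logstash-YYYY.MM.DD naming; tooling such as Elasticsearch Curator automates the same idea.

```python
from datetime import datetime, timedelta, timezone

from elasticsearch import Elasticsearch

# Sketch only: keep 3 days of daily logstash indices, delete the rest.
es = Elasticsearch("http://localhost:9200")
cutoff = datetime.now(timezone.utc) - timedelta(days=3)

for name in es.indices.get(index="logstash-*"):
    try:
        day = datetime.strptime(name, "logstash-%Y.%m.%d").replace(tzinfo=timezone.utc)
    except ValueError:
        continue  # not a daily logstash index
    if day < cutoff:
        # Dropping a whole daily index is far cheaper than deleting documents.
        es.indices.delete(index=name)
```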