I am using the ELK stack in a UAT environment, along with Filebeat installed on various web servers.
While using it, I can see that the /elasticsearch-5.2.2/data/nodes directory is growing drastically, which ends up crashing the server where Elasticsearch is installed. Is there any way to handle such large volumes of data automatically? I am very new to Elasticsearch. What I want is to delete the older data indices automatically at regular intervals.
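For reference, a single dated index can be deleted by hand with the delete index API; the index name below follows the default daily Filebeat naming and is only an illustration:

```
curl -XDELETE 'http://localhost:9200/filebeat-2017.03.01'
```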
Thanks for your reply. The server crashes because of 100% disk utilization, as the data/nodes directory grows in size very quickly: about 50 GB is consumed in a day.
Also, can you please help me use Elasticsearch Curator step by step, as I am very new to this? Will it manage the log indices automatically?
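For what it's worth, a minimal Curator setup for this use case might look like the sketch below. It assumes Curator 5.x, Elasticsearch on localhost, the default daily Filebeat index naming (filebeat-YYYY.MM.DD), and a 7-day retention window; adjust hosts, prefixes, and counts to your environment.

```yaml
# config.yml: tells Curator how to reach the cluster
client:
  hosts:
    - 127.0.0.1
  port: 9200
logging:
  loglevel: INFO
```

```yaml
# delete_old_indices.yml: drop filebeat-* indices older than 7 days,
# based on the date embedded in each index name
actions:
  1:
    action: delete_indices
    description: Delete filebeat indices older than 7 days
    options:
      ignore_empty_list: True
    filters:
    - filtertype: pattern
      kind: prefix
      value: filebeat-
    - filtertype: age
      source: name
      direction: older
      timestring: '%Y.%m.%d'
      unit: days
      unit_count: 7
```

Run it once with `curator --config config.yml --dry-run delete_old_indices.yml` to verify it would delete what you expect, drop the `--dry-run` flag to delete for real, and then schedule the command (for example from cron) so it runs daily.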
The space data takes up on disk will depend on your mappings, and you can often save a lot of space by optimising these. This is discussed in this blog post as well as in the documentation.
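As a sketch of the kind of change that can help (this assumes Elasticsearch 5.x and the filebeat-* naming above; the template name here is made up), an index template can disable the _all field and map string fields as non-analysed keyword so they are not indexed twice:

```
curl -XPUT 'http://localhost:9200/_template/filebeat_slim' -H 'Content-Type: application/json' -d '
{
  "template": "filebeat-*",
  "mappings": {
    "_default_": {
      "_all": { "enabled": false },
      "dynamic_templates": [
        {
          "strings_as_keyword": {
            "match_mapping_type": "string",
            "mapping": { "type": "keyword" }
          }
        }
      ]
    }
  }
}'
```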