I deployed an ELK and Filebeat stack on CentOS 7 and use Curator to remove old indices from Kibana. But the size of the data directory keeps growing, so I conclude that this directory still stores the old indices, even the ones I removed from Kibana. My question is: is there a way to shrink this directory? More precisely, is there a way to keep only the data that I visualize in Kibana?
This is not true: if you delete an index from ES, it removes both the data and the index. Could you please tell me how you reached this conclusion?
You can shrink your index with the Shrink API in Elasticsearch.
Please find the link to the Shrink API documentation for your reference.
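For what it's worth, a minimal sketch of using the Shrink API, assuming a hypothetical index named filebeat-2019.01.01 and a node named shrink-node (both names are illustrative, not from this thread). Note that shrinking reduces the number of primary shards of a live index; it does not reclaim space from indices that have already been deleted. Before shrinking, the source index must be made read-only and a copy of every shard relocated onto a single node:

```shell
# Step 1: block writes and move one copy of every shard to a single node.
curl -X PUT "localhost:9200/filebeat-2019.01.01/_settings" \
  -H 'Content-Type: application/json' -d'
{
  "settings": {
    "index.routing.allocation.require._name": "shrink-node",
    "index.blocks.write": true
  }
}'

# Step 2: shrink into a new index with fewer primary shards.
curl -X POST "localhost:9200/filebeat-2019.01.01/_shrink/filebeat-2019.01.01-shrunk" \
  -H 'Content-Type: application/json' -d'
{
  "settings": {
    "index.number_of_shards": 1,
    "index.number_of_replicas": 0
  }
}'
```

These commands target a live cluster on the default port, so treat them as a template rather than something to paste verbatim.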
Thanks for your response.
I concluded this because, thanks to Curator, I only keep indices for 2 days, so the data directory should stay at approximately the same size. And you're right, files are removed, but the size of the directory keeps growing. I checked that Curator does its job well, and we have approximately the same amount of logs every day.
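For reference, a Curator action file implementing the 2-day retention described above might look like the following (the filebeat- prefix and the %Y.%m.%d date pattern are assumptions about the index naming, not details from this thread):

```yaml
actions:
  1:
    action: delete_indices
    description: >-
      Delete indices older than 2 days, based on the date in the index name.
    options:
      ignore_empty_list: True
      disable_action: False
    filters:
    - filtertype: pattern
      kind: prefix
      value: filebeat-
    - filtertype: age
      source: name
      direction: older
      timestring: '%Y.%m.%d'
      unit: days
      unit_count: 2
```

If retention really is working, the set of indices reported by the cluster should roll over daily while the total stays roughly flat.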
Hi @jsandri, this seems surprising. When you delete an index Elasticsearch should delete the associated data from disk quite soon afterwards.
Can you share some actual figures? How big is the data directory, and how are you measuring this? How does that compare to the sizes of the individual indices reported by
GET /_cat/indices?bytes=b&v? What about the individual shard sizes reported by
GET /_cat/shards?bytes=b&v? Can you share the outputs here?
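One way to make the "how are you measuring this" comparison unambiguous is to sum the actual bytes on disk under the data path and set that against the totals from the _cat APIs. A small sketch; the script takes the path as an argument, and /var/lib/elasticsearch (the CentOS 7 RPM default) is an assumption — adjust to your path.data:

```shell
#!/bin/sh
# Report the total bytes under a directory, for comparison with the totals
# from GET /_cat/indices?bytes=b&v. Pass your path.data as the first
# argument, e.g. /var/lib/elasticsearch; defaults to the current directory.
DATA_PATH="${1:-.}"
du -sb "$DATA_PATH" | awk '{print $1 " bytes in " $2}'
```

If this number keeps growing while the _cat totals stay flat, something other than live index data (e.g. leftover files) is occupying the directory.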
You can see a complete picture of the disk usage with this command:
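The command itself did not survive in this copy of the thread. One command that does give a compact per-node disk picture (an assumption on my part, not necessarily the one originally posted) is:

```shell
# Disk used, disk available, and shard count per data node, in bytes.
curl -s "localhost:9200/_cat/allocation?v&bytes=b"
```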
I agree with @DavidTurner on this. What you are describing is very much outside normal behavior for Elasticsearch.
What is your storage backend? What kind of storage are you using: spinning disks or SSDs? And what is the filesystem type?
Can you verify how long you are keeping indices in the cluster through the _cat/indices API?
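Index age can be read straight from the cat API; a sketch, assuming a local cluster on the default port:

```shell
# List indices with creation date and size, sorted by creation date,
# to confirm that retention really is 2 days.
curl -s "localhost:9200/_cat/indices?v&h=index,creation.date.string,store.size&s=creation.date"
```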
This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.