To collect and analyze the logfiles from our 15 servers we use Elasticsearch 2.4.2, Logstash 2.4.0, Kibana 4.6.1, and Filebeat, and everything works fine. Normally, our daily indexes are about 1 GB each.
Last week, the daily log volume on 6 of the servers was many times higher than normal (30-40 GB per server). As a result, the cluster state changed to red because the nodes ran out of disk space.
Is it possible to prevent such a case? It was really unexpected behaviour.
There's currently no way to do this.
Are you monitoring your disk space on each of the nodes?
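For reference, here is a minimal sketch of a per-node disk check that could feed such monitoring (the data path and the 85% threshold are assumptions, not from this thread):

```shell
#!/bin/sh
# Warn when disk usage on the Elasticsearch data path crosses a threshold.
# DATA_PATH and THRESHOLD are placeholders; adjust for your own nodes.
DATA_PATH="${DATA_PATH:-/}"
THRESHOLD="${THRESHOLD:-85}"

# df -P gives POSIX output; field 5 of line 2 is the "Use%" column.
used=$(df -P "$DATA_PATH" | awk 'NR==2 {gsub(/%/,""); print $5}')
if [ "$used" -ge "$THRESHOLD" ]; then
  echo "WARNING: $DATA_PATH is ${used}% full"
fi
```

Run from cron on each node, this would at least flag the problem before the cluster goes red.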
Thanks, you have confirmed my intuition.
Yes, we are monitoring the disk space on our nodes.
Same problem yesterday.
My current workaround: check the cluster state or the monitoring results in the morning and, if the cluster is red, delete the "monster" index.
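A sketch of that morning check as a script, assuming the standard `_cluster/health` and `_cat` APIs (the host, the `logstash-*` index pattern, and the "largest index is the monster" heuristic are assumptions, not from this thread):

```shell
#!/bin/sh
# If the cluster is red, find the largest logstash-* index by store size
# and delete it. ES_HOST is a placeholder for your own cluster address.
ES_HOST="${ES_HOST:-http://localhost:9200}"

# Pick the largest index from "<index> <size-in-bytes>" lines.
pick_largest_index() {
  sort -k2 -n | tail -n1 | awk '{print $1}'
}

status=$(curl -s "$ES_HOST/_cluster/health" \
  | sed -n 's/.*"status":"\([a-z]*\)".*/\1/p')

if [ "$status" = "red" ]; then
  # bytes=b makes store.size a plain byte count, so numeric sort works.
  monster=$(curl -s "$ES_HOST/_cat/indices/logstash-*?h=index,store.size&bytes=b" \
    | pick_largest_index)
  echo "Cluster is red, deleting oversize index: $monster"
  curl -s -XDELETE "$ES_HOST/$monster"
fi
```

Deleting the biggest index blindly is crude; in practice you would probably want to confirm it is one of the runaway daily indexes before the DELETE.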
This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.