Elasticsearch reports wrong low disk condition

Hi guys.

OK, I'm having an unusual problem with Elasticsearch: it's reporting a low disk condition on one of the nodes, and it keeps reporting it even after I've cleared out enough disk space by deleting/gzipping old log files. It continues to do so even after restarting the Elasticsearch service on all nodes.

Here's the complaint in the logs:

[2015-07-16 16:50:20,402][INFO ][cluster.routing.allocation.decider] [JF_ES1] low disk watermark [85%] exceeded on [TswNkBU1QE2_Q_ZAR_Lprg][JF_ES2] free: 2.4gb[12.4%], replicas will not be assigned to this node

I've installed and set up Marvel, and it's reporting the exact same thing: 2.4GB of free disk space on node 2.

But when I take a look at the actual disk usage on that node with df, I see a different story:

[root@es2:~] #df -h
Filesystem                       Size  Used Avail Use% Mounted on
/dev/vda1                         20G   17G  3.1G  85% /
devtmpfs                         742M     0  742M   0% /dev
tmpfs                            750M     0  750M   0% /dev/shm
tmpfs                            750M   81M  669M  11% /run
tmpfs                            750M     0  750M   0% /sys/fs/cgroup
nfs1.example.com:/var/nfs/home   20G   12G  6.9G  64% /home
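
For what it's worth, the nodes stats API (assuming the default HTTP port 9200) shows the filesystem totals Elasticsearch itself sees for each data path, which should be where that 2.4gb figure comes from:

[root@es2:~] #curl -s 'localhost:9200/_nodes/stats/fs?pretty'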

So what's going on with that log message and with what the Marvel Kibana interface shows? How do I correct this?

Thanks,

Where do you have your Elasticsearch data? The log message shows JF_ES2 with only 12.4% free:

[2015-07-16 16:50:20,402][INFO ][cluster.routing.allocation.decider] [JF_ES1] low disk watermark [85%] exceeded on [TswNkBU1QE2_Q_ZAR_Lprg][JF_ES2] free: 2.4gb[12.4%], replicas will not be assigned to this node

And df -h shows / at 85% used, which is right at the default low watermark:

Filesystem                       Size  Used Avail Use% Mounted on
/dev/vda1                         20G   17G  3.1G  85% /
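
A quick way to check both (assuming a package install with the config in /etc/elasticsearch, and the default HTTP port 9200):

[root@es2:~] #grep path.data /etc/elasticsearch/elasticsearch.yml
[root@es2:~] #curl -s 'localhost:9200/_cat/allocation?v'

_cat/allocation prints disk used/available per node as Elasticsearch sees it, so it should match the 2.4gb from the log, and the grep shows whether path.data points somewhere on /.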

If the data lives on /, that 85% is exactly what Elasticsearch is reacting to. You should try to reduce usage below 85% and see whether it still reports the same.
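
If you can't free up space right away, the disk watermarks are dynamic cluster settings, so as a stopgap you could raise the low watermark. This is only a workaround (a transient setting also resets on a full cluster restart); the real fix is getting / below 85% or moving path.data to a volume with more headroom:

[root@es2:~] #curl -XPUT 'localhost:9200/_cluster/settings' -d '{
  "transient": {
    "cluster.routing.allocation.disk.watermark.low": "90%"
  }
}'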