High Disk Watermark exceeded on one or more nodes

I'm running an ELK + Redis stack on this machine, and I've just started
collecting event logs via GELF from a Windows server.

I had a look at the logs recently, and this came up:

[2014-12-17 09:31:03,820][WARN ][cluster.routing.allocation.decider]
[logstash test] high disk watermark [10%] exceeded on
[7drCr113QgSM8wcjNss_Mg][Blur] free: 632.3mb[8.4%], shards will be
relocated away from this node

[2014-12-17 09:31:03,820][INFO ][cluster.routing.allocation.decider]
[logstash test] high disk watermark exceeded on one or more nodes,
rerouting shards

I had a look at the size of Elasticsearch's logs in /var/ and they come to
about 23 GB. I see that Elasticsearch has its own memory heuristics, but I'm
not entirely sure how that works or whether it's affecting this - and the
logs aren't being deleted after a week, as I thought they should be.
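
For reference, this is roughly how I measured the log size (the path is from
memory, so it may not match my install exactly):

# total size of the Elasticsearch log directory (path is my best guess)
du -sh /var/log/elasticsearch/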

Could someone explain to me a bit more about what is going on here?

What is your actual disk usage? Can you run curl -XGET
localhost:9200/_cluster/settings and see if it mentions the disk watermark
settings?
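
Something along these lines will show both, assuming Elasticsearch is
listening on localhost:9200 (the data path below is the usual package
default, so adjust it if yours differs):

# disk usage per node, as Elasticsearch sees it
curl -XGET 'localhost:9200/_cat/allocation?v'

# free space on the partition holding the data directory
df -h /var/lib/elasticsearch

# any non-default allocation/watermark settings will show up here
curl -XGET 'localhost:9200/_cluster/settings?pretty'

If the disk really is that close to full, freeing space (or moving the data
to a bigger partition) is the proper fix, but as a stopgap the watermarks can
be adjusted on the fly, for example:

curl -XPUT 'localhost:9200/_cluster/settings' -d '{
  "transient": {
    "cluster.routing.allocation.disk.watermark.low": "90%",
    "cluster.routing.allocation.disk.watermark.high": "95%"
  }
}'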

On 16 December 2014 at 23:28, Pauline Kelly <pauline.m.kelly1@gmail.com> wrote:
