Problems with flood stage watermark setting

(David9) #1

I'm doing some experimentation on my Mac. I have no data in ES. I create an index and add a few small documents (using the python elasticsearch_dsl library).

I then get this on stderr after a few documents are successfully indexed:

[2019-03-15T17:50:45,215][WARN ][o.e.c.r.a.DiskThresholdMonitor] [qEP99z5] flood stage disk watermark [95%] exceeded on [qEP99z5lRXqLTZ3oNljwqQ][qEP99z5][/usr/local/var/lib/elasticsearch/nodes/0] free: 24.9gb[2.6%], all indices on this node will be marked read-only

I don't know how to adjust the following settings to make this stop. I have 100 GB free on my laptop:

cluster.routing.allocation.disk.watermark.flood_stage: 95%
cluster.routing.allocation.disk.watermark.low: 85%
cluster.routing.allocation.disk.watermark.high: 90%

Ideally, if I could index about 10GB of data, I'd be happy.
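Note that the watermark percentages are measured against the total size of the data path's volume, so on a large disk even ~25 GB free can fall under the 5% flood-stage threshold, which matches the `free: 24.9gb[2.6%]` in the log line above. One way around this is to express the watermarks as absolute free-space values instead of percentages. A minimal sketch for `elasticsearch.yml`; the byte values below are illustrative, not recommendations:

```
# elasticsearch.yml — absolute watermarks instead of percentages
# (values are example figures; pick ones that fit your disk)
cluster.routing.allocation.disk.watermark.low: 20gb
cluster.routing.allocation.disk.watermark.high: 15gb
cluster.routing.allocation.disk.watermark.flood_stage: 10gb
```

With these in place, allocation decisions are driven by how many gigabytes remain free rather than by the fraction of the volume in use.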

(David Pilato) #2

What about this?

cluster.routing.allocation.disk.threshold_enabled: false
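One caveat worth adding: disabling the threshold stops new blocks from being applied, but any indices that were already marked read-only by the flood stage keep their block until it is cleared explicitly. A hedged sketch of clearing it via the settings API, assuming Elasticsearch is listening on the default `localhost:9200`:

```shell
# Clear the read-only block that the flood stage already applied to all indices.
# Assumes a local node on the default port; requires the cluster to be running.
curl -XPUT 'localhost:9200/_all/_settings' \
  -H 'Content-Type: application/json' \
  -d '{"index.blocks.read_only_allow_delete": null}'
```

Setting the value to `null` removes the `read_only_allow_delete` block so indexing can resume.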
(David9) #3

thanks, I'll try it.

(system) closed #4

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.