Docker and disk size

Hi, running ES 6.1.0

I have 10 nodes: 3 masters, 3 data, 3 coordinators, 1 ingest. The 10 nodes are running in docker on 3 hosts.

The 3 hosts have 1.7TB of disk available.

Kibana monitoring on the main cluster page claims that I have Disk Available: 17TB / 17TB (99.69%)

Is that normal, and does it affect functionality in any way?

This looks wrong. How much disk are your Elasticsearch nodes supposed to see in total?
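You can check what each node actually sees with the cat allocation API, e.g. (assuming the cluster is reachable on localhost:9200; adjust to your setup):

$ curl -s 'localhost:9200/_cat/allocation?v&h=node,disk.total,disk.used,disk.avail,disk.percent'

That prints one line per data node with the disk totals that node reports.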

For the record, it should not be a problem until your nodes actually run low on disk space; at that point, though, the inflated numbers could mean Elasticsearch does not stop allocating new shards to a node the way it should when that node fills up.
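The mechanism in question is the disk-based shard allocation watermarks, which are percentage-based by default and so track % free rather than absolute totals. As a minimal sketch (the values below are just the documented 6.x defaults set explicitly; assumes the cluster answers on localhost:9200):

$ curl -s -XPUT 'localhost:9200/_cluster/settings' -H 'Content-Type: application/json' -d'
{
  "transient": {
    "cluster.routing.allocation.disk.watermark.low": "85%",
    "cluster.routing.allocation.disk.watermark.high": "90%",
    "cluster.routing.allocation.disk.watermark.flood_stage": "95%"
  }
}'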

Like I said, the hosts have 1.7 TB each, and there are 3 hosts. There are 10 nodes running on the 3 hosts inside Docker. Also, only the master and data nodes have volumes mounted.

OK, I thought 1.7TB was the free space. @dliappis I'm not familiar with Docker, does it ring a bell to you?

@dliappis Hi, do you have any news on this?

Thanks

AFAIU there is no special Docker magic involved here.

Basically you have 10 Elasticsearch processes running, spread across 3 hosts. Each process reports the full 1.7TB of free disk space on its host, so the total disk space reported as available is 10 x 1.7TB = 17TB.
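You can verify the arithmetic by summing what every node reports. A quick sketch (assumes jq is installed and the cluster answers on localhost:9200):

$ curl -s 'localhost:9200/_nodes/stats/fs?filter_path=nodes.*.fs.total.total_in_bytes' \
    | jq '[.nodes[].fs.total.total_in_bytes] | add / 1e12'

With 10 nodes each seeing a 1.7TB disk, that prints roughly 17 (in TB).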

The % free will always be correct, of course, and that is what matters for the allocation algorithms and for monitoring.

Btw, even if you run the Elasticsearch Docker image without bind-mounting any volume, the container will report the available disk space for / based on the storage driver. E.g. in my case, using overlay, the container reports the actual free disk space on my laptop:

$ docker run --rm -ti docker.elastic.co/elasticsearch/elasticsearch-platinum:6.1.1 /bin/bash -c "df -h /"
Filesystem      Size  Used Avail Use% Mounted on
overlay         339G  213G  109G  67% /
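Conversely, if you do bind-mount a data volume, df inside the container reports the mounted filesystem instead. A hypothetical example (/mnt/es-data stands in for whatever host path you use; /usr/share/elasticsearch/data is the data directory in the official image):

$ docker run --rm -ti -v /mnt/es-data:/usr/share/elasticsearch/data \
    docker.elastic.co/elasticsearch/elasticsearch-platinum:6.1.1 \
    /bin/bash -c "df -h /usr/share/elasticsearch/data"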

This is indeed not specific to Docker. The cluster stats that populate these metrics in Kibana are de-duplicated by publish address, which doesn't help when the nodes on the same host have different publish addresses (as containers typically do). See: #24472.
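To see whether the de-duplication applies to your nodes, you can compare their addresses, e.g. (again assuming localhost:9200 is reachable):

$ curl -s 'localhost:9200/_cat/nodes?v&h=name,ip,http_address,node.role'

If the nodes report distinct addresses, their fs stats are counted separately and the totals add up, which would match the 17TB you see.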

Ok, that's cool as long as allocation is not affected...
