For the record, it should not be a problem until your nodes run low on disk space; the concern is that Elasticsearch will not stop allocating new shards like it should once a node actually runs out of disk space.
Like I said, the hosts have 1.7 TB each, and there are 3 hosts. There are 10 nodes running on the 3 hosts inside Docker. Also, only the master and data nodes have volumes mounted.
AFAIU there is no special Docker magic involved here.
Basically you have 10 Elasticsearch processes running, spread across 3 hosts. Each host has 1.7 TB of free disk space, and each node reports its host's disk independently, so the total disk space reported as available is 10 x 1.7 TB = 17 TB.
The % free will always be correct, of course, and this is what matters for the allocation algorithms and monitoring.
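For what it's worth, the disk-based shard allocation watermarks (cluster.routing.allocation.disk.watermark.low / high) are percentage-based by default, which is why a correct % free is enough for allocation to behave. If you want to see the per-node double counting, _cat/allocation lists the disk each node reports; a quick check, assuming the cluster is reachable on localhost:9200 (adjust host/port for your setup):

$ curl -s 'localhost:9200/_cat/allocation?v&h=node,host,disk.used,disk.avail,disk.total,disk.percent'

Nodes that share a host should all show the same disk.total, which is exactly where the inflated 17 TB figure comes from.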
Btw, even if you run the Elasticsearch Docker image without bind-mounting any volume, the container will report available disk space for / based on the storage driver. E.g. in my case, using overlay, the container reports the actual free disk space on my laptop:
$ docker run --rm -ti docker.elastic.co/elasticsearch/elasticsearch-platinum:6.1.1 /bin/bash -c "df -h /"
Filesystem Size Used Avail Use% Mounted on
overlay 339G 213G 109G 67% /
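Conversely, if you do bind-mount a data volume, as the master and data nodes here apparently do, df inside the container reflects the mounted host filesystem rather than the storage driver. A minimal sketch, where /mnt/es-data is a hypothetical host path:

$ docker run --rm -ti -v /mnt/es-data:/usr/share/elasticsearch/data docker.elastic.co/elasticsearch/elasticsearch-platinum:6.1.1 /bin/bash -c "df -h /usr/share/elasticsearch/data"

(/usr/share/elasticsearch/data is the default data path in the official image.)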
This is indeed not specific to Docker. The cluster stats that populate these metrics in Kibana are de-duplicated by publish address, which does not help when the nodes on the same host in fact have different publish addresses. See: #24472.
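If you want to see what that de-duplication has to work with, you can list each node's publish address; assuming localhost:9200 again, something like:

$ curl -s 'localhost:9200/_cat/nodes?v&h=name,ip,port,http_address'

Multiple containers on one host will typically share an IP but publish different ports, so their addresses do not collapse into a single entry.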