AFAIU there is no special Docker magic involved here.
Basically you have 10 Elasticsearch processes running, spread across 3 hosts. Each process reports the free disk space of the host it runs on, and each host has 1.7TB free, so the total disk space reported as available is 10 x 1.7TB = 17TB, even though only 3 x 1.7TB = 5.1TB is physically free: nodes sharing a host count the same bytes more than once.
The free % will always be correct of course, and that is what matters for the allocation algorithms and for monitoring.
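The double counting can be sketched like this (the 4/3/3 split of the 10 nodes across the 3 hosts is a hypothetical assumption, just to make the arithmetic concrete):

```python
# Each Elasticsearch node reports the free space of its host's filesystem,
# so nodes that share a host count the same bytes more than once.
host_free_gb = {"host1": 1700, "host2": 1700, "host3": 1700}

# Assumed (hypothetical) layout: 10 nodes spread over the 3 hosts.
nodes_per_host = {"host1": 4, "host2": 3, "host3": 3}

# What the cluster-wide sum sees: every node's view of its host's free space.
reported_gb = sum(host_free_gb[h] * n for h, n in nodes_per_host.items())

# What is physically free: each host counted once.
actual_gb = sum(host_free_gb.values())

print(reported_gb)  # 17000 -> the "17TB" figure
print(actual_gb)    # 5100  -> ~5.1TB really available
```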
Btw, even if you run the Elasticsearch Docker image without bind-mounting any volume, the container will report the available disk space for / based on the storage driver. E.g. in my case, using overlay, the container reports the actual free disk space on my laptop:
$ docker run --rm -ti docker.elastic.co/elasticsearch/elasticsearch-platinum:6.1.1 /bin/bash -c "df -h /"
Filesystem      Size  Used Avail Use% Mounted on
overlay         339G  213G  109G  67% /