Cluster stats "Disk Available"

Hi all,

Does anyone have an idea how /_cluster/stats computes the total disk for the whole cluster? Is there any specific filter behind the scenes on node.roles?

Below is the result of the cluster stats API; it shows less disk space than the actual total disk across all nodes of the cluster.

The sum of total disk from the node stats is larger than the cluster stats result.
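To check this kind of discrepancy, one way is to sum fs.total.total_in_bytes across every node in the GET _nodes/stats/fs response and compare the result with the figure from /_cluster/stats. A minimal sketch, using a hypothetical sample response (the node IDs and byte values are made up; the nesting nodes.&lt;id&gt;.fs.total.total_in_bytes follows the node stats API):

```python
# Hypothetical sample shaped like a GET _nodes/stats/fs response.
# Node IDs and byte counts are invented for illustration.
sample = {
    "nodes": {
        "node_a": {"fs": {"total": {"total_in_bytes": 8902354132992,
                                    "available_in_bytes": 8452354132992}}},
        "node_b": {"fs": {"total": {"total_in_bytes": 2417374535680,
                                    "available_in_bytes": 2317374535680}}},
    }
}

def sum_fs_totals(node_stats: dict) -> int:
    """Sum total_in_bytes across every node's fs stats."""
    return sum(n["fs"]["total"]["total_in_bytes"]
               for n in node_stats["nodes"].values())

total = sum_fs_totals(sample)
print(total)             # grand total in bytes
print(total / 1024 ** 4) # same figure in TB
```

If this per-node sum is larger than what /_cluster/stats reports, the difference is what the question above is about.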

[screenshots: disk totals from the cluster stats response vs. the node stats responses]

What is the difference?

Converting the values from bytes to TB matches the figures in the screenshots.

Example in Python:

Python 3.10.4 (main, Jun 29 2022, 12:14:53) [GCC 11.2.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> total_in_bytes = 40673342521344
>>> available_in_bytes = 38623244267520
>>> total_tb = total_in_bytes / 1024 / 1024 / 1024 / 1024
>>> available_tb = available_in_bytes / 1024 / 1024 / 1024 / 1024
>>> 
>>> print(total_tb)
36.99218952655792
>>> print(available_tb)
35.12763602659106
>>> 

This is the result of GET _cat/allocation?v.
I have a total of roughly 140TB of disk distributed across all nodes:

disk.indices disk.used disk.avail disk.total disk.percent host ip node
362.3kb 419.1gb 7.7tb 8.1tb 5 hot ip es_node01
347.7kb 116.4gb 2.1tb 2.2tb 5 hot ip es_node02
496.4kb 745gb 13.7tb 14.4tb 5 hot ip es_node03
5.7mb 745gb 13.7tb 14.4tb 5 hot ip es_node04
22.3kb 97.8gb 1.7tb 1.8tb 5 hot ip es_node05
1.9mb 745gb 13.7tb 14.4tb 5 hot ip es_node06
40.7mb 419.1gb 7.7tb 8.1tb 5 hot ip es_node07
19.2kb 423.7gb 7.7tb 8.2tb 5 hot ip es_node08
81.9kb 116.4gb 2.1tb 2.2tb 5 hot ip es_node09
2.7mb 116.4gb 2.1tb 2.2tb 5 hot ip es_node10
40.7mb 419.1gb 7.7tb 8.1tb 5 hot ip es_node11
225b 745gb 13.7tb 14.4tb 5 hot ip es_node12
27.9kb 745gb 13.7tb 14.4tb 5 hot ip es_node13
81.9kb 745gb 13.7tb 14.4tb 5 hot ip es_node14
27.9kb 116.4gb 2.1tb 2.2tb 5 hot ip es_node15
351.8kb 419.1gb 7.7tb 8.1tb 5 hot ip es_node16
329.7kb 419.1gb 7.7tb 8.1tb 5 hot ip es_node17
4.8mb 102.4gb 1.8tb 1.9tb 5 hot ip es_node18

Related discussion here:
https://discuss.elastic.co/t/cluster-not-reporting-actual-available-space

Does that answer it?

Yes @DavidTurner.
I can also see an ongoing ticket on GitHub for this problem.

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.