It's possible that the Docker CLI and the Metrics UI look at different time intervals and calculate the averages differently. Could you compare how the values change over time in both tools to see whether there is any correlation? Does the node details page show the same deviating values?
I am not using Node; it is a Dart application. Looking at the Observatory page, the memory usage is close to the Docker stats value.
I analyzed the metric over a time range and it shows the same behavior.
Digging into the Kibana dashboard, it seems that the percentage is used + cached memory, and that may be the reason for the discrepancy.
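To illustrate the hypothesized difference, here is a minimal Python sketch with made-up numbers. The field names loosely mirror the Docker stats API payload (memory_stats.usage, memory_stats.limit, memory_stats.stats.cache), but treat the exact paths and values as assumptions for illustration only.

```python
# Example numbers only, not real data. Shows how including or excluding the
# page cache changes the reported memory percentage.
memory_stats = {
    "usage": 900 * 1024**2,             # total memory usage reported by the API
    "limit": 2048 * 1024**2,            # container memory limit
    "stats": {"cache": 600 * 1024**2},  # page cache attributed to the container
}

usage = memory_stats["usage"]
limit = memory_stats["limit"]
cache = memory_stats["stats"]["cache"]

# Percentage if cache is counted as used memory (what the dashboard appears to show).
pct_with_cache = usage / limit * 100

# Percentage with cache subtracted first (what `docker stats` shows on Linux).
pct_without_cache = (usage - cache) / limit * 100

print(f"with cache:    {pct_with_cache:.1f}%")    # ~43.9%
print(f"without cache: {pct_without_cache:.1f}%") # ~14.6%
```

With a sizeable page cache, the two readings diverge substantially even though both are derived from the same underlying counters.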
Sorry for using ambiguous terms here. With "node" I meant the box on the inventory screen. It might be a host, a container or a pod depending on the selected view.
it seems that the percentage is used + cached memory
Your hypothesis sounds plausible. The Metricbeat module collects the stats via the Docker API, for which I found the following note in the docker stats docs:
On Linux, the Docker CLI reports memory usage by subtracting cache usage from the total memory usage. The API does not perform such a calculation but rather provides the total memory usage and the amount from the cache so that clients can use the data as needed. The cache usage is defined as the value of total_inactive_file field in the memory.stat file on cgroup v1 hosts.
On Docker 19.03 and older, the cache usage was defined as the value of cache field. On cgroup v2 hosts, the cache usage is defined as the value of inactive_file field.
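Based on that note, a client that wants to match the Docker CLI has to subtract the cache figure itself. Below is a minimal sketch of that calculation, assuming a memory_stats dictionary shaped like the response of the Docker stats API; the exact field paths and the fallback order are assumptions for this sketch, not a definitive implementation.

```python
def cli_style_memory_usage(memory_stats: dict) -> int:
    """Return memory usage with cache subtracted, as `docker stats` does on Linux.

    Assumes a dict like the `memory_stats` object from the Docker stats API.
    """
    usage = memory_stats["usage"]
    stats = memory_stats.get("stats", {})

    # Per the docs quoted above:
    #   cgroup v1: cache usage is total_inactive_file (or `cache` on Docker <= 19.03)
    #   cgroup v2: cache usage is inactive_file
    cache = (
        stats.get("total_inactive_file")
        or stats.get("inactive_file")
        or stats.get("cache")
        or 0
    )
    return max(usage - cache, 0)


# Hypothetical usage with the example values from earlier in the thread:
example = {"usage": 900 * 1024**2, "stats": {"total_inactive_file": 600 * 1024**2}}
print(cli_style_memory_usage(example))  # 300 MiB, matching the CLI-style figure
```

If the Metrics UI charts the raw usage field without this subtraction, that would fully explain the discrepancy you are seeing.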