How to get filesystem utilization via Metricbeat?

Hi team,

May I know how to get the filesystem utilization in Linux? I really don't know what these two fields mean.

You should find some more details in the fields docs:

Hi Ruflin,

I have read this explanation, but as you know, in Linux there can be multiple types of file systems: local disk, SAN, NAS, and so on. So when the docs say "The total disk space in bytes", which disk does that refer to?
Also, displaying raw bytes is not user friendly, but when I try to convert to GB there is a number overflow. How do you usually show those bytes in Kibana?
Refer to: Kibana Number Overflow
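For reference, the conversion itself is just floating-point division; a minimal shell sketch (byte value made up, assuming 1 GB = 1024^3 bytes — not a Kibana fix, just the intended math):

```shell
# Convert a byte count to GB. awk does the math in floating point,
# so values well beyond 16 GB do not overflow in the arithmetic itself.
bytes=17179869184   # made-up example: exactly 16 GiB
awk -v b="$bytes" 'BEGIN { printf "%.2f GB\n", b / (1024 * 1024 * 1024) }'
# prints "16.00 GB"
```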

Did you load the template for Metricbeat into Elasticsearch? This should happen automatically if you connect to Elasticsearch directly and do not disable it.

Did you load the index pattern into Kibana with the import-dashboard script? If so, the format type should be bytes and you would not have to do the conversion yourself.

Related to filesystem: Metricbeat sends one event for each file system it finds. So if you have three disks, you will get three events, each containing the totals for the disk that metric is for. Here you see an example of such an event, where mount_point and device_name tell you which disk it is:
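The original example event is not preserved here; below is an illustrative reconstruction with made-up values, using the system.filesystem field names:

```json
{
  "@timestamp": "2017-02-01T10:00:00.000Z",
  "metricset": { "module": "system", "name": "filesystem" },
  "system": {
    "filesystem": {
      "device_name": "/dev/sda1",
      "mount_point": "/",
      "total": 17179869184,
      "used": { "bytes": 4294967296, "pct": 0.25 },
      "free": 12884901888,
      "available": 12884901888
    }
  }
}
```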

Our data flow is metricbeat -> Logstash -> Elasticsearch, so I'm not sure how to do this.

Yes. But we would prefer not to use bytes. Is there any place we can change them to MB or GB?

For the last point, thanks so much; we will update our Metricbeat accordingly.

I see, you are going through Logstash. You can start a local Metricbeat instance connected directly to Elasticsearch once; it will automatically load the template. Otherwise you can also load it manually: the template file is in the downloaded Metricbeat directory.
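For what it's worth, manual loading is typically a single request against Elasticsearch; a sketch, assuming a local Elasticsearch on the default port and the template file shipped in the Metricbeat directory (exact host and file name depend on your version and setup):

```shell
# Load the Metricbeat index template by hand
# (run from the Metricbeat directory).
curl -XPUT 'http://localhost:9200/_template/metricbeat' \
     -d@metricbeat.template.json
```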

What do you mean by not wanting to use bytes? Do you mean that the value should be stored in MB, or only displayed in MB? The latter should happen when you load the correct index pattern.

Hi Ruflin,

May I know what the purpose of this template is?

It would be better to store the values in MB, since we also use Watcher to send alerts, and raw bytes are not friendly to display there.

The purpose of the template is to predefine the field types in Elasticsearch. Without it, if the first value is 0 and a later one is 1.2, Elasticsearch would map the field as an integer instead of a float (just an example).
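As a simplified, hypothetical illustration of what the template pins down (the real template is much larger and version-specific):

```json
{
  "template": "metricbeat-*",
  "mappings": {
    "_default_": {
      "properties": {
        "system.filesystem.used.pct": { "type": "float" },
        "system.filesystem.total": { "type": "long" }
      }
    }
  }
}
```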

There is currently no option to change it to MB. But this sounds like a feature request for Watcher: it could make such conversions automatically if it knows a field is in bytes.

Got it. So if it's for display only, is there an example of how to display byte values larger than 16 GB as GB/MB? For small numbers, a painless scripted field using doc[].value works well, BUT it does not work for bigger numbers.
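For context, the kind of scripted field being attempted looks roughly like this (the field name is illustrative; whether this avoids the overflow depends on the Kibana version):

```painless
// Kibana scripted field: bytes -> GB
// 1073741824.0 is a double literal, so the division is floating-point
// math rather than truncating integer division.
doc['system.filesystem.total'].value / 1073741824.0
```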

Can you elaborate on that? How does painless play a role here?

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.