Configuring system.filesystem to show useful mount stats

Hi there,

I'm trying to configure Metricbeat to show disk usage, particularly for the / (root), /boot and /data mount points. I've set up metricbeat.yml like this for the system module's filesystem metricsets:

- module: system
  enabled: true
  period: 1m
  metricsets:
    - filesystem
    - fsstat
  processors:
  - drop_event.when.regexp:
      system.filesystem.mount_point: '^/(sys|cgroup|proc|dev|etc|host)($|/)'

But for some reason, I'm only getting these outputs.


Here is "sudo df -h" which shows how the disks are on my system.

sudo df -h
Filesystem                           Size  Used Avail Use% Mounted on
udev                                  16G     0   16G   0% /dev
tmpfs                                3.1G  317M  2.8G  11% /run
/dev/mapper/cdc--alln--001--vg-root   98G   11G   83G  12% /
tmpfs                                 16G     0   16G   0% /dev/shm
tmpfs                                5.0M     0  5.0M   0% /run/lock
tmpfs                                 16G     0   16G   0% /sys/fs/cgroup
/dev/sda1                            472M  155M  293M  35% /boot
/dev/sdb1                             10T  7.4T  2.1T  78% /data
overlay                               10T  7.4T  2.1T  78% /data/docker/overlay2/9e8bb119e90eec7c3e686ffe74b056a6824e45d91ae24022fd949557f7d31f6c/merged
shm                                   64M     0   64M   0% /data/docker/containers/21367ab218b79810de093e121d365b03f62f394e1bc1dd1b5717d2da83fe3805/mounts/shm
overlay                               10T  7.4T  2.1T  78% /data/docker/overlay2/ea308c7faf25ab2fcf3b576fa2c8a19525f380b1af2167fe746ff14e148ae3bf/merged

I've also tried using filesystem.ignore_types: [sysfs, proc, devpts, securityfs, cgroup, systemd-1, hugetlbfs, mqueue, fusectl, lxcfs, overlay, shm, nsfs, binfmt_misc, tracefs, udev, tmpfs]

to try to isolate only the mounts/filesystems I'm interested in. Unfortunately that isn't working either.

Hi @sgreszcz,

It looks like you are running Metricbeat in Docker. Could you try adding hostfs to the list of mount points in the drop_event processor? It'd be something like:

processors:
  - drop_event.when.regexp:
      system.filesystem.mount_point: '^/(sys|cgroup|proc|dev|etc|host|hostfs)($|/)'
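A quick way to sanity-check that pattern from a shell (the sample paths below are just illustrative, not taken from your output) is to run some mount points through grep -E, whose extended-regexp syntax agrees with Beats' regexp engine for a pattern like this:

```shell
# Paths printed by grep are the ones the drop_event processor would drop.
pattern='^/(sys|cgroup|proc|dev|etc|host|hostfs)($|/)'
printf '%s\n' /hostfs /hostfs/boot /data /boot / | grep -E "$pattern"
# prints: /hostfs and /hostfs/boot; /data, /boot and / survive
```

Note that the `host` alternative alone does not match `/hostfs` (the regexp requires `$` or `/` right after the alternation), which is why `hostfs` has to be added explicitly.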

Hi Jaime,

I'm actually trying to use the filter so Metricbeat ignores these filesystem types before parsing, rather than filtering after the fact. This should work, no? The docs say so: https://www.elastic.co/guide/en/beats/metricbeat/current/metricbeat-metricset-system-filesystem.html

- module: system
  enabled: true
  period: 1m
  metricsets:
    - filesystem
    - fsstat
  filesystem.ignore_types: [hostfs, sysfs, proc, devpts, securityfs, cgroup, systemd-1, hugetlbfs, mqueue, fusectl, lxcfs, overlay, shm, nsfs, binfmt_misc, tracefs, udev, tmpfs, debugfs, autofs, devtmpfs, fuse.lxcfs, pstore]
#  processors:
#  - drop_event.when.regexp:
#      system.filesystem.mount_point: '.*'

I'll try to do it with a drop_event and see if that works...
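One way to write that drop_event, inverted so it keeps only the mount points of interest, is to wrap the regexp in a `not` condition. This is an untested sketch; the pattern assumes the host filesystem may appear under a /hostfs prefix when running in Docker:

```yaml
processors:
  - drop_event:
      when:
        not:
          regexp:
            # Keep only /, /boot and /data (optionally prefixed with
            # /hostfs); drop events for every other mount point.
            system.filesystem.mount_point: '^(/hostfs)?(/|/boot|/data)?$'
```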

Indeed, some of the filesystems that appear in your screenshot should be ignored; for example, the filesystems of the Docker namespaces are of type nsfs. What version of Metricbeat are you using?

In any case, also let us know whether the drop_event processor works for you.

I'm using the latest Metricbeat 6.3, with a standard Ubuntu 16.04 LTS as the base system we are building upon.

Here is what I get when I "sudo df -h":

sudo df -h
Filesystem                           Size  Used Avail Use% Mounted on
udev                                 7.7G     0  7.7G   0% /dev
tmpfs                                1.6G  158M  1.4G  11% /run
/dev/mapper/cdc--alln--stg--vg-root   16G  9.6G  4.9G  67% /
tmpfs                                7.7G     0  7.7G   0% /dev/shm
tmpfs                                5.0M     0  5.0M   0% /run/lock
tmpfs                                7.7G     0  7.7G   0% /sys/fs/cgroup
/dev/sdb1                            197G   13G  175G   7% /data
/dev/sda1                            472M  204M  244M  46% /boot
overlay                              197G   13G  175G   7% /data/docker/overlay2/88e3f9644ecb4659208c33e32458186e40faa584294b1f661661ee8ccb31ce8f/merged
overlay                              197G   13G  175G   7% /data/docker/overlay2/f10c4974d9779092369bec9b6bd4265fa641d5fb6841ca87cacceaf489b29c55/merged
overlay                              197G   13G  175G   7% /data/docker/overlay2/aa155bac0b2f742b47a68a959113bce827dd0175e4c319333ba72d52fbac9e7a/merged
overlay                              197G   13G  175G   7% /data/docker/overlay2/6194de50e03f05fcc96c2eeb6e23b2d1ca1fea3a3a1d48d610e5227205efe4ff/merged
overlay                              197G   13G  175G   7% /data/docker/overlay2/b0aad6c345f8523d28beb2261d000353a03b08eb26e7368e28f9cbd281972f9b/merged
shm                                   64M   64K   64M   1% /data/docker/containers/2f9c2b74197c6a42b3de11c13937f763b8d3df49b52038b28ed73a6cf130e236/mounts/shm
shm                                   64M     0   64M   0% /data/docker/containers/02f08fe30ed9f45746f4125867b7fb718fb23ff422e39bb0190e2a8a099400f6/mounts/shm
shm                                   64M     0   64M   0% /data/docker/containers/1db10faa880ba1cdaac00bc4962bd1ab672e987a07d3eaa02b81fc63af3074c2/mounts/shm
shm                                   64M     0   64M   0% /data/docker/containers/bd4b65647675e33748bd780998991d3f822b87c50e48cc632501c6aa98eb6150/mounts/shm
shm                                   64M     0   64M   0% /data/docker/containers/ac124ea897ec816a2a853a16ce22ff953604309b11356f54597ae6046248990d/mounts/shm
overlay                              197G   13G  175G   7% /data/docker/overlay2/56b42a2c1bb01900bdc4d6aac3caa1c47309d04f12fa875a05acdcdb1e00497f/merged
shm                                   64M     0   64M   0% /data/docker/containers/3ec9b5d75b569d16bc6b90819cb98ec66e79fbd3d1ea4b332b970b993c26208e/mounts/shm
tmpfs                                1.6G     0  1.6G   0% /run/user/1010
overlay                              197G   13G  175G   7% /data/docker/overlay2/46542dbedd15010709789810cfb477ab45fe3bb0a27edf25ae00db9ba00fbf4f/merged
shm                                   64M     0   64M   0% /data/docker/containers/6e48b526dee7258c5332fdd7e91fc8ebbca718a331768e552895557f186f8876/mounts/shm
tmpfs                                1.6G     0  1.6G   0% /run/user/3007

but I'm only interested in these 3:

/dev/mapper/cdc--alln--stg--vg-root   16G  9.6G  4.9G  67% /
/dev/sdb1                            197G   13G  175G   7% /data
/dev/sda1                            472M  204M  244M  46% /boot

In the Metricbeat system.filesystem fields I get:
system.filesystem.device_name: nsfs, /dev/mapper/cdc--alln--stg--vg-root, /dev/sda1, /dev/sdb1, lxcfs
system.filesystem.mount_point: /etc/hosts, /hostfs, /hostfs/boot, /hostfs/run/docker/netns/2b16a598038b (and many other docker mounts), /hostfs/var/lib/lxcfs
system.filesystem.type: nsfs, ext4, ext2, fuse.lxcfs

Kibana's default visualisation uses system.filesystem.mount_point, which isn't great, as it seems to be looking at the /hostfs mount points. It also doesn't update very well when auto-refresh is on (it sometimes drops to 0%).

Telegraf/InfluxDB/Chronograf seems to get disk-space monitoring right out of the box, so I'm not sure which Linux system parameters they are monitoring.

It's funny that Metricbeat finds the correct system.filesystem.device_name values from the Ubuntu host system (/dev/sda1 for /boot, /dev/sdb1 for /data, and /dev/mapper for /), but it isn't finding the corresponding Ubuntu host mount points in system.filesystem.mount_point (/, /data, /boot).
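For context on why the mount points show up under /hostfs: when Metricbeat runs in a container, the host's root filesystem is typically bind-mounted at /hostfs and Metricbeat is started with -system.hostfs=/hostfs, so the system module reads the host's mounts through that prefix. A docker-compose sketch of that setup (the image tag and mount paths here are assumptions, not taken from this thread):

```yaml
metricbeat:
  image: docker.elastic.co/beats/metricbeat:6.3.2
  # Point the system module at the bind-mounted host filesystem,
  # so host mounts are read instead of the container's own.
  command: metricbeat -e -system.hostfs=/hostfs
  volumes:
    - /:/hostfs:ro
    - /proc:/hostfs/proc:ro
    - /sys/fs/cgroup:/hostfs/sys/fs/cgroup:ro
```

With this setup, dropping or renaming the /hostfs prefix is something the pipeline (e.g. a drop_event or rename processor) has to handle, which is what the discussion above is about.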

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.