Hi all,
I have three disks and want to see all three in Kibana, but only one shows up. I have added the names of those disks in the system.yml configuration file, but still only /dev/xvda1 appears in Kibana.
My Disks are:
[vinit@ip-XXX-XX-X-XXX modules.d]$ df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/xvda1       99G   30G   69G  31% /
devtmpfs        2.0G   56K  2.0G   1% /dev
tmpfs           2.0G     0  2.0G   0% /dev/shm
My system.yml configuration is:
# Module: system
# Docs: https://www.elastic.co/guide/en/beats/metricbeat/6.5/metricbeat-module-system.html
- module: system
  period: 10s
  metricsets:
    - cpu
    #- load
    - memory
    #- network
    #- process
    #- process_summary
    #- core
    #- diskio
    #- socket
  process.include_top_n:
    by_cpu: 5      # include top 5 processes by CPU
    by_memory: 5   # include top 5 processes by memory

- module: system
  period: 1m
  metricsets:
    - filesystem
    - fsstat
  processors:
    - drop_event.when.regexp:
        system.filesystem.mount_point: '^/(sys|cgroup|proc|dev|etc|devtmpfs|tmpfs|host|lib)($|/)'

- module: system
  period: 15m
  metricsets:
    - uptime

#- module: system
#  period: 5m
#  metricsets:
#    - raid
#  raid.mount_point: '/'
This is what I'm getting in Kibana: only /dev/xvda1.
Does anyone have any idea how to get all three disks from Metricbeat through Logstash into Kibana?
Thanks in advance for any suggestions.
jsoriano (Jaime Soriano)
January 30, 2019, 1:04pm
Hi @Vinit_Kumar,
Notice that the default configuration drops events from mount points that are usually virtual filesystems; this includes devtmpfs and tmpfs:
processors:
  - drop_event.when.regexp:
      system.filesystem.mount_point: '^/(sys|cgroup|proc|dev|etc|devtmpfs|tmpfs|host|lib)($|/)'
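If you want to keep the /dev mounts but still filter the other virtual filesystems, one option (a sketch, not an official recommendation) is to narrow the pattern, for example:
processors:
  - drop_event.when.regexp:
      # devtmpfs, tmpfs and dev removed from the pattern, so mount
      # points such as /dev and /dev/shm are no longer dropped
      system.filesystem.mount_point: '^/(sys|cgroup|proc|etc|host|lib)($|/)'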
There is also a filesystem.ignore_types option that on Linux defaults to ignoring all filesystem types marked as nodev in /proc/filesystems. You can override this option by setting a specific set of filesystem types to ignore. The list of ignored types is logged at the info level on Metricbeat startup, in a line starting with Ignoring filesystem types:....
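For example, a minimal override could look like this (the type list here is only illustrative; use whatever set fits your hosts):
- module: system
  period: 1m
  metricsets:
    - filesystem
  # Setting this replaces the default nodev-based list, so types such
  # as nfs are no longer ignored unless you list them here explicitly.
  filesystem.ignore_types: [sysfs, proc, devtmpfs, tmpfs, devpts, cgroup]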
Hi Jaime,
I am having the same issue: I don't see all filesystems being exported. I am already using the filesystem.ignore_types option in the config.
- module: system
  period: 1m
  metricsets:
    - filesystem
    #- fsstat
  #filesystem.ignore_types: [devpts,sysfs,cgroup,tmpfs,proc,tmpfs,autofs,mqueue,binfmt_misc]
  filesystem.ignore_types: [sysfs, proc, devtmpfs, securityfs, tmpfs, devpts, cgroup, pstore, configfs, systemd-1, hugetlbfs, mqueue, debugfs, binfmt_misc, sunrpc]
  fields:
    env: sandbox
    beat_type: metricbeat
  fields_under_root: true
  processors:
    - include_fields:
        fields: ["metricset.module", "metricset.name", "env", "beat_type", "system.filesystem.device_name", "system.filesystem.mount_point", "system.filesystem.used.pct", "system.filesystem.type"]
    - drop_fields:
        fields: ["host"]
jsoriano (Jaime Soriano)
February 5, 2019, 2:00pm
Hi @cchenna,
Could you give more details about the filesystems you cannot see?
Hi Jaime,
I don't see the nfs filesystem and a few local filesystem disks.
Thanks,
Chenna.
Hi @cchenna,
I had to comment out the processor below completely; only then was I able to see all the disks:
#processors:
#  - drop_event.when.regexp:
#      system.filesystem.mount_point: '^/(sys|cgroup|proc|dev|etc|devtmpfs|tmpfs|host|lib)($|/)'
Try this as well.
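To double-check which filesystem events actually survive the processors, you can also run Metricbeat in the foreground with the publish debug selector (a quick sanity check; assumes the metricbeat binary is on your PATH):
metricbeat -e -d "publish"
Every published event is printed to the console, so you can see exactly which system.filesystem.mount_point values are being sent before they ever reach Logstash.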
Hi Vinit,
I tried this and it didn't work. Below is the config I am using:
- module: system
  period: 1m
  metricsets:
    - filesystem
    #- fsstat
  #filesystem.ignore_types: [devpts,sysfs,cgroup,tmpfs,proc,tmpfs,autofs,mqueue,binfmt_misc]
  filesystem.ignore_types: [sysfs, proc, devtmpfs, securityfs, tmpfs, devpts, cgroup, pstore, configfs, systemd-1, hugetlbfs, mqueue, debugfs, binfmt_misc, sunrpc]
  fields:
    env: sandbox
    beat_type: metricbeat
  fields_under_root: true
  processors:
    #- drop_event.when.regexp:
    #    system.filesystem.mount_point: '^/(sys|cgroup|proc|dev|etc|host|lib)($|/)'
    - include_fields:
        fields: ["metricset.module", "metricset.name", "env", "beat_type", "system.filesystem.device_name", "system.filesystem.mount_point", "system.filesystem.used.pct", "system.filesystem.type"]
    - drop_fields:
        fields: ["host"]
system (system) Closed
March 6, 2019, 5:18pm
This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.