Hello everyone,
I receive this data through the SNMP input plugin. After some mutate filters, I need to extract fields from it, and I chose the grok filter. The data looks like this (note that the first line starts with blank space):
Filesystem 1M-blocks Used Available Use% Mounted on
devtmpfs 8002 0 8002 0% /dev
tmpfs 8014 3 8012 1% /dev/shm
tmpfs 8014 793 7222 10% /run
tmpfs 8014 0 8014 0% /sys/fs/cgroup
/dev/mapper/vg.00-root 3904 1338 2346 37% /
/dev/sda1 973 44 863 5% /boot
/dev/mapper/vg.00-conf 1952 6 1828 1% /conf
/dev/mapper/vg.00-tmp 20031 50 18942 1% /tmp
/dev/mapper/vg.00-large 202614 7643 184657 4% /large
tmpfs 1603 0 1603 0% /run/user/1002
tmpfs 1603 0 1603 0% /run/user/1000
tmpfs 4 3 2 58% /mnt/clink1
What I have done is create the pattern below. My goal is to extract only the /large mount point, but I don't think this pattern is suitable for production: you can see how many %{GREEDYDATA}
patterns it chains together. Is there a proper way to extract this data? Thanks.
%{GREEDYDATA}\n%{GREEDYDATA}\n%{GREEDYDATA}\n%{GREEDYDATA}\n%{GREEDYDATA}\n%{GREEDYDATA}\n%{GREEDYDATA}\n%{GREEDYDATA}\n%{GREEDYDATA}\n%{GREEDYDATA}\n%{PATH:filesystem}\s+%{INT:1M_blocks:int}\s+%{INT:used:int}\s+%{INT:available:int}\s+%{INT:used_pct:int}\%\s+%{PATH:mountpoint}
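One approach I'm considering instead (a sketch only, assuming the whole df output arrives in a single message field; not tested against the real SNMP payload): since grok searches the text rather than matching it from the start, and Oniguruma's ^ and $ match line boundaries by default, the /large line can be anchored directly and the %{GREEDYDATA} chain dropped entirely.

```
filter {
  grok {
    match => {
      # Anchor on the literal /large mount point; ^ and $ match per-line
      # in Oniguruma, so the surrounding df lines can be ignored.
      "message" => "^%{PATH:filesystem}\s+%{INT:1M_blocks:int}\s+%{INT:used:int}\s+%{INT:available:int}\s+%{INT:used_pct:int}%\s+(?<mountpoint>/large)$"
    }
  }
}
```

The field names here are copied from my original pattern; the literal (?<mountpoint>/large) capture is what restricts the match to the one row I care about.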
This is the result in the Kibana Grok Debugger:
{
"used_pct": 4,
"available": 184657,
"used": 7643,
"1M_blocks": 202614,
"filesystem": "/dev/mapper/vg.00-large",
"mountpoint": "/large"
}
Logstash version: 8.11.2