I want to be able to grab the output of any SNMP poll or SNMP trap and structure it so it can be exported to a CSV file. I will then take the file and import it into another box running Elasticsearch and Kibana.
Can you please point me in the direction of a grok filter and output section that can help me understand how to do this?
From what I see, not many people in this forum are using the ELK stack to parse and manipulate SNMP data.
The output lines are just for testing; I know I have to use the csv output plugin from Logstash.
Thanks.
My *.conf file:
input {
  # Listen for SNMP traps from any interface on the standard trap port
  snmptrap {
    host => "0.0.0.0"
    community => "public"
    port => "162"
  }
}
input {
  # Poll sysUpTime and ifDescr.2 from 10.30.30.1 over SNMP v2c
  snmp {
    get => [".1.3.6.1.2.1.1.3.0", ".1.3.6.1.2.1.2.2.1.2.2"]
    hosts => [{host => "udp:10.30.30.1/161" community => "public" version => "2c" retries => 2 timeout => 1000}]
  }
}
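The conf above stops at the inputs; for testing, an output section along these lines (just a sketch, using the stdout plugin with the rubydebug codec) would print each event to the console:

output {
  # Print every event in a readable form while testing
  stdout { codec => rubydebug }
}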
"iso.org.dod.internet.mgmt.mib-2.system.sysUpTime.sysUpTimeInstance" => 202330,
"iso.org.dod.internet.mgmt.mib-2.interfaces.ifTable.ifEntry.ifDescr.2" => "error: no such instance currently exists at this OID",
Personally I would adjust that using 'oid_root_skip => 6' to get the easier-to-read
"system.sysUpTime.sysUpTimeInstance" => 198404,
"interfaces.ifTable.ifEntry.ifDescr.0" => "error: no such instance currently exists at this OID",
Unfortunately those long field names are a terrible match for the csv output. I would suggest something like the following.
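A sketch of that idea, assuming the shortened names produced by oid_root_skip and an example output path:

filter {
  mutate {
    # Rename the dotted OID fields to simple column names
    rename => {
      "system.sysUpTime.sysUpTimeInstance" => "uptime"
      "interfaces.ifTable.ifEntry.ifDescr.2" => "ifdescr"
    }
  }
}
output {
  csv {
    # Columns are written in this order; "host" is added by the snmp input
    fields => [ "host", "uptime", "ifdescr" ]
    path => "/tmp/snmp.csv"
  }
}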
That is a simple example config; I have not finished entering all the OID info for the other devices. I will have numerous devices and traps. What I really need is a filter to organize the data and export it to CSV. For example: use a grok filter to grab hostname, timestamp, interfaces, uptime, and memory and create a CSV file that is organized.
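On the Elasticsearch box, the exported file could then be read back in with something like this (a sketch, assuming the same column names as above and an Elasticsearch node on the default local port; the paths and index name are placeholders):

input {
  file {
    path => "/tmp/snmp.csv"
    start_position => "beginning"
    sincedb_path => "/dev/null"
  }
}
filter {
  # Split each CSV line back into named fields
  csv {
    columns => [ "host", "uptime", "ifdescr" ]
  }
}
output {
  elasticsearch {
    hosts => [ "http://localhost:9200" ]
    index => "snmp-data"
  }
}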