Cannot see SNMP trap message in Kibana

Hi,

I am running:

Kibana 4.3.0
Logstash 2.0.0
Elasticsearch 2.4.5

Here is my Logstash config for SNMP traps:

input {
  snmptrap {
    type => "snmptrap"
    host => "0.0.0.0"
    port => 162
  }
}

filter {
  ruby {
    # Elasticsearch 2.x rejects field names that contain dots, so rename
    # every offending field, replacing '.' with '_', before output.
    code => "
      event.to_hash.keys.each { |k| event[ k.gsub('.','_') ] = event.remove(k) if k.include?('.') }
    "
  }
}

output {
  elasticsearch { hosts => ["127.0.0.1:9200"] }
  stdout { codec => rubydebug }
}
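
(In case it helps reproduce this: a test trap can be sent to the listener with net-snmp's snmptrap utility. The coldStart OID below is only an example trap OID, and the community matches the input's default of "public".)

# Send a test SNMPv2c trap (coldStart) to the local Logstash listener on port 162.
snmptrap -v 2c -c public 127.0.0.1:162 '' 1.3.6.1.6.3.1.1.5.1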

I am able to see SNMP trap messages on stdout:

{
                                       "message" => "#<SNMP::SNMPv2_Trap:0x391f0422 @error_index=0, @varbind_list=[#<SNMP::VarBind:0x691212ae @value=#<SNMP::TimeTicks:0x21889366 @value=17030767>, @name=[1.3.6.1.2.1.1.3.0]>, #<SNMP::VarBind:0xd851ebd @value=[1.3.6.1.4.1.22420.2.14.0.0.1], @name=[1.3.6.1.6.3.1.1.4.1.0]>, #<SNMP::VarBind:0x1e67c664 @value=#<SNMP::Gauge32:0x2f11ded4 @value=661>, @name=[1.3.6.1.4.1.22420.2.14.1.3.2.0]>], @error_status=0, @request_id=907477978, @source_ip=\"10.91.140.99\">",
                                          "host" => "10.91.140.99",
                                      "@version" => "1",
                                    "@timestamp" => "2017-06-09T10:42:10.993Z",
                                          "type" => "snmptrap",
                       "SNMPv2-MIB::sysUpTime_0" => "1 day, 23:18:27.67",
                     "SNMPv2-MIB::snmpTrapOID_0" => "SNMPv2-SMI::enterprises.22420.2.14.0.0.1",
    "SNMPv2-SMI::enterprises_22420_2_14_1_3_2_0" => "661"
}

But these messages are not visible in Kibana. I have a similar setup for syslog that works very well, and I am able to see those messages in Kibana.

Here is my Logstash syslog config:

input {
  tcp {
    port => 514
    type => syslog
  }
  udp {
    port => 514
    type => syslog
  }
}

filter {
  if [type] == "syslog" {
    grok {
      match => { "message" => "%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_hostname} %{DATA:syslog_program}(?:\[%{POSINT:syslog_pid}\])?: %{GREEDYDATA:syslog_message}" }
      add_field => [ "received_at", "%{@timestamp}" ]
      add_field => [ "received_from", "%{host}" ]
    }
    date {
      match => [ "syslog_timestamp", "MMM  d HH:mm:ss", "MMM dd HH:mm:ss" ]
    }
  }
}

output {
  elasticsearch { hosts => ["127.0.0.1:9200"] }
  stdout { codec => rubydebug }
}

and the messages on stdout look like this:

{
             "message" => "<174>Jun  9 15:18:51 10.91.142.100 Mediation: [ID 127899 local5.info] MESSAGE= Discovery in progress ; TARGET= 10.91.123.56; CONDITION_TYPE= NEMGMT511; USER= Manager_for_6k_OM5k_and_CPL ",
            "@version" => "1",
          "@timestamp" => "2017-06-09T09:48:51.000Z",
                "host" => "10.91.142.103",
                "type" => "syslog",
    "syslog_timestamp" => "Jun  9 15:18:51",
     "syslog_hostname" => "10.91.142.100",
      "syslog_program" => "Mediation",
      "syslog_message" => "[ID 127899 local5.info] MESSAGE= Discovery in progress ; TARGET= 10.91.123.56; CONDITION_TYPE= NEMGMT511; USER= Manager_for_6k_OM5k_and_CPL ",
         "received_at" => "2017-06-09T09:48:38.261Z",
       "received_from" => "10.91.142.103"
}

Please advise: what can I do to fix the problem with the SNMP traps, and what should I query in Elasticsearch with respect to SNMP traps?

Regards,
-Manish

Hi Manish,

Thanks for posting this. What is Kibana set to query? Can we see the query?

Also, could you please check the Logstash and Elasticsearch logs? I'm wondering whether documents are even being created for the SNMP trap events, and whether a name like SNMPv2-MIB::sysUpTime_0 will be accepted by ES as a valid field name.
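
For example, assuming the default logstash-* daily indices, a count query would show whether any snmptrap documents are being indexed at all:

# Count documents of type snmptrap across all logstash indices.
curl -s 'http://127.0.0.1:9200/logstash-*/_count?q=type:snmptrap&pretty'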

Thanks,
CJ

Hi CJ, thanks for shedding some light on the problem. I am very new to the ELK world 🙂

I don't see any complaints in logstash.log regarding the SNMP trap messages.

Here is a snippet of the log file:

root@deb0:/var/log/logstash# tail -f logstash.log
{:timestamp=>"2017-06-09T16:25:00.471000+0530", :message=>"SIGINT received.
Shutting down the pipeline.", :level=>:warn}
{:timestamp=>"2017-06-09T16:25:00.496000+0530", :message=>"Pipeline
shutdown complete.", :level=>:info}
{:timestamp=>"2017-06-09T16:28:40.577000+0530", :message=>"Worker threads
expected: 2, worker threads started: 2", :level=>:info}
{:timestamp=>"2017-06-09T16:28:40.577000+0530", :message=>"It's a Trap!",
:Port=>162, :Community=>["public"], :Host=>"0.0.0.0", :level=>:info}
{:timestamp=>"2017-06-09T16:28:40.608000+0530", :message=>"Automatic
template management enabled", :manage_template=>"true", :level=>:info}
{:timestamp=>"2017-06-09T16:28:41.089000+0530", :message=>"Using mapping
template", :template=>{"template"=>"logstash-",
"settings"=>{"index.refresh_interval"=>"5s"},
"mappings"=>{"default"=>{"_all"=>{"enabled"=>true, "omit_norms"=>true},
"dynamic_templates"=>[{"message_field"=>{"match"=>"message",
"match_mapping_type"=>"string", "mapping"=>{"type"=>"string",
"index"=>"analyzed", "omit_norms"=>true}}},
{"string_fields"=>{"match"=>"
", "match_mapping_type"=>"string",
"mapping"=>{"type"=>"string", "index"=>"analyzed", "omit_norms"=>true,
"fields"=>{"raw"=>{"type"=>"string", "index"=>"not_analyzed",
"ignore_above"=>256}}}}}], "properties"=>{"@version"=>{"type"=>"string",
"index"=>"not_analyzed"}, "geoip"=>{"type"=>"object", "dynamic"=>true,
"properties"=>{"location"=>{"type"=>"geo_point"}}}}}}}, :level=>:info}

I am still getting proper output on stdout from Logstash:

{
                                       "message" => "#<SNMP::SNMPv2_Trap:0x54aaa2ff @error_index=0, @varbind_list=[#<SNMP::VarBind:0x19de40b4 @value=#<SNMP::TimeTicks:0x4e69f6b0 @value=22541758>, @name=[1.3.6.1.2.1.1.3.0]>, #<SNMP::VarBind:0xec2c508 @value=[1.3.6.1.4.1.22420.2.14.0.0.1], @name=[1.3.6.1.6.3.1.1.4.1.0]>, #<SNMP::VarBind:0x151597ca @value=#<SNMP::Gauge32:0x3ca05996 @value=828>, @name=[1.3.6.1.4.1.22420.2.14.1.3.2.0]>], @error_status=0, @request_id=909307654, @source_ip=\"10.91.140.99\">",
                                          "host" => "10.91.140.99",
                                      "@version" => "1",
                                    "@timestamp" => "2017-06-10T02:00:38.878Z",
                                          "type" => "snmptrap",
                       "SNMPv2-MIB::sysUpTime_0" => "2 days, 14:36:57.58",
                     "SNMPv2-MIB::snmpTrapOID_0" => "SNMPv2-SMI::enterprises.22420.2.14.0.0.1",
    "SNMPv2-SMI::enterprises_22420_2_14_1_3_2_0" => "828"
}

Meanwhile, in the Elasticsearch logs I was getting the following error. After I applied the filter change above (replacing "." with "_" in the field names), this error disappeared:

[2017-06-09 14:11:08,863][DEBUG][action.bulk ] [Doughboy] [logstash-2017.06.09][3] failed to execute bulk item (index) index {[logstash-2017.06.09][snmptrap][AVyMA6f4lB7dSLba9Y7f], source[{"message":"#<SNMP::SNMPv2_Trap:0x72749130 @error_index=0, @varbind_list=[#<SNMP::VarBind:0x15cfafed @value=#<SNMP::TimeTicks:0x7626fbca @value=16304519>, @name=[1.3.6.1.2.1.1.3.0]>, #<SNMP::VarBind:0x795cc75f @value=[1.3.6.1.4.1.22420.2.14.0.0.1], @name=[1.3.6.1.6.3.1.1.4.1.0]>, #<SNMP::VarBind:0x3a256c38 @value=#<SNMP::Gauge32:0x6ee3ee5a @value=628>, @name=[1.3.6.1.4.1.22420.2.14.1.3.2.0]>], @error_status=0, @request_id=907236719, @source_ip="10.91.140.99">","host":"10.91.140.99","@version":"1","@timestamp":"2017-06-09T08:41:08.778Z","type":"snmptrap","SNMPv2-MIB::sysUpTime.0":"1 day, 21:17:25.19","SNMPv2-MIB::snmpTrapOID.0":"SNMPv2-SMI::enterprises.22420.2.14.0.0.1","SNMPv2-SMI::enterprises.22420.2.14.1.3.2.0":"628"}]}
MapperParsingException[Field name [SNMPv2-MIB::snmpTrapOID.0] cannot contain '.']
    at org.elasticsearch.index.mapper.object.ObjectMapper$TypeParser.parseProperties(ObjectMapper.java:277)
    at org.elasticsearch.index.mapper.object.ObjectMapper$TypeParser.parseObjectOrDocumentTypeProperties(ObjectMapper.java:222)
    at org.elasticsearch.index.mapper.object.RootObjectMapper$TypeParser.parse(RootObjectMapper.java:139)
    at org.elasticsearch.index.mapper.DocumentMapperParser.parse(DocumentMapperParser.java:118)
    at org.elasticsearch.index.mapper.DocumentMapperParser.parse(DocumentMapperParser.java:99)
    at org.elasticsearch.index.mapper.MapperService.parse(MapperService.java:549)
    at org.elasticsearch.cluster.metadata.MetaDataMappingService$PutMappingExecutor.applyRequest(MetaDataMappingService.java:257)
    at org.elasticsearch.cluster.metadata.MetaDataMappingService$PutMappingExecutor.execute(MetaDataMappingService.java:230)
    at org.elasticsearch.cluster.service.InternalClusterService.runTasksForExecutor(InternalClusterService.java:480)
    at org.elasticsearch.cluster.service.InternalClusterService$UpdateTask.run(InternalClusterService.java:784)
    at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.runAndClean(PrioritizedEsThreadPoolExecutor.java:231)
    at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.run(PrioritizedEsThreadPoolExecutor.java:194)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:745)
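
For what it's worth, the restriction is easy to confirm against ES 2.x directly (dot-test below is just a throwaway index name):

# On ES 2.x, indexing any document with a dotted field name fails with
# the same MapperParsingException.
curl -XPOST 'http://127.0.0.1:9200/dot-test/doc' -d '{"field.with.dot": 1}'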

Regards,
-Manish

Hi CJ,

There were two problems:

  1. "." was not permitted in field names.
  2. When I enabled the received_at time field in the syslog Logstash filter and indexed using received_at, SNMP stopped working in Kibana.

Now I am using @timestamp for both syslog and SNMP (and replacing "." with "_" in the SNMP filter), and both now look good in Kibana.

Here are the final configs:

Syslog:

input {
  tcp {
    port => 514
    type => syslog
  }
  udp {
    port => 514
    type => syslog
  }
}

filter {
  if [type] == "syslog" {
    grok {
      match => { "message" => "%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_hostname} %{DATA:syslog_program}(?:\[%{POSINT:syslog_pid}\])?: %{GREEDYDATA:syslog_message}" }
      add_field => [ "received_from", "%{host}" ]
    }
  }
}

output {
  elasticsearch { hosts => ["127.0.0.1:9200"] }
  stdout { codec => rubydebug }
}

SNMP config:

input {
  snmptrap {
    type => "snmptrap"
    host => "0.0.0.0"
    port => 162
    yamlmibdir => "/opt/logstash/vendor/bundle/jruby/1.9/gems/snmp-1.2.0/data/ruby/snmp/mibs"
  }
}

filter {
  if [type] == "snmptrap" {
    ruby {
      code => "
        event.to_hash.keys.each { |k| event[ k.gsub('.','_') ] = event.remove(k) if k.include?('.') }
      "
    }
  }
}

output {
  elasticsearch { hosts => ["127.0.0.1:9200"] }
  stdout { codec => rubydebug }
}

I had also installed the smitools package (it provides smidump, which the Ruby snmp library uses when importing MIBs); I'm not sure whether it helped here.
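
To double-check how the renamed fields ended up being mapped, the index mapping can be inspected with something like:

# Show the field mappings for all logstash-* indices.
curl -s 'http://127.0.0.1:9200/logstash-*/_mapping?pretty'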

Regards,
-Manish

Hi Manish, I'm so happy to hear you were able to solve your problem! Thanks for sharing the solution. I'll forward this information on to the Logstash team.

CJ

By the way, I just spoke with @jordansissel and he mentioned that upgrading to a newer version of Elasticsearch can solve your problem with periods in field names. He also guessed that your second problem might be a mapping issue, but it's hard to say without digging deeper.
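
If you would rather avoid the custom Ruby, there is also a dedicated Logstash plugin for exactly this, logstash-filter-de_dot (on Logstash 2.x it may need to be installed first with bin/plugin install logstash-filter-de_dot). A minimal sketch of the equivalent filter:

filter {
  if [type] == "snmptrap" {
    # Replaces '.' in field names with '_' (the plugin's default separator).
    de_dot { }
  }
}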

CJ

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.