Logstash snmptrap

Hello everyone,

I enabled receiving SNMP traps as input on my Logstash server with the following addition to the configuration:

input {
  snmptrap {
    id => "snmptrap"
  }
}

I am sending an SNMP trap to the server. I can see that the message is received by the server itself when I sniff the network traffic on port 1062 (default), but the Logstash service running on the server does not log the message, and there is no indication in the log that a message was received.

What could be my mistake?

You need to share the rest of your logstash configuration to see if there is any error.

Logstash does not log every message it receives, as that would flood the logs; in this case it would log only errors and warnings.
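One way to check whether an input is actually producing events, without per-message logging, is the node stats API (this assumes the API is listening on its default port, 9600):

```shell
# The "events" counters under the snmptrap input in the response show
# whether any traps have been accepted by the pipeline.
curl -s 'http://localhost:9600/_node/stats/pipelines?pretty'
```

If the input's `events.out` counter stays at zero while traps are being sent, the plugin is not accepting them.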

My Logstash configuration:

input {
  beats { 
    port => 5044
  }
  http {
    port => 5045
  }
  snmptrap {
    id => "logstash-snmp"
    host => "0.0.0.0"
    port => "1062"
  }
}

output {
  elasticsearch {
    hosts => ...
    username => ...
    password => ...
    index => ...
  }
  file {
    path => ...
    codec => "line"
  }
}

It's important to note that Logstash successfully writes data it receives from other sources to Elasticsearch and the file. There are no errors related to SNMP in the log during the server startup.
And the following line appears:
It's a Trap! {:Port=>1062, :Community=>["public"], :Host=>"0.0.0.0"}

Any reason to have different inputs on the same pipeline instead of using multiple pipelines?

This is not a good approach, as it can lead to many issues since the data from each input can be different.
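For reference, separate pipelines are declared in pipelines.yml; a sketch matching the pipeline ids and conf.d paths that appear in the startup log (the exact paths are taken from that log, not verified):

```yaml
# /etc/logstash/pipelines.yml (sketch)
- pipeline.id: snmp_pipeline
  path.config: "/etc/logstash/conf.d/snmp_pipeline.conf"
- pipeline.id: syslog_pipeline
  path.config: "/etc/logstash/conf.d/syslog_pipeline.conf"
```

Each pipeline then gets its own workers, queue, and failure isolation instead of sharing one.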

Besides that, I do not see anything wrong in the pipeline that would drop data.

You have no warn/error logs in Logstash?

Can you share the tcpdump/network capture on port 1062 showing the messages you are receiving on port 1062?

Of course, I split each output into a separate pipeline. This was for POC purposes only.
The output of the sniffer on port 1062 with tcpdump during the sending of SNMP traps:

timestamp IP myIP > logstashIP: UDP, length 85

@ leandrojmp
Do you know what the problem might be?

No idea, you didn't share the logstash logs or the packet capture as asked:

You have no warn/error logs in Logstash?
Can you share the tcpdump/network capture on port 1062 showing the messages you are receiving on port 1062?

You need to share the logs you have in Logstash and also some real messages of the capture when you send snmp events to the logstash port.

The log of my Logstash server:

[2024-06-24T08:06:35,194][INFO ][logstash.runner          ] Log4j configuration path used is: /etc/logstash/log4j2.properties
[2024-06-24T08:06:35,199][INFO ][logstash.runner          ] Starting Logstash {"logstash.version"=>"8.11.3", "jruby.version"=>"jruby 9.4.5.0 (3.1.4) 2023-11-02 1abae2700f OpenJDK 64-Bit Server VM 17.0.9+9 on 17.0.9+9 +indy +jit [x86_64-linux]"}
[2024-06-24T08:06:35,201][INFO ][logstash.runner          ] JVM bootstrap flags: [-Xms1g, -Xmx1g, -Djava.awt.headless=true, -Dfile.encoding=UTF-8, -Djruby.compile.invokedynamic=true, -XX:+HeapDumpOnOutOfMemoryError, -Djava.security.egd=file:/dev/urandom, -Dlog4j2.isThreadContextMapInheritable=true, -Djruby.regexp.interruptible=true, -Djdk.io.File.enableADS=true, --add-exports=jdk.compiler/com.sun.tools.javac.api=ALL-UNNAMED, --add-exports=jdk.compiler/com.sun.tools.javac.file=ALL-UNNAMED, --add-exports=jdk.compiler/com.sun.tools.javac.parser=ALL-UNNAMED, --add-exports=jdk.compiler/com.sun.tools.javac.tree=ALL-UNNAMED, --add-exports=jdk.compiler/com.sun.tools.javac.util=ALL-UNNAMED, --add-opens=java.base/java.security=ALL-UNNAMED, --add-opens=java.base/java.io=ALL-UNNAMED, --add-opens=java.base/java.nio.channels=ALL-UNNAMED, --add-opens=java.base/sun.nio.ch=ALL-UNNAMED, --add-opens=java.management/sun.management=ALL-UNNAMED]
[2024-06-24T08:06:35,998][INFO ][logstash.agent           ] Successfully started Logstash API endpoint {:port=>---, :ssl_enabled=>false}
[2024-06-24T08:06:36,279][INFO ][org.reflections.Reflections] Reflections took 129 ms to scan 1 urls, producing 131 keys and 463 values
[2024-06-24T08:06:36,545][INFO ][logstash.codecs.json     ] ECS compatibility is enabled but `target` option was not specified. This may cause fields to be set at the top-level of the event where they are likely to clash with the Elastic Common Schema. It is recommended to set the `target` option to avoid potential schema conflicts (if your data is ECS compliant or non-conflicting, feel free to ignore this message)
[2024-06-24T08:06:36,572][INFO ][logstash.javapipeline    ] Pipeline `snmp_pipeline` is configured with `pipeline.ecs_compatibility: v8` setting. All plugins in this pipeline will default to `ecs_compatibility => v8` unless explicitly configured otherwise.
[2024-06-24T08:06:36,632][WARN ][org.logstash.instrument.metrics.gauge.LazyDelegatingGauge][snmp_pipeline] A gauge metric of an unknown type (org.jruby.specialized.RubyArrayOneObject) has been created for key: send_to. This may result in invalid serialization.  It is recommended to log an issue to the responsible developer/development team.
[2024-06-24T08:06:36,674][INFO ][logstash.javapipeline    ][snmp_pipeline] Starting pipeline {:pipeline_id=>"snmp_pipeline", "pipeline.workers"=>2, "pipeline.batch.size"=>100, "pipeline.batch.delay"=>50, "pipeline.max_inflight"=>200, "pipeline.sources"=>["/etc/logstash/conf.d/snmp_pipeline.conf"], :thread=>"#<Thread:0x5f3cae00 /usr/share/logstash/logstash-core/lib/logstash/java_pipeline.rb:134 run>"}
[2024-06-24T08:06:36,690][INFO ][logstash.javapipeline    ] Pipeline `rest_pipeline` is configured with `pipeline.ecs_compatibility: v8` setting. All plugins in this pipeline will default to `ecs_compatibility => v8` unless explicitly configured otherwise.
[2024-06-24T08:06:36,700][INFO ][logstash.javapipeline    ] Pipeline `metrics_pipeline` is configured with `pipeline.ecs_compatibility: v8` setting. All plugins in this pipeline will default to `ecs_compatibility => v8` unless explicitly configured otherwise.
[2024-06-24T08:06:36,707][INFO ][logstash.javapipeline    ] Pipeline `syslog_pipeline` is configured with `pipeline.ecs_compatibility: v8` setting. All plugins in this pipeline will default to `ecs_compatibility => v8` unless explicitly configured otherwise.
[2024-06-24T08:06:36,707][WARN ][org.logstash.instrument.metrics.gauge.LazyDelegatingGauge][rest_pipeline] A gauge metric of an unknown type (org.jruby.specialized.RubyArrayOneObject) has been created for key: send_to. This may result in invalid serialization.  It is recommended to log an issue to the responsible developer/development team.
[2024-06-24T08:06:36,720][INFO ][logstash.javapipeline    ][rest_pipeline] Starting pipeline {:pipeline_id=>"rest_pipeline", "pipeline.workers"=>2, "pipeline.batch.size"=>100, "pipeline.batch.delay"=>50, "pipeline.max_inflight"=>200, "pipeline.sources"=>["/etc/logstash/conf.d/rest_pipeline.conf"], :thread=>"#<Thread:0x1956fa00 /usr/share/logstash/logstash-core/lib/logstash/java_pipeline.rb:134 run>"}
[2024-06-24T08:06:36,796][WARN ][org.logstash.instrument.metrics.gauge.LazyDelegatingGauge][syslog_pipeline] A gauge metric of an unknown type (org.jruby.specialized.RubyArrayOneObject) has been created for key: send_to. This may result in invalid serialization.  It is recommended to log an issue to the responsible developer/development team.
[2024-06-24T08:06:36,814][INFO ][logstash.javapipeline    ][syslog_pipeline] Starting pipeline {:pipeline_id=>"syslog_pipeline", "pipeline.workers"=>2, "pipeline.batch.size"=>100, "pipeline.batch.delay"=>50, "pipeline.max_inflight"=>200, "pipeline.sources"=>["/etc/logstash/conf.d/syslog_pipeline.conf"], :thread=>"#<Thread:0x3e609d4e /usr/share/logstash/logstash-core/lib/logstash/java_pipeline.rb:134 run>"}
[2024-06-24T08:06:36,822][WARN ][org.logstash.instrument.metrics.gauge.LazyDelegatingGauge][metrics_pipeline] A gauge metric of an unknown type (org.jruby.specialized.RubyArrayOneObject) has been created for key: send_to. This may result in invalid serialization.  It is recommended to log an issue to the responsible developer/development team.
[2024-06-24T08:06:36,827][INFO ][logstash.javapipeline    ][metrics_pipeline] Starting pipeline {:pipeline_id=>"metrics_pipeline", "pipeline.workers"=>4, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50, "pipeline.max_inflight"=>500, "pipeline.sources"=>["/etc/logstash/conf.d/metrics_pipeline.conf"], :thread=>"#<Thread:0xda60d74 /usr/share/logstash/logstash-core/lib/logstash/java_pipeline.rb:134 run>"}
[2024-06-24T08:06:36,902][WARN ][logstash.outputs.elasticsearch] You are using a deprecated config setting "ssl" set in elasticsearch. Deprecated settings will continue to work, but are scheduled for removal from logstash in the future. Set 'ssl_enabled' instead. If you have any questions about this, please visit the #logstash channel on freenode irc. {:name=>"ssl", :plugin=><LogStash::Outputs::ElasticSearch password=><password>, id=>"---", user=>"---", ssl=>false, hosts=>[---], data_stream=>"true", enable_metric=>true, codec=><LogStash::Codecs::Plain id=>"---", enable_metric=>true, charset=>"UTF-8">, workers=>1, ssl_certificate_verification=>true, ssl_verification_mode=>"full", sniffing=>false, sniffing_delay=>5, timeout=>60, pool_max=>1000, pool_max_per_route=>100, resurrect_delay=>5, validate_after_inactivity=>10000, http_compression=>true, compression_level=>1, retry_initial_interval=>2, retry_max_interval=>64, dlq_on_failed_indexname_interpolation=>true, data_stream_type=>"logs", data_stream_dataset=>"generic", data_stream_namespace=>"default", data_stream_sync_fields=>true, data_stream_auto_routing=>true, manage_template=>true, template_overwrite=>false, template_api=>"auto", doc_as_upsert=>false, script_type=>"inline", script_lang=>"painless", script_var_name=>"event", scripted_upsert=>false, retry_on_conflict=>1, ilm_enabled=>"auto", ilm_pattern=>"{now/d}-000001", ilm_policy=>"logstash-policy">}
[2024-06-24T08:06:36,919][INFO ][logstash.javapipeline    ] Pipeline `output_pipeline` is configured with `pipeline.ecs_compatibility: v8` setting. All plugins in this pipeline will default to `ecs_compatibility => v8` unless explicitly configured otherwise.
[2024-06-24T08:06:36,948][INFO ][logstash.outputs.elasticsearch][output_pipeline] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["---"]}
[2024-06-24T08:06:37,055][INFO ][logstash.outputs.elasticsearch][output_pipeline] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[---]}}
[2024-06-24T08:06:37,237][WARN ][logstash.outputs.elasticsearch][output_pipeline] Restored connection to ES instance {:url=>"---"}
[2024-06-24T08:06:37,237][INFO ][logstash.outputs.elasticsearch][output_pipeline] Elasticsearch version determined (8.11.3) {:es_version=>8}
[2024-06-24T08:06:37,238][WARN ][logstash.outputs.elasticsearch][output_pipeline] Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>8}
[2024-06-24T08:06:37,251][INFO ][logstash.outputs.elasticsearch][output_pipeline] Not eligible for data streams because config contains one or more settings that are not compatible with data streams: {"index"=>"---"}
[2024-06-24T08:06:37,252][INFO ][logstash.outputs.elasticsearch][output_pipeline] Data streams auto configuration (`data_stream => auto` or unset) resolved to `false`
[2024-06-24T08:06:37,260][INFO ][logstash.outputs.elasticsearch][output_pipeline] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["---"]}
[2024-06-24T08:06:37,264][INFO ][logstash.outputs.elasticsearch][output_pipeline] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[---]}}
[2024-06-24T08:06:37,276][INFO ][logstash.outputs.elasticsearch][output_pipeline] Using a default mapping template {:es_version=>8, :ecs_compatibility=>:v8}
[2024-06-24T08:06:37,301][WARN ][logstash.outputs.elasticsearch][output_pipeline] Restored connection to ES instance {:url=>"---"}
[2024-06-24T08:06:37,302][INFO ][logstash.outputs.elasticsearch][output_pipeline] Elasticsearch version determined (8.11.3) {:es_version=>8}
[2024-06-24T08:06:37,302][WARN ][logstash.outputs.elasticsearch][output_pipeline] Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>8}
[2024-06-24T08:06:37,395][INFO ][logstash.javapipeline    ][output_pipeline] Starting pipeline {:pipeline_id=>"output_pipeline", "pipeline.workers"=>5, "pipeline.batch.size"=>200, "pipeline.batch.delay"=>50, "pipeline.max_inflight"=>1000, "pipeline.sources"=>["/etc/logstash/conf.d/output_pipeline.conf"], :thread=>"#<Thread:0x1cdaae41 /usr/share/logstash/logstash-core/lib/logstash/java_pipeline.rb:134 run>"}
[2024-06-24T08:06:37,458][INFO ][logstash.javapipeline    ][metrics_pipeline] Pipeline Java execution initialization time {"seconds"=>0.63}
[2024-06-24T08:06:37,459][INFO ][logstash.javapipeline    ][rest_pipeline] Pipeline Java execution initialization time {"seconds"=>0.74}
[2024-06-24T08:06:37,460][INFO ][logstash.javapipeline    ][syslog_pipeline] Pipeline Java execution initialization time {"seconds"=>0.65}
[2024-06-24T08:06:37,461][INFO ][logstash.javapipeline    ][snmp_pipeline] Pipeline Java execution initialization time {"seconds"=>0.79}
[2024-06-24T08:06:37,509][WARN ][logstash.filters.grok    ][syslog_pipeline] ECS v8 support is a preview of the unreleased ECS v8, and uses the v1 patterns. When Version 8 of the Elastic Common Schema becomes available, this plugin will need to be updated
[2024-06-24T08:06:37,536][INFO ][logstash.inputs.snmptrap ][snmp_pipeline] ECS compatibility is enabled but `target` option was not specified. This may cause fields to be set at the top-level of the event where they are likely to clash with the Elastic Common Schema. It is recommended to set the `target` option to avoid potential schema conflicts (if your data is ECS compliant or non-conflicting, feel free to ignore this message)
[2024-06-24T08:06:37,548][INFO ][logstash.javapipeline    ][snmp_pipeline] Pipeline started {"pipeline.id"=>"snmp_pipeline"}
[2024-06-24T08:06:37,569][INFO ][logstash.codecs.json     ][rest_pipeline] ECS compatibility is enabled but `target` option was not specified. This may cause fields to be set at the top-level of the event where they are likely to clash with the Elastic Common Schema. It is recommended to set the `target` option to avoid potential schema conflicts (if your data is ECS compliant or non-conflicting, feel free to ignore this message)
[2024-06-24T08:06:37,569][INFO ][logstash.inputs.snmptrap ][snmp_pipeline][logstash-snmp] It's a Trap! {:Port=>---, :Community=>["public"], :Host=>"---"}
[2024-06-24T08:06:37,576][INFO ][logstash.inputs.beats    ][metrics_pipeline] Starting input listener {:address=>"---"}
[2024-06-24T08:06:37,676][INFO ][logstash.javapipeline    ][syslog_pipeline] Pipeline started {"pipeline.id"=>"syslog_pipeline"}
[2024-06-24T08:06:37,743][INFO ][logstash.inputs.syslog   ][syslog_pipeline][logstash-syslog] Starting syslog udp listener {:address=>"---"}
[2024-06-24T08:06:37,748][INFO ][logstash.inputs.syslog   ][syslog_pipeline][logstash-syslog] Starting syslog tcp listener {:address=>"---"}
[2024-06-24T08:06:37,809][INFO ][logstash.javapipeline    ][output_pipeline] Pipeline Java execution initialization time {"seconds"=>0.41}
[2024-06-24T08:06:37,840][INFO ][logstash.javapipeline    ][output_pipeline] Pipeline started {"pipeline.id"=>"output_pipeline"}
[2024-06-24T08:06:37,847][INFO ][logstash.inputs.http     ][rest_pipeline][---] Starting http input listener {:address=>"---", :ssl=>"false"}
[2024-06-24T08:06:37,847][INFO ][logstash.javapipeline    ][rest_pipeline] Pipeline started {"pipeline.id"=>"rest_pipeline"}
[2024-06-24T08:06:38,031][INFO ][logstash.javapipeline    ][metrics_pipeline] Pipeline started {"pipeline.id"=>"metrics_pipeline"}
[2024-06-24T08:06:38,034][INFO ][org.logstash.beats.Server][metrics_pipeline][---] Starting server on port: ---
[2024-06-24T08:06:38,047][INFO ][logstash.agent           ] Pipelines running {:count=>5, :running_pipelines=>[:snmp_pipeline, :rest_pipeline, :metrics_pipeline, :syslog_pipeline, :output_pipeline], :non_running_pipelines=>[]}

The output of the sniffer on port 1062 with tcpdump during the sending of SNMP traps:

timestamp IP myIP > logstashIP: UDP, length 85

Please do not redact relevant information; the address and the port are relevant for troubleshooting.

I will assume that this is 0.0.0.0 for the address and 1062 for the port. Is that correct?

Where are the trap messages? You should have a trap message like the one in this example from the documentation.

If you have no messages like that, then nothing is sending SNMP traps to this port.

From what you shared I see no issue on logstash side, it seems to be listening for traps on the specified port.

This log confirms that logstash is listening:

[2024-06-24T08:06:37,569][INFO ][logstash.inputs.snmptrap ][snmp_pipeline][logstash-snmp] It's a Trap! {:Port=>---, :Community=>["public"], :Host=>"---"}

Are you sure that the sender does not have any issues to communicate with Logstash and it is using the correct community when sending traps?

  1. Yes, the IP address was "0.0.0.0" and the port was "1062".

  2. From what I understand, tcpdump only decodes a packet's payload when it arrives on a well-known port. When I send the trap to port 162, it is displayed fully decoded as expected, but only because 162 is the well-known SNMP trap port. When I send the exact same message to port "1062", the content is not decoded.
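If the capture tool only decodes SNMP on the standard port, decoding can be forced for the nonstandard one. For example with tshark, assuming it is installed and eth0 is the capture interface:

```shell
# Decode UDP/1062 payloads as SNMP even though it is not the standard trap port
tshark -i eth0 -f "udp port 1062" -d udp.port==1062,snmp -V
```

This would show the full PDU, including the community string the sender is actually using.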

How are you running tcpdump?

It should be something like this:

tcpdump -Ai <interface-name> udp port 1062

Where interface name is the name of the interface of your server, eth0 for example.

As mentioned, from what you shared there is nothing wrong on logstash side, it is listening without any issue.

If the data is not arriving, then the communication between the hosts needs troubleshooting.

Test the following: stop Logstash (or just the snmptrap pipeline) and use nc to listen for connections on the same port and protocol that Logstash would use.

nc -u -l 1062

Then send the trap from the other server and check if it arrives or not.
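For example (the host name is a placeholder, and nc flag syntax varies between netcat variants):

```shell
# On the Logstash host, with Logstash stopped:
nc -u -l 1062

# On the sending host, push a test datagram:
printf 'test' | nc -u -w1 <logstash-host> 1062
```

If the datagram shows up in the listener, the network path and firewall rules are fine and the problem is inside Logstash.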

Thank you for the response.
I am using tcpdump correctly. As I mentioned, I saw the messages being received on the server at port 1062 while Logstash was running.
Now I did what you asked: I stopped the Logstash service and then ran the command:

nc -u -l 1062

In this case as well, the messages were received.
The messages are definitely being received on the server, but Logstash is "ignoring" them, and I don't understand why.

Yeah, I'm not sure what the issue is here.

Can you confirm that the community is correct? Can you share the configuration of the sender of snmp data?

The SnmpListener will silently discard messages for communities it is not told to listen for. You could try community => '' to remove the community filtering.
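A sketch of what that would look like, based on the input configuration shared above (untested):

```
input {
  snmptrap {
    id => "logstash-snmp"
    host => "0.0.0.0"
    port => 1062
    community => ""  # remove the community filtering suggested above
  }
}
```

If traps start appearing with this change, the sender is using a community string other than "public".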

If you enable --log.level debug then the input will log whenever it receives a trap (I expect this will not result in anything being logged with the config as-is).
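The level can be raised for just this input at runtime through the logging API, or globally at startup (assuming the default API port 9600 and the standard package install path):

```shell
# Raise only the snmptrap input's logger to DEBUG at runtime
curl -s -XPUT 'http://localhost:9600/_node/logging' \
  -H 'Content-Type: application/json' \
  -d '{"logger.logstash.inputs.snmptrap": "DEBUG"}'

# Or restart with global debug logging
/usr/share/logstash/bin/logstash --log.level debug
```

The runtime route avoids flooding every pipeline's logs with debug output.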
