Syslog output doesn't support dynamic host


#1

Hi,

I have to route alerts from remote sites back to their syslog servers.
For that, I parse the alert event and, based on its value, add a metadata field:
if [_source][site] == "new york" { mutate { add_field => { "[@metadata][remotesite]" => "ny.remote.site" } } }

I have verified with rubydebug that this will generate the metadata field correctly:

{
     "@timestamp" => 2017-09-04T07:17:09.152Z,
         "_index" => "company-sonicwall-2017.09.04",
      "@metadata" => {
        "remotesite" => "ny.remote.site",
           "appname" => "SonicwallFailedLogin"
    },
          "_type" => "sonicwall",
        "_source" => { ..... REMOVED ...... },
            "_id" => "sdfjksdhlkrh32we",
         "_score" => 1.0,
           "type" => "alarm"
}

I have configured syslog output like this:
syslog {
    appname    => "%{[@metadata][appname]}"
    host       => "%{[@metadata][remotesite]}"
    message    => "Login alert etc."
    port       => 514
    priority   => "<129>"
    protocol   => "tcp"
    rfc        => "rfc5424"
    severity   => "alarm"
    sourcehost => "%{[_source][host]}"
}

I get this error when it tries to send the event to syslog server:
[2017-09-04T10:39:38,017][WARN ][logstash.outputs.syslog ] syslog tcp output exception: closing, reconnecting and resending event {:host=>"%{[@metadata][remotesite]}", :port=>514, :exception=>#<SocketError: initialize: name or service not known>, :backtrace=>["org/jruby/ext/socket/RubyTCPSocket.java:129:in `initialize'", "org/jruby/RubyIO.java:871:in `new'", "/usr/share/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-syslog-3.0.2/lib/logstash/outputs/syslog.rb:210:in `connect'", "/usr/share/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-syslog-3.0.2/lib/logstash/outputs/syslog.rb:178:in `publish'", "org/jruby/RubyProc.java:281:in `call'", "/usr/share/logstash/vendor/bundle/jruby/1.9/gems/logstash-codec-plain-3.0.3/lib/logstash/codecs/plain.rb:41:in `encode'", "/usr/share/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-syslog-3.0.2/lib/logstash/outputs/syslog.rb:147:in `receive'", "/usr/share/logstash/logstash-core/lib/logstash/outputs/base.rb:92:in `multi_receive'", "org/jruby/RubyArray.java:1613:in `each'", "/usr/share/logstash/logstash-core/lib/logstash/outputs/base.rb:92:in `multi_receive'", "/usr/share/logstash/logstash-core/lib/logstash/output_delegator_strategies/legacy.rb:22:in `multi_receive'", "/usr/share/logstash/logstash-core/lib/logstash/output_delegator.rb:47:in `multi_receive'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:420:in `output_batch'", "org/jruby/RubyHash.java:1342:in `each'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:419:in `output_batch'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:365:in `worker_loop'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:330:in `start_workers'"], :event=>2017-09-04T07:39:27.906Z %{host} %{message}}

So it seems that the host field doesn't support dynamic field-value replacement?
I have verified that the sending host can reach the remote syslog server and the domain name resolves correctly.

Any ideas?


(Guy Boertje) #2

You suspect correctly: the syslog output does not support dynamic (sprintf-style) field references in the host option.

Your only option is to use an if/else block in the output section to steer each event to one of several hard-coded host outputs.
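For example, a sketch of that conditional routing (the "london" branch and the ldn.remote.site hostname are hypothetical; in a real config each branch would repeat the rest of your syslog options as well):

```
output {
  if [_source][site] == "new york" {
    syslog {
      host     => "ny.remote.site"
      port     => 514
      protocol => "tcp"
    }
  } else if [_source][site] == "london" {
    syslog {
      host     => "ldn.remote.site"
      port     => 514
      protocol => "tcp"
    }
  }
}
```

Every new site means adding another branch by hand, which is what makes this approach painful at scale.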


#3

Thank you for your reply @guyboertje.

Yeah, that's what I am doing currently, but we are expecting to get a lot more remote sites, so it is not a feasible solution in the long run.

What are my options here? Is this a community-supported plugin, and should I post this in the GitHub issues? Is the lack of dynamic host support a real limitation, or a deliberate design decision?


(Guy Boertje) #4

In the current code, the connection is made once for the @host and reused.

To properly implement a dynamic solution, the code would need to cache the connections in an LRU cache and disconnect a connection when it is evicted from the cache.
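To illustrate the idea (this is a hypothetical sketch, not the plugin's actual code): a small LRU cache keyed by host that closes a connection when it falls out of the cache. It relies on Ruby hashes preserving insertion order.

```ruby
# Sketch: an LRU cache of per-host connections. `ConnectionCache` and its
# API are invented for illustration; the real plugin has no such class.
class ConnectionCache
  def initialize(max_size)
    @max_size = max_size
    @cache = {} # insertion-ordered, so the first entry is the least recently used
  end

  # Return the cached connection for `host`, or create one via the block.
  # On a miss with a full cache, evict and disconnect the oldest entry.
  def fetch(host)
    if @cache.key?(host)
      conn = @cache.delete(host) # delete + re-insert marks it most recently used
    else
      evict_oldest if @cache.size >= @max_size
      conn = yield host
    end
    @cache[host] = conn
    conn
  end

  private

  def evict_oldest
    oldest_host, oldest_conn = @cache.first
    @cache.delete(oldest_host)
    oldest_conn.close if oldest_conn.respond_to?(:close)
  end
end
```

The output would then call something like `cache.fetch(resolved_host) { connect(resolved_host) }` per event instead of holding a single connection for a fixed @host.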

There may be a security concern with dynamic host names as well. It could be lessened by only allowing the variable portion to come from a @metadata field.

It is unlikely that we will make this change in the near term.


(system) #5

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.