Logstash keeps trying a configuration that's already been deleted

Hi,

I'm running into an issue where Logstash keeps trying to send events to a syslog server even after I've deleted the syslog output plugin from the configuration.

I've set Logstash to check for new configuration every 10 seconds, but once it gets into this state it no longer picks up configuration changes.
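For reference, this is roughly how auto-reload is enabled on my setup (the config path is a placeholder; on 5.x the reload interval is a plain number of seconds):

```shell
# Start Logstash with automatic config reload, polling every 10 seconds.
# /etc/logstash/conf.d/ is a placeholder for the actual config path.
bin/logstash -f /etc/logstash/conf.d/ \
  --config.reload.automatic \
  --config.reload.interval 10
```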

Here are the errors:

2017-05-03T22:39:13.782373260Z 22:39:13.781 [Ruby-0-Thread-57: /usr/share/logstash/logstash-core/lib/logstash/shutdown_watcher.rb:31] WARN  logstash.shutdownwatcher - {"inflight_count"=>11, "stalling_thread_info"=>{["LogStash::Filters::Mutate", {"rename"=>{"[kubernetes][container_name]"=>"service"}, "id"=>"ab07e67fc14c55cff2fe6ec4b5337d7acd85ede1-19"}]=>[{"thread_id"=>72, "name"=>"[main]>worker4", "current_call"=>"[...]/vendor/bundle/jruby/1.9/gems/logstash-output-syslog-3.0.1/lib/logstash/outputs/syslog.rb:188:in `sleep'"}]}}
2017-05-03T22:39:14.084995982Z 22:39:14.084 [[main]>worker4] WARN  logstash.outputs.syslog - syslog tcp output exception: closing, reconnecting and resending event {:host=>"1.1.1.1", :port=>514, :exception=>#<Errno::ECONNREFUSED: Connection refused - Connection refused>, :backtrace=>["org/jruby/ext/socket/RubyTCPSocket.java:126:in `initialize'", "org/jruby/RubyIO.java:871:in `new'", "/usr/share/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-syslog-3.0.1/lib/logstash/outputs/syslog.rb:209:in `connect'", "/usr/share/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-syslog-3.0.1/lib/logstash/outputs/syslog.rb:177:in `publish'", "org/jruby/RubyProc.java:281:in `call'", "/usr/share/logstash/vendor/bundle/jruby/1.9/gems/logstash-codec-plain-3.0.2/lib/logstash/codecs/plain.rb:41:in `encode'", "/usr/share/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-syslog-3.0.1/lib/logstash/outputs/syslog.rb:147:in `receive'", "/usr/share/logstash/logstash-core/lib/logstash/outputs/base.rb:92:in `multi_receive'", "org/jruby/RubyArray.java:1613:in `each'", "/usr/share/logstash/logstash-core/lib/logstash/outputs/base.rb:92:in `multi_receive'", "/usr/share/logstash/logstash-core/lib/logstash/output_delegator_strategies/legacy.rb:19:in `multi_receive'", "/usr/share/logstash/logstash-core/lib/logstash/output_delegator.rb:43:in `multi_receive'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:414:in `output_batch'", "org/jruby/RubyHash.java:1342:in `each'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:413:in `output_batch'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:371:in `worker_loop'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:331:in `start_workers'"], :event=>2017-05-03T22:32:35.885Z maglev-master-1 %{message}}
2017-05-03T22:39:15.094351359Z 22:39:15.093 [[main]>worker4] WARN  logstash.outputs.syslog - syslog tcp output exception: closing, reconnecting and resending event {:host=>"1.1.1.1", :port=>514, :exception=>#<Errno::ECONNREFUSED: Connection refused - Connection refused>, :backtrace=>["org/jruby/ext/socket/RubyTCPSocket.java:126:in `initialize'", "org/jruby/RubyIO.java:871:in `new'", "/usr/share/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-syslog-3.0.1/lib/logstash/outputs/syslog.rb:209:in `connect'", "/usr/share/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-syslog-3.0.1/lib/logstash/outputs/syslog.rb:177:in `publish'", "org/jruby/RubyProc.java:281:in `call'", "/usr/share/logstash/vendor/bundle/jruby/1.9/gems/logstash-codec-plain-3.0.2/lib/logstash/codecs/plain.rb:41:in `encode'", "/usr/share/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-syslog-3.0.1/lib/logstash/outputs/syslog.rb:147:in `receive'", "/usr/share/logstash/logstash-core/lib/logstash/outputs/base.rb:92:in `multi_receive'", "org/jruby/RubyArray.java:1613:in `each'", "/usr/share/logstash/logstash-core/lib/logstash/outputs/base.rb:92:in `multi_receive'", "/usr/share/logstash/logstash-core/lib/logstash/output_delegator_strategies/legacy.rb:19:in `multi_receive'", "/usr/share/logstash/logstash-core/lib/logstash/output_delegator.rb:43:in `multi_receive'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:414:in `output_batch'", "org/jruby/RubyHash.java:1342:in `each'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:413:in `output_batch'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:371:in `worker_loop'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:331:in `start_workers'"], :event=>2017-05-03T22:32:35.885Z maglev-master-1 %{message}}

Configuration for syslog:

output {
    # syslog output configuration for 1.1.1.1:514 (tcp)
    syslog {
        host => "1.1.1.1"
        port => "514"
        protocol => "tcp"
        rfc => "rfc5424"
    }
}

Everything works fine when the syslog server is reachable. I'm running Logstash 5.2.2.

Any ideas anyone?

Hello @ronakg

I'm not sure that I'm understanding your issue.

You say that your config sends output to syslog and that LS is configured to reload config every 10 seconds.

And that when syslog is not reachable, Logstash complains about it. That sounds like expected behavior to me.

Or do you mean that if, in this situation, you change the LS configuration (by commenting out the syslog output, for instance), it still tries to send output through syslog?

I'm getting into this issue where logstash keeps trying to send events to a syslog server even after I've deleted the output plugin for syslog server.

Do you have a backup file or similar hanging around in /etc/logstash/conf.d? Logstash reads all files in that directory (or wherever you've configured it to look).
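A quick way to check is just to list the directory: anything that matches, including leftover editor backups, gets concatenated into the active pipeline. A small sketch (the directory and file names here are made up for illustration):

```shell
# List every file the directory glob would match; Logstash concatenates
# all of them, so a leftover backup like 10-output.conf.bak counts too.
dir=$(mktemp -d)                                  # stand-in for /etc/logstash/conf.d
touch "$dir/10-output.conf" "$dir/10-output.conf.bak"
ls "$dir"                                         # both files show up
rm -rf "$dir"
```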

Okay let me try to elaborate.

We have a script that generates the Logstash configuration; there's only one configuration file. Whenever a user changes something (for example, adds a new syslog server), the script regenerates the entire Logstash configuration file.

Now let's say a user adds a syslog server that doesn't exist. Logstash retries until the server becomes reachable, which is totally expected.

But then the user realizes the syslog server is wrong, so she deletes it and adds a new one. The change is reflected in the Logstash configuration file, but at this point Logstash doesn't pick up the new configuration and keeps retrying the old syslog server.

I'm seeing the error below in the logs now. Does it mean there are in-flight events that haven't been delivered to the output yet, so Logstash keeps retrying that output even after its configuration has been deleted?

If that's the case, is it possible to tell Logstash to drop the in-flight events and stop retrying the deleted output?

2017-05-17T04:58:57.517061973Z 04:58:57.516 [Ruby-0-Thread-98: /usr/share/logstash/logstash-core/lib/logstash/shutdown_watcher.rb:31] WARN  logstash.shutdownwatcher - {"inflight_count"=>153, "stalling_thread_info"=>{["LogStash::Filters::Mutate", {"rename"=>{"[kubernetes][container_name]"=>"service"}, "id"=>"4e7dd62eb6ff37cd48da906d4ae7e4ab54f713a3-19"}]=>[{"thread_id"=>111, "name"=>"[main]>worker3", "current_call"=>"[...]/vendor/bundle/jruby/1.9/gems/logstash-output-syslog-3.0.1/lib/logstash/outputs/syslog.rb:188:in `sleep'"}, {"thread_id"=>114, "name"=>"[main]>worker6", "current_call"=>"[...]/logstash-core/lib/logstash/output_delegator_strategies/legacy.rb:18:in `pop'"}]}}
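The closest thing I've found in the docs is the unsafe-shutdown switch, which, as I understand it (untested here), lets Logstash terminate a stalled pipeline even when in-flight events haven't been delivered, at the cost of losing those events:

```shell
# Allow Logstash to kill a stalled pipeline during shutdown/reload,
# accepting that in-flight events may be lost.
bin/logstash -f /etc/logstash/conf.d/ --pipeline.unsafe_shutdown
```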
