Hello everybody,
I have been looking for a solution to my problem for days, without any success.
I tried upgrading to the new 5.0 stack, setting up one VM per component (Logstash, Elasticsearch, Kibana), and changing the configuration, but nothing seems to work.
Basically, I want to drain syslog logs from Heroku (https://devcenter.heroku.com/articles/log-drains), send them to my Logstash, process them there, and then forward them to my Elasticsearch. Everything works fine for a while, until I hit the error below (it happens about once a day) and I have to restart Logstash manually to get it running again. When this error occurs, my CPU and memory usage spike to almost 100%, whereas they stay low during normal use.
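For context, the drain itself is added on the Heroku side with something along these lines (the hostname, port and app name below are just placeholders for my Logstash VM and my app):
heroku drains:add syslog+tls://my-logstash-host.example.com:1514 -a my-app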
Here is my error:
[2016-10-31T10:51:59,693][ERROR][logstash.instrument.periodicpoller.jvm] PeriodicPoller: exception {:poller=>#<LogStash::Instrument::PeriodicPoller::JVM:0x263e02b5 @task=#<Concurrent::TimerTask:0x35d23145 @observers=#<Concurrent::Collection::CopyOnNotifyObserverSet:0x46c55984 @observers={#<LogStash::Instrument::PeriodicPoller::JVM:0x263e02b5 ...>=>:update}>, @timeout_interval=60.0, @running=#<Concurrent::AtomicBoolean:0x4da42c1b>, @StoppedEvent=#<Concurrent::Event:0x69d874f3 @set=false, @iteration=0>, @execution_interval=1.0, @do_nothing_on_deref=true, @run_now=nil, @freeze_on_deref=nil, @executor=#<Concurrent::SafeTaskExecutor:0x53d1935 @task=#<Proc:0x759820b5@/usr/share/logstash/logstash-core/lib/logstash/instrument/periodic_poller/base.rb:52>, @exception_class=StandardError>, @StopEvent=#<Concurrent::Event:0x19b00ee9 @set=false, @iteration=0>, @value=nil, @copy_on_deref=nil, @dup_on_deref=nil>, @peak_threads=1030, @peak_open_fds=4095, @metric=#<LogStash::Instrument::Metric:0x33e59ec0 @collector=#<LogStash::Instrument::Collector:0x10e9bbac @agent=nil, @metric_store=#<LogStash::Instrument::MetricStore:0x5efcdd38 @store=#<Concurrent::Map:0x6a55d3da @default_proc=nil>, @structured_lookup_mutex=#<Mutex:0x730b4f3f>, @fast_lookup=#<Concurrent::Map:0x6c07cc00 @default_proc=nil>>, @observer_state=false, @snapshot_task=#<Concurrent::TimerTask:0x542ac9a5 @observers=#<Concurrent::Collection::CopyOnNotifyObserverSet:0x10a50bf9 @observers={#<LogStash::Instrument::Collector:0x10e9bbac ...>=>:update}>, @timeout_interval=600.0, @running=#<Concurrent::AtomicBoolean:0x6bcb6096>, @StoppedEvent=#<Concurrent::Event:0x69aeebab @set=false, @iteration=0>, @execution_interval=1.0, @do_nothing_on_deref=true, @run_now=nil, @freeze_on_deref=nil, @executor=#<Concurrent::SafeTaskExecutor:0x516cdfa4 @task=#<Proc:0x1e3eac08@/usr/share/logstash/logstash-core/lib/logstash/instrument/collector.rb:87>, @exception_class=StandardError>, @StopEvent=#<Concurrent::Event:0x2ec1a089 @set=false, @iteration=0>, @value=false, @copy_on_deref=nil, @dup_on_deref=nil>>>, @options={:polling_interval=>1, :polling_timeout=>60}>, :result=>nil, :exception=>#<Concurrent::TimeoutError: Concurrent::TimeoutError>, :executed_at=>2016-10-31 10:51:59 +0000}
[2016-10-31T10:51:59,721][ERROR][logstash.pipeline ] A plugin had an unrecoverable error. Will restart this plugin.
Plugin: <LogStash::Inputs::Tcp type=>"heroku", port=>1514, codec=><LogStash::Codecs::Plain id=>"plain_7d17f778-6aa6-47f4-93f4-e97f904aca21", enable_metric=>true, charset=>"UTF-8">, ssl_enable=>true, ssl_verify=>false, ssl_cert=>"/etc/pki/tls/certs/logstash-forwarder.crt", ssl_key=>"/etc/pki/tls/private/logstash-forwarder.key", id=>"3d22eb2595295c0b2bedecb3456ee342ca36f418-1", enable_metric=>true, host=>"0.0.0.0", data_timeout=>-1, mode=>"server", ssl_key_passphrase=><password>>
Error: closed stream
My setup is:
- Ubuntu 16.04
- openjdk version "1.8.0_91"
- ELK version 5.0
Logstash input config file (I tried it both with and without the codec line):
input {
  tcp {
    type => "heroku"
    port => 1514
    codec => "plain"
    ssl_enable => true
    ssl_verify => false
    ssl_cert => "/etc/pki/tls/certs/logstash-forwarder.crt"
    ssl_key => "/etc/pki/tls/private/logstash-forwarder.key"
  }
}
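For now, the only way I have found to get Logstash working again after the error is to restart the service by hand, roughly like this (on my Ubuntu 16.04 box with the 5.0 package):
sudo systemctl restart logstash
Obviously that is only a workaround, not a fix.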
Has anyone had a similar issue before?
Thanks for your help