Hello. I'm using a Winlogbeat/Filebeat -> Logstash (Linux) -> Elasticsearch (Linux) environment
where my Logstash process reaches a specific number of connected Beats clients
and then starts to drop new connections. Example from the actual Logstash host on
localhost below:
$ time telnet 127.0.0.1 5043
Trying 127.0.0.1...
Connected to 127.0.0.1.
Escape character is '^]'.
Connection closed by foreign host.
real 0m0.017s
user 0m0.001s
sys 0m0.005s
The magic number seems to be around 380 connected Winlogbeat and Filebeat clients.
Once this number is reached, new connections on my Beats port — even local ones —
are dropped. Already-connected clients appear to stay connected.
I've tried splitting the connected clients into two different Beats pools on the same
Logstash installation: one on port 5043 and one on port 5044. The total number of connected
clients before both pools start giving the "Connection closed" error is roughly the same.
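For reference, the split is nothing more than two beats inputs in the same pipeline — a minimal sketch of what I mean (no other settings changed):

```
input {
  beats {
    port => 5043   # first pool
  }
  beats {
    port => 5044   # second pool
  }
}
```

Both inputs hit the same combined ~380-connection ceiling, which is why I suspect a process-wide limit rather than a per-input one.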
I'm running logstash 2.4.0 with:
$ java -version
java version "1.8.0_101"
Java(TM) SE Runtime Environment (build 1.8.0_101-b13)
Java HotSpot(TM) 64-Bit Server VM (build 25.101-b13, mixed mode)
I'm using: LS_OPTS="--pipeline-workers 16" and LS_HEAP_SIZE="8g".
I'm running this on Red Hat 7.2 and there are no obvious resource
problems. Logstash doesn't log anything after the famous "Pipeline main started".
Are there any other Java settings I could try in order to increase the maximum
number of connections to the beats plugin?
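In case it helps, here is roughly how I checked for resource problems — a sketch using standard Linux tools (the `pgrep` pattern is an assumption; adjust it to however your Logstash process is named):

```shell
# Count established TCP connections on the Beats ports from above
ss -tn state established '( sport = :5043 or sport = :5044 )' | tail -n +2 | wc -l

# File-descriptor limit in the current shell, for comparison
ulimit -n

# Limits actually applied to the running Logstash process, if one is found
pid=$(pgrep -f logstash | head -n 1)
[ -n "$pid" ] && grep 'open files' "/proc/$pid/limits" || true
```

None of these showed anything close to a limit when the connection drops started.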
Edit: I'm getting lots of :method=>"interval_flush"} log lines when running Logstash
in debug mode. Could these be a source of the problem?
grep interval_flush /var/log/logstash/logstash.log | wc -l
1628
grep -v interval_flush /var/log/logstash/logstash.log | wc -l
404197
{:timestamp=>"2016-09-15T13:24:45.737000+0200", :message=>"Flushing buffer at interval", :instance=>"#<LogStash::Outputs::ElasticSearch::Buffer:0x2c48b6a2 @operations_mutex=#<Mutex:0x228f4d5b>, @max_size=500, @operations_lock=#<Java::JavaUtilConcurrentLocks::ReentrantLock:0x26b0ddd2>, @submit_proc=#<Proc:0xa260012@/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-2.7.1-java/lib/logstash/outputs/elasticsearch/common.rb:57>, @logger=#<Cabin::Channel:0x521b6198 @metrics=#<Cabin::Metrics:0x72657844 @metrics_lock=#<Mutex:0x39db82f>, @metrics={}, @channel=#<Cabin::Channel:0x521b6198 ...>>, @subscriber_lock=#<Mutex:0x39010ede>, @level=:debug, @subscribers={13206=>#<Cabin::Subscriber:0x696156c8 @output=#<Cabin::Outputs::IO:0x7d3c70b @io=#<File:/var/log/logstash/logstash.log>, @lock=#<Mutex:0x36155ed1>>, @options={}>, 13208=>#<Cabin::Subscriber:0x7b392e43 @output=#<Cabin::Outputs::IO:0x188df0fa @io=#<IO:fd 1>, @lock=#<Mutex:0x17f977f>>, @options={:level=>:fatal}>}, @data={}>, @last_flush=2016-09-15 13:24:44 +0200, @flush_interval=1, @stopping=#<Concurrent::AtomicBoolean:0x2f79cc82>, @buffer=[], @flush_thread=#<Thread:0x5884e3b7 run>>", :interval=>1, :level=>:debug, :file=>"logstash/outputs/elasticsearch/buffer.rb", :line=>"90", :method=>"interval_flush"}
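For context, the numbers in that debug line (@max_size=500, @flush_interval=1) appear to correspond to the elasticsearch output's buffering settings — a sketch of the relevant part of my output config (the hosts value is a placeholder; as far as I can tell these are the plugin defaults):

```
output {
  elasticsearch {
    hosts => ["localhost:9200"]  # placeholder
    flush_size => 500            # matches @max_size=500 in the debug line
    idle_flush_time => 1         # matches @flush_interval=1 in the debug line
  }
}
```

So the line seems to just be the output buffer flushing once per second, but I'd like to confirm that it's harmless.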