Logstash 2.2.x kafka output large increase in broker tcp connections

Upgrading from logstash 2.1.1 to 2.2.2 and seeing roughly a 10x increase in established TCP connections to brokers. In a large environment this is a problem. We want to use ls 2.2.x to take advantage of its better handling of 503s from the elasticsearch output.

Testing env:

  • ubuntu trusty - oracle java 1.7.0_51 - logstash 2.2.2
  • one kafka output to one topic with 36 partitions on 6 brokers
  • config using bootstrap_servers, topic_id & compression_type = snappy
  • after a few mins logstash has 72 established tcp connections to kafka brokers

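For reference, the kafka output under test looks roughly like this (broker addresses and topic name are placeholders; the real topic has 36 partitions across 6 brokers):

```
output {
  kafka {
    # placeholder hosts - real setup has 6 brokers
    bootstrap_servers => "broker1:9092,broker2:9092,broker3:9092"
    topic_id => "my_topic"
    compression_type => "snappy"
  }
}
```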
The same config with logstash 2.1.1 maintains ~15 tcp connections.
Setting "message_key" to several test strings (not starting with a digit) had no effect.
A noticeable change between ls 2.1.1 & 2.2.2 is the upgrade from jruby-kafka 1.4.0 to 1.5.0.
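For anyone reproducing this, a quick sketch of how the connection counts above can be measured on the logstash host (assumes `ss` from iproute2 is available and that the brokers listen on the Kafka default port 9092; adjust to match your setup):

```shell
# Count ESTABLISHED TCP connections to the Kafka broker port.
# Port 9092 is the Kafka default and an assumption here.
count=$(ss -Htn state established '( dport = :9092 )' 2>/dev/null | wc -l)
echo "broker connections: $count"
```

Running this a few minutes after startup on 2.2.2 vs 2.1.1 should show the difference described above.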

I would imagine this has something to do with the pipeline changes in 2.2.