Logstash 2.3.x throughput drops significantly compared to 2.2.0

Hi everyone,

Has anyone experienced a throughput drop in Logstash 2.3.x (2.3.0, 2.3.1, 2.3.3) compared to 2.2.0? I'm using Logstash to receive, process, and send IIS logs to Elasticsearch. On the same physical server, with a single Logstash instance, the same Logstash config, and the same number of incoming events:

  • Logstash 2.3.x sent 1500 EPS to ES
  • Logstash 2.2.0 sent 6000 EPS to ES

I repeated the test multiple times and also saw a similar issue when indexing NetFlow data. The Logstash config is very simple, as shown below:

input {
	tcp {
		port => 5544
		codec => "json"
	}
}

filter {

	#--------------------------- IISLogs Filters
	# Convert the IIS log timestamp and store it in the @timestamp field
	date {
		match => ["GMTTime", "yyyy-MM-dd HH:mm:ss"]
		timezone => "Etc/GMT"
	}
	# Parse the user's web browser
	useragent {
		source => "cs(User-Agent)"
		target => "user-agent"
	}
	# IIS log client IP geo info
	geoip {
		source => "c-ip"
		target => "client_geoip"
		fields => ["country_name", "real_region_name", "city_name", "location"]
	}
	geoip {
		source => "realip"
		target => "realip_geoip"
		fields => ["country_name", "real_region_name", "city_name", "location"]
	}
	#--------------------------- End IISLogs Filters

	#--------------------------- Perfmon Filters
	date {
		match => ["EventTime", "MM/dd/yyyy HH:mm:ss.SSS"]
		timezone => "America/Los_Angeles"
		locale => "en"
	}
	#--------------------------- End Perfmon Filters

	## Remove redundant fields
	mutate {
		remove_field => ["@version", "host", "port", "EventTime", "cs(Cookie)", "sc-substatus", "sc-win32-status", "[user-agent][minor]", "[user-agent][os]", "[user-agent][patch]", "cs-version", "cs(User-Agent)", "cs-username"]
	}

	## Drop events tagged with "_jsonparsefailure"
	if "_jsonparsefailure" in [tags] {
		drop {}
	}
}
Is there a way to troubleshoot Logstash performance?

By the way, is it safe to increase LS_HEAP_SIZE from the default 1 GB to 2 or 4 GB?
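If so, my understanding is that the 2.x startup scripts read LS_HEAP_SIZE from the environment before launching the JVM, so something like the following should be all that's needed (the 2g value and config path are just examples):

```
# LS_HEAP_SIZE is picked up by the Logstash 2.x startup script to size the JVM heap
export LS_HEAP_SIZE=2g
bin/logstash -f /path/to/logstash.conf
```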

Just replying in case someone else runs into the same issue.

The same config and command-line arguments worked fine in 2.2.0, but in LS 2.3.x I had to increase the Logstash batch size to 1000 or 1500, increase the elasticsearch output plugin's workers to match the number of LS pipeline workers (both 24 in my case), and set flush_size to 1500. Increasing the elasticsearch output workers may not be necessary, but with these changes everything works fine.
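For anyone wanting to try the same workaround, here is roughly what I mean (the config path and ES host list are placeholders for your own values; the flags apply to Logstash 2.3.x):

```
# Start Logstash with 24 pipeline workers and a pipeline batch size of 1000
bin/logstash -w 24 -b 1000 -f /path/to/logstash.conf
```

and in the elasticsearch output:

```
output {
	elasticsearch {
		hosts => ["es-node1:9200"]   # placeholder node list
		workers => 24                # match the number of pipeline workers
		flush_size => 1500           # events per bulk request
	}
}
```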

As of Logstash 2.2, the default number of output workers is equal to the number of pipeline workers (-w) unless it is overridden in the Logstash config file.

Does the number of output workers also determine the number of TCP connections between LS and ES? If so, then in LS 2.3.3, when I leave workers at the default, I see 3 TCP connections from LS to each ES node even though there are 24 pipeline workers.