Logstash Batch Size/Workers log message

Is there a way to work around the message below that I am seeing in my logs? I am currently working with Logstash 2.2 and am trying to improve the number of events running through the pipeline. Currently I am using Redis as my input (threads => 10 and batch_count => 1000) and Elasticsearch as my output (flush_size => 5000). My current throughput is between 4500 and 5000 events per second. I am trying to increase the number of events processed from the Redis queue.

{:timestamp=>"2016-03-01T16:01:25.001000-0500", :message=>"CAUTION: Recommended inflight events max exceeded! Logstash will run with up to 15000 events in memory in your current configuration. If your message sizes are large this may cause instability with the default heap size. Please consider setting a non-standard heap size, changing the batch size (currently 1000), or changing the number of pipeline workers (currently 15)", :level=>:warn}
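The 15000 figure in the warning is the pipeline worker count multiplied by the batch size (15 × 1000). On Logstash 2.x both can be set with startup flags, for example via LS_OPTS in /etc/sysconfig/logstash; the values below only illustrate the knobs, they are not a recommendation:

# -w sets the number of pipeline workers, -b the per-worker batch size.
# Max inflight events is roughly workers × batch size (8 × 500 = 4000 here),
# which keeps fewer events in memory than the 15 × 1000 in the warning.
LS_OPTS="-w 8 -b 500"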

I'd reduce the ES batch size, larger is not always better in this instance.

Hello warkolm

Where can I modify the ES batch size?

https://www.elastic.co/guide/en/logstash/current/plugins-outputs-elasticsearch.html#plugins-outputs-elasticsearch-flush_size
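flush_size is an option on the elasticsearch output plugin itself, so it goes inside the output block of your Logstash config. A minimal sketch (the host is a placeholder):

output {
  elasticsearch {
    hosts => ["localhost:9200"]
    flush_size => 1000   # events per bulk request to Elasticsearch
  }
}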

Hi Warkolm

I read the doc but I did not find where to modify the ES batch size. Could you help me?

How many workers do you have configured for the Elasticsearch output? How many workers is Logstash running with?

2048 for each.

2048?? That seems extremely excessive and inefficient. I believe a good starting point is to set it to the number of cores on the host, and then slowly increase it until you see no further improvement in throughput. That should land you at a considerably smaller number...

I would also recommend starting with a smaller batch and flush size, e.g. 1000, and then gradually increase this as long as it improves throughput.


Hello

The more workers I add to Logstash and the Elasticsearch output, the more data I send.

Here is my filebeat log:
2016-10-06T18:02:43+02:00 INFO Events sent: 2048
2016-10-06T18:02:43+02:00 INFO Registry file updated. 2 states written.
2016-10-06T18:04:05+02:00 INFO Events sent: 2048
2016-10-06T18:04:05+02:00 INFO Registry file updated. 2 states written.
2016-10-06T18:06:09+02:00 INFO Events sent: 2048
2016-10-06T18:06:09+02:00 INFO Registry file updated. 2 states written.
2016-10-06T18:07:02+02:00 INFO Events sent: 2048
2016-10-06T18:07:02+02:00 INFO Registry file updated. 2 states written.
2016-10-06T18:08:13+02:00 INFO Events sent: 2048
2016-10-06T18:08:13+02:00 INFO Registry file updated. 2 states written.

Here is my data flow on the Logstash server.

Are you talking about batch size or actual input and filter worker threads? Can you share the Logstash config?

Actually @clement, please make a new thread.

/etc/sysconfig/logstash
LS_HOME=/var/lib/logstash
LS_OPTS="-w 8"
LS_OPEN_FILES=163840
LS_NICE=19
KILL_ON_STOP_TIMEOUT=0

/etc/logstash/conf.d/output_elasticsearch.conf
output {
  elasticsearch {
    hosts => ["node1:9200","node2:9200","node3:9200"]
    index => "logstash-%{type}-%{+YYYY.MM.dd}"
    flush_size => 500000
  }
}

OK, so you are using 8 worker threads, not 2048. That is much more reasonable. The flush size is, however, excessive and most likely inefficient. I would recommend lowering it to 1000 or 5000 and setting the workers parameter in the Elasticsearch output plugin to 8 as well, in order to use more connections to Elasticsearch in parallel. At the moment you are just sending very large bulk requests across a single connection, which is not very efficient. Start with that as a baseline and then gradually tune the batch size until you see no further improvement in throughput.
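Concretely, that suggestion would look something like this in output_elasticsearch.conf, keeping the existing hosts and index settings:

output {
  elasticsearch {
    hosts => ["node1:9200","node2:9200","node3:9200"]
    index => "logstash-%{type}-%{+YYYY.MM.dd}"
    flush_size => 5000   # down from 500000
    workers => 8         # parallel bulk connections, matching -w 8
  }
}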

Hi

I modified my conf like this

flush_size => 5000
workers => 8

but my transfer is still slow for application logs:
2016-10-07T13:54:17+02:00 INFO Registry file updated. 2 states written.
2016-10-07T13:56:26+02:00 INFO Events sent: 2048
2016-10-07T13:56:26+02:00 INFO Registry file updated. 2 states written.
2016-10-07T13:58:43+02:00 INFO Events sent: 2048
2016-10-07T13:58:43+02:00 INFO Registry file updated. 2 states written.
2016-10-07T14:01:03+02:00 INFO Events sent: 2048
2016-10-07T14:01:03+02:00 INFO Registry file updated. 2 states written.
2016-10-07T14:03:16+02:00 INFO Events sent: 2048
2016-10-07T14:03:16+02:00 INFO Registry file updated. 2 states written.
2016-10-07T14:05:41+02:00 INFO Events sent: 2048
2016-10-07T14:05:41+02:00 INFO Registry file updated. 2 states written.
2016-10-07T14:07:43+02:00 INFO Events sent: 2048
2016-10-07T14:07:43+02:00 INFO Registry file updated. 2 states written.
2016-10-07T14:09:34+02:00 INFO Events sent: 2048
2016-10-07T14:09:34+02:00 INFO Registry file updated. 2 states written.
2016-10-07T14:11:29+02:00 INFO Events sent: 2048
2016-10-07T14:11:29+02:00 INFO Registry file updated. 2 states written.
2016-10-07T14:13:40+02:00 INFO Events sent: 2048
2016-10-07T14:13:40+02:00 INFO Registry file updated. 2 states written.
2016-10-07T14:15:43+02:00 INFO Events sent: 2048
2016-10-07T14:15:43+02:00 INFO Registry file updated. 2 states written.
2016-10-07T14:17:49+02:00 INFO Events sent: 2048
2016-10-07T14:17:49+02:00 INFO Registry file updated. 2 states written.
2016-10-07T14:19:48+02:00 INFO Events sent: 2048
2016-10-07T14:19:48+02:00 INFO Registry file updated. 2 states written.
2016-10-07T14:21:33+02:00 INFO Events sent: 2048
2016-10-07T14:21:33+02:00 INFO Registry file updated. 2 states written.
2016-10-07T14:23:59+02:00 INFO Events sent: 2048
2016-10-07T14:23:59+02:00 INFO Registry file updated. 2 states written.
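A side note on the Filebeat log above: the constant "Events sent: 2048" matches Filebeat 1.x's default spool_size, so the shipper itself may also be limiting throughput, not just Logstash. A sketch of the relevant filebeat.yml settings (host and values are placeholders, not recommendations):

filebeat:
  spool_size: 2048      # events buffered before a flush (1.x default)
  idle_timeout: 5s      # flush even when the spool is not full
output:
  logstash:
    hosts: ["logstash-host:5044"]
    bulk_max_size: 2048 # events per batch sent to Logstash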