LS 5.4.3 w/ lumberjack input plugin - very slow processing of events


We're upgrading to the latest Logstash with the lumberjack input plugin, but it seems to be processing events extremely slowly compared to the old LS version (1.5.x).

The setup is quite simple:

logstash-shipper (1.5.x, to be upgraded once the indexers are) sends logs via the lumberjack output plugin => logstash-indexer 5.4.3 with the lumberjack input plugin, where various filters are applied => elasticsearch output plugin to ES 5.1.0.
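For reference, the indexer side looks roughly like this (a minimal sketch; the port, certificate paths, and ES address are placeholders, and our actual filters are omitted):

```conf
input {
  lumberjack {
    port => 5043                                          # placeholder port
    ssl_certificate => "/etc/pki/tls/certs/logstash.crt"  # lumberjack requires SSL
    ssl_key => "/etc/pki/tls/private/logstash.key"
  }
}

filter {
  # various filters applied here (grok, mutate, etc.)
}

output {
  elasticsearch {
    hosts => ["http://es-host:9200"]  # placeholder ES 5.1.0 address
  }
}
```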

Has anyone had a similar experience?

I've already ruled out system resource starvation: CPU and memory are fine (hardly any CPU usage, memory is stable), and there are no other network, CPU, or memory related bottlenecks. We don't use disk queuing (only in-memory).


The Logstash shipper is no longer supported; we'd strongly suggest you move to Filebeat :slight_smile:

But Filebeat doesn't support filters (last I looked), and we need filters applied client-side before events are shipped off to be indexed: specifically adding/removing tags and fields, adding environment information (logstash-filter-environment), and of course pattern matching...
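For what it's worth, Filebeat can at least attach static tags and fields at the source (a sketch in the 5.x config format; the path and values are placeholders), though it can't run Logstash filters such as grok or logstash-filter-environment:

```yaml
filebeat.prospectors:
  - input_type: log
    paths:
      - /var/log/app/*.log        # placeholder path
    tags: ["app", "production"]   # static tags added to every event
    fields:
      env: production             # static field added to every event
    fields_under_root: true       # put custom fields at the event's top level
```

Dynamic enrichment and pattern matching would still have to happen on the indexer side.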

What are others using to replace all that functionality?


I think Mark might've been conflating the lumberjack plugins for Logstash with logstash-forwarder (previously known as Lumberjack).

What kind of event rates are you seeing?

Hi Magnus,

I'll collect some concrete numbers, but what's really confusing me is that I don't see much CPU being used at all; it's as if something is slowing down the processing of incoming events.

When I watched the indexers, they were accepting something on the order of ~30 events/s, while the "old" LS seems to have been doing ~600-900/s.

The rate at which LS 5.4.3 processes events goes up quickly after a start/restart, but then drops back down to the usual ~100-150 events per 5 s.

Is there anything we can "switch on" to see where in the pipeline(s) the bottleneck is? It certainly doesn't seem to be system resource related, so there must be some tweaking I need to do on LS to get the old numbers back.


Well, you could try bumping the log level to debug. I'm not sure it'll give you any more clues, but it's worth a shot. Apart from that I'm not sure what's up. I've been using the lumberjack pair of plugins on Logstash 2.3.4 for a while and haven't noticed any slow processing. On the other hand, I haven't measured the throughput, but I think I would've noticed 30 events/s anyway.
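As a side note, Logstash 5.x also exposes a monitoring API (on port 9600 by default) whose per-plugin event counts and durations can help locate a pipeline bottleneck. Assuming the default API host and port, something like:

```shell
# Per-plugin event stats for the pipeline (Logstash 5.x monitoring API)
curl -s 'http://localhost:9600/_node/stats/pipeline'

# Hot threads can also show where the worker threads spend their time
curl -s 'http://localhost:9600/_node/hot_threads'
```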

Is it then?

Hi Magnus,

So we did some more testing yesterday. It may (or may not) have something to do with the AWS ELB, which does simple round-robin on TCP connections to the port the lumberjack input listens on. However, we weren't able to confirm that: even with just one host behind the ELB, the rate didn't go past ~1000 events/s. With the "old" (1.5.x) version we seem to be able to get about ~6-8000 events/s (calculated back from the logs ending up in Logstash per minute, divided by the number of indexers), and that works well with the ELB.

We changed the logging to debug & trace, but there was little to nothing that would point us at what's wrong. We tweaked some settings and found that the following performed OK-ish (these are the settings where we were able to see ~1k events/s):

  • pipeline.workers: 4
  • pipeline.output.workers: 4
  • pipeline.batch.size: 1024
  • pipeline.batch.delay: 100
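In logstash.yml form, the settings above would look like this (a sketch for 5.x; note that `pipeline.output.workers` was later removed, with output workers configured on the output plugin instead):

```yaml
# logstash.yml (Logstash 5.x)
pipeline.workers: 4          # worker threads running filters + outputs
pipeline.output.workers: 4   # output workers (5.x only)
pipeline.batch.size: 1024    # events collected per worker batch
pipeline.batch.delay: 100    # ms to wait before dispatching an undersized batch
```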

What seems very strange is that as soon as we add a second node with an identical configuration to the ELB, we're back to a very, very low rate of events/s being processed.

We've decided to do some cleanup in the process of debugging this. At this point we're going to drop lumberjack output + input altogether and are looking at using the http plugins instead, as HTTP plays better with AWS ELBs (or any load balancer, really). That might help or... not :slight_smile:, we shall see.
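A minimal sketch of what that HTTP-based replacement could look like (untested on our side; the URL and port are placeholders):

```conf
# Shipper side: logstash-output-http, pointed at the ELB
output {
  http {
    url => "http://my-elb.example.com:8080"  # placeholder ELB address
    http_method => "post"
    format => "json"
  }
}

# Indexer side: logstash-input-http, behind the ELB
input {
  http {
    port => 8080  # placeholder; the ELB forwards to this port
  }
}
```

Since HTTP requests are short-lived, the ELB can balance per request rather than pinning long-lived lumberjack TCP connections to a single backend.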

If I have some time I'll do some more digging into what happens when I put Logstash "indexers" behind an ELB and why that has such a negative effect...


This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.