What happens to events that are not processed by any output

What happens to events that are not written by any output plugin?
The reason I'm asking is that I have a Logstash instance that has been running for a couple of days, and for no apparent reason (no increased load) it dies with an out-of-memory error (Java heap).
The only explanation I can see (apart from a memory leak) is that I'm not explicitly discarding events (drop filter) and only write specific ones to an output (I do have an if clause in my output section).
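Roughly, the output section is shaped like this (the condition, field name, and output plugin below are placeholders for illustration, not my actual config):

output {
  if [type] == "wanted" {            # placeholder condition - only some events match
    stdout { codec => rubydebug }    # placeholder output - the real one is different
  }
  # events that do not match the condition are not written anywhere
}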

Would Logstash keep them somewhere if they are not written to an output, potentially causing an OOM?

Thanks,
R

All events go out of scope after the output block and are (or should be) garbage collected by the Java VM, freeing up their memory. This happens regardless of whether they were sent (copied, in effect) to a downstream destination.

Each output has its own retry mechanism - if an event sits in an output's retry queue, the event is still referenced and the JVM does not garbage collect it.

Outputs like elasticsearch will not accept new events while there are events in the retry queue. This has a back-pressure effect that pushes back all the way to the inputs - they stop accepting new events from upstream.

Some codecs and filters have buffers in them; these accumulate data or events while waiting for some condition that allows the buffered content to be released.

You may have a burst of very big events. Due to deserialization or serialization, such events can temporarily double or triple the amount of memory they consume. Each pipeline has batch-size (125) * workers (CPU cores, e.g. 8) events in flight. If you have 1024 MB (1 GB) of memory assigned to the JVM, then with 1000 (125 * 8) events in flight each event can only be about 1 MB in size - in practice less, because Logstash itself uses memory too, so in practice even 500 KB per event is too big.
It's not normal for all 1000 in-flight events to be that large, though.
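For illustration, the in-flight numbers above come from these settings (shown here with the default/example values mentioned; your actual files may differ):

# logstash.yml
pipeline.workers: 8        # defaults to the number of CPU cores
pipeline.batch.size: 125   # events per worker batch (default)
# maximum in-flight events = 8 * 125 = 1000

# jvm.options
-Xms1g
-Xmx1g                     # 1024 MB / 1000 events ≈ 1 MB per event, less in practice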

Thanks for confirming, that is about the behaviour I expected. Nevertheless, I added an explicit drop() filter to my filter chain at the point where I can decide to drop events (earlier discard).
I also switched to persisted queues, and looking at the Logstash stats API I never get above 20% heap usage (with a 2 GB heap). The persisted queue size is also rather low (50 MB page size and 250 MB overall size).
Let's see how that works out...
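For reference, the queue settings I'm using look roughly like this in logstash.yml (sizes as mentioned above):

# logstash.yml
queue.type: persisted
queue.page_capacity: 50mb   # page size
queue.max_bytes: 250mb      # overall queue size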

Be aware though: drop only "cancels" the event to exclude it from further processing - it does not go out of scope and get GC'd earlier than the rest of the events in the batch. This is because, even though it is cancelled, it still needs to be acked in the persistent queue (the last action taken on an event before it goes out of scope).

Interesting, thanks for the hint! So dropping early just saves further processing cycles in the filter chain.

That doesn't explain the behaviour I'm seeing with the persistent queue, though:
Logstash keeps running, the queue fills up (checked via the monitoring API), and at some point Logstash stalls: it keeps running but no longer processes anything and no longer accepts new connections - I'm using the http input plugin with a protobuf codec for decoding.
At this point Logstash (5.5.2) is dead and no longer processing, though it still responds to API calls. During execution it created page files and rotated/GC'd them (page.1 - page.5), so it seems events were acked. When it stalls, the page file/queue is not even full.

Now I'm wondering:
a) whether something in my filter chain prevents acking/GC'ing of events (leading to the OOM or to the stalled processing/back-pressure on the input);
b) how a persistent queue works if the input uses the protobuf codec over http and then immediately splits the message into multiple smaller ones, some of which get dropped and some not - how would Logstash decide to ack the original message (which is a batch of the split and individually processed ones)?
c) whether I should go with staged processing instead: a first stage with just decoding and splitting (no persisted queue) that outputs to another Logstash instance (with persisted queues) for processing of the split events.

Thanks for your thoughts!
Reinhard

With the PQ (persistent queue) in place, you should think of the inputs as being completely cut off from the filters + outputs.

In theory, one could fire up an input with a PQ, simply fill the PQ with events, and stop the input - then start a set of filters and outputs, drain the queue, and stop.

With the above in mind...

  • Q. What stops the inputs from receiving more and more events?
  • A. The PQ being full puts the brake on the inputs: the inputs run in a loop, and when that loop reaches the PQ write step, the thread waits if the PQ is full.
  • Q. What stops the http input (in server mode) from accepting unlimited connections?
  • A. The http input uses a thread pool of 4 threads to process incoming connections from one TCP server (see the sketch after this list). The TCP server accepts at most 1024 connections before refusing new ones. The HTTP server runs a loop that selects the next TCP server socket and adds the wrapped socket to the thread pool - this add blocks if all 4 threads are blocked on the PQ.
  • Q. What stops the PQ from being drained?
  • A. When all worker threads are stuck either in a very slow filter (perhaps one that does external IO) or in an output that is retrying IO operations, the workers cannot ack events in the PQ. Acking events is what reduces the size of the PQ - reading events by itself does not.
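As a rough sketch of the input side described above (the port and thread count shown are the plugin defaults, used here purely for illustration - your config may differ):

input {
  http {
    port => 8080    # default port of the http input
    threads => 4    # request-handling thread pool
    # each of these threads blocks on the PQ write when the queue is full;
    # once all of them block, no new connections are accepted
  }
}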

Hope this helps.

You need to take a few Java stack traces near the time of the OOM. I can help you interpret them.

Any splitting of events into smaller new events happens after the PQ. Usually the original message is cancelled. A Batch object tracks which events are the originals (which must be acked even if cancelled) and which were generated after the PQ. This allows the original event to be replayed if it was never acked - the new events will be generated again because they were never persisted.
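As a hedged sketch of such a setup (the split filter and field name here are just one way the splitting might be configured - the actual pipeline may use a different filter):

filter {
  split {
    field => "events"   # placeholder: array field whose entries become new events
  }
  # split cancels the original (persisted) event and generates one new event per
  # array entry; the new events are never persisted, so a replay re-creates them
}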

Splitting your LS into two separate configs is not really the answer - understanding why the filter + output stages stall is.

Thanks for confirming what I thought about how a persistent queue works. I checked my filters and their stats; none of them seems to be a problem, especially since they work flawlessly and with good performance for quite some time.

What I did notice though - and it fits your description of back-pressure and the limit of threads/sockets - is that when Logstash stops working I have lots of open sockets. The output in my case (to InfluxDB) is not blocking. But when I try to stop Logstash I get lots of messages like this in the log:

[2017-10-02T20:48:58,442][WARN ][logstash.shutdownwatcher ] {"inflight_count"=>0, "stalling_thread_info"=>{"other"=>[{"thread_id"=>24, "name"=>"[main]<http", "current_call"=>"[...]/vendor/bundle/jruby/1.9/gems/logstash-input-http-3.0.5/lib/logstash/inputs/http.rb:168:in `join'"}], ["LogStash::Filters::Drop", {"id"=>"15c34a8f248d3b1cdf49920834969b014bf6fe99-14"}]=>[{"thread_id"=>20, "name"=>"[main]>worker0", "current_call"=>"[...]/logstash-core/lib/logstash/util/wrapped_acked_queue.rb:214:in `lock'"}, {"thread_id"=>21, "name"=>"[main]>worker1", "current_call"=>"[...]/logstash-core/lib/logstash/util/wrapped_acked_queue.rb:214:in `lock'"}, {"thread_id"=>22, "name"=>"[main]>worker2", "current_call"=>"[...]/logstash-core/lib/logstash/util/wrapped_acked_queue.rb:214:in `lock'"}, {"thread_id"=>23, "name"=>"[main]>worker3", "current_call"=>"[...]/logstash-core/lib/logstash/util/wrapped_acked_queue.rb:188:in `lock'"}]}}

There is one entry there that makes me curious: the stalling thread info mentions "LogStash::Filters::Drop".
Does this mean the threads are blocked by exactly the drop filter?

I'm now running without PQs and it has been running significantly longer (8 h) without issues, whereas with PQs Logstash stopped after 7 h (and at much lower load). I'm waiting for the OOM death and will take a heap dump to analyze.

How many workers do you have running?

I have reformatted the shutdown_watcher log message.

{
  "inflight_count"=>0,
  "stalling_thread_info"=>{
    "other"=>[
      {"thread_id"=>24, "name"=>"[main]<http", "current_call"=>"[...]/vendor/bundle/jruby/1.9/gems/logstash-input-http-3.0.5/lib/logstash/inputs/http.rb:168:in `join'"}
    ], 
    ["LogStash::Filters::Drop", {"id"=>"15c34a8f248d3b1cdf49920834969b014bf6fe99-14"}]=>[
      {"thread_id"=>20, "name"=>"[main]>worker0", "current_call"=>"[...]/logstash-core/lib/logstash/util/wrapped_acked_queue.rb:214:in `lock'"},
      {"thread_id"=>21, "name"=>"[main]>worker1", "current_call"=>"[...]/logstash-core/lib/logstash/util/wrapped_acked_queue.rb:214:in `lock'"},
      {"thread_id"=>22, "name"=>"[main]>worker2", "current_call"=>"[...]/logstash-core/lib/logstash/util/wrapped_acked_queue.rb:214:in `lock'"},
      {"thread_id"=>23, "name"=>"[main]>worker3", "current_call"=>"[...]/logstash-core/lib/logstash/util/wrapped_acked_queue.rb:188:in `lock'"}
    ]
  }
}

The "LogStash::Filters::Drop" bit is misleading - its simply the last filter that the thread was known to have called.

What is interesting, though, is the line each thread (id=20 to 23) is blocked on.
Those on line 214 of wrapped_acked_queue.rb are waiting on the lock to ack a batch.
The one on line 188 of the same file is waiting on the lock to read a batch.
Ironically, the pipeline reporter (via the shutdown watcher) that builds the stalling_thread info is not helping the situation: it takes the very lock that the threads then report to be waiting on!

Also, looking at the http thread (id=24): it is stuck on join at logstash/inputs/http.rb:168, and that thread is stuck in a forever loop - even though stop has been called, the loop does not exit if the http thread pool is entirely stuck on a full PQ write.

However, the filters and outputs do not seem to be deadlocked, because if they were, the shutdown watcher would fail to acquire the lock it needs to build the log line that you are getting so many of.

Some additional PQ concurrency bugs were fixed in 5.6.2; you should upgrade.

Thanks again for the insight and analysis! The philosophers are dining it seems :slight_smile:
I do have 4 worker threads (4 CPUs).
Meanwhile I have been running the configuration without PQ stably for 5 days, so I can assume there are no load spikes causing any malfunction. I will try to upgrade to 5.6.2 as soon as possible and see how that works with the PQ.

:star:
