Out of Memory at Beats Input

Hi,
From time to time we get this error with our logstashes:

[2025-03-19T16:13:36,640][INFO ][org.logstash.beats.BeatsHandler][beats-input] [local: 192.168.16.76:6044, remote: 192.168.15.127:52442] Handling exception: java.lang.OutOfMemoryError: Cannot reserve 67108864 bytes of direct buffer memory (allocated: 14026138464, limit: 14038335488) (caused by: java.lang.OutOfMemoryError: Cannot reserve 67108864 bytes of direct buffer memory (allocated: 14026138464, limit: 14038335488))

When this error occurs, ingestion stops but Logstash keeps running. Is this a known issue? We are on version 8.16.3.

Thx
D

Maybe. Take a look at this issue (and the unmerged PR it links to).

If I understand it correctly (which I may not), it can happen when Logstash cannot keep up with the traffic that the beats are sending.

Instead of applying backpressure to slow down the senders, Netty just keeps growing direct memory buffers and runs OOM.

Other issues that link to that one suggest that adjusting pipelining in the beats client can help. And yes, I understand why that might not be a helpful suggestion.
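For anyone who does want to try it, here is a sketch of what that adjustment might look like on the Filebeat side (the hostname is a placeholder, and the right values depend entirely on your traffic; setting pipelining to 0 makes Filebeat wait for an ACK before sending the next batch, which lets backpressure actually reach the sender):

```yaml
# filebeat.yml (sketch, not a recommendation for specific values)
output.logstash:
  hosts: ["logstash.example.com:5044"]   # placeholder host
  # Disable pipelining: wait for the ACK of each batch before
  # sending the next, so a slow Logstash slows the sender down
  # instead of piling unacknowledged batches into Netty buffers.
  pipelining: 0
  # Optionally cap the batch size to limit per-connection memory.
  bulk_max_size: 1024
```

The trade-off is throughput: with pipelining disabled, each connection spends more time waiting on round trips.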


That reply was helpful, and I've set pipeline.buffer.type to heap. It has improved matters, but I'm still seeing odd log entries from time to time, such as this:

[2025-04-15T02:35:11,517][WARN ][io.netty.channel.DefaultChannelPipeline][main][beats-input] An exceptionCaught() event was fired, and it reached at the tail of the pipeline. It usually means the last handler in the pipeline did not handle the exception.
java.lang.OutOfMemoryError: Java heap space
	at io.netty.util.internal.PlatformDependent.allocateUninitializedArray(PlatformDependent.java:326) ~[netty-common-4.1.115.Final.jar:4.1.115.Final]
	at io.netty.buffer.PoolArena$HeapArena.newByteArray(PoolArena.java:628) ~[netty-buffer-4.1.115.Final.jar:4.1.115.Final]
	at io.netty.buffer.PoolArena$HeapArena.newChunk(PoolArena.java:647) ~[netty-buffer-4.1.115.Final.jar:4.1.115.Final]
	at io.netty.buffer.PoolArena.allocateNormal(PoolArena.java:213) ~[netty-buffer-4.1.115.Final.jar:4.1.115.Final]
	at io.netty.buffer.PoolArena.tcacheAllocateNormal(PoolArena.java:195) ~[netty-buffer-4.1.115.Final.jar:4.1.115.Final]
	at io.netty.buffer.PoolArena.allocate(PoolArena.java:137) ~[netty-buffer-4.1.115.Final.jar:4.1.115.Final]
	at io.netty.buffer.PoolArena.reallocate(PoolArena.java:317) ~[netty-buffer-4.1.115.Final.jar:4.1.115.Final]
	at io.netty.buffer.PooledByteBuf.capacity(PooledByteBuf.java:123) ~[netty-buffer-4.1.115.Final.jar:4.1.115.Final]
	at org.logstash.beats.V2Batch.addMessage(V2Batch.java:103) ~[logstash-input-beats-6.9.1.jar:?]
	at org.logstash.beats.BeatsParser.decode(BeatsParser.java:202) ~[logstash-input-beats-6.9.1.jar:?]
	at org.logstash.beats.BeatsParser.lambda$decode$0(BeatsParser.java:191) ~[logstash-input-beats-6.9.1.jar:?]
	at org.logstash.beats.BeatsParser$$Lambda/0x00000002024974d0.accept(Unknown Source) ~[?:?]
	at org.logstash.beats.BeatsParser.inflateCompressedFrame(BeatsParser.java:223) ~[logstash-input-beats-6.9.1.jar:?]
	at org.logstash.beats.BeatsParser.decode(BeatsParser.java:185) ~[logstash-input-beats-6.9.1.jar:?]
	at io.netty.handler.codec.ByteToMessageDecoder.decodeRemovalReentryProtection(ByteToMessageDecoder.java:530) ~[netty-codec-4.1.115.Final.jar:4.1.115.Final]
	at io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:469) ~[netty-codec-4.1.115.Final.jar:4.1.115.Final]
	at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:290) ~[netty-codec-4.1.115.Final.jar:4.1.115.Final]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) ~[netty-transport-4.1.115.Final.jar:4.1.115.Final]
	at io.netty.channel.AbstractChannelHandlerContext.access$600(AbstractChannelHandlerContext.java:61) ~[netty-transport-4.1.115.Final.jar:4.1.115.Final]
	at io.netty.channel.AbstractChannelHandlerContext$7.run(AbstractChannelHandlerContext.java:425) ~[netty-transport-4.1.115.Final.jar:4.1.115.Final]
	at io.netty.util.concurrent.AbstractEventExecutor.runTask(AbstractEventExecutor.java:173) ~[netty-common-4.1.115.Final.jar:4.1.115.Final]
	at io.netty.util.concurrent.DefaultEventExecutor.run(DefaultEventExecutor.java:66) ~[netty-common-4.1.115.Final.jar:4.1.115.Final]
	at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) ~[netty-common-4.1.115.Final.jar:4.1.115.Final]
	at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) ~[netty-common-4.1.115.Final.jar:4.1.115.Final]
	at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ~[netty-common-4.1.115.Final.jar:4.1.115.Final]
	at java.lang.Thread.runWith(Thread.java:1596) ~[?:?]
	at java.lang.Thread.run(Thread.java:1583) ~[?:?]

or this:

[2025-04-15T09:14:25,771][INFO ][org.logstash.beats.BeatsHandler][main][beats-input] [local: 192.168.77.138:5044, remote: 192.168.71.141:55178] Handling exception: java.lang.OutOfMemoryError: Cannot reserve 67108864 bytes of direct buffer memory (allocated: 1409539250, limit: 1429209088) (caused by: java.lang.OutOfMemoryError: Cannot reserve 67108864 bytes of direct buffer memory (allocated: 1409539250, limit: 1429209088))

Given that the beats input should no longer be using direct memory, this is concerning. The error also points at Netty. Is this a problem in Netty itself, or in how the beats input is using Netty?

I wonder if -Dio.netty.leakDetection.level=advanced would throw any light on what is happening.
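One way to try that (a sketch; the path depends on how Logstash was installed) is to append the flag to Logstash's config/jvm.options and restart:

```
## config/jvm.options (append, then restart Logstash)
# "advanced" records recent access points for buffers that are
# garbage-collected without being released. It samples only a
# small fraction of allocations, so overhead is modest but nonzero.
-Dio.netty.leakDetection.level=advanced
```

It may also be worth noting that with pipeline.buffer.type set to heap, the beats input's Netty buffers now count against -Xmx, so the heap-space OOM above could simply mean the heap needs to be sized larger than before.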