Hello,
I'm trying to set up Logstash to receive logs from Elastic Agents, with output to Elasticsearch.
Everything mostly works and Elasticsearch receives the logs correctly, but on the Logstash side I'm flooded with warnings like the following:
[2024-08-01T16:09:48,614][INFO ][logstash.inputs.beats ][main] Starting input listener {:address=>"0.0.0.0:5044"}
[2024-08-01T16:09:48,970][INFO ][logstash.javapipeline ][main] Pipeline started {"pipeline.id"=>"main"}
[2024-08-01T16:09:48,999][INFO ][logstash.agent ] Pipelines running {:count=>1, :running_pipelines=>[:main], :non_running_pipelines=>[]}
[2024-08-01T16:09:49,014][INFO ][org.logstash.beats.Server][main][241f6d079098eb5c69533228b8a893a17ac389de7e01f54e163f16eaeb49495e] Starting server on port: 5044
[2024-08-01T16:09:51,936][INFO ][org.logstash.beats.BeatsHandler][main][241f6d079098eb5c69533228b8a893a17ac389de7e01f54e163f16eaeb49495e] [local: 10.100.1.114:5044, remote: 10.100.1.20:56416] Handling exception: java.net.SocketException: Connection reset (caused by: java.net.SocketException: Connection reset)
[2024-08-01T16:09:51,937][WARN ][io.netty.channel.DefaultChannelPipeline][main][241f6d079098eb5c69533228b8a893a17ac389de7e01f54e163f16eaeb49495e] An exceptionCaught() event was fired, and it reached at the tail of the pipeline. It usually means the last handler in the pipeline did not handle the exception.
java.net.SocketException: Connection reset
at sun.nio.ch.SocketChannelImpl.throwConnectionReset(SocketChannelImpl.java:394) ~[?:?]
at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:426) ~[?:?]
at io.netty.buffer.PooledByteBuf.setBytes(PooledByteBuf.java:255) ~[netty-buffer-4.1.109.Final.jar:4.1.109.Final]
at io.netty.buffer.AbstractByteBuf.writeBytes(AbstractByteBuf.java:1132) ~[netty-buffer-4.1.109.Final.jar:4.1.109.Final]
at io.netty.channel.socket.nio.NioSocketChannel.doReadBytes(NioSocketChannel.java:357) ~[netty-transport-4.1.109.Final.jar:4.1.109.Final]
at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:151) ~[netty-transport-4.1.109.Final.jar:4.1.109.Final]
at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) ~[netty-transport-4.1.109.Final.jar:4.1.109.Final]
at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) ~[netty-transport-4.1.109.Final.jar:4.1.109.Final]
at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) ~[netty-transport-4.1.109.Final.jar:4.1.109.Final]
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) [netty-transport-4.1.109.Final.jar:4.1.109.Final]
at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) [netty-common-4.1.109.Final.jar:4.1.109.Final]
at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) [netty-common-4.1.109.Final.jar:4.1.109.Final]
at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) [netty-common-4.1.109.Final.jar:4.1.109.Final]
at java.lang.Thread.run(Thread.java:840) [?:?]
I'm fairly sure the problem is in the input phase (so the beats plugin): I've commented out the filters and the output, but the connection resets persist.
My Logstash pipeline config (input):
input {
  elastic_agent {
    port => 5044
    ssl => true
    ssl_certificate_authorities => ["/certs/ca/ca.crt"]
    ssl_certificate => "/certs/logstash/logstash.crt"
    ssl_key => "/certs/logstash/logstash.pkcs8.key"
    ssl_verify_mode => "force_peer"
  }
}
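With the filters and the Elasticsearch output commented out, the test pipeline is essentially just the input above plus a minimal stdout output so Logstash still has somewhere to send events (roughly, not my exact file):

output {
  # temporary debug output while testing, instead of Elasticsearch
  stdout { codec => rubydebug }
}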
Between the agents and the Logstash server there are no firewalls, load balancers, etc.
A similar problem is described in a GitHub issue -> Github Issue
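One test I'm considering, in case the mandatory client-certificate check itself is what's dropping the connections, is to temporarily relax the verification on the input (just a sketch of the idea, same paths as above, not something I've confirmed helps):

input {
  elastic_agent {
    port => 5044
    ssl => true
    ssl_certificate_authorities => ["/certs/ca/ca.crt"]
    ssl_certificate => "/certs/logstash/logstash.crt"
    ssl_key => "/certs/logstash/logstash.pkcs8.key"
    # "peer" asks for a client certificate but does not require one,
    # unlike "force_peer", which closes the connection if none is sent
    ssl_verify_mode => "peer"
  }
}

If the warnings stop with "peer", that would point at the agents' client certificates rather than the network.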
How can I troubleshoot what is going wrong?
Thanks in advance for the help