Logstash on K8S spammed with Connection Reset By Peer

I have deployed logstash on a remote Kubernetes cluster and created a Service for it that exposes ports 5044 and 9600. I then use filebeat on my local machine to ship logs to the Service IP.
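For context, the Service looks roughly like the sketch below. This is an illustration, not my actual manifest: the names, selector labels, and Service type are placeholders.

```yaml
# Hypothetical sketch of the Service exposing logstash.
# Names, labels, and the Service type are placeholders.
apiVersion: v1
kind: Service
metadata:
  name: logstash
spec:
  selector:
    app: logstash        # must match the logstash pod labels
  ports:
    - name: beats
      port: 5044         # Beats input
      targetPort: 5044
    - name: monitoring
      port: 9600         # logstash monitoring API
      targetPort: 9600
  type: LoadBalancer     # assumption: something must expose the Service IP externally
```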

Filebeat seems to be working: when I run cat filebeat | grep logstash to inspect my filebeat logs, they look like this:

2019-06-01T15:28:00.032+0800	DEBUG	[logstash]	logstash/async.go:159	1 events out of 1 events sent to logstash host xx.xx.xxx.xx:5044. Continue sending
2019-06-01T15:29:15.058+0800	DEBUG	[logstash]	logstash/async.go:159	1 events out of 1 events sent to logstash host xx.xx.xxx.xx:5044. Continue sending
2019-06-01T15:29:15.058+0800	DEBUG	[logstash]	logstash/async.go:116	close connection
2019-06-01T15:29:15.058+0800	ERROR	logstash/async.go:256	Failed to publish events caused by: write tcp xx.xx.xxx.xx:62119->xx.xx.xxx.xx:5044: write: broken pipe
2019-06-01T15:29:15.058+0800	DEBUG	[logstash]	logstash/async.go:116	close connection
2019-06-01T15:29:16.448+0800	DEBUG	[logstash]	logstash/async.go:111	connect
2019-06-01T15:29:16.484+0800	DEBUG	[logstash]	logstash/async.go:159	1 events out of 1 events sent to logstash host xx.xx.xxx.xx:5044. Continue sending
2019-06-01T15:29:20.065+0800	DEBUG	[logstash]	logstash/async.go:159	1 events out of 1 events sent to logstash host xx.xx.xxx.xx:5044. Continue sending
2019-06-01T15:29:40.076+0800	DEBUG	[logstash]	logstash/async.go:159	1 events out of 1 events sent to logstash host xx.xx.xxx.xx:5044. Continue sending
2019-06-01T15:29:45.078+0800	DEBUG	[logstash]	logstash/async.go:159	1 events out of 1 events sent to logstash host xx.xx.xxx.xx:5044. Continue sending
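For completeness, the logstash output section of my filebeat.yml is essentially the stock one; the host below stands in for the Service IP, redacted the same way as in the logs above.

```yaml
# filebeat.yml (output section only); xx.xx.xxx.xx is the redacted Service IP
output.logstash:
  hosts: ["xx.xx.xxx.xx:5044"]
```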

However, my logstash logs are littered with this:

[2019-06-01T07:31:45,931][INFO ][org.logstash.beats.BeatsHandler] [local: 0.0.0.0:5044, remote: undefined] Handling exception: Connection reset by peer
[2019-06-01T07:31:45,931][WARN ][io.netty.channel.DefaultChannelPipeline] An exceptionCaught() event was fired, and it reached at the tail of the pipeline. It usually means the last handler in the pipeline did not handle the exception.
java.io.IOException: Connection reset by peer
	at sun.nio.ch.FileDispatcherImpl.read0(Native Method) ~[?:1.8.0_212]
	at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:39) ~[?:1.8.0_212]
	at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:223) ~[?:1.8.0_212]
	at sun.nio.ch.IOUtil.read(IOUtil.java:192) ~[?:1.8.0_212]
	at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:380) ~[?:1.8.0_212]
	at io.netty.buffer.PooledUnsafeDirectByteBuf.setBytes(PooledUnsafeDirectByteBuf.java:288) ~[netty-all-4.1.30.Final.jar:4.1.30.Final]
	at io.netty.buffer.AbstractByteBuf.writeBytes(AbstractByteBuf.java:1128) ~[netty-all-4.1.30.Final.jar:4.1.30.Final]
	at io.netty.channel.socket.nio.NioSocketChannel.doReadBytes(NioSocketChannel.java:347) ~[netty-all-4.1.30.Final.jar:4.1.30.Final]
	at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:148) ~[netty-all-4.1.30.Final.jar:4.1.30.Final]
	at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:644) ~[netty-all-4.1.30.Final.jar:4.1.30.Final]
	at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:579) ~[netty-all-4.1.30.Final.jar:4.1.30.Final]
	at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:496) ~[netty-all-4.1.30.Final.jar:4.1.30.Final]
	at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:458) [netty-all-4.1.30.Final.jar:4.1.30.Final]
	at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:897) [netty-all-4.1.30.Final.jar:4.1.30.Final]
	at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) [netty-all-4.1.30.Final.jar:4.1.30.Final]
	at java.lang.Thread.run(Thread.java:748) [?:1.8.0_212]

For reference, here is my logstash.conf:

input {
	beats {
		port => 5044
		ssl  => false
	}
}
filter {
	grok {
		match => {
			"message" => [
				"\[%{TIMESTAMP_ISO8601:log_timestamp}\] \[%{WORD:log_type}\].*- %{GREEDYDATA:log_message}",
				"%{YEAR:year}-%{MONTHNUM:month}-%{MONTHDAY:day} %{HOUR:hour}:%{MINUTE:minute} %{ISO8601_TIMEZONE:timezone}.*%{WORD:method} %{URIPATHPARAM:request} %{NUMBER:status}.*- %{NUMBER:duration}"
			]
		}
	}
}
output {
	stdout { codec => rubydebug }
}

What is going on? I am certain that the grok patterns work, because I already tested filebeat and logstash together locally. Nothing other than "Connection reset by peer" is being output in my k8s logstash logs, so I think this has to do with logstash itself.
