How can I fix this "Connection reset by peer" error?

I keep seeing this error in Logstash and I'm afraid it's causing data loss. I get a couple a day:

[2020-03-10T15:08:32,460][INFO ][org.logstash.beats.BeatsHandler] [local: 0.0.0.0:5044, remote: 192.196.27.1:2406] Handling exception: Connection reset by peer
[2020-03-10T15:08:32,461][WARN ][io.netty.channel.DefaultChannelPipeline] An exceptionCaught() event was fired, and it reached at the tail of the pipeline. It usually means the last handler in the pipeline did not handle the exception.
java.io.IOException: Connection reset by peer
	at sun.nio.ch.FileDispatcherImpl.read0(Native Method) ~[?:1.8.0_131]
	at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:39) ~[?:1.8.0_131]
	at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:223) ~[?:1.8.0_131]
	at sun.nio.ch.IOUtil.read(IOUtil.java:192) ~[?:1.8.0_131]
	at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:380) ~[?:1.8.0_131]
	at io.netty.buffer.PooledUnsafeDirectByteBuf.setBytes(PooledUnsafeDirectByteBuf.java:288) ~[netty-all-4.1.30.Final.jar:4.1.30.Final]
	at io.netty.buffer.AbstractByteBuf.writeBytes(AbstractByteBuf.java:1128) ~[netty-all-4.1.30.Final.jar:4.1.30.Final]
	at io.netty.channel.socket.nio.NioSocketChannel.doReadBytes(NioSocketChannel.java:347) ~[netty-all-4.1.30.Final.jar:4.1.30.Final]
	at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:148) ~[netty-all-4.1.30.Final.jar:4.1.30.Final]
	at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:644) ~[netty-all-4.1.30.Final.jar:4.1.30.Final]
	at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:579) ~[netty-all-4.1.30.Final.jar:4.1.30.Final]
	at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:496) ~[netty-all-4.1.30.Final.jar:4.1.30.Final]
	at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:458) [netty-all-4.1.30.Final.jar:4.1.30.Final]
	at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:897) [netty-all-4.1.30.Final.jar:4.1.30.Final]
	at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) [netty-all-4.1.30.Final.jar:4.1.30.Final]
	at java.lang.Thread.run(Thread.java:748) [?:1.8.0_131]

I've looked around these forums a fair bit for this issue, and everyone seems to say it's a TLS/SSL problem. However, no one explains how to adjust it. As far as I can tell, the ssl parameter on the Logstash beats input defaults to false. My output for Filebeat looks like this:

output.logstash:
  # The Logstash hosts
  # hosts: ["localhost:5044"]
  ssl.enabled: false
  hosts: ["ec2-XX-XXX-XXX-XXX.compute-1.amazonaws.com:5044"]
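And as far as I can tell, the matching minimal beats input on the Logstash side would be something like this (a sketch with ssl left at its default of false; the port is assumed to match the output above):

```conf
input {
  beats {
    # Listen on the same port Filebeat's output.logstash points at.
    port => 5044
    # ssl defaults to false, so plaintext is expected from Filebeat.
  }
}
```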

Anyone else know what I need to adjust?

bump please help!

Anyone? This is for Filebeat version 7.1.1 and Logstash version 7.6, but I also got the error when my Logstash was version 7.1.1.

From memory, "connection reset by peer" sounds like an SSH or firewall thing. The Filebeat output is nothing complicated; you are simply declaring where the logs should go. Provided the URL is correct and the ports on both the Filebeat server and the Logstash server match, it should work.

java.io.IOException: Connection reset by peer means the other side abruptly aborted the connection in the midst of a transaction.
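You can reproduce that behaviour in a few lines of plain socket code. This sketch (not Logstash-specific) makes one end abort the connection with a TCP RST and shows that the other end's next read fails with a connection reset, which is exactly what the Netty handler in your log is reporting:

```python
import socket
import struct
import time

def demo_reset():
    """Show that an abrupt close (TCP RST) on one end is seen as
    'Connection reset by peer' on the other end's next read."""
    server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    server.bind(("127.0.0.1", 0))   # any free port on loopback
    server.listen(1)
    port = server.getsockname()[1]

    client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    client.connect(("127.0.0.1", port))
    conn, _ = server.accept()

    # SO_LINGER with a zero timeout makes close() send RST instead of
    # a normal FIN, i.e. the peer aborts instead of shutting down cleanly.
    client.setsockopt(socket.SOL_SOCKET, socket.SO_LINGER,
                      struct.pack("ii", 1, 0))
    client.close()
    time.sleep(0.2)  # give the RST time to arrive

    try:
        conn.recv(1024)
        result = "clean close"
    except ConnectionResetError:
        result = "connection reset"
    finally:
        conn.close()
        server.close()
    return result
```

So the message only tells you the Beats client (or something between it and Logstash, such as a load balancer health check) dropped the connection abruptly; it does not by itself tell you why.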

This is what mine looks like and it is working in production:

---
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /var/log/nginx/*.log
    - /var/log/apps/*.log
    
  exclude_files: ['\.gz$']
  multiline.pattern: ^\[
  multiline.negate: true
  multiline.match: after

filebeat.config.modules:
  # Glob pattern for configuration loading
  #path: ${path.config}/modules.d/*.yml
  
  # Set to true to enable config reloading
  reload.enabled: false

  # Period on which files under path should be checked for changes
  #reload.period: 10s

setup.template.settings:
  index.number_of_shards: 3
  #index.codec: best_compression
  #_source.enabled: false

name: "app1-filebeat"
tags: ["app1", "app"]

output.logstash:
  hosts: ["app1.mydomain.net:2561"]
  ssl.certificate_authorities: ["/usr/share/ca-certificates/mydomain/CA_-_Webserver.crt"]
  ssl.certificate: "/etc/ssl/certs/app1.mydomain.net.crt"
  ssl.key: "/etc/ssl/private/app1.mydomain.net.key"
  ssl.verification_mode: none
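The Logstash input on the other end of that output would be along these lines (a sketch, not copied from production; the port and certificate paths just mirror the Filebeat output above):

```conf
input {
  beats {
    port => 2561
    ssl => true
    ssl_certificate => "/etc/ssl/certs/app1.mydomain.net.crt"
    ssl_key => "/etc/ssl/private/app1.mydomain.net.key"
    ssl_certificate_authorities => ["/usr/share/ca-certificates/mydomain/CA_-_Webserver.crt"]
  }
}
```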

Have you been able to make it work without SSL?

Yes, but I do not know why you would not use SSL. You can even make your own root CA for this purpose.

What does your Logstash input look like?

Would I need to update the CA ever? I have so many machines that I manage manually that if I needed to update this every year, it would be a nightmare.

Why don't you use Ansible to take care of that for you?

I thought you only need to update a CA if it comes from an external root CA. With an internal CA you can set the expiry to whatever you want.
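For example, a private root CA with a roughly 20-year lifetime can be generated with openssl (the filenames and subject here are placeholders, not anything from this thread):

```shell
# Generate a private key for the internal root CA (placeholder filenames).
openssl genrsa -out internal-ca.key 4096

# Self-sign a root certificate valid for 7300 days (~20 years),
# so the CA itself does not need yearly renewal.
openssl req -x509 -new -key internal-ca.key -sha256 -days 7300 \
  -subj "/CN=Internal Root CA" -out internal-ca.crt
```

The per-host certificates signed by this CA can still be short-lived; only the root in ssl.certificate_authorities stays stable.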


Ansible doesn't work for my implementation sadly. But thanks for your support!

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.