Node throwing exception: NotSslRecordException


(Stephen Patten) #1

After installing X-Pack and bringing up the cluster, one node is continually throwing this error, whether or not the other nodes in the cluster are up or down.

[2018-02-02T10:19:28,357][WARN ][o.e.x.s.t.n.SecurityNetty4HttpServerTransport] [lidt20elsrch01] caught exception while handling client http traffic, closing connection [id: 0xc85c6079, L:0.0.0.0/0.0.0.0:9200 ! R:/172.20.141.29:57781]
io.netty.handler.codec.DecoderException: io.netty.handler.ssl.NotSslRecordException: not an SSL/TLS record: removed this text
at io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:459) ~[netty-codec-4.1.13.Final.jar:4.1.13.Final]
at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:265) ~[netty-codec-4.1.13.Final.jar:4.1.13.Final]
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362) [netty-transport-4.1.13.Final.jar:4.1.13.Final]
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348) [netty-transport-4.1.13.Final.jar:4.1.13.Final]
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340) [netty-transport-4.1.13.Final.jar:4.1.13.Final]
at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1334) [netty-transport-4.1.13.Final.jar:4.1.13.Final]
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362) [netty-transport-4.1.13.Final.jar:4.1.13.Final]
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348) [netty-transport-4.1.13.Final.jar:4.1.13.Final]
at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:926) [netty-transport-4.1.13.Final.jar:4.1.13.Final]
at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:134) [netty-transport-4.1.13.Final.jar:4.1.13.Final]
at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:644) [netty-transport-4.1.13.Final.jar:4.1.13.Final]
at io.netty.channel.nio.NioEventLoop.processSelectedKeysPlain(NioEventLoop.java:544) [netty-transport-4.1.13.Final.jar:4.1.13.Final]
at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:498) [netty-transport-4.1.13.Final.jar:4.1.13.Final]
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:458) [netty-transport-4.1.13.Final.jar:4.1.13.Final]
at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858) [netty-common-4.1.13.Final.jar:4.1.13.Final]
at java.lang.Thread.run(Thread.java:748) [?:1.8.0_162]
Caused by: io.netty.handler.ssl.NotSslRecordException: not an SSL/TLS record: removed this text
at io.netty.handler.ssl.SslHandler.decode(SslHandler.java:1103) ~[?:?]
at io.netty.handler.codec.ByteToMessageDecoder.decodeRemovalReentryProtection(ByteToMessageDecoder.java:489) ~[?:?]
at io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:428) ~[?:?]

cluster.name: elasticsearch-test
path.data: d:\_data\elasticsearch\data
path.logs: d:\_logs\elasticsearch\logs
node.name: lidt20elsrch01
network.host: lidt20elsrch01.example.net
xpack.ssl.key: certs/lidt20elsrch01.key
xpack.ssl.certificate: certs/lidt20elsrch01.crt
xpack.ssl.certificate_authorities: certs/CA.crt
xpack.security.transport.ssl.enabled: true
xpack.security.http.ssl.enabled: true
discovery.zen.ping.unicast.hosts: ['lidt20elsrch01.example.net', 'lidt20elsrch02.example.net', 'lidt20elsrch03.example.net']
node.max_local_storage_nodes: 3

The other nodes come up fine and work together; they are NOT exhibiting this behaviour.

I've seen other questions about this, but no concrete answers or resolutions. What should I focus on?


(Tim Vernum) #2

Port 9200 is the HTTP port, so something is connecting to that port and trying to make a clear-text HTTP connection rather than a TLS (HTTPS) connection.
I can't tell you what process that is, but it's running on the 172.20.141.29 machine, so that would be the place to start.
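One quick way to confirm which case you're in is to hit the port both ways from that machine (hostname and CA path taken from your config above; the exact curl flags are just a suggestion):

# Clear-text HTTP to the TLS-enabled port will reproduce the NotSslRecordException warning on the node:
curl http://lidt20elsrch01.example.net:9200/

# HTTPS with the cluster's CA should get a proper response (even a 401 without credentials proves TLS is working):
curl --cacert certs/CA.crt -u elastic https://lidt20elsrch01.example.net:9200/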


(Stephen Patten) #3

Tim,

Thanks again! I actually had Jared Carey help me with this, and we ended up blocking the IP address.

It turns out that when our IT dept told me what that IP address/machine was, they had old information: it was actually a prior install of Logstash running 5.x (which, BTW, I hadn't gotten around to installing myself yet). Once we blocked the IP address in the node settings, the cluster performed correctly.
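For anyone curious how to block an address at the node level: X-Pack's IP filtering can do it from elasticsearch.yml. A minimal sketch (my guess at the shape, not the exact config we used):

xpack.security.http.filter.enabled: true
xpack.security.http.filter.deny: "172.20.141.29"
xpack.security.transport.filter.enabled: true
xpack.security.transport.filter.deny: "172.20.141.29"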

The next item was to install LS, and at that point I quickly understood the error that had been made. Setting up LS with TLS solved the problem.
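For completeness, a minimal sketch of what the TLS side of the Logstash elasticsearch output looks like (the credentials and paths below are placeholders, not our actual values):

output {
  elasticsearch {
    hosts    => ["https://lidt20elsrch01.example.net:9200"]
    ssl      => true              # speak HTTPS instead of clear-text HTTP
    cacert   => "certs/CA.crt"    # CA that signed the nodes' certificates
    user     => "elastic"         # placeholder credentials
    password => "changeme"
  }
}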


(system) #4

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.