7.6.0 still attempting plaintext HTTP

Wondering about these log events on our elected master node:

[2020-03-03T08:46:57,125][WARN ][o.e.x.s.t.n.SecurityNetty4HttpServerTransport] [es-mst2] received plaintext http traffic on an https channel, closing connection Netty4HttpChannel{localAddress=/<redacted>:9200, remoteAddress=/<redacted>:38520}
[2020-03-03T08:47:02,131][WARN ][o.e.x.s.t.n.SecurityNetty4HttpServerTransport] [es-mst2] received plaintext http traffic on an https channel, closing connection Netty4HttpChannel{localAddress=/<redacted>:9200, remoteAddress=/<redacted>:38526}
[2020-03-03T08:47:07,136][WARN ][o.e.x.s.t.n.SecurityNetty4HttpServerTransport] [es-mst2] received plaintext http traffic on an https channel, closing connection Netty4HttpChannel{localAddress=/<redacted>:9200, remoteAddress=/<redacted>:38532}

These seem to come every 5 seconds, but only ever from our data/ML (DIL) nodes. I've tried to narrow it down further, but I can never match the source port to a process with netstat, and tcpdump doesn't reveal anything more. Could it possibly be the Elasticsearch instance itself attempting plain HTTP over the HTTPS port now and then?

All our Elasticsearch nodes have these HTTP SSL settings:

xpack.security.http.ssl.enabled: true
xpack.security.http.ssl.supported_protocols: [ "TLSv1.2", "TLSv1.1" ]
xpack.security.http.ssl.keystore.path: certs/elastic-certificates.p12
xpack.security.http.ssl.truststore.path: certs/elastic-certificates.p12
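
For reference, the warning should be reproducible on demand with curl, which makes it easy to confirm the HTTPS layer itself is behaving as expected (the hostname below is just a placeholder):

# hostname is an example; run against any node's HTTP port
curl -sk -u elastic https://es-mst2.example.com:9200    # over TLS: returns the usual cluster banner
curl -s http://es-mst2.example.com:9200                 # plaintext: the connection is closed and the node logs
                                                        #   the "plaintext http traffic on an https channel" warning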

Anyone got a hint on this?
Btw, it's not just the elected master: all our master nodes log these plaintext events, only ever from our data nodes, approximately every 5 seconds :confused:

[2020-03-10T14:35:46,840][WARN ][o.e.x.s.t.n.SecurityNetty4HttpServerTransport] [es-mst2] received plaintext http traffic on an https channel, closing connection Netty4HttpChannel{localAddress=/62.243.41.231:9200, remoteAddress=/62.243.41.175:56410}

Tried to capture it with tcpdump:

14:35:46.839518 IP d1r1n11.<redacted>.56410 > es-mst2.<redacted>.9200: Flags [S], seq 1921166537, win 26880, options [mss 8960,sackOK,TS val 696629784 ecr 0,nop,wscale 7], length 0
14:35:46.839547 IP es-mst2.<redacted>.9200 > d1r1n11.<redacted>.56410: Flags [S.], seq 1457143921, ack 1921166538, win 26844, options [mss 8960,sackOK,TS val 3529405069 ecr 696629784,nop,wscale 7], length 0
14:35:46.839826 IP d1r1n11.<redacted>.56410 > es-mst2.<redacted>.9200: Flags [.], ack 1, win 210, options [nop,nop,TS val 696629785 ecr 3529405069], length 0
14:35:46.839864 IP d1r1n11.<redacted>.56410 > es-mst2.<redacted>.9200: Flags [P.], seq 1:211, ack 1, win 210, options [nop,nop,TS val 696629785 ecr 3529405069], length 210
14:35:46.839870 IP es-mst2.<redacted>.9200 > d1r1n11.<redacted>.56410: Flags [.], ack 211, win 219, options [nop,nop,TS val 3529405069 ecr 696629785], length 0
14:35:46.840134 IP es-mst2.<redacted>.9200 > d1r1n11.<redacted>.56410: Flags [F.], seq 1, ack 211, win 219, options [nop,nop,TS val 3529405070 ecr 696629785], length 0
14:35:46.840283 IP d1r1n11.<redacted>.56410 > es-mst2.<redacted>.9200: Flags [F.], seq 211, ack 2, win 210, options [nop,nop,TS val 696629785 ecr 3529405070], length 0
14:35:46.840295 IP es-mst2.<redacted>.9200 > d1r1n11.<redacted>.56410: Flags [.], ack 212, win 219, options [nop,nop,TS val 3529405070 ecr 696629785], length 0
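
(The capture above only shows the TCP handshake and teardown; re-running tcpdump with -A and an unlimited snap length should dump the 210-byte payload in ASCII, which for a plain-HTTP client typically starts with a request line and a User-Agent header. The filter below simply reuses the addresses from the log line above.)

sudo tcpdump -nn -i any -A -s 0 'tcp port 9200 and host 62.243.41.175'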

AFAIK Elasticsearch doesn't really talk to itself over HTTP (or HTTPS) so I suspect it's something else. I would normally try running tcpdump and, at the same time, spamming netstat with something like while [ true ]; do date; sudo netstat -antp; done > netstat.log on the source of this traffic (i.e. d1r1n11). This would normally turn up a process matching one of the ports that appears in the tcpdump output.
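
A slightly filtered variant of that loop keeps the output manageable while still catching short-lived connections (sketch only; the grep pattern, sleep interval and file name are arbitrary):

# run on the suspected source node (d1r1n11); keep only lines involving the HTTP(S) port
while true; do date; sudo netstat -antp | grep ':9200'; sleep 0.2; done > netstat-9200.log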

Alternatively you can capture the full packets with tcpdump -w packets.pcap and then look for hints of the origin e.g. using Wireshark. Most HTTP clients will try and identify themselves with the User-Agent header, for instance.
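
(If the Wireshark GUI is awkward to use on a server, tshark can pull the same information straight out of the capture; the file name matches the command above.)

tshark -r packets.pcap -Y http.request -T fields -e ip.src -e tcp.srcport -e http.request.method -e http.user_agent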

I never managed to capture anything other than Metricbeat connecting to port 9200 on the master nodes, and Metricbeat is using TLS.

But I then found some old Logstash instances still running, a blast from the past from when these nodes also ran Cassandra, and they were not using TLS back in those days :slight_smile:
They came back to life after a recent reboot. Stopping them also stopped the plaintext warnings on the masters :wink:
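
(For anyone chasing the same symptom: on the source node, ss can usually point straight at the process owning the outbound connections, e.g.)

sudo ss -tnp '( dport = :9200 )'    # -t tcp, -n numeric, -p show the owning process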


Yep, that'd do it. Nice work.

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.