Exception caught on transport layer - how to change transport module port?


#1

Hi,

I am trying to install Elasticsearch 5.2.0 on Red Hat 6.7 from the tar.gz archive.
I am not using the RPM because an older version of the ELK stack is running in parallel on the same machine.

After starting the new ES I get the following output:

[2017-02-06T11:32:14,853][WARN ][o.e.b.BootstrapChecks    ] [node-1_logiprodcontrol] system call filters failed to install; check the logs and fix your configuration or disable system call filters at your own risk
[2017-02-06T11:32:15,419][WARN ][o.e.t.n.Netty4Transport  ] [node-1_logiprodcontrol] exception caught on transport layer [[id: 0x7b1cc478, L:/0:0:0:0:0:0:0:1:49402 - R:/0:0:0:0:0:0:0:1:9300]], closing connection
java.io.EOFException: tried to read: 112 bytes but only 60 remaining
	at org.elasticsearch.transport.netty4.ByteBufStreamInput.ensureCanReadBytes(ByteBufStreamInput.java:75) ~[?:?]
	at org.elasticsearch.common.io.stream.FilterStreamInput.ensureCanReadBytes(FilterStreamInput.java:80) ~[elasticsearch-5.2.0.jar:5.2.0]
	at org.elasticsearch.common.io.stream.StreamInput.readArraySize(StreamInput.java:925) ~[elasticsearch-5.2.0.jar:5.2.0]
	at org.elasticsearch.common.io.stream.StreamInput.readString(StreamInput.java:342) ~[elasticsearch-5.2.0.jar:5.2.0]
	at org.elasticsearch.common.io.stream.StreamInput.readList(StreamInput.java:885) ~[elasticsearch-5.2.0.jar:5.2.0]
	at org.elasticsearch.common.io.stream.StreamInput.readMapOfLists(StreamInput.java:479) ~[elasticsearch-5.2.0.jar:5.2.0]
	at org.elasticsearch.common.util.concurrent.ThreadContext$ThreadContextStruct.<init>(ThreadContext.java:335) ~[elasticsearch-5.2.0.jar:5.2.0]
	at org.elasticsearch.common.util.concurrent.ThreadContext$ThreadContextStruct.<init>(ThreadContext.java:322) ~[elasticsearch-5.2.0.jar:5.2.0]
	at org.elasticsearch.common.util.concurrent.ThreadContext.readHeaders(ThreadContext.java:184) ~[elasticsearch-5.2.0.jar:5.2.0]
	at org.elasticsearch.transport.TcpTransport.messageReceived(TcpTransport.java:1327) ~[elasticsearch-5.2.0.jar:5.2.0]
	at org.elasticsearch.transport.netty4.Netty4MessageChannelHandler.channelRead(Netty4MessageChannelHandler.java:74) ~[transport-netty4-5.2.0.jar:5.2.0]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:363) [netty-transport-4.1.7.Final.jar:4.1.7.Final]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:349) [netty-transport-4.1.7.Final.jar:4.1.7.Final]
	at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:341) [netty-transport-4.1.7.Final.jar:4.1.7.Final]
	at io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:293) [netty-codec-4.1.7.Final.jar:4.1.7.Final]
	at io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:280) [netty-codec-4.1.7.Final.jar:4.1.7.Final]
	at io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:396) [netty-codec-4.1.7.Final.jar:4.1.7.Final]
	at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:248) [netty-codec-4.1.7.Final.jar:4.1.7.Final]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:363) [netty-transport-4.1.7.Final.jar:4.1.7.Final]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:349) [netty-transport-4.1.7.Final.jar:4.1.7.Final]
	at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:341) [netty-transport-4.1.7.Final.jar:4.1.7.Final]
	at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1334) [netty-transport-4.1.7.Final.jar:4.1.7.Final]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:363) [netty-transport-4.1.7.Final.jar:4.1.7.Final]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:349) [netty-transport-4.1.7.Final.jar:4.1.7.Final]
	at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:926) [netty-transport-4.1.7.Final.jar:4.1.7.Final]
	at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:129) [netty-transport-4.1.7.Final.jar:4.1.7.Final]
	at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:642) [netty-transport-4.1.7.Final.jar:4.1.7.Final]
	at io.netty.channel.nio.NioEventLoop.processSelectedKeysPlain(NioEventLoop.java:527) [netty-transport-4.1.7.Final.jar:4.1.7.Final]
	at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:481) [netty-transport-4.1.7.Final.jar:4.1.7.Final]
	at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:441) [netty-transport-4.1.7.Final.jar:4.1.7.Final]
	at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858) [netty-common-4.1.7.Final.jar:4.1.7.Final]
	at java.lang.Thread.run(Thread.java:745) [?:1.8.0_121]
...
...
...
[2017-02-06T11:32:18,049][INFO ][o.e.c.s.ClusterService   ] [node-1_logiprodcontrol] new_master {node-1_logiprodcontrol}{7e64BlXmSf-K_XjQIaA43g}{VutrzuhaTAiUDfqxYGQ_rw}{127.0.0.1}{127.0.0.1:9301}, reason: zen-disco-elected-as-master ([0] nodes joined)
[2017-02-06T11:32:18,070][INFO ][o.e.h.HttpServer         ] [node-1_logiprodcontrol] publish_address {127.0.0.1:8200}, bound_addresses {[::1]:8200}, {127.0.0.1:8200}
[2017-02-06T11:32:18,070][INFO ][o.e.n.Node               ] [node-1_logiprodcontrol] started
[2017-02-06T11:32:18,103][INFO ][o.e.g.GatewayService     ] [node-1_logiprodcontrol] recovered [0] indices into cluster_state
[2017-02-06T11:32:28,426][INFO ][o.e.n.Node               ] [node-1_logiprodcontrol] stopping ...
[2017-02-06T11:32:28,457][INFO ][o.e.n.Node               ] [node-1_logiprodcontrol] stopped
[2017-02-06T11:32:28,457][INFO ][o.e.n.Node               ] [node-1_logiprodcontrol] closing ...
[2017-02-06T11:32:28,464][INFO ][o.e.n.Node               ] [node-1_logiprodcontrol] closed

#2

Here is my elasticsearch.yml

cluster.name: logicontrol_5x
node.name: node-1_logiprodcontrol
path.data: /usr/local/elk/elasticsearch/data
path.logs: /var/log/elasticsearch5
http.port: 8200

The old ELK stack is using the following ports:
ES: 9200 (http), 9300 (transport)
kibana: 5601
logstash: 5544

([2017-02-06T11:32:15,419][WARN ][o.e.t.n.Netty4Transport ] [node-1_logiprodcontrol] exception caught on transport layer [[id: 0x7b1cc478, L:/0:0:0:0:0:0:0:1:49402 - R:/0:0:0:0:0:0:0:1:9300]], closing connection
java.io.EOFException: tried to read: 112 bytes but only 60 remaining)

It looks as if ES 5.2 is trying to connect to the transport port :9300 of the old ES 1.x.
How can I stop this? In which file can I change the transport port to, let's say, 8200?

I want both ELK stack instances to run independently until the migration is done and the old ELK can be shut down.

Thanks, Andreas


(Mark Harwood) #3

The elasticsearch.yml config file has a transport.tcp.port setting. Check out the networking docs.
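For example, alongside the http.port override already in the config from post #2, the relevant elasticsearch.yml lines might look like this (the value 8300 is just an illustration, any free port works):

```yaml
# elasticsearch.yml for the new 5.2 node
http.port: 8200
# Move the transport layer off the default 9300-9400 range
# so it does not collide with the old 1.x node on :9300.
transport.tcp.port: 8300
```

transport.tcp.port also accepts a range (e.g. 8300-8400), in which case the node binds to the first free port in that range.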


#4

Thanks for the fast reply.
Does setting the port as you described change the whole port range from 9300-9400 to 8300-8400?


(system) #5

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.