Create Elasticsearch Cluster (newbie)

Hi guys,

I downloaded the Elasticsearch ZIP archive for Windows, extracted it, and made three copies of the folder. Then I configured elasticsearch.yml in each folder.

cluster.name: elasticsearch-dev
node.name: masterNode1
node.master: true
network.host: 127.0.0.1
http.port: 9201
discovery.zen.ping.unicast.hosts: ["127.0.0.1::9202","127.0.0.1::9203"]
discovery.zen.minimum_master_nodes: 3

cluster.name: elasticsearch-dev
node.name: masterNode2
node.master: true
network.host: 127.0.0.2
http.port: 9202
discovery.zen.ping.unicast.hosts: ["127.0.0.1::9201","127.0.0.1::9203"]
discovery.zen.minimum_master_nodes: 3

cluster.name: elasticsearch-dev
node.name: masterNode3
node.master: true
network.host: 127.0.0.3
http.port: 9203
discovery.zen.ping.unicast.hosts: ["127.0.0.1::9201","127.0.0.1::9202"]
discovery.zen.minimum_master_nodes: 3

Then I started the bin/elasticsearch.bat file in each folder. In every console window I get the message: failed to resolve host [127.0.0.x]

(x = 1, 2 or 3)

What is wrong with my configuration?

Kind regards

Elasticsearch internally uses the transport protocol for inter-node communication, so discovery.zen.ping.unicast.hosts must list the transport ports (e.g. 9300, 9301, 9302), not the HTTP ports.
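For example (a sketch, assuming the three nodes keep Elasticsearch's default transport ports 9300-9302 on the same machine), the list might look like:

```yaml
# Transport ports, one per node on this machine -- not the HTTP ports:
discovery.zen.ping.unicast.hosts: ["127.0.0.1:9300", "127.0.0.1:9301", "127.0.0.1:9302"]
```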

I did this:

discovery.zen.ping.unicast.hosts: ["127.0.0.1::9201","127.0.0.1::9202"]

Is something wrong with the configuration?

Yes. As Christian said, the ports are 9300, 9301, ...

Got it. I changed the configuration. But it's still not working.

I get the message: failed to resolve host [127.0.0.1::9301], then the process stops:
not enough master nodes discovered during pinging.
[masterNode1] stopped
[masterNode1] closing ...
[masterNode1] closed...

cluster.name: elasticsearch-dev
node.name: masterNode1
node.master: true
network.host: 127.0.0.1
http.port: 9300
discovery.zen.ping.unicast.hosts: ["127.0.0.2::9301","127.0.0.3::9302"]
discovery.zen.minimum_master_nodes: 3

cluster.name: elasticsearch-dev
node.name: masterNode2
node.master: true
network.host: 127.0.0.2
http.port: 9301
discovery.zen.ping.unicast.hosts: ["127.0.0.1::9300","127.0.0.3::9302"]
discovery.zen.minimum_master_nodes: 3

cluster.name: elasticsearch-dev
node.name: masterNode3
node.master: true
network.host: 127.0.0.3
http.port: 9302
discovery.zen.ping.unicast.hosts: ["127.0.0.1::9300","127.0.0.2::9301"]
discovery.zen.minimum_master_nodes: 3

I also set the host to 127.0.0.1 in all configuration files, but that did not work either.

Instead of:

127.0.0.1::9301

Use:

127.0.0.1:9301

The HTTP port (http.port) should be in the 9200-9299 range, as it was before.

It is only in discovery.zen.ping.unicast.hosts that you should use ports 9300, 9301 and 9302.
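The double colon also explains the earlier "failed to resolve host" messages. As a rough illustration (this is not Elasticsearch's actual parser, just a sketch of why a host:port string with two colons goes wrong):

```python
addr = "127.0.0.1::9301"

# Splitting on ":" leaves an empty field between the two colons, so no
# clean "host:port" interpretation exists:
print(addr.split(":"))         # ['127.0.0.1', '', '9301'] -> empty port field

# Splitting at the last colon instead leaves a trailing colon in the host:
host, _, port = addr.rpartition(":")
print(host)                    # '127.0.0.1:' -- not a resolvable hostname

# With a single colon, both pieces come out clean:
fixed = "127.0.0.1:9301"
host, _, port = fixed.rpartition(":")
print(host, port)              # 127.0.0.1 9301
```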

Thank you for your support.

It's still not working. To narrow down the problem I started just a single instance of Elasticsearch and got the following exception:

X:\Analytics\elastic\elasticsearch cluster\elasticsearch-dev-3\masterNode-1\bin>elasticsearch
[2018-03-05T18:10:02,981][INFO ][o.e.n.Node               ] [masterNode1] initializing ...
...
[2018-03-05T18:10:06,393][INFO ][o.e.d.DiscoveryModule    ] [masterNode1] using discovery type [zen]
[2018-03-05T18:10:07,316][INFO ][o.e.n.Node               ] [masterNode1] initialized
[2018-03-05T18:10:07,316][INFO ][o.e.n.Node               ] [masterNode1] starting ...
[2018-03-05T18:10:07,864][INFO ][o.e.t.TransportService   ] [masterNode1] publish_address {127.0.0.1:9300}, bound_addresses {127.0.0.1:9300}
...
[2018-03-05T18:10:35,095][WARN ][o.e.d.z.ZenDiscovery     ] [masterNode1] not enough master nodes discovered during pinging (found [[Candidate{node={masterNode1}{4ON6Q6j-ROWXYPXtHMlHYg}{GrVsZwR-QeiqRkSgOoEamA}{127.0.0.1}{127.0.0.1:9300}, clusterStateVersion=-1}]], but needed [3]), pinging again
[2018-03-05T18:10:38,006][WARN ][o.e.n.Node               ] [masterNode1] timed out while waiting for initial discovery state - timeout: 30s
[2018-03-05T18:10:38,116][WARN ][o.e.d.z.ZenDiscovery     ] [masterNode1] not enough master nodes discovered during pinging (found [[Candidate{node={masterNode1}{4ON6Q6j-ROWXYPXtHMlHYg}{GrVsZwR-QeiqRkSgOoEamA}{127.0.0.1}{127.0.0.1:9300}, clusterStateVersion=-1}]], but needed [3]), pinging again
[2018-03-05T18:10:38,132][WARN ][o.e.b.ElasticsearchUncaughtExceptionHandler] [masterNode1] uncaught exception in thread [main]
org.elasticsearch.bootstrap.StartupException: BindHttpException[Failed to bind to [9300]]; nested: BindException[Address already in use: bind];
        at org.elasticsearch.bootstrap.Elasticsearch.init(Elasticsearch.java:125) ~[elasticsearch-6.2.2.jar:6.2.2]
        at org.elasticsearch.bootstrap.Elasticsearch.execute(Elasticsearch.java:112) ~[elasticsearch-6.2.2.jar:6.2.2]
        at org.elasticsearch.cli.EnvironmentAwareCommand.execute(EnvironmentAwareCommand.java:86) ~[elasticsearch-6.2.2.jar:6.2.2]
        at org.elasticsearch.cli.Command.mainWithoutErrorHandling(Command.java:124) ~[elasticsearch-cli-6.2.2.jar:6.2.2]
        at org.elasticsearch.cli.Command.main(Command.java:90) ~[elasticsearch-cli-6.2.2.jar:6.2.2]
        at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:92) ~[elasticsearch-6.2.2.jar:6.2.2]
        at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:85) ~[elasticsearch-6.2.2.jar:6.2.2]
Caused by: org.elasticsearch.http.BindHttpException: Failed to bind to [9300]
        at org.elasticsearch.http.netty4.Netty4HttpServerTransport.bindAddress(Netty4HttpServerTransport.java:436) ~[?:?]
        at org.elasticsearch.http.netty4.Netty4HttpServerTransport.createBoundHttpAddress(Netty4HttpServerTransport.java:337) ~[?:?]
        at org.elasticsearch.http.netty4.Netty4HttpServerTransport.doStart(Netty4HttpServerTransport.java:314) ~[?:?]
        at org.elasticsearch.common.component.AbstractLifecycleComponent.start(AbstractLifecycleComponent.java:66) ~[elasticsearch-6.2.2.jar:6.2.2]
        at org.elasticsearch.node.Node.start(Node.java:690) ~[elasticsearch-6.2.2.jar:6.2.2]
        at org.elasticsearch.bootstrap.Bootstrap.start(Bootstrap.java:262) ~[elasticsearch-6.2.2.jar:6.2.2]
        at org.elasticsearch.bootstrap.Bootstrap.init(Bootstrap.java:332) ~[elasticsearch-6.2.2.jar:6.2.2]
        at org.elasticsearch.bootstrap.Elasticsearch.init(Elasticsearch.java:121) ~[elasticsearch-6.2.2.jar:6.2.2]
        ... 6 more
Caused by: java.net.BindException: Address already in use: bind
        at sun.nio.ch.Net.bind0(Native Method) ~[?:?]
        at sun.nio.ch.Net.bind(Net.java:433) ~[?:?]
        at sun.nio.ch.Net.bind(Net.java:425) ~[?:?]
        at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:223) ~[?:?]
        at io.netty.channel.socket.nio.NioServerSocketChannel.doBind(NioServerSocketChannel.java:128) ~[?:?]
        at io.netty.channel.AbstractChannel$AbstractUnsafe.bind(AbstractChannel.java:558) ~[?:?]
        at io.netty.channel.DefaultChannelPipeline$HeadContext.bind(DefaultChannelPipeline.java:1283) ~[?:?]
        at io.netty.channel.AbstractChannelHandlerContext.invokeBind(AbstractChannelHandlerContext.java:501) ~[?:?]
        at io.netty.channel.AbstractChannelHandlerContext.bind(AbstractChannelHandlerContext.java:486) ~[?:?]
        at io.netty.channel.DefaultChannelPipeline.bind(DefaultChannelPipeline.java:989) ~[?:?]
        at io.netty.channel.AbstractChannel.bind(AbstractChannel.java:254) ~[?:?]
        at io.netty.bootstrap.AbstractBootstrap$2.run(AbstractBootstrap.java:365) ~[?:?]
        at io.netty.util.concurrent.AbstractEventExecutor.safeExecute(AbstractEventExecutor.java:163) ~[?:?]
        at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:403) ~[?:?]
        at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:463) ~[?:?]
        at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858) ~[?:?]
        at java.lang.Thread.run(Thread.java:748) [?:1.8.0_152]
[2018-03-05T18:10:39,133][INFO ][o.e.n.Node               ] [masterNode1] stopping ...
[2018-03-05T18:10:39,133][WARN ][o.e.d.z.ZenDiscovery     ] [masterNode1] not enough master nodes discovered during pinging (found [[Candidate{node={masterNode1}{4ON6Q6j-ROWXYPXtHMlHYg}{GrVsZwR-QeiqRkSgOoEamA}{127.0.0.1}{127.0.0.1:9300}, clusterStateVersion=-1}]], but needed [3]), pinging again
[2018-03-05T18:10:41,231][INFO ][o.e.n.Node               ] [masterNode1] stopped
[2018-03-05T18:10:41,231][INFO ][o.e.n.Node               ] [masterNode1] closing ...
[2018-03-05T18:10:41,231][INFO ][o.e.n.Node               ] [masterNode1] closed

P.S.: I have no other service running, and I don't understand why the address is already in use. I checked with resmon whether the port is in use by another application. It's not; it's free.

P.P.S.: I have another copy of Elasticsearch that runs as just a single node. I configured it to use port 9300 and got the same exception as the one pasted above.
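The "Address already in use" does not require another application: with http.port set to 9300, the node's own transport layer (which had already bound 9300, as the publish_address line in the log shows) and its HTTP layer compete for the same port inside one process. A minimal sketch of that collision, using plain sockets rather than Elasticsearch:

```python
import socket

# First listener stands in for the transport layer, which binds its port
# before the HTTP layer starts.
first = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
first.bind(("127.0.0.1", 0))          # let the OS pick a free port
port = first.getsockname()[1]
first.listen()

# Second listener stands in for the HTTP layer configured to the same port.
second = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
try:
    second.bind(("127.0.0.1", port))  # same address, same port
    second.listen()
    collided = False
except OSError:                       # "Address already in use"
    collided = True
finally:
    second.close()
    first.close()

print(collided)  # the second bind is rejected
```

Moving http.port back into the 9200 range avoids the clash.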

I found the log files written by log4j. At the end of the file there is the following exception:

[2018-03-06T09:42:56,781][WARN ][o.e.d.z.ZenDiscovery     ] [masterNode2] not enough master nodes discovered during pinging (found [[Candidate{node={masterNode2}{4ON6Q6j-ROWXYPXtHMlHYg}{fqo9STjRRdSU55H3WEvV0Q}{127.0.0.2}{127.0.0.2:9300}, clusterStateVersion=-1}]], but needed [3]), pinging again
[2018-03-06T09:42:59,784][WARN ][o.e.d.z.ZenDiscovery     ] [masterNode2] not enough master nodes discovered during pinging (found [[Candidate{node={masterNode2}{4ON6Q6j-ROWXYPXtHMlHYg}{fqo9STjRRdSU55H3WEvV0Q}{127.0.0.2}{127.0.0.2:9300}, clusterStateVersion=-1}]], but needed [3]), pinging again
[2018-03-06T09:42:59,788][WARN ][o.e.t.n.Netty4Transport  ] [masterNode2] exception caught on transport layer [NettyTcpChannel{localAddress=/127.0.0.1:46821, remoteAddress=/127.0.0.3:9303}], closing connection
io.netty.handler.codec.DecoderException: java.io.StreamCorruptedException: invalid internal transport message format, got (48,54,54,50)
	at io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:459) ~[netty-codec-4.1.16.Final.jar:4.1.16.Final]
	at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:265) ~[netty-codec-4.1.16.Final.jar:4.1.16.Final]
...
	at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:499) [netty-transport-4.1.16.Final.jar:4.1.16.Final]
	at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:459) [netty-transport-4.1.16.Final.jar:4.1.16.Final]
	at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858) [netty-common-4.1.16.Final.jar:4.1.16.Final]
	at java.lang.Thread.run(Thread.java:748) [?:1.8.0_152]
Caused by: java.io.StreamCorruptedException: invalid internal transport message format, got (48,54,54,50)
	at org.elasticsearch.transport.TcpTransport.validateMessageHeader(TcpTransport.java:1283) ~[elasticsearch-6.2.2.jar:6.2.2]
	at org.elasticsearch.transport.netty4.Netty4SizeHeaderFrameDecoder.decode(Netty4SizeHeaderFrameDecoder.java:36) ~[?:?]
	at io.netty.handler.codec.ByteToMessageDecoder.decodeRemovalReentryProtection(ByteToMessageDecoder.java:489) ~[?:?]
	at io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:428) ~[?:?]
	... 19 more
[2018-03-06T09:42:59,808][WARN ][o.e.t.n.Netty4Transport  ] [masterNode2] exception caught on transport layer [NettyTcpChannel{localAddress=/127.0.0.1:46821, remoteAddress=/127.0.0.3:9303}], closing connection
io.netty.handler.codec.DecoderException: java.io.StreamCorruptedException: invalid internal transport message format, got (48,54,54,50)
	at io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:459) ~[netty-codec-4.1.16.Final.jar:4.1.16.Final]
	at io.netty.handler.codec.ByteToMessageDecoder.channelInputClosed(ByteToMessageDecoder.java:392) ~[netty-codec-4.1.16.Final.jar:4.1.16.Final]
...
	at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:463) [netty-transport-4.1.16.Final.jar:4.1.16.Final]
	at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858) [netty-common-4.1.16.Final.jar:4.1.16.Final]
	at java.lang.Thread.run(Thread.java:748) [?:1.8.0_152]
Caused by: java.io.StreamCorruptedException: invalid internal transport message format, got (48,54,54,50)
	at org.elasticsearch.transport.TcpTransport.validateMessageHeader(TcpTransport.java:1283) ~[elasticsearch-6.2.2.jar:6.2.2]
	at org.elasticsearch.transport.netty4.Netty4SizeHeaderFrameDecoder.decode(Netty4SizeHeaderFrameDecoder.java:36) ~[?:?]
	at io.netty.handler.codec.ByteToMessageDecoder.decodeRemovalReentryProtection(ByteToMessageDecoder.java:489) ~[?:?]
	at io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:428) ~[?:?]
	... 20 more
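One clue in this trace: the "invalid internal transport message format" bytes are printable ASCII, not the binary header the transport protocol expects. Decoding them (a quick check, reading the numbers quoted in the log as decimal ASCII codes) is consistent with plain text, such as HTTP traffic, reaching a transport connection because the HTTP and transport port numbers were mixed up:

```python
# The byte values quoted in the exception message:
garbled = bytes([48, 54, 54, 50])

# They decode to ordinary ASCII text rather than a binary transport header:
print(garbled.decode("ascii"))  # '0662'
```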

Since it looks like you are trying to set up all three nodes on a single host, try the following config:

cluster.name: elasticsearch-dev
node.name: masterNode1
node.master: true
network.host: 127.0.0.1
http.port: 9200
discovery.zen.ping.unicast.hosts: ["127.0.0.1::9300","127.0.0.1::9301","127.0.0.1::9302"]
discovery.zen.minimum_master_nodes: 2

cluster.name: elasticsearch-dev
node.name: masterNode2
node.master: true
network.host: 127.0.0.1
http.port: 9201
discovery.zen.ping.unicast.hosts: ["127.0.0.1::9300","127.0.0.1::9301","127.0.0.1::9302"]
discovery.zen.minimum_master_nodes: 2

cluster.name: elasticsearch-dev
node.name: masterNode3
node.master: true
network.host: 127.0.0.1
http.port: 9202
discovery.zen.ping.unicast.hosts: ["127.0.0.1::9300","127.0.0.1::9301","127.0.0.1::9302"]
discovery.zen.minimum_master_nodes: 2
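The change from minimum_master_nodes: 3 to 2 follows the documented quorum rule: the setting should be a strict majority of the master-eligible nodes, (N / 2) + 1. With 3 master-eligible nodes that is 2, so the cluster can still elect a master after losing one node, whereas a value of 3 requires all three to be up. A sketch:

```python
def minimum_master_nodes(master_eligible: int) -> int:
    """Majority quorum for discovery.zen.minimum_master_nodes."""
    # Integer division: 3 // 2 + 1 == 2, 5 // 2 + 1 == 3, ...
    return master_eligible // 2 + 1

print(minimum_master_nodes(3))  # 2 -- tolerates one node failure
```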

Yes, I have just one machine, and I would like to create a small cluster on it.

Why do you use 9200 in the above setting and 9300 in the other?

What is the correct format? Two colons or just one?

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.