Wrong config?

Hi all,

I'm trying out Elasticsearch 7.0.1, and my nodes are not joining the cluster.

What is wrong with my config?

cluster.name: es-cluster
node.name: elasticsearch-1
node.master: true
node.data: true
path.data: /var/lib/elasticsearch 
path.logs: /var/log/elasticsearch
network.host: 0.0.0.0
discovery.seed_hosts: ["10.250.5.77", "10.250.5.69"]
cluster.initial_master_nodes: ["10.250.5.77", "10.250.5.69", "10.250.5.68"]

cluster.name: es-cluster
node.name: elasticsearch-2
node.master: true
node.data: true
path.data: /var/lib/elasticsearch 
path.logs: /var/log/elasticsearch
network.host: 0.0.0.0
discovery.seed_hosts: ["10.250.5.77", "10.250.5.69"]
cluster.initial_master_nodes: ["10.250.5.77", "10.250.5.69", "10.250.5.68"]

cluster.name: es-cluster
node.name: elasticsearch-3
node.master: true
node.data: true
path.data: /var/lib/elasticsearch 
path.logs: /var/log/elasticsearch
network.host: 0.0.0.0
discovery.seed_hosts: ["10.250.5.77", "10.250.5.69"]
cluster.initial_master_nodes: ["10.250.5.77", "10.250.5.69", "10.250.5.68"]

Thank you in advance!

jerry

Can you share the full logs from each of the 3 nodes? They'll tell us much more about what's going wrong.

Why do you only have 2 nodes listed in discovery.seed_hosts? Normally you'd put all three nodes in there.

It's normally simpler to put the node names into cluster.initial_master_nodes rather than the addresses.
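Combining those two suggestions, each node's elasticsearch.yml could look something like this (addresses and node names taken from your post; adjust node.name per host):

```yaml
cluster.name: es-cluster
node.name: elasticsearch-1   # elasticsearch-2 / elasticsearch-3 on the other hosts
node.master: true
node.data: true
path.data: /var/lib/elasticsearch
path.logs: /var/log/elasticsearch
network.host: 0.0.0.0
# List all three nodes so any node can discover the others:
discovery.seed_hosts: ["10.250.5.68", "10.250.5.69", "10.250.5.77"]
# Node names are simpler than addresses here:
cluster.initial_master_nodes: ["elasticsearch-1", "elasticsearch-2", "elasticsearch-3"]
```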

Here is my curl command and its output:

elasticsearch-3:/etc/elasticsearch# curl http://127.0.0.1:9200/_cat/nodes?v
ip          heap.percent ram.percent cpu load_1m load_5m load_15m node.role master name
10.250.5.77           32          43   1    0.00    0.03     0.01 mdi       *      elasticsearch-3

And the full log from node3:

[2019-05-10T07:42:49,448][INFO ][o.e.n.Node               ] [elasticsearch-3] stopping ...
[2019-05-10T07:42:49,502][INFO ][o.e.x.w.WatcherService   ] [elasticsearch-3] stopping watch service, reason [shutdown initiated]
[2019-05-10T07:42:49,991][INFO ][o.e.x.m.p.l.CppLogMessageHandler] [elasticsearch-3] [controller/3369] [Main.cc@148] Ml controller exiting
[2019-05-10T07:42:49,994][INFO ][o.e.x.m.p.NativeController] [elasticsearch-3] Native controller process has stopped - no new native processes can be started
[2019-05-10T07:42:50,007][INFO ][o.e.n.Node               ] [elasticsearch-3] stopped
[2019-05-10T07:42:50,007][INFO ][o.e.n.Node               ] [elasticsearch-3] closing ...
[2019-05-10T07:42:50,028][INFO ][o.e.n.Node               ] [elasticsearch-3] closed
[2019-05-10T07:42:52,689][INFO ][o.e.e.NodeEnvironment    ] [elasticsearch-3] using [1] data paths, mounts [[/ (/dev/sda1)]], net usable_space [16.6gb], net total_space [19.6gb], types [ext4]
[2019-05-10T07:42:52,695][INFO ][o.e.e.NodeEnvironment    ] [elasticsearch-3] heap size [990.7mb], compressed ordinary object pointers [true]
[2019-05-10T07:42:52,699][INFO ][o.e.n.Node               ] [elasticsearch-3] node name [elasticsearch-3], node ID [KPyK8TYiRCON_gGS85JTDA]
[2019-05-10T07:42:52,699][INFO ][o.e.n.Node               ] [elasticsearch-3] version[7.0.1], pid[5524], build[default/deb/e4efcb5/2019-04-29T12:56:03.145736Z], OS[Linux/4.9.0-9-amd64/amd64], JVM[Oracle Corporation/OpenJDK 64-Bit Server VM/12.0.1/12.0.1+12]
[2019-05-10T07:42:52,700][INFO ][o.e.n.Node               ] [elasticsearch-3] JVM home [/usr/share/elasticsearch/jdk]
[2019-05-10T07:42:52,700][INFO ][o.e.n.Node               ] [elasticsearch-3] JVM arguments [-Xms1g, -Xmx1g, -XX:+UseConcMarkSweepGC, -XX:CMSInitiatingOccupancyFraction=75, -XX:+UseCMSInitiatingOccupancyOnly, -Des.networkaddress.cache.ttl=60, -Des.networkaddress.cache.negative.ttl=10, -XX:+AlwaysPreTouch, -Xss1m, -Djava.awt.headless=true, -Dfile.encoding=UTF-8, -Djna.nosys=true, -XX:-OmitStackTraceInFastThrow, -Dio.netty.noUnsafe=true, -Dio.netty.noKeySetOptimization=true, -Dio.netty.recycler.maxCapacityPerThread=0, -Dlog4j.shutdownHookEnabled=false, -Dlog4j2.disable.jmx=true, -Djava.io.tmpdir=/tmp/elasticsearch-1569447324873190533, -XX:+HeapDumpOnOutOfMemoryError, -XX:HeapDumpPath=/var/lib/elasticsearch, -XX:ErrorFile=/var/log/elasticsearch/hs_err_pid%p.log, -Xlog:gc*,gc+age=trace,safepoint:file=/var/log/elasticsearch/gc.log:utctime,pid,tags:filecount=32,filesize=64m, -Djava.locale.providers=COMPAT, -Dio.netty.allocator.type=unpooled, -Des.path.home=/usr/share/elasticsearch, -Des.path.conf=/etc/elasticsearch, -Des.distribution.flavor=default, -Des.distribution.type=deb, -Des.bundled_jdk=true]
[2019-05-10T07:42:54,265][INFO ][o.e.p.PluginsService     ] [elasticsearch-3] loaded module [aggs-matrix-stats]
[2019-05-10T07:42:54,266][INFO ][o.e.p.PluginsService     ] [elasticsearch-3] loaded module [analysis-common]
[...]
[2019-05-10T07:42:54,271][INFO ][o.e.p.PluginsService     ] [elasticsearch-3] loaded module [x-pack-sql]
[2019-05-10T07:42:54,272][INFO ][o.e.p.PluginsService     ] [elasticsearch-3] loaded module [x-pack-watcher]
[2019-05-10T07:42:54,272][INFO ][o.e.p.PluginsService     ] [elasticsearch-3] no plugins loaded
[2019-05-10T07:42:58,341][INFO ][o.e.x.s.a.s.FileRolesStore] [elasticsearch-3] parsed [0] roles from file [/etc/elasticsearch/roles.yml]
[2019-05-10T07:42:58,944][INFO ][o.e.x.m.p.l.CppLogMessageHandler] [elasticsearch-3] [controller/5609] [Main.cc@109] controller (64 bit): Version 7.0.1 (Build 6a88928693d862) Copyright (c) 2019 Elasticsearch BV
[2019-05-10T07:42:59,411][DEBUG][o.e.a.ActionModule       ] [elasticsearch-3] Using REST wrapper from plugin org.elasticsearch.xpack.security.Security
[2019-05-10T07:42:59,801][INFO ][o.e.d.DiscoveryModule    ] [elasticsearch-3] using discovery type [zen] and seed hosts providers [settings]
[2019-05-10T07:43:00,605][INFO ][o.e.n.Node               ] [elasticsearch-3] initialized
[2019-05-10T07:43:00,605][INFO ][o.e.n.Node               ] [elasticsearch-3] starting ...
[2019-05-10T07:43:00,726][INFO ][o.e.t.TransportService   ] [elasticsearch-3] publish_address {10.250.5.77:9300}, bound_addresses {[::]:9300}
[2019-05-10T07:43:00,736][INFO ][o.e.b.BootstrapChecks    ] [elasticsearch-3] bound or publishing to a non-loopback address, enforcing bootstrap checks
[2019-05-10T07:43:01,006][INFO ][o.e.c.s.MasterService    ] [elasticsearch-3] elected-as-master ([1] nodes joined)[{elasticsearch-3}{KPyK8TYiRCON_gGS85JTDA}{2a3ZSc2wRmizzIzLMhX7EA}{10.250.5.77}{10.250.5.77:9300}{ml.machine_memory=4147687424, xpack.installed=true, ml.max_open_jobs=20} elect leader, _BECOME_MASTER_TASK_, _FINISH_ELECTION_], term: 9, version: 37, reason: master node changed {previous [], current [{elasticsearch-3}{KPyK8TYiRCON_gGS85JTDA}{2a3ZSc2wRmizzIzLMhX7EA}{10.250.5.77}{10.250.5.77:9300}{ml.machine_memory=4147687424, xpack.installed=true, ml.max_open_jobs=20}]}
[2019-05-10T07:43:01,261][INFO ][o.e.c.s.ClusterApplierService] [elasticsearch-3] master node changed {previous [], current [{elasticsearch-3}{KPyK8TYiRCON_gGS85JTDA}{2a3ZSc2wRmizzIzLMhX7EA}{10.250.5.77}{10.250.5.77:9300}{ml.machine_memory=4147687424, xpack.installed=true, ml.max_open_jobs=20}]}, term: 9, version: 37, reason: Publication{term=9, version=37}
[2019-05-10T07:43:01,307][INFO ][o.e.h.AbstractHttpServerTransport] [elasticsearch-3] publish_address {10.250.5.77:9200}, bound_addresses {[::]:9200}
[2019-05-10T07:43:01,308][INFO ][o.e.n.Node               ] [elasticsearch-3] started
[2019-05-10T07:43:01,543][WARN ][o.e.x.s.a.s.m.NativeRoleMappingStore] [elasticsearch-3] Failed to clear cache for realms [[]]
[2019-05-10T07:43:01,584][INFO ][o.e.l.LicenseService     ] [elasticsearch-3] license [a81b8de5-0e18-4bfe-a229-ab3725a3bf00] mode [basic] - valid
[2019-05-10T07:43:01,593][INFO ][o.e.g.GatewayService     ] [elasticsearch-3] recovered [0] indices into cluster_state

I've put all three nodes in the config.

jerry

We'll need to compare the logs from all three nodes to see the problem, I think. You've only shared one of them.

Node 2:
[2019-05-10T08:01:24,549][INFO ][o.e.d.DiscoveryModule    ] [elasticsearch-2] using discovery type [zen] and seed hosts providers [settings]
[2019-05-10T08:01:25,475][INFO ][o.e.n.Node               ] [elasticsearch-2] initialized
[2019-05-10T08:01:25,475][INFO ][o.e.n.Node               ] [elasticsearch-2] starting ...
[2019-05-10T08:01:25,604][INFO ][o.e.t.TransportService   ] [elasticsearch-2] publish_address {10.250.5.69:9300}, bound_addresses {[::]:9300}
[2019-05-10T08:01:25,618][INFO ][o.e.b.BootstrapChecks    ] [elasticsearch-2] bound or publishing to a non-loopback address, enforcing bootstrap checks
[2019-05-10T08:01:25,779][INFO ][o.e.c.s.MasterService    ] [elasticsearch-2] elected-as-master ([1] nodes joined)[{elasticsearch-2}{1XvYWp1NSpC6UM4WvD4neg}{FeYy-K3_RKmmXplBj50z6w}{10.250.5.69}{10.250.5.69:9300}{ml.machine_memory=4147687424, xpack.installed=true, ml.max_open_jobs=20} elect leader, _BECOME_MASTER_TASK_, _FINISH_ELECTION_], term: 9, version: 37, reason: master node changed {previous [], current [{elasticsearch-2}{1XvYWp1NSpC6UM4WvD4neg}{FeYy-K3_RKmmXplBj50z6w}{10.250.5.69}{10.250.5.69:9300}{ml.machine_memory=4147687424, xpack.installed=true, ml.max_open_jobs=20}]}
[2019-05-10T08:01:25,945][INFO ][o.e.c.s.ClusterApplierService] [elasticsearch-2] master node changed {previous [], current [{elasticsearch-2}{1XvYWp1NSpC6UM4WvD4neg}{FeYy-K3_RKmmXplBj50z6w}{10.250.5.69}{10.250.5.69:9300}{ml.machine_memory=4147687424, xpack.installed=true, ml.max_open_jobs=20}]}, term: 9, version: 37, reason: Publication{term=9, version=37}
[2019-05-10T08:01:25,997][INFO ][o.e.h.AbstractHttpServerTransport] [elasticsearch-2] publish_address {10.250.5.69:9200}, bound_addresses {[::]:9200}
[2019-05-10T08:01:25,998][INFO ][o.e.n.Node               ] [elasticsearch-2] started
[2019-05-10T08:01:26,148][WARN ][o.e.x.s.a.s.m.NativeRoleMappingStore] [elasticsearch-2] Failed to clear cache for realms [[]]
[2019-05-10T08:01:26,182][INFO ][o.e.l.LicenseService     ] [elasticsearch-2] license [a8fc67f5-d63f-4ae6-b1bf-a4cf998d8708] mode [basic] - valid
[2019-05-10T08:01:26,194][INFO ][o.e.g.GatewayService     ] [elasticsearch-2] recovered [0] indices into cluster_state
[2019-05-10T08:09:04,906][WARN ][o.e.t.TcpTransport       ] [elasticsearch-2] exception caught on transport layer [Netty4TcpChannel{localAddress=/10.250.5.69:9300, remoteAddress=/10.250.5.77:32798}], closing connection
io.netty.handler.codec.DecoderException: java.io.StreamCorruptedException: invalid internal transport message format, got (ff,f4,ff,fd)
at io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:472) ~[netty-codec-4.1.32.Final.jar:4.1.32.Final]
at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:278) ~[netty-codec-4.1.32.Final.jar:4.1.32.Final]
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362) [netty-transport-4.1.32.Final.jar:4.1.32.Final]
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348) [netty-transport-4.1.32.Final.jar:4.1.32.Final]
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340) [netty-transport-4.1.32.Final.jar:4.1.32.Final]
at io.netty.handler.logging.LoggingHandler.channelRead(LoggingHandler.java:241) [netty-handler-4.1.32.Final.jar:4.1.32.Final]
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362) [netty-transport-4.1.32.Final.jar:4.1.32.Final]
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348) [netty-transport-4.1.32.Final.jar:4.1.32.Final]
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340) [netty-transport-4.1.32.Final.jar:4.1.32.Final]
at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1434) [netty-transport-4.1.32.Final.jar:4.1.32.Final]
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362) [netty-transport-4.1.32.Final.jar:4.1.32.Final]
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348) [netty-transport-4.1.32.Final.jar:4.1.32.Final]
at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:965) [netty-transport-4.1.32.Final.jar:4.1.32.Final]
at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:163) [netty-transport-4.1.32.Final.jar:4.1.32.Final]
at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:656) [netty-transport-4.1.32.Final.jar:4.1.32.Final]
at io.netty.channel.nio.NioEventLoop.processSelectedKeysPlain(NioEventLoop.java:556) [netty-transport-4.1.32.Final.jar:4.1.32.Final]
at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:510) [netty-transport-4.1.32.Final.jar:4.1.32.Final]
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:470) [netty-transport-4.1.32.Final.jar:4.1.32.Final]
at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:909) [netty-common-4.1.32.Final.jar:4.1.32.Final]
at java.lang.Thread.run(Thread.java:835) [?:?]
Caused by: java.io.StreamCorruptedException: invalid internal transport message format, got (ff,f4,ff,fd)
at org.elasticsearch.transport.TcpTransport.readHeaderBuffer(TcpTransport.java:841) ~[elasticsearch-7.0.1.jar:7.0.1]
at org.elasticsearch.transport.TcpTransport.readMessageLength(TcpTransport.java:827) ~[elasticsearch-7.0.1.jar:7.0.1]
at org.elasticsearch.transport.netty4.Netty4SizeHeaderFrameDecoder.decode(Netty4SizeHeaderFrameDecoder.java:40) ~[transport-netty4-client-7.0.1.jar:7.0.1]
at io.netty.handler.codec.ByteToMessageDecoder.decodeRemovalReentryProtection(ByteToMessageDecoder.java:502) ~[netty-codec-4.1.32.Final.jar:4.1.32.Final]
at io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:441) ~[netty-codec-4.1.32.Final.jar:4.1.32.Final]
... 19 more
[2019-05-10T08:09:04,919][WARN ][o.e.t.TcpTransport       ] [elasticsearch-2] exception caught on transport layer [Netty4TcpChannel{localAddress=/10.250.5.69:9300, remoteAddress=/10.250.5.77:32798}], closing connection
io.netty.handler.codec.DecoderException: java.io.StreamCorruptedException: invalid internal transport message format, got (ff,f4,ff,fd)
[...]

Node 1:

[2019-05-10T08:01:09,156][INFO ][o.e.n.Node               ] [elasticsearch-1] stopping ...
[2019-05-10T08:01:09,228][INFO ][o.e.x.w.WatcherService   ] [elasticsearch-1] stopping watch service, reason [shutdown initiated]
[2019-05-10T08:01:09,731][INFO ][o.e.x.m.p.l.CppLogMessageHandler] [elasticsearch-1] [controller/3792] [Main.cc@148] Ml controller exiting
[2019-05-10T08:01:09,733][INFO ][o.e.x.m.p.NativeController] [elasticsearch-1] Native controller process has stopped - no new native processes can be started
[2019-05-10T08:01:09,750][INFO ][o.e.n.Node               ] [elasticsearch-1] stopped
[2019-05-10T08:01:09,750][INFO ][o.e.n.Node               ] [elasticsearch-1] closing ...
[2019-05-10T08:01:09,770][INFO ][o.e.n.Node               ] [elasticsearch-1] closed
[2019-05-10T08:01:12,379][INFO ][o.e.e.NodeEnvironment    ] [elasticsearch-1] using [1] data paths, mounts [[/ (/dev/sda1)]], net usable_space [16.6gb], net total_space [19.6gb], types [ext4]
[2019-05-10T08:01:12,386][INFO ][o.e.e.NodeEnvironment    ] [elasticsearch-1] heap size [990.7mb], compressed ordinary object pointers [true]
[2019-05-10T08:01:12,389][INFO ][o.e.n.Node               ] [elasticsearch-1] node name [elasticsearch-1], node ID [UqAlu5F3TNa2xjEE_P7kXw]
[2019-05-10T08:01:12,390][INFO ][o.e.n.Node               ] [elasticsearch-1] version[7.0.1], pid[6119], build[default/deb/e4efcb5/2019-04-29T12:56:03.145736Z], OS[Linux/4.9.0-9-amd64/amd64], JVM[Oracle Corporation/OpenJDK 64-Bit Server VM/12.0.1/12.0.1+12]
[2019-05-10T08:01:12,391][INFO ][o.e.n.Node               ] [elasticsearch-1] JVM home [/usr/share/elasticsearch/jdk]
[2019-05-10T08:01:12,391][INFO ][o.e.n.Node               ] [elasticsearch-1] JVM arguments [-Xms1g, -Xmx1g, -XX:+UseConcMarkSweepGC, -XX:CMSInitiatingOccupancyFraction=75, -XX:+UseCMSInitiatingOccupancyOnly, -Des.networkaddress.cache.ttl=60, -Des.networkaddress.cache.negative.ttl=10, -XX:+AlwaysPreTouch, -Xss1m, -Djava.awt.headless=true, -Dfile.encoding=UTF-8, -Djna.nosys=true, -XX:-OmitStackTraceInFastThrow, -Dio.netty.noUnsafe=true, -Dio.netty.noKeySetOptimization=true, -Dio.netty.recycler.maxCapacityPerThread=0, -Dlog4j.shutdownHookEnabled=false, -Dlog4j2.disable.jmx=true, -Djava.io.tmpdir=/tmp/elasticsearch-5419054954159133286, -XX:+HeapDumpOnOutOfMemoryError, -XX:HeapDumpPath=/var/lib/elasticsearch, -XX:ErrorFile=/var/log/elasticsearch/hs_err_pid%p.log, -Xlog:gc*,gc+age=trace,safepoint:file=/var/log/elasticsearch/gc.log:utctime,pid,tags:filecount=32,filesize=64m, -Djava.locale.providers=COMPAT, -Dio.netty.allocator.type=unpooled, -Des.path.home=/usr/share/elasticsearch, -Des.path.conf=/etc/elasticsearch, -Des.distribution.flavor=default, -Des.distribution.type=deb, -Des.bundled_jdk=true]
[2019-05-10T08:01:13,815][INFO ][o.e.p.PluginsService     ] [elasticsearch-1] loaded module [aggs-matrix-stats]
[2019-05-10T08:01:13,815][INFO ][o.e.p.PluginsService     ] [elasticsearch-1] loaded module [analysis-common]
[2019-05-10T08:01:13,815][INFO ][o.e.p.PluginsService     ] [elasticsearch-1] loaded module [ingest-common]
[...]
[2019-05-10T08:01:13,820][INFO ][o.e.p.PluginsService     ] [elasticsearch-1] loaded module [x-pack-watcher]
[2019-05-10T08:01:13,821][INFO ][o.e.p.PluginsService     ] [elasticsearch-1] no plugins loaded
[2019-05-10T08:01:17,565][INFO ][o.e.x.s.a.s.FileRolesStore] [elasticsearch-1] parsed [0] roles from file [/etc/elasticsearch/roles.yml]
[2019-05-10T08:01:18,153][INFO ][o.e.x.m.p.l.CppLogMessageHandler] [elasticsearch-1] [controller/6204] [Main.cc@109] controller (64 bit): Version 7.0.1 (Build 6a88928693d862) Copyright (c) 2019 Elasticsearch BV
[2019-05-10T08:01:18,589][DEBUG][o.e.a.ActionModule       ] [elasticsearch-1] Using REST wrapper from plugin org.elasticsearch.xpack.security.Security
[2019-05-10T08:01:19,001][INFO ][o.e.d.DiscoveryModule    ] [elasticsearch-1] using discovery type [zen] and seed hosts providers [settings]
[2019-05-10T08:01:19,764][INFO ][o.e.n.Node               ] [elasticsearch-1] initialized
[2019-05-10T08:01:19,764][INFO ][o.e.n.Node               ] [elasticsearch-1] starting ...
[2019-05-10T08:01:19,878][INFO ][o.e.t.TransportService   ] [elasticsearch-1] publish_address {10.250.5.68:9300}, bound_addresses {[::]:9300}
[2019-05-10T08:01:19,886][INFO ][o.e.b.BootstrapChecks    ] [elasticsearch-1] bound or publishing to a non-loopback address, enforcing bootstrap checks
[2019-05-10T08:01:20,080][INFO ][o.e.c.s.MasterService    ] [elasticsearch-1] elected-as-master ([1] nodes joined)[{elasticsearch-1}{UqAlu5F3TNa2xjEE_P7kXw}{OlRJcBPMQv6hsR5unLliYA}{10.250.5.68}{10.250.5.68:9300}{ml.machine_memory=4147687424, xpack.installed=true, ml.max_open_jobs=20} elect leader, _BECOME_MASTER_TASK_, _FINISH_ELECTION_], term: 9, version: 37, reason: master node changed {previous [], current [{elasticsearch-1}{UqAlu5F3TNa2xjEE_P7kXw}{OlRJcBPMQv6hsR5unLliYA}{10.250.5.68}{10.250.5.68:9300}{ml.machine_memory=4147687424, xpack.installed=true, ml.max_open_jobs=20}]}
[2019-05-10T08:01:20,187][INFO ][o.e.c.s.ClusterApplierService] [elasticsearch-1] master node changed {previous [], current [{elasticsearch-1}{UqAlu5F3TNa2xjEE_P7kXw}{OlRJcBPMQv6hsR5unLliYA}{10.250.5.68}{10.250.5.68:9300}{ml.machine_memory=4147687424, xpack.installed=true, ml.max_open_jobs=20}]}, term: 9, version: 37, reason: Publication{term=9, version=37}
[2019-05-10T08:01:20,228][INFO ][o.e.h.AbstractHttpServerTransport] [elasticsearch-1] publish_address {10.250.5.68:9200}, bound_addresses {[::]:9200}
[2019-05-10T08:01:20,229][INFO ][o.e.n.Node               ] [elasticsearch-1] started
[2019-05-10T08:01:20,419][WARN ][o.e.x.s.a.s.m.NativeRoleMappingStore] [elasticsearch-1] Failed to clear cache for realms [[]]
[2019-05-10T08:01:20,476][INFO ][o.e.l.LicenseService     ] [elasticsearch-1] license [f9b11b0c-ac14-4852-9985-baf2a69f6c2d] mode [basic] - valid
[2019-05-10T08:01:20,486][INFO ][o.e.g.GatewayService     ] [elasticsearch-1] recovered [0] indices into cluster_state

Thanks. It looks like at some point in the past all three nodes were started as separate clusters (i.e. cluster.initial_master_nodes wasn't set). You can't merge clusters together once they've formed. Do these nodes hold any data yet? If not, I suggest you shut them down, wipe their data paths, and start them again with cluster.initial_master_nodes set, then they will form into a single cluster.
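A sketch of that reset procedure, assuming the deb/systemd install shown in your logs and the data path from your config (only do this if the nodes hold no data you need):

```shell
# Run on each of the three nodes. DESTRUCTIVE: wipes all local cluster state.
sudo systemctl stop elasticsearch

# Remove the per-node state so the node forgets the one-node cluster it
# formed earlier. (Path taken from your config; double-check before deleting.)
sudo rm -rf /var/lib/elasticsearch/nodes

# Make sure elasticsearch.yml now sets cluster.initial_master_nodes (and all
# three seed hosts) on every node, then start them again:
sudo systemctl start elasticsearch
```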

THX David!

Solved.

curl http://127.0.0.1:9200/_cat/nodes?v
ip          heap.percent ram.percent cpu load_1m load_5m load_15m node.role master name
10.250.5.77           26          44  26    0.62    0.20     0.12 mdi       -      elasticsearch-3
10.250.5.69           30          45  27    0.73    0.23     0.13 mdi       *      elasticsearch-2
10.250.5.68           30          44  21    0.47    0.15     0.07 mdi       -      elasticsearch-1

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.