I upgraded my Elasticsearch cluster from 7.17 to 8.9.
I ran `yum update elasticsearch` on each node, and the package upgrade completed successfully.
But when I try to start Elasticsearch, the cluster doesn't stay up: each node starts and joins the cluster, then stops about a minute later, as the logs below show.
How can I solve this?
Environment and spec:
- CentOS 7
- 3-node cluster
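The logs below are trimmed down to the relevant entries. To pull out just the WARN/ERROR lines on each node I used a filter like the following (the log path assumes the RPM default location for a cluster named `cluster-test`; adjust it to your cluster name):

```shell
# Show only WARN/ERROR entries from the Elasticsearch server log.
# The level field is bracketed, e.g. "[WARN ]" or "[ERROR]".
grep -E '\[(WARN|ERROR) ?\]' /var/log/elasticsearch/cluster-test.log
```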
Log of node1:
[2023-08-22T15:49:18,591][INFO ][o.e.n.Node ] [node1] JVM home [/usr/share/elasticsearch/jdk], using bundled JDK [true]
[2023-08-22T15:49:18,592][INFO ][o.e.n.Node ] [node1] JVM arguments [-Des.networkaddress.cache.ttl=60, -Des.networkaddress.cache.negative.ttl=10, -Djava.security.manager=allow, -XX:+AlwaysPreTouch, -Xss1m, -Djava.awt.headless=true, -Dfile.encoding=UTF-8, -Djna.nosys=true, -XX:-OmitStackTraceInFastThrow, -Dio.netty.noUnsafe=true, -Dio.netty.noKeySetOptimization=true, -Dio.netty.recycler.maxCapacityPerThread=0, -Dlog4j.shutdownHookEnabled=false, -Dlog4j2.disable.jmx=true, -Dlog4j2.formatMsgNoLookups=true, -Djava.locale.providers=SPI,COMPAT, --add-opens=java.base/java.io=org.elasticsearch.preallocate, -Xms2g, -Xmx2g, -XX:+UseG1GC, -Djava.io.tmpdir=/tmp/elasticsearch-2391729659593485720, -XX:+HeapDumpOnOutOfMemoryError, -XX:+ExitOnOutOfMemoryError, -XX:HeapDumpPath=/var/lib/elasticsearch, -XX:ErrorFile=/var/log/elasticsearch/hs_err_pid%p.log, -Xlog:gc*,gc+age=trace,safepoint:file=/var/log/elasticsearch/gc.log:utctime,pid,tags:filecount=32,filesize=64m, -XX:MaxDirectMemorySize=1073741824, -XX:G1HeapRegionSize=4m, -XX:InitiatingHeapOccupancyPercent=30, -XX:G1ReservePercent=15, -Des.distribution.type=rpm, --module-path=/usr/share/elasticsearch/lib, --add-modules=jdk.net, --add-modules=org.elasticsearch.preallocate, -Djdk.module.main=org.elasticsearch.server]
[2023-08-22T15:49:21,631][INFO ][o.e.p.PluginsService ] [node1] loaded module [repository-url]
[2023-08-22T15:49:21,632][INFO ][o.e.p.PluginsService ] [node1] loaded module [rest-root]
...
[2023-08-22T15:49:21,662][INFO ][o.e.p.PluginsService ] [node1] loaded module [lang-expression]
[2023-08-22T15:49:21,662][INFO ][o.e.p.PluginsService ] [node1] loaded module [x-pack-eql]
[2023-08-22T15:49:24,340][INFO ][o.e.e.NodeEnvironment ] [node1] using [1] data paths, mounts [[/ (/dev/mapper/centos-root)]], net usable_space [161.1gb], net total_space [175.4gb], types [xfs]
[2023-08-22T15:49:24,341][INFO ][o.e.e.NodeEnvironment ] [node1] heap size [2gb], compressed ordinary object pointers [true]
[2023-08-22T15:49:24,448][INFO ][o.e.n.Node ] [node1] node name [node1], node ID [sohFTzxjR6ibHtsiYmbixg], cluster name [cluster-test], roles [master, transform, data_content, data_hot, ingest, remote_cluster_client]
[2023-08-22T15:49:26,845][INFO ][o.e.x.m.p.l.CppLogMessageHandler] [node1] [controller/35317] [Main.cc@123] controller (64 bit): Version 8.9.1 (Build a285a437dd4bb2) Copyright (c) 2023 Elasticsearch BV
[2023-08-22T15:49:27,062][INFO ][o.e.x.s.Security ] [node1] Security is enabled
[2023-08-22T15:49:27,573][INFO ][o.e.x.s.a.s.FileRolesStore] [node1] parsed [0] roles from file [/etc/elasticsearch/roles.yml]
[2023-08-22T15:49:28,380][INFO ][o.e.x.p.ProfilingPlugin ] [node1] Profiling is enabled
[2023-08-22T15:49:28,395][INFO ][o.e.x.p.ProfilingPlugin ] [node1] profiling index templates will not be installed or reinstalled
[2023-08-22T15:49:29,150][INFO ][o.e.t.n.NettyAllocator ] [node1] creating NettyAllocator with the following configs: [name=elasticsearch_configured, chunk_size=1mb, suggested_max_allocation_size=1mb, factors={es.unsafe.use_netty_default_chunk_and_page_size=false, g1gc_enabled=true, g1gc_region_size=4mb}]
[2023-08-22T15:49:29,179][INFO ][o.e.i.r.RecoverySettings ] [node1] using rate limit [40mb] with [default=40mb, read=0b, write=0b, max=0b]
[2023-08-22T15:49:29,227][INFO ][o.e.d.DiscoveryModule ] [node1] using discovery type [multi-node] and seed hosts providers [settings]
[2023-08-22T15:49:30,603][INFO ][o.e.n.Node ] [node1] initialized
[2023-08-22T15:49:30,604][INFO ][o.e.n.Node ] [node1] starting ...
[2023-08-22T15:49:30,628][INFO ][o.e.x.s.c.f.PersistentCache] [node1] persistent cache index loaded
[2023-08-22T15:49:30,630][INFO ][o.e.x.d.l.DeprecationIndexingComponent] [node1] deprecation component started
[2023-08-22T15:49:30,751][INFO ][o.e.t.TransportService ] [node1] publish_address {node1_ip:9300}, bound_addresses {node1_ip:9300}
[2023-08-22T15:49:34,633][INFO ][o.e.b.BootstrapChecks ] [node1] bound or publishing to a non-loopback address, enforcing bootstrap checks
[2023-08-22T15:49:34,655][WARN ][o.e.c.c.ClusterBootstrapService] [node1] this node is locked into cluster UUID [It0lymS4RFOpOaQLAmiucA] but [cluster.initial_master_nodes] is set to [node1, node2, node3]; remove this setting to avoid possible data loss caused by subsequent cluster bootstrap attempts; for further information see https://www.elastic.co/guide/en/elasticsearch/reference/8.9/important-settings.html#initial_master_nodes
[2023-08-22T15:49:44,242][INFO ][o.e.c.s.ClusterApplierService] [node1] master node changed {previous [], current [{node2}{d3Bbx0CMTVytkUUaRdAoSw}{MqXZxhc-SlGvwUGnjaE2zg}{node2}{node2_ip}{node2_ip:9300}{imrstw}{8.9.1}]}, added {{node2}{d3Bbx0CMTVytkUUaRdAoSw}{MqXZxhc-SlGvwUGnjaE2zg}{node2}{node2_ip}{node2_ip:9300}{imrstw}{8.9.1}}, term: 63, version: 109734, reason: ApplyCommitRequest{term=63, version=109734, sourceNode={node2}{d3Bbx0CMTVytkUUaRdAoSw}{MqXZxhc-SlGvwUGnjaE2zg}{node2}{node2_ip}{node2_ip:9300}{imrstw}{8.9.1}{xpack.installed=true}}
[2023-08-22T15:49:44,262][INFO ][o.e.h.AbstractHttpServerTransport] [node1] publish_address {node1_ip:9200}, bound_addresses {node1_ip:9200}
[2023-08-22T15:49:44,263][INFO ][o.e.n.Node ] [node1] started {node1}{sohFTzxjR6ibHtsiYmbixg}{mzLZJz3eREOTDyaNT2m8sQ}{node1}{node1_ip}{node1_ip:9300}{himrst}{8.9.1}{xpack.installed=true}
[2023-08-22T15:49:44,564][INFO ][o.e.c.s.ClusterSettings ] [node1] updating [cluster.routing.allocation.enable] from [all] to [primaries]
[2023-08-22T15:49:44,565][INFO ][o.e.c.s.ClusterSettings ] [node1] updating [cluster.routing.allocation.enable] from [all] to [primaries]
[2023-08-22T15:49:45,580][INFO ][o.e.l.ClusterStateLicenseService] [node1] license [958695bf-aef4-43ed-adf7-bedd35c829d0] mode [basic] - valid
[2023-08-22T15:49:45,582][INFO ][o.e.x.s.a.Realms ] [node1] license mode is [basic], currently licensed security realms are [reserved/reserved,file/default_file,native/default_native]
[2023-08-22T15:49:46,994][INFO ][o.e.x.s.a.TokenService ] [node1] refresh keys
[2023-08-22T15:49:47,135][INFO ][o.e.x.s.a.TokenService ] [node1] refreshed keys
[2023-08-22T15:49:48,530][INFO ][o.e.c.s.ClusterApplierService] [node1] added {{node3}{7xYnr4deS6al7HHDYcdSFA}{YvYAfjVbROK0fDvO2kYg-g}{node3}{node3_ip}{node3_ip:9300}{cimrst}{8.9.1}}, term: 63, version: 109748, reason: ApplyCommitRequest{term=63, version=109748, sourceNode={node2}{d3Bbx0CMTVytkUUaRdAoSw}{MqXZxhc-SlGvwUGnjaE2zg}{node2}{node2_ip}{node2_ip:9300}{imrstw}{8.9.1}{xpack.installed=true}}
[2023-08-22T15:50:00,353][INFO ][o.e.x.s.a.RealmsAuthenticator] [node1] Authentication of [elastic] was terminated by realm [reserved] - failed to authenticate user [elastic]
[2023-08-22T15:50:29,898][INFO ][o.e.n.Node ] [node1] stopping ...
[2023-08-22T15:50:29,901][INFO ][o.e.x.w.WatcherService ] [node1] stopping watch service, reason [shutdown initiated]
[2023-08-22T15:50:29,902][INFO ][o.e.x.w.WatcherLifeCycleService] [node1] watcher has stopped and shutdown
[2023-08-22T15:50:29,902][INFO ][o.e.x.m.p.l.CppLogMessageHandler] [node1] [controller/35317] [Main.cc@176] ML controller exiting
[2023-08-22T15:50:29,902][INFO ][o.e.x.m.p.NativeController] [node1] Native controller process has stopped - no new native processes can be started
[2023-08-22T15:50:30,377][INFO ][o.e.c.c.Coordinator ] [node1] master node [{node2}{d3Bbx0CMTVytkUUaRdAoSw}{MqXZxhc-SlGvwUGnjaE2zg}{node2}{node2_ip}{node2_ip:9300}{imrstw}{8.9.1}] disconnected, restarting discovery
[2023-08-22T15:50:30,463][INFO ][o.e.n.Node ] [node1] stopped
[2023-08-22T15:50:30,464][INFO ][o.e.n.Node ] [node1] closing ...
[2023-08-22T15:50:30,477][INFO ][o.e.n.Node ] [node1] closed
Log of node2:
[2023-08-22T15:49:25,053][INFO ][o.e.n.Node ] [node2] JVM arguments [-Des.networkaddress.cache.ttl=60, -Des.networkaddress.cache.negative.ttl=10, -Djava.security.manager=allow, -XX:+AlwaysPreTouch, -Xss1m, -Djava.awt.headless=true, -Dfile.encoding=UTF-8, -Djna.nosys=true, -XX:-OmitStackTraceInFastThrow, -Dio.netty.noUnsafe=true, -Dio.netty.noKeySetOptimization=true, -Dio.netty.recycler.maxCapacityPerThread=0, -Dlog4j.shutdownHookEnabled=false, -Dlog4j2.disable.jmx=true, -Dlog4j2.formatMsgNoLookups=true, -Djava.locale.providers=SPI,COMPAT, --add-opens=java.base/java.io=org.elasticsearch.preallocate, -Xms2g, -Xmx2g, -XX:+UseG1GC, -Djava.io.tmpdir=/tmp/elasticsearch-9903358777357106779, -XX:+HeapDumpOnOutOfMemoryError, -XX:+ExitOnOutOfMemoryError, -XX:HeapDumpPath=/var/lib/elasticsearch, -XX:ErrorFile=/var/log/elasticsearch/hs_err_pid%p.log, -Xlog:gc*,gc+age=trace,safepoint:file=/var/log/elasticsearch/gc.log:utctime,pid,tags:filecount=32,filesize=64m, -XX:MaxDirectMemorySize=1073741824, -XX:G1HeapRegionSize=4m, -XX:InitiatingHeapOccupancyPercent=30, -XX:G1ReservePercent=15, -Des.distribution.type=rpm, --module-path=/usr/share/elasticsearch/lib, --add-modules=jdk.net, --add-modules=org.elasticsearch.preallocate, -Djdk.module.main=org.elasticsearch.server]
[2023-08-22T15:49:28,364][INFO ][o.e.p.PluginsService ] [node2] loaded module [repository-url]
[2023-08-22T15:49:28,365][INFO ][o.e.p.PluginsService ] [node2] loaded module [rest-root]
...
[2023-08-22T15:49:28,388][INFO ][o.e.p.PluginsService ] [node2] loaded module [vector-tile]
[2023-08-22T15:49:28,388][INFO ][o.e.p.PluginsService ] [node2] loaded module [lang-expression]
[2023-08-22T15:49:28,389][INFO ][o.e.p.PluginsService ] [node2] loaded module [x-pack-eql]
[2023-08-22T15:49:31,323][INFO ][o.e.e.NodeEnvironment ] [node2] using [1] data paths, mounts [[/ (rootfs)]], net usable_space [25.4gb], net total_space [175.4gb], types [rootfs]
[2023-08-22T15:49:31,324][INFO ][o.e.e.NodeEnvironment ] [node2] heap size [2gb], compressed ordinary object pointers [true]
[2023-08-22T15:49:31,534][INFO ][o.e.n.Node ] [node2] node name [node2], node ID [d3Bbx0CMTVytkUUaRdAoSw], cluster name [cluster-test], roles [master, transform, data_warm, data_content, ingest, remote_cluster_client]
[2023-08-22T15:49:34,503][INFO ][o.e.x.m.p.l.CppLogMessageHandler] [node2] [controller/1060] [Main.cc@123] controller (64 bit): Version 8.9.1 (Build a285a437dd4bb2) Copyright (c) 2023 Elasticsearch BV
[2023-08-22T15:49:34,760][INFO ][o.e.x.s.Security ] [node2] Security is enabled
[2023-08-22T15:49:35,286][INFO ][o.e.x.s.a.s.FileRolesStore] [node2] parsed [0] roles from file [/etc/elasticsearch/roles.yml]
[2023-08-22T15:49:36,075][INFO ][o.e.x.p.ProfilingPlugin ] [node2] Profiling is enabled
[2023-08-22T15:49:36,091][INFO ][o.e.x.p.ProfilingPlugin ] [node2] profiling index templates will not be installed or reinstalled
[2023-08-22T15:49:36,914][INFO ][o.e.t.n.NettyAllocator ] [node2] creating NettyAllocator with the following configs: [name=elasticsearch_configured, chunk_size=1mb, suggested_max_allocation_size=1mb, factors={es.unsafe.use_netty_default_chunk_and_page_size=false, g1gc_enabled=true, g1gc_region_size=4mb}]
[2023-08-22T15:49:36,947][INFO ][o.e.i.r.RecoverySettings ] [node2] using rate limit [40mb] with [default=40mb, read=0b, write=0b, max=0b]
[2023-08-22T15:49:37,001][INFO ][o.e.d.DiscoveryModule ] [node2] using discovery type [multi-node] and seed hosts providers [settings]
[2023-08-22T15:49:38,493][INFO ][o.e.n.Node ] [node2] initialized
[2023-08-22T15:49:38,495][INFO ][o.e.n.Node ] [node2] starting ...
[2023-08-22T15:49:38,529][INFO ][o.e.x.s.c.f.PersistentCache] [node2] persistent cache index loaded
[2023-08-22T15:49:38,532][INFO ][o.e.x.d.l.DeprecationIndexingComponent] [node2] deprecation component started
[2023-08-22T15:49:38,640][INFO ][o.e.t.TransportService ] [node2] publish_address {node2_ip:9300}, bound_addresses {node2_ip:9300}
[2023-08-22T15:49:43,213][INFO ][o.e.b.BootstrapChecks ] [node2] bound or publishing to a non-loopback address, enforcing bootstrap checks
[2023-08-22T15:49:43,237][WARN ][o.e.c.c.ClusterBootstrapService] [node2] this node is locked into cluster UUID [It0lymS4RFOpOaQLAmiucA] but [cluster.initial_master_nodes] is set to [node1, node2, node3]; remove this setting to avoid possible data loss caused by subsequent cluster bootstrap attempts; for further information see https://www.elastic.co/guide/en/elasticsearch/reference/8.9/important-settings.html#initial_master_nodes
[2023-08-22T15:49:43,768][INFO ][o.e.c.s.MasterService ] [node2] elected-as-master ([2] nodes joined in term 63)[_FINISH_ELECTION_, {node1}{sohFTzxjR6ibHtsiYmbixg}{mzLZJz3eREOTDyaNT2m8sQ}{node1}{node1_ip}{node1_ip:9300}{himrst}{8.9.1} completing election, {node2}{d3Bbx0CMTVytkUUaRdAoSw}{MqXZxhc-SlGvwUGnjaE2zg}{node2}{node2_ip}{node2_ip:9300}{imrstw}{8.9.1} completing election], term: 63, version: 109734, delta: master node changed {previous [], current [{node2}{d3Bbx0CMTVytkUUaRdAoSw}{MqXZxhc-SlGvwUGnjaE2zg}{node2}{node2_ip}{node2_ip:9300}{imrstw}{8.9.1}]}, added {{node1}{sohFTzxjR6ibHtsiYmbixg}{mzLZJz3eREOTDyaNT2m8sQ}{node1}{node1_ip}{node1_ip:9300}{himrst}{8.9.1}}
[2023-08-22T15:49:44,261][INFO ][o.e.c.s.ClusterApplierService] [node2] master node changed {previous [], current [{node2}{d3Bbx0CMTVytkUUaRdAoSw}{MqXZxhc-SlGvwUGnjaE2zg}{node2}{node2_ip}{node2_ip:9300}{imrstw}{8.9.1}]}, added {{node1}{sohFTzxjR6ibHtsiYmbixg}{mzLZJz3eREOTDyaNT2m8sQ}{node1}{node1_ip}{node1_ip:9300}{himrst}{8.9.1}}, term: 63, version: 109734, reason: Publication{term=63, version=109734}
[2023-08-22T15:49:44,303][INFO ][o.e.c.f.AbstractFileWatchingService] [node2] starting file watcher ...
[2023-08-22T15:49:44,310][INFO ][o.e.c.f.AbstractFileWatchingService] [node2] file settings service up and running [tid=58]
[2023-08-22T15:49:44,316][INFO ][o.e.c.c.NodeJoinExecutor ] [node2] node-join: [{node1}{sohFTzxjR6ibHtsiYmbixg}{mzLZJz3eREOTDyaNT2m8sQ}{node1}{node1_ip}{node1_ip:9300}{himrst}{8.9.1}] with reason [completing election]
[2023-08-22T15:49:44,316][INFO ][o.e.h.AbstractHttpServerTransport] [node2] publish_address {node2_ip:9200}, bound_addresses {node2_ip:9200}
[2023-08-22T15:49:44,317][INFO ][o.e.c.c.NodeJoinExecutor ] [node2] node-join: [{node2}{d3Bbx0CMTVytkUUaRdAoSw}{MqXZxhc-SlGvwUGnjaE2zg}{node2}{node2_ip}{node2_ip:9300}{imrstw}{8.9.1}] with reason [completing election]
[2023-08-22T15:49:44,317][INFO ][o.e.n.Node ] [node2] started {node2}{d3Bbx0CMTVytkUUaRdAoSw}{MqXZxhc-SlGvwUGnjaE2zg}{node2}{node2_ip}{node2_ip:9300}{imrstw}{8.9.1}{xpack.installed=true}
[2023-08-22T15:49:45,592][INFO ][o.e.c.s.ClusterSettings ] [node2] updating [cluster.routing.allocation.enable] from [all] to [primaries]
[2023-08-22T15:49:45,592][INFO ][o.e.c.s.ClusterSettings ] [node2] updating [cluster.routing.allocation.enable] from [all] to [primaries]
[2023-08-22T15:49:46,630][INFO ][o.e.l.ClusterStateLicenseService] [node2] license [958695bf-aef4-43ed-adf7-bedd35c829d0] mode [basic] - valid
[2023-08-22T15:49:46,632][INFO ][o.e.x.s.a.Realms ] [node2] license mode is [basic], currently licensed security realms are [reserved/reserved,file/default_file,native/default_native]
[2023-08-22T15:49:46,636][INFO ][o.e.g.GatewayService ] [node2] recovered [240] indices into cluster_state
[2023-08-22T15:49:47,876][INFO ][o.e.h.n.s.HealthNodeTaskExecutor] [node2] Node [{node2}{d3Bbx0CMTVytkUUaRdAoSw}] is selected as the current health node.
[2023-08-22T15:49:48,453][INFO ][o.e.c.s.MasterService ] [node2] node-join[{node3}{7xYnr4deS6al7HHDYcdSFA}{YvYAfjVbROK0fDvO2kYg-g}{node3}{node3_ip}{node3_ip:9300}{cimrst}{8.9.1} joining], term: 63, version: 109748, delta: added {{node3}{7xYnr4deS6al7HHDYcdSFA}{YvYAfjVbROK0fDvO2kYg-g}{node3}{node3_ip}{node3_ip:9300}{cimrst}{8.9.1}}
[2023-08-22T15:49:50,201][INFO ][o.e.c.s.ClusterApplierService] [node2] added {{node3}{7xYnr4deS6al7HHDYcdSFA}{YvYAfjVbROK0fDvO2kYg-g}{node3}{node3_ip}{node3_ip:9300}{cimrst}{8.9.1}}, term: 63, version: 109748, reason: Publication{term=63, version=109748}
[2023-08-22T15:49:50,215][INFO ][o.e.c.c.NodeJoinExecutor ] [node2] node-join: [{node3}{7xYnr4deS6al7HHDYcdSFA}{YvYAfjVbROK0fDvO2kYg-g}{node3}{node3_ip}{node3_ip:9300}{cimrst}{8.9.1}] with reason [joining]
[2023-08-22T15:49:50,244][INFO ][o.e.c.r.a.DiskThresholdMonitor] [node2] low disk watermark [85%] exceeded on [d3Bbx0CMTVytkUUaRdAoSw][node2][/data/elasticsearch] free: 25.4gb[14.4%], replicas will not be assigned to this node
[2023-08-22T15:50:00,904][INFO ][o.e.c.r.a.AllocationService] [node2] current.health="YELLOW" message="Cluster health status changed from [RED] to [YELLOW] (reason: [shards started [[parse-error-weblog][0]]])." previous.health="RED" reason="shards started [[parse-error-weblog][0]]"
[2023-08-22T15:50:12,295][INFO ][o.e.x.s.a.RealmsAuthenticator] [node2] Authentication of [elastic] was terminated by realm [reserved] - failed to authenticate user [elastic]
[2023-08-22T15:50:30,379][INFO ][o.e.t.ClusterConnectionManager] [node2] transport connection to [{node1}{sohFTzxjR6ibHtsiYmbixg}{mzLZJz3eREOTDyaNT2m8sQ}{node1}{node1_ip}{node1_ip:9300}{himrst}{8.9.1}] closed by remote
[2023-08-22T15:50:30,387][INFO ][o.e.c.r.a.AllocationService] [node2] current.health="RED" message="Cluster health status changed from [YELLOW] to [RED] (reason: [{node1}{sohFTzxjR6ibHtsiYmbixg}{mzLZJz3eREOTDyaNT2m8sQ}{node1}{node1_ip}{node1_ip:9300}{himrst}{8.9.1} reason: disconnected])." previous.health="YELLOW" reason="{node1}{sohFTzxjR6ibHtsiYmbixg}{mzLZJz3eREOTDyaNT2m8sQ}{node1}{node1_ip}{node1_ip:9300}{himrst}{8.9.1} reason: disconnected"
[2023-08-22T15:50:30,401][INFO ][o.e.c.s.MasterService ] [node2] node-left[{node1}{sohFTzxjR6ibHtsiYmbixg}{mzLZJz3eREOTDyaNT2m8sQ}{node1}{node1_ip}{node1_ip:9300}{himrst}{8.9.1} reason: disconnected], term: 63, version: 109902, delta: removed {{node1}{sohFTzxjR6ibHtsiYmbixg}{mzLZJz3eREOTDyaNT2m8sQ}{node1}{node1_ip}{node1_ip:9300}{himrst}{8.9.1}}
[2023-08-22T15:50:30,498][INFO ][o.e.c.s.ClusterApplierService] [node2] removed {{node1}{sohFTzxjR6ibHtsiYmbixg}{mzLZJz3eREOTDyaNT2m8sQ}{node1}{node1_ip}{node1_ip:9300}{himrst}{8.9.1}}, term: 63, version: 109902, reason: Publication{term=63, version=109902}
[2023-08-22T15:50:30,511][INFO ][o.e.c.r.DelayedAllocationService] [node2] scheduling reroute for delayed shards in [59.8s] (39 delayed shards)
[2023-08-22T15:50:30,514][INFO ][o.e.c.c.NodeLeftExecutor ] [node2] node-left: [{node1}{sohFTzxjR6ibHtsiYmbixg}{mzLZJz3eREOTDyaNT2m8sQ}{node1}{node1_ip}{node1_ip:9300}{himrst}{8.9.1}] with reason [disconnected]
[2023-08-22T15:50:35,639][INFO ][o.e.n.Node ] [node2] stopping ...
[2023-08-22T15:50:35,640][INFO ][o.e.c.f.AbstractFileWatchingService] [node2] shutting down watcher thread
[2023-08-22T15:50:35,641][INFO ][o.e.c.f.AbstractFileWatchingService] [node2] watcher service stopped
[2023-08-22T15:50:35,643][INFO ][o.e.x.w.WatcherService ] [node2] stopping watch service, reason [shutdown initiated]
[2023-08-22T15:50:35,644][INFO ][o.e.x.w.WatcherLifeCycleService] [node2] watcher has stopped and shutdown
[2023-08-22T15:50:35,645][INFO ][o.e.x.m.p.l.CppLogMessageHandler] [node2] [controller/1060] [Main.cc@176] ML controller exiting
[2023-08-22T15:50:35,646][INFO ][o.e.x.m.p.NativeController] [node2] Native controller process has stopped - no new native processes can be started
[2023-08-22T15:50:36,146][INFO ][o.e.n.Node ] [node2] stopped
[2023-08-22T15:50:36,147][INFO ][o.e.n.Node ] [node2] closing ...
[2023-08-22T15:50:36,166][INFO ][o.e.n.Node ] [node2] closed
Log of node3:
[2023-08-22T15:49:29,632][INFO ][o.e.n.Node ] [node3] JVM arguments [-Des.networkaddress.cache.ttl=60, -Des.networkaddress.cache.negative.ttl=10, -Djava.security.manager=allow, -XX:+AlwaysPreTouch, -Xss1m, -Djava.awt.headless=true, -Dfile.encoding=UTF-8, -Djna.nosys=true, -XX:-OmitStackTraceInFastThrow, -Dio.netty.noUnsafe=true, -Dio.netty.noKeySetOptimization=true, -Dio.netty.recycler.maxCapacityPerThread=0, -Dlog4j.shutdownHookEnabled=false, -Dlog4j2.disable.jmx=true, -Dlog4j2.formatMsgNoLookups=true, -Djava.locale.providers=SPI,COMPAT, --add-opens=java.base/java.io=org.elasticsearch.preallocate, -Xms2g, -Xmx2g, -XX:+UseG1GC, -Djava.io.tmpdir=/tmp/elasticsearch-17777182924067032764, -XX:+HeapDumpOnOutOfMemoryError, -XX:+ExitOnOutOfMemoryError, -XX:HeapDumpPath=/var/lib/elasticsearch, -XX:ErrorFile=/var/log/elasticsearch/hs_err_pid%p.log, -Xlog:gc*,gc+age=trace,safepoint:file=/var/log/elasticsearch/gc.log:utctime,pid,tags:filecount=32,filesize=64m, -XX:MaxDirectMemorySize=1073741824, -XX:G1HeapRegionSize=4m, -XX:InitiatingHeapOccupancyPercent=30, -XX:G1ReservePercent=15, -Des.distribution.type=rpm, --module-path=/usr/share/elasticsearch/lib, --add-modules=jdk.net, --add-modules=org.elasticsearch.preallocate, -Djdk.module.main=org.elasticsearch.server]
[2023-08-22T15:49:33,800][INFO ][o.e.p.PluginsService ] [node3] loaded module [repository-url]
...
[2023-08-22T15:49:33,818][INFO ][o.e.p.PluginsService ] [node3] loaded module [vector-tile]
[2023-08-22T15:49:33,818][INFO ][o.e.p.PluginsService ] [node3] loaded module [lang-expression]
[2023-08-22T15:49:33,818][INFO ][o.e.p.PluginsService ] [node3] loaded module [x-pack-eql]
[2023-08-22T15:49:36,559][INFO ][o.e.e.NodeEnvironment ] [node3] using [1] data paths, mounts [[/ (rootfs)]], net usable_space [44.5gb], net total_space [175.4gb], types [rootfs]
[2023-08-22T15:49:36,559][INFO ][o.e.e.NodeEnvironment ] [node3] heap size [2gb], compressed ordinary object pointers [true]
[2023-08-22T15:49:36,744][INFO ][o.e.n.Node ] [node3] node name [node3], node ID [7xYnr4deS6al7HHDYcdSFA], cluster name [cluster-test], roles [transform, data_content, ingest, data_cold, remote_cluster_client, master]
[2023-08-22T15:49:39,333][INFO ][o.e.x.m.p.l.CppLogMessageHandler] [node3] [controller/47047] [Main.cc@123] controller (64 bit): Version 8.9.1 (Build a285a437dd4bb2) Copyright (c) 2023 Elasticsearch BV
[2023-08-22T15:49:39,577][INFO ][o.e.x.s.Security ] [node3] Security is enabled
[2023-08-22T15:49:40,208][INFO ][o.e.x.s.a.s.FileRolesStore] [node3] parsed [0] roles from file [/etc/elasticsearch/roles.yml]
[2023-08-22T15:49:41,045][INFO ][o.e.x.p.ProfilingPlugin ] [node3] Profiling is enabled
[2023-08-22T15:49:41,060][INFO ][o.e.x.p.ProfilingPlugin ] [node3] profiling index templates will not be installed or reinstalled
[2023-08-22T15:49:41,847][INFO ][o.e.t.n.NettyAllocator ] [node3] creating NettyAllocator with the following configs: [name=elasticsearch_configured, chunk_size=1mb, suggested_max_allocation_size=1mb, factors={es.unsafe.use_netty_default_chunk_and_page_size=false, g1gc_enabled=true, g1gc_region_size=4mb}]
[2023-08-22T15:49:41,878][INFO ][o.e.i.r.RecoverySettings ] [node3] using rate limit [40mb] with [default=40mb, read=0b, write=0b, max=0b]
[2023-08-22T15:49:41,931][INFO ][o.e.d.DiscoveryModule ] [node3] using discovery type [multi-node] and seed hosts providers [settings]
[2023-08-22T15:49:43,435][INFO ][o.e.n.Node ] [node3] initialized
[2023-08-22T15:49:43,436][INFO ][o.e.n.Node ] [node3] starting ...
[2023-08-22T15:49:43,475][INFO ][o.e.x.s.c.f.PersistentCache] [node3] persistent cache index loaded
[2023-08-22T15:49:43,477][INFO ][o.e.x.d.l.DeprecationIndexingComponent] [node3] deprecation component started
[2023-08-22T15:49:43,580][INFO ][o.e.t.TransportService ] [node3] publish_address {node3_ip:9300}, bound_addresses {node3_ip:9300}
[2023-08-22T15:49:47,478][INFO ][o.e.b.BootstrapChecks ] [node3] bound or publishing to a non-loopback address, enforcing bootstrap checks
[2023-08-22T15:49:47,514][WARN ][o.e.c.c.ClusterBootstrapService] [node3] this node is locked into cluster UUID [It0lymS4RFOpOaQLAmiucA] but [cluster.initial_master_nodes] is set to [node1, node2, node3]; remove this setting to avoid possible data loss caused by subsequent cluster bootstrap attempts; for further information see https://www.elastic.co/guide/en/elasticsearch/reference/8.9/important-settings.html#initial_master_nodes
[2023-08-22T15:49:48,857][INFO ][o.e.c.s.ClusterApplierService] [node3] master node changed {previous [], current [{node2}{d3Bbx0CMTVytkUUaRdAoSw}{MqXZxhc-SlGvwUGnjaE2zg}{node2}{node2_ip}{node2_ip:9300}{imrstw}{8.9.1}]}, added {{node1}{sohFTzxjR6ibHtsiYmbixg}{mzLZJz3eREOTDyaNT2m8sQ}{node1}{node1_ip}{node1_ip:9300}{himrst}{8.9.1}, {node2}{d3Bbx0CMTVytkUUaRdAoSw}{MqXZxhc-SlGvwUGnjaE2zg}{node2}{node2_ip}{node2_ip:9300}{imrstw}{8.9.1}}, term: 63, version: 109748, reason: ApplyCommitRequest{term=63, version=109748, sourceNode={node2}{d3Bbx0CMTVytkUUaRdAoSw}{MqXZxhc-SlGvwUGnjaE2zg}{node2}{node2_ip}{node2_ip:9300}{imrstw}{8.9.1}{xpack.installed=true}}
[2023-08-22T15:49:48,919][INFO ][o.e.c.s.ClusterSettings ] [node3] updating [cluster.routing.allocation.enable] from [all] to [primaries]
[2023-08-22T15:49:48,920][INFO ][o.e.c.s.ClusterSettings ] [node3] updating [cluster.routing.allocation.enable] from [all] to [primaries]
[2023-08-22T15:49:49,999][INFO ][o.e.x.s.a.TokenService ] [node3] refresh keys
[2023-08-22T15:49:50,142][INFO ][o.e.x.s.a.TokenService ] [node3] refreshed keys
[2023-08-22T15:49:50,190][INFO ][o.e.l.ClusterStateLicenseService] [node3] license [958695bf-aef4-43ed-adf7-bedd35c829d0] mode [basic] - valid
[2023-08-22T15:49:50,191][INFO ][o.e.x.s.a.Realms ] [node3] license mode is [basic], currently licensed security realms are [reserved/reserved,file/default_file,native/default_native]
[2023-08-22T15:49:50,202][INFO ][o.e.h.AbstractHttpServerTransport] [node3] publish_address {node3_ip:9200}, bound_addresses {node3_ip:9200}
[2023-08-22T15:49:50,203][INFO ][o.e.n.Node ] [node3] started {node3}{7xYnr4deS6al7HHDYcdSFA}{YvYAfjVbROK0fDvO2kYg-g}{node3}{node3_ip}{node3_ip:9300}{cimrst}{8.9.1}{xpack.installed=true}
[2023-08-22T15:49:55,938][INFO ][o.e.x.s.a.RealmsAuthenticator] [node3] Authentication of [elastic] was terminated by realm [reserved] - failed to authenticate user [elastic]
[2023-08-22T15:49:57,548][INFO ][o.e.m.j.JvmGcMonitorService] [node3] [gc][14] overhead, spent [258ms] collecting in the last [1s]
[2023-08-22T15:50:30,378][INFO ][o.e.t.ClusterConnectionManager] [node3] transport connection to [{node1}{sohFTzxjR6ibHtsiYmbixg}{mzLZJz3eREOTDyaNT2m8sQ}{node1}{node1_ip}{node1_ip:9300}{himrst}{8.9.1}] closed by remote
[2023-08-22T15:50:30,489][INFO ][o.e.c.s.ClusterApplierService] [node3] removed {{node1}{sohFTzxjR6ibHtsiYmbixg}{mzLZJz3eREOTDyaNT2m8sQ}{node1}{node1_ip}{node1_ip:9300}{himrst}{8.9.1}}, term: 63, version: 109902, reason: ApplyCommitRequest{term=63, version=109902, sourceNode={node2}{d3Bbx0CMTVytkUUaRdAoSw}{MqXZxhc-SlGvwUGnjaE2zg}{node2}{node2_ip}{node2_ip:9300}{imrstw}{8.9.1}{xpack.installed=true}}
[2023-08-22T15:50:36,069][INFO ][o.e.t.ClusterConnectionManager] [node3] transport connection to [{node2}{d3Bbx0CMTVytkUUaRdAoSw}{MqXZxhc-SlGvwUGnjaE2zg}{node2}{node2_ip}{node2_ip:9300}{imrstw}{8.9.1}] closed by remote
[2023-08-22T15:50:36,070][INFO ][o.e.c.c.Coordinator ] [node3] master node [{node2}{d3Bbx0CMTVytkUUaRdAoSw}{MqXZxhc-SlGvwUGnjaE2zg}{node2}{node2_ip}{node2_ip:9300}{imrstw}{8.9.1}] disconnected, restarting discovery
[2023-08-22T15:50:36,072][INFO ][o.e.c.s.ClusterApplierService] [node3] master node changed {previous [{node2}{d3Bbx0CMTVytkUUaRdAoSw}{MqXZxhc-SlGvwUGnjaE2zg}{node2}{node2_ip}{node2_ip:9300}{imrstw}{8.9.1}], current []}, term: 63, version: 109905, reason: becoming candidate: onLeaderFailure
[2023-08-22T15:50:36,078][WARN ][o.e.c.NodeConnectionsService] [node3] failed to connect to {node2}{d3Bbx0CMTVytkUUaRdAoSw}{MqXZxhc-SlGvwUGnjaE2zg}{node2}{node2_ip}{node2_ip:9300}{imrstw}{8.9.1}{xpack.installed=true} (tried [1] times)
org.elasticsearch.transport.ConnectTransportException: [node2][node2_ip:9300] connect_exception
at org.elasticsearch.transport.TcpTransport$ChannelsConnectedListener.onFailure(TcpTransport.java:1144) ~[elasticsearch-8.9.1.jar:?]
at org.elasticsearch.action.support.SubscribableListener$FailureResult.complete(SubscribableListener.java:285) ~[elasticsearch-8.9.1.jar:?]
at org.elasticsearch.action.support.SubscribableListener.tryComplete(SubscribableListener.java:197) ~[elasticsearch-8.9.1.jar:?]
at org.elasticsearch.action.support.SubscribableListener.setResult(SubscribableListener.java:222) ~[elasticsearch-8.9.1.jar:?]
at org.elasticsearch.action.support.SubscribableListener.onFailure(SubscribableListener.java:141) ~[elasticsearch-8.9.1.jar:?]
at org.elasticsearch.transport.netty4.Netty4TcpChannel.lambda$addListener$0(Netty4TcpChannel.java:61) ~[?:?]
at io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:590) ~[?:?]
at io.netty.util.concurrent.DefaultPromise.notifyListeners0(DefaultPromise.java:583) ~[?:?]
at io.netty.util.concurrent.DefaultPromise.notifyListenersNow(DefaultPromise.java:559) ~[?:?]
at io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:492) ~[?:?]
at io.netty.util.concurrent.DefaultPromise.setValue0(DefaultPromise.java:636) ~[?:?]
at io.netty.util.concurrent.DefaultPromise.setFailure0(DefaultPromise.java:629) ~[?:?]
at io.netty.util.concurrent.DefaultPromise.tryFailure(DefaultPromise.java:118) ~[?:?]
at org.elasticsearch.xpack.core.security.transport.netty4.SecurityNetty4Transport$ClientSslHandlerInitializer.lambda$connect$1(SecurityNetty4Transport.java:289) ~[?:?]
at io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:590) ~[?:?]
at io.netty.util.concurrent.DefaultPromise.notifyListeners0(DefaultPromise.java:583) ~[?:?]
at io.netty.util.concurrent.DefaultPromise.notifyListenersNow(DefaultPromise.java:559) ~[?:?]
at io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:492) ~[?:?]
at io.netty.util.concurrent.DefaultPromise.setValue0(DefaultPromise.java:636) ~[?:?]
at io.netty.util.concurrent.DefaultPromise.setFailure0(DefaultPromise.java:629) ~[?:?]
at io.netty.util.concurrent.DefaultPromise.tryFailure(DefaultPromise.java:118) ~[?:?]
at io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe.fulfillConnectPromise(AbstractNioChannel.java:321) ~[?:?]
at io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe.finishConnect(AbstractNioChannel.java:337) ~[?:?]
at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:776) ~[?:?]
at io.netty.channel.nio.NioEventLoop.processSelectedKeysPlain(NioEventLoop.java:689) ~[?:?]
at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:652) ~[?:?]
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) ~[?:?]
at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) ~[?:?]
at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) ~[?:?]
at java.lang.Thread.run(Thread.java:1623) ~[?:?]
Caused by: org.elasticsearch.common.util.concurrent.UncategorizedExecutionException: Failed execution
at org.elasticsearch.action.support.SubscribableListener.wrapAsExecutionException(SubscribableListener.java:178) ~[elasticsearch-8.9.1.jar:?]
at org.elasticsearch.common.util.concurrent.ListenableFuture.wrapException(ListenableFuture.java:38) ~[elasticsearch-8.9.1.jar:?]
at org.elasticsearch.common.util.concurrent.ListenableFuture.wrapException(ListenableFuture.java:27) ~[elasticsearch-8.9.1.jar:?]
... 26 more
Caused by: java.util.concurrent.ExecutionException: io.netty.channel.AbstractChannel$AnnotatedConnectException: Connection refused: node2_ip/node2_ip:9300
at org.elasticsearch.action.support.SubscribableListener.wrapAsExecutionException(SubscribableListener.java:178) ~[elasticsearch-8.9.1.jar:?]
at org.elasticsearch.common.util.concurrent.ListenableFuture.wrapException(ListenableFuture.java:38) ~[elasticsearch-8.9.1.jar:?]
at org.elasticsearch.common.util.concurrent.ListenableFuture.wrapException(ListenableFuture.java:27) ~[elasticsearch-8.9.1.jar:?]
... 26 more
Caused by: io.netty.channel.AbstractChannel$AnnotatedConnectException: Connection refused: node2_ip/node2_ip:9300
Caused by: java.net.ConnectException: Connection refused
at sun.nio.ch.Net.pollConnect(Native Method) ~[?:?]
at sun.nio.ch.Net.pollConnectNow(Net.java:673) ~[?:?]
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:973) ~[?:?]
at io.netty.channel.socket.nio.NioSocketChannel.doFinishConnect(NioSocketChannel.java:337) ~[?:?]
at io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe.finishConnect(AbstractNioChannel.java:334) ~[?:?]
... 7 more
[2023-08-22T15:50:40,247][INFO ][o.e.x.s.a.RealmsAuthenticator] [node3] Authentication of [elastic] was terminated by realm [reserved] - failed to authenticate user [elastic]
[2023-08-22T15:50:40,331][INFO ][o.e.n.Node ] [node3] stopping ...
[2023-08-22T15:50:40,334][INFO ][o.e.x.w.WatcherService ] [node3] stopping watch service, reason [shutdown initiated]
[2023-08-22T15:50:40,335][INFO ][o.e.x.m.p.l.CppLogMessageHandler] [node3] [controller/47047] [Main.cc@176] ML controller exiting
[2023-08-22T15:50:40,337][INFO ][o.e.x.w.WatcherLifeCycleService] [node3] watcher has stopped and shutdown
[2023-08-22T15:50:40,337][INFO ][o.e.x.m.p.NativeController] [node3] Native controller process has stopped - no new native processes can be started
[2023-08-22T15:50:40,936][INFO ][o.e.n.Node ] [node3] stopped
[2023-08-22T15:50:40,936][INFO ][o.e.n.Node ] [node3] closing ...
[2023-08-22T15:50:40,948][INFO ][o.e.n.Node ] [node3] closed