Elasticsearch instance is going down periodically with no errors in logs

We have an application in the Liferay portal, and it uses an Elasticsearch service as a remote server.

Elasticsearch on a remote server is going down periodically even though memory and CPU are readily available.
The server currently has 32 GB of RAM, of which 8 GB is allocated to Elasticsearch.

Use-Case:
One of the worker nodes drops from the cluster and its service is killed; eventually all the ES servers go down without any error info.

We don't see any error info in the logs to troubleshoot; the only message is "[2022-10-20T09:04:59,117][DEBUG][o.e.a.s.TransportSearchAction] [elasticsearch-1] All shards failed for phase: [query]"

Please assist us with this.

Welcome to our community! :smiley:

Please share as many logs as you can. Please also format your code/logs/config using the </> button, or markdown-style backticks. It makes things easier to read, which helps us help you :slight_smile:
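If it helps, a quick way to trim the repetitive DEBUG noise before posting is a small grep pipeline. This is just a sketch: the heredoc sample below stands in for your real log file (typically something under /var/log/elasticsearch/), so adjust the path for your setup.

```shell
# Sketch: strip the repeated DEBUG spam and keep the lifecycle-relevant
# INFO/WARN/ERROR lines from an Elasticsearch log before sharing it.
# The sample file here is a stand-in for the real log path.
cat > /tmp/es_sample.log <<'EOF'
[2022-10-20T09:04:59,117][DEBUG][o.e.a.s.TransportSearchAction] [elasticsearch-1] All shards failed for phase: [query]
[2022-10-20T09:04:59,393][INFO ][o.e.d.z.ZenDiscovery] [elasticsearch-1] master_left [...], reason [shut_down]
[2022-10-20T09:05:05,212][INFO ][o.e.n.Node] [elasticsearch-1] stopping ...
EOF

# Drop every DEBUG line; what remains is usually what matters.
grep -vE '\[DEBUG\]' /tmp/es_sample.log
```

The same `grep -vE '\[DEBUG\]'` filter works on the full log file and makes the `master_left` / `stopping` sequence much easier to spot.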

Master Node Logs:

[2022-10-20T09:04:58,943][INFO ][o.e.c.s.ClusterApplierService] [elasticsearch-1] removed {{elasticsearch-3}{AWs8jYiPR7u4jq_7duCQ9Q}{7wNFTeFyT-2926ZLRizZBw}{*.*.*.*}{*.*.*.*:9305}{ml.machine_memory=33505222656, ml.max_open_jobs=20, xpack.installed=true, ml.enabled=true},}, reason: apply cluster state (from master [master {elasticsearch-2}{eRxN_IvaSfSQ82zeaKw0LQ}{YELeTWyVS3u1h-hWXB0FrA}{*.*.*.*}{*.*.*.*:9305}{ml.machine_memory=33505226752, ml.max_open_jobs=20, xpack.installed=true, ml.enabled=true} committed version [9]])
[2022-10-20T09:04:59,117][DEBUG][o.e.a.s.TransportSearchAction] [elasticsearch-1] All shards failed for phase: [query]
[2022-10-20T09:04:59,134][DEBUG][o.e.a.s.TransportSearchAction] [elasticsearch-1] All shards failed for phase: [query]
[2022-10-20T09:04:59,185][DEBUG][o.e.a.s.TransportSearchAction] [elasticsearch-1] All shards failed for phase: [query]
[2022-10-20T09:04:59,199][DEBUG][o.e.a.s.TransportSearchAction] [elasticsearch-1] All shards failed for phase: [query]
[2022-10-20T09:04:59,329][DEBUG][o.e.a.s.TransportSearchAction] [elasticsearch-1] All shards failed for phase: [query]
[2022-10-20T09:04:59,358][DEBUG][o.e.a.s.TransportSearchAction] [elasticsearch-1] All shards failed for phase: [query]
[2022-10-20T09:04:59,393][INFO ][o.e.d.z.ZenDiscovery     ] [elasticsearch-1] master_left [{elasticsearch-2}{eRxN_IvaSfSQ82zeaKw0LQ}{YELeTWyVS3u1h-hWXB0FrA}{*.*.*.*}{*.*.*.*:9305}{ml.machine_memory=33505226752, ml.max_open_jobs=20, xpack.installed=true, ml.enabled=true}], reason [shut_down]
[2022-10-20T09:04:59,393][DEBUG][o.e.a.s.TransportSearchAction] [elasticsearch-1] All shards failed for phase: [query]
[2022-10-20T09:04:59,393][WARN ][o.e.d.z.ZenDiscovery     ] [elasticsearch-1] master left (reason = shut_down), current nodes: nodes:
   {elasticsearch-1}{sZD70v3nSgK9y6MsfFvB2A}{6huHryUASNS8_z561jCgEA}{*.*.*.*}{*.*.*.*:9305}{ml.machine_memory=33505226752, xpack.installed=true, ml.max_open_jobs=20, ml.enabled=true}, local
   {elasticsearch-2}{eRxN_IvaSfSQ82zeaKw0LQ}{YELeTWyVS3u1h-hWXB0FrA}{*.*.*.*}{*.*.*.*:9305}{ml.machine_memory=33505226752, ml.max_open_jobs=20, xpack.installed=true, ml.enabled=true}, master

[2022-10-20T09:04:59,423][DEBUG][o.e.a.s.TransportSearchAction] [elasticsearch-1] All shards failed for phase: [query]
[2022-10-20T09:04:59,472][DEBUG][o.e.a.s.TransportSearchAction] [elasticsearch-1] All shards failed for phase: [query]
[2022-10-20T09:04:59,578][DEBUG][o.e.a.s.TransportSearchAction] [elasticsearch-1] All shards failed for phase: [query]
[2022-10-20T09:04:59,595][DEBUG][o.e.a.s.TransportSearchAction] [elasticsearch-1] All shards failed for phase: [query]
[2022-10-20T09:04:59,649][DEBUG][o.e.a.s.TransportSearchAction] [elasticsearch-1] All shards failed for phase: [query]
[2022-10-20T09:05:00,354][DEBUG][o.e.a.s.TransportSearchAction] [elasticsearch-1] All shards failed for phase: [query]
[2022-10-20T09:05:00,371][DEBUG][o.e.a.s.TransportSearchAction] [elasticsearch-1] All shards failed for phase: [query]
[2022-10-20T09:05:00,383][DEBUG][o.e.a.s.TransportSearchAction] [elasticsearch-1] All shards failed for phase: [query]
[2022-10-20T09:05:00,407][DEBUG][o.e.a.s.TransportSearchAction] [elasticsearch-1] All shards failed for phase: [query]
[2022-10-20T09:05:00,422][DEBUG][o.e.a.s.TransportSearchAction] [elasticsearch-1] All shards failed for phase: [query]
[2022-10-20T09:05:00,446][DEBUG][o.e.a.s.TransportSearchAction] [elasticsearch-1] All shards failed for phase: [query]
[2022-10-20T09:05:00,768][DEBUG][o.e.a.s.TransportSearchAction] [elasticsearch-1] All shards failed for phase: [query]
[2022-10-20T09:05:00,784][DEBUG][o.e.a.s.TransportSearchAction] [elasticsearch-1] All shards failed for phase: [query]
[2022-10-20T09:05:00,799][DEBUG][o.e.a.s.TransportSearchAction] [elasticsearch-1] All shards failed for phase: [query]
[2022-10-20T09:05:00,812][DEBUG][o.e.a.s.TransportSearchAction] [elasticsearch-1] All shards failed for phase: [query]
[2022-10-20T09:05:00,823][DEBUG][o.e.a.s.TransportSearchAction] [elasticsearch-1] All shards failed for phase: [query]
[2022-10-20T09:05:00,835][DEBUG][o.e.a.s.TransportSearchAction] [elasticsearch-1] All shards failed for phase: [query]
[2022-10-20T09:05:01,314][DEBUG][o.e.a.s.TransportSearchAction] [elasticsearch-1] All shards failed for phase: [query]
[2022-10-20T09:05:01,330][DEBUG][o.e.a.s.TransportSearchAction] [elasticsearch-1] All shards failed for phase: [query]
[2022-10-20T09:05:01,342][DEBUG][o.e.a.s.TransportSearchAction] [elasticsearch-1] All shards failed for phase: [query]
[2022-10-20T09:05:01,358][DEBUG][o.e.a.s.TransportSearchAction] [elasticsearch-1] All shards failed for phase: [query]
[2022-10-20T09:05:01,369][DEBUG][o.e.a.s.TransportSearchAction] [elasticsearch-1] All shards failed for phase: [query]
[2022-10-20T09:05:01,385][DEBUG][o.e.a.s.TransportSearchAction] [elasticsearch-1] All shards failed for phase: [query]
[2022-10-20T09:05:02,396][WARN ][o.e.d.z.ZenDiscovery     ] [elasticsearch-1] not enough master nodes discovered during pinging (found [[Candidate{node={elasticsearch-1}{sZD70v3nSgK9y6MsfFvB2A}{6huHryUASNS8_z561jCgEA}{*.*.*.*}{*.*.*.*:9305}{ml.machine_memory=33505226752, xpack.installed=true, ml.max_open_jobs=20, ml.enabled=true}, clusterStateVersion=10}]], but needed [2]), pinging again
[2022-10-20T09:05:02,412][WARN ][o.e.c.NodeConnectionsService] [elasticsearch-1] failed to connect to node {elasticsearch-2}{eRxN_IvaSfSQ82zeaKw0LQ}{YELeTWyVS3u1h-hWXB0FrA}{*.*.*.*}{*.*.*.*:9305}{ml.machine_memory=33505226752, ml.max_open_jobs=20, xpack.installed=true, ml.enabled=true} (tried [1] times)
org.elasticsearch.transport.ConnectTransportException: [elasticsearch-2][*.*.*.*:9305] connect_exception
        at org.elasticsearch.transport.TcpChannel.awaitConnected(TcpChannel.java:165) ~[elasticsearch-6.5.1.jar:6.5.1]
        at org.elasticsearch.transport.TcpTransport.openConnection(TcpTransport.java:454) ~[elasticsearch-6.5.1.jar:6.5.1]
        at org.elasticsearch.transport.TcpTransport.openConnection(TcpTransport.java:117) ~[elasticsearch-6.5.1.jar:6.5.1]
        at org.elasticsearch.transport.ConnectionManager.internalOpenConnection(ConnectionManager.java:237) ~[elasticsearch-6.5.1.jar:6.5.1]
        at org.elasticsearch.transport.ConnectionManager.connectToNode(ConnectionManager.java:119) ~[elasticsearch-6.5.1.jar:6.5.1]
        at org.elasticsearch.transport.TransportService.connectToNode(TransportService.java:369) ~[elasticsearch-6.5.1.jar:6.5.1]
        at org.elasticsearch.transport.TransportService.connectToNode(TransportService.java:356) ~[elasticsearch-6.5.1.jar:6.5.1]
        at org.elasticsearch.cluster.NodeConnectionsService.validateAndConnectIfNeeded(NodeConnectionsService.java:153) [elasticsearch-6.5.1.jar:6.5.1]
        at org.elasticsearch.cluster.NodeConnectionsService$ConnectionChecker.doRun(NodeConnectionsService.java:180) [elasticsearch-6.5.1.jar:6.5.1]
        at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:723) [elasticsearch-6.5.1.jar:6.5.1]
        at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) [elasticsearch-6.5.1.jar:6.5.1]
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:1.8.0_342]
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:1.8.0_342]
        at java.lang.Thread.run(Thread.java:750) [?:1.8.0_342]
Caused by: io.netty.channel.AbstractChannel$AnnotatedConnectException: Connection refused: *.*.*.*/*.*.*.*:9305
        at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) ~[?:?]
        at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:716) ~[?:?]
        at io.netty.channel.socket.nio.NioSocketChannel.doFinishConnect(NioSocketChannel.java:327) ~[?:?]
        at io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe.finishConnect(AbstractNioChannel.java:340) ~[?:?]
        at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:632) ~[?:?]
        at io.netty.channel.nio.NioEventLoop.processSelectedKeysPlain(NioEventLoop.java:544) ~[?:?]
        at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:498) ~[?:?]
        at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:458) ~[?:?]
        at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:897) ~[?:?]
        ... 1 more
Caused by: java.net.ConnectException: Connection refused
        at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) ~[?:?]
        at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:716) ~[?:?]
        at io.netty.channel.socket.nio.NioSocketChannel.doFinishConnect(NioSocketChannel.java:327) ~[?:?]
        at io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe.finishConnect(AbstractNioChannel.java:340) ~[?:?]
        at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:632) ~[?:?]
        at io.netty.channel.nio.NioEventLoop.processSelectedKeysPlain(NioEventLoop.java:544) ~[?:?]
        at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:498) ~[?:?]
        at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:458) ~[?:?]
        at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:897) ~[?:?]
        ... 1 more
[2022-10-20T09:05:05,211][INFO ][o.e.x.m.j.p.NativeController] [elasticsearch-1] Native controller process has stopped - no new native processes can be started
[2022-10-20T09:05:05,212][INFO ][o.e.n.Node               ] [elasticsearch-1] stopping ...
[2022-10-20T09:05:05,232][WARN ][o.e.d.z.ZenDiscovery     ] [elasticsearch-1] not enough master nodes discovered during pinging (found [[Candidate{node={elasticsearch-1}{sZD70v3nSgK9y6MsfFvB2A}{6huHryUASNS8_z561jCgEA}{*.*.*.*}{*.*.*.*:9305}{ml.machine_memory=33505226752, xpack.installed=true, ml.max_open_jobs=20, ml.enabled=true}, clusterStateVersion=10}]], but needed [2]), pinging again
[2022-10-20T09:05:05,232][INFO ][o.e.x.w.WatcherService   ] [elasticsearch-1] stopping watch service, reason [shutdown initiated]
[2022-10-20T09:05:05,518][INFO ][o.e.n.Node               ] [elasticsearch-1] stopped
[2022-10-20T09:05:05,518][INFO ][o.e.n.Node               ] [elasticsearch-1] closing ...
[2022-10-20T09:05:05,528][INFO ][o.e.n.Node               ] [elasticsearch-1] closed

Worker Node Logs:

[2022-10-20T01:44:00,000][INFO ][o.e.x.m.MlDailyMaintenanceService] [elasticsearch-2] triggering scheduled [ML] maintenance tasks
[2022-10-20T01:44:00,002][INFO ][o.e.x.m.a.TransportDeleteExpiredDataAction] [elasticsearch-2] Deleting expired data
[2022-10-20T01:44:00,004][INFO ][o.e.x.m.a.TransportDeleteExpiredDataAction] [elasticsearch-2] Completed deletion of expired data
[2022-10-20T01:44:00,004][INFO ][o.e.x.m.MlDailyMaintenanceService] [elasticsearch-2] Successfully completed [ML] maintenance tasks
[2022-10-20T09:04:58,939][INFO ][o.e.c.s.MasterService    ] [elasticsearch-2] zen-disco-node-left({elasticsearch-3}{AWs8jYiPR7u4jq_7duCQ9Q}{7wNFTeFyT-2926ZLRizZBw}{*.*.*.*}{*.*.*.*:9305}{ml.machine_memory=33505222656, ml.max_open_jobs=20, xpack.installed=true, ml.enabled=true}), reason(left)[{elasticsearch-3}{AWs8jYiPR7u4jq_7duCQ9Q}{7wNFTeFyT-2926ZLRizZBw}{*.*.*.*}{*.*.*.*:9305}{ml.machine_memory=33505222656, ml.max_open_jobs=20, xpack.installed=true, ml.enabled=true} left], reason: removed {{elasticsearch-3}{AWs8jYiPR7u4jq_7duCQ9Q}{7wNFTeFyT-2926ZLRizZBw}{*.*.*.*}{*.*.*.*:9305}{ml.machine_memory=33505222656, ml.max_open_jobs=20, xpack.installed=true, ml.enabled=true},}
[2022-10-20T09:04:58,958][INFO ][o.e.c.s.ClusterApplierService] [elasticsearch-2] removed {{elasticsearch-3}{AWs8jYiPR7u4jq_7duCQ9Q}{7wNFTeFyT-2926ZLRizZBw}{*.*.*.*}{*.*.*.*:9305}{ml.machine_memory=33505222656, ml.max_open_jobs=20, xpack.installed=true, ml.enabled=true},}, reason: apply cluster state (from master [master {elasticsearch-2}{eRxN_IvaSfSQ82zeaKw0LQ}{YELeTWyVS3u1h-hWXB0FrA}{*.*.*.*}{*.*.*.*:9305}{ml.machine_memory=33505226752, xpack.installed=true, ml.max_open_jobs=20, ml.enabled=true} committed version [9] source [zen-disco-node-left({elasticsearch-3}{AWs8jYiPR7u4jq_7duCQ9Q}{7wNFTeFyT-2926ZLRizZBw}{*.*.*.*}{*.*.*.*:9305}{ml.machine_memory=33505222656, ml.max_open_jobs=20, xpack.installed=true,ml.enabled=true}), reason(left)[{elasticsearch-3}{AWs8jYiPR7u4jq_7duCQ9Q}{7wNFTeFyT-2926ZLRizZBw}{*.*.*.*}{*.*.*.*:9305}{ml.machine_memory=33505222656, ml.max_open_jobs=20, xpack.installed=true, ml.enabled=true} left]]])
[2022-10-20T09:04:58,970][INFO ][o.e.c.r.DelayedAllocationService] [elasticsearch-2] scheduling reroute for delayed shards in [59.9s] (1 delayed shards)
[2022-10-20T09:04:59,030][DEBUG][o.e.a.s.TransportSearchAction] [elasticsearch-2] All shards failed for phase: [query]
[2022-10-20T09:04:59,162][DEBUG][o.e.a.s.TransportSearchAction] [elasticsearch-2] All shards failed for phase: [query]
[2022-10-20T09:04:59,294][DEBUG][o.e.a.s.TransportSearchAction] [elasticsearch-2] All shards failed for phase: [query]
[2022-10-20T09:04:59,375][DEBUG][o.e.a.s.TransportSearchAction] [elasticsearch-2] All shards failed for phase: [query]
[2022-10-20T09:04:59,378][INFO ][o.e.x.m.j.p.NativeController] [elasticsearch-2] Native controller process has stopped - no new native processes can be started
[2022-10-20T09:04:59,379][INFO ][o.e.n.Node               ] [elasticsearch-2] stopping ...
[2022-10-20T09:04:59,392][INFO ][o.e.x.w.WatcherService   ] [elasticsearch-2] stopping watch service, reason [shutdown initiated]
[2022-10-20T09:04:59,555][DEBUG][o.e.a.s.TransportSearchAction] [elasticsearch-2] All shards failed for phase: [query]
[2022-10-20T09:04:59,625][DEBUG][o.e.a.s.TransportSearchAction] [elasticsearch-2] All shards failed for phase: [query]
[2022-10-20T09:04:59,675][INFO ][o.e.n.Node               ] [elasticsearch-2] stopped
[2022-10-20T09:04:59,675][INFO ][o.e.n.Node               ] [elasticsearch-2] closing ...
[2022-10-20T09:04:59,690][INFO ][o.e.n.Node               ] [elasticsearch-2] closed

I have attached the logs. Let us know if any additional info is required.

It looks like something is asking Elasticsearch to stop. Can you check your OS logs to see if there is anything relevant?
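On a systemd-based host, the shutdown request usually shows up in the journal. Here is a sketch of what to look for; the heredoc sample is an invented stand-in for real journal output (the service name `elasticsearch` and the "Stopping Elasticsearch..." line are assumptions, not from your logs):

```shell
# Sketch: find out what asked Elasticsearch to stop. On the live host
# you would run something like:
#   journalctl -u elasticsearch --since "09:00" --until "09:10"
#   journalctl --list-boots        # did the whole machine restart?
# The sample below is a made-up stand-in for real journal output.
cat > /tmp/os_sample.log <<'EOF'
Oct 20 09:03:59 ip-10-116-97-156 systemd-logind[856]: Creating /run/nologin, blocking further logins...
Oct 20 09:04:58 ip-10-116-97-156 systemd[1]: Stopping Elasticsearch...
Oct 20 09:05:38 ip-10-116-97-156 kernel: Command line: BOOT_IMAGE=...
EOF

# "/run/nologin" followed by a fresh kernel "Command line:" entry is the
# classic signature of a planned reboot rather than a crash.
grep -E 'nologin|Stopping|Command line' /tmp/os_sample.log
```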

I see the activities below before the Elasticsearch service went down:

Oct 20 09:02:33 ip-10-116-97-156 dracut[101247]: dracut module 'nvmf' will not be installed, because command 'nvme' could not be found!
Oct 20 09:02:33 ip-10-116-97-156 dracut[101247]: dracut module 'biosdevname' will not be installed, because command 'biosdevname' could not be found!
Oct 20 09:02:34 ip-10-116-97-156 dracut[101247]: memstrack is available
Oct 20 09:02:34 ip-10-116-97-156 dracut[101247]: *** Including module: bash ***
Oct 20 09:02:34 ip-10-116-97-156 dracut[101247]: *** Including module: systemd ***
Oct 20 09:02:34 ip-10-116-97-156 dracut[101247]: *** Including module: systemd-initrd ***
Oct 20 09:02:34 ip-10-116-97-156 dracut[101247]: *** Including module: nss-softokn ***
Oct 20 09:02:34 ip-10-116-97-156 dracut[101247]: *** Including module: rngd ***
Oct 20 09:02:34 ip-10-116-97-156 dracut[101247]: *** Including module: i18n ***
Oct 20 09:02:34 ip-10-116-97-156 dracut[101247]: *** Including module: network-manager ***
Oct 20 09:02:34 ip-10-116-97-156 dracut[101247]: *** Including module: network ***
Oct 20 09:02:34 ip-10-116-97-156 dracut[101247]: *** Including module: ifcfg ***
Oct 20 09:02:34 ip-10-116-97-156 dracut[101247]: *** Including module: prefixdevname ***
Oct 20 09:02:34 ip-10-116-97-156 dracut[101247]: *** Including module: crypt ***
Oct 20 09:02:34 ip-10-116-97-156 dracut[101247]: *** Including module: dm ***
Oct 20 09:02:35 ip-10-116-97-156 dracut[101247]: Skipping udev rule: 64-device-mapper.rules
Oct 20 09:02:35 ip-10-116-97-156 dracut[101247]: Skipping udev rule: 60-persistent-storage-dm.rules
Oct 20 09:02:35 ip-10-116-97-156 dracut[101247]: Skipping udev rule: 55-dm.rules
Oct 20 09:02:35 ip-10-116-97-156 dracut[101247]: *** Including module: kernel-modules ***
Oct 20 09:02:39 ip-10-116-97-156 dracut[101247]: *** Including module: kernel-modules-extra ***
Oct 20 09:02:39 ip-10-116-97-156 dracut[101247]: *** Including module: kernel-network-modules ***
Oct 20 09:02:40 ip-10-116-97-156 dracut[101247]: *** Including module: qemu ***
Oct 20 09:02:40 ip-10-116-97-156 dracut[101247]: *** Including module: qemu-net ***
Oct 20 09:02:40 ip-10-116-97-156 dracut[101247]: *** Including module: lunmask ***
Oct 20 09:02:40 ip-10-116-97-156 dracut[101247]: *** Including module: resume ***
Oct 20 09:02:40 ip-10-116-97-156 dracut[101247]: *** Including module: rootfs-block ***
Oct 20 09:02:40 ip-10-116-97-156 dracut[101247]: *** Including module: terminfo ***
Oct 20 09:02:40 ip-10-116-97-156 dracut[101247]: *** Including module: udev-rules ***
Oct 20 09:02:40 ip-10-116-97-156 dracut[101247]: Skipping udev rule: 91-permissions.rules
Oct 20 09:02:40 ip-10-116-97-156 dracut[101247]: Skipping udev rule: 80-drivers-modprobe.rules
Oct 20 09:02:40 ip-10-116-97-156 dracut[101247]: *** Including module: dracut-systemd ***
Oct 20 09:02:40 ip-10-116-97-156 dracut[101247]: *** Including module: usrmount ***
Oct 20 09:02:40 ip-10-116-97-156 dracut[101247]: *** Including module: base ***
Oct 20 09:02:40 ip-10-116-97-156 dracut[101247]: *** Including module: fs-lib ***
Oct 20 09:02:40 ip-10-116-97-156 dracut[101247]: *** Including module: memstrack ***
Oct 20 09:02:40 ip-10-116-97-156 dracut[101247]: *** Including module: microcode_ctl-fw_dir_override ***
Oct 20 09:02:40 ip-10-116-97-156 dracut[101247]:  microcode_ctl module: mangling fw_dir
Oct 20 09:02:40 ip-10-116-97-156 dracut[101247]:    microcode_ctl: reset fw_dir to "/lib/firmware/updates /lib/firmware"
Oct 20 09:02:40 ip-10-116-97-156 dracut[101247]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel"...
Oct 20 09:02:40 ip-10-116-97-156 dracut[101247]:      microcode_ctl: intel: caveats check for kernel version "4.18.0-372.26.1.el8_6.x86_64" passed, adding "/usr/share/microcode_ctl/ucode_with_caveats/intel" to fw_dir variable
Oct 20 09:02:40 ip-10-116-97-156 dracut[101247]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-2d-07"...
Oct 20 09:02:41 ip-10-116-97-156 dracut[101247]:      microcode_ctl: intel-06-2d-07: caveats check for kernel version "4.18.0-372.26.1.el8_6.x86_64" passed, adding "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-2d-07" to fw_dir variable
Oct 20 09:02:41 ip-10-116-97-156 dracut[101247]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-4e-03"...
Oct 20 09:02:41 ip-10-116-97-156 dracut[101247]:    microcode_ctl: kernel version "4.18.0-372.26.1.el8_6.x86_64" failed early load check for "intel-06-4e-03", skipping
Oct 20 09:02:41 ip-10-116-97-156 dracut[101247]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-4f-01"...
Oct 20 09:02:41 ip-10-116-97-156 dracut[101247]:    microcode_ctl: kernel version "4.18.0-372.26.1.el8_6.x86_64" failed early load check for "intel-06-4f-01", skipping
Oct 20 09:02:41 ip-10-116-97-156 dracut[101247]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-55-04"...
Oct 20 09:02:41 ip-10-116-97-156 dracut[101247]:      microcode_ctl: intel-06-55-04: caveats check for kernel version "4.18.0-372.26.1.el8_6.x86_64" passed, adding "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-55-04" to fw_dir variable
Oct 20 09:02:41 ip-10-116-97-156 dracut[101247]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-5e-03"...
Oct 20 09:02:41 ip-10-116-97-156 dracut[101247]:      microcode_ctl: intel-06-5e-03: caveats check for kernel version "4.18.0-372.26.1.el8_6.x86_64" passed, adding "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-5e-03" to fw_dir variable
Oct 20 09:02:41 ip-10-116-97-156 dracut[101247]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8c-01"...
Oct 20 09:02:41 ip-10-116-97-156 dracut[101247]:      microcode_ctl: intel-06-8c-01: caveats check for kernel version "4.18.0-372.26.1.el8_6.x86_64" passed, adding "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8c-01" to fw_dir variable
Oct 20 09:02:41 ip-10-116-97-156 dracut[101247]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8e-9e-0x-0xca"...
Oct 20 09:02:41 ip-10-116-97-156 dracut[101247]:    microcode_ctl: kernel version "4.18.0-372.26.1.el8_6.x86_64" failed early load check for "intel-06-8e-9e-0x-0xca", skipping
Oct 20 09:02:41 ip-10-116-97-156 dracut[101247]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8e-9e-0x-dell"...
Oct 20 09:02:41 ip-10-116-97-156 dracut[101247]:      microcode_ctl: intel-06-8e-9e-0x-dell: caveats check for kernel version "4.18.0-372.26.1.el8_6.x86_64" passed, adding "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8e-9e-0x-dell" to fw_dir variable
Oct 20 09:02:41 ip-10-116-97-156 dracut[101247]:    microcode_ctl: final fw_dir: "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8e-9e-0x-dell /usr/share/microcode_ctl/ucode_with_caveats/intel-06-8c-01 /usr/share/microcode_ctl/ucode_with_caveats/intel-06-5e-03 /usr/share/microcode_ctl/ucode_with_caveats/intel-06-55-04 /usr/share/microcode_ctl/ucode_with_caveats/intel-06-2d-07 /usr/share/microcode_ctl/ucode_with_caveats/intel /lib/firmware/updates /lib/firmware"
Oct 20 09:02:41 ip-10-116-97-156 dracut[101247]: *** Including module: shutdown ***
Oct 20 09:02:41 ip-10-116-97-156 dracut[101247]: *** Including modules done ***
Oct 20 09:02:41 ip-10-116-97-156 dracut[101247]: *** Installing kernel module dependencies ***
Oct 20 09:02:43 ip-10-116-97-156 dracut[101247]: *** Installing kernel module dependencies done ***
Oct 20 09:02:43 ip-10-116-97-156 dracut[101247]: *** Resolving executable dependencies ***
Oct 20 09:02:44 ip-10-116-97-156 dracut[101247]: *** Resolving executable dependencies done ***
Oct 20 09:02:44 ip-10-116-97-156 dracut[101247]: *** Hardlinking files ***
Oct 20 09:02:45 ip-10-116-97-156 dracut[101247]: *** Hardlinking files done ***
Oct 20 09:02:45 ip-10-116-97-156 dracut[101247]: Could not find 'strip'. Not stripping the initramfs.
Oct 20 09:02:45 ip-10-116-97-156 dracut[101247]: *** Generating early-microcode cpio image ***
Oct 20 09:02:45 ip-10-116-97-156 dracut[101247]: *** Constructing AuthenticAMD.bin ***
Oct 20 09:02:45 ip-10-116-97-156 dracut[101247]: *** Constructing GenuineIntel.bin ***
Oct 20 09:02:45 ip-10-116-97-156 dracut[101247]: *** Constructing GenuineIntel.bin ***
Oct 20 09:02:45 ip-10-116-97-156 dracut[101247]: *** Constructing GenuineIntel.bin ***
Oct 20 09:02:45 ip-10-116-97-156 dracut[101247]: *** Constructing GenuineIntel.bin ***
Oct 20 09:02:45 ip-10-116-97-156 dracut[101247]: *** Constructing GenuineIntel.bin ***
Oct 20 09:02:45 ip-10-116-97-156 dracut[101247]: *** Constructing GenuineIntel.bin ***
Oct 20 09:02:45 ip-10-116-97-156 dracut[101247]: *** Constructing GenuineIntel.bin ***
Oct 20 09:02:45 ip-10-116-97-156 dracut[101247]: *** Store current command line parameters ***
Oct 20 09:02:45 ip-10-116-97-156 dracut[101247]: *** Creating image file '/boot/initramfs-4.18.0-372.26.1.el8_6.x86_64.img' ***
Oct 20 09:03:03 ip-10-116-97-156 dracut[101247]: *** Creating initramfs image file '/boot/initramfs-4.18.0-372.26.1.el8_6.x86_64.img' done ***
Oct 20 09:03:16 ip-10-116-97-156 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Oct 20 09:03:16 ip-10-116-97-156 systemd[1]: Starting man-db-cache-update.service...
Oct 20 09:03:17 ip-10-116-97-156 systemd[1]: Reloading.
Oct 20 09:03:17 ip-10-116-97-156 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Oct 20 09:03:17 ip-10-116-97-156 systemd[1]: man-db-cache-update.service: Succeeded.
Oct 20 09:03:17 ip-10-116-97-156 systemd[1]: Started man-db-cache-update.service.
Oct 20 09:03:17 ip-10-116-97-156 systemd[1]: run-r7cf904c4adc74f959f583a95f9333572.service: Succeeded.
Oct 20 09:03:17 ip-10-116-97-156 systemd[1]: run-r8a59fa3e7ca64daf97321e48c6788041.service: Succeeded.
Oct 20 09:03:27 ip-10-116-97-156 auditd[724]: Audit daemon rotating log files
Oct 20 09:03:28 ip-10-116-97-156 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Oct 20 09:03:28 ip-10-116-97-156 systemd[1]: Starting man-db-cache-update.service...
Oct 20 09:03:28 ip-10-116-97-156 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Oct 20 09:03:28 ip-10-116-97-156 systemd[1]: man-db-cache-update.service: Succeeded.
Oct 20 09:03:28 ip-10-116-97-156 systemd[1]: Started man-db-cache-update.service.
Oct 20 09:03:28 ip-10-116-97-156 systemd[1]: run-r1e5982645d244af58af0a4c4b49203e6.service: Succeeded.
Oct 20 09:03:28 ip-10-116-97-156 systemd[1]: run-re42e7d86cb1b4348a5e8bc146a6c82e5.service: Succeeded.
Oct 20 09:03:59 ip-10-116-97-156 systemd-logind[856]: Creating /run/nologin, blocking further logins...
Oct 20 09:05:38 ip-10-116-97-156 kernel: Command line: BOOT_IMAGE=(hd0,gpt2)/boot/vmlinuz-4.18.0-372.26.1.el8_6.x86_64 root=UUID=949779ce-46aa-434e-8eb0-852514a5d69e ro console=ttyS0,115200n8 console=tty0 net.ifnames=0 rd.blacklist=nouveau nvme_core.io_timeout=4294967295 crashkernel=auto
Oct 20 09:05:38 ip-10-116-97-156 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Oct 20 09:05:38 ip-10-116-97-156 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Oct 20 09:05:38 ip-10-116-97-156 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Oct 20 09:05:38 ip-10-116-97-156 kernel: x86/fpu: xstate_offset[2]:  576, xstate_sizes[2]:  256
Oct 20 09:05:38 ip-10-116-97-156 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Oct 20 09:05:38 ip-10-116-97-156 kernel: signal: max sigframe size: 1776

It looks like your instance may have been rebooted. What is its uptime?

Where are you running it? Are you running in AWS? Is it something like EC2 spot instances?
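On the host itself, something like this would confirm a reboot (a sketch of common Linux commands; `last -x reboot` assumes /var/log/wtmp is intact):

```shell
# Sketch: confirm whether the host rebooted around the incident window.
# On the host you would check boot time directly, e.g.:
#   uptime -s          # timestamp of the last boot
#   last -x reboot     # reboot history (reads /var/log/wtmp)
#   who -b             # last system boot time
# If uptime is shorter than the time since the outage, the VM restarted.
# /proc/uptime reports seconds since boot; converting such a value to
# days (432000 s used here as an example input):
echo 432000 | awk '{printf "%.1f days\n", $1 / 86400}'
# → 5.0 days
```

If the boot timestamp lines up with the 09:05:38 kernel `Command line:` entry in your OS logs, the Elasticsearch shutdown was simply part of a machine reboot.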

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.