Node shut down by itself. How to find out why?

Hello,

As the title suggests, our test node shut itself down. We came back from the weekend and the service was stopped. I searched through the logs but could not find a reason; the log just ends abruptly. I checked whether the system had rebooted, but no: the last reboot was a week ago, while the shutdown happened three days ago.
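In case it matters, I checked the reboot time with the usual Ubuntu commands, along these lines:

last -x reboot shutdown | head   # reboot/shutdown history from wtmp
uptime -s                        # time of the most recent boot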

Log
[2022-08-06T06:09:45,655][WARN ][o.e.m.j.JvmGcMonitorService] [node-1] [gc][young][413256][608] duration [1.5s], collections [1]/[2.7s], total [1.5s]/[16.7s], memory [4.9gb]->[336mb]/[7.6gb], all_pools {[young] [4.5gb]->[0b]/[0b]}{[old] [329.3mb]->[329.6mb]/[7.6gb]}{[survivor] [7.6mb]->[6.3mb]/[0b]}
[2022-08-06T06:09:46,159][WARN ][o.e.m.j.JvmGcMonitorService] [node-1] [gc][413256] overhead, spent [1.5s] collecting in the last [2.7s]
[2022-08-06T06:10:43,846][WARN ][o.e.m.j.JvmGcMonitorService] [node-1] [gc][young][413311][609] duration [1s], collections [1]/[1.6s], total [1s]/[17.7s], memory [712mb]->[336.7mb]/[7.6gb], all_pools {[young] [376mb]->[0b]/[0b]}{[old] [329.6mb]->[329.8mb]/[7.6gb]}{[survivor] [6.3mb]->[6.8mb]/[0b]}
[2022-08-06T06:10:45,505][WARN ][o.e.m.j.JvmGcMonitorService] [node-1] [gc][413311] overhead, spent [1s] collecting in the last [1.6s]
[2022-08-06T06:11:23,573][WARN ][o.e.m.j.JvmGcMonitorService] [node-1] [gc][young][413347][610] duration [1s], collections [1]/[2.4s], total [1s]/[18.8s], memory [700.7mb]->[337.1mb]/[7.6gb], all_pools {[young] [364mb]->[0b]/[0b]}{[old] [329.8mb]->[330.1mb]/[7.6gb]}{[survivor] [6.8mb]->[7mb]/[0b]}
[2022-08-06T06:11:24,360][INFO ][o.e.m.j.JvmGcMonitorService] [node-1] [gc][413347] overhead, spent [1s] collecting in the last [2.4s]
[2022-08-06T06:12:27,034][WARN ][o.e.m.j.JvmGcMonitorService] [node-1] [gc][young][413406][611] duration [1.4s], collections [1]/[2.8s], total [1.4s]/[20.3s], memory [713.1mb]->[336.6mb]/[7.6gb], all_pools {[young] [376mb]->[4mb]/[0b]}{[old] [330.1mb]->[330.3mb]/[7.6gb]}{[survivor] [7mb]->[6.2mb]/[0b]}
[2022-08-06T06:12:27,329][WARN ][o.e.m.j.JvmGcMonitorService] [node-1] [gc][413406] overhead, spent [1.4s] collecting in the last [2.8s]
[2022-08-06T06:13:38,825][WARN ][o.e.t.ThreadPool         ] [node-1] execution of [ReschedulingRunnable{runnable=org.elasticsearch.watcher.ResourceWatcherService$ResourceMonitor@dc4a5f5, interval=5s}] took [5001ms] which is above the warn threshold of [5000ms]
[2022-08-06T06:15:33,264][WARN ][o.e.t.ThreadPool         ] [node-1] execution of [ReschedulingRunnable{runnable=org.elasticsearch.monitor.jvm.JvmGcMonitorService$1@72f9d65d, interval=1s}] took [5202ms] which is above the warn threshold of [5000ms]
[2022-08-06T06:16:49,138][WARN ][o.e.t.ThreadPool         ] [node-1] timer thread slept for [24.9s/24992ms] on absolute clock which is above the warn threshold of [5000ms]
[2022-08-06T06:24:10,241][WARN ][o.e.t.ThreadPool         ] [node-1] timer thread slept for [24.9s/24992951180ns] on relative clock which is above the warn threshold of [5000ms]
[2022-08-06T06:17:21,565][WARN ][o.e.h.AbstractHttpServerTransport] [node-1] handling request [null][POST][/_bulk][Netty4HttpChannel{localAddress=/172.22.23.142:9200, remoteAddress=/172.22.23.187:16886}] took [7028ms] which is above the warn threshold of [5000ms]
[2022-08-06T06:25:40,256][WARN ][o.e.t.ThreadPool         ] [node-1] timer thread slept for [8.8m/532801ms] on absolute clock which is above the warn threshold of [5000ms]
[2022-08-06T06:26:54,124][WARN ][o.e.c.s.MasterService    ] [node-1] took [11s/11053ms] to compute cluster state update for [ilm-set-step-info {policy [kibana-event-log-policy], index [.kibana-event-log-7.17.3-000001], currentStep [{"phase":"hot","action":"rollover","name":"check-rollover-ready"}]}[org.elasticsearch.xpack.ilm.SetStepInfoUpdateTask@74222c04], ilm-set-step-info {policy [ilm-history-ilm-policy], index [.ds-ilm-history-5-2022.07.22-000001], currentStep [{"phase":"hot","action":"rollover","name":"check-rollover-ready"}]}[org.elasticsearch.xpack.ilm.SetStepInfoUpdateTask@2f257fe5]], which exceeds the warn threshold of [10s]
[2022-08-06T06:26:10,914][WARN ][o.e.t.ThreadPool         ] [node-1] timer thread slept for [8.8m/532800165879ns] on relative clock which is above the warn threshold of [5000ms]
[2022-08-06T06:27:16,300][WARN ][o.e.t.ThreadPool         ] [node-1] timer thread slept for [2m/125487ms] on absolute clock which is above the warn threshold of [5000ms]
[2022-08-06T06:30:43,804][WARN ][o.e.t.ThreadPool         ] [node-1] timer thread slept for [2m/125487619482ns] on relative clock which is above the warn threshold of [5000ms]
[2022-08-06T06:31:21,884][WARN ][o.e.t.ThreadPool         ] [node-1] timer thread slept for [4m/245296ms] on absolute clock which is above the warn threshold of [5000ms]
[2022-08-06T06:33:22,771][WARN ][o.e.t.ThreadPool         ] [node-1] timer thread slept for [4m/243928808664ns] on relative clock which is above the warn threshold of [5000ms]
[2022-08-06T06:40:10,413][WARN ][o.e.t.ThreadPool         ] [node-1] timer thread slept for [8.7m/526448ms] on absolute clock which is above the warn threshold of [5000ms]
[2022-08-06T06:43:30,910][WARN ][o.e.c.s.MasterService    ] [node-1] took [27.6s/27666ms] to notify listeners on unchanged cluster state for [ilm-set-step-info {policy [metricbeat], index [metricbeat-7.17.5-2022.07.22-000001], currentStep [{"phase":"hot","action":"rollover","name":"check-rollover-ready"}]}[org.elasticsearch.xpack.ilm.SetStepInfoUpdateTask@84c549ab]], which exceeds the warn threshold of [10s]
[2022-08-06T06:59:59,943][WARN ][o.e.t.ThreadPool         ] [node-1] timer thread slept for [8.7m/525340163894ns] on relative clock which is above the warn threshold of [5000ms]
[2022-08-06T07:08:33,201][WARN ][o.e.t.ThreadPool         ] [node-1] timer thread slept for [28.3m/1701513ms] on absolute clock which is above the warn threshold of [5000ms]
[2022-08-06T07:16:08,976][WARN ][o.e.t.ThreadPool         ] [node-1] timer thread slept for [28.3m/1701467527988ns] on relative clock which is above the warn threshold of [5000ms]
[2022-08-06T07:06:57,608][WARN ][o.e.c.s.MasterService    ] [node-1] took [55.2s/55202ms] to compute cluster state update for [ilm-set-step-info {policy [kibana-event-log-policy], index [.kibana-event-log-7.17.3-000001], currentStep [{"phase":"hot","action":"rollover","name":"check-rollover-ready"}]}[org.elasticsearch.xpack.ilm.SetStepInfoUpdateTask@74222c04], ilm-set-step-info {policy [ilm-history-ilm-policy], index [.ds-ilm-history-5-2022.07.22-000001], currentStep [{"phase":"hot","action":"rollover","name":"check-rollover-ready"}]}[org.elasticsearch.xpack.ilm.SetStepInfoUpdateTask@2f257fe5], ilm-set-step-info {policy [.deprecation-indexing-ilm-policy], index [.ds-.logs-deprecation.elasticsearch-default-2022.07.22-000001], currentStep [{"phase":"hot","action":"rollover","name":"check-rollover-ready"}]}[org.elasticsearch.xpack.ilm.SetStepInfoUpdateTask@831da92d]], which exceeds the warn threshold of [10s]
[2022-08-06T07:23:32,510][WARN ][o.e.t.ThreadPool         ] [node-1] timer thread slept for [15m/900712ms] on absolute clock which is above the warn threshold of [5000ms]
[2022-08-06T07:34:34,349][WARN ][o.e.t.ThreadPool         ] [node-1] timer thread slept for [15m/902229100437ns] on relative clock which is above the warn threshold of [5000ms]
[2022-08-06T07:40:20,993][WARN ][o.e.t.ThreadPool         ] [node-1] timer thread slept for [16.7m/1006196ms] on absolute clock which is above the warn threshold of [5000ms]
[2022-08-06T07:47:43,220][WARN ][o.e.t.ThreadPool         ] [node-1] timer thread slept for [16.7m/1004866090579ns] on relative clock which is above the warn threshold of [5000ms]
[2022-08-06T07:55:17,161][WARN ][o.e.t.ThreadPool         ] [node-1] timer thread slept for [14.9m/897910ms] on absolute clock which is above the warn threshold of [5000ms]
[2022-08-06T08:03:01,028][WARN ][o.e.t.ThreadPool         ] [node-1] timer thread slept for [14.9m/898392746236ns] on relative clock which is above the warn threshold of [5000ms]
[2022-08-06T08:25:29,972][WARN ][o.e.t.ThreadPool         ] [node-1] timer thread slept for [30.2m/1813474ms] on absolute clock which is above the warn threshold of [5000ms]
[2022-08-06T08:40:15,113][WARN ][o.e.t.ThreadPool         ] [node-1] timer thread slept for [30.2m/1813913021816ns] on relative clock which is above the warn threshold of [5000ms]
[2022-08-06T08:48:19,265][WARN ][o.e.t.ThreadPool         ] [node-1] timer thread slept for [22.7m/1366652ms] on absolute clock which is above the warn threshold of [5000ms]
[2022-08-06T08:58:56,368][WARN ][o.e.t.ThreadPool         ] [node-1] timer thread slept for [22.7m/1365145312637ns] on relative clock which is above the warn threshold of [5000ms]
[2022-08-06T09:07:07,648][WARN ][o.e.t.ThreadPool         ] [node-1] timer thread slept for [18.8m/1129880ms] on absolute clock which is above the warn threshold of [5000ms]
[2022-08-06T09:16:16,100][WARN ][o.e.t.ThreadPool         ] [node-1] timer thread slept for [18.8m/1130608382219ns] on relative clock which is above the warn threshold of [5000ms]
[2022-08-06T09:25:11,299][WARN ][o.e.t.ThreadPool         ] [node-1] timer thread slept for [18m/1083046ms] on absolute clock which is above the warn threshold of [5000ms]
[2022-08-06T09:25:09,390][WARN ][o.e.c.s.MasterService    ] [node-1] pending task queue has been nonempty for [43.3m/2603696ms] which is longer than the warn threshold of [300000ms]; there are currently [6] pending tasks, the oldest of which has age [56.1m/3370572ms]
[2022-08-06T09:34:09,552][WARN ][o.e.t.ThreadPool         ] [node-1] timer thread slept for [18m/1083876349910ns] on relative clock which is above the warn threshold of [5000ms]
[2022-08-06T09:43:01,084][WARN ][o.e.t.ThreadPool         ] [node-1] timer thread slept for [17.8m/1069320ms] on absolute clock which is above the warn threshold of [5000ms]

Are there any other logs I could look at to find the reason? I am on Ubuntu, so maybe the OS has some relevant logs.

Thanks for any ideas.

Is that all the log output you have?

Interesting detail: when I tried to restart the node, the bootstrap check failure "bootstrap check failure [1] of [1]: max virtual memory areas vm.max_map_count [65530] is too low, increase to at least [262144]" was shown and the node stopped. I find that confusing because I had already set the max virtual memory areas to the required value. Why did it reset? How can I prevent it from resetting? Maybe it reset itself and Elasticsearch crashed? I don't know.

To answer your question about the logs: these are all the logs I have.

  1. This is the log I already posted above.
  2. These are the last archived logs, but they are pretty empty:
testCluster-2022-08-06-1.log.gz

[2022-08-06T02:00:04,050][INFO ][o.e.c.m.MetadataCreateIndexService] [node-1] [filebeat-8.3.2-2022.08.06] creating index, cause [auto(bulk api)], templates [], shards [1]/[1]
[2022-08-06T02:00:05,801][INFO ][o.e.c.m.MetadataMappingService] [node-1] [filebeat-8.3.2-2022.08.06/9YMrldi8T8qzOT5NdPaqMw] create_mapping [_doc]
[2022-08-06T02:00:10,015][INFO ][o.e.c.m.MetadataMappingService] [node-1] [filebeat-8.3.2-2022.08.06/9YMrldi8T8qzOT5NdPaqMw] update_mapping [_doc]
[2022-08-06T02:01:00,000][INFO ][o.e.x.m.MlDailyMaintenanceService] [node-1] triggering scheduled [ML] maintenance tasks
[2022-08-06T02:01:00,001][INFO ][o.e.x.m.a.TransportDeleteExpiredDataAction] [node-1] Deleting expired data
[2022-08-06T02:01:00,002][INFO ][o.e.x.m.j.r.UnusedStatsRemover] [node-1] Successfully deleted [0] unused stats documents
[2022-08-06T02:01:00,003][INFO ][o.e.x.m.a.TransportDeleteExpiredDataAction] [node-1] Completed deletion of expired ML data
[2022-08-06T02:01:00,003][INFO ][o.e.x.m.MlDailyMaintenanceService] [node-1] Successfully completed [ML] maintenance task: triggerDeleteExpiredDataTask
[2022-08-06T03:30:00,012][INFO ][o.e.x.s.SnapshotRetentionTask] [node-1] starting SLM retention snapshot cleanup task
[2022-08-06T03:30:00,079][INFO ][o.e.x.s.SnapshotRetentionTask] [node-1] there are no repositories to fetch, SLM retention snapshot cleanup task complete
[... the remaining entries, from 06:09:45 through 09:43:01, are identical to the log I already posted above ...]

If you want any of the other log files from the image above, just mention them by name.

Thanks for your great help as always 🙂

There's nothing there that I can see that would suggest your node is stopping, or that it received a request to shut down. And I can't see any evidence of the node restarting after being terminated.
Maybe look at your OS logs?
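For example, assuming you installed Elasticsearch from the deb package so it runs under systemd as the elasticsearch unit (adjust the unit name if yours differs), something along these lines should show whether the OS or the OOM killer stopped it:

sudo journalctl -u elasticsearch --since "3 days ago" --no-pager | tail -n 50   # what systemd saw around the stop
sudo dmesg -T | grep -iE 'out of memory|oom-kill'                               # kernel OOM killer activity, if the buffer still reaches back that far
sudo grep -iE 'oom-kill|killed process' /var/log/syslog                         # syslog record of killed processes

The journalctl output will also tell you whether systemd itself stopped the service or whether the process simply died.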

That's not possible unless someone or something else changed it. Elasticsearch can't adjust that setting by itself.
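One thing to watch out for: if you set vm.max_map_count with sysctl -w (or by writing to /proc/sys), the change is transient and is lost on the next reboot, so if it was only set that way before the last reboot a week ago, that reboot alone would explain the reset. To make it persistent you would put it in a sysctl config file; a minimal sketch (the file name here is just an example):

sudo sysctl -w vm.max_map_count=262144                                          # takes effect immediately, but lost on reboot
echo 'vm.max_map_count=262144' | sudo tee /etc/sysctl.d/99-elasticsearch.conf   # persists across reboots
sudo sysctl --system                                                            # reload all sysctl config now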

Okay, I will assume it was an OS-level problem. I also found nothing pointing to Elasticsearch itself.

Thanks
