Elasticsearch WARN log : "component": "o.e.c.s.ClusterApplierService" "message": "cluster state applier task [Publication{term=17, version=7278}] took [41.2s] which is above the warn threshold of [30s]

Hi everyone,
today we ran into this WARN-level log:

{"type": "server", "timestamp": "2021-10-26T14:25:59,820Z", "level": "WARN", "component": "o.e.c.s.ClusterApplierService", "cluster.name": "elasticsearch_cluster", "node.name": "elasticsearch", "message": "cluster state applier task [Publication{term=17, version=7278}] took [41.2s] which is above the warn threshold of [30s]: [running task [Publication{term=17, version=7278}]] took [0ms], [connecting to new nodes] took [0ms], [applying settings] took [0ms], [running applier [org.elasticsearch.repositories.RepositoriesService@7fcc76d4]] took [0ms], [running applier [org.elasticsearch.indices.cluster.IndicesClusterStateService@5b6eeb60]] took [41247ms], [running applier [org.elasticsearch.script.ScriptService@4d3092b8]] took [0ms], [running applier [org.elasticsearch.xpack.ilm.IndexLifecycleService@77412777]] took [0ms], [running applier [org.elasticsearch.snapshots.RestoreService@6b27959d]] took [0ms], [running applier [org.elasticsearch.ingest.IngestService@47be5672]] took [0ms], [running applier [org.elasticsearch.action.ingest.IngestActionForwarder@e52a96f]] took [0ms], [running applier [org.elasticsearch.action.admin.cluster.repositories.cleanup.TransportCleanupRepositoryAction$$Lambda$4697/0x00000008016a2460@70d64d4d]] took [0ms], [running applier [org.elasticsearch.indices.TimestampFieldMapperService@278aa5b7]] took [0ms], [running applier [org.elasticsearch.tasks.TaskManager@9b0b00a]] took [0ms], [running applier [org.elasticsearch.snapshots.SnapshotsService@4b4e2854]] took [0ms], [notifying listener [org.elasticsearch.cluster.InternalClusterInfoService@7741ffaa]] took [0ms], [notifying listener [org.elasticsearch.snapshots.InternalSnapshotsInfoService@1953c05c]] took [0ms], [notifying listener [org.elasticsearch.indices.SystemIndexManager@9301238]] took [0ms], [notifying listener [org.elasticsearch.xpack.autoscaling.capacity.memory.AutoscalingMemoryInfoService$$Lambda$2982/0x000000080129a020@4fc780be]] took [0ms], [notifying listener [org.elasticsearch.xpack.ml.MlUpgradeModeActionFilter$$Lambda$2984/0x0000000800d8e178@b91276e]] took [0ms], [notifying listener [org.elasticsearch.xpack.ml.MlIndexTemplateRegistry@36cb3617]] took [0ms], [notifying listener [org.elasticsearch.xpack.ml.job.process.autodetect.AutodetectProcessManager@27608095]] took [0ms], [notifying listener [org.elasticsearch.xpack.ml.datafeed.DatafeedManager$TaskRunner@5d74ad1a]] took [0ms], [notifying listener [org.elasticsearch.xpack.ml.inference.TrainedModelStatsService$$Lambda$3108/0x000000080133fd98@34241afa]] took [0ms], [notifying listener [org.elasticsearch.xpack.ml.inference.loadingservice.ModelLoadingService@5b30e9e7]] took [0ms], [notifying listener [org.elasticsearch.xpack.ml.process.MlMemoryTracker@7877ae3c]] took [0ms], [notifying listener [org.elasticsearch.xpack.ml.MlAssignmentNotifier@2097806d]] took [0ms], [notifying listener [org.elasticsearch.xpack.ml.autoscaling.MlAutoscalingDeciderService@61ecb1c8]] took [0ms], [notifying listener [org.elasticsearch.xpack.ml.MlInitializationService@57b02032]] took [0ms], [notifying listener [org.elasticsearch.xpack.ccr.action.ShardFollowTaskCleaner@44b7e768]] took [0ms], [notifying listener [org.elasticsearch.xpack.enrich.EnrichPolicyMaintenanceService@12d0ee7c]] took [0ms], [notifying listener [org.elasticsearch.xpack.transform.TransformClusterStateListener@34e262a9]] took [0ms], [notifying listener [org.elasticsearch.xpack.stack.StackTemplateRegistry@2154d947]] took [1ms], [notifying listener [org.elasticsearch.xpack.security.support.SecurityIndexManager@34d286f]] 
took [0ms], [notifying listener [org.elasticsearch.xpack.security.support.SecurityIndexManager@66394972]] took [0ms], [notifying listener [org.elasticsearch.xpack.security.authc.TokenService$$Lambda$3150/0x000000080137add0@41699836]] took [0ms], [notifying listener [org.elasticsearch.xpack.security.transport.SecurityServerTransportInterceptor$$Lambda$3239/0x000000080138df28@46f0ffbd]] took [0ms], [notifying listener [org.elasticsearch.xpack.watcher.support.WatcherIndexTemplateRegistry@2c14b95c]] took [1ms], [notifying listener [org.elasticsearch.xpack.watcher.WatcherLifeCycleService@ca1775]] took [0ms], [notifying listener [org.elasticsearch.xpack.watcher.WatcherIndexingListener@def8772]] took [0ms], [notifying listener [org.elasticsearch.xpack.ilm.history.ILMHistoryTemplateRegistry@4ca39c65]] took [0ms], [notifying listener [org.elasticsearch.xpack.ilm.IndexLifecycleService@77412777]] took [1ms], [notifying listener [org.elasticsearch.xpack.core.slm.history.SnapshotLifecycleTemplateRegistry@45f412f4]] took [0ms], [notifying listener [org.elasticsearch.xpack.slm.SnapshotLifecycleService@1555c480]] took [0ms], [notifying listener [org.elasticsearch.xpack.slm.SnapshotRetentionService@51e62ada]] took [0ms], [notifying listener [org.elasticsearch.cluster.metadata.SystemIndexMetadataUpgradeService@4905b256]] took [0ms], [notifying listener [org.elasticsearch.cluster.metadata.TemplateUpgradeService@2e21581d]] took [0ms], [notifying listener [org.elasticsearch.node.ResponseCollectorService@117fe02f]] took [0ms], [notifying listener [org.elasticsearch.snapshots.SnapshotShardsService@62adb4ac]] took [0ms], [notifying listener [org.elasticsearch.xpack.ml.job.task.OpenJobPersistentTasksExecutor$$Lambda$4024/0x00000008014d6a78@6de3dad]] took [0ms], [notifying listener [org.elasticsearch.xpack.ml.action.TransportStartDataFrameAnalyticsAction$TaskExecutor$$Lambda$4025/0x00000008014d7848@229b6cdb]] took [0ms], [notifying listener [org.elasticsearch.xpack.ml.job.snapshot.upgrader.SnapshotUpgradeTaskExecutor$$Lambda$4026/0x0000000801598330@4c8b9e3f]] took [0ms], [notifying listener [org.elasticsearch.persistent.PersistentTasksClusterService@430c5bd6]] took [0ms], [notifying listener [org.elasticsearch.cluster.routing.DelayedAllocationService@34c8354f]] took [0ms], [notifying listener [org.elasticsearch.indices.store.IndicesStore@4d4c6ab9]] took [0ms], [notifying listener [org.elasticsearch.persistent.PersistentTasksNodeService@5da66f49]] took [0ms], [notifying listener [org.elasticsearch.license.LicenseService@7c555dc0]] took [0ms], [notifying listener [org.elasticsearch.xpack.ccr.action.AutoFollowCoordinator@2775d76b]] took [0ms], [notifying listener [org.elasticsearch.xpack.core.async.AsyncTaskMaintenanceService@49c728eb]] took [0ms], [notifying listener [org.elasticsearch.gateway.GatewayService@1faac74e]] took [0ms], [notifying listener [org.elasticsearch.indices.recovery.PeerRecoverySourceService@35f659f5]] took [0ms]", "cluster.uuid": "WEkcdmRXOYVTwDfvRplb", "node.id": "wDFG491Tre1sdfL1jGHe"  }
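The slow step is easy to miss in a message this long. Here is a rough sketch of a script that pulls out the per-step timings (it just reads the raw log line on stdin; adapt as needed):

```python
import re
import sys

# Rough sketch: read a "cluster state applier task ... took [Ns]" WARN message
# on stdin and print the slowest steps, so the offending applier/listener
# stands out from the wall of "[...] took [0ms]" entries.
message = sys.stdin.read()

# Each step looks like "[running applier [SomeClass@1234]] took [41247ms]"
# or "[connecting to new nodes] took [0ms]"; capture description and duration.
step_re = re.compile(r"\[((?:[^\[\]]|\[[^\[\]]*\])+)\] took \[(\d+)ms\]")
steps = [(name, int(ms)) for name, ms in step_re.findall(message)]

for name, ms in sorted(steps, key=lambda s: s[1], reverse=True)[:5]:
    print(f"{ms:>8} ms  {name}")
```

For the message above it reports the IndicesClusterStateService applier at 41247 ms, while every other step took 0-1 ms.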

We have a single-node Elasticsearch cluster running in Docker with a 31 GB heap. The indexing rate is very high: roughly 15 million documents per daily index.
What is the reason for this log? Can this problem cause the node to stop?
Thank you in advance.

This seems to indicate that applying a cluster state update is taking a very long time; the breakdown above shows the IndicesClusterStateService applier alone took 41 seconds. This is quite possibly due to slow or overloaded storage.
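A few quick checks can help confirm that. This is only a rough sketch, assuming the node answers on http://localhost:9200 with security disabled (adjust the URL and add credentials for your own setup):

```python
import json
import urllib.request

BASE = "http://localhost:9200"

def get(path):
    # Simple helper: GET a REST endpoint and parse the JSON response.
    with urllib.request.urlopen(f"{BASE}{path}") as resp:
        return json.load(resp)

# 1. Disk usage per node -- look for nearly full or slow data paths.
fs = get("/_nodes/stats/fs")
for node in fs["nodes"].values():
    total = node["fs"]["total"]
    free_gb = total["free_in_bytes"] / 2**30
    total_gb = total["total_in_bytes"] / 2**30
    print(f"{node['name']}: {free_gb:.1f} GiB free of {total_gb:.1f} GiB")

# 2. Cluster state updates queued up behind the slow one.
pending = get("/_cluster/pending_tasks")
print("pending cluster tasks:", len(pending["tasks"]))

# 3. Overall health; a very large shard count makes each cluster state
#    bigger and slower to apply.
health = get("/_cluster/health")
print("status:", health["status"], "active shards:", health["active_shards"])
```

Watching disk utilisation on the Docker host while the warning appears (for example with iostat) would also help confirm whether storage is the bottleneck.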


This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.