Elasticsearch Cluster health is RED

Hi Team,

We have an Elasticsearch cluster with two nodes, and due to some infra issues the server went down. Once it came back online, I started the service and it started successfully, but the log shows the error below. Could you please also let me know how to attach the log file to this thread?

Please help us bring the cluster back to a healthy state.

>  [2024-01-23T16:08:03,947][INFO ][o.e.c.s.ClusterApplierService] [cb-2] master node changed {previous [{cb-2}{3E6yK8J6Sy-TL9k5yDGi-w}{bKpu6L7BRV-xMMWPltISCg}{cb-2}{10.10.18.174}{10.10.18.174:9300}{cdfhilmrstw}{8.9.2}], current []}, term: 309673, version: 313060, reason: becoming candidate: Publication.onCompletion(false)
> [2024-01-23T16:08:03,947][INFO ][o.e.c.f.AbstractFileWatchingService] [cb-2] shutting down watcher thread
> [2024-01-23T16:08:03,948][INFO ][o.e.c.f.AbstractFileWatchingService] [cb-2] watcher service stopped
> [2024-01-23T16:08:03,947][WARN ][o.e.c.s.MasterService    ] [cb-2] failing [node-join[{cb-3}{jLNpa0ChRZaMTtYMdO7Jgw}{JXpDZ9VQQT-5Ue9ZmjSCig}{cb-3}{10.10.18.215}{10.10.18.215:9300}{cdfhilmrstw}{8.9.2} rejoining]]: failed to commit cluster state version [313061]
> org.elasticsearch.cluster.coordination.FailedToCommitClusterStateException: publication of cluster state version [313061] in term [309673] failed [committed={}]
>         at org.elasticsearch.cluster.coordination.Coordinator$CoordinatorPublication$4.onFailure(Coordinator.java:2009) ~[elasticsearch-8.9.2.jar:?]
>         at org.elasticsearch.action.support.SubscribableListener$FailureResult.complete(SubscribableListener.java:285) ~[elasticsearch-8.9.2.jar:?]
>         at org.elasticsearch.action.support.SubscribableListener.tryComplete(SubscribableListener.java:197) ~[elasticsearch-8.9.2.jar:?]
>         at org.elasticsearch.action.support.SubscribableListener.addListener(SubscribableListener.java:96) ~[elasticsearch-8.9.2.jar:?]
>         at org.elasticsearch.cluster.coordination.Coordinator$CoordinatorPublication.onCompletion(Coordinator.java:1915) ~[elasticsearch-8.9.2.jar:?]
>         at org.elasticsearch.cluster.coordination.Publication.onPossibleCompletion(Publication.java:114) ~[elasticsearch-8.9.2.jar:?]
>         at org.elasticsearch.cluster.coordination.Publication.onPossibleCommitFailure(Publication.java:165) ~[elasticsearch-8.9.2.jar:?]
>         at org.elasticsearch.cluster.coordination.Publication$PublicationTarget$PublishResponseHandler.onFailure(Publication.java:390) ~[elasticsearch-8.9.2.jar:?]
>         at org.elasticsearch.cluster.coordination.Coordinator$8.onFailure(Coordinator.java:1646) ~[elasticsearch-8.9.2.jar:?]
>         at org.elasticsearch.action.ActionRunnable.onFailure(ActionRunnable.java:92) ~[elasticsearch-8.9.2.jar:?]
>         at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.onFailure(ThreadContext.java:966) ~[elasticsearch-8.9.2.jar:?]
>         at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:28) ~[elasticsearch-8.9.2.jar:?]
>         at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1144) ~[?:?]
>         at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:642) ~[?:?]
>         at java.lang.Thread.run(Thread.java:1623) ~[?:?]
> Caused by: org.elasticsearch.ElasticsearchException: java.io.IOException: Input/output error

Thanks,
Debasis

The error message Input/output error comes from the OS and normally means there's some problem with that node's storage. I'd suggest looking at its kernel logs with dmesg to confirm, and replacing any faulty/suspect hardware before doing much more.
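
Once the hardware question is settled, the cluster health and allocation-explain APIs are the usual starting point for seeing what Elasticsearch itself thinks. A rough sketch, assuming the node is reachable at http://localhost:9200 with security disabled (adjust the URL, TLS and auth for your setup):

```python
# Minimal diagnostic sketch -- assumes the node answers on
# http://localhost:9200 without TLS/auth; adjust for your security config.
import json
import requests

BASE = "http://localhost:9200"  # hypothetical endpoint for illustration

# Overall cluster health: status, node count, unassigned shards.
health = requests.get(f"{BASE}/_cluster/health", timeout=10).json()
print(json.dumps(health, indent=2))

# If the cluster isn't green, ask why the first unassigned shard
# cannot be allocated -- this often points at the failing node or disk.
if health.get("status") != "green":
    explain = requests.get(f"{BASE}/_cluster/allocation/explain", timeout=10)
    print(json.dumps(explain.json(), indent=2))
```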

Thanks @DavidTurner for the prompt response. I still see some I/O errors even though I can write to that particular mount point manually. I am checking with the IT team about this error.

Thanks,
Debasis

Yeah, an Input/output error doesn't imply that the whole disk/partition/mount point is unwritable; it could be a very localized problem.
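
For what it's worth, one way to see whether the damage is confined to a few files is a read-only sweep of the node's data directory. A rough sketch; the path below is an assumption, so use the path.data from your elasticsearch.yml:

```python
# Read-only sweep to surface localized I/O errors in the data path.
# DATA_PATH is a hypothetical default -- replace with your node's path.data.
import os

DATA_PATH = "/var/lib/elasticsearch"

bad_files = []
for root, _dirs, files in os.walk(DATA_PATH):
    for name in files:
        path = os.path.join(root, name)
        try:
            with open(path, "rb") as f:
                # Read the whole file in chunks so a bad block anywhere triggers EIO.
                while f.read(1024 * 1024):
                    pass
        except OSError as err:
            bad_files.append((path, err))

for path, err in bad_files:
    print(f"I/O problem reading {path}: {err}")
print(f"Checked files under {DATA_PATH}; {len(bad_files)} unreadable")
```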
