Cluster health status changed from [YELLOW] to [RED] (reason: [auto-create]) after changing path.data and path.logs

I have installed elasticsearch-8.4.3-x86_64 and changed path.data and path.logs before I started the service.
Permissions on the new paths are broader than needed, as the listing below shows:
drwxrwsrwx. 2 elasticsearch elasticsearch 2048 Dec 9 09:10 log
drwxrwsrwx. 4 elasticsearch elasticsearch 4096 Dec 9 09:12 lib
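For context, the setup was roughly the following sketch (the /srv/elasticsearch parent path below is a placeholder, not my real path):

# hypothetical parent path; substitute your own
sudo mkdir -p /srv/elasticsearch/lib /srv/elasticsearch/log
sudo chown -R elasticsearch:elasticsearch /srv/elasticsearch

# matching lines in /etc/elasticsearch/elasticsearch.yml:
#   path.data: /srv/elasticsearch/lib
#   path.logs: /srv/elasticsearch/log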

In the log I see this:

[2022-12-09T09:10:30,681][INFO ][o.e.c.r.a.AllocationService] [alq-master] current.health="RED" message="Cluster health status changed from [YELLOW] to [RED] (reason: [auto-create])." previous.health="YELLOW" reason="auto-create"
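Side note: the standard way to ask the cluster why a shard is unassigned would be something like the following (host and credentials are placeholders; security is on by default in 8.x):

curl -k -u elastic "https://localhost:9200/_cluster/health?pretty"
curl -k -u elastic "https://localhost:9200/_cluster/allocation/explain?pretty"

With no request body, _cluster/allocation/explain reports on the first unassigned shard it finds.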

Here is another stack trace from the same log:

[2022-12-09T09:12:00,826][WARN ][o.e.x.m.e.l.LocalExporter] [alq-serkep-master] unexpected error while indexing monitoring document
org.elasticsearch.xpack.monitoring.exporter.ExportException: org.elasticsearch.action.UnavailableShardsException: [.monitoring-es-7-2022.12.09][0] primary shard is not active Timeout: [1m], request: [BulkShardRequest [[.monitoring-es-7-2022.12.09][0]] containing [4] requests]
        at org.elasticsearch.xpack.monitoring.exporter.local.LocalBulk.lambda$throwExportException$2(LocalBulk.java:128) ~[?:?]
        at java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:197) ~[?:?]
        at java.util.stream.ReferencePipeline$2$1.accept(ReferencePipeline.java:179) ~[?:?]
        at java.util.Spliterators$ArraySpliterator.forEachRemaining(Spliterators.java:992) ~[?:?]
        at java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:509) ~[?:?]
        at java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:499) ~[?:?]
        at java.util.stream.ForEachOps$ForEachOp.evaluateSequential(ForEachOps.java:150) ~[?:?]
        at java.util.stream.ForEachOps$ForEachOp$OfRef.evaluateSequential(ForEachOps.java:173) ~[?:?]
        at java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234) ~[?:?]
        at java.util.stream.ReferencePipeline.forEach(ReferencePipeline.java:596) ~[?:?]
        at org.elasticsearch.xpack.monitoring.exporter.local.LocalBulk.throwExportException(LocalBulk.java:129) ~[?:?]
        at org.elasticsearch.xpack.monitoring.exporter.local.LocalBulk.lambda$doFlush$0(LocalBulk.java:110) ~[?:?]
        at org.elasticsearch.action.ActionListener$2.onResponse(ActionListener.java:162) ~[elasticsearch-8.4.3.jar:?]
        at org.elasticsearch.action.support.ContextPreservingActionListener.onResponse(ContextPreservingActionListener.java:31) ~[elasticsearch-8.4.3.jar:?]
        at org.elasticsearch.client.internal.node.NodeClient$SafelyWrappedActionListener.onResponse(NodeClient.java:160) ~[elasticsearch-8.4.3.jar:?]
        at org.elasticsearch.tasks.TaskManager$1.onResponse(TaskManager.java:192) ~[elasticsearch-8.4.3.jar:?]
        at org.elasticsearch.tasks.TaskManager$1.onResponse(TaskManager.java:186) ~[elasticsearch-8.4.3.jar:?]
        at org.elasticsearch.action.support.ContextPreservingActionListener.onResponse(ContextPreservingActionListener.java:31) ~[elasticsearch-8.4.3.jar:?]
        at org.elasticsearch.xpack.security.action.filter.SecurityActionFilter.lambda$applyInternal$2(SecurityActionFilter.java:165) ~[?:?]
        at org.elasticsearch.action.ActionListener$DelegatingFailureActionListener.onResponse(ActionListener.java:245) ~[elasticsearch-8.4.3.jar:?]
        at org.elasticsearch.action.ActionListener$RunBeforeActionListener.onResponse(ActionListener.java:415) ~[elasticsearch-8.4.3.jar:?]
        at org.elasticsearch.action.bulk.TransportBulkAction$BulkOperation$1.finishHim(TransportBulkAction.java:612) ~[elasticsearch-8.4.3.jar:?]
        at org.elasticsearch.action.bulk.TransportBulkAction$BulkOperation$1.onFailure(TransportBulkAction.java:607) ~[elasticsearch-8.4.3.jar:?]
        at org.elasticsearch.client.internal.node.NodeClient$SafelyWrappedActionListener.onFailure(NodeClient.java:170) ~[elasticsearch-8.4.3.jar:?]
        at org.elasticsearch.tasks.TaskManager$1.onFailure(TaskManager.java:201) ~[elasticsearch-8.4.3.jar:?]
        at org.elasticsearch.action.support.ContextPreservingActionListener.onFailure(ContextPreservingActionListener.java:38) ~[elasticsearch-8.4.3.jar:?]
        at org.elasticsearch.action.ActionListener$Delegating.onFailure(ActionListener.java:92) ~[elasticsearch-8.4.3.jar:?]
        at org.elasticsearch.action.support.replication.TransportReplicationAction$ReroutePhase.finishAsFailed(TransportReplicationAction.java:1041) ~[elasticsearch-8.4.3.jar:?]
        at org.elasticsearch.action.support.replication.TransportReplicationAction$ReroutePhase.retry(TransportReplicationAction.java:1013) ~[elasticsearch-8.4.3.jar:?]
        at org.elasticsearch.action.support.replication.TransportReplicationAction$ReroutePhase.retryBecauseUnavailable(TransportReplicationAction.java:1073) ~[elasticsearch-8.4.3.jar:?]
        at org.elasticsearch.action.support.replication.TransportReplicationAction$ReroutePhase.doRun(TransportReplicationAction.java:873) ~[elasticsearch-8.4.3.jar:?]
        at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:26) ~[elasticsearch-8.4.3.jar:?]
        at org.elasticsearch.action.support.replication.TransportReplicationAction$ReroutePhase$2.onTimeout(TransportReplicationAction.java:1032) ~[elasticsearch-8.4.3.jar:?]
        at org.elasticsearch.cluster.ClusterStateObserver$ContextPreservingListener.onTimeout(ClusterStateObserver.java:345) ~[elasticsearch-8.4.3.jar:?]
        at org.elasticsearch.cluster.ClusterStateObserver$ObserverClusterStateListener.onTimeout(ClusterStateObserver.java:263) ~[elasticsearch-8.4.3.jar:?]
        at org.elasticsearch.cluster.service.ClusterApplierService$NotifyTimeout.run(ClusterApplierService.java:642) ~[elasticsearch-8.4.3.jar:?]
        at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:710) ~[elasticsearch-8.4.3.jar:?]
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136) ~[?:?]
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635) ~[?:?]
        at java.lang.Thread.run(Thread.java:833) ~[?:?]
Caused by: org.elasticsearch.action.UnavailableShardsException: [.monitoring-es-7-2022.12.09][0] primary shard is not active Timeout: [1m], request: [BulkShardRequest [[.monitoring-es-7-2022.12.09][0]] containing [4] requests]
        ... 11 more
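If it helps, listing every unassigned shard together with its recorded reason should be possible with something like this (same placeholder host and credentials as above):

curl -k -u elastic "https://localhost:9200/_cat/shards?v&h=index,shard,prirep,state,unassigned.reason" | grep -i unassigned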

I can neither log in nor reset the password.
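For reference, the reset command on an RPM install is the one below; my guess (an assumption, not verified) is that it fails while the cluster is RED because the tool has to write the new password into the .security system index:

sudo /usr/share/elasticsearch/bin/elasticsearch-reset-password -u elastic   # add -i to choose the password interactively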

When I change path.data back to the old path (/var/lib/elasticsearch), it works fine. And no, there was nothing at the old path: to be clear, there was no elasticsearch folder in /var/lib/ or in /var/log/ when I installed Elasticsearch. Right after installation I changed the paths in elasticsearch.yml, so when the service was first started there was no elasticsearch folder in /var/lib or /var/log.

Does the disk have enough free space to host path.data? It may also be worth looking through the log files for warning messages. For example, if the disk is filled above the watermark thresholds, shards won't be allocated, which can result in a RED cluster status.
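A quick way to check, assuming the placeholder path and host from above (adjust to your layout):

df -h /srv/elasticsearch                                        # free space on the new data disk
curl -k -u elastic "https://localhost:9200/_cat/allocation?v"   # disk usage as Elasticsearch sees it
curl -k -u elastic "https://localhost:9200/_cluster/settings?include_defaults=true&flat_settings=true" | grep watermark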
