My server's hard disk ran out of space, so I deleted the files under /elasticsearch/nodes/0/indices.

{"type": "server", "timestamp": "2022-04-18T16:40:26,280+08:00", "level": "WARN", "component": "o.e.c.s.MasterService", "cluster.name": "elasticsearch", "node.name": "jenkirsappu01", "message": "failing [create-index [itrds-k8s-prod-2022.04.15], cause [auto(bulk api)]]: failed to commit cluster state version [7842]", "cluster.uuid": "_nXW6A0hT6KcYFzpBJR0lg", "node.id": "ADscaSe-RkaPNm_HpXYdTA" ,
"stacktrace": ["org.elasticsearch.cluster.coordination.FailedToCommitClusterStateException: publication failed",
"at org.elasticsearch.cluster.coordination.Coordinator$CoordinatorPublication$4.onFailure(Coordinator.java:1429) ~[elasticsearch-7.4.2.jar:7.4.2]",
"at org.elasticsearch.action.ActionRunnable.onFailure(ActionRunnable.java:60) ~[elasticsearch-7.4.2.jar:7.4.2]",
"at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:39) ~[elasticsearch-7.4.2.jar:7.4.2]",
"at org.elasticsearch.common.util.concurrent.EsExecutors$DirectExecutorService.execute(EsExecutors.java:225) ~[elasticsearch-7.4.2.jar:7.4.2]",
"at org.elasticsearch.common.util.concurrent.ListenableFuture.notifyListener(ListenableFuture.java:93) ~[elasticsearch-7.4.2.jar:7.4.2]",
"at org.elasticsearch.common.util.concurrent.ListenableFuture.addListener(ListenableFuture.java:55) ~[elasticsearch-7.4.2.jar:7.4.2]",
"at org.elasticsearch.cluster.coordination.Coordinator$CoordinatorPublication.onCompletion(Coordinator.java:1349) ~[elasticsearch-7.4.2.jar:7.4.2]",
"at org.elasticsearch.cluster.coordination.Publication.onPossibleCompletion(Publication.java:125) ~[elasticsearch-7.4.2.jar:7.4.2]",
"at org.elasticsearch.cluster.coordination.Publication.onPossibleCommitFailure(Publication.java:173) ~[elasticsearch-7.4.2.jar:7.4.2]",
"at org.elasticsearch.cluster.coordination.Publication.access$500(Publication.java:42) ~[elasticsearch-7.4.2.jar:7.4.2]",
"at org.elasticsearch.cluster.coordination.Publication$PublicationTarget$PublishResponseHandler.onFailure(Publication.java:369) ~[elasticsearch-7.4.2.jar:7.4.2]",
"at org.elasticsearch.cluster.coordination.Coordinator$5.onFailure(Coordinator.java:1117) ~[elasticsearch-7.4.2.jar:7.4.2]",
"at org.elasticsearch.cluster.coordination.PublicationTransportHandler$2$1.onFailure(PublicationTransportHandler.java:205) ~[elasticsearch-7.4.2.jar:7.4.2]",
"at org.elasticsearch.cluster.coordination.PublicationTransportHandler.lambda$sendClusterStateToNode$6(PublicationTransportHandler.java:271) ~[elasticsearch-7.4.2.jar:7.4.2]",
"at org.elasticsearch.cluster.coordination.PublicationTransportHandler$3.handleException(PublicationTransportHandler.java:289) ~[elasticsearch-7.4.2.jar:7.4.2]",
"at org.elasticsearch.transport.TransportService$ContextRestoreResponseHandler.handleException(TransportService.java:1120) ~[elasticsearch-7.4.2.jar:7.4.2]",
"at org.elasticsearch.transport.TransportService$ContextRestoreResponseHandler.handleException(TransportService.java:1120) ~[elasticsearch-7.4.2.jar:7.4.2]",
"at org.elasticsearch.transport.TransportService$DirectResponseChannel.processException(TransportService.java:1229) ~[elasticsearch-7.4.2.jar:7.4.2]",
"at org.elasticsearch.transport.TransportService$DirectResponseChannel$2.run(TransportService.java:1208) ~[elasticsearch-7.4.2.jar:7.4.2]",
"at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:703) ~[elasticsearch-7.4.2.jar:7.4.2]",
"at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) [?:?]",
"at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) [?:?]",
"at java.lang.Thread.run(Thread.java:830) [?:?]",
"Caused by: org.elasticsearch.common.util.concurrent.UncategorizedExecutionException: Failed execution",
"at org.elasticsearch.common.util.concurrent.FutureUtils.rethrowExecutionException(FutureUtils.java:95) ~[elasticsearch-7.4.2.jar:7.4.2]",
"at org.elasticsearch.common.util.concurrent.FutureUtils.get(FutureUtils.java:84) ~[elasticsearch-7.4.2.jar:7.4.2]",
"at org.elasticsearch.common.util.concurrent.ListenableFuture$1.doRun(ListenableFuture.java:98) ~[elasticsearch-7.4.2.jar:7.4.2]",
"at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) ~[elasticsearch-7.4.2.jar:7.4.2]",
"... 20 more",
"Caused by: java.util.concurrent.ExecutionException: java.nio.file.AccessDeniedException: /elasticsearch-data/elasticsearch/nodes/0/indices/_VZRlYQgTl6phVviKBbMzA",
"at org.elasticsearch.common.util.concurrent.BaseFuture$Sync.getValue(BaseFuture.java:266) ~[elasticsearch-7.4.2.jar:7.4.2]",
"at org.elasticsearch.common.util.concurrent.BaseFuture$Sync.get(BaseFuture.java:239) ~[elasticsearch-7.4.2.jar:7.4.2]",
"at org.elasticsearch.common.util.concurrent.BaseFuture.get(BaseFuture.java:65) ~[elasticsearch-7.4.2.jar:7.4.2]",
"at org.elasticsearch.common.util.concurrent.FutureUtils.get(FutureUtils.java:77) ~[elasticsearch-7.4.2.jar:7.4.2]",
"at org.elasticsearch.common.util.concurrent.ListenableFuture$1.doRun(ListenableFuture.java:98) ~[elasticsearch-7.4.2.jar:7.4.2]",
"at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) ~[elasticsearch-7.4.2.jar:7.4.2]",
"... 20 more",

Now I get the error above. What can I do to fix it?

Hi @Whybee Welcome to the community.

Unfortunately, manually / directly deleting data files from the Elasticsearch data directory is not supported and will corrupt the data and make Elasticsearch unusable.

Next time perhaps ask for help first :slight_smile: there are ways to clean out the data properly.
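For reference, the supported way to free disk space is the delete-index API rather than removing files on disk. A minimal sketch, assuming a cluster reachable on localhost:9200 (adjust to yours) and using the index name that appears in the log above:

```shell
# Assumed endpoint; adjust to your cluster.
ES="http://localhost:9200"
# Index name taken from the log message above.
INDEX="itrds-k8s-prod-2022.04.15"

# Build the delete command. Printed as a dry run so nothing is removed
# by accident; drop the echo (or pipe to sh) to actually execute it.
CMD="curl -X DELETE $ES/$INDEX?pretty"
echo "$CMD"
```

Deleting through the API lets the cluster update its state and remove the shards safely on every node that holds a copy.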

At this point that entire node is most likely corrupt. Your options are to put back an exact copy of the data, if you kept one (and even that may not work now), or to restore from snapshots, if you have been taking them... neither is very easy.
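If snapshots do exist, the restore goes through the snapshot API. A hedged sketch, assuming a shared-filesystem repository; the repository name, snapshot name, and backup path are placeholders, and the location must be listed under path.repo in elasticsearch.yml before registration:

```shell
ES="http://localhost:9200"   # assumed endpoint; adjust to your cluster
REPO="my_backup"             # hypothetical repository name
SNAP="snapshot_1"            # hypothetical snapshot name

# Printed as a dry run; these are the calls you would issue against a
# live cluster: register the repository, then restore the snapshot.
OUT=$(cat <<EOF
curl -X PUT "$ES/_snapshot/$REPO" -H 'Content-Type: application/json' -d '
{ "type": "fs", "settings": { "location": "/mount/backups/$REPO" } }'

curl -X POST "$ES/_snapshot/$REPO/$SNAP/_restore"
EOF
)
echo "$OUT"
```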

Is the node actually up and running?

How many nodes did you have in the cluster?

Do the indices have replicas?

Are other nodes running?

How important is the rest of the data?
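The questions above can be answered with the cluster health and _cat APIs. A dry-run sketch, assuming a cluster reachable on localhost:9200:

```shell
ES="http://localhost:9200"   # assumed endpoint; adjust to your cluster

# Collect the diagnostic commands: overall cluster health, the node
# list, and per-index health plus replica counts.
COMMANDS=$(for path in \
    "_cluster/health?pretty" \
    "_cat/nodes?v" \
    "_cat/indices?v&h=index,health,rep"; do
  echo "curl -s \"$ES/$path\""
done)

# Printed as a dry run; run each line against a live cluster.
echo "$COMMANDS"
```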

Thanks for your reply. I fixed it today by running curl -XDELETE http://myserver.com:9200/.kibana_2. Thanks.

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.