- Here is the error message:
{"@timestamp":"2023-02-23T08:43:40.767Z", "log.level":"ERROR", "message":"fatal exception while booting Elasticsearch", "ecs.version": "1.2.0","service.name":"ES_ECS","event.dataset":"elasticsearch.server","process.thread.name":"main","log.logger":"org.elasticsearch.bootstrap.Elasticsearch","elasticsearch.node.name":"elastic-cluster-0","elasticsearch.cluster.name":"elasticsearch-cluster","error.type":"org.elasticsearch.ElasticsearchException","error.message":"failed to load metadata","error.stack_trace":"org.elasticsearch.ElasticsearchException: failed to load metadata\n\tat org.elasticsearch.server@8.6.0/org.elasticsearch.gateway.GatewayMetaState.start(GatewayMetaState.java:161)\n\tat org.elasticsearch.server@8.6.0/org.elasticsearch.node.Node.start(Node.java:1354)\n\tat org.elasticsearch.server@8.6.0/org.elasticsearch.bootstrap.Elasticsearch.start(Elasticsearch.java:436)\n\tat org.elasticsearch.server@8.6.0/org.elasticsearch.bootstrap.Elasticsearch.initPhase3(Elasticsearch.java:229)\n\tat org.elasticsearch.server@8.6.0/org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:67)\nCaused by: org.apache.lucene.index.CorruptIndexException: Unexpected file read error while reading index. 
(resource=BufferedChecksumIndexInput(NIOFSIndexInput(path="/usr/share/elasticsearch/data/_state/segments_3")))\n\tat org.apache.lucene.core@9.4.2/org.apache.lucene.index.SegmentInfos.readCommit(SegmentInfos.java:301)\n\tat org.apache.lucene.core@9.4.2/org.apache.lucene.index.SegmentInfos.readCommit(SegmentInfos.java:288)\n\tat org.apache.lucene.core@9.4.2/org.apache.lucene.index.IndexFileDeleter.(IndexFileDeleter.java:166)\n\tat org.apache.lucene.core@9.4.2/org.apache.lucene.index.IndexWriter.(IndexWriter.java:1158)\n\tat org.elasticsearch.server@8.6.0/org.elasticsearch.gateway.PersistedClusterStateService.createIndexWriter(PersistedClusterStateService.java:264)\n\tat org.elasticsearch.server@8.6.0/org.elasticsearch.gateway.PersistedClusterStateService.createWriter(PersistedClust
erStateService.java:226)\n\tat org.elasticsearch.server@8.6.0/org.elasticsearch.gateway.GatewayMetaState$LucenePersistedState.(GatewayMetaState.java:447)\n\tat org.elasticsearch.server@8.6.0/org.elasticsearch.gateway.GatewayMetaState.start(GatewayMetaState.java:130)\n\t... 4 more\nCaused by: java.nio.file.NoSuchFileException: /usr/share/elasticsearch/data/_state/_1.si\n\tat java.base/sun.nio.fs.UnixException.translateToIOException(UnixException.java:92)\n\tat java.base/sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:106)\n\tat java.base/sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:111)\n\tat java.base/sun.nio.fs.UnixFileSystemProvider.newFileChannel(UnixFileSystemProvider.java:181)\n\tat java.base/java.nio.channels.FileChannel.open(FileChannel.java:304)\n\tat java.base/java.nio.channels.FileChannel.open(FileChannel.java:363)\n\tat org.apache.lucene.core@9.4.2/org.apache.lucene.store.NIOFSDirectory.openInput(NIOFSDirectory.java:78)\n\tat org.apache.lucene.core@9.4.2/org.apache.lucene.store.Directory.openChecksumInput(Directory.java:156)\n\tat org.apache.lucene.core@9.4.2/org.apache.lucene.codecs.lucene90.Lucene90SegmentInfoFormat.read(Lucene90SegmentInfoFormat.java:102)\n\tat org.apache.lucene.core@9.4.2/org.apache.lucene.index.SegmentInfos.parseSegmentInfos(SegmentInfos.java:406)\n\tat org.apache.lucene.core@9.4.2/org.apache.lucene.index.SegmentInfos.readCommit(SegmentInfos.java:363)\n\tat org.apache.lucene.core@9.4.2/org.apache.lucene.index.SegmentInfos.readCommit(SegmentInfos.java:299)\n\t... 11 more\n\tSuppressed: org.apache.lucene.index.CorruptIndexException: checksum passed (39cc8ebc). 
possibly transient resource issue, or a Lucene or JVM bug (resource=BufferedChecksumIndexInput(NIOFSIndexInput(path="/usr/share/elasticsearch/data/_state/segments_3")))\n\t\tat org.apache.lucene.core@9.4.2/org.apache.lucene.codecs.CodecUtil.checkFooter(CodecUtil.java:500)\n\t\tat org.apache.lucene.core@9.4.2/org.apache.lucene.index.SegmentInfos.readCommit(SegmentInfos.java:370)\n\t\
t... 12 more\n"}
There are three machines in the cluster, and all of them are powered on. Before I formed the cluster, each node ran fine in single-node mode, so I am confident the basic setup is OK.
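For reference, a minimal shell sketch to confirm on the failing node that the segment file named in the `NoSuchFileException` really is absent, and to see what remains in the cluster-state directory. The path is taken from the log above; `DATA_STATE` is a convenience variable of my own, not an Elasticsearch setting.

```shell
# Check whether the segment file named in the NoSuchFileException exists.
# DATA_STATE defaults to the path reported in the log; override it if your
# data path differs (the variable is my own shorthand, not an ES setting).
DATA_STATE="${DATA_STATE:-/usr/share/elasticsearch/data/_state}"
MISSING_FILE="$DATA_STATE/_1.si"

if [ -e "$MISSING_FILE" ]; then
    STATUS="present"
else
    STATUS="missing"
fi
echo "$STATUS: $MISSING_FILE"

# List what is actually in the cluster-state directory, since the surviving
# segments_3 file references segment files that appear to have been deleted.
ls -l "$DATA_STATE" 2>/dev/null || echo "directory not found: $DATA_STATE"
```

On the affected node this should report the `.si` file as missing; comparing the directory listing across the three nodes shows whether only one node's on-disk state is damaged.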