Fatal error in thread exiting

Hi,
I ran into an error. While Elasticsearch was importing data, one node reported an error and exited, and the cluster got stuck.
Elasticsearch runs on Kubernetes, using a GlusterFS-backed store.

Cluster information:
sh-4.2$ curl 127.0.0.1:9200
{
  "name" : "elastic-header-1",
  "cluster_name" : "es-cluster",
  "cluster_uuid" : "53kTWYq7RXuyyqHojyScSQ",
  "version" : {
    "number" : "6.4.2",
    "build_flavor" : "default",
    "build_type" : "tar",
    "build_hash" : "04711c2",
    "build_date" : "2018-09-26T13:34:09.098244Z",
    "build_snapshot" : false,
    "lucene_version" : "7.4.0",
    "minimum_wire_compatibility_version" : "5.6.0",
    "minimum_index_compatibility_version" : "5.0.0"
  },
  "tagline" : "You Know, for Search"
}

Log:
[2020-03-13T20:49:55,626][ERROR][o.e.b.ElasticsearchUncaughtExceptionHandler] [elastic-header-2.elastic-header.ism-mobile.svc.cluster.local] fatal error in thread [elasticsearch[elastic-header-2.elastic-header.ism-mobile.svc.cluster.local][refresh][T#2]], exiting
java.lang.AssertionError: Unexpected AlreadyClosedException
at org.elasticsearch.index.engine.InternalEngine.failOnTragicEvent(InternalEngine.java:1838) ~[elasticsearch-6.4.2.jar:6.4.2]
at org.elasticsearch.index.engine.InternalEngine.refresh(InternalEngine.java:1410) ~[elasticsearch-6.4.2.jar:6.4.2]
at org.elasticsearch.index.engine.InternalEngine.refresh(InternalEngine.java:1375) ~[elasticsearch-6.4.2.jar:6.4.2]
at org.elasticsearch.index.shard.IndexShard.refresh(IndexShard.java:880) ~[elasticsearch-6.4.2.jar:6.4.2]
at org.elasticsearch.index.IndexService.maybeRefreshEngine(IndexService.java:696) ~[elasticsearch-6.4.2.jar:6.4.2]
at org.elasticsearch.index.IndexService.access$400(IndexService.java:97) ~[elasticsearch-6.4.2.jar:6.4.2]
at org.elasticsearch.index.IndexService$AsyncRefreshTask.runInternal(IndexService.java:898) ~[elasticsearch-6.4.2.jar:6.4.2]
at org.elasticsearch.index.IndexService$BaseAsyncTask.run(IndexService.java:808) ~[elasticsearch-6.4.2.jar:6.4.2]
at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:624) ~[elasticsearch-6.4.2.jar:6.4.2]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) ~[?:1.8.0_211]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) ~[?:1.8.0_211]
at java.lang.Thread.run(Thread.java:748) [?:1.8.0_211]
Caused by: org.apache.lucene.store.AlreadyClosedException: Underlying file changed by an external force at 2020-03-13T12:53:19Z, (lock=NativeFSLock(path=/data/app/es/data/nodes/0/indices/ZrlFhoykSkGGqOnJLU9n3g/0/index/write.lock,impl=sun.nio.ch.FileLockImpl[0:9223372036854775807 exclusive valid],creationTime=2020-03-13T12:46:27Z))
at org.apache.lucene.store.NativeFSLockFactory$NativeFSLock.ensureValid(NativeFSLockFactory.java:191) ~[lucene-core-7.4.0.jar:7.4.0 9060ac689c270b02143f375de0348b7f626adebc - jpountz - 2018-06-18 16:51:45]
at org.apache.lucene.store.LockValidatingDirectoryWrapper.deleteFile(LockValidatingDirectoryWrapper.java:37) ~[lucene-core-7.4.0.jar:7.4.0 9060ac689c270b02143f375de0348b7f626adebc - jpountz - 2018-06-18 16:51:45]
at org.apache.lucene.index.IndexFileDeleter.deleteFile(IndexFileDeleter.java:696) ~[lucene-core-7.4.0.jar:7.4.0 9060ac689c270b02143f375de0348b7f626adebc - jpountz - 2018-06-18 16:51:45]
at org.apache.lucene.index.IndexFileDeleter.deleteFiles(IndexFileDeleter.java:690) ~[lucene-core-7.4.0.jar:7.4.0 9060ac689c270b02143f375de0348b7f626adebc - jpountz - 2018-06-18 16:51:45]
at org.apache.lucene.index.IndexFileDeleter.deleteNewFiles(IndexFileDeleter.java:664) ~[lucene-core-7.4.0.jar:7.4.0 9060ac689c270b02143f375de0348b7f626adebc - jpountz - 2018-06-18 16:51:45]
at org.apache.lucene.index.IndexWriter.deleteNewFiles(IndexWriter.java:4983) ~[lucene-core-7.4.0.jar:7.4.0 9060ac689c270b02143f375de0348b7f626adebc - jpountz - 2018-06-18 16:51:45]
at org.apache.lucene.index.IndexWriter.access$200(IndexWriter.java:211) ~[lucene-core-7.4.0.jar:7.4.0 9060ac689c270b02143f375de0348b7f626adebc - jpountz - 2018-06-18 16:51:45]
at org.apache.lucene.index.IndexWriter$1.lambda$deleteUnusedFiles$0(IndexWriter.java:355) ~[lucene-core-7.4.0.jar:7.4.0 9060ac689c270b02143f375de0348b7f626adebc - jpountz - 2018-06-18 16:51:45]
at org.apache.lucene.index.IndexWriter.processEvents(IndexWriter.java:5065) ~[lucene-core-7.4.0.jar:7.4.0 9060ac689c270b02143f375de0348b7f626adebc - jpountz - 2018-06-18 16:51:45]
at org.apache.lucene.index.IndexWriter.getReader(IndexWriter.java:504) ~[lucene-core-7.4.0.jar:7.4.0 9060ac689c270b02143f375de0348b7f626adebc - jpountz - 2018-06-18 16:51:45]
at org.apache.lucene.index.StandardDirectoryReader.doOpenFromWriter(StandardDirectoryReader.java:288) ~[lucene-core-7.4.0.jar:7.4.0 9060ac689c270b02143f375de0348b7f626adebc - jpountz - 2018-06-18 16:51:45]
at org.apache.lucene.index.StandardDirectoryReader.doOpenIfChanged(StandardDirectoryReader.java:263) ~[lucene-core-7.4.0.jar:7.4.0 9060ac689c270b02143f375de0348b7f626adebc - jpountz - 2018-06-18 16:51:45]
at org.apache.lucene.index.StandardDirectoryReader.doOpenIfChanged(StandardDirectoryReader.java:253) ~[lucene-core-7.4.0.jar:7.4.0 9060ac689c270b02143f375de0348b7f626adebc - jpountz - 2018-06-18 16:51:45]
at org.apache.lucene.index.FilterDirectoryReader.doOpenIfChanged(FilterDirectoryReader.java:104) ~[lucene-core-7.4.0.jar:7.4.0 9060ac689c270b02143f375de0348b7f626adebc - jpountz - 2018-06-18 16:51:45]
at org.apache.lucene.index.DirectoryReader.openIfChanged(DirectoryReader.java:140) ~[lucene-core-7.4.0.jar:7.4.0 9060ac689c270b02143f375de0348b7f626adebc - jpountz - 2018-06-18 16:51:45]
at org.apache.lucene.search.SearcherManager.refreshIfNeeded(SearcherManager.java:156) ~[lucene-core-7.4.0.jar:7.4.0 9060ac689c270b02143f375de0348b7f626adebc - jpountz - 2018-06-18 16:51:45]
at org.apache.lucene.search.SearcherManager.refreshIfNeeded(SearcherManager.java:58) ~[lucene-core-7.4.0.jar:7.4.0 9060ac689c270b02143f375de0348b7f626adebc - jpountz - 2018-06-18 16:51:45]
at org.apache.lucene.search.ReferenceManager.doMaybeRefresh(ReferenceManager.java:176) ~[lucene-core-7.4.0.jar:7.4.0 9060ac689c270b02143f375de0348b7f626adebc - jpountz - 2018-06-18 16:51:45]
at org.apache.lucene.search.ReferenceManager.maybeRefreshBlocking(ReferenceManager.java:253) ~[lucene-core-7.4.0.jar:7.4.0 9060ac689c270b02143f375de0348b7f626adebc - jpountz - 2018-06-18 16:51:45]
at org.elasticsearch.index.engine.InternalEngine$ExternalSearcherManager.refreshIfNeeded(InternalEngine.java:284) ~[elasticsearch-6.4.2.jar:6.4.2]
at org.elasticsearch.index.engine.InternalEngine$ExternalSearcherManager.refreshIfNeeded(InternalEngine.java:259) ~[elasticsearch-6.4.2.jar:6.4.2]
at org.apache.lucene.search.ReferenceManager.doMaybeRefresh(ReferenceManager.java:176) ~[lucene-core-7.4.0.jar:7.4.0 9060ac689c270b02143f375de0348b7f626adebc - jpountz - 2018-06-18 16:51:45]
at org.apache.lucene.search.ReferenceManager.maybeRefreshBlocking(ReferenceManager.java:253) ~[lucene-core-7.4.0.jar:7.4.0 9060ac689c270b02143f375de0348b7f626adebc - jpountz - 2018-06-18 16:51:45]
at org.elasticsearch.index.engine.InternalEngine.refresh(InternalEngine.java:1396) ~[elasticsearch-6.4.2.jar:6.4.2]
... 10 more

The last-modified time of /data/app/es/data/nodes/0/indices/ZrlFhoykSkGGqOnJLU9n3g/0/index/write.lock was changed by something other than Elasticsearch. This indicates that something else is writing to Elasticsearch's data path, which is very very bad. You should work out what this "something else" is and stop it.

Elasticsearch treats this as an indication that something is very wrong with the environment and stops writing to disk to avoid any further chance of corrupting your data.
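For reference, the check that fails in the stack trace above lives in Lucene's NativeFSLockFactory (NativeFSLock.ensureValid): the lock file's timestamp is recorded when the lock is obtained, and every subsequent write re-validates it. Below is a minimal sketch of that principle only, not the actual Lucene implementation; the path in main is the one from the log and is only meaningful on the affected node.

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.attribute.FileTime;

// Simplified illustration of the kind of validation behind the
// "Underlying file changed by an external force" error: record the lock
// file's timestamp at lock acquisition and fail any later operation if it
// has changed, because that means something outside this process touched
// the index directory.
public class LockValidationSketch {

    private final Path lockFile;
    private final FileTime timeAtAcquisition; // timestamp recorded when the lock was taken

    public LockValidationSketch(Path lockFile) throws IOException {
        this.lockFile = lockFile;
        this.timeAtAcquisition = Files.getLastModifiedTime(lockFile);
    }

    // Throws if write.lock has disappeared or its timestamp no longer matches
    // the value recorded at lock acquisition.
    public void ensureValid() throws IOException {
        if (!Files.exists(lockFile)) {
            throw new IllegalStateException("Lock file was removed: " + lockFile);
        }
        FileTime current = Files.getLastModifiedTime(lockFile);
        if (!current.equals(timeAtAcquisition)) {
            throw new IllegalStateException(
                "Underlying file changed by an external force at " + current
                    + " (lock obtained at " + timeAtAcquisition + ")");
        }
    }

    public static void main(String[] args) throws IOException {
        // Path taken from the log above; adjust to your own data directory.
        Path lock = Paths.get("/data/app/es/data/nodes/0/indices/ZrlFhoykSkGGqOnJLU9n3g/0/index/write.lock");
        System.out.println("write.lock last modified: " + Files.getLastModifiedTime(lock));
    }
}

In your log the lock was created at 2020-03-13T12:46:27Z but the file's timestamp had changed to 2020-03-13T12:53:19Z, which is why Elasticsearch concluded that something outside the node had touched its data path.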

Oh, just saw this. GlusterFS is a risky choice IMO and there's no need for it.

Thanks for your support. I'll try other storage methods.
