Shard failed recovery during rolling upgrade due to storeTermVector error

Hi there,

I am testing a rolling upgrade from 7.5 to 8.0 on a single-node cluster. Following the documentation, I upgraded 7.5->7.17 first, then 7.17->8.0. The 7.5 to 7.17 rolling upgrade went smoothly, but after the 7.17->8.0 step some indices became unavailable. Below is the error message reported by the cluster allocation explain API.
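
For reference, I pulled that output with a request along these lines (curl form shown against the local node, with the failing index filled in):

  curl -X GET "localhost:9200/_cluster/allocation/explain?pretty" \
       -H 'Content-Type: application/json' -d'
  {
    "index": "chungosgr",
    "shard": 0,
    "primary": true
  }'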

  "index" : "chungosgr",
  "shard" : 0,
  "primary" : true,
  "current_state" : "unassigned",
  "unassigned_info" : {
    "reason" : "ALLOCATION_FAILED",
    "at" : "2022-03-09T07:57:20.292Z",
    "failed_allocation_attempts" : 5,
    "details" : """failed shard on node [EzXbMBS3T02nBXffM2Wngw]: failed recovery, failure org.elasticsearch.indices.recovery.RecoveryFailedException: [chungosgr][0]: Recovery failed on {node-dev}{EzXbMBS3T02nBXffM2Wngw}{5cZYFkWIQEGVEj4rk3PCZw}{10.93.13.144}{10.93.13.144:9300}{cdfhilmrstw}{ml.max_jvm_size=3221225472, xpack.installed=true, ml.machine_memory=6223482880}
	at org.elasticsearch.index.shard.IndexShard.lambda$executeRecovery$21(IndexShard.java:3101)
	at org.elasticsearch.action.ActionListener$1.onFailure(ActionListener.java:144)
	at org.elasticsearch.index.shard.StoreRecovery.lambda$recoveryListener$6(StoreRecovery.java:383)
	at org.elasticsearch.action.ActionListener$1.onFailure(ActionListener.java:144)
	at org.elasticsearch.action.ActionListener.completeWith(ActionListener.java:439)
	at org.elasticsearch.index.shard.StoreRecovery.recoverFromStore(StoreRecovery.java:86)
	at org.elasticsearch.index.shard.IndexShard.recoverFromStore(IndexShard.java:2240)
	at org.elasticsearch.action.ActionRunnable$2.doRun(ActionRunnable.java:62)
	at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:776)
	at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:26)
	at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136)
	at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635)
	at java.base/java.lang.Thread.run(Thread.java:833)
Caused by: [cdm.chungosgr-2021-11-05/474ojYZISKaq9nz9Ee8wOg][[cdm.chungosgr-2021-11-05][0]] org.elasticsearch.index.shard.IndexShardRecoveryException: failed recovery
	... 11 more
Caused by: java.lang.IllegalArgumentException: cannot change field "transcript" from storeTermVector=true to inconsistent storeTermVector=false
	at org.apache.lucene.index.FieldInfo.verifySameStoreTermVectors(FieldInfo.java:281)
	at org.apache.lucene.index.FieldInfos$FieldNumbers.verifySameSchema(FieldInfos.java:423)
	at org.apache.lucene.index.FieldInfos$FieldNumbers.addOrGet(FieldInfos.java:357)
	at org.apache.lucene.index.IndexWriter.getFieldNumberMap(IndexWriter.java:1262)
	at org.apache.lucene.index.IndexWriter.<init>(IndexWriter.java:1116)
	at org.elasticsearch.index.engine.InternalEngine.createWriter(InternalEngine.java:2348)
	at org.elasticsearch.index.engine.InternalEngine.createWriter(InternalEngine.java:2336)
	at org.elasticsearch.index.engine.InternalEngine.<init>(InternalEngine.java:228)
	at org.elasticsearch.index.engine.InternalEngine.<init>(InternalEngine.java:191)
	at org.elasticsearch.index.engine.InternalEngineFactory.newReadWriteEngine(InternalEngineFactory.java:14)
	at org.elasticsearch.index.shard.IndexShard.innerOpenEngineAndTranslog(IndexShard.java:1931)
	at org.elasticsearch.index.shard.IndexShard.openEngineAndRecoverFromTranslog(IndexShard.java:1895)
	at org.elasticsearch.index.shard.StoreRecovery.internalRecoverFromStore(StoreRecovery.java:461)
	at org.elasticsearch.index.shard.StoreRecovery.lambda$recoverFromStore$0(StoreRecovery.java:88)
	at org.elasticsearch.action.ActionListener.completeWith(ActionListener.java:436)
	... 8 more
""",

I could not find anything related to this storeTermVector error in the breaking changes, and searching for the error message in this forum or on Google turns up nothing either.

The error shows up for only about half of the indices; the other indices are fine. All of the indices were ingested with the same mapping for the transcript field:

            "transcript": {
                "type": "text",
                "index": true,
                "term_vector": "with_positions_offsets",
                "analyzer": "snowball",
                "copy_to": "all",
                "fields": {
                    "folded": {
                        "type": "text",
                        "index": true,
                        "analyzer": "latin"
                    }
                }
            },

I am puzzled about how to resolve this error. Could anyone please shed some light? Any help is much appreciated!

Thanks,
Jenna

