Hello everyone,
I have 6 old 7.17 nodes and 6 new 8.12 nodes in the cluster now.
I am trying to remove the old nodes, but some shard recoveries are failing.
Is it just impossible to recover a shard from a 7.17 node to an 8.12 node?
Here are the details:
{
  "note" : "No shard was specified in the explain API request, so this response explains a randomly chosen unassigned shard. There may be other unassigned shards in this cluster which cannot be assigned for different reasons. It may not be possible to assign this shard until one of the other shards is assigned correctly. To explain the allocation of other shards (whether assigned or unassigned) you must specify the target shard in the request to this API.",
  "index" : "20240411",
  "shard" : 9,
  "primary" : false,
  "current_state" : "unassigned",
  "unassigned_info" : {
    "reason" : "ALLOCATION_FAILED",
    "at" : "2024-04-11T14:21:26.858Z",
    "failed_allocation_attempts" : 5,
    "details" : "failed shard on node [sylOFunYTxGvNXUxLY1HIA]: failed recovery, failure RecoveryFailedException[[20240411][9]: Recovery failed from {elasticsearch-127}{KY-PQ3jUQYy-O1YWPT-Fyg}{FiY9Xd6eSDO0IJ8U54wwUA}{elasticsearch-127}{xxx}{xxx:9300}{dm}{7.17.4}{6000099-7170499}{xpack.installed=true, transform.node=false} into {elasticsearch-204}{sylOFunYTxGvNXUxLY1HIA}{yXmg1IH4Qha2L1l3RR8Prw}{elasticsearch-204}{xxx}{xxx:9300}{dm}{8.12.0}{7000099-8500008}{transform.config_version=10.0.0, xpack.installed=true, ml.config_version=12.0.0}]; nested: RemoteTransportException[[elasticsearch-127][xxx:9300][internal:index/shard/recovery/start_recovery]]; nested: RecoveryEngineException[Phase[2] failed to send/replay operations]; nested: RemoteTransportException[[elasticsearch-204][xxx:9300][internal:index/shard/recovery/translog_ops]]; nested: NotSerializableExceptionWrapper[document_parsing_exception: [1:613] failed to parse field [system] of type [keyword] in document with id '34f98f0559a04d3dba84e07cca1da74c'. Preview of field's value: '{int32[]=null}']; nested: IllegalStateException[Can't get text on a START_OBJECT at 1:596]; ",
    "last_allocation_status" : "no_attempt"
  }
}
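
For reference, the note at the top of the response says it explained a randomly chosen unassigned shard. To target the exact shard from the error above, I can specify it in the explain request (a sketch, assuming Kibana Dev Tools console syntax; the index name and shard number are taken from the output above):

```
GET _cluster/allocation/explain
{
  "index": "20240411",
  "shard": 9,
  "primary": false
}
```

Since the failure is a `document_parsing_exception` on field `system` (mapped as `keyword` but receiving an object), I can also inspect how that field is mapped on the index:

```
GET 20240411/_mapping/field/system
```

And once the underlying cause is fixed, the allocation can be retried after the 5 failed attempts with:

```
POST _cluster/reroute?retry_failed=true
```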