CorruptIndexException: docs out of order

Hi,
I am running Elasticsearch v8.2.2 as part of our ELK stack on Windows Server 2019, and I seem to have a corrupt index. The health status has been red due to an unassigned shard. I have no snapshot and no replica. The data is not critical, so I can live with data loss, but it would be great if I could get the application up and running again (i.e. health status back to 'green' and rules and connectors working again). So I tried

POST "https://localhost:9200/_cluster/reroute" -H 'Content-Type: application/json' -d '
{
    "commands": [{
        "allocate_empty_primary": {
            "index": ".kibana_task_manager_8.2.2_001",
            "shard": 0,
            "node": "sps-rer",
"accept_data_loss":true
        }
    }]
}'
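
I assume I could also just ask the cluster to retry the failed allocation with something like

curl -X POST "https://localhost:9200/_cluster/reroute?retry_failed=true"

but I would not expect that to get past the underlying corruption.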

The shard is still unassigned. Checking the shard status via

GET "https://localhost:9200/_cat/shards?v&h=n,index,shard,prirep,state,sto,sc,unassigned.reason,unassigned.details&s=sto,index"

shows the 'CorruptIndexException':

 .kibana_task_manager_8.2.2_001                                0     p      UNASSIGNED            ALLOCATION_FAILED failed shard on node [VuiwHmAaRhuNflJrkrHh4g]: failed recovery, failure org.elasticsearch.indices.recovery.RecoveryFailedException: [.kibana_task_manager_8.2.2_001][0]: Recovery failed on {sps-rer}{VuiwHmAaRhuNflJrkrHh4g}{vj6XZiQpRuO2fAlmtZPAPg}{192.168.1.148}{192.168.1.148:9300}{cdfhilmrstw}{ml.machine_memory=8588910592, xpack.installed=true, ml.max_jvm_size=2147483648}
        at org.elasticsearch.index.shard.IndexShard.lambda$executeRecovery$20(IndexShard.java:3098)
        at org.elasticsearch.action.ActionListener$2.onFailure(ActionListener.java:170)
        at org.elasticsearch.index.shard.StoreRecovery.lambda$recoveryListener$6(StoreRecovery.java:375)
        at org.elasticsearch.action.ActionListener$2.onFailure(ActionListener.java:170)
        at org.elasticsearch.action.ActionListener.completeWith(ActionListener.java:465)
        at org.elasticsearch.index.shard.StoreRecovery.recoverFromStore(StoreRecovery.java:86)
        at org.elasticsearch.index.shard.IndexShard.recoverFromStore(IndexShard.java:2239)
        at org.elasticsearch.action.ActionRunnable$2.doRun(ActionRunnable.java:62)
        at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:773)
        at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:26)
        at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136)
        at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635)
        at java.base/java.lang.Thread.run(Thread.java:833)
Caused by: [.kibana_task_manager_8.2.2_001/NGjOQ7e7TmWpN4H4Xy742g][[.kibana_task_manager_8.2.2_001][0]] org.elasticsearch.index.shard.IndexShardRecoveryException: failed to fetch index version after copying it over
        at org.elasticsearch.index.shard.StoreRecovery.internalRecoverFromStore(StoreRecovery.java:428)
        at org.elasticsearch.index.shard.StoreRecovery.lambda$recoverFromStore$0(StoreRecovery.java:88)
        at org.elasticsearch.action.ActionListener.completeWith(ActionListener.java:462)
        ... 8 more
Caused by: org.apache.lucene.index.CorruptIndexException: failed engine (reason: [merge failed]) (resource=preexisting_corruption)
        at org.elasticsearch.index.store.Store.failIfCorrupted(Store.java:611)
        at org.elasticsearch.index.store.Store.failIfCorrupted(Store.java:593)
        at org.elasticsearch.index.shard.StoreRecovery.internalRecoverFromStore(StoreRecovery.java:402)
        ... 10 more
Caused by: java.io.IOException: failed engine (reason: [merge failed])
        at org.elasticsearch.index.engine.Engine.failEngine(Engine.java:1140)
        at org.elasticsearch.index.engine.InternalEngine$EngineMergeScheduler$2.doRun(InternalEngine.java:2573)
        at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:773)
        at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:26)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635)
        at java.lang.Thread.run(Thread.java:833)
Caused by: org.apache.lucene.index.CorruptIndexException: docs out of order (55 <= 13621 ) (resource=RateLimitedIndexOutput(FSIndexOutput(path="C:\elastic\data\indices\NGjOQ7e7TmWpN4H4Xy742g\0\index\_h30qb_Lucene90_0.doc")))
        at org.apache.lucene.codecs.lucene90.Lucene90PostingsWriter.startDoc(Lucene90PostingsWriter.java:230)
        at org.apache.lucene.codecs.PushPostingsWriterBase.writeTerm(PushPostingsWriterBase.java:145)
        at org.apache.lucene.codecs.lucene90.blocktree.Lucene90BlockTreeTermsWriter$TermsWriter.write(Lucene90BlockTreeTermsWriter.java:1024)
        at org.apache.lucene.codecs.lucene90.blocktree.Lucene90BlockTreeTermsWriter.write(Lucene90BlockTreeTermsWriter.java:367)
        at org.apache.lucene.codecs.FieldsConsumer.merge(FieldsConsumer.java:95)
        at org.apache.lucene.codecs.perfield.PerFieldPostingsFormat$FieldsWriter.merge(PerFieldPostingsFormat.java:204)
        at org.apache.lucene.index.SegmentMerger.mergeTerms(SegmentMerger.java:208)
        at org.apache.lucene.index.SegmentMerger.mergeWithLogging(SegmentMerger.java:293)
        at org.apache.lucene.index.SegmentMerger.merge(SegmentMerger.java:136)
        at org.apache.lucene.index.IndexWriter.mergeMiddle(IndexWriter.java:4964)
        at org.apache.lucene.index.IndexWriter.merge(IndexWriter.java:4500)
        at org.apache.lucene.index.IndexWriter$IndexWriterMergeSource.merge(IndexWriter.java:6252)
        at org.apache.lucene.index.ConcurrentMergeScheduler.doMerge(ConcurrentMergeScheduler.java:638)
        at org.elasticsearch.index.engine.ElasticsearchConcurrentMergeScheduler.doMerge(ElasticsearchConcurrentMergeScheduler.java:118)
        at org.apache.lucene.index.ConcurrentMergeScheduler$MergeThread.run(ConcurrentMergeScheduler.java:699)
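
If it helps with diagnosing this further, I assume I can also pull the full allocation explanation with something like

curl -X GET "https://localhost:9200/_cluster/allocation/explain" -H 'Content-Type: application/json' -d '
{
    "index": ".kibana_task_manager_8.2.2_001",
    "shard": 0,
    "primary": true
}'

and post the output here if that would be useful.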

There is still about 20 GB of space left on the drive. In a second attempt, after creating a restore point of the VM (which I eventually used to roll back), I deleted the index: I allowed my user to manipulate restricted indices and executed

DELETE "https://localhost:9200/.kibana_task_manager_8.2.2_001"

hoping that Kibana might recreate the missing index. The index was gone, but the recreation did not happen. So I restored the VM to the state before the index deletion and am back to the CorruptIndexException.
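
One thing I have not tried yet: if I understand the docs correctly, the elasticsearch-shard tool can remove the corrupted parts of a shard (at the cost of losing the affected documents) while the node is stopped. Roughly, from the Elasticsearch installation directory on the server:

REM stop the Elasticsearch service first, then run:
bin\elasticsearch-shard.bat remove-corrupted-data --index .kibana_task_manager_8.2.2_001 --shard-id 0
REM as far as I understand, the tool asks for confirmation and prints a reroute command to run once the node is back up

but I am not sure whether that is safe or sensible for a Kibana system index.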

Is there a chance of getting this setup up and running again or do I have to uninstall/reinstall?