Index status red due to failed engine (reason: [merge failed])

Hi,
One of my indices appears to be corrupt because a merge failed, with the error below:

org.apache.lucene.index.MergePolicy$MergeException: org.apache.lucene.index.CorruptIndexException: docs out of order (594 <= 594 ) (resource=RateLimitedIndexOutput(FSIndexOutput(path="/home/myapp/myapp-logs/elasticsearch/data/nodes/0/indices/E7qQ8gWrSSSF3SAaE_gtBQ/0/index/_3kwe9_Lucene50_0.doc")))

The current cluster health status is red, and the myapp-transactions index is red as well.
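The per-index and per-shard breakdown below comes from the cluster health API with the level parameter, roughly:

GET _cluster/health?level=shards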

{
  "cluster_name" : "elasticsearch",
  "status" : "red",
  "timed_out" : false,
  "number_of_nodes" : 1,
  "number_of_data_nodes" : 1,
  "active_primary_shards" : 314,
  "active_shards" : 314,
  "relocating_shards" : 0,
  "initializing_shards" : 0,
  "unassigned_shards" : 306,
  "delayed_unassigned_shards" : 0,
  "number_of_pending_tasks" : 0,
  "number_of_in_flight_fetch" : 0,
  "task_max_waiting_in_queue_millis" : 0,
  "active_shards_percent_as_number" : 50.645161290322584,
  "indices" : {
  ...
  ...
  "myapp-transactions" : {
      "status" : "red",
      "number_of_shards" : 1,
      "number_of_replicas" : 1,
      "active_primary_shards" : 0,
      "active_shards" : 0,
      "relocating_shards" : 0,
      "initializing_shards" : 0,
      "unassigned_shards" : 2,
      "shards" : {
        "0" : {
          "status" : "red",
          "primary_active" : false,
          "active_shards" : 0,
          "relocating_shards" : 0,
          "initializing_shards" : 0,
          "unassigned_shards" : 2
        }
      }
    },
  ...
  }
}

If I check _cluster/allocation/explain, it returns the output below.
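The request for this shard is roughly as follows (without a body, the API picks an arbitrary unassigned shard to explain):

GET _cluster/allocation/explain
{
  "index": "myapp-transactions",
  "shard": 0,
  "primary": true
}

The response: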

{
  "index" : "myapp-transactions",
  "shard" : 0,
  "primary" : true,
  "current_state" : "unassigned",
  "unassigned_info" : {
    "reason" : "CLUSTER_RECOVERED",
    "at" : "2023-06-16T12:34:52.788Z",
    "last_allocation_status" : "no_valid_shard_copy"
  },
  "can_allocate" : "no_valid_shard_copy",
  "allocate_explanation" : "cannot allocate because all found copies of the shard are either stale or corrupt",
  "node_allocation_decisions" : [
    {
      "node_id" : "Jli45SwMTo6fDZ8nt7u7_g",
      "node_name" : "MYAPPGWSERVER",
      "transport_address" : "192.15.21.15:9300",
      "node_attributes" : {
        "ml.machine_memory" : "8017039360",
        "xpack.installed" : "true",
        "ml.max_open_jobs" : "20"
      },
      "node_decision" : "no",
      "store" : {
        "in_sync" : true,
        "allocation_id" : "1QpYyUyZQc6fgTDnZILY0g",
        "store_exception" : {
          "type" : "corrupt_index_exception",
          "reason" : "failed engine (reason: [merge failed]) (resource=preexisting_corruption)",
          "caused_by" : {
            "type" : "i_o_exception",
            "reason" : "failed engine (reason: [merge failed])",
            "caused_by" : {
              "type" : "corrupt_index_exception",
              "reason" : "docs out of order (594 <= 594 ) (resource=RateLimitedIndexOutput(FSIndexOutput(path=\"/home/myapp/myapp-logs/elasticsearch/data/nodes/0/indices/E7qQ8gWrSSSF3SAaE_gtBQ/0/index/_3kwe9_Lucene50_0.doc\")))"
            }
          }
        }
      }
    }
  ]
}

Is there any other way to recover the index, even partially, or is there any workaround for this issue?

FYI, storage usage is at 81%.
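Per-node disk usage can be double-checked with the cat allocation API, e.g.:

GET _cat/allocation?v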

Thank you

What version are you using?

Also there should be a log entry about the exception which would include the full stack trace. Could you share that here please?
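(If you're not sure of the version, the root endpoint reports it:

GET /

Look at "version" : { "number" : ... } in the response.)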

We are using version 7.5.1.

Is that exception logged in elasticsearch.log? If yes, the full log messages are below:

[2023-06-16T01:30:00,045][INFO ][o.e.x.m.MlDailyMaintenanceService] [P1KSGWKIBANA] triggering scheduled [ML] maintenance tasks
[2023-06-16T01:30:00,349][INFO ][o.e.x.m.a.TransportDeleteExpiredDataAction] [P1KSGWKIBANA] Deleting expired data
[2023-06-16T01:30:01,022][INFO ][o.e.x.m.a.TransportDeleteExpiredDataAction] [P1KSGWKIBANA] Completed deletion of expired ML data
[2023-06-16T01:30:01,089][INFO ][o.e.x.m.MlDailyMaintenanceService] [P1KSGWKIBANA] Successfully completed [ML] maintenance tasks
[2023-06-16T06:08:30,096][DEBUG][o.e.a.s.m.TransportMasterNodeAction] [P1KSGWKIBANA] Get stats for datafeed '_all'
[2023-06-16T06:08:30,096][DEBUG][o.e.a.s.m.TransportMasterNodeAction] [P1KSGWKIBANA] Get stats for datafeed '_all'
[2023-06-16T06:08:30,096][DEBUG][o.e.a.s.m.TransportMasterNodeAction] [P1KSGWKIBANA] Get stats for datafeed '_all'
[2023-06-16T07:02:27,906][DEBUG][o.e.a.a.c.n.t.c.TransportCancelTasksAction] [P1KSGWKIBANA] Received ban for the parent [Jli45SwMTo6fDZ8nt7u7_g:69834072] on the node [Jli45SwMTo6fDZ8nt7u7_g], reason: [by user request]
[2023-06-16T07:02:27,912][DEBUG][o.e.a.a.c.n.t.c.TransportCancelTasksAction] [P1KSGWKIBANA] Sending remove ban for tasks with the parent [Jli45SwMTo6fDZ8nt7u7_g:69834072] to the node [Jli45SwMTo6fDZ8nt7u7_g]
[2023-06-16T07:02:27,913][DEBUG][o.e.a.a.c.n.t.c.TransportCancelTasksAction] [P1KSGWKIBANA] Removing ban for the parent [Jli45SwMTo6fDZ8nt7u7_g:69834072] on the node [Jli45SwMTo6fDZ8nt7u7_g]
[2023-06-16T07:46:45,078][INFO ][o.e.m.j.JvmGcMonitorService] [P1KSGWKIBANA] [gc][1824937] overhead, spent [270ms] collecting in the last [1s]
[2023-06-16T07:47:30,180][INFO ][o.e.m.j.JvmGcMonitorService] [P1KSGWKIBANA] [gc][1824982] overhead, spent [324ms] collecting in the last [1s]
[2023-06-16T08:30:00,010][INFO ][o.e.x.s.SnapshotRetentionTask] [P1KSGWKIBANA] starting SLM retention snapshot cleanup task
[2023-06-16T08:40:51,021][DEBUG][o.e.a.s.m.TransportMasterNodeAction] [MYAPPGWSERVER] Get stats for datafeed '_all'
[2023-06-16T08:40:51,196][DEBUG][o.e.a.s.m.TransportMasterNodeAction] [MYAPPGWSERVER] Get stats for datafeed '_all'
[2023-06-16T08:40:51,249][DEBUG][o.e.a.s.m.TransportMasterNodeAction] [MYAPPGWSERVER] Get stats for datafeed '_all'
[2023-06-16T11:57:20,616][WARN ][o.e.i.e.Engine           ] [MYAPPGWSERVER] [myapp-transactions][0] failed engine [merge failed]
org.apache.lucene.index.MergePolicy$MergeException: org.apache.lucene.index.CorruptIndexException: docs out of order (594 <= 594 ) (resource=RateLimitedIndexOutput(FSIndexOutput(path="/home/myapp/myapp-logs/elasticsearch/data/nodes/0/indices/E7qQ8gWrSSSF3SAaE_gtBQ/0/index/_3kwe9_Lucene50_0.doc")))
	at org.elasticsearch.index.engine.InternalEngine$EngineMergeScheduler$2.doRun(InternalEngine.java:2396) [elasticsearch-7.5.1.jar:7.5.1]
	at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:773) [elasticsearch-7.5.1.jar:7.5.1]
	at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) [elasticsearch-7.5.1.jar:7.5.1]
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) [?:?]
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) [?:?]
	at java.lang.Thread.run(Thread.java:830) [?:?]
Caused by: org.apache.lucene.index.CorruptIndexException: docs out of order (594 <= 594 ) (resource=RateLimitedIndexOutput(FSIndexOutput(path="/home/myapp/myapp-logs/elasticsearch/data/nodes/0/indices/E7qQ8gWrSSSF3SAaE_gtBQ/0/index/_3kwe9_Lucene50_0.doc")))
	at org.apache.lucene.codecs.lucene50.Lucene50PostingsWriter.startDoc(Lucene50PostingsWriter.java:236) ~[lucene-core-8.3.0.jar:8.3.0 2aa586909b911e66e1d8863aa89f173d69f86cd2 - ishan - 2019-10-25 23:10:03]
	at org.apache.lucene.codecs.PushPostingsWriterBase.writeTerm(PushPostingsWriterBase.java:148) ~[lucene-core-8.3.0.jar:8.3.0 2aa586909b911e66e1d8863aa89f173d69f86cd2 - ishan - 2019-10-25 23:10:03]
	at org.apache.lucene.codecs.blocktree.BlockTreeTermsWriter$TermsWriter.write(BlockTreeTermsWriter.java:865) ~[lucene-core-8.3.0.jar:8.3.0 2aa586909b911e66e1d8863aa89f173d69f86cd2 - ishan - 2019-10-25 23:10:03]
	at org.apache.lucene.codecs.blocktree.BlockTreeTermsWriter.write(BlockTreeTermsWriter.java:344) ~[lucene-core-8.3.0.jar:8.3.0 2aa586909b911e66e1d8863aa89f173d69f86cd2 - ishan - 2019-10-25 23:10:03]
	at org.apache.lucene.codecs.FieldsConsumer.merge(FieldsConsumer.java:105) ~[lucene-core-8.3.0.jar:8.3.0 2aa586909b911e66e1d8863aa89f173d69f86cd2 - ishan - 2019-10-25 23:10:03]
	at org.apache.lucene.codecs.perfield.PerFieldPostingsFormat$FieldsWriter.merge(PerFieldPostingsFormat.java:169) ~[lucene-core-8.3.0.jar:8.3.0 2aa586909b911e66e1d8863aa89f173d69f86cd2 - ishan - 2019-10-25 23:10:03]
	at org.apache.lucene.index.SegmentMerger.mergeTerms(SegmentMerger.java:245) ~[lucene-core-8.3.0.jar:8.3.0 2aa586909b911e66e1d8863aa89f173d69f86cd2 - ishan - 2019-10-25 23:10:03]
	at org.apache.lucene.index.SegmentMerger.merge(SegmentMerger.java:140) ~[lucene-core-8.3.0.jar:8.3.0 2aa586909b911e66e1d8863aa89f173d69f86cd2 - ishan - 2019-10-25 23:10:03]
	at org.apache.lucene.index.IndexWriter.mergeMiddle(IndexWriter.java:4463) ~[lucene-core-8.3.0.jar:8.3.0 2aa586909b911e66e1d8863aa89f173d69f86cd2 - ishan - 2019-10-25 23:10:03]
	at org.apache.lucene.index.IndexWriter.merge(IndexWriter.java:4057) ~[lucene-core-8.3.0.jar:8.3.0 2aa586909b911e66e1d8863aa89f173d69f86cd2 - ishan - 2019-10-25 23:10:03]
	at org.apache.lucene.index.ConcurrentMergeScheduler.doMerge(ConcurrentMergeScheduler.java:625) ~[lucene-core-8.3.0.jar:8.3.0 2aa586909b911e66e1d8863aa89f173d69f86cd2 - ishan - 2019-10-25 23:10:03]
	at org.elasticsearch.index.engine.ElasticsearchConcurrentMergeScheduler.doMerge(ElasticsearchConcurrentMergeScheduler.java:101) ~[elasticsearch-7.5.1.jar:7.5.1]
	at org.apache.lucene.index.ConcurrentMergeScheduler$MergeThread.run(ConcurrentMergeScheduler.java:662) ~[lucene-core-8.3.0.jar:8.3.0 2aa586909b911e66e1d8863aa89f173d69f86cd2 - ishan - 2019-10-25 23:10:03]
[2023-06-16T11:57:20,702][WARN ][o.e.i.c.IndicesClusterStateService] [MYAPPGWSERVER] [myapp-transactions][0] marking and sending shard failed due to [shard failure, reason [merge failed]]
org.apache.lucene.index.MergePolicy$MergeException: org.apache.lucene.index.CorruptIndexException: docs out of order (594 <= 594 ) (resource=RateLimitedIndexOutput(FSIndexOutput(path="/home/myapp/myapp-logs/elasticsearch/data/nodes/0/indices/E7qQ8gWrSSSF3SAaE_gtBQ/0/index/_3kwe9_Lucene50_0.doc")))
	at org.elasticsearch.index.engine.InternalEngine$EngineMergeScheduler$2.doRun(InternalEngine.java:2396) ~[elasticsearch-7.5.1.jar:7.5.1]
	at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:773) ~[elasticsearch-7.5.1.jar:7.5.1]
	at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) ~[elasticsearch-7.5.1.jar:7.5.1]
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) [?:?]
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) [?:?]
	at java.lang.Thread.run(Thread.java:830) [?:?]
Caused by: org.apache.lucene.index.CorruptIndexException: docs out of order (594 <= 594 ) (resource=RateLimitedIndexOutput(FSIndexOutput(path="/home/myapp/myapp-logs/elasticsearch/data/nodes/0/indices/E7qQ8gWrSSSF3SAaE_gtBQ/0/index/_3kwe9_Lucene50_0.doc")))
	at org.apache.lucene.codecs.lucene50.Lucene50PostingsWriter.startDoc(Lucene50PostingsWriter.java:236) ~[lucene-core-8.3.0.jar:8.3.0 2aa586909b911e66e1d8863aa89f173d69f86cd2 - ishan - 2019-10-25 23:10:03]
	at org.apache.lucene.codecs.PushPostingsWriterBase.writeTerm(PushPostingsWriterBase.java:148) ~[lucene-core-8.3.0.jar:8.3.0 2aa586909b911e66e1d8863aa89f173d69f86cd2 - ishan - 2019-10-25 23:10:03]
	at org.apache.lucene.codecs.blocktree.BlockTreeTermsWriter$TermsWriter.write(BlockTreeTermsWriter.java:865) ~[lucene-core-8.3.0.jar:8.3.0 2aa586909b911e66e1d8863aa89f173d69f86cd2 - ishan - 2019-10-25 23:10:03]
	at org.apache.lucene.codecs.blocktree.BlockTreeTermsWriter.write(BlockTreeTermsWriter.java:344) ~[lucene-core-8.3.0.jar:8.3.0 2aa586909b911e66e1d8863aa89f173d69f86cd2 - ishan - 2019-10-25 23:10:03]
	at org.apache.lucene.codecs.FieldsConsumer.merge(FieldsConsumer.java:105) ~[lucene-core-8.3.0.jar:8.3.0 2aa586909b911e66e1d8863aa89f173d69f86cd2 - ishan - 2019-10-25 23:10:03]
	at org.apache.lucene.codecs.perfield.PerFieldPostingsFormat$FieldsWriter.merge(PerFieldPostingsFormat.java:169) ~[lucene-core-8.3.0.jar:8.3.0 2aa586909b911e66e1d8863aa89f173d69f86cd2 - ishan - 2019-10-25 23:10:03]
	at org.apache.lucene.index.SegmentMerger.mergeTerms(SegmentMerger.java:245) ~[lucene-core-8.3.0.jar:8.3.0 2aa586909b911e66e1d8863aa89f173d69f86cd2 - ishan - 2019-10-25 23:10:03]
	at org.apache.lucene.index.SegmentMerger.merge(SegmentMerger.java:140) ~[lucene-core-8.3.0.jar:8.3.0 2aa586909b911e66e1d8863aa89f173d69f86cd2 - ishan - 2019-10-25 23:10:03]
	at org.apache.lucene.index.IndexWriter.mergeMiddle(IndexWriter.java:4463) ~[lucene-core-8.3.0.jar:8.3.0 2aa586909b911e66e1d8863aa89f173d69f86cd2 - ishan - 2019-10-25 23:10:03]
	at org.apache.lucene.index.IndexWriter.merge(IndexWriter.java:4057) ~[lucene-core-8.3.0.jar:8.3.0 2aa586909b911e66e1d8863aa89f173d69f86cd2 - ishan - 2019-10-25 23:10:03]
	at org.apache.lucene.index.ConcurrentMergeScheduler.doMerge(ConcurrentMergeScheduler.java:625) ~[lucene-core-8.3.0.jar:8.3.0 2aa586909b911e66e1d8863aa89f173d69f86cd2 - ishan - 2019-10-25 23:10:03]
	at org.elasticsearch.index.engine.ElasticsearchConcurrentMergeScheduler.doMerge(ElasticsearchConcurrentMergeScheduler.java:101) ~[elasticsearch-7.5.1.jar:7.5.1]
	at org.apache.lucene.index.ConcurrentMergeScheduler$MergeThread.run(ConcurrentMergeScheduler.java:662) ~[lucene-core-8.3.0.jar:8.3.0 2aa586909b911e66e1d8863aa89f173d69f86cd2 - ishan - 2019-10-25 23:10:03]
[2023-06-16T11:57:20,778][WARN ][o.e.c.r.a.AllocationService] [MYAPPGWSERVER] failing shard [failed shard, shard [myapp-transactions][0], node[Jli45SwMTo6fDZ8nt7u7_g], [P], s[STARTED], a[id=1QpYyUyZQc6fgTDnZILY0g], message [shard failure, reason [merge failed]], failure [MergeException[org.apache.lucene.index.CorruptIndexException: docs out of order (594 <= 594 ) (resource=RateLimitedIndexOutput(FSIndexOutput(path="/home/myapp/myapp-logs/elasticsearch/data/nodes/0/indices/E7qQ8gWrSSSF3SAaE_gtBQ/0/index/_3kwe9_Lucene50_0.doc")))]; nested: CorruptIndexException[docs out of order (594 <= 594 ) (resource=RateLimitedIndexOutput(FSIndexOutput(path="/home/myapp/myapp-logs/elasticsearch/data/nodes/0/indices/E7qQ8gWrSSSF3SAaE_gtBQ/0/index/_3kwe9_Lucene50_0.doc")))]; ], markAsStale [true]]
org.apache.lucene.index.MergePolicy$MergeException: org.apache.lucene.index.CorruptIndexException: docs out of order (594 <= 594 ) (resource=RateLimitedIndexOutput(FSIndexOutput(path="/home/myapp/myapp-logs/elasticsearch/data/nodes/0/indices/E7qQ8gWrSSSF3SAaE_gtBQ/0/index/_3kwe9_Lucene50_0.doc")))
	at org.elasticsearch.index.engine.InternalEngine$EngineMergeScheduler$2.doRun(InternalEngine.java:2396) ~[elasticsearch-7.5.1.jar:7.5.1]
	at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:773) ~[elasticsearch-7.5.1.jar:7.5.1]
	at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) ~[elasticsearch-7.5.1.jar:7.5.1]
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) [?:?]
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) [?:?]
	at java.lang.Thread.run(Thread.java:830) [?:?]
Caused by: org.apache.lucene.index.CorruptIndexException: docs out of order (594 <= 594 ) (resource=RateLimitedIndexOutput(FSIndexOutput(path="/home/myapp/myapp-logs/elasticsearch/data/nodes/0/indices/E7qQ8gWrSSSF3SAaE_gtBQ/0/index/_3kwe9_Lucene50_0.doc")))
	at org.apache.lucene.codecs.lucene50.Lucene50PostingsWriter.startDoc(Lucene50PostingsWriter.java:236) ~[lucene-core-8.3.0.jar:8.3.0 2aa586909b911e66e1d8863aa89f173d69f86cd2 - ishan - 2019-10-25 23:10:03]
	at org.apache.lucene.codecs.PushPostingsWriterBase.writeTerm(PushPostingsWriterBase.java:148) ~[lucene-core-8.3.0.jar:8.3.0 2aa586909b911e66e1d8863aa89f173d69f86cd2 - ishan - 2019-10-25 23:10:03]
	at org.apache.lucene.codecs.blocktree.BlockTreeTermsWriter$TermsWriter.write(BlockTreeTermsWriter.java:865) ~[lucene-core-8.3.0.jar:8.3.0 2aa586909b911e66e1d8863aa89f173d69f86cd2 - ishan - 2019-10-25 23:10:03]
	at org.apache.lucene.codecs.blocktree.BlockTreeTermsWriter.write(BlockTreeTermsWriter.java:344) ~[lucene-core-8.3.0.jar:8.3.0 2aa586909b911e66e1d8863aa89f173d69f86cd2 - ishan - 2019-10-25 23:10:03]
	at org.apache.lucene.codecs.FieldsConsumer.merge(FieldsConsumer.java:105) ~[lucene-core-8.3.0.jar:8.3.0 2aa586909b911e66e1d8863aa89f173d69f86cd2 - ishan - 2019-10-25 23:10:03]
	at org.apache.lucene.codecs.perfield.PerFieldPostingsFormat$FieldsWriter.merge(PerFieldPostingsFormat.java:169) ~[lucene-core-8.3.0.jar:8.3.0 2aa586909b911e66e1d8863aa89f173d69f86cd2 - ishan - 2019-10-25 23:10:03]
	at org.apache.lucene.index.SegmentMerger.mergeTerms(SegmentMerger.java:245) ~[lucene-core-8.3.0.jar:8.3.0 2aa586909b911e66e1d8863aa89f173d69f86cd2 - ishan - 2019-10-25 23:10:03]
	at org.apache.lucene.index.SegmentMerger.merge(SegmentMerger.java:140) ~[lucene-core-8.3.0.jar:8.3.0 2aa586909b911e66e1d8863aa89f173d69f86cd2 - ishan - 2019-10-25 23:10:03]
	at org.apache.lucene.index.IndexWriter.mergeMiddle(IndexWriter.java:4463) ~[lucene-core-8.3.0.jar:8.3.0 2aa586909b911e66e1d8863aa89f173d69f86cd2 - ishan - 2019-10-25 23:10:03]
	at org.apache.lucene.index.IndexWriter.merge(IndexWriter.java:4057) ~[lucene-core-8.3.0.jar:8.3.0 2aa586909b911e66e1d8863aa89f173d69f86cd2 - ishan - 2019-10-25 23:10:03]
	at org.apache.lucene.index.ConcurrentMergeScheduler.doMerge(ConcurrentMergeScheduler.java:625) ~[lucene-core-8.3.0.jar:8.3.0 2aa586909b911e66e1d8863aa89f173d69f86cd2 - ishan - 2019-10-25 23:10:03]
	at org.elasticsearch.index.engine.ElasticsearchConcurrentMergeScheduler.doMerge(ElasticsearchConcurrentMergeScheduler.java:101) ~[elasticsearch-7.5.1.jar:7.5.1]
	at org.apache.lucene.index.ConcurrentMergeScheduler$MergeThread.run(ConcurrentMergeScheduler.java:662) ~[lucene-core-8.3.0.jar:8.3.0 2aa586909b911e66e1d8863aa89f173d69f86cd2 - ishan - 2019-10-25 23:10:03]
[2023-06-16T11:57:50,384][DEBUG][o.e.a.s.TransportSearchAction] [MYAPPGWSERVER] All shards failed for phase: [query]
[2023-06-16T11:57:50,389][DEBUG][o.e.a.s.TransportSearchAction] [MYAPPGWSERVER] All shards failed for phase: [query]
[2023-06-16T11:57:50,392][DEBUG][o.e.a.s.TransportSearchAction] [MYAPPGWSERVER] All shards failed for phase: [query]
[2023-06-16T11:58:10,383][DEBUG][o.e.a.s.TransportSearchAction] [MYAPPGWSERVER] All shards failed for phase: [query]
[2023-06-16T11:58:10,385][DEBUG][o.e.a.s.TransportSearchAction] [MYAPPGWSERVER] All shards failed for phase: [query]
[2023-06-16T11:58:10,387][DEBUG][o.e.a.s.TransportSearchAction] [MYAPPGWSERVER] All shards failed for phase: [query]
[2023-06-16T11:58:50,383][DEBUG][o.e.a.s.TransportSearchAction] [MYAPPGWSERVER] All shards failed for phase: [query]
[2023-06-16T11:58:50,384][DEBUG][o.e.a.s.TransportSearchAction] [MYAPPGWSERVER] All shards failed for phase: [query]
[2023-06-16T11:58:50,385][DEBUG][o.e.a.s.TransportSearchAction] [MYAPPGWSERVER] All shards failed for phase: [query]
[2023-06-16T11:58:50,388][DEBUG][o.e.a.s.TransportSearchAction] [MYAPPGWSERVER] All shards failed for phase: [query]
[2023-06-16T11:58:50,388][DEBUG][o.e.a.s.TransportSearchAction] [MYAPPGWSERVER] All shards failed for phase: [query]
[2023-06-16T11:58:50,400][DEBUG][o.e.a.s.TransportSearchAction] [MYAPPGWSERVER] All shards failed for phase: [query]
[2023-06-16T11:59:50,384][DEBUG][o.e.a.s.TransportSearchAction] [MYAPPGWSERVER] All shards failed for phase: [query]
[2023-06-16T11:59:50,388][DEBUG][o.e.a.s.TransportSearchAction] [MYAPPGWSERVER] All shards failed for phase: [query]
[2023-06-16T11:59:50,392][DEBUG][o.e.a.s.TransportSearchAction] [MYAPPGWSERVER] All shards failed for phase: [query]
[2023-06-16T12:00:10,382][DEBUG][o.e.a.s.TransportSearchAction] [MYAPPGWSERVER] All shards failed for phase: [query]
[2023-06-16T12:00:10,386][DEBUG][o.e.a.s.TransportSearchAction] [MYAPPGWSERVER] All shards failed for phase: [query]
[2023-06-16T12:00:10,390][DEBUG][o.e.a.s.TransportSearchAction] [MYAPPGWSERVER] All shards failed for phase: [query]
[2023-06-16T12:00:50,381][DEBUG][o.e.a.s.TransportSearchAction] [MYAPPGWSERVER] All shards failed for phase: [query]
[2023-06-16T12:00:50,382][DEBUG][o.e.a.s.TransportSearchAction] [MYAPPGWSERVER] All shards failed for phase: [query]
[2023-06-16T12:00:50,385][DEBUG][o.e.a.s.TransportSearchAction] [MYAPPGWSERVER] All shards failed for phase: [query]
[2023-06-16T12:00:50,386][DEBUG][o.e.a.s.TransportSearchAction] [MYAPPGWSERVER] All shards failed for phase: [query]
[2023-06-16T12:00:50,388][DEBUG][o.e.a.s.TransportSearchAction] [MYAPPGWSERVER] All shards failed for phase: [query]
[2023-06-16T12:00:50,388][DEBUG][o.e.a.s.TransportSearchAction] [MYAPPGWSERVER] All shards failed for phase: [query]
[2023-06-16T12:01:50,381][DEBUG][o.e.a.s.TransportSearchAction] [MYAPPGWSERVER] All shards failed for phase: [query]
[2023-06-16T12:01:50,384][DEBUG][o.e.a.s.TransportSearchAction] [MYAPPGWSERVER] All shards failed for phase: [query]
[2023-06-16T12:01:50,388][DEBUG][o.e.a.s.TransportSearchAction] [MYAPPGWSERVER] All shards failed for phase: [query]
[2023-06-16T12:02:10,382][DEBUG][o.e.a.s.TransportSearchAction] [MYAPPGWSERVER] All shards failed for phase: [query]
[2023-06-16T12:02:10,386][DEBUG][o.e.a.s.TransportSearchAction] [MYAPPGWSERVER] All shards failed for phase: [query]
[2023-06-16T12:02:10,390][DEBUG][o.e.a.s.TransportSearchAction] [MYAPPGWSERVER] All shards failed for phase: [query]
[2023-06-16T12:02:50,381][DEBUG][o.e.a.s.TransportSearchAction] [MYAPPGWSERVER] All shards failed for phase: [query]
[2023-06-16T12:02:50,381][DEBUG][o.e.a.s.TransportSearchAction] [MYAPPGWSERVER] All shards failed for phase: [query]
[2023-06-16T12:02:50,384][DEBUG][o.e.a.s.TransportSearchAction] [MYAPPGWSERVER] All shards failed for phase: [query]
[2023-06-16T12:02:50,386][DEBUG][o.e.a.s.TransportSearchAction] [MYAPPGWSERVER] All shards failed for phase: [query]
[2023-06-16T12:02:50,386][DEBUG][o.e.a.s.TransportSearchAction] [MYAPPGWSERVER] All shards failed for phase: [query]
[2023-06-16T12:02:50,391][DEBUG][o.e.a.s.TransportSearchAction] [MYAPPGWSERVER] All shards failed for phase: [query]
[2023-06-16T12:03:50,379][DEBUG][o.e.a.s.TransportSearchAction] [MYAPPGWSERVER] All shards failed for phase: [query]
[2023-06-16T12:03:50,382][DEBUG][o.e.a.s.TransportSearchAction] [MYAPPGWSERVER] All shards failed for phase: [query]
[2023-06-16T12:03:50,385][DEBUG][o.e.a.s.TransportSearchAction] [MYAPPGWSERVER] All shards failed for phase: [query]
[2023-06-16T12:04:10,378][DEBUG][o.e.a.s.TransportSearchAction] [MYAPPGWSERVER] All shards failed for phase: [query]
[2023-06-16T12:04:10,381][DEBUG][o.e.a.s.TransportSearchAction] [MYAPPGWSERVER] All shards failed for phase: [query]
[2023-06-16T12:04:10,384][DEBUG][o.e.a.s.TransportSearchAction] [MYAPPGWSERVER] All shards failed for phase: [query]
[2023-06-16T12:04:50,378][DEBUG][o.e.a.s.TransportSearchAction] [MYAPPGWSERVER] All shards failed for phase: [query]
[2023-06-16T12:04:50,378][DEBUG][o.e.a.s.TransportSearchAction] [MYAPPGWSERVER] All shards failed for phase: [query]
[2023-06-16T12:04:50,381][DEBUG][o.e.a.s.TransportSearchAction] [MYAPPGWSERVER] All shards failed for phase: [query]
[2023-06-16T12:04:50,382][DEBUG][o.e.a.s.TransportSearchAction] [MYAPPGWSERVER] All shards failed for phase: [query]
[2023-06-16T12:04:50,385][DEBUG][o.e.a.s.TransportSearchAction] [MYAPPGWSERVER] All shards failed for phase: [query]
[2023-06-16T12:04:50,385][DEBUG][o.e.a.s.TransportSearchAction] [MYAPPGWSERVER] All shards failed for phase: [query]
[2023-06-16T12:05:50,377][DEBUG][o.e.a.s.TransportSearchAction] [MYAPPGWSERVER] All shards failed for phase: [query]
[2023-06-16T12:05:50,380][DEBUG][o.e.a.s.TransportSearchAction] [MYAPPGWSERVER] All shards failed for phase: [query]
[2023-06-16T12:05:50,389][DEBUG][o.e.a.s.TransportSearchAction] [MYAPPGWSERVER] All shards failed for phase: [query]
[2023-06-16T12:06:10,376][DEBUG][o.e.a.s.TransportSearchAction] [MYAPPGWSERVER] All shards failed for phase: [query]
[2023-06-16T12:06:10,379][DEBUG][o.e.a.s.TransportSearchAction] [MYAPPGWSERVER] All shards failed for phase: [query]
[2023-06-16T12:06:10,381][DEBUG][o.e.a.s.TransportSearchAction] [MYAPPGWSERVER] All shards failed for phase: [query]
[2023-06-16T12:06:50,376][DEBUG][o.e.a.s.TransportSearchAction] [MYAPPGWSERVER] All shards failed for phase: [query]
[2023-06-16T12:06:50,377][DEBUG][o.e.a.s.TransportSearchAction] [MYAPPGWSERVER] All shards failed for phase: [query]
[2023-06-16T12:06:50,379][DEBUG][o.e.a.s.TransportSearchAction] [MYAPPGWSERVER] All shards failed for phase: [query]
[2023-06-16T12:06:50,380][DEBUG][o.e.a.s.TransportSearchAction] [MYAPPGWSERVER] All shards failed for phase: [query]
[2023-06-16T12:06:50,381][DEBUG][o.e.a.s.TransportSearchAction] [MYAPPGWSERVER] All shards failed for phase: [query]
[2023-06-16T12:06:50,382][DEBUG][o.e.a.s.TransportSearchAction] [MYAPPGWSERVER] All shards failed for phase: [query]
[2023-06-16T12:07:06,772][DEBUG][o.e.a.s.TransportSearchAction] [MYAPPGWSERVER] All shards failed for phase: [query]


Yes, that's the right log, thanks. This could be a Lucene bug, but I can't find an obvious fix or any similar reports, so I think it's more likely an exogenous corruption (i.e. something outside Elasticsearch, such as the storage layer, corrupted the file). The version you're using is very old and long past EOL, so I'm not sure it's going to be possible to investigate further.

To recover, restore the index from a recent snapshot.
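Assuming you have a snapshot repository registered and a recent snapshot that includes this index, the restore would look something like this (repository and snapshot names are placeholders; you'd need to delete or close the existing red index first):

POST _snapshot/my_repository/my_snapshot/_restore
{
  "indices": "myapp-transactions"
}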

Thank you for your fast response.
It seems there is no other way to recover the corrupt index besides restoring from a recent snapshot.

Can you please explain what the root cause might be? Is there something that could trigger the merge failure?

Also, regarding version 7.5.1 being too old and long past EOL: if I upgrade to version 7.17, will that version be supported for troubleshooting if there are any problems in the future? Or what is the minimum version that the Elastic team can currently troubleshoot?

Thank you
