I deleted some of my indices, but after about 1-10 seconds the cluster automatically recreated an index that had been deleted a long time ago: request-log-2018-05-30. That index was deleted a week ago or earlier.
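One way to confirm the index really has come back is the cat indices API (the host and index name here just match the ones above):

# list the index, if it exists, with its health/docs/size columns
curl -s 'localhost:9200/_cat/indices/request-log-2018-05-30?v'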
I get the following info in the pending tasks:
insertOrder timeInQueue priority source
1580 7.3s URGENT delete-index [[request-log-2018-05-03/d8u11H0QTMSb1QVN5I-uGg]]
1581 6s HIGH put-mapping
1582 76ms NORMAL allocation dangled indices [request-log-2018-05-30]
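This output comes from the cat pending tasks API:

# show cluster-state update tasks that have not yet been executed
curl -s 'localhost:9200/_cat/pending_tasks?v'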
ES version: 6.3.0
@bleskes could you please have a look?
Could you have a split-brain scenario? Which version of Elasticsearch are you using? How many nodes do you have in the cluster? How many of these are master-eligible? What is your minimum_master_nodes set to?
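For reference, minimum_master_nodes lives in elasticsearch.yml; with three master-eligible nodes the quorum value should be 2 (a sketch, assuming a 6.x Zen discovery setup):

# quorum = (master_eligible_nodes / 2) + 1 = (3 / 2) + 1 = 2
discovery.zen.minimum_master_nodes: 2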
There are 3 master-eligible nodes in my cluster, and minimum_master_nodes is set to 2. There are 25 nodes in the cluster in total.
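One way to double-check which nodes are master-eligible and which one currently holds the elected-master role is the cat nodes API (host as above; the column names are standard cat nodes headers):

# node.role contains 'm' for master-eligible nodes; master shows '*' on the elected master
curl -s 'localhost:9200/_cat/nodes?v&h=name,node.role,master'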
@Christian_Dahlqvist any ideas?
And the unassigned shards are tagged as DANGLING_INDEX_IMPORTED:
curl -s -XGET localhost:9200/_cat/shards?h=index,shard,prirep,state,unassigned.reason| grep UNASSIGNED
log-2018-05-24 3 p UNASSIGNED DANGLING_INDEX_IMPORTED
log-2018-05-24 3 r UNASSIGNED DANGLING_INDEX_IMPORTED
log-2018-05-24 4 p UNASSIGNED DANGLING_INDEX_IMPORTED
But all of these indices are empty, and I deleted them about a month ago.
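To see exactly why one of these shards stays unassigned, the allocation explain API gives the full decision trail (the index and shard values here just mirror the output above):

# explain the allocation decision for primary shard 3 of log-2018-05-24
curl -s -XGET 'localhost:9200/_cluster/allocation/explain?pretty' \
  -H 'Content-Type: application/json' \
  -d '{"index": "log-2018-05-24", "shard": 3, "primary": true}'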
Solved by allocating all the failed shards instead of deleting them. Once these indices have been allocated, you can delete them.
Note: this operation may cause data loss.
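For reference, forcing allocation of an empty primary goes through the cluster reroute API; a sketch (the node name node-1 is hypothetical, and accept_data_loss: true is exactly what makes this destructive):

# force-allocate shard 3 of log-2018-05-24 as a fresh, empty primary on node-1
curl -s -XPOST 'localhost:9200/_cluster/reroute' \
  -H 'Content-Type: application/json' \
  -d '{
    "commands": [
      {
        "allocate_empty_primary": {
          "index": "log-2018-05-24",
          "shard": 3,
          "node": "node-1",
          "accept_data_loss": true
        }
      }
    ]
  }'

Replicas can then be left to recover normally once the primary is assigned.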