Index stays at massive size after split

Hi,

I had mistakenly made a very large index: 1 shard of 406 GB (1.8b documents).
Then I split it into 16 shards, hoping to end up with ~25 GB per shard.

I think the new index picked up an index template, since it also ended up with 1 replica.
The split took a very long time; after about 5 days it still wasn't finished.

I was running out of space, so I tried disabling replica shards.
I checked the _cat/recovery API and it showed no active recoveries.
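
Roughly what I ran (from memory), in case it matters:

PUT /aaa-split-000005/_settings
{
  "index.number_of_replicas": 0
}

GET /_cat/recovery/aaa-split-000005?v=true&active_only=true
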
Now it looks like this:

index            shard prirep state        docs   store
aaa-split-000005 0     p      STARTED 116465632 251.7gb
aaa-split-000005 1     p      STARTED 116473705 251.9gb
aaa-split-000005 2     p      STARTED 116463089 251.8gb
aaa-split-000005 3     p      STARTED 116449240 251.8gb
aaa-split-000005 4     p      STARTED 116463851 321.2gb
aaa-split-000005 5     p      STARTED 116466011 321.4gb
aaa-split-000005 6     p      STARTED 116476702 321.2gb
aaa-split-000005 7     p      STARTED 116467968 321.2gb
aaa-split-000005 8     p      STARTED 116443315 405.2gb
aaa-split-000005 9     p      STARTED 116470413 405.2gb
aaa-split-000005 10    p      STARTED 116451720 405.2gb
aaa-split-000005 11    p      STARTED 116479637 405.2gb
aaa-split-000005 12    p      STARTED 116463288 405.2gb
aaa-split-000005 13    p      STARTED 116457105 405.2gb
aaa-split-000005 14    p      STARTED 116489108 405.2gb
aaa-split-000005 15    p      STARTED 116456282 405.2gb
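
(That output is from _cat/shards, something like:)

GET /_cat/shards/aaa-split-000005?v=true&h=index,shard,prirep,state,docs,store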

Is there any way to force it to re-hash the index? Or do I have to start over?

Index splitting uses hard links and just adds deletions when doing the split; see Split index API | Elasticsearch Guide [7.14] | Elastic. Can you check how much disk space is actually used on that instance?
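
One way to see the actual usage per node, for example:

GET /_cat/allocation?v=true

Comparing disk.indices with disk.used there should give a hint whether the split shards really occupy new space or share blocks with the source index via hard links.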

Have you tried running a force merge that only expunges deletes? Or have other write operations happened since, so that some of those segments have already been merged away? See Force merge API | Elasticsearch Guide [7.14] | Elastic

Thanks, spinscale.

It does take up the full disk space, as shown above, so I suspect my storage backend does not support hard linking for whatever reason.

I tried running a force merge on the index:

POST /aaa-split-000005/_forcemerge?only_expunge_deletes=true

It timed out after a while, and I don't see any changes to the shard sizes.
I checked the force_merge thread pool, but it shows 0/0/0/0.
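
I checked it with something along the lines of:

GET /_cat/thread_pool/force_merge?v=true&h=node_name,name,active,queue,rejected,completed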

Does the same happen when not specifying any parameters?
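
i.e. just:

POST /aaa-split-000005/_forcemerge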

It timed out again, but now one node shows 1 active and 1 queued task in the force_merge thread pool.
Let's see if that fixes anything.

The index has now shrunk from 5.4 TB to 2.4 TB, so it looks like it has been slowly merging away the deleted documents over the weekend.

It has also begun relocating some of the primary shards, so everything is slowly fixing itself.
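
For anyone hitting the same issue, I have been tracking the progress with something like:

GET /_cat/indices/aaa-split-000005?v=true&h=index,pri,rep,docs.count,docs.deleted,store.size,pri.store.size

store.size has been dropping steadily as the deleted documents get merged away.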

Thanks, @spinscale!
