Uneven shard size after reindex

I left ES reindexing 30 indices into a single index with 8 shards, and when I checked the shard stats this morning I saw this:

GET _cat/shards/graylog_0-30?v&s=shard&h=shard,store,node
shard  store node
0     41.4gb node0
1      9.8gb node5
2     97.1mb node2
3     96.3mb node4
4     41.4gb node3
5      9.8gb node1
6     95.8mb node6
7     97.5mb node7

Why is ES distributing the reindexed data so unevenly across the shards?
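(For context: Elasticsearch routes each document to a shard by hashing its routing value, the `_id` by default, modulo the number of primary shards, so a completed reindex should spread documents fairly evenly. A rough sketch of that idea, using md5 as a stand-in for the murmur3 hash ES actually uses:)

```python
import hashlib
from collections import Counter

def shard_for(doc_id: str, num_shards: int = 8) -> int:
    # Stand-in for ES routing: shard = hash(_routing) % number_of_shards.
    # ES hashes the _id with murmur3 by default; md5 here just
    # illustrates that a uniform hash spreads documents evenly.
    h = int.from_bytes(hashlib.md5(doc_id.encode()).digest()[:4], "big")
    return h % num_shards

# Simulate 100k documents landing on 8 shards.
counts = Counter(shard_for(f"doc-{i}") for i in range(100_000))
print(dict(sorted(counts.items())))
```

With uniform hashing each of the 8 shards ends up with roughly 12,500 of the 100,000 simulated documents, so a 400:1 size skew like the one above points at something other than routing.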

This is what I executed, nothing special; I had created the destination index manually beforehand:

POST _reindex?wait_for_completion=false
{
  "source": {
    "index": ["graylog_0s","graylog_1s","graylog_2s","graylog_3s","graylog_4s","graylog_5s","graylog_6s","graylog_7s","graylog_8s","graylog_9s","graylog_10s","graylog_11s","graylog_12s","graylog_13s","graylog_14s","graylog_15s","graylog_16s","graylog_17s","graylog_18s","graylog_19s","graylog_20s","graylog_21s","graylog_22s","graylog_23s","graylog_24s","graylog_25s","graylog_26s","graylog_27s","graylog_28s","graylog_29s","graylog_30s"],
    "size": 10000
  },
  "dest": {
    "index": "graylog_0-30"
  }
}

What's the document count for each?
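You can get the per-shard document count from the same `_cat` endpoint by adding a `docs` column, e.g.:

```
GET _cat/shards/graylog_0-30?v&s=shard&h=shard,docs,store,node
```

If the document counts are as skewed as the store sizes, the problem is with what got indexed rather than with shard allocation.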

Well, I just realized that the reindex ended prematurely and didn't copy the full document count; I don't know how I missed that. I'm now reindexing again with conflicts=proceed to see if that works. Thanks.
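For anyone following along: `conflicts` goes in the request body, not the query string, so the retry would look something like this (index list abbreviated, same as the original request):

```
POST _reindex?wait_for_completion=false
{
  "conflicts": "proceed",
  "source": {
    "index": ["graylog_0s", "graylog_1s", "graylog_30s"],
    "size": 10000
  },
  "dest": {
    "index": "graylog_0-30"
  }
}
```

With `"conflicts": "proceed"`, version conflicts are counted in the response instead of aborting the whole reindex.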

