We are seeing an issue where the replica shards for some (but not all) of our indices are substantially larger than their primary counterparts. For instance:
index shard prirep state docs store ip node
index1 4 r STARTED 106884 1.2gb 10.10.1.98 es2
index1 4 p STARTED 106884 572.9mb 10.10.1.97 es1
index1 2 r STARTED 42643 533mb 10.10.1.104 es3
index1 2 p STARTED 42643 683mb 10.10.1.97 es1
index1 5 r STARTED 140866 1gb 10.10.1.104 es3
index1 5 p STARTED 140866 1.1gb 10.10.1.98 es2
index1 1 r STARTED 67493 577.1mb 10.10.1.98 es2
index1 1 p STARTED 67493 322.2mb 10.10.1.97 es1
index1 3 r STARTED 30176 325.8mb 10.10.1.104 es3
index1 3 p STARTED 30176 160.4mb 10.10.1.97 es1
index1 0 r STARTED 120506 577.9mb 10.10.1.104 es3
index1 0 p STARTED 120506 863.1mb 10.10.1.98 es2
index2 4 r STARTED 47578 231mb 10.10.1.98 es2
index2 4 p STARTED 47578 281mb 10.10.1.97 es1
index2 5 r STARTED 47556 282.3mb 10.10.1.104 es3
index2 5 p STARTED 47556 234.4mb 10.10.1.97 es1
index2 1 r STARTED 47228 224.5mb 10.10.1.104 es3
index2 1 p STARTED 47228 286.4mb 10.10.1.98 es2
index2 2 r STARTED 47576 256mb 10.10.1.104 es3
index2 2 p STARTED 47576 197.7mb 10.10.1.97 es1
index2 3 r STARTED 47250 200.7mb 10.10.1.98 es2
index2 3 p STARTED 47250 229.6mb 10.10.1.97 es1
index2 0 r STARTED 47715 285.6mb 10.10.1.104 es3
index2 0 p STARTED 47715 210.4mb 10.10.1.98 es2
Does anyone know what could be causing this issue?
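If segment-level detail would help, I'm happy to post that as well. My guess (unconfirmed) is that differences in deleted documents or un-merged segments between the two copies account for the gap, since the doc counts match exactly, so I was planning to compare the copies with something like the commands below (localhost:9200 is just my setup; adjust host/index as needed):

# Per-segment breakdown for index1, by shard copy, with live/deleted doc counts and on-disk size
curl -s 'localhost:9200/_cat/segments/index1?v&h=index,shard,prirep,segment,docs.count,docs.deleted,size'

# Shard-level stats for the same index: doc counts, store size, and segment counts per copy
curl -s 'localhost:9200/index1/_stats/docs,store,segments?level=shards&pretty'

Is that the right way to check, or is there something else I should be looking at?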