Replica shards bigger than primary on some indexes

We are seeing an issue where our replica shards on some (but not all) indexes are substantially larger than their primary counterparts. For instance:

index1 4 r STARTED 106884 1.2gb 10.10.1.98 es2
index1 4 p STARTED 106884 572.9mb 10.10.1.97 es1
index1 2 r STARTED 42643 533mb 10.10.1.104 es3
index1 2 p STARTED 42643 683mb 10.10.1.97 es1
index1 5 r STARTED 140866 1gb 10.10.1.104 es3
index1 5 p STARTED 140866 1.1gb 10.10.1.98 es2
index1 1 r STARTED 67493 577.1mb 10.10.1.98 es2
index1 1 p STARTED 67493 322.2mb 10.10.1.97 es1
index1 3 r STARTED 30176 325.8mb 10.10.1.104 es3
index1 3 p STARTED 30176 160.4mb 10.10.1.97 es1
index1 0 r STARTED 120506 577.9mb 10.10.1.104 es3
index1 0 p STARTED 120506 863.1mb 10.10.1.98 es2
index2 4 r STARTED 47578 231mb 10.10.1.98 es2
index2 4 p STARTED 47578 281mb 10.10.1.97 es1
index2 5 r STARTED 47556 282.3mb 10.10.1.104 es3
index2 5 p STARTED 47556 234.4mb 10.10.1.97 es1
index2 1 r STARTED 47228 224.5mb 10.10.1.104 es3
index2 1 p STARTED 47228 286.4mb 10.10.1.98 es2
index2 2 r STARTED 47576 256mb 10.10.1.104 es3
index2 2 p STARTED 47576 197.7mb 10.10.1.97 es1
index2 3 r STARTED 47250 200.7mb 10.10.1.98 es2
index2 3 p STARTED 47250 229.6mb 10.10.1.97 es1
index2 0 r STARTED 47715 285.6mb 10.10.1.104 es3
index2 0 p STARTED 47715 210.4mb 10.10.1.98 es2
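As an aside, output in this format (it matches the `_cat/shards` columns: index, shard, primary/replica, state, docs, store, ip, node) is easy to scan programmatically for pairs where the replica has grown well past its primary. A small sketch, using a few rows copied from the listing above; the 1.5x threshold is an arbitrary choice for illustration:

```python
import re

# Sample rows copied from the _cat/shards-style output above
# (index, shard, prirep, state, docs, store, ip, node).
CAT_SHARDS = """\
index1 4 r STARTED 106884 1.2gb 10.10.1.98 es2
index1 4 p STARTED 106884 572.9mb 10.10.1.97 es1
index1 3 r STARTED 30176 325.8mb 10.10.1.104 es3
index1 3 p STARTED 30176 160.4mb 10.10.1.97 es1
index2 4 r STARTED 47578 231mb 10.10.1.98 es2
index2 4 p STARTED 47578 281mb 10.10.1.97 es1
"""

UNITS = {"kb": 1 / 1024, "mb": 1.0, "gb": 1024.0}

def size_mb(text):
    """Convert a _cat size string like '1.2gb' or '572.9mb' to megabytes."""
    m = re.fullmatch(r"([\d.]+)(kb|mb|gb)", text)
    return float(m.group(1)) * UNITS[m.group(2)]

def divergent_shards(cat_output, ratio=1.5):
    """Return (index, shard) pairs whose replica is >= ratio x the primary size."""
    sizes = {}
    for line in cat_output.strip().splitlines():
        index, shard, prirep, _state, _docs, store, *_ = line.split()
        sizes.setdefault((index, shard), {})[prirep] = size_mb(store)
    return sorted(
        key for key, s in sizes.items()
        if "p" in s and "r" in s and s["r"] / s["p"] >= ratio
    )

print(divergent_shards(CAT_SHARDS))  # → [('index1', '3'), ('index1', '4')]
```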

Does anyone know what could be causing this issue?

It is hard to tell. You can check whether some of them have more deleted documents with the indices stats API. That same API also gives you the number of open search contexts; those hold on to files, and the number should simply match the number of concurrent searches. Unfortunately it doesn't report the age of the oldest context, which would be useful.
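For concreteness, those two numbers live under `docs.deleted` and `search.open_contexts` in the indices stats response (`GET /index1/_stats`). A minimal sketch that pulls them out, using a trimmed, hypothetical response body (the counts here are made up):

```python
import json

# Trimmed, hypothetical body of GET /index1/_stats; real responses
# contain many more sections.
STATS = json.loads("""
{
  "indices": {
    "index1": {
      "total": {
        "docs": {"count": 508568, "deleted": 120342},
        "search": {"open_contexts": 0}
      }
    }
  }
}
""")

def summarize(stats):
    """Map index name -> (deleted docs, open search contexts)."""
    out = {}
    for name, idx in stats["indices"].items():
        total = idx["total"]
        out[name] = (total["docs"]["deleted"], total["search"]["open_contexts"])
    return out

print(summarize(STATS))  # → {'index1': (120342, 0)}
```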

Are you using routing? I see shards of pretty different sizes in general.

Sorry if I confused you: I only showed examples of the uneven shards; the others are fine. We are using routing, and there are a lot of deleted documents, but shouldn't the primary and the replica be the same regardless? We did previously have an issue where, due to routing, shards within the same index were drastically different sizes, but that was between shards, not primary versus replica. I tried a refresh and still see the same issue. Also, I am not sure how to tell from the indices stats API how many deleted documents are in a primary shard versus a replica. There are no open contexts, either.
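For anyone with the same question: the stats API can return per-copy detail with `GET /index1/_stats?level=shards`. Each shard number then maps to a list of copies, each carrying a `routing.primary` flag alongside its own `docs` section, so primary and replica deleted-document counts can be compared directly. A sketch against a trimmed, hypothetical response (the counts are made up):

```python
import json

# Trimmed, hypothetical body of GET /index1/_stats?level=shards; each
# shard number maps to a list of copies, each flagged primary or not.
STATS = json.loads("""
{
  "indices": {
    "index1": {
      "shards": {
        "4": [
          {"routing": {"primary": true},  "docs": {"count": 106884, "deleted": 3210}},
          {"routing": {"primary": false}, "docs": {"count": 106884, "deleted": 48790}}
        ]
      }
    }
  }
}
""")

def deleted_by_role(stats, index):
    """Map (shard, 'primary'|'replica') -> deleted document count."""
    out = {}
    for shard, copies in stats["indices"][index]["shards"].items():
        for copy in copies:
            role = "primary" if copy["routing"]["primary"] else "replica"
            out[(shard, role)] = copy["docs"]["deleted"]
    return out

print(deleted_by_role(STATS, "index1"))
```

A large gap between the two counts for the same shard, as in this made-up example, would line up with the size difference seen in the listing above.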