Difference in shard size of primary and replica

Hi,
I have one replica and 2 data nodes in total.
Of late I have observed that the size of a primary shard is not in sync with its replica, even though the document counts are the same.

/_cat/shards gives the results below:
myindex 1 r STARTED 91115958 237.9gb 1.1.1.1 es-data-02
myindex 1 p STARTED 91115958 235.3gb 1.1.1.2 es-data-01
myindex 5 r STARTED 44626636 92.7gb 1.1.1.1 es-data-02
myindex 5 p STARTED 44626636 92.6gb 1.1.1.2 es-data-01
myindex 3 p STARTED 72708623 127.7gb 1.1.1.1 es-data-02
myindex 3 r STARTED 72708623 127.7gb 1.1.1.2 es-data-01
myindex 2 p STARTED 15240304 33.9gb 1.1.1.1 es-data-02
myindex 2 r STARTED 15240304 35.4gb 1.1.1.2 es-data-01
myindex 4 r STARTED 8518976 13.5gb 1.1.1.1 es-data-02
myindex 4 p STARTED 8518976 13.5gb 1.1.1.2 es-data-01
myindex 7 p STARTED 20228845 40.9gb 1.1.1.1 es-data-02
myindex 7 r STARTED 20228845 42.6gb 1.1.1.2 es-data-01
myindex 6 r STARTED 12332014 28.8gb 1.1.1.1 es-data-02
myindex 6 p STARTED 12332014 30.1gb 1.1.1.2 es-data-01
myindex 0 p STARTED 1557571 2.6gb 1.1.1.1 es-data-02
myindex 0 r STARTED 1557571 2.6gb 1.1.1.2 es-data-01

Queries:

  1. Why does this happen?
  2. If I fire the same query multiple times against the coordinating nodes, it returns different results inconsistently. Why? (See the example after this list.)
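
A minimal way to check this (the host, index name, and preference value here are placeholders, not from the original post) is to repeat the same search with a fixed preference string, which routes the repeats to the same shard copies:

curl -s 'http://localhost:9200/myindex/_search?size=0&preference=repro-check'
curl -s 'http://localhost:9200/myindex/_search?size=0&preference=repro-check'

If the totals stop fluctuating once the preference is pinned, the variation is coming from different shard copies answering the query, for example because one copy refreshes slightly before the other.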

Any kind of help would be appreciated; we need this very urgently to resolve one of our production issues.


Merging is not coordinated across shard copies, so the primary and replica shards can be at different stages of merging and therefore have different sizes.
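
As a quick way to see this in practice (a sketch only, assuming the cluster's REST API is reachable on a placeholder host), compare the segments that make up the primary and replica of one shard:

curl -s 'http://localhost:9200/_cat/segments/myindex?v&h=index,shard,prirep,segment,docs.count,size'

Two copies of the same shard typically hold the same documents in a different number of segments, and they will report different sizes on disk until their independent merge schedules happen to converge; space still held by deleted or updated documents that have been merged away on one copy but not the other adds to the difference.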

Have you run a refresh against the index?
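
For example, something along these lines (placeholder host), and then re-check the shard sizes:

curl -s -XPOST 'http://localhost:9200/myindex/_refresh'
curl -s 'http://localhost:9200/_cat/shards/myindex?v'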

We have a refresh interval of 30 seconds, so I assume the index is refreshed.
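
If it is worth double-checking, the effective value can be read back from the index settings (placeholder host; the filter_path is optional and just trims the output):

curl -s 'http://localhost:9200/myindex/_settings?include_defaults=true&filter_path=**.refresh_interval'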

Regarding merging, this index has existed for a month or two, and we just keep restarting the services. Here too I feel merging should not be a problem.
