Hi,
I have an index with one replica, spread across two data nodes in total.
Lately I have observed that the size of each primary shard is not in sync with its replica, even though the document counts are the same.
/_cat/shards gives the results below:
myindex 1 r STARTED 91115958 237.9gb 1.1.1.1 es-data-02
myindex 1 p STARTED 91115958 235.3gb 1.1.1.2 es-data-01
myindex 5 r STARTED 44626636 92.7gb 1.1.1.1 es-data-02
myindex 5 p STARTED 44626636 92.6gb 1.1.1.2 es-data-01
myindex 3 p STARTED 72708623 127.7gb 1.1.1.1 es-data-02
myindex 3 r STARTED 72708623 127.7gb 1.1.1.2 es-data-01
myindex 2 p STARTED 15240304 33.9gb 1.1.1.1 es-data-02
myindex 2 r STARTED 15240304 35.4gb 1.1.1.2 es-data-01
myindex 4 r STARTED 8518976 13.5gb 1.1.1.1 es-data-02
myindex 4 p STARTED 8518976 13.5gb 1.1.1.2 es-data-01
myindex 7 p STARTED 20228845 40.9gb 1.1.1.1 es-data-02
myindex 7 r STARTED 20228845 42.6gb 1.1.1.2 es-data-01
myindex 6 r STARTED 12332014 28.8gb 1.1.1.1 es-data-02
myindex 6 p STARTED 12332014 30.1gb 1.1.1.2 es-data-01
myindex 0 p STARTED 1557571 2.6gb 1.1.1.1 es-data-02
myindex 0 r STARTED 1557571 2.6gb 1.1.1.2 es-data-01
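For reference, the listing above was produced with a command along these lines (the host is a placeholder for one of our nodes, and the header flag is just for readability):

```shell
# List all shards of the index, with column headers.
# <host> stands in for any node in the cluster.
curl -s 'http://<host>:9200/_cat/shards/myindex?v'
```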
Questions:
- Why does this happen?
- If I run the same query multiple times through the coordinating nodes, it intermittently returns different results. Why?
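To illustrate the second question, the behaviour reproduces with a loop like the one below (the host and the query body are simplified placeholders, not our exact production query):

```shell
# Fire the same search repeatedly at a coordinating node;
# successive runs come back with differing results.
for i in 1 2 3 4 5; do
  curl -s 'http://<coord-node>:9200/myindex/_search' \
    -H 'Content-Type: application/json' \
    -d '{"size": 0, "query": {"match_all": {}}}'
  echo
done
```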
Any help would be appreciated; we need this urgently to resolve one of our production issues.