Inconsistent results when sorting on index order

I have a weird situation on Elasticsearch 5.6.3 when using search_after with a tiebreak sort on index order via _doc. The following query reproduces the behavior (the query part itself is not important):

{
  "from": 0,
  "size": 25,
  "query": {
    ...
  },
  "_source": ["id"],
  "sort": [
    {
      "id": {
        "order": "asc"
      }
    },
    {
      "_doc": {
        "order": "asc"
      }
    }
  ],
  "search_after": [
    "955411180397631",
    "28526432"
  ]
}

When I execute the above query, I always get the same number of results (i.e. hits.total is consistent); however, the hits array is sometimes empty, which is correct and what I'm expecting:

{
  "took": 3,
  "timed_out": false,
  "_shards": {
    "total": 28,
    "successful": 28,
    "skipped": 0,
    "failed": 0
  },
  "hits": {
    "total": 25,
    "max_score": null,
    "hits": [
    ]
  }
}

And sometimes I get an extra hit that I'm not expecting, since the document with id 955411180397631 was already included in the previous page:

{
  "took": 10,
  "timed_out": false,
  "_shards": {
    "total": 28,
    "successful": 28,
    "skipped": 0,
    "failed": 0
  },
  "hits": {
    "total": 25,
    "max_score": null,
    "hits": [
      {
        "_index": "myindex",
        "_type": "doc",
        "_id": "955411180397631",
        "_score": null,
        "sort": [
          955411180397631,
          29499628
        ]
      }
    ]
  }
}

The way I'm working around this now is by adding ?preference=_primary to the URL. So it seems the index order of this document differs between the primary and replica shard copies: on the primary its _doc sort value is 28526432, while on the replica it is 29499628. Depending on which shard copy the coordinating node routes the request to, I either get the document again (when the replica is hit) or not (when the primary is hit).
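For reference, this is roughly how I'm pinning the request to primaries (index name and host are placeholders; the body is the same query as above):

```shell
# Route the search to primary shard copies only (valid on ES 5.x;
# the _primary/_replica preference values were removed in later versions).
curl -XPOST 'http://localhost:9200/myindex/_search?preference=_primary' \
  -H 'Content-Type: application/json' \
  -d '{ "size": 25, "sort": [ {"id": "asc"}, {"_doc": "asc"} ], "search_after": ["955411180397631", "28526432"] }'
```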

With ?preference=_replica I always get the extra document, and with ?preference=_primary I never do, which is consistent with that theory.

Is this behavior to be expected or is something wrong in our index?

I'm wondering if setting index.number_of_replicas to 0 and then back to 1 (i.e. rebuilding fresh copies from the primary shards) would take care of the issue as well and ensure that the index order on the primary and replica shards is identical. Any thoughts?
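The replica rebuild I have in mind would look something like this (index name is a placeholder):

```shell
# Drop the replicas entirely...
curl -XPUT 'http://localhost:9200/myindex/_settings' \
  -H 'Content-Type: application/json' \
  -d '{"index": {"number_of_replicas": 0}}'

# ...then re-add one replica so fresh copies are recovered from the primaries.
curl -XPUT 'http://localhost:9200/myindex/_settings' \
  -H 'Content-Type: application/json' \
  -d '{"index": {"number_of_replicas": 1}}'
```

One caveat I'd want confirmed: as far as I understand, each shard copy merges its Lucene segments independently, so even freshly recovered copies could drift apart in _doc order again as the index keeps being written to.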

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.