Hi,
I'm currently migrating our indices from Elasticsearch 1.x to Elasticsearch 2.3.1.
I had to do this because many of our event fields contain dots, which runs into the related breaking change in 2.x, so I scripted the migration to perform a search and replace on those dots in the JSON field names.
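For reference, the renaming step of my script does something along these lines (a minimal Python sketch; the helper name `strip_dots` and the replacement character `_` are just for illustration, not the exact script):

```python
import json

def strip_dots(obj):
    """Recursively replace dots in JSON field names.

    Only keys are rewritten; values (including nested objects
    and arrays) are traversed but otherwise left untouched.
    """
    if isinstance(obj, dict):
        return {k.replace('.', '_'): strip_dots(v) for k, v in obj.items()}
    if isinstance(obj, list):
        return [strip_dots(v) for v in obj]
    return obj

# Example: a document whose field names contain dots.
doc = json.loads('{"user.name": "bob", "meta": {"geo.city": "Paris"}}')
print(json.dumps(strip_dots(doc), sort_keys=True))
# → {"meta": {"geo_city": "Paris"}, "user": ... renamed to "user_name"
```

Each migrated document is then reindexed into the 2.x cluster, so the 2.x indices should contain exactly one document per source document.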
As I was validating the progress of the migration, I typically used GET _cat/indices/myindices*?v to get a quick view of the indices being migrated.
Because docs.count differed between the indices on 1.x and 2.x, I started to dig deeper. The more puzzling aspect is that docs.count on 2.x was higher than on 1.x (and the only input to Elasticsearch 2.x was my migration script).
Checking the doc count through GET myindices-2016.04.05/_count on 2.x didn't return the same number as the _cat/indices interface, but did match the expected count.
On 1.x, both interfaces return the same value.
Would someone have an idea how this came to be?
### Elasticsearch 1.x:
#### GET _cat/indices/myindice-2016.04.05?v

    health status index            pri rep docs.count docs.deleted store.size pri.store.size
    green  open   audit-2016.04.05   5   0    3129036            0     62.7gb         62.7gb

#### GET myindice-2016.04.05/_count

    {
      "count": 3129036,
      "_shards": {
        "total": 5,
        "successful": 5,
        "failed": 0
      }
    }
### Elasticsearch 2.x:
#### GET _cat/indices/myindice-2016.04.05?v

    health status index               pri rep docs.count docs.deleted store.size pri.store.size
    green  open   myindice-2016.04.05   1   0    3409028            0     24.5gb         24.5gb

#### GET myindice-2016.04.05/_count

    {
      "count": 3129036,
      "_shards": {
        "total": 1,
        "successful": 1,
        "failed": 0
      }
    }
### Elasticsearch 2.x exact version:

    "version": {
      "number": "2.3.1",
      "build_hash": "bd980929010aef404e7cb0843e61d0665269fc39",
      "build_timestamp": "2016-04-04T12:25:05Z",
      "build_snapshot": false,
      "lucene_version": "5.5.0"
    }
Please note that the difference in size is normal, as I'm performing some cleanup of the events while migrating them. The difference in the number of shards per index is also intended: the number of indices we'd like to keep open requires us to either reduce the number of shards per index or increase the size of our cluster, and the latter isn't an option at the moment.