Deleting an index does not seem to delete external versioning

I am running into an issue using external versioning: deleting an index does not seem to clear the external versioning state for that index.

Steps to recreate:

  1. Create an index
  2. Put an item with

     id: 'someid',
     version: 0,
     version_type: 'external',

     into the index
  3. Delete the index
  4. Wait arbitrarily long
  5. Create the same index
  6. Put the same item with

     id: 'someid',
     version: 0,
     version_type: 'external',

     into the index, and I get a version_conflict_engine_exception

"name": "ResponseError",
  "meta": {
    "body": {
      "error": {
        "root_cause": [
          {
            "type": "version_conflict_engine_exception",
            "reason": "[contact_v1][someid]: version conflict, current version [1] is higher or equal to the one provided [1]",
            "index_uuid": "PvmFtcibTfmBCu8C7iQ43Q",
            "shard": "0",
            "index": "stpze-contacts-3"
          }
        ],
        "type": "version_conflict_engine_exception",
        "reason": "[contact_v1][someid]: version conflict, current version [1] is higher or equal to the one provided [1]",
        "index_uuid": "PvmFtcibTfmBCu8C7iQ43Q",
        "shard": "0",
        "index": "stpze-contacts-3"
      },
      "status": 409
    },
    "statusCode": 409,

My Elasticsearch cluster is running in a Docker container:

{
  "name": "0_cod3F",
  "cluster_name": "docker-cluster",
  "cluster_uuid": "Cvy3FFwdTuuwXHG8S6ws6A",
  "version": {
    "number": "6.4.3",
    "build_flavor": "default",
    "build_type": "tar",
    "build_hash": "fe40335",
    "build_date": "2018-10-30T23:17:19.084789Z",
    "build_snapshot": false,
    "lucene_version": "7.4.0",
    "minimum_wire_compatibility_version": "5.6.0",
    "minimum_index_compatibility_version": "5.0.0"
  },
  "tagline": "You Know, for Search"
}

When I search for all entries in the index, it returns an empty list:

{
    "took": 1,
    "timed_out": false,
    "_shards": {
        "total": 3,
        "successful": 3,
        "skipped": 0,
        "failed": 0
    },
    "hits": {
        "total": 0,
        "max_score": null,
        "hits": []
    }
}

Is there some extra step involved in deleting the external versioning metadata?

I cannot reproduce this. On an empty 6.4.3 cluster:

GET /

# 200 OK
# {
#   "cluster_name": "elasticsearch",
#   "tagline": "You Know, for Search",
#   "name": "node-0",
#   "version": {
#     "build_snapshot": false,
#     "build_hash": "fe40335",
#     "minimum_index_compatibility_version": "5.0.0",
#     "build_flavor": "default",
#     "minimum_wire_compatibility_version": "5.6.0",
#     "build_date": "2018-10-30T23:17:19.084789Z",
#     "number": "6.4.3",
#     "build_type": "tar",
#     "lucene_version": "7.4.0"
#   },
#   "cluster_uuid": "RaRnL4v2TLyhRajRMpWaQw"
# }

PUT /i/_doc/someid?version=0&version_type=external
{}

# 201 Created
# {
#   "_type": "_doc",
#   "_primary_term": 1,
#   "_id": "someid",
#   "_shards": {
#     "successful": 1,
#     "total": 2,
#     "failed": 0
#   },
#   "_index": "i",
#   "result": "created",
#   "_version": 0,
#   "_seq_no": 0
# }

GET /_cat/indices

# 200 OK
# yellow open i SgJRojXESTqWk6tJ9-l-Ww 5 1 0 0 460b 460b
# 

DELETE /i

# 200 OK
# {
#   "acknowledged": true
# }

PUT /i/_doc/someid?version=0&version_type=external
{}

# 201 Created
# {
#   "_type": "_doc",
#   "_primary_term": 1,
#   "_id": "someid",
#   "_shards": {
#     "successful": 1,
#     "total": 2,
#     "failed": 0
#   },
#   "_index": "i",
#   "result": "created",
#   "_version": 0,
#   "_seq_no": 0
# }

GET /_cat/indices

# 200 OK
# yellow open i gyPe__pUS9-1ZOpKbfnOOA 5 1 0 0 460b 460b
# 

Can you check GET /_cat/indices at each stage? The index UUID is reported both in the error message you quote and in the output from GET /_cat/indices, and it should change if an index is deleted and then created again.

I checked it three times:

yellow open unittest-contacts-3 E7WAwQriReCcfwcwkleX9Q 3 2 5 0 39kb 39kb
yellow open stpze-contacts-3    PvmFtcibTfmBCu8C7iQ43Q 3 2 0 0 1.4kb 1.4kb
yellow open unittest-contacts-3 E7WAwQriReCcfwcwkleX9Q 3 2 5 0 39kb 39kb
yellow open stpze-contacts-3    1d6WNfXwSs-Ly1KfXocFPw 3 2 0 0 690b 690b
yellow open unittest-contacts-3 E7WAwQriReCcfwcwkleX9Q 3 2 5 0 39kb 39kb
yellow open stpze-contacts-3    XoG_xbP9TmOZn4VZaWp0gw 3 2 0 0 690b 690b

The UUID does change each time. If I remove the Docker container and completely restart Elasticsearch, it works once, after which it returns to the version_conflict_engine_exception.

The weird thing is that if I remove the version_type: 'external', it works as expected.

Additionally, I always insert my test data with a starting version of 1; if I instead insert it with a starting version of 2, it also seems to work fine.

OK, I believe I have worked out why this is happening.

We delete the index, but after that we also try to delete all the entries we had in it. This means that the Elasticsearch cluster receives DELETE requests for entries which no longer exist, but I suppose that if versioning is set to external it keeps track of those deletions anyway, even though the documents were never part of the cluster.

If we then insert them afterwards, they are still treated as DELETED at that version.
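The behaviour described above can be sketched as a toy in-memory model. This is illustrative only, not Elasticsearch internals; note also that real Elasticsearch expires these delete tombstones after index.gc_deletes (60 seconds by default), which is why "wait arbitrarily long" may matter on a real cluster.

```python
class VersionConflict(Exception):
    pass

class ExternalVersionedIndex:
    """Toy model of version_type=external semantics."""

    def __init__(self):
        self.versions = {}  # doc id -> highest version seen, including deletes
        self.docs = {}      # doc id -> document body

    def index(self, doc_id, version, body):
        # With external versioning, the supplied version must be strictly
        # greater than any version already recorded for this id.
        current = self.versions.get(doc_id)
        if current is not None and version <= current:
            raise VersionConflict(
                f"current version [{current}] is higher or equal "
                f"to the one provided [{version}]")
        self.versions[doc_id] = version
        self.docs[doc_id] = body

    def delete(self, doc_id, version):
        current = self.versions.get(doc_id)
        if current is not None and version <= current:
            raise VersionConflict(
                f"current version [{current}] is higher or equal "
                f"to the one provided [{version}]")
        # The deletion is recorded even if the document never existed,
        # so that out-of-order operations can be rejected later.
        self.versions[doc_id] = version
        self.docs.pop(doc_id, None)

idx = ExternalVersionedIndex()
idx.delete("someid", version=1)   # doc never existed; version 1 recorded anyway
try:
    idx.index("someid", version=1, body={})
except VersionConflict as e:
    print(e)  # current version [1] is higher or equal to the one provided [1]
idx.index("someid", version=2, body={})  # a strictly higher version goes through
```

This also matches the observation above that starting at version 2 works: the tombstone left by the delete at version 1 only blocks versions less than or equal to 1.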

OK, yes: deleting an externally-versioned document from an index that does not exist will actually create the index in order to record the deletion:

GET /_cat/indices

# 200 OK

DELETE /i/_doc/someid?version=0&version_type=external

# 404 Not Found
# {
#   "_type": "_doc",
#   "_primary_term": 1,
#   "_id": "someid",
#   "_shards": {
#     "successful": 1,
#     "total": 2,
#     "failed": 0
#   },
#   "_index": "i",
#   "result": "not_found",
#   "_version": 0,
#   "_seq_no": 0
# }

GET /_cat/indices

# 200 OK
# yellow open i Pqo2tMyMRyOTOXvFAUk0Tw 5 1 0 0 1.1kb 1.1kb
#

I suppose that makes sense; it was just a bit weird that using the internal versioning this was not a problem. I supposed it did the same thing with creating the index, but assumed a higher version number once the insert was done.

Thank you for your help!

No, with internal versioning there's no need to create the index: all documents effectively have an internal version of zero anyway, so there is nothing extra to record. I'm not sure exactly what you did, but here's what I observed on an empty cluster:

GET /_cat/indices

# 200 OK

DELETE /i/_doc/someid?version=0

# 400 Bad Request
# {
#   "status": 400,
#   "error": {
#     "reason": "Validation Failed: 1: illegal version value [0] for version type [INTERNAL];",
#     "root_cause": [
#       {
#         "reason": "Validation Failed: 1: illegal version value [0] for version type [INTERNAL];",
#         "type": "action_request_validation_exception"
#       }
#     ],
#     "type": "action_request_validation_exception"
#   }
# }

GET /_cat/indices

# 200 OK

DELETE /i/_doc/someid?version=1

# 404 Not Found
# {
#   "status": 404,
#   "error": {
#     "index_uuid": "_na_",
#     "resource.type": "index_expression",
#     "reason": "no such index",
#     "root_cause": [
#       {
#         "index_uuid": "_na_",
#         "resource.type": "index_expression",
#         "reason": "no such index",
#         "type": "index_not_found_exception",
#         "index": "i",
#         "resource.id": "i"
#       }
#     ],
#     "type": "index_not_found_exception",
#     "index": "i",
#     "resource.id": "i"
#   }
# }

GET /_cat/indices

# 200 OK

So no index was created here.
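The contrast with the external case can be sketched in the same toy style (again illustrative only, not Elasticsearch internals): with internal versioning the store assigns version numbers itself, so a delete of a missing document has nothing caller-supplied to remember and can simply report not_found without creating anything.

```python
class NotFound(Exception):
    pass

class InternalVersionedIndex:
    """Toy model of internal versioning: versions are assigned by the
    store itself, starting at 1 on the first write."""

    def __init__(self):
        self.versions = {}  # doc id -> current internal version
        self.docs = {}      # doc id -> document body

    def index(self, doc_id, body):
        # The store picks the next version; callers never supply one here.
        self.versions[doc_id] = self.versions.get(doc_id, 0) + 1
        self.docs[doc_id] = body
        return self.versions[doc_id]

    def delete(self, doc_id):
        # Deleting a document that does not exist records nothing: there
        # is no caller-supplied version to remember, so no tombstone is
        # needed and no index would be created.
        if doc_id not in self.docs:
            raise NotFound(doc_id)
        del self.docs[doc_id]
        self.versions[doc_id] += 1

idx = InternalVersionedIndex()
try:
    idx.delete("someid")          # not_found, and nothing is recorded
except NotFound:
    pass
print(idx.index("someid", {}))    # the first write still gets version 1
```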

Can you try it with &version_type=external added to it?

I think I already did that earlier in this post, and showed that it creates an index, as expected. It'd be useful if you could share the exact steps that are leading to the surprise you expressed here:

it was just a bit weird that using the internal versioning this was not a problem

I can't really guess what else you're doing.

Ah, sorry for the confusion. What I meant with

it was just a bit weird that using the internal versioning this was not a problem

was that I did not understand beforehand that the two versioning systems work a little differently when deleting entries in a non-existent index.
