Version conflict, document already exists (current version [1])

With this config:
output {
    if ([type] == "state") {
        elasticsearch {
            hosts => [ ]
            index => "%{[meta][target][index]}"
            document_id => "%{[@metadata][target][id]}"
            action => "update"
            doc_as_upsert => true
            id => "logfilter-pprd-01.internal.cls.vt.edu_es_state"
            template_overwrite => false
            manage_template => false
            retry_on_conflict => 5
        }
    }
}

I get this error on any update (creates work):
[2018-07-09T15:10:44.971-0400][WARN ][logstash.outputs.elasticsearch] Failed action. {:status=>409, :action=>["update", {:_id=>"f4:4d:30:60:8a:31", :_index=>"state_mac", :_type=>"state", :_routing=>nil, :_retry_on_conflict=>1}, 2018-07-09T19:09:45.000Z %{host} %{message}], :response=>{"update"=>{"_index"=>"state_mac", "_type"=>"state", "_id"=>"f4:4d:30:60:8a:31", "status"=>409, "error"=>{"type"=>"version_conflict_engine_exception", "reason"=>"[state][f4:4d:30:60:8a:31]: version conflict, document already exists (current version [1])", "index_uuid"=>"huFaDcR5RgeG92F5S8F9kw", "shard"=>"2", "index"=>"state_mac"}}}}

I know the document already exists; this is an update, not a create. I've played around with retries and various version settings.

Anyone have any ideas on how to disable the version check? It shouldn't even be checking. The docs (https://www.elastic.co/blog/elasticsearch-versioning-support) say it's optional, but not how to disable it.

This is elastic/logstash v5.6.10. Everything else works; updates using the Elasticsearch update API (via curl) succeed.
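
For reference, the kind of curl update that does work looks roughly like this (host, id, and field values are illustrative, not an exact capture from our system):

curl -XPOST 'http://localhost:9200/state_mac/state/f4:4d:30:60:8a:31/_update' -d '
{
    "doc": {
        "device": { "name": "VTC-BA-2-1", "interface": "Po1" }
    },
    "doc_as_upsert": true
}'

That merges the partial document into whatever is already stored (or creates it if missing) with no version conflict.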

This is blocking our migration to 5.6 (and thence to 6.x).

Has anyone seen anything like this before, please?

Does anyone have a working 5.6 config that does partial updates (update/upsert)?

This works perfectly in 5.4. It still works via the API (curl). Maybe one of the options has changed?

The 5.x and 6.x documentation both say that version checking is optional, and not active unless turned on. This looks like a bug in the logstash elasticsearch output plugin.
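
As far as I can tell from the plugin docs, version checking would only be in play if the output set the version/version_type options, which I am not doing anywhere. A sketch of what actually turning it on would look like (the [meta][version] field is hypothetical, and action => "index" is used because the update API does not accept external versions, as far as I can tell):

output {
    elasticsearch {
        hosts => [ "localhost" ]
        index => "state_mac"
        document_id => "%{[@metadata][target][id]}"
        action => "index"
        version => "%{[meta][version]}"
        version_type => "external"
    }
}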

I know this is a rare use case, but can someone please take a look at this?

This is a documented feature and it's not working.

Can someone please take a look at this? It's been weeks.

Do you have a working config then? Would it be possible to share it so I can compare with mine?

The configuration I used was

input { generator { count => 1 message => 'Foo' } }
output { stdout { codec => rubydebug } }
output {
    elasticsearch {
        hosts => [ "localhost" ]
        index => "state_mac"
        document_id => "f4:4d:30:60:8a:31"
        action => "update"
        doc_as_upsert => true
        id => "logfilter-pprd-01.internal.cls.vt.edu_es_state"
        template_overwrite => false
        manage_template => false
        retry_on_conflict => 5
    }
}

If I change the generator message to be Bar, then it updates just fine. The event looks like this

{
  "_index": "state_mac",
  "_type": "doc",
  "_id": "f4:4d:30:60:8a:31",
  "_version": 4,
  "_score": null,
  "_source": {
    "@version": "1",
    "host": "...",
    "sequence": 0,
    "message": "Bar",
    "@timestamp": "2018-07-30T20:36:56.969Z"
  },
  "fields": {
    "@timestamp": [
      "2018-07-30T20:36:56.969Z"
    ]
  },
  "sort": [
    1532983016969
  ]
}

I'd take a close look at the event you are trying to index (using rubydebug to stdout), and the event you are trying to overwrite (in the JSON tab in Kibana/Discover) and see if anything jumps out.
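
One more thing to check: your document_id is built from [@metadata], and the default rubydebug output doesn't print metadata, so that field is easy to miss. Something like this (a sketch, leaving the rest of your pipeline unchanged) will show it:

output {
    stdout { codec => rubydebug { metadata => true } }
}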

What happens when the two versions update different fields? (say src.ip and dst.ip)
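
For example, two events that each carry only their own fields should end up merged into the same document, roughly like this via the update API (id and addresses invented for illustration):

curl -XPOST 'http://localhost:9200/state_mac/state/f4:4d:30:60:8a:31/_update' -d '
{ "doc": { "src": { "ip": "10.0.0.1" } }, "doc_as_upsert": true }'

curl -XPOST 'http://localhost:9200/state_mac/state/f4:4d:30:60:8a:31/_update' -d '
{ "doc": { "dst": { "ip": "10.0.0.2" } }, "doc_as_upsert": true }'

After both run, the document has src.ip and dst.ip. Does the same pattern fail when it goes through Logstash?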

I have looked at the raw document, nothing leaped out at me. I'll pull a few versions.

(sorry for the formatting. The preformatted text button doesn't work)
This one (where there was no existing record) worked:
{
    "@timestamp" => 2018-07-31T13:14:37.000Z,
    "src" => {
        "mac" => "c0:42:d0:54:b1:a1"
    },
    "meta" => {
        "filter" => [
            [0] "24-netrecon_state",
            [1] "71-mac-normalize",
            [2] "72-ip-normalize"
        ],
        "input" => "24-netrecon_state",
        "filterhost" => "logfilter-pprd-01.internal.cls.vt.edu",
        "filtertime" => 1533042927,
        "type" => "edu.vt.nis.netrecon",
        "target" => {
            "index" => "state_mac"
        }
    },
    "@version" => "1",
    "host" => [],
    "prospector" => {
        "type" => "log"
    },
    "type" => "state",
    "netrecon" => {
        "fact" => {}
    },
    "fields" => {
        "group" => "laa.netrecon"
    },
    "device" => {
        "name" => "VTC-CB-1-1",
        "interface" => "Po1",
        "ip" => "172.16.246.36"
    },
    "tags" => [
        [0] "state"
    ]
}

And this one generated a 409:
{
    "@timestamp" => 2018-07-31T13:14:52.000Z,
    "src" => {
        "mac" => "c0:42:d0:54:b1:a1"
    },
    "meta" => {
        "filter" => [
            [0] "24-netrecon_state",
            [1] "71-mac-normalize",
            [2] "72-ip-normalize"
        ],
        "input" => "24-netrecon_state",
        "filterhost" => "logfilter-pprd-01.internal.cls.vt.edu",
        "filtertime" => 1533042927,
        "type" => "edu.vt.nis.netrecon",
        "target" => {
            "index" => "state_mac"
        }
    },
    "@version" => "1",
    "host" => [],
    "prospector" => {
        "type" => "log"
    },
    "type" => "state",
    "netrecon" => {
        "fact" => {}
    },
    "fields" => {
        "group" => "laa.netrecon"
    },
    "device" => {
        "name" => "VTC-BA-2-1",
        "interface" => "Po1",
        "ip" => "172.16.246.32"
    },
    "tags" => [
        [0] "state"
    ]
}

I also have examples where the updates are not writing to the same fields (assembling sendmail event logs into transactions), but those are more complex. I get the same failure there, and I'd like other documents to be able to add other things to this one: circuit number, username, etc.

Bump to keep this from expiring.

Bump to keep this from expiring.

Very odd. I'll give it a try, but I'll need to get to 6.x first. This started when I went from 5.4.1 to 5.6.10.

Partial updates worked after that?

Weekly bump. Please, will someone take a look at this bug?

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.