Errors on bulk delete of child

Hi All,

I'm working with the latest build of ES (just downloaded yesterday morning
from source) and I've come across an issue: when I attempt to do a bulk
delete of a child document, I get an error saying the versions don't match
(even though I don't specify a version, which I understood to mean use the
latest). I put a curl example at bulk delete child not working · GitHub

Any help would really be appreciated,

Matt

Hi Matt

I've tried your script both on 0.15.1 and the latest from master (from
today).

I don't get any errors. All works fine.

But I wonder if adding a cluster_health 'wait_for_status' pause after
your refresh might help?
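
Something along these lines, right after the refresh (a sketch; pick yellow or
green depending on whether you have replicas to wait for):

curl -XGET 'http://localhost:9200/_cluster/health?wait_for_status=green'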

clint

clint,

Thanks for the reply! I'm definitely getting an error on the final delete
every time I try it (all the other commands succeed as expected):

{"took":9,"items":[{"delete":{"_index":"err_test_real","_type":"results","_id":"8a08dfb3af854f07b72b04b977f27f2a:81","error":"VersionConflictEngineException[[err_test_real][4]
[results][8a08dfb3af854f07b72b04b977f27f2a:81]:
version conflict, current
[-1], required [2]]"}}]}

I suppose I could try the cluster_health check (I'll have to dig into how
that's done), but I was under the impression that doing a refresh on the
index waits until everything has been flushed out before it returns, which is
why that call is there before the delete (or am I wrong about how refresh
works?)
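
For reference, the refresh call in the script is just the standard index
refresh, roughly this (index name taken from the error output above):

curl -XPOST 'http://localhost:9200/err_test_real/_refresh'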


clint,

I just tried adding a curl
-XGET 'http://localhost:9200/_cluster/health?wait_for_status=yellow'
(yellow because I only have 1 node in the test setup, so no replicas) and I
still get the same error. I'm going to try pulling the latest and see if it
still occurs with that version.

Thanks for the help anyway!

Matt


Hi Matt

Thanks for the reply! I'm definitely getting an error on the final
delete every time I try it (all the other commands succeed as expected):

{"took":9,"items":[{"delete":{"_index":"err_test_real","_type":"results","_id":"8a08dfb3af854f07b72b04b977f27f2a:81","error":"VersionConflictEngineException[[err_test_real][4] [results][8a08dfb3af854f07b72b04b977f27f2a:81]: version conflict, current [-1], required [2]]"}}]}

I retried with a cluster of 3 nodes, and it still works. I get this:
{"took":3,"items":[{"delete":{"_index":"err_test_real","_type":"results","_id":"8a08dfb3af854f07b72b04b977f27f2a:81","_version":2,"ok":true}}]}

I suppose I could try the cluster_health check (I'll have to dig into
how that's done), but I was under the impression that doing a refresh
on the index waits until everything has been flushed out before it
returns, which is why that call is there before the delete (or am I
wrong about how refresh works?)

I think it should, but I know there have been issues with it in the
past.

Try upgrading to the latest master from today (kimchy moves at quite a
speed, so yesterday's "latest" is today's old news)

And perhaps you have something non-default in your config?

clint

Clint,

I just finished downloading the latest; building it now.

I have nothing non-default set in my config; I'm using all the
out-of-the-box settings right now.

Matt


Clint,

Well, interesting results. It definitely still occurs with the latest from
source, BUT it doesn't happen every time I run the script (dropping the
index between runs, refreshing, and waiting for green status). I can
typically get one execution of the script to work from a fresh install
(usually the 1st or 2nd try succeeds), and after that I get the error.



matt - I'd open an issue on
Issues · elastic/elasticsearch · GitHub with a recreation.

clint

Thanks Clint, I opened the issue with a link to the gist. Appreciate the
help regardless!


Heya,

I think I know where the problem comes from. Basically, the delete API does not accept a parent parameter, just a routing parameter, so for a delete the parent value should be set as the routing. But the delete API should support parent as well (it will simply set the routing value automatically); here is the issue: Delete API: Allow to set _parent on it (will simply set the routing value) · Issue #742 · elastic/elasticsearch · GitHub.
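
As a rough example, the delete action line in the bulk request would carry the parent value as the routing, something like this (index/type/id taken from the error output above; PARENT_ID stands for whatever parent id the child was indexed with):

{"delete":{"_index":"err_test_real","_type":"results","_id":"8a08dfb3af854f07b72b04b977f27f2a:81","_routing":"PARENT_ID"}}

Once the issue above is in, specifying _parent on the delete will simply set that routing value for you.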

-shay.banon

Shay,

I just saw the reply on the issue; I'll make the change to _routing and let
you know.

Matt


Just finished testing, works great, thanks!
