Relation between snapshot restore and update_mapping

This is a somewhat bizarre question. I really hope somebody jumps in,
because I'm losing my mind.

We've set up a system by which our one-machine cluster gets updated indices
that were built in other clusters, by restoring snapshots.

Long story short:

For a few hours, the cluster restores snapshots, each of them containing
the data for two indices. Of course, the "global_state" flag is set to
false, because we don't want to restore the whole cluster state, just
those two indices.
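
For reference, each restore request looks roughly like the sketch below
(host, port, and snapshot name are placeholders, not our actual setup;
"backups-1" is the repository that shows up in the logs, and
"include_global_state": false is the flag I mean):

    # Minimal sketch of the kind of restore call described above.
    import json
    import requests

    snapshot = "some_snapshot_name"  # in our case, one snapshot per pair of indices
    url = "http://localhost:9200/_snapshot/backups-1/%s/_restore" % snapshot

    resp = requests.post(
        url,
        params={"wait_for_completion": "true"},            # wait for the restore to finish
        data=json.dumps({"include_global_state": False}),  # same as global_state: false above
    )
    resp.raise_for_status()
    print(resp.json())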

Say that during those few hours the cluster has restored about 500 snapshots,
one after another (there are never two restore processes running at the same
time). Looking at the logs, it goes flawlessly:

[2014-06-19 00:00:01,318][INFO ][snapshots ] [Svarog] restore [backups-1:5e51361312cb68f41e1cb1fa5597672a_ts20140618235915350570] is done
[2014-06-19 00:00:02,363][INFO ][repositories ] [Svarog] update repository [backups-1]
[2014-06-19 00:00:08,653][INFO ][cluster.metadata ] [Svarog] [5e51361312cb68f41e1cb1fa5597672a_ts20140617220817522348] deleting index
[2014-06-19 00:00:09,286][INFO ][cluster.metadata ] [Svarog] [5e51361312cb68f41e1cb1fa5597672a_phonetic_ts20140617220817904810] deleting index
[2014-06-19 00:00:09,815][INFO ][repositories ] [Svarog] update repository [backups-1]
[2014-06-19 00:00:15,570][INFO ][repositories ] [Svarog] update repository [backups-1]
[2014-06-19 00:00:15,938][INFO ][repositories ] [Svarog] update repository [backups-1]
[2014-06-19 00:00:16,208][INFO ][repositories ] [Svarog] update repository [backups-1]
[2014-06-19 00:00:20,669][INFO ][snapshots ] [Svarog] restore [backups-1:70e3583358803e70dc60a83953aaca9e_ts20140618235930121779] is done
[2014-06-19 00:00:21,585][INFO ][repositories ] [Svarog] update repository [backups-1]
[2014-06-19 00:00:26,992][INFO ][cluster.metadata ] [Svarog] [70e3583358803e70dc60a83953aaca9e_ts20140617220848057264] deleting index
[2014-06-19 00:00:27,601][INFO ][cluster.metadata ] [Svarog] [70e3583358803e70dc60a83953aaca9e_phonetic_ts20140617220848563815] deleting index

After restoring each snapshot, the outdated versions of the indices are removed
(because the indices recovered from the snapshot are newer).
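
(Those "deleting index" lines are presumably just ordinary delete-index
requests from our side against the outdated copies; schematically, with a
placeholder host and one of the index names from the logs above:)

    # Sketch of the kind of delete that produces the "deleting index" lines.
    import requests

    old_index = "5e51361312cb68f41e1cb1fa5597672a_ts20140617220817522348"
    requests.delete("http://localhost:9200/" + old_index).raise_for_status()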

This goes quite well, and there is no significant load on the machine while
it happens.

But at some point, the cluster starts to issue "update_mapping" commands
for no apparent reason (I'm almost sure there has been no interaction from
outside)...
[2014-06-19 04:38:36,293][INFO ][snapshots ] [Svarog] restore [backups-1:99cbf66451446e6fe770878e84b4349b_ts20140619043819745139] is done
[2014-06-19 04:38:37,238][INFO ][repositories ] [Svarog] update repository [backups-1]
[2014-06-19 04:38:44,016][INFO ][cluster.metadata ] [Svarog] [99cbf66451446e6fe770878e84b4349b_ts20140604042653951289] deleting index
[2014-06-19 04:38:44,517][INFO ][cluster.metadata ] [Svarog] [99cbf66451446e6fe770878e84b4349b_phonetic_ts20140604042655159506] deleting index
[2014-06-19 05:57:24,721][INFO ][repositories ] [Svarog] update repository [backups-1]
[2014-06-19 05:57:34,869][INFO ][repositories ] [Svarog] update repository [backups-1]
[2014-06-19 05:57:35,234][INFO ][repositories ] [Svarog] update repository [backups-1]
[2014-06-19 05:57:35,505][INFO ][repositories ] [Svarog] update repository [backups-1]
[2014-06-19 05:57:37,399][INFO ][snapshots ] [Svarog] restore [backups-1:397dd5d444dc67449710bd70912f4cb7_ts20140619055720568131] is done
[2014-06-19 05:57:38,447][INFO ][repositories ] [Svarog] update repository [backups-1]
[2014-06-19 05:57:45,491][INFO ][cluster.metadata ] [Svarog] [397dd5d444dc67449710bd70912f4cb7_phonetic_ts20140618095027968556] deleting index
[2014-06-19 05:57:46,081][INFO ][cluster.metadata ] [Svarog] [397dd5d444dc67449710bd70912f4cb7_ts20140618095028345808] deleting index
[2014-06-19 06:00:08,462][INFO ][cluster.metadata ] [Svarog] [fe28a39135049cc846bbfca3b760f814_ts20140619060008417192] creating index, cause [api], shards [1]/[1], mappings []
[2014-06-19 06:00:09,362][INFO ][cluster.metadata ] [Svarog] [fe28a39135049cc846bbfca3b760f814_ts20140619060008417192] create_mapping [product]
[2014-06-19 06:00:09,747][INFO ][cluster.metadata ] [Svarog] [fe28a39135049cc846bbfca3b760f814_phonetic_ts20140619060009600418] creating index, cause [api], shards [1]/[1], mappings []
[2014-06-19 06:00:10,549][INFO ][cluster.metadata ] [Svarog] [fe28a39135049cc846bbfca3b760f814_phonetic_ts20140619060009600418] create_mapping [product]
[2014-06-19 06:00:13,243][INFO ][cluster.metadata ] [Svarog] [fe28a39135049cc846bbfca3b760f814_ts20140619060008417192] update_mapping [product] (dynamic)
[2014-06-19 06:00:13,673][INFO ][cluster.metadata ] [Svarog] [fe28a39135049cc846bbfca3b760f814_phonetic_ts20140619060009600418] update_mapping [product] (dynamic)
[2014-06-19 06:00:58,425][INFO ][cluster.metadata ] [Svarog] [fe28a39135049cc846bbfca3b760f814_phonetic_ts20140619040010012507] deleting index
[2014-06-19 06:00:59,291][INFO ][cluster.metadata ] [Svarog] [fe28a39135049cc846bbfca3b760f814_ts20140619040008838204] deleting index
[2014-06-19 06:25:04,828][INFO ][cluster.metadata ] [Svarog] [66654c4012bfb8fc98cec3b0b638b6c4_ts20140618093606809120] update_mapping [product] (dynamic)
[2014-06-19 06:25:06,356][INFO ][cluster.metadata ] [Svarog] [3162d22da41ab3d2a03ebf335da66f01_ts20140618093539460372] update_mapping [product] (dynamic)
[2014-06-19 06:25:06,846][INFO ][cluster.metadata ] [Svarog] [ba70b834e871bd7f1f0d876f43932a62_ts20140618222652664033] update_mapping [product] (dynamic)
[2014-06-19 06:25:08,522][INFO ][cluster.metadata ] [Svarog] [ab65d5265a24fee6167a2e32015c61fe_ts20140618094714516439] update_mapping [product] (dynamic)
[2014-06-19 06:25:09,261][INFO ][cluster.metadata ] [Svarog] [9f4e64f4dfff8408b00e620cfe08b58a_ts20140618093710472780] update_mapping [product] (dynamic)
[2014-06-19 06:25:18,898][INFO ][cluster.metadata ] [Svarog] [0f49a41a73e3b963b7bf325fbe0ba58d_ts20140618093742713121] update_mapping [product] (dynamic)
[2014-06-19 06:25:23,750][INFO ][cluster.metadata ] [Svarog] [c532e126e4e550dc0157ba8ef621ddb7_ts20140618222456422608] update_mapping [product] (dynamic)
[2014-06-19 06:25:24,526][INFO ][cluster.metadata ] [Svarog] [43c3d2c9e1811649c7c1243052ef4c5e_ts20140618222407625325] update_mapping [product] (dynamic)
[2014-06-19 06:25:38,543][INFO ][cluster.metadata ] [Svarog] [ce5c212f3bc125883c7a339254fe03d3_ts20140618224054988585] update_mapping [product] (dynamic)
[2014-06-19 06:25:42,751][INFO ][cluster.metadata ] [Svarog] [17dd77fa5516f8c1da2f9c9df13cde00_ts20140618094334914579] update_mapping [product] (dynamic)
[2014-06-19 06:25:43,867][INFO ][cluster.metadata ] [Svarog] [4afb65024ac09f3981a1686f9122f843_ts20140618094441400102] update_mapping [product] (dynamic)
[2014-06-19 06:25:47,074][INFO ][cluster.metadata ] [Svarog] [4143c179654911efbe78a6261acc91f8_ts20140618094513352534] update_mapping [product] (dynamic)
[2014-06-19 06:25:50,233][INFO ][cluster.metadata ] [Svarog] [93ad32432f1a00c2013aa27708bab2da_ts20140618094510759751] update_mapping [product] (dynamic)
[2014-06-19 06:25:52,454][INFO ][cluster.metadata ] [Svarog] [d2465f191ddcfbddd82d841628d86878_ts20140618094444716984] update_mapping [product] (dynamic)
[2014-06-19 06:25:54,091][INFO ][cluster.metadata ] [Svarog] [4e405e01a7f82a78cf9bf830bb84671d_ts20140618222756661029] update_mapping [product] (dynamic)
[2014-06-19 06:25:56,396][INFO ][cluster.metadata ] [Svarog] [0272a607c6b06ad0e443b798a10cace4_ts20140618222810691364] update_mapping [product] (dynamic)
[2014-06-19 06:25:57,788][INFO ][cluster.metadata ] [Svarog] [cf27f88571b48a2bb357c88df5cb715f_ts20140618094543983235] update_mapping [product] (dynamic)
[2014-06-19 06:25:58,975][INFO ][cluster.metadata ] [Svarog] [53153c78c92f5ecda529dd895f15d227_ts20140618094642402327] update_mapping [product] (dynamic)
[2014-06-19 06:25:59,914][INFO ][cluster.metadata ] [Svarog] [e41a43ff2180c2eea1a781b923af796a_ts20140618222909737193] update_mapping [product] (dynamic)
[2014-06-19 06:25:59,915][INFO ][cluster.metadata ] [Svarog] [10671c0a29005c9f15bbb0db1091a9a8_ts20140618223011590151] update_mapping [product] (dynamic)
[2014-06-19 06:26:03,468][INFO ][cluster.metadata ] [Svarog] [55c92e80ac2c253e00d0af6c011e7e0e_ts20140618094726165805] update_mapping [product] (dynamic)
[2014-06-19 06:26:13,275][INFO ][cluster.metadata ] [Svarog] [79a0f5fbe69deebe3038fbb0d07e0c08_ts20140619002940062583] update_mapping [product] (dynamic)
[2014-06-19 06:26:15,315][INFO ][cluster.metadata ] [Svarog] [33a6dd5a6d1a6d16dee3f5cdcdd4f7d2_ts20140618094713129723] update_mapping [product] (dynamic)
[2014-06-19 06:26:18,856][INFO ][cluster.metadata ] [Svarog] [fdf931d145319a13827f7bfe438940bc_ts20140618095023902736] update_mapping [product] (dynamic)

See how, starting at 06:25, the cluster issues "update_mapping [product]"
lines for almost every index in the cluster, for no apparent reason.
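
As far as I know, the "(dynamic)" suffix on those lines means the mapping
update was triggered by indexing a document containing a field the "product"
mapping hadn't seen before, i.e. something along these lines (placeholder
host and index name, just to illustrate what normally causes that log line):

    # Sketch: indexing a document with a previously unseen field is what
    # normally produces an "update_mapping [product] (dynamic)" line.
    import json
    import requests

    doc = {"brand_new_field": "some value"}  # a field not yet present in the "product" mapping
    requests.post("http://localhost:9200/some_index/product/", data=json.dumps(doc))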

The only thing that comes to mind is that, after a snapshot restore
operation completes, an "update_mapping" task is left pending that the
cluster runs on its own whenever it decides it is convenient, and since
there has been a lot of restoring, the "update_mapping" queue is quite long,
so the moment the cluster starts working through it there are indeed a lot
of them (which of course puts a lot of pressure on the cluster).
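
If that were the case, I would expect the queued work to show up as pending
cluster-state tasks; something like this sketch should list them (placeholder
host; assumes a 1.x release that has the pending-tasks API):

    # Sketch: list the master's pending cluster-state tasks to check whether
    # a backlog of mapping updates is actually queued up.
    import requests

    pending = requests.get("http://localhost:9200/_cluster/pending_tasks").json()
    for task in pending.get("tasks", []):
        print(task["insert_order"], task["time_in_queue_millis"], task["source"])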

Is that true? Is an "update_mapping" command scheduled for the future (to
be issued whenever the cluster deems it convenient) every time a snapshot
is restored?

Is there any other reason that command might be issued?

txs!


I just discovered that these strange "update_mapping" log lines come from a
completely unrelated thing, so please consider this post invalid and accept
my apologies.

