Shrink index from Lucene 6 to Lucene 7

Hi all,
While trying to shrink indices created with Lucene 6 on the newest Elasticsearch version, I'm getting this error:

nested: IllegalArgumentException[Cannot use addIndexes(Directory) with indexes that have been created by a different Lucene version. The current index was generated by Lucene 6 while one of the directories contains an index that was generated with Lucene 7]; ]

I searched for a fix with no result. Is there anyone who can help me?

Shrinking an index will not change its Lucene version - if the original index was Lucene 6 then the shrunken one will be too.

However, it is supposed to be possible to shrink indices of either version in Elasticsearch 6.x (and we have test cases that check that this does indeed work). Could you please share a lot more information about what you're doing? It would be good to see the commands you're running against Elasticsearch, the full stack trace of the exception you quoted, and any log messages from around the time of the problem.

I'm using the elasticsearch-py client to shrink old indices.
Basically, I was executing this command:

settingsShrink = {
    "settings": {
        "index.number_of_replicas": 1,
        "index.number_of_shards": 1,
        "index.codec": "best_compression",
        # clear the allocation requirement and write block copied from the source index
        "index.routing.allocation.require._name": None,
        "index.blocks.write": None
    }
}

es.indices.shrink(index=elasticFinalIndex, target=elasticFinalShrinkIndex, body=settingsShrink, wait_for_active_shards=1)
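
For completeness, this is roughly how the source index is prepared first, since the Shrink API needs writes blocked and a copy of every shard on one node (a sketch; the node name is just a placeholder):

# Preparation step before shrinking (sketch; node name is a placeholder):
# require a copy of every shard on one node and block writes on the source index.
es.indices.put_settings(
    index=elasticFinalIndex,
    body={
        "index.routing.allocation.require._name": "saelk1",
        "index.blocks.write": True
    }
)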

I read that using the forcemerge command might fix this issue, but when I try it I cannot reduce the segments to just 1 with this command:

es.indices.forcemerge(index=elasticFinalIndex, max_num_segments=1)

This should be the full stack trace:

[2018-10-05T09:50:05,019][WARN ][o.e.i.c.IndicesClusterStateService] [saelk1] [[sroger-gtw-xfb-2018.02.12][0]] marking and sending shard failed due to [failed recovery]
org.elasticsearch.indices.recovery.RecoveryFailedException: [sroger-gtw-xfb-2018.02.12][0]: Recovery failed on {saelk1}{jSe6gdpyTuqBxfScv7SzIQ}{5rJk8R3gSOa_09IkBsZzug}{10.0.18.151}{10.0.18.151:9300}{ml.machine_memory=16828035072, xpack.installed=true, ml.max_open_jobs=20, ml.enabled=true}
	at org.elasticsearch.index.shard.IndexShard.lambda$startRecovery$8(IndexShard.java:2090) ~[elasticsearch-6.3.2.jar:6.3.2]
	at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:626) [elasticsearch-6.3.2.jar:6.3.2]
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [?:1.8.0_65]
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [?:1.8.0_65]
	at java.lang.Thread.run(Thread.java:745) [?:1.8.0_65]
Caused by: org.elasticsearch.index.shard.IndexShardRecoveryException: failed recovery
	at org.elasticsearch.index.shard.StoreRecovery.executeRecovery(StoreRecovery.java:343) ~[elasticsearch-6.3.2.jar:6.3.2]
	at org.elasticsearch.index.shard.StoreRecovery.recoverFromLocalShards(StoreRecovery.java:123) ~[elasticsearch-6.3.2.jar:6.3.2]
	at org.elasticsearch.index.shard.IndexShard.recoverFromLocalShards(IndexShard.java:1563) ~[elasticsearch-6.3.2.jar:6.3.2]
	at org.elasticsearch.index.shard.IndexShard.lambda$startRecovery$8(IndexShard.java:2085) ~[elasticsearch-6.3.2.jar:6.3.2]
	... 4 more
Caused by: java.lang.IllegalArgumentException: Cannot use addIndexes(Directory) with indexes that have been created by a different Lucene version. The current index was generated by Lucene 6 while one of the directories contains an index that was generated with Lucene 7
	at org.apache.lucene.index.IndexWriter.addIndexes(IndexWriter.java:2850) ~[lucene-core-7.3.1.jar:7.3.1 ae0705edb59eaa567fe13ed3a222fdadc7153680 - caomanhdat - 2018-05-09 09:27:24]
	at org.elasticsearch.index.shard.StoreRecovery.addIndices(StoreRecovery.java:170) ~[elasticsearch-6.3.2.jar:6.3.2]
	at org.elasticsearch.index.shard.StoreRecovery.lambda$recoverFromLocalShards$3(StoreRecovery.java:131) ~[elasticsearch-6.3.2.jar:6.3.2]
	at org.elasticsearch.index.shard.StoreRecovery.executeRecovery(StoreRecovery.java:301) ~[elasticsearch-6.3.2.jar:6.3.2]
	at org.elasticsearch.index.shard.StoreRecovery.recoverFromLocalShards(StoreRecovery.java:123) ~[elasticsearch-6.3.2.jar:6.3.2]
	at org.elasticsearch.index.shard.IndexShard.recoverFromLocalShards(IndexShard.java:1563) ~[elasticsearch-6.3.2.jar:6.3.2]
	at org.elasticsearch.index.shard.IndexShard.lambda$startRecovery$8(IndexShard.java:2085) ~[elasticsearch-6.3.2.jar:6.3.2]
	... 4 more

Hmm, this seems surprising. Could you share the output of GET /_cat/segments from your cluster?

There might be a bit of confusion in your question. An Elasticsearch index is made of multiple shards, each of which is made of multiple segments. Shrink creates a new index with fewer shards by copying and rearranging the segments of the original index. Forcemerge tries to combine the segments within each shard together, creating new (larger) segments and removing older (smaller) ones, but does not create a new index, and does not change the number of shards or the division of the data between shards.
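
As an illustration of that per-shard structure, here is a rough elasticsearch-py sketch (the index name is just an example) that counts the segments in each shard copy, which is what forcemerge would try to reduce:

from collections import Counter

# Sketch: count segments per shard copy of one index via the cat segments API.
segments = es.cat.segments(index="sroger-gtw-xfb-2018.02.12", format="json")
per_shard = Counter((s["index"], s["shard"], s["prirep"]) for s in segments)
for (index_name, shard, prirep), num_segments in sorted(per_shard.items()):
    print(index_name, shard, prirep, num_segments)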

Here you can find the output of GET /_cat/segments:

index shard prirep ip segment generation docs.count docs.deleted size size.memory committed searchable version compound
roger-gtw-xfb-2018.01.08 3 p 10.0.18.151 _3s 136 599 0 1.4mb 65185 true true 7.3.1 false
roger-gtw-xfb-2018.01.08 3 r 10.0.18.202 _3s 136 599 0 1.4mb 65185 true true 7.3.1 false
roger-gtw-xfb-2018.01.08 4 p 10.0.18.151 _46 150 605 0 1.5mb 70330 true true 6.6.1 false
roger-gtw-xfb-2018.01.08 4 r 10.0.18.206 _46 150 605 0 1.5mb 70330 true true 6.6.1 false
sroger-gtw-xfb-2018.07.11 0 p 10.0.18.151 _j 19 86826 0 80.3mb 333453 true true 7.3.1 false
sroger-gtw-xfb-2018.07.11 0 r 10.0.18.206 _i 18 86826 0 80.3mb 333286 true true 7.3.1 false
roger-gtw-xfb-2018.01.06 1 r 10.0.18.206 _3c 120 488 0 1.2mb 71865 true true 6.6.1 false
roger-gtw-xfb-2018.02.03 2 p 10.0.18.151 _rn 995 17499 0 37.4mb 173632 true true 7.3.1 false
roger-gtw-xfb-2018.02.03 2 r 10.0.18.206 _rn 995 17499 0 37.4mb 173632 true true 7.3.1 false
sroger-gtw-packets-2018.09.11 0 p 10.0.18.202 _l 21 228677 0 54.5mb 141997 true true 7.3.1 false
sroger-gtw-packets-2018.09.11 0 r 10.0.18.206 _0 0 53584 0 17.7mb 57977 true true 7.3.1 false
sroger-gtw-packets-2018.09.11 0 r 10.0.18.206 _1 1 15287 0 5.2mb 30124 true true 7.3.1 false
sroger-gtw-packets-2018.09.11 0 r 10.0.18.206 _2 2 7193 0 2.6mb 20512 true true 7.3.1 false
sroger-gtw-packets-2018.09.11 0 r 10.0.18.206 _3 3 115 0 80.8kb 15642 true true 7.3.1 true
sroger-gtw-packets-2018.09.11 0 r 10.0.18.206 _4 4 7 0 24.2kb 11870 true true 7.3.1 true
sroger-gtw-packets-2018.09.11 0 r 10.0.18.206 _5 5 8 0 24.5kb 11863 true true 7.3.1 true
sroger-gtw-packets-2018.09.11 0 r 10.0.18.206 _6 6 1 0 17.7kb 9470 true true 7.3.1 true
sroger-gtw-packets-2018.09.11 0 r 10.0.18.206 _7 7 1 0 19.6kb 10820 true true 7.3.1 true
sroger-gtw-packets-2018.09.11 0 r 10.0.18.206 _8 8 18 0 29.5kb 12255 true true 7.3.1 true
sroger-gtw-packets-2018.09.11 0 r 10.0.18.206 _9 9 3 0 21.9kb 11550 true true 7.3.1 true
sroger-gtw-packets-2018.09.11 0 r 10.0.18.206 _a 10 1 0 17.7kb 9470 true true 7.3.1 true
sroger-gtw-packets-2018.10.01 0 r 10.0.18.202 _5 5 1784 0 654.8kb 17034 true true 7.3.1 false
sroger-gtw-packets-2018.10.01 0 r 10.0.18.202 _6 6 6 0 19kb 9511 true true 7.3.1 true
sroger-gtw-packets-2018.10.01 0 r 10.0.18.202 _7 7 1 0 17.7kb 9470 true true 7.3.1 true
sroger-gtw-packets-2018.10.01 0 r 10.0.18.202 _8 8 1839 0 657.2kb 16277 true true 7.3.1 false
sroger-gtw-packets-2018.04.05 0 r 10.0.18.206 _r 27 297805 0 67.3mb 183627 true true 7.3.1 false
sroger-gtw-packets-2018.04.06 0 p 10.0.18.151 _t 29 332505 0 74.8mb 239443 true true 7.3.1 false
sroger-gtw-packets-2018.04.06 0 r 10.0.18.206 _t 29 332505 0 74.8mb 239443 true true 7.3.1 false
sroger-gtw-packets-2018.04.07 0 r 10.0.18.202 _m 22 237588 0 53mb 134388 true true 7.3.1 false
sroger-gtw-packets-2018.04.07 0 p 10.0.18.206 _n 23 237588 0 53mb 134388 true true 7.3.1 false
sroger-gtw-packets-2018.04.08 0 p 10.0.18.151 _e 14 262372 0 57.6mb 172825 true true 7.3.1 false
sroger-gtw-packets-2018.04.08 0 r 10.0.18.206 _e 14 262372 0 57.6mb 172825 true true 7.3.1 false
shrink_roger-gtw-xfb-2018.04.08 0 p 10.0.18.151 _0 0 10747 0 16.9mb 105922 true true 7.2.1 false
shrink_roger-gtw-xfb-2018.04.08 0 p 10.0.18.151 _1 1 780 0 1.3mb 62571 true true 7.2.1 true
shrink_roger-gtw-xfb-2018.04.08 0 p 10.0.18.151 _5 5 11 0 105.4kb 57209 true true 7.2.1 true
shrink_roger-gtw-xfb-2018.04.08 0 p 10.0.18.151 _9 9 10515 0 16.6mb 107054 true true 7.2.1 false
shrink_roger-gtw-xfb-2018.04.08 0 p 10.0.18.151 _a 10 1175 0 1.9mb 64572 true true 7.2.1 false
shrink_roger-gtw-xfb-2018.04.08 0 p 10.0.18.151 _b 11 10464 0 16.4mb 100177 true true 7.2.1 false
shrink_roger-gtw-xfb-2018.04.08 0 p 10.0.18.151 _c 12 1134 0 1.9mb 64580 true true 7.2.1 false
shrink_roger-gtw-xfb-2018.04.08 0 p 10.0.18.151 _e 14 7 0 96.8kb 57177

I read somewhere (honestly I don't remember where) that in these cases I should forcemerge the original index, but that's probably not correct.

This output looks truncated, but I asked around and found a similar case elsewhere, for which we opened #33826. Is it possible you:

  1. created the index
  2. upgraded Elasticsearch
  3. partially restored the index from a snapshot, or force-allocated one of its shards?

If so, the shards will have different version numbers and shrinking will not be able to combine them. Your best bet will be to reindex the problematic indices instead of shrinking them.
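
For example, a rough sketch with elasticsearch-py (index names and settings here are placeholders; adjust them, and any mappings, to your case):

# Sketch: rebuild a problematic index into a new single-shard index via the Reindex API.
es.indices.create(
    index="sroger-gtw-xfb-2018.02.12-reindexed",
    body={"settings": {"index.number_of_shards": 1, "index.number_of_replicas": 1}}
)
es.reindex(
    body={
        "source": {"index": "sroger-gtw-xfb-2018.02.12"},
        "dest": {"index": "sroger-gtw-xfb-2018.02.12-reindexed"}
    },
    wait_for_completion=True,
    request_timeout=3600  # a large reindex can take a while over a single HTTP request
)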
