_source 50% bigger after reindex

Hello.

Context:
I reindexed a 7.15 index into a 7.16 one and suddenly it is taking about +50% more space.
Using the Analyze index disk usage API | Elasticsearch Guide [7.16] | Elastic I found that the _source field alone is now taking +64% more disk space than before.
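
The numbers I am quoting come from calls of roughly this form (the run_expensive_tasks=true parameter is required because the analysis is expensive):

POST /srcindex/_disk_usage?run_expensive_tasks=true
POST /dstindex/_disk_usage?run_expensive_tasks=true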

Problem:
What happened? I don't see any difference in index settings that could explain the increase.
I suspected a change in index.codec, but that is not showing in the diff of settings either.
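
(Worth noting: index.codec only shows up in the settings output when it has been set explicitly, so a clean settings diff does not rule it out. One way to see the effective value is to include defaults, e.g.:

GET /srcindex/_settings?include_defaults=true
GET /dstindex/_settings?include_defaults=true

and look for index.codec under the settings or defaults section of each response.)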

What was the reindex you did?
What are the mappings?

I pursued the index.codec idea further.
Per Provide index level indication when index.codec is set at the node level · Issue #26130 · elastic/elasticsearch · GitHub and Add missing fields to _segments API · Issue #3160 · elastic/elasticsearch-net · GitHub, index.codec should be available via the Index segments API.

But no luck: both GET /srcindex/_segments?pretty and GET /dstindex/_segments?pretty return "attributes" : { "Lucene87StoredFieldsFormat.mode" : "BEST_SPEED" }
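
The attribute is buried fairly deep in the _segments response; a filter_path along these lines should trim the output down to just that part:

GET /srcindex/_segments?filter_path=**.attributes
GET /dstindex/_segments?filter_path=**.attributes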


I have already deleted the original index. But I am sure it will also happen for the index I am reindexing right now - I will send you (how?) the mappings/settings diff after the reindex completes.
I am also looking into another possibility - so far I have always reindexed on debian-based nodes. This time I reindexed on redhat-based ones.

I reindexed a similar index, details below:

Index settings+mappings diff
@@ -1,4 +1,4 @@
-GET /srcindex?pretty
+GET /dstindex?pretty
 {
-  "srcindex" : {
+  "dstindex" : {
     "aliases" : { },
@@ -109,7 +109,5 @@
         },
         "message" : {
-          "type" : "text",
-          "index" : false,
-          "norms" : false
+          "type" : "match_only_text"
         },
         "msg" : {
@@ -152,9 +150,6 @@
           "type" : "long"
         },
-        "session_id" : {
-          "type" : "keyword"
-        },
         "sessionid" : {
-          "type" : "long"
+          "type" : "keyword"
         },
         "set" : {
@@ -243,5 +238,5 @@
         "lifecycle" : {
           "name" : "ILM-LOG",
-          "rollover_alias" : ""
+          "origination_date" : "1640390409363"
         },
         "routing" : {
@@ -258,6 +253,6 @@
           "read" : "false"
         },
-        "provided_name" : "srcindex",
-        "creation_date" : "1640390409363",
+        "provided_name" : "dstindex",
+        "creation_date" : "1651492970910",
         "unassigned" : {
           "node_left" : {
@@ -275,7 +270,7 @@
         "priority" : "50",
         "number_of_replicas" : "1",
-        "uuid" : "xxxxxxxxxxxxxxxxxxxxxx",
+        "uuid" : "yyyyyyyyyyyyyyyyyyyyyy",
         "version" : {
-          "created" : "7150299"
+          "created" : "7160199"
         }
       }
Disk usage

Comparison by field:

                  srcindex[MB]  dstindex[MB]  increase [MB]
_seq_no             2149.71712    1589.50924  -560
<snip>                                        -2 to +2
timestamp           2899.79371    2910.58317  12
_field_names           -            18.4978352 19
_id                 5504.5137     5845.82451  341
message                -          7679.38163  7679
_source            33728.2685    42476.4811   8748
total              67076.7153    83313.122    16236

Comparison by type:

                  srcindex[MB]  dstindex[MB]  increase [MB]
points              5409.27139    4976.32488  -433
doc_values         14817.2165    14699.6708   -118
norms                  0             0         0
inverted_index     11845.9334    19578.0484   7733
stored_fields      35005.294     44058.0779   9053
total              67076.7153    83313.122    16236
_cat segments

Original:

/_cat/segments/srcindex?h=index,shard,prirep,segment,generation,docs.count,docs.deleted,size,size.memory,committed,searchable,version,compound&v
index    shard prirep segment generation docs.count docs.deleted   size size.memory committed searchable version compound
srcindex 0     p      _xxaa       341509  134835092            0 21.8gb       21404 true      true       8.9.0   false
srcindex 0     r      _xxaa       341509  134835092            0 21.8gb       21404 true      true       8.9.0   false
srcindex 1     p      _xxbb       342018  134812277            0 21.8gb       21404 true      true       8.9.0   false
srcindex 1     r      _xxbb       342018  134812277            0 21.8gb       21404 true      true       8.9.0   false
srcindex 2     p      _xxcc       341409  134841646            0 21.8gb       21404 true      true       8.9.0   false
srcindex 2     r      _xxcc       341409  134841646            0 21.8gb       21404 true      true       8.9.0   false

Reindexed:

/_cat/segments/dstindex?h=index,shard,prirep,segment,generation,docs.count,docs.deleted,size,size.memory,committed,searchable,version,compound&v
index    shard prirep segment generation docs.count docs.deleted   size size.memory committed searchable version compound
dstindex 0     p      _xx            688  134835092            0 27.1gb      143212 true      true       8.11.1  false
dstindex 0     r      _xx            688  134835092            0 27.1gb      143212 true      true       8.11.1  false
dstindex 1     p      _xx            688  134812277            0 27.1gb      143212 true      true       8.11.1  false
dstindex 1     r      _xx            688  134812277            0 27.1gb      143212 true      true       8.11.1  false
dstindex 2     p      _yy           1063  134841646            0 27.1gb      143212 true      true       8.11.1  false
dstindex 2     r      _yy           1063  134841646            0 27.1gb      143212 true      true       8.11.1  false

This time _source increased only by 25%, taking about as many extra bytes as the new inverted index of message.


I'll try reindexing without adding an inverted index for message and see what happens.
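
A sketch of that test (the destination name noinvtest is just a placeholder; message gets the old srcindex mapping so no inverted index is built for it, and the remaining fields come from the template/dynamic mapping as usual):

PUT /noinvtest
{
  "mappings": {
    "properties": {
      "message": {
        "type": "text",
        "index": false,
        "norms": false
      }
    }
  }
}

POST /_reindex?slices=auto
{
  "source": { "index": "srcindex" },
  "dest": { "index": "noinvtest" }
}

POST /noinvtest/_forcemerge?max_num_segments=1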

I would suggest that if you want to compare the sizes, you do a force merge to one segment on the source index and then a force merge to one segment on the destination index. After that finishes, you'll have an apples-to-apples comparison.
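
For example, with the index names used in this thread:

POST /srcindex/_forcemerge?max_num_segments=1
POST /dstindex/_forcemerge?max_num_segments=1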

All the indices I compare are already forcemerged, sorry about the confusion.
You can verify that in the _cat segments output.

This is the actual (and in my opinion valid) comparison, using a different index with similar data.
Plain reindex, no mapping changes, both indices forcemerged, still a +25% size increase on _source.

Reindexing command
POST /_reindex?slices=auto&pretty
{
  "source": {
    "index": "srcindex"
  },
  "dest": {
    "index": "identmap"
  }
}
POST /identmap/_forcemerge?max_num_segments=1&pretty
{
  "_shards" : {
    "total" : 6,
    "successful" : 6,
    "failed" : 0
  }
}
Index settings+mappings diff
@@ -1,5 +1,5 @@
-GET /srcindex?pretty
+GET /identmap?pretty
 {
-  "srcindex" : {
+  "identmap" : {
     "aliases" : { },
     "mappings" : {
@@ -243,5 +243,5 @@
         "lifecycle" : {
           "name" : "SIEM-LOG",
-          "rollover_alias" : ""
+          "origination_date" : "1640390409363"
         },
         "routing" : {
@@ -258,6 +258,6 @@
           "read" : "false"
         },
-        "provided_name" : "srcindex",
-        "creation_date" : "1640390409363",
+        "provided_name" : "identmap",
+        "creation_date" : "1651742088307",
         "unassigned" : {
           "node_left" : {
@@ -275,7 +275,7 @@
         "priority" : "50",
         "number_of_replicas" : "1",
-        "uuid" : "xxxxxxxxxxxxxxxxxxxxxx",
+        "uuid" : "aaaaaaaaaaaaaaaaaaaaaa",
         "version" : {
-          "created" : "7150299"
+          "created" : "7160199"
         }
       }
Disk usage

Comparison by field:

                  srcindex[MB]  identmap[MB]  increase [MB]
_seq_no           2149          1587          -562
<snip>                                        -2 to +2
timestamp         2899          2910          11
_id               5505          5819          314
_source           33729         42095         8366
total             67077         75204         8127

Comparison by type:

                  srcindex[MB]  identmap[MB]  increase [MB]
points            5409          4974          -435
doc_values        14817         14699         -118
inverted_index    11845         11842         -3
norms             0             0             0
term_vectors      0             0             0
stored_fields     35005         43688         8683
total             67077         75204         8127
_cat segments

srcindex (original):

/_cat/segments/srcindex?h=index,shard,prirep,segment,generation,docs.count,docs.deleted,size,size.memory,committed,searchable,version,compound&v
index    shard prirep segment generation docs.count docs.deleted   size size.memory committed searchable version compound
srcindex 0     p      _xxaa       341509  134835092            0 21.8gb       21404 true      true       8.9.0   false
srcindex 0     r      _xxaa       341509  134835092            0 21.8gb       21404 true      true       8.9.0   false
srcindex 1     p      _xxbb       342018  134812277            0 21.8gb       21404 true      true       8.9.0   false
srcindex 1     r      _xxbb       342018  134812277            0 21.8gb       21404 true      true       8.9.0   false
srcindex 2     p      _xxcc       341409  134841646            0 21.8gb       21404 true      true       8.9.0   false
srcindex 2     r      _xxcc       341409  134841646            0 21.8gb       21404 true      true       8.9.0   false

identmap (reindexed):

/_cat/segments/identmap?h=index,shard,prirep,segment,generation,docs.count,docs.deleted,size,size.memory,committed,searchable,version,compound&v
index    shard prirep segment generation docs.count docs.deleted   size size.memory committed searchable version compound
identmap 0     p      _aa            697  134835092            0 24.4gb      140556 true      true       8.11.1  false
identmap 0     r      _aa            697  134835092            0 24.4gb      140556 true      true       8.11.1  false
identmap 1     p      _bb            727  134812277            0 24.4gb      140556 true      true       8.11.1  false
identmap 1     r      _bb            727  134812277            0 24.4gb      140556 true      true       8.11.1  false
identmap 2     p      _cc            699  134841646            0 24.4gb      140556 true      true       8.11.1  false
identmap 2     r      _cc            699  134841646            0 24.4gb      140556 true      true       8.11.1  false

I am sorry for all the previous unnecessary messages. I should have done all this triage before posting.

@stephenb Sorry to bother you, but how do I forcemerge a specific segment of an index?
Both Force merge API | Elasticsearch Guide [7.17] | Elastic and Force merge API | Elasticsearch Guide [8.2] | Elastic merge all segments of an index. I don't see any option to target a specific segment.

As far as I know there is no way to target a specific segment.

Ah, okay. I thought it would be weird to merge just one segment, but wanted to be sure.
So, all the shards I am comparing are merged down to one segment. Is that enough for a comparison?

The source index was forcemerged back in 7.15. Does it make sense to re-forcemerge it (1->1)?
So far I have been trying to avoid that, because then I'd lose my test case (and some gigabytes of space).

Hi @nisow95612 I think it is up to you... I have kind of lost track of exactly what you are trying to accomplish... apologies...

Just for a test I picked a random index, reindexed it, and then force merged it to 1 segment; the difference in size is ~0.2%, which is a reasonable margin of error to me. This has been my experience over time. I am not sure what you are seeing.

My cluster is 7.17, even though those indices are 7.15.2.

GET /_cat/indices/file*156*?v&s=pri.store.size:desc&bytes=b
health status index                                     uuid                   pri rep docs.count docs.deleted store.size pri.store.size
green  open   filebeat-7.15.2-2022.05.04-000156         Asgg7scZT16ikU08ei075g   1   1    9943814            0 9146589863     4573198998
green  open   filebeat-7.15.2-2022.05.04-000156-reindex L2Nx5t3hTNqnCvWRFzVsUQ   1   1    9943814            0 9138208269     4569024766

Yes, I have made a mess of it, sorry.

Let me try to write it down properly.
1)
I was reindexing my data to change a mapping and observed a +50% size increase instead of the expected +15%.
Using the Disk usage API I found out that this increase is caused by _source being +50% bigger.
I assumed this must be due to index.codec, but it was BEST_SPEED for both.
Hence the title of this topic.
2)
Meanwhile the original index was deleted, forcing me to debug a different one.
That one increased only by +25% in size, but that is still way more than expected.
To eliminate the effect of the mapping change I reindexed again, copying the mappings of "srcindex".
The Disk usage API still reported that the main culprit is a +25% increase of _source.

I documented this reindexing operation here:

I am trying to figure out why all attempts at reindexing these indices consume +25% more space,
fix the cause if possible, and then reindex to change mappings without significant consumption of space.

@nisow95612 Not quite sure what to tell you.

I do see the Lucene version is different, but I guess I wouldn't expect that to make the difference you're seeing.

Perhaps (and apologies for the ping, @DavidTurner) David would have an idea why your indices are bigger after the reindex.

It's hard to say without seeing the actual data, but one possible explanation is that the docs are being reordered during the reindex. BEST_SPEED means the docs are compressed in 60kiB blocks using LZ4, so the compression ratio is going to be very sensitive to how similar the docs in each block are.

The segments also come from different Lucene versions. I'm not familiar enough with Lucene development to know whether there might have been any regressions between these versions.

If you're in a position to share the actual data then I think we could dig deeper. If not then unfortunately we can only guess.


Thank you for the offer, I'll sift through the data and see if it could be shared.


Meanwhile I ran an experiment that might help:

  1. Start with a smaller index that has the same problem - let's call it ix
  2. Clone it - let's call the copy ixclone
  3. Delete one document from ixclone
  4. Forcemerge
GET /_cat/segments/ix,ixclone?v&h=index,shard,prirep,segment,generation,docs.count,docs.deleted,size,size.memory,committed,searchable,version,compound
index   shard prirep segment generation docs.count docs.deleted  size size.memory committed searchable version compound
ix      0     p      _21b6        95010   43741735            0 6.8gb        8708 true      true       8.9.0   false
ix      1     p      _27zu       103674   43743538            0 6.8gb        8716 true      true       8.9.0   false
ix      2     p      _207c        93576   43749154            0 6.8gb        8708 true      true       8.9.0   false
ixclone 0     p      _0               0   43741735            0 6.8gb        8708 true      true       8.9.0   false
ixclone 1     p      _0               0   43743538            0 6.8gb        8716 true      true       8.9.0   false
ixclone 2     p      _2               2   43749153            0 7.7gb       45684 true      true       8.11.1  false
                                                                ^^^^^ INCREASE
  5. Voilà - the segment that held the deleted document is now >10% bigger, while the rest stayed the same.
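
For the record, steps 2-4 map to roughly these calls (a sketch; the document id is a placeholder, the clone API requires the source to be write-blocked first, and the block may need to be cleared on the clone before the delete):

PUT /ix/_settings
{ "index.blocks.write": true }

POST /ix/_clone/ixclone

PUT /ixclone/_settings
{ "index.blocks.write": null }

DELETE /ixclone/_doc/SOME_DOC_ID

POST /ixclone/_forcemerge?max_num_segments=1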

Does this, by any chance, prove that reordering is not responsible for the size increase, or "nope"?

Details

Both ix and ixclone actually have replicas. I removed them from the table to keep it short.

List of segments including replicas
GET /_cat/segments/ix,ixclone?v&h=index,shard,prirep,segment,generation,docs.count,docs.deleted,size,size.memory,committed,searchable,version,compound
index   shard prirep segment generation docs.count docs.deleted  size size.memory committed searchable version compound
ix      0     p      _21b6        95010   43741735            0 6.8gb        8708 true      true       8.9.0   false
ix      0     r      _21b6        95010   43741735            0 6.8gb        8708 true      true       8.9.0   false
ix      1     p      _27zu       103674   43743538            0 6.8gb        8716 true      true       8.9.0   false
ix      1     r      _27zu       103674   43743538            0 6.8gb        8716 true      true       8.9.0   false
ix      2     p      _207c        93576   43749154            0 6.8gb        8708 true      true       8.9.0   false
ix      2     r      _207c        93576   43749154            0 6.8gb        8708 true      true       8.9.0   false
ixclone 0     p      _0               0   43741735            0 6.8gb        8708 true      true       8.9.0   false
ixclone 0     r      _0               0   43741735            0 6.8gb        8708 true      true       8.9.0   false
ixclone 1     p      _0               0   43743538            0 6.8gb        8716 true      true       8.9.0   false
ixclone 1     r      _0               0   43743538            0 6.8gb        8716 true      true       8.9.0   false
ixclone 2     r      _2               2   43749153            0 7.7gb       45684 true      true       8.11.1  false
ixclone 2     p      _2               2   43749153            0 7.7gb       45684 true      true       8.11.1  false
Shards before forcemerge
GET /_cat/shards/ixclone?v=true&h=index,sh,pr,state,sc,docs,store
index   sh pr state   sc     docs store
ixclone 1  p  STARTED  1 43743538 6.8gb
ixclone 1  r  STARTED  1 43743538 6.8gb
ixclone 2  p  STARTED  2 43749153 6.8gb
ixclone 2  r  STARTED  2 43749153 6.8gb
ixclone 0  p  STARTED  1 43741735 6.8gb
ixclone 0  r  STARTED  1 43741735 6.8gb
Shards after forcemerge
GET /_cat/shards/ixclone?v=true&h=index,sh,pr,state,sc,docs,store
index   sh pr state   sc     docs store
ixclone 1  p  STARTED  1 43743538 6.8gb
ixclone 1  r  STARTED  1 43743538 6.8gb
ixclone 2  p  STARTED  1 43749153 7.7gb
ixclone 2  r  STARTED  1 43749153 7.7gb
ixclone 0  p  STARTED  1 43741735 6.8gb
ixclone 0  r  STARTED  1 43741735 6.8gb

It does rather cast doubt on that idea, but I still don't think there's much we can do to help move this forward without seeing the data.

A colleague has come up with a plausible explanation:

Lucene 8.7 changed some compression parameters to give smaller files at the expense of apparently-small performance drops (see this blog post), but we later discovered some situations where the performance impact was not so small and partially reverted some of these changes in LUCENE-9917, which landed in 8.10. Your original indices are written with the slower-but-more-highly-compressing Lucene 8.9 and the new ones are using the faster-but-less-compressing Lucene 8.11.


That looks very plausible, thank you for figuring it out!

I guess there is nothing for me to do but be happy that it is not +100% like those nginx logs being compared in the linked comment: [LUCENE-9917] Reduce block size for BEST_SPEED - ASF JIRA
