Slow search and tiny segments memory_in_bytes after upgrading to ES >= 7.9

Hi,
After upgrading from ES 7.8.x to ES >= 7.9, I noticed that the segments *_memory_in_bytes stats became very small and search became very slow.

This can be seen in the stats:

ES 7.8.1
/_nodes/stats

segments: 
{
count: 7,
memory_in_bytes: 2964866,
terms_memory_in_bytes: 2952362,
stored_fields_memory_in_bytes: 7160,
term_vectors_memory_in_bytes: 0,
norms_memory_in_bytes: 0,
points_memory_in_bytes: 4812,
doc_values_memory_in_bytes: 532,
index_writer_memory_in_bytes: 0,
version_map_memory_in_bytes: 0,
fixed_bit_set_memory_in_bytes: 0,
max_unsafe_auto_id_timestamp: -1,
file_sizes: { }
},

/_cat/segments

index      shard prirep ip         segment generation docs.count docs.deleted    size size.memory committed searchable version compound
test-index 0     p      172.26.0.3 _0               0     181104            0 313.6mb      564872 true      true       8.5.1   true

After merge:

test-index 0     p      172.26.0.3 _7               7    1000000            0   1.6gb     1834676 true      false      8.5.1   false

Search {"query": {"term": {"id": "xxx"}}} runs at ~1000 docs/s.

ES >= 7.9
_nodes/stats

segments: 
{
count: 7,
memory_in_bytes: 28444,
terms_memory_in_bytes: 20832,
stored_fields_memory_in_bytes: 7080,
term_vectors_memory_in_bytes: 0,
norms_memory_in_bytes: 0,
points_memory_in_bytes: 0,
doc_values_memory_in_bytes: 532,
index_writer_memory_in_bytes: 0,
version_map_memory_in_bytes: 0,
fixed_bit_set_memory_in_bytes: 0,
max_unsafe_auto_id_timestamp: -1,
file_sizes: { }
},

_cat/segments

index      shard prirep ip         segment generation docs.count docs.deleted    size size.memory committed searchable version compound
test-index 0     p      172.28.0.7 _0               0     185307            0 320.5mb        4228 false     false      8.6.0   true

After merge:

test-index 0     p      172.28.0.7 _7               7    1000000            0 1.6gb        7396 true      true       8.6.0   false

Search {"query": {"term": {"id": "xxx"}}} runs at ~500 docs/s.
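To quantify the drop, here is a minimal Python sketch comparing the two /_nodes/stats payloads above (the numbers are copied from the responses, not fetched from a live cluster):

```python
# On-heap segment memory reported by /_nodes/stats, before (7.8.1)
# and after (>= 7.9) the upgrade, copied from the responses above.
es78 = {"memory_in_bytes": 2964866, "terms_memory_in_bytes": 2952362}
es79 = {"memory_in_bytes": 28444, "terms_memory_in_bytes": 20832}

ratio = es78["memory_in_bytes"] / es79["memory_in_bytes"]
terms_ratio = es78["terms_memory_in_bytes"] / es79["terms_memory_in_bytes"]
print(f"total on-heap segment memory dropped ~{ratio:.0f}x")  # ~104x
print(f"terms memory dropped ~{terms_ratio:.0f}x")            # ~142x
```

So almost all of the reduction comes from terms memory, which matches the search slowdown being on a term query.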


Configs

I run ES in Docker, using the official images from docker.elastic.co.

Index:

{
   "settings": {
       "store.type": "niofs",
       "refresh_interval": "600s",
       "translog.flush_threshold_size": "512mb",
       "number_of_shards": 1,
       "number_of_replicas": 0
   },
   "mappings": {
       "dynamic": false,
       "properties": {
           "id": {"type": "keyword"},
           "field0": {"type": "keyword"},
           "field1": {"type": "keyword"},
           "field2": {"type": "keyword"},
           "field3": {"type": "keyword"},
           "field4": {"type": "keyword"},
           "field5": {"type": "keyword"},
           "field6": {"type": "keyword"},
           "field7": {"type": "keyword"},
           "field8": {"type": "keyword"},
           "field9": {"type": "keyword"},
           "field10": {"type": "keyword"}
       }
   }
}
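For completeness, a sketch of building that request body in Python (the create-index call itself needs a running cluster, so only the body is constructed and printed here; since all eleven fieldN mappings are identical, they are generated in a loop):

```python
import json

# Build the create-index request body shown above programmatically.
properties = {"id": {"type": "keyword"}}
properties.update({f"field{i}": {"type": "keyword"} for i in range(11)})

index_body = {
    "settings": {
        "store.type": "niofs",
        "refresh_interval": "600s",
        "translog.flush_threshold_size": "512mb",
        "number_of_shards": 1,
        "number_of_replicas": 0,
    },
    "mappings": {"dynamic": False, "properties": properties},
}

# json.dumps serializes Python's False as JSON false, as the API expects.
print(json.dumps(index_body, indent=2))
```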

Data:
id and fieldX are just unique short strings, e.g. str(uuid.uuid4()) * 2 (but it doesn't matter; other values behave the same).
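A sketch of how such test documents could be generated (the helper name is mine, not from the original post):

```python
import uuid

def make_doc():
    """Build one test document: 'id' plus field0..field10, each a
    unique short string made from a doubled UUID, as described above."""
    doc = {"id": str(uuid.uuid4()) * 2}
    for i in range(11):
        doc[f"field{i}"] = str(uuid.uuid4()) * 2
    return doc

doc = make_doc()
print(len(doc), "fields; id length:", len(doc["id"]))  # 12 fields; id length: 72
```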


I tried changing index and cluster settings, but saw no improvement.
What could be the problem? Is this a bug or a feature?


While testing, I found that this started after https://issues.apache.org/jira/browse/LUCENE-9257 (commit e7a61ea).
I asked the developers on IRC in #lucene.


This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.