Fielddata cache issue

Hi,

We are upgrading from Elasticsearch 1.7 to 5.3 and are observing some weird behaviour with the fielddata cache. In one of our indices there are keyword fields with a mapping similar to the one below:
"shelf.raw":{ "doc_values":true, "type":"keyword", "eager_global_ordinals": true }
During indexing we can see this field being cached:
GET /_cat/fielddata?v

`UXi7lSWiTmqXoQYqxd5SGg 10.12.21.233 10.12.21.233 UXi7lSW shelf.raw  4.6kb`

But after a force merge with the maximum number of segments set to 1, the fielddata cache size for this field drops to 0:

 `UXi7lSWiTmqXoQYqxd5SGg 10.12.21.233 10.12.21.233 UXi7lSW shelf.raw 0b`

After this, even under heavy search load, the size never goes back up.
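For reference, the merge was done with the standard force merge API, along the lines of (placeholder index name again):

```
POST /catalog/_forcemerge?max_num_segments=1
```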

Since this happens after reducing the number of segments, is it related to the "min" setting in "fielddata_frequency_filter"?

Is this an issue? How should we deal with these settings for the keyword type?
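As far as we understand, `fielddata_frequency_filter` only applies to `text` fields that have `fielddata` enabled, so a sketch of where that "min" setting would live looks like this (same placeholder index and type names as above):

```
PUT /catalog/_mapping/product
{
  "properties": {
    "shelf": {
      "type": "text",
      "fielddata": true,
      "fielddata_frequency_filter": {
        "min": 0.001,
        "max": 0.1,
        "min_segment_size": 500
      }
    }
  }
}
```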

With doc values you are not using fielddata anymore.
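If you want to see where that memory actually shows up, the doc values footprint is reported under the segments stats rather than the fielddata cache, for example (the `?human` flag just formats the byte counts):

```
GET /_nodes/stats/indices/segments?human
```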

That might explain it. But what is the problem with that?

If fielddata is not used, why does it occupy space during indexing? That made us think something was not right.

We are trying to solve load and performance issues.

With 5.3, CPU utilisation goes high and we see errors and high latency. As one of the steps, we are checking the cache utilisation.
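For example, something like the following shows the fielddata, query cache and request cache counters per node (the `_cat/nodes` column names may vary slightly by version, so treat that line as a sketch):

```
GET /_nodes/stats/indices?human

GET /_cat/nodes?v&h=name,heap.percent,cpu,fielddata.memory_size,query_cache.memory_size,request_cache.memory_size
```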
