Slow increase in memory pressure

Over the past two weeks, the memory pressure on our prod cluster has been slowly ramping up higher and higher. We passed 74% last week, and we are in the 80s as I write this. We are using upserts because some documents can get updates from two places, and we have a job that updates the previous night's worth of records once each. I understand that updates cause dirty memory, but I am not sure how to combat the issue on an ongoing basis.
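For context, the nightly job sends bulk upserts shaped roughly like this (the index, type, ID, and field names here are just placeholders, not our real ones):

POST http://ESHOST:9200/_bulk
{ "update": { "_index": "records-2016.04", "_type": "record", "_id": "abc123" } }
{ "doc": { "processed": true }, "doc_as_upsert": true }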

If memory pressure is above 75%, I'd advise upgrading the cluster.

If you want input on how to optimise to reduce overall memory usage, that's a generic Elasticsearch question better suited to the Elasticsearch forum.

Moved from the cloud-elasticsearch forum.

I'm not sure how to diagnose this or how to move forward. It seems like the fielddata usage of the cluster is high (about 12 GB currently). I'm not sure if I can manually evict that memory, or if there are other ways to approach this problem.

Currently my batch size for the upserts I mentioned is 1000. I'm not sure whether changing that would help, whether I should change the fielddata config, or both. I'm also not sure how to prioritize making those changes.
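On the fielddata side, the main knob I've found so far is the cache cap in elasticsearch.yml (the 20% below is just a guess on my part, not a recommendation):

indices.fielddata.cache.size: 20%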

We can evict the fielddata memory using the REST API below:
POST http://ESHOST:9200/_cache/clear?fielddata=true

This will clear the fielddata cache, but it will gradually fill up again.
To resolve this issue permanently, you can enable doc_values for all the fields [which should be not_analyzed] on which you need to apply aggregations, sorting, etc.
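A minimal mapping sketch (the index, type, and field names are placeholders; on 2.x and later, doc_values is already the default for not_analyzed string fields):

PUT http://ESHOST:9200/records-2016.05
{
  "mappings": {
    "record": {
      "properties": {
        "source": { "type": "string", "index": "not_analyzed", "doc_values": true }
      }
    }
  }
}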
Hope this helps.

Will evicting the field data have any effect on the cluster itself, or will the operation be seamless? This is a prod system, so I want to be able to make changes without affecting accessibility to the data.

Thanks a bunch for your help!

There is no impact on the cluster's original data; only the fielddata cache will be cleared.
You can also check which field is using the most heap with: http://localhost:9200/_cat/fielddata?v&fields=*
After clearing the fielddata, subsequent aggregations might be slow, as the cache needs to be rebuilt for the required fields.
As I mentioned, if you use doc_values, you can get rid of this fielddata cache issue permanently.
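If you also want per-node totals, the node stats API gives a fielddata breakdown, for example:
curl 'http://ESHOST:9200/_nodes/stats/indices/fielddata?fields=*'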

Is the issue with fielddata exacerbated by having a lot of indices or shards? Currently we're splitting data into monthly indices with 24 shards each, and we have close to 4 years of data this way.

Someone has already mentioned to me that the number of shards seems high, so I'm curious whether this could be related.
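For reference, I've been looking at the index and shard layout with something like:
curl 'http://ESHOST:9200/_cat/indices?v'
curl 'http://ESHOST:9200/_cat/shards?v'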

Looking for a little bit more help with this. Currently the heap usage is around 90-95%. I'm not sure if there are some particular settings that are misaligned in our cluster, but any help would be appreciated.

Evicting fielddata helped for a while, but usage is back up, and I'm looking for other ways to bring down the baseline memory usage.
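In case the numbers help, I'm reading per-node heap from the cat nodes API, e.g.:
curl 'http://ESHOST:9200/_cat/nodes?v'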