How to fix heap size problems with big data (CircuitBreakingException)

I have an issue with my cluster.
I want to know how I can fix heap memory pressure when dealing with big data.
Consider that I can't increase the heap size or add more memory... What is the solution?

What exactly is causing the heap usage? How much data do you have in the cluster? How many indices and shards? How much are you indexing per day? Are you running very memory intensive queries? Which version are you using?
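To gather those numbers, the asker could use the standard cat APIs; a sketch of the relevant requests (Kibana Dev Tools syntax, selected columns are just a suggestion):

```
GET _cat/nodes?v&h=name,heap.percent,heap.max
GET _cat/indices?v&h=index,pri,rep,store.size,docs.count
```

The first request shows per-node heap usage, the second the index/shard layout and on-disk sizes.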

Thanks for your response... Imagine I have 3 indices with 3 shards. All 3 indices receive about 1 TB of traffic per day, and I always get throttled on my Elasticsearch cluster because of the size of the data. Heap memory runs out...
Is there any solution for the heap memory? I don't want to lose my entire logs, and unfortunately I can't increase the heap size or use additional memory for it.
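Since the title mentions CircuitBreakingException, it may help to see which circuit breaker is actually tripping and how close each one is to its limit. The breaker statistics are exposed through the standard node stats API; as a sketch (run in Kibana Dev Tools or via curl against your cluster):

```
GET _nodes/stats/breaker
```

The response lists each breaker (parent, fielddata, request, etc.) with its configured limit and estimated current usage, which narrows down whether indexing load or queries are the source of the pressure.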

I use Elasticsearch 7.8.1.

What does your workload look like? What is the full output of the cluster stats API?
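For reference, the cluster stats output being asked for can be retrieved with (standard endpoint, `human` and `pretty` flags optional):

```
GET _cluster/stats?human&pretty
```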

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.