Problem with loading dashboards - Data too large

I've been having a problem lately with my Elastic Stack, which is installed on Kubernetes. When I try to visualize my data in Kibana and select a longer time period (more data), this error pops up:

```
Request error: circuit_breaking_exception, [parent] Data too large, data for [indices:data/write/update[s]] would be [31381782974/29.2gb], which is larger than the limit of [30601641984/28.5gb], real usage: [31381782528/29.2gb], new bytes reserved: [446/446b], usages [inflight_requests=19944/19.4kb, request=2307688833/2.1gb, fielddata=186790714/178.1mb, eql_sequence=0/0b, model_inference=0/0b]
```

What is the best way to eliminate these errors?

  1. Add more RAM? I currently have 3 nodes with 64 GB RAM each (30 GB JVM heap).
  2. Add more nodes?
  3. Reduce the amount of data displayed on the dashboard?

Or maybe something else?
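In case it helps with diagnosis: the numbers in the error come from the parent circuit breaker, and the current per-breaker usage on each node can be inspected with the node stats API (Kibana Dev Tools syntax; this only shows the state, it is not a fix):

```
GET _nodes/stats/breaker
```

The `parent` entry reports the limit and the current estimated usage. The 28.5 GB limit in the error is 95% of the 30 GB heap, which matches the default `indices.breaker.total.limit` when real-memory accounting is enabled.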

Hi @PustyB,

Welcome back! Are you able to limit the amount of data displayed on your dashboard?

I can, but I want to keep a summary of, for example, the last two years for comparison.
The dashboard contains several views.

The data comes from indices that are written daily, one index of logs per day,
e.g. logs_2024_01_04, logs_2024_01_05.

The indices are about 80-100 MB each.
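For reference, per-index document counts and on-disk sizes like these can be listed with the cat indices API (Kibana Dev Tools syntax):

```
GET _cat/indices/logs_*?v&h=index,docs.count,store.size&s=index
```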

Should I aggregate these logs monthly or yearly instead? Would that help with loading the data?

Without knowing the specifics of your use case, I would start by:

  • Increasing the memory allocation for the Elasticsearch pods
  • Reducing the size of the request, or breaking it into smaller parts
  • Using a transform to pre-aggregate the data (you still need memory, but much less at query time)
  • Optimizing the data structure or query to use less memory
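As a rough sketch of the transform option: assuming the daily indices match `logs_*` and the documents have a `@timestamp` date field and a numeric `bytes` field (both field names are assumptions, adjust to your mapping), a pivot transform that rolls the daily logs up into monthly summaries could look like this (Kibana Dev Tools syntax):

```
PUT _transform/logs_monthly
{
  "source": { "index": "logs_*" },
  "dest":   { "index": "logs_monthly" },
  "pivot": {
    "group_by": {
      "month": {
        "date_histogram": { "field": "@timestamp", "calendar_interval": "1M" }
      }
    },
    "aggregations": {
      "doc_count": { "value_count": { "field": "@timestamp" } },
      "bytes_sum": { "sum": { "field": "bytes" } }
    }
  }
}

POST _transform/logs_monthly/_start
```

A two-year comparison dashboard can then read from `logs_monthly`, which holds one document per month instead of the raw daily data, so the queries touch far less memory.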

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.