CircuitBreakingException: [parent] Data too large in ES 7.3.2

Hi,

We recently started seeing multiple shard failures in one of our Kibana dashboards. On checking the response, we found that we were getting `CircuitBreakingException: [parent] Data too large`. The exact error for one of the failing shards is given below:

[parent] Data too large, data for [<reused_arrays>] would be [30702353496/28.5gb], which is larger than the limit of [30477372620/28.3gb], real usage: [30275107928/28.1gb], new bytes reserved: [427245568/407.4mb], usages [request=8547313888/7.9gb, fielddata=2160099334/2gb, in_flight_requests=557860/544.7kb, accounting=5491483789/5.1gb]

We thought increasing the JVM heap from 30 GB to 32 GB would help, because in all the errors we observed that roughly 0.5 GB more was needed. However, the error came back, now hitting the higher limit as well, as shown below:

[parent] Data too large, data for [<reused_arrays>] would be [32543928400/30.3gb], which is larger than the limit of [32517482086/30.2gb], real usage: [32539734096/30.3gb], new bytes reserved: [4194304/4mb], usages [request=7900326232/7.3gb, fielddata=987293390/941.5mb, in_flight_requests=124226/121.3kb, accounting=6740987241/6.2gb]
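Breaking the numbers down (a quick editorial sketch; in 7.x the parent breaker tracks real heap usage, so the "would be" figure is simply the current real usage plus the bytes the request wants to reserve, checked against a limit that defaults to 95% of the heap):

```python
# Sanity-check the figures in the two errors above. All values are
# copied verbatim from the error messages; nothing is invented.

errors = [
    # (real usage, new bytes reserved, "would be", limit)
    (30275107928, 427245568, 30702353496, 30477372620),  # first error
    (32539734096, 4194304, 32543928400, 32517482086),    # second error
]

for real, reserved, would_be, limit in errors:
    # "would be" = real heap usage + newly reserved bytes ...
    assert real + reserved == would_be
    # ... and the breaker trips because that sum exceeds the parent limit
    assert would_be > limit
    print(f"over the parent limit by {(would_be - limit) / 2**20:.1f} MiB")
```

Note that in both cases the heap is almost entirely full before the request even arrives, which is why raising the heap by 2 GB only moved the point of failure rather than removing it.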

Could someone advise what needs to be done to resolve this?

Welcome to our community! :smiley:

Please note that 7.3.2 is EOL and no longer supported; you should be looking to upgrade as a matter of urgency.

What is the output from the _cluster/stats?pretty&human API?

Hi @warkolm thanks for the quick response. Here is the requested output:

{
    "_nodes" : {
      "total" : 59,
      "successful" : 59,
      "failed" : 0
    },
    "cluster_name" : "<cluster_name>",
    "cluster_uuid" : "<cluster_uuid>",
    "timestamp" : 1680079777358,
    "status" : "yellow",
    "indices" : {
      "count" : 3883,
      "shards" : {
        "total" : 24093,
        "primaries" : 9438,
        "replication" : 1.5527654164017801,
        "index" : {
          "shards" : {
            "min" : 2,
            "max" : 37,
            "avg" : 6.204738604172032
          },
          "primaries" : {
            "min" : 1,
            "max" : 12,
            "avg" : 2.4305949008498584
          },
          "replication" : {
            "min" : 1.0,
            "max" : 36.0,
            "avg" : 1.4102498068503735
          }
        }
      },
      "docs" : {
        "count" : 154037443196,
        "deleted" : 856066455
      },
      "store" : {
        "size_in_bytes" : 295660730571367
      },
      "fielddata" : {
        "memory_size_in_bytes" : 91402391736,
        "evictions" : 0
      },
      "query_cache" : {
        "memory_size_in_bytes" : 121875992605,
        "total_count" : 5933471640,
        "hit_count" : 23528444,
        "miss_count" : 5909943196,
        "cache_size" : 2548551,
        "cache_count" : 2930225,
        "evictions" : 381674
      },
      "completion" : {
        "size_in_bytes" : 0
      },
      "segments" : {
        "count" : 527344,
        "memory_in_bytes" : 242205415032,
        "terms_memory_in_bytes" : 111208514487,
        "stored_fields_memory_in_bytes" : 99294829104,
        "term_vectors_memory_in_bytes" : 0,
        "norms_memory_in_bytes" : 3026575488,
        "points_memory_in_bytes" : 21975152977,
        "doc_values_memory_in_bytes" : 6700342976,
        "index_writer_memory_in_bytes" : 10348377068,
        "version_map_memory_in_bytes" : 26650381,
        "fixed_bit_set_memory_in_bytes" : 81808,
        "max_unsafe_auto_id_timestamp" : 1680069725230,
        "file_sizes" : { }
      }
    },
    "nodes" : {
      "count" : {
        "total" : 59,
        "coordinating_only" : 16,
        "data" : 38,
        "ingest" : 0,
        "master" : 5
      },
      "versions" : [
        "7.3.2"
      ],
      "os" : {
        "available_processors" : 1128,
        "allocated_processors" : 1128,
        "names" : [
          {
            "name" : "Linux",
            "count" : 59
          }
        ],
        "pretty_names" : [
          {
            "pretty_name" : "CentOS Linux 7 (Core)",
            "count" : 59
          }
        ],
        "mem" : {
          "total_in_bytes" : 7572909932544,
          "free_in_bytes" : 393274753024,
          "used_in_bytes" : 7179635179520,
          "free_percent" : 5,
          "used_percent" : 95
        }
      },
      "process" : {
        "cpu" : {
          "percent" : 839
        },
        "open_file_descriptors" : {
          "min" : 1725,
          "max" : 19385,
          "avg" : 11338
        }
      },
      "jvm" : {
        "max_uptime_in_millis" : 4650638391,
        "versions" : [
          {
            "version" : "11.0.14",
            "vm_name" : "OpenJDK 64-Bit Server VM",
            "vm_version" : "11.0.14+9-LTS",
            "vm_vendor" : "Red Hat, Inc.",
            "bundled_jdk" : true,
            "using_bundled_jdk" : false,
            "count" : 2
          },
          {
            "version" : "11.0.1",
            "vm_name" : "OpenJDK 64-Bit Server VM",
            "vm_version" : "11.0.1+13",
            "vm_vendor" : "Oracle Corporation",
            "bundled_jdk" : true,
            "using_bundled_jdk" : false,
            "count" : 57
          }
        ],
        "mem" : {
          "heap_used_in_bytes" : 965504897216,
          "heap_max_in_bytes" : 1435644002304
        },
        "threads" : 11075
      },
      "fs" : {
        "total_in_bytes" : 550212250632192,
        "free_in_bytes" : 253707456679936,
        "available_in_bytes" : 253707456679936
      },
      "plugins" : [
        {
          "name" : "opendistro_alerting",
          "version" : "1.3.0.1",
          "elasticsearch_version" : "7.3.2",
          "java_version" : "1.8",
          "description" : "Amazon OpenDistro alerting plugin",
          "classname" : "com.amazon.opendistroforelasticsearch.alerting.AlertingPlugin",
          "extended_plugins" : [
            "lang-painless"
          ],
          "has_native_controller" : false
        },
        {
          "name" : "opendistro_performance_analyzer",
          "version" : "1.3.0.0",
          "elasticsearch_version" : "7.3.2",
          "java_version" : "1.8",
          "description" : "Performance Analyzer Plugin",
          "classname" : "com.amazon.opendistro.elasticsearch.performanceanalyzer.PerformanceAnalyzerPlugin",
          "extended_plugins" : [ ],
          "has_native_controller" : false
        },
        {
          "name" : "opendistro_security",
          "version" : "1.3.0.0",
          "elasticsearch_version" : "7.3.2",
          "java_version" : "1.8",
          "description" : "Provide access control related features for Elasticsearch 7",
          "classname" : "com.amazon.opendistroforelasticsearch.security.OpenDistroSecurityPlugin",
          "extended_plugins" : [ ],
          "has_native_controller" : false
        },
        {
          "name" : "opendistro-job-scheduler",
          "version" : "1.3.0.0",
          "elasticsearch_version" : "7.3.2",
          "java_version" : "1.8",
          "description" : "Open Distro for Elasticsearch job schduler plugin",
          "classname" : "com.amazon.opendistroforelasticsearch.jobscheduler.JobSchedulerPlugin",
          "extended_plugins" : [ ],
          "has_native_controller" : false
        },
        {
          "name" : "opendistro_sql",
          "version" : "1.3.0.0",
          "elasticsearch_version" : "7.3.2",
          "java_version" : "1.8",
          "description" : "Open Distro for Elasticsearch SQL",
          "classname" : "com.amazon.opendistroforelasticsearch.sql.plugin.SqlPlug",
          "extended_plugins" : [ ],
          "has_native_controller" : false
        },
        {
          "name" : "opendistro_index_management",
          "version" : "1.3.0.1",
          "elasticsearch_version" : "7.3.2",
          "java_version" : "1.8",
          "description" : "Open Distro Index State Management Plugin",
          "classname" : "com.amazon.opendistroforelasticsearch.indexstatemanagement.IndexStateManagementPlugin",
          "extended_plugins" : [
            "opendistro-job-scheduler"
          ],
          "has_native_controller" : false
        }
      ],
      "network_types" : {
        "transport_types" : {
          "com.amazon.opendistroforelasticsearch.security.ssl.http.netty.OpenDistroSecuritySSLNettyTransport" : 59
        },
        "http_types" : {
          "com.amazon.opendistroforelasticsearch.security.http.OpenDistroSecurityHttpServerTransport" : 59
        }
      },
      "discovery_types" : {
        "zen" : 59
      },
      "packaging_types" : [
        {
          "flavor" : "oss",
          "type" : "tar",
          "count" : 59
        }
      ]
    }
  }
  

OpenSearch/OpenDistro are AWS run products and differ from the original Elasticsearch and Kibana products that Elastic builds and maintains. You may need to contact them directly for further assistance.

(This is an automated response from your friendly Elastic bot. Please report this post if you have any suggestions or concerns :elasticheart: )

You have a good amount of data per node, so that will drive heap usage. If you are using time-based indices, I would recommend force-merging old indices that are no longer written to down to a single segment, as that can reduce heap usage.
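A force merge can be issued per index; this is a sketch, with `localhost:9200` and the index name as placeholders for your own endpoint and indices. Only run it against indices that are no longer being written to:

```shell
# Force-merge a read-only, time-based index down to a single segment.
# "logs-2023.02" and localhost:9200 are placeholders.
curl -X POST "localhost:9200/logs-2023.02/_forcemerge?max_num_segments=1"

# Afterwards, segment count and per-segment memory can be inspected:
curl "localhost:9200/_cat/segments/logs-2023.02?v&h=index,segment,size,size.memory"
```

Force merge is expensive while it runs, so it is best scheduled off-peak and issued one index at a time.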

You also have a number of third-party plugins that will affect heap usage. These are not supported here, so I would recommend seeking guidance about them in the OpenDistro/OpenSearch forums.

Increasing the heap from 30 GB to 32 GB will cause problems and result in less usable heap if you exceed the threshold for compressed object pointers, which is a bit below 32 GB. If you increase the heap, ensure compressed pointers are still being used.
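One way to check (a sketch, assuming the same JDK that runs Elasticsearch is on the PATH; the 31g figure is illustrative):

```shell
# Ask the JVM whether it would still use compressed ordinary object
# pointers (oops) at a given heap size. Below roughly 32 GB this
# reports UseCompressedOops = true; above it, false.
java -Xmx31g -XX:+PrintFlagsFinal -version 2>/dev/null | grep -i UseCompressedOops
```

Elasticsearch also reports this at startup in its logs, in a line of the form `heap size [...], compressed ordinary object pointers [true]`, so checking the startup log of a node is an alternative to running the JVM flag query.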

1 Like

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.