Kibana - HTTP 500 error - circuit_breaking_exception

Hello,

I hope you and your loved ones are safe and healthy.

I have Kibana and Elasticsearch installed on the same VM. After successfully logging in to the Kibana instance with a root-privileged account, I get the following error:

    {"statusCode":500,"error":"Internal Server Error","message":"[parent] Data too large, data for [<http_request>] would be [1068854558/1019.3mb], which is larger than the limit of [1020054732/972.7mb], real usage: [1068853976/1019.3mb], new bytes reserved: [582/582b], usages [request=64/64b, fielddata=62236/60.7kb, in_flight_requests=582/582b, model_inference=0/0b, accounting=17229426/16.4mb]: [circuit_breaking_exception] [parent] Data too large, data for [<http_request>] would be [1068854558/1019.3mb], which is larger than the limit of [1020054732/972.7mb], real usage: [1068853976/1019.3mb], new bytes reserved: [582/582b], usages [request=64/64b, fielddata=62236/60.7kb, in_flight_requests=582/582b, model_inference=0/0b, accounting=17229426/16.4mb], with { bytes_wanted=1068854558 & bytes_limit=1020054732 & durability=\"PERMANENT\" }"}

The status of my cluster is green:

[screenshot: cluster health showing green]

I have tried restarting the Kibana service, but the problem persists. What can I do to resolve it? Is this a memory issue, i.e. JVM heap?
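
From what I can tell, the 972.7mb limit in the error is 95% of a roughly 1 GB heap, which matches the default indices.breaker.total.limit of 95%, so it looks like overall JVM heap pressure. A quick check (just a sketch, assuming Elasticsearch listens on localhost:9200 without security enabled; add credentials/CA options otherwise):

    # Heap usage per node (heap.percent near 95% means the parent breaker will trip)
    curl -s 'localhost:9200/_cat/nodes?v&h=name,heap.percent,heap.current,heap.max'
    # Per-breaker usage and trip counts
    curl -s 'localhost:9200/_nodes/stats/breaker?pretty'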

Here are my current settings:

    "defaults": {
        "indices.analysis.hunspell.dictionary.ignore_case": "false",
        "indices.analysis.hunspell.dictionary.lazy": "false",
        "indices.breaker.accounting.limit": "100%",
        "indices.breaker.accounting.overhead": "1.0",
        "indices.breaker.fielddata.limit": "40%",
        "indices.breaker.fielddata.overhead": "1.03",
        "indices.breaker.fielddata.type": "memory",
        "indices.breaker.request.limit": "60%",
        "indices.breaker.request.overhead": "1.0",
        "indices.breaker.request.type": "memory",
        "indices.breaker.total.limit": "95%",
        "indices.breaker.total.use_real_memory": "true",
        "indices.breaker.type": "hierarchy",
        "indices.cache.cleanup_interval": "1m",
        "indices.fielddata.cache.size": "-1b",
        "indices.id_field_data.enabled": "true",
        "indices.lifecycle.history_index_enabled": "true",
        "indices.lifecycle.poll_interval": "10m",
        "indices.lifecycle.step.master_timeout": "30s",
        "indices.mapping.dynamic_timeout": "30s",
        "indices.mapping.max_in_flight_updates": "10",
        "indices.memory.index_buffer_size": "10%",
        "indices.memory.interval": "5s",
        "indices.memory.max_index_buffer_size": "-1",
        "indices.memory.min_index_buffer_size": "48mb",
        "indices.memory.shard_inactive_time": "5m",
        "indices.queries.cache.all_segments": "false",
        "indices.queries.cache.count": "10000",
        "indices.queries.cache.size": "10%",
        "indices.query.bool.max_clause_count": "1024",
        "indices.query.query_string.allowLeadingWildcard": "true",
        "indices.query.query_string.analyze_wildcard": "false",
        "indices.recovery.internal_action_long_timeout": "1800000ms",
        "indices.recovery.internal_action_timeout": "15m",
        "indices.recovery.max_bytes_per_sec": "40mb",
        "indices.recovery.max_concurrent_file_chunks": "2",
        "indices.recovery.max_concurrent_operations": "1",
        "indices.recovery.recovery_activity_timeout": "1800000ms",
        "indices.recovery.retry_delay_network": "5s",
        "indices.recovery.retry_delay_state_sync": "500ms",
        "indices.replication.initial_retry_backoff_bound": "50ms",
        "indices.replication.retry_timeout": "60s",
        "indices.requests.cache.expire": "0ms",
        "indices.requests.cache.size": "1%",
        "indices.store.delete.shard.timeout": "30s"

There are a few Discuss posts around this. Please refer to:

Hard to know... Maybe increase the heap allocated to Elasticsearch? Bumping your heap size may help.
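
For example, a sketch of raising the heap on a deb/rpm package install of Elasticsearch 7.x (the heap.options file name is just an example; pick a size that is at most about half of the VM's RAM, since Kibana runs on the same machine):

    # Create a custom JVM options file that pins min and max heap to the same value (here 2 GB)
    sudo tee /etc/elasticsearch/jvm.options.d/heap.options <<'EOF'
    -Xms2g
    -Xmx2g
    EOF
    # Restart Elasticsearch so the new heap size takes effect
    sudo systemctl restart elasticsearch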

Thanks,
Rashmi

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.