CircuitBreakingException when using ingest nodes

Hi,

After my cluster has been running for a while, it starts hitting a circuit breaker.

In the log file:
CircuitBreakingException[[parent] Data too large, data for [<transport_request>] would be larger than limit of [2230452224/2gb]]; ]

When using curl:
CircuitBreakingException[[parent] Data too large, data for [<http_request>] would be larger than limit of [2230452224/2gb]]; ]

I cannot get any data using curl; every request just returns that "data too large" message.

The cluster has 5 nodes:
2 master + ingest
1 master + data
2 data

I am using the ingest nodes to index documents.
Each node has 8GB RAM, with a 4GB JVM heap for Elasticsearch on each.

How can I solve or prevent this?
Only a full cluster restart has helped me so far.

Thanks,

Ori

But what exactly are you doing when you get this message? A bulk request? Of what size?

Filebeat is sending the log data to the ingest nodes in Elasticsearch.
I am not sure about the size; it just happens after a while...

Ori

What is your pipeline? What are your settings? Heap size?
What are your Filebeat settings, if any?

The pipelines are mainly grok processors to extract fields, plus some set and remove processors.
The event timestamp is used for the target index name.
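Roughly like this (field names and patterns here are simplified for illustration, not my actual pipeline):

```json
PUT _ingest/pipeline/my-logs-pipeline
{
  "description": "illustrative sketch: grok + set/remove + timestamp-based index name",
  "processors": [
    { "grok":   { "field": "message",
                  "patterns": ["%{TIMESTAMP_ISO8601:ts} %{LOGLEVEL:level} %{GREEDYDATA:msg}"] } },
    { "set":    { "field": "source_type", "value": "app" } },
    { "remove": { "field": "message" } },
    { "date_index_name": { "field": "ts",
                           "index_name_prefix": "my-logs-",
                           "date_rounding": "d" } }
  ]
}
```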

Heap size is 4GB; total RAM on each server is 8GB.
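(Heap is set the usual way in jvm.options, with min and max equal:)

```
-Xms4g
-Xmx4g
```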

General Filebeat settings:

output:
  elasticsearch:
    hosts: ["server1:9200", "server2:9200"]
    index: "my-logs"
    bulk_max_size: 10000
    flush_interval: 60
    parameters.pipeline: "my-logs-pipeline"

Some of the logs are multiline and some are single line.
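For the multiline ones, the relevant prospector settings look roughly like this (paths and pattern are just an example, not my real config):

```yaml
filebeat.prospectors:
  - paths: ["/var/log/myapp/*.log"]
    # lines not starting with a date get appended to the previous event
    multiline.pattern: '^\d{4}-\d{2}-\d{2}'
    multiline.negate: true
    multiline.match: after
```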

Ori

What is the meaning of these circuit breakers?

Thanks,

Ori

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.