{"statusCode":500,"error":"Internal Server Error","message":"[parent] Data too large, data for [indices:data/read/get[s]] would be [32115499902/29.9gb], which is larger than the limit of [31621696716/29.4gb], real usage: [32115499744/29.9gb], new bytes reserved: [158/158b], usages [request=16784/16.3kb, fielddata=10094/9.8kb, in_flight_requests=26202187842/24.4gb, accounting=2264280/2.1mb]: [circuit_breaking_exception] [parent] Data too large, data for [indices:data/read/get[s]] would be [32115499902/29.9gb], which is larger than the limit of [31621696716/29.4gb], real usage: [32115499744/29.9gb], new bytes reserved: [158/158b], usages [request=16784/16.3kb, fielddata=10094/9.8kb, in_flight_requests=26202187842/24.4gb, accounting=2264280/2.1mb], with { bytes_wanted=32115499902 & bytes_limit=31621696716 & durability=\"TRANSIENT\" }"}
Hello everyone, I have been stuck on this error for weeks. After migrating about 85% of our users to the new microservices app, we started seeing huge in_flight_requests usage and circuit_breaking_exception errors.

I am running Elastic Cloud on Kubernetes (ECK) with Elasticsearch 7.8.1. I had customized the thread pool and node-level indices settings, but that didn't help (those values are now commented out in the config below), so I reverted everything to the defaults except the JVM heap. The heap is set to roughly 50% of container memory (hot: 31gb, warm: 31gb, cold: 16gb), but the issue persists. We have implemented a hot-warm-cold architecture with index lifecycle management, and that didn't resolve it either. Our application logs alone are about 350gb per day including replicas (not counting APM, Metricbeat, Filebeat, etc.), and the ILM delete phase is set to 30 days.

Why is my in_flight_requests usage so huge (24.4gb) that it trips the parent circuit breaker? Is the only fix to scale the data tier horizontally by adding more nodes, given that the JVM heap can't go beyond ~32gb as the Elastic docs recommend? Please help me resolve this.

By the way, the Go microservices send their logs directly to Elasticsearch with the Go esapi client, without going through Logstash, because they run in a different Kubernetes cluster from Elasticsearch (a rough sketch of that indexing path is at the end of this post, after the manifest).

Here is the ECK manifest:
```yaml
kind: Elasticsearch
metadata:
  name: eck
  namespace: eck
spec:
  nodeSets:
  - config:
      node.data: false
      node.ingest: false
      node.master: true
      node.ml: false
      xpack.monitoring.collection.enabled: true
      #node.processors: 2
    count: 3
    name: master
    podTemplate:
      metadata: {}
      spec:
        containers:
        - name: elasticsearch
          resources:
            limits:
              memory: 16000Mi
              cpu: 2000m
            requests:
              memory: 15000Mi
              cpu: 1500m
          env:
          - name: ES_JAVA_OPTS
            value: "-Xms8g -Xmx8g"
        initContainers:
        - name: sysctl
          command:
          - sh
          - "-c"
          - |
            sysctl -w vm.max_map_count=262144
            bin/elasticsearch-plugin remove repository-s3
            bin/elasticsearch-plugin install --batch repository-s3
            echo $AWS_ACCESS_KEY_ID | bin/elasticsearch-keystore add --stdin --force s3.client.default.access_key
            echo $AWS_SECRET_ACCESS_KEY | bin/elasticsearch-keystore add --stdin --force s3.client.default.secret_key
          env:
          - name: AWS_ACCESS_KEY_ID
            valueFrom:
              secretKeyRef:
                key: access-key
                name: axisnet-s3-keys
          - name: AWS_SECRET_ACCESS_KEY
            valueFrom:
              secretKeyRef:
                key: secret-key
                name: axisnet-s3-keys
          securityContext:
            privileged: true
        nodeSelector:
          target: master-node
    volumeClaimTemplates:
    - metadata:
        name: elasticsearch-data
      spec:
        accessModes:
        - ReadWriteOnce
        resources:
          requests:
            storage: 10Gi
        storageClassName: gp2
  - config:
      node.data: true
      node.ingest: false
      node.master: false
      node.ml: true
      xpack.monitoring.collection.enabled: true
      node.attr.data: hot
      thread_pool.snapshot.max: 4
      # thread_pool.write.size: 9
      # thread_pool.write.queue_size: 2000
      # thread_pool.search.size: 13
      # thread_pool.search.queue_size: 4000
      # indices.memory.index_buffer_size: 30%
      # indices.queries.cache.size: 20%
      # indices.requests.cache.size: 4%
      # indices.breaker.total.use_real_memory: true
      # indices.breaker.total.limit: 95%
      # indices.fielddata.cache.size: 30%
      # indices.requests.cache.expire: 1h
      # network.breaker.inflight_requests.limit: 50%
      # node.processors: 8
    count: 3
    name: data-hot
    podTemplate:
      spec:
        containers:
        - name: elasticsearch
          resources:
            limits:
              memory: 64000Mi
              cpu: 8000m
            requests:
              memory: 62000Mi
              cpu: 7000m
          env:
          - name: ES_JAVA_OPTS
            value: "-Xms31g -Xmx31g"
        initContainers:
        - name: sysctl
          command:
          - sh
          - "-c"
          - |
            sysctl -w vm.max_map_count=262144
            swapoff -a
            bin/elasticsearch-plugin remove repository-s3
            bin/elasticsearch-plugin install --batch repository-s3
            echo $AWS_ACCESS_KEY_ID | bin/elasticsearch-keystore add --stdin --force s3.client.default.access_key
            echo $AWS_SECRET_ACCESS_KEY | bin/elasticsearch-keystore add --stdin --force s3.client.default.secret_key
          env:
          - name: AWS_ACCESS_KEY_ID
            valueFrom:
              secretKeyRef:
                key: access-key
                name: axisnet-s3-keys
          - name: AWS_SECRET_ACCESS_KEY
            valueFrom:
              secretKeyRef:
                key: secret-key
                name: axisnet-s3-keys
          securityContext:
            privileged: true
        nodeSelector:
          target: data-hot
    volumeClaimTemplates:
    - metadata:
        name: elasticsearch-data
      spec:
        accessModes:
        - ReadWriteOnce
        resources:
          requests:
            storage: 500Gi
        storageClassName: gp2
  - config:
      node.data: true
      node.ingest: true
      node.master: false
      node.ml: true
      xpack.monitoring.collection.enabled: true
      node.attr.data: warm
      thread_pool.snapshot.max: 4
      # thread_pool.write.size: 9
      # thread_pool.write.queue_size: 2000
      # thread_pool.search.size: 13
      # thread_pool.search.queue_size: 4000
      # indices.queries.cache.size: 20%
      # indices.requests.cache.size: 4%
      # indices.memory.index_buffer_size: 30%
      # indices.breaker.total.use_real_memory: true
      # indices.breaker.total.limit: 95%
      # indices.fielddata.cache.size: 30%
      # indices.requests.cache.expire: 1h
      # network.breaker.inflight_requests.limit: 50%
      # node.processors: 8
    count: 3
    name: data-warm
    podTemplate:
      metadata: {}
      spec:
        containers:
        - name: elasticsearch
          resources:
            limits:
              memory: 64000Mi
              cpu: 8000m
            requests:
              memory: 62000Mi
              cpu: 7000m
          env:
          - name: ES_JAVA_OPTS
            value: "-Xms31g -Xmx31g"
        initContainers:
        - name: sysctl
          command:
          - sh
          - "-c"
          - |
            sysctl -w vm.max_map_count=262144
            swapoff -a
            bin/elasticsearch-plugin remove repository-s3
            bin/elasticsearch-plugin install --batch repository-s3
            echo $AWS_ACCESS_KEY_ID | bin/elasticsearch-keystore add --stdin --force s3.client.default.access_key
            echo $AWS_SECRET_ACCESS_KEY | bin/elasticsearch-keystore add --stdin --force s3.client.default.secret_key
          env:
          - name: AWS_ACCESS_KEY_ID
            valueFrom:
              secretKeyRef:
                key: access-key
                name: axisnet-s3-keys
          - name: AWS_SECRET_ACCESS_KEY
            valueFrom:
              secretKeyRef:
                key: secret-key
                name: axisnet-s3-keys
          securityContext:
            privileged: true
        nodeSelector:
          target: data-warm
    volumeClaimTemplates:
    - metadata:
        name: elasticsearch-data
      spec:
        accessModes:
        - ReadWriteOnce
        resources:
          requests:
            storage: 1500Gi
        storageClassName: gp2
  - config:
      node.data: true
      node.ingest: true
      node.master: false
      node.ml: false
      xpack.monitoring.collection.enabled: true
      node.attr.data: cold
      # thread_pool.snapshot.max: 2
      # thread_pool.write.size: 5
      # thread_pool.write.queue_size: 2000
      # thread_pool.search.size: 7
      # thread_pool.search.queue_size: 4000
      # indices.queries.cache.size: 20%
      # indices.requests.cache.size: 4%
      # indices.memory.index_buffer_size: 20%
      # indices.breaker.total.use_real_memory: true
      # indices.breaker.total.limit: 95%
      # indices.fielddata.cache.size: 20%
      # indices.requests.cache.expire: 1h
      # network.breaker.inflight_requests.limit: 50%
      # node.processors: 4
    count: 3
    name: data-cold
    podTemplate:
      metadata: {}
      spec:
        containers:
        - name: elasticsearch
          resources:
            limits:
              memory: 32000Mi
              cpu: 4000m
            requests:
              memory: 30000Mi
              cpu: 3000m
          env:
          - name: ES_JAVA_OPTS
            value: "-Xms16g -Xmx16g"
        initContainers:
        - name: sysctl
          command:
          - sh
          - "-c"
          - |
            sysctl -w vm.max_map_count=262144
            swapoff -a
            bin/elasticsearch-plugin remove repository-s3
            bin/elasticsearch-plugin install --batch repository-s3
            echo $AWS_ACCESS_KEY_ID | bin/elasticsearch-keystore add --stdin --force s3.client.default.access_key
            echo $AWS_SECRET_ACCESS_KEY | bin/elasticsearch-keystore add --stdin --force s3.client.default.secret_key
          env:
          - name: AWS_ACCESS_KEY_ID
            valueFrom:
              secretKeyRef:
                key: access-key
                name: axisnet-s3-keys
          - name: AWS_SECRET_ACCESS_KEY
            valueFrom:
              secretKeyRef:
                key: secret-key
                name: axisnet-s3-keys
          securityContext:
            privileged: true
        nodeSelector:
          target: data-cold
    volumeClaimTemplates:
    - metadata:
        name: elasticsearch-data
      spec:
        accessModes:
        - ReadWriteOnce
        resources:
          requests:
            storage: 5000Gi
        storageClassName: st1
  version: 7.8.1
```
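And for completeness, this is roughly how each Go service pushes a log document today with the esapi client. This is a simplified sketch: the endpoint, index name, and document fields are placeholders, and in the real services the payload comes from our logging middleware.

```go
package main

import (
	"bytes"
	"context"
	"encoding/json"
	"log"
	"time"

	"github.com/elastic/go-elasticsearch/v7"
	"github.com/elastic/go-elasticsearch/v7/esapi"
)

func main() {
	// Placeholder address; the real one is the Elasticsearch HTTP endpoint
	// reachable from the other Kubernetes cluster.
	es, err := elasticsearch.NewClient(elasticsearch.Config{
		Addresses: []string{"https://elasticsearch.example.com:9200"},
	})
	if err != nil {
		log.Fatalf("error creating client: %s", err)
	}

	// Placeholder log document; real fields come from the service's logger.
	doc := map[string]interface{}{
		"@timestamp": time.Now().UTC().Format(time.RFC3339),
		"service":    "user-service",
		"level":      "info",
		"message":    "user migrated",
	}
	body, err := json.Marshal(doc)
	if err != nil {
		log.Fatalf("error marshaling document: %s", err)
	}

	// One IndexRequest per log entry, sent straight to Elasticsearch
	// (no Logstash or other buffer in between).
	req := esapi.IndexRequest{
		Index: "app-logs-write", // placeholder write alias
		Body:  bytes.NewReader(body),
	}
	res, err := req.Do(context.Background(), es)
	if err != nil {
		log.Fatalf("error indexing document: %s", err)
	}
	defer res.Body.Close()

	if res.IsError() {
		log.Printf("index request failed: %s", res.String())
	}
}
```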