Kibana version: 7.5.2
Elasticsearch version: 7.5.2
APM Server version: 7.6.0
APM Agent language and version: Java 1.12.0
Browser version:
Original install method (e.g. download page, yum, deb, from source, etc.) and version: Kubernetes (Elastic operator and the elastic-apm Helm chart)
We are outputting directly to Elasticsearch.
Description of the problem including expected versus actual behavior. Please include screenshots (if relevant):
The APM Server pod's network output traffic (700 Mbit/s) is much higher than its input traffic (50 Mbit/s).
How is it possible for the queue to fill up when the server is sending out more network data than it is receiving? In other words, we are sending more data than we are getting in, yet the queue still fills.
The max queue size on the Java agents is set to around 5120, and we have around 300 instances connecting to our APM Server (a sketch of the agent-side configuration is shown below).
There are no rejections in the write queue on the Elasticsearch side.
We are indexing an average of 35,000 events per second on the Elasticsearch side.
We have 1 APM Server pod and 2 Elasticsearch pods (resources are listed below).
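For reference, here is a minimal sketch of the agent-side settings described above (an elasticapm.properties file; max_queue_size and server_urls are standard Java agent options, but the service name and URL here are placeholders rather than our actual values):

# elasticapm.properties (Java agent side), placeholder values
service_name=example-service
server_urls=http://apm-server.elastic-system.svc.cluster.local:8200
# queue size referenced above; the agent default is 512
max_queue_size=5120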
Here's the relevant apm-server config:
setup.template.settings:
  index.number_of_shards: 4
  index.number_of_routing_shards: 28
queue:
  mem:
    events: 5000000
    flush.min_events: 0
    flush.timeout: 1s
output.elasticsearch:
  hosts: ["elasticsearch-es-http.elastic-system.svc.cluster.local:9200"]
  worker: 30
  bulk_max_size: 20000
Resources for services:
apmServerResources (1 pod):
  limits:
    cpu: "15"
    memory: 120Gi
  requests:
    cpu: "15"
    memory: 120Gi
elasticSearchResources (2 pods):
  limits:
    cpu: "15"
    memory: 120Gi
  requests:
    cpu: "15"
    memory: 120Gi
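For context, here is a rough sketch of where the Elasticsearch resources above would be declared under the operator (an ECK Elasticsearch manifest; the nodeSet name is a placeholder and our actual manifest may differ in other details):

apiVersion: elasticsearch.k8s.elastic.co/v1
kind: Elasticsearch
metadata:
  name: elasticsearch
  namespace: elastic-system
spec:
  version: 7.5.2
  nodeSets:
  - name: default
    count: 2
    podTemplate:
      spec:
        containers:
        - name: elasticsearch
          resources:
            limits:
              cpu: "15"
              memory: 120Gi
            requests:
              cpu: "15"
              memory: 120Gi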
Provide logs and/or server output (if relevant):
"response_code": 503, "error": "queue is full"