I have set up APM Server alongside the ELK stack. The APM Server has the following configuration:
apm-server.yml
apm-server:
  host: "0.0.0.0:8200"
  frontend:
    enabled: false

setup.template.settings:
  index:
    number_of_shards: 1
    codec: best_compression

setup.dashboards.enabled: false

setup.kibana:
  host: "kibana:5601"

output.elasticsearch:
  hosts: ["elasticsearch:9200"]
  indices:
    - index: "apm-%{[beat.version]}-sourcemap"
      when.contains:
        processor.event: "sourcemap"
    - index: "apm-%{[beat.version]}-error-%{+yyyy.MM.dd}"
      when.contains:
        processor.event: "error"
    - index: "apm-%{[beat.version]}-transaction-%{+yyyy.MM.dd}"
      when.contains:
        processor.event: "transaction"
    - index: "apm-%{[beat.version]}-span-%{+yyyy.MM.dd}"
      when.contains:
        processor.event: "span"
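One way to observe the internal queue is to enable the local HTTP stats endpoint that APM Server inherits from libbeat. A minimal sketch of the extra config, assuming the Beats-style `http.*` settings apply to this 7.x release (default port 5066):

```yaml
# Hedged sketch: expose libbeat's local monitoring endpoint so queue
# metrics can be scraped from /stats (apm-server is built on libbeat).
http:
  enabled: true
  host: "0.0.0.0"   # bind beyond localhost so a sidecar/exporter can reach it
  port: 5066
```

With this in place, `GET http://<pod-ip>:5066/stats` returns the internal pipeline counters that a metrics exporter can watch.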
The deployment manifest is as follows:
Deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  name: apm-server
  namespace: kube-logging
  labels:
    app: apm-server
spec:
  replicas: 2
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
  selector:
    matchLabels:
      app: apm-server
  template:
    metadata:
      labels:
        app: apm-server
    spec:
      containers:
      - name: apm-server
        image: docker.elastic.co/apm/apm-server:7.11.0
        ports:
        - containerPort: 8200
          name: apm-port
        volumeMounts:
        - name: apm-server-config
          mountPath: /usr/share/apm-server/apm-server.yml
          readOnly: true
          subPath: apm-server.yml
      volumes:
      - name: apm-server-config
        configMap:
          name: apm-server-config
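For the autoscaling side, a plain CPU-based HPA is a reasonable starting point while the queue metric is being worked out; scaling directly on queue pressure would additionally require exporting the metric through a custom-metrics adapter such as prometheus-adapter. A sketch against the Deployment above, assuming `autoscaling/v2beta2` is available on the cluster:

```yaml
apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
  name: apm-server
  namespace: kube-logging
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: apm-server
  minReplicas: 2
  maxReplicas: 6        # assumed ceiling; size to your ingest peak
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 75
```

Note that CPU-utilization targets only work if the container declares `resources.requests.cpu`, which the Deployment above currently does not.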
The application is pushing 2K TPU to the APM Server, which is causing "internal queue full" errors on the application side. I want to set up Horizontal Pod Autoscaling (HPA) for the APM Server in Kubernetes, and for that I need to understand how I can detect when the internal queue is full.
Any help is appreciated.
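To turn the stats endpoint into a queue-pressure signal, the `/stats` JSON can be polled and a fill ratio computed. A minimal Python sketch; the payload shape (`libbeat.pipeline.events.active`) and the default capacity of 4096 (`queue.mem.events`) are assumptions for this 7.x setup, and the inline sample stands in for a live HTTP call:

```python
import json
# In a live cluster you would fetch the payload instead, e.g.:
#   from urllib.request import urlopen
#   stats = json.load(urlopen("http://apm-server:5066/stats"))

# Hypothetical sample of the libbeat /stats payload; exact field names
# are an assumption for 7.x.
SAMPLE_STATS = json.loads("""
{
  "libbeat": {
    "pipeline": {
      "events": {"active": 3900, "total": 125000, "dropped": 0}
    }
  }
}
""")

QUEUE_CAPACITY = 4096  # assumed default queue.mem.events


def queue_fill_ratio(stats: dict, capacity: int = QUEUE_CAPACITY) -> float:
    """Fraction of the internal queue currently occupied (0.0 to 1.0)."""
    active = stats["libbeat"]["pipeline"]["events"]["active"]
    return active / capacity


ratio = queue_fill_ratio(SAMPLE_STATS)
print(f"queue fill: {ratio:.2%}")  # values near 100% mean events are about to be rejected
```

A sidecar running this logic could publish the ratio to Prometheus, which a custom-metrics HPA could then scale on.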
Kibana version: docker.elastic.co/kibana/kibana:7.10.1
Elasticsearch version: docker.elastic.co/elasticsearch/elasticsearch:7.10.1
APM Server version: docker.elastic.co/apm/apm-server:7.11.0
APM Agent language and version: Python, ^5.10.1
Original install method (e.g. download page, yum, deb, from source, etc.) and version: Kubernetes