Hello Elastic,
I've seen several topics on this board reporting that certain Metricbeat releases had memory leaks and other issues.
I'm currently preparing evaluations for a research project that also involves performance measurements with Elastic. I noticed that with Metricbeat releases 7.13 and 8.0.1, significant CPU usage spikes occur roughly every 10 seconds, as depicted below.
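For reference, such spikes can be reproduced with a plain per-second sampling loop over /proc. The sketch below is illustrative only (it assumes it runs on the node with access to /proc, that the metricbeat PID is passed as an argument, and the usual USER_HZ of 100); it is not my exact measurement setup:

#!/usr/bin/env python3
"""Per-second CPU sampling of a single process (illustrative sketch)."""
import sys
import time

CLK_TCK = 100  # assumption: USER_HZ is 100 on the node


def cpu_ticks(pid: int) -> int:
    """Return utime + stime (in clock ticks) from /proc/<pid>/stat, see proc(5)."""
    with open(f"/proc/{pid}/stat") as f:
        fields = f.read().split()
    return int(fields[13]) + int(fields[14])  # utime + stime


def main() -> None:
    pid = int(sys.argv[1])  # PID of the metricbeat process
    prev = cpu_ticks(pid)
    while True:
        time.sleep(1)
        cur = cpu_ticks(pid)
        # CPU time consumed in the last second, as a percentage of one core.
        print(f"{(cur - prev) / CLK_TCK * 100:.1f}%")
        prev = cur


if __name__ == "__main__":
    main()

Run on the node as, for example, python3 cpu_sample.py <metricbeat PID> (the script name is just a placeholder).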
To deploy Metricbeat, I use Kubernetes with the following deployment configuration:
apiVersion: v1
kind: ConfigMap
metadata:
  name: metricbeat-metricbeat-config
  labels:
    app: "metricbeat-metricbeat"
    chart: "metricbeat-8.0.1"
    release: "metricbeat"
data:
  metricbeat.yml: |
    metricbeat.modules:
    - module: system
      period: 1s
      metricsets:
        - cpu
        - load
    output.elasticsearch:
      username: '${ELASTICSEARCH_USERNAME}'
      password: '${ELASTICSEARCH_PASSWORD}'
      hosts: ["https://redacted"]
      ssl.verification_mode: "none"
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: metricbeat-metricbeat
  labels:
    app: "metricbeat-metricbeat"
    chart: "metricbeat-8.0.1"
    release: "metricbeat"
spec:
  replicas: 1
  selector:
    matchLabels:
      app: "metricbeat-metricbeat"
      release: "metricbeat"
  template:
    metadata:
      annotations:
        configChecksum: 6df01dd26e8b3b12a163562df0499a59a15623181e033ead1965a114271577f
      name: "metricbeat-metricbeat"
      labels:
        app: "metricbeat-metricbeat"
        chart: "metricbeat-8.0.1"
        release: "metricbeat"
    spec:
      affinity:
        podAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
              - key: app
                operator: In
                values:
                - springboot
            topologyKey: kubernetes.io/hostname
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
              - key: app
                operator: In
                values:
                - "metricbeat-metricbeat"
            topologyKey: kubernetes.io/hostname
      terminationGracePeriodSeconds: 30
      volumes:
      - name: elastic-certificates
        secret:
          secretName: elastic-certificates
      - name: metricbeat-config
        configMap:
          defaultMode: 0600
          name: metricbeat-metricbeat-config
      - name: data
        hostPath:
          path: /var/lib/metricbeat-metricbeat-default-data
          type: DirectoryOrCreate
      - name: varrundockersock
        hostPath:
          path: /var/run/docker.sock
      - name: proc
        hostPath:
          path: /proc
      - name: cgroup
        hostPath:
          path: /sys/fs/cgroup
      containers:
      - name: "metricbeat"
        image: "docker.elastic.co/beats/metricbeat:8.0.1"
        imagePullPolicy: "IfNotPresent"
        args:
        - "-e"
        - "-E"
        - "http.enabled=true"
        - "--system.hostfs=/hostfs"
        livenessProbe:
          exec:
            command:
            - sh
            - -c
            - |
              #!/usr/bin/env bash -e
              curl --fail 127.0.0.1:5066
          failureThreshold: 3
          initialDelaySeconds: 10
          periodSeconds: 10
          timeoutSeconds: 5
        readinessProbe:
          exec:
            command:
            - sh
            - -c
            - |
              #!/usr/bin/env bash -e
              metricbeat test output
          failureThreshold: 3
          initialDelaySeconds: 10
          periodSeconds: 10
          timeoutSeconds: 5
        resources:
          limits:
            cpu: 1000m
            memory: 200Mi
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: NODE_NAME
          valueFrom:
            fieldRef:
              fieldPath: spec.nodeName
        - name: ELASTICSEARCH_USERNAME
          valueFrom:
            secretKeyRef:
              key: username
              name: elastic-credentials
        - name: ELASTICSEARCH_PASSWORD
          valueFrom:
            secretKeyRef:
              key: password
              name: elastic-credentials
        envFrom: []
        securityContext:
          privileged: false
          runAsUser: 0
        volumeMounts:
        - name: elastic-certificates
          mountPath: /usr/share/metricbeat/config/certs
        - name: metricbeat-config
          mountPath: /usr/share/metricbeat/metricbeat.yml
          readOnly: true
          subPath: metricbeat.yml
        - name: data
          mountPath: /usr/share/metricbeat/data
        - name: varrundockersock
          mountPath: /var/run/docker.sock
          readOnly: true
        - name: proc
          mountPath: /hostfs/proc
          readOnly: true
        - name: cgroup
          mountPath: /hostfs/sys/fs/cgroup
          readOnly: true
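Since the container is started with -E http.enabled=true (the liveness probe curls port 5066), the spikes can also be watched through Metricbeat's own stats endpoint. The sketch below polls it once per second; the /stats path and the beat.cpu.total.time.ms field layout are assumptions based on the output I have seen, so adjust if yours differs:

#!/usr/bin/env python3
"""Poll Metricbeat's stats endpoint and print CPU-time deltas (illustrative sketch)."""
import json
import time
import urllib.request

# http.enabled=true exposes this on localhost inside the pod.
URL = "http://127.0.0.1:5066/stats"


def total_cpu_ms() -> int:
    with urllib.request.urlopen(URL) as resp:
        stats = json.load(resp)
    # Assumption: field layout is beat.cpu.total.time.ms; adjust to your /stats output.
    return stats["beat"]["cpu"]["total"]["time"]["ms"]


def main() -> None:
    prev = total_cpu_ms()
    while True:
        time.sleep(1)
        cur = total_cpu_ms()
        print(f"{cur - prev} ms CPU time in the last second")
        prev = cur


if __name__ == "__main__":
    main()

Since the endpoint is bound to localhost in the pod, running this from outside should work via kubectl port-forward to the pod.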
Is that considered normal behavior?