I have changed my question.
When I restart my application in k8s, I find this error message in the heartbeat log:
won't start runner: monitor ID user-center is configured for multiple monitors! IDs must be unique values.
What does it mean?
This means you've set the id field to the same value for two or more monitors in Heartbeat. The id field should be unique for a given monitor.
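For illustration, here's a minimal heartbeat.yml sketch (the endpoints are made up) that would trigger exactly this error, because both monitors declare the same id:

heartbeat.monitors:
  - type: http
    id: user-center
    urls: ["http://user-center-a:8080/info"]  # hypothetical endpoint
    schedule: "@every 10s"
  - type: http
    id: user-center  # duplicate of the id above: this runner won't start
    urls: ["http://user-center-b:8080/info"]  # hypothetical endpoint
    schedule: "@every 10s"

Giving each monitor its own id clears the error.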
But I saw the ID definition in the documentation:
Note that this uniqueness is only within a given beat instance. If you want to monitor the same endpoint from multiple locations it is recommended that those heartbeat instances use the same IDs so that their results can be correlated. You can use the host.geo.name property to disambiguate them.
Does this mean an ID can be redefined?
thank you
The intent of IDs is to uniquely represent a thing that you're monitoring. You may want to monitor it from multiple locations using multiple beats, so we allow that. However, there's no reason to monitor the same thing twice from the same beat; that would be a duplicate, so we issue the error you saw.
So, while you can redefine an ID by changing the configuration associated with it, you should not. There are no controls preventing it.
You mean the error came from the same beat? Fine.
What should I configure the ID field to be for beats in k8s?
By the way, I found that Kibana Uptime doesn't have a name option.
As far as what the ID should be for k8s, you have your choice of the variables here: https://www.elastic.co/guide/en/beats/heartbeat/current/configuration-autodiscover.html#_kubernetes. You'll want it to be unique, so you could use ${data.kubernetes.pod.uid}-${data.kubernetes.container.name}, for instance, but there's no right answer so long as you choose something that's globally unique.
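As a rough sketch (the URL and schedule here are illustrative, not prescriptive), the autodiscover template could look like:

heartbeat.autodiscover:
  providers:
    - type: kubernetes
      templates:
        - config:
            - type: http
              urls: ["http://${data.host}:8080/info"]
              schedule: "@every 10s"
              # pod UID plus container name is unique across the cluster
              id: "${data.kubernetes.pod.uid}-${data.kubernetes.container.name}"
              name: "${data.kubernetes.pod.name}"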
Thanks for the bug report re: missing names! We actually just fixed that; it will ship in 7.4.1. We are also removing that name dropdown in 7.5 and handling the same use case with an enhanced query bar that autosuggests fields and values (similar to APM).
Sorry for the late reply.
In fact, I used ${data.kubernetes.container.name} as the configuration; it is unique, but when the application restarts the problem still occurs.
This is my configuration:
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: heartbeat-config
  namespace: monitor
  labels:
    k8s-app: heartbeat
data:
  heartbeat.yml: |-
    heartbeat.autodiscover:
      providers:
        - type: kubernetes
          host: ${NODE_NAME}
          templates:
            - condition.and:
                - equals:
                    kubernetes.namespace: "monitor"
                - not:
                    equals:
                      kubernetes.container.name: "heartbeat"
              config:
                - type: http
                  urls: ["http://${data.host}:8080/info"]
                  schedule: "@every 10s"
                  timeout: 1s
                  name: "${data.kubernetes.pod.name}"
                  id: "${data.kubernetes.container.name}"
    processors:
      - add_kubernetes_metadata:
          in_cluster: true
    output.kafka:
      hosts: ["kafka:9092"]
      topic: 'heartbeat-dev'
      partition.round_robin:
        reachable_only: false
      required_acks: 1
      compression: gzip
      max_message_bytes: 1000000
    monitoring:
      enabled: true
      elasticsearch:
        hosts: ['${ELASTICSEARCH_HOST:elasticsearch}:${ELASTICSEARCH_PORT:9200}']
        username: ${BEAT_MONITOR_USERNAME}
        password: ${BEAT_MONITOR_PASSWORD}
---
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: heartbeat
  namespace: monitor
  labels:
    k8s-app: heartbeat
spec:
  template:
    metadata:
      labels:
        k8s-app: heartbeat
    spec:
      serviceAccountName: heartbeat
      terminationGracePeriodSeconds: 30
      hostNetwork: true
      dnsPolicy: ClusterFirstWithHostNet
      containers:
        - name: heartbeat
          image: registry-vpc.cn-hangzhou.aliyuncs.com/elasticstack/heartbeat:7.3.0
          args: [
            "-c", "/etc/heartbeat.yml",
            "-e",
          ]
          env:
            - name: ELASTICSEARCH_HOST
              value:
            - name: ELASTICSEARCH_PORT
              value: ""
            - name: LOGSTASH_HOST
              value:
            - name: LOGSTASH_PORT
              value:
            - name: HEARTBEAT_USERNAME
              value:
            - name: HEARTBEAT_PASSWORD
              value:
            - name: BEAT_MONITOR_USERNAME
              value:
            - name: BEAT_MONITOR_PASSWORD
              value:
            - name: NODE_NAME
              valueFrom:
                fieldRef:
                  fieldPath: spec.nodeName
            - name: HOSTNAME
              valueFrom:
                fieldRef:
                  fieldPath: spec.nodeName
          securityContext:
            runAsUser: 0
            # If using Red Hat OpenShift uncomment this:
            #privileged: true
          resources:
            limits:
              memory: 200Mi
            requests:
              cpu: 100m
              memory: 100Mi
          volumeMounts:
            - name: config
              mountPath: /etc/heartbeat.yml
              readOnly: true
              subPath: heartbeat.yml
            - name: data
              mountPath: /usr/share/heartbeat/data
      volumes:
        - name: config
          configMap:
            defaultMode: 0600
            name: heartbeat-config
        # data folder stores a registry of read status for all files, so we don't send everything again on a heartbeat pod restart
        - name: data
          hostPath:
            path: /var/lib/heartbeat-data
            type: DirectoryOrCreate
Another question: when the application is restarted, I receive not only the heartbeat of the current application but also, irregularly, heartbeats from the previous application's IP.
What caused this? A cache?
If it's occurring during a restart, is it possible that two containers momentarily use the same name? Perhaps the container ID would be more appropriate here. You could concatenate the two to create something readable but unique for the monitor name/id; see the sketch below.
I have a hunch that may also fix the IP issue.
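As a sketch (this assumes the kubernetes.container.id field is exposed by your autodiscover provider version), the template in the ConfigMap above could use:

id: "${data.kubernetes.container.id}"  # container ID is unique per container instance
name: "${data.kubernetes.pod.name}-${data.kubernetes.container.name}"  # readable label for the UI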
My mistake, it's not a restart but a rebuild, so I think the ID will be different. Also, I don't see the id field in my heartbeat index.