For an idle 3-node cluster (7.17.4), where all nodes are master-eligible, i.e. listed under `initial_master_nodes`, is it normal that:
- *Documents merged rate*, red-bordered on the image below, would be on the order of 1000/s on an idle cluster?
- *Query time* would be zero on only one of the nodes? On the image, this would be node es2 before and es0 after the switch, pointed at with the left white arrow. (This metric is from Prometheus' `elasticsearch_indices_search_query_time_seconds`; the raw per-node counters behind it can be cross-checked with the curl sketch after this list.)
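To cross-check the panels against raw numbers, the per-node search counters can be read straight off the node-stats API (a sketch; I'm assuming the exporter derives its metric from `query_time_in_millis`, and `filter_path` only trims the response):

```
# Per-node search counters; query_time_in_millis should stay flat on
# any node whose Query time panel reads zero.
curl -s 'http://127.0.0.1:9200/_nodes/stats/indices/search?filter_path=nodes.*.name,nodes.*.indices.search.query_total,nodes.*.indices.search.query_time_in_millis'
```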
On one of the nodes, es0, a Kibana (7.17.4) is also running. Is the rising *Query time* perhaps Kibana's doing?
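One way to test the Kibana hypothesis would be the index-stats API (again a sketch; `filter_path` just narrows the output to the query counters):

```
# Per-index search counters; if query_total climbs mostly on the
# .kibana* system indices, the query time is likely Kibana's own polling.
curl -s 'http://127.0.0.1:9200/_stats/search?level=indices&filter_path=indices.*.total.search.query_total'
```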
All of these screenshots are from the Grafana Cloud-provided 'ES integration'. The full PromQL query for the *Documents merged rate* panel is

```
rate(elasticsearch_indices_merges_docs_total{job="$job",instance=~"$instance",cluster="$cluster",name=~"$name"}[$__rate_interval])
```

and for the *Query time* panel:

```
irate(elasticsearch_indices_search_query_time_seconds{job="$job",instance=~"$instance",cluster="$cluster",name=~"$name"}[$__rate_interval])
```
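And the merge panel itself can be sanity-checked by sampling the raw counter behind it twice (a sketch; I'm assuming `elasticsearch_indices_merges_docs_total` maps to the node-stats `merges.total_docs` counter, which I haven't verified against the exporter):

```
# Sample the merged-docs counter 60 s apart; if it barely moves, the
# ~1000/s reading is a rate artifact rather than real merge activity.
curl -s 'http://127.0.0.1:9200/_nodes/stats/indices/merge?filter_path=nodes.*.name,nodes.*.indices.merges.total_docs'
sleep 60
curl -s 'http://127.0.0.1:9200/_nodes/stats/indices/merge?filter_path=nodes.*.name,nodes.*.indices.merges.total_docs'
```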
The indices, and the documents within them, are identical between 'now' and the start of the graphs 24 h ago:
```
curl 'http://127.0.0.1:9200/_cat/indices?v'
health status index                           uuid                   pri rep docs.count docs.deleted store.size pri.store.size
green  open   .geoip_databases                WjiXjTDRRy-qZgMnzGSGWQ   1   1         40           37     76.3mb         38.2mb
green  open   .apm-custom-link                Dg7gUbajRzOsJ8Z64zOLxw   1   1          0            0       452b           226b
green  open   .apm-agent-configuration        j0GCAZShRgGURj-y9x7ORw   1   1          0            0       452b           226b
green  open   .kibana_task_manager_7.17.4_001 QevcCzDMTbCrjroEx0GF9A   1   1         17        66376     25.8mb           13mb
green  open   .kibana_7.17.4_001              LCSPjV-3SnSI1Y9QHAWBZA   1   1         17            1      4.7mb          2.3mb
green  open   my-custom-index-2022-06-01      oHmNTGXhS-S4bPBMbh8wEQ   3   2          1            0       23kb          7.6kb
green  open   my-custom-index-2022-06-06      E4PlsGbKR-2hL1G2cyqj8Q   3   2          1            0       23kb          7.6kb
```