How to hide the load balancer node from the Kibana Monitoring GUI

Hi. I am trying to set up a load balancer node for Kibana. I have a 3-node ES cluster, and I've run into a problem with the Monitoring GUI in Kibana.

My ES cluster:

  1. Node 1 - 40 GB available
  2. Node 2 - 40 GB available
  3. Node 3 - 40 GB available

Load balancer:
400 GB available (but it is not a data node)

The problem is that Monitoring shows the wrong 'Disk Available' value. The expected value is 120 GB (3 × 40 GB), but the Monitoring GUI shows 520 GB, i.e. the load balancer's 400 GB is being counted as well.

Does anyone know a way to hide the load balancer node from the Monitoring GUI?

Configuration files:

Node 1:

cluster.name: "ES_CLUSTER"
node.name: ${HOSTNAME}
node.master: true
node.data: true
network.host: dev16
discovery.zen.ping.unicast.hosts: ["dev17", "dev18"]

discovery.zen.minimum_master_nodes: 2
path.data: /data_es/
path.logs: /home/es_user/programs/logs/logs_520/
path.repo: /data_es/backup/

xpack.graph.enabled: false
xpack.monitoring.enabled: true
xpack.security.enabled: false
xpack.watcher.enabled: false

Node 2:

cluster.name: "ES_CLUSTER"
node.name: ${HOSTNAME}
node.master: true
node.data: true
network.host: dev17
discovery.zen.ping.unicast.hosts: ["dev16", "dev18"]

discovery.zen.minimum_master_nodes: 2
path.data: /data_es/
path.logs: /home/es_user/programs/logs/logs_520/
path.repo: /data_es/backup/

xpack.graph.enabled: false
xpack.monitoring.enabled: true
xpack.security.enabled: false
xpack.watcher.enabled: false

Node 3:

cluster.name: "ES_CLUSTER"
node.name: ${HOSTNAME}
node.master: true
node.data: true
network.host: dev18
discovery.zen.ping.unicast.hosts: ["dev16", "dev17"]

discovery.zen.minimum_master_nodes: 2
path.data: /data_es/
path.logs: /home/hdp18/programs/logs/logs_520/
path.repo: /data_es/backup/

xpack.graph.enabled: false
xpack.monitoring.enabled: true
xpack.security.enabled: false
xpack.watcher.enabled: false

Load balancer:

node.master: false
node.data: false
node.ingest: false

cluster.name: "ES_CLUSTER"
node.name: ${HOSTNAME}
network.host: dev15

xpack.graph.enabled: false
xpack.monitoring.enabled: false
xpack.security.enabled: false
xpack.watcher.enabled: false

discovery.zen.ping.unicast.hosts: ["dev16", "dev17", "dev18"]
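
For completeness, Kibana is pointed at the load balancer node. A minimal kibana.yml sketch, assuming Kibana 5.x/6.x (to match the zen discovery settings above) and the default HTTP port 9200:

# kibana.yml - route all Kibana traffic through the coordinating-only node
# (port 9200 is an assumption; adjust if your HTTP port differs)
elasticsearch.url: "http://dev15:9200"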

Hi @Poohowy,

Unfortunately, there's nothing you can do at the moment to change this or otherwise hide the disk space reported by the coordinating-only node (what you're calling the load balancer node).

This is because we display the raw value reported by the cluster stats API, which includes the spare space on the coordinating-only node as well as on any master-only nodes. It does make sense, though, to display only the sum of available disk space across data nodes, and I'm going to record that as an issue in our internal issue tracker for X-Pack Monitoring.
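
You can confirm this yourself against the cluster stats API; its nodes.fs section aggregates filesystem stats across every node in the cluster, not just the data nodes. A quick sketch, assuming the default HTTP port 9200:

# nodes.fs sums disk stats over ALL nodes, coordinating-only one included,
# so 'available' here should report roughly 520 GB for your cluster
curl -s 'http://dev16:9200/_cluster/stats?human&filter_path=nodes.fs'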

Thanks for the idea! Sorry there's no solution though.


Unrelated, but in terms of your architecture, I highly recommend setting up at least one more coordinating-only node so that you have some high availability. If you don't, then your own clients will see downtime during rolling restarts of the single coordinating node; a sketch of such a node's config follows below.
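
A second coordinating-only node would look just like your existing load balancer config, only bound to a different host. For example (dev14 is a made-up hostname standing in for whatever machine you'd use):

# Second coordinating-only node ("dev14" is a hypothetical host)
cluster.name: "ES_CLUSTER"
node.name: ${HOSTNAME}
network.host: dev14

node.master: false
node.data: false
node.ingest: false

discovery.zen.ping.unicast.hosts: ["dev16", "dev17", "dev18"]

xpack.graph.enabled: false
xpack.monitoring.enabled: false
xpack.security.enabled: false
xpack.watcher.enabled: false

Kibana (or any other client) can then fail over between dev15 and this node.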

Generally speaking, you don't often need coordinating-only nodes with such a "small" cluster, but they can offload some of the heap workload if those three data nodes are already approaching their limits (although at that point the better fix is most likely to add more data nodes).

Hope that helps,
Chris


It occurred to me after posting, but just as a side note: you can always see the actual per-node disk space on the Node Listing screen. That avoids the issue entirely, though it doesn't fix the odd summary value.
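
If you prefer the API over the UI, the _cat allocation endpoint gives the same per-data-node view (again assuming the default port 9200):

# One row per data node with disk.used / disk.avail / disk.total,
# so the coordinating-only node can't skew these numbers
curl -s 'http://dev16:9200/_cat/allocation?v'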

Chris
