Monitoring production cluster with Kibana and Metricbeat

Hi there,

we run Elastic Stack version 6.8.0 with a Basic licence.

Like a few others we have issues monitoring the production cluster in Kibana.
As recommended we have set up a separate monitoring cluster including a separate Kibana instance. All monitoring data from the production cluster is shipped via Metricbeat. Thus there are some metricbeat-* indices in Elasticsearch (monitoring).
Opening the Monitoring section in Kibana (monitoring), it says "Monitoring is currently off" and asks us to turn it on. Doing so monitors the monitoring cluster itself.

We have seen this solution, but they use HTTP exporters, which is undesired.

Some maybe relevant Kibana settings:

elasticsearch.hosts: <monitoring cluster> (if we change this to <production cluster> it says "No monitoring data found", which makes total sense, right?)
xpack.monitoring.kibana.collection.enabled: false

And some Elasticsearch settings:

xpack.monitoring.collection.enabled: false

What are we missing?

You will need to specify the information on the monitoring cluster in your kibana.yml of the Kibana instance which is part of your primary cluster.
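In kibana.yml of the production Kibana, that would be something along these lines (the host name is a placeholder for your monitoring node, not a setting from your post):

```yaml
# kibana.yml of the Kibana instance in the PRODUCTION cluster
# "monitoring-es:9200" is a placeholder for your monitoring cluster's address
xpack.monitoring.elasticsearch.hosts: ["http://monitoring-es:9200"]
```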


Hello and thanks for the reply.

We want to use a separate Kibana instance to monitor the production cluster including the production Kibana instance. If I get your suggestion right, we would use the production Kibana instance for monitoring, but that is undesired.

Hi again,

even if I configure the production Kibana instance so that xpack.monitoring.elasticsearch.hosts points to the monitoring cluster, no monitoring visualizations show up. It always says "We couldn't activate monitoring. No monitoring data found." The monitoring cluster (a single node, actually) has an index named metricbeat-6.8.4-2019.11.12 filled with metrics received from the production cluster. Why doesn't Kibana read from that index? What setting is missing?

Thank you

It is still not working. Here is what I did.
Following this guide step by step:

  1. Set up the Elasticsearch cluster you want to use as the monitoring cluster

Done. I have set up an Elasticsearch container for monitoring in our Docker cluster. These are its settings:

cluster.name: "monitoring"
node.name: "monitoring"
http.port: 9299
transport.tcp.port: 9399
cluster.remote.connect: false
discovery.zen.minimum_master_nodes: 1
network.publish_host: ""
node.master: true
node.data: true
node.ingest: true
xpack.monitoring.collection.enabled: false

1.a. (Optional) Verify that the collection of monitoring data is disabled on the monitoring cluster

Check, see last line of config.
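(For what it's worth, this can also be double-checked against the running cluster with the cluster settings API; a sketch in Console syntax:)

```
GET _cluster/settings?include_defaults=true&filter_path=**.xpack.monitoring.collection.enabled
```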

Configure your production cluster to collect data and send it to the monitoring cluster

Use Metricbeat. There we go:

  1. Enable the collection of monitoring data

Our production cluster settings now looks like

    "persistent": {
        "xpack": {
            "monitoring": {
                "collection": {
                    "enabled": "true"
    "transient": {}
  1. Install Metricbeat on each Elasticsearch node in the production cluster.
  2. Enable the Elasticsearch module in Metricbeat on each Elasticsearch node.
  3. Configure the Elasticsearch module in Metricbeat.

Installed and configured on the two data nodes of the production cluster. /etc/metricbeat/modules.d/elasticsearch.yml looks like

- module: elasticsearch
  metricsets:
    - ccr
    - cluster_stats
    - index
    - index_recovery
    - index_summary
    - ml_job
    - node_stats
    - shard
  period: 10s
  hosts:
    - localhost:9200
  xpack.enabled: true
  1. Identify where to send the monitoring data.

Makes perfect sense. This is the /etc/metricbeat/metricbeat.yml:

metricbeat.config.modules:
  enabled: true
  path: ${path.config}/modules.d/*.yml
  reload.enabled: false
  reload.period: 10s

name: mynode
logging.files.permissions: 420
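The output section for step 4 would be something along these lines (the host name is a placeholder for our monitoring node, using the http.port: 9299 configured above):

```yaml
# metricbeat.yml (continued) - where to send the collected monitoring data
# "monitoring-es:9299" is a placeholder for the monitoring node's address
output.elasticsearch:
  hosts: ["http://monitoring-es:9299"]
```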
  1. Start Metricbeat on each node.

Done. Everything seems to work. Log says
Connection to backoff(elasticsearch( established

Data is getting shipped smoothly. Running GET _cat/indices?v on the monitoring cluster returns

health status index                          uuid                   pri rep docs.count docs.deleted store.size pri.store.size
green  open   .monitoring-es-6-mb-2019.12.09 uVqajPYoTGOltsOJslZNaw   1   0        639            0        1mb            1mb
green  open   .kibana_1                      d0Cxg_HJTKmO-0wVo5j79w   1   0          4            0     17.2kb         17.2kb
green  open   .kibana_task_manager           LzVtyQ_GQuSvIaoWKvXpqQ   1   0          2            0     12.5kb         12.5kb

  1. Disable the default collection of Elasticsearch monitoring metrics.

Gonna do that. Our production cluster settings look like this now:

    "persistent": {
        "xpack": {
            "monitoring": {
                "elasticsearch": {
                    "collection": {
                        "enabled": "false"
                "collection": {
                    "enabled": "true"
    "transient": {}
  1. View the monitoring data in Kibana.

For that I have deployed a separate Kibana instance in the same Docker cluster with the following settings:

server.name: "monitoring"
server.port: 5699
elasticsearch.hosts: ""
  1. Open Kibana in your web browser.
  2. In the side navigation, click Monitoring.

If you are using a separate monitoring cluster, you do not need to turn on data collection. The dashboards appear when there is data in the monitoring cluster.

Doing so only brings up the annoying "Monitoring is currently off" screen again.
Actually turning on monitoring simply leads to monitoring the monitoring cluster.

Please help me find the mistake.
Best regards

Hi, I'd like to chime in that I am having the exact same problem as the one described in the thread.

I have configured Metricbeat to report metrics to a distinct "monitoring" cluster. My Kibana instance is attached to the monitoring cluster. I have enabled the following settings in kibana.yml in an attempt to get my production server monitors to display in my monitoring Kibana.

xpack.monitoring.ui.enabled: true
xpack.monitoring.enabled: true
xpack.monitoring.elasticsearch.hosts: ["http://monitoring-es:9200"]

What am I missing? Why is Kibana not seeing the monitoring indexes present on the monitoring cluster?

Finally our monitoring cluster is up and running and smoothly monitors our production cluster.

That was the crucial mistake.
Our data nodes are in fact data-only nodes, and it turns out that Metricbeat has to collect data from the master node for Kibana to recognize the shipped metrics as valid monitoring data; or something like that.
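In config terms, a sketch of that change in modules.d/elasticsearch.yml (the host name is a placeholder for a master-eligible node in your cluster):

```yaml
- module: elasticsearch
  period: 10s
  # point Metricbeat at a master-eligible node, not a data-only node
  # "es-master:9200" is a placeholder host
  hosts: ["http://es-master:9200"]
  xpack.enabled: true
```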

I actually found a different solution. What was key for me was this setting, in modules.d/elasticsearch.yml:

xpack.enabled: true

I found this setting on the page for the Elasticsearch module in Metricbeat here:

It isn't a documented setting; it's just thrown in as a comment after the fact in their example configuration.

But this was the key setting for us. Enabling xpack monitoring in the Elasticsearch module on each node and then running metricbeat setup was what started showing metricbeat data in the Monitoring tab of Kibana.
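To summarize what worked for us, assuming the module file layout from earlier in the thread:

```yaml
# modules.d/elasticsearch.yml - the decisive line is xpack.enabled
- module: elasticsearch
  period: 10s
  hosts: ["localhost:9200"]
  xpack.enabled: true   # collect in the .monitoring-* format the Kibana Monitoring UI expects
# then run `metricbeat setup` once and restart Metricbeat on each node
```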

Yes, I was wondering about that myself. Luckily, I had it in our configuration from the beginning.
Glad to hear it works for you.

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.