Impossible to use both Monitoring and DevTools Console with monitoring cluster on basic license(!)?


I am in the process of migrating production services from ES v2 (the last 2.x release) to ES 5.6.3.

I have encountered a huge headache trying to replicate our vanilla management/monitoring setup with the 5.x combination of Kibana, X-Pack Monitoring, and the Dev Tools Console.

My situation is identical I think to the one described here: Point Kibana 'Dev Tools' to remote ES cluster

At root, all my problems stem from the fact that the Dev Tools Console no longer allows any change to "Server", nor, as far as I can tell, offers any configuration setting that lets it interrogate a different cluster from the one the Monitoring plugin uses.

Hence it appears totally broken for working with a dedicated monitoring cluster. I think?

That is,

I have two clusters, production, and monitoring.

Production exports data via exporters to monitoring cluster, as per usual.
Monitoring does not export its own data; it has X-Pack monitoring collection disabled in elasticsearch.yml, as per usual.

Up until now, in ESv2, we ran an instance of Kibana pointed at the monitoring cluster:
kibana.yml had elasticsearch.url set to monitoring cluster.
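A rough sketch of that old arrangement in kibana.yml (the hostname here is a hypothetical placeholder):

```yaml
# kibana.yml (ES/Kibana 2.x era) -- hostname is a placeholder
# Kibana sources its own data, and the monitoring data, from the monitoring cluster
elasticsearch.url: "http://monitoring-cluster.example.com:9200"
```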

This would source data for the monitoring correctly.

Then in Sense, we would edit the exposed "Server" field to point at the production cluster.

So: we could both monitor and interact via Console with our production cluster.

This was I believe the standard way to use Kibana/Sense/Console when using a monitoring cluster.

Today my problem is that with ES/Kibana v5 this pattern is broken (as far as I can tell).

My ES v5 clusters are working correctly.
My production cluster is exporting data to my monitoring cluster.
My monitoring cluster has local export disabled.
Kibana Monitoring is sourcing data (about the production cluster) correctly from the monitoring cluster.

But because the Dev Tools version of Console removed the "Server" field, it is no longer possible to use the Dev Tools Console for any management of the production cluster.

This is because Kibana is itself sourcing from the monitoring cluster. And because the editable "Server" field in the UI is removed, the Dev Tools Console only allows interaction with that cluster: the monitoring one.

The problem is that while the Monitoring plugin must source data from the monitoring cluster, the DevTools console should be able to interrogate a different endpoint: the production cluster.

I have read through the public documentation, and also developer discussion, of the removal of the "Server" field in the UI. I have read about the new "xpack.monitoring.elasticsearch.url" value in kibana.yml. I have tried every combination I can think of, to no avail.

The solution in the similar question cited above is no good for anything but search queries.

It might (maybe) allow you to jury-rig access to query your cluster, but we have always used Console for all our administrative interrogation of the cluster - e.g. to check and change settings when performing recovery, ops tasks, rolling upgrades, etc.
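For concreteness, the kind of administrative calls I mean are things like the following, in Console syntax (the specific requests are illustrative, not an exhaustive list):

```
# Check cluster health and watch shard recovery progress
GET _cluster/health
GET _cat/recovery?v

# Disable shard allocation ahead of a rolling restart (re-enable afterwards)
PUT _cluster/settings
{
  "transient": { "cluster.routing.allocation.enable": "none" }
}
```

None of these are plain searches, which is why a query-only workaround doesn't cover our use case.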

This tool has been a godsend, I really want to be able to continue using it.

The only workaround I have figured out so far is to run two instances of Kibana just so I can configure one for monitoring, and the other for DevTools console activity. This is comical and a profound regression.

This is a real shame, since I am overjoyed to see the return in 5.x Monitoring of a lot of the secondary analytics and nuance lost in the 1.x to 2.x refactor.

I am hoping I am just overlooking some magic combination...!

...but is it really no longer possible to use Kibana monitoring/administration with a monitoring cluster setup?

As far as I knew, this was a very standard production deployment; I just can't believe something so fundamental is broken... :confused:

All hints and pointers most welcome!


Hi Aaron,

I think you have things switched around from the normal configuration. Normally people would set their elasticsearch.url to their production cluster (and that's where your .kibana index will be), and then set xpack.monitoring.elasticsearch.url to point to their monitoring cluster.
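In kibana.yml, that recommended configuration looks roughly like this (hostnames are placeholders):

```yaml
# kibana.yml -- hostnames are hypothetical placeholders
# Dev Tools Console, index patterns, and the .kibana index all use this cluster:
elasticsearch.url: "http://production-cluster.example.com:9200"
# The Monitoring UI sources its data from the dedicated monitoring cluster:
xpack.monitoring.elasticsearch.url: "http://monitoring-cluster.example.com:9200"
```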

With these settings, monitoring should work. And your Kibana Dev Tools Console would also access the production cluster (only that cluster). And all your Kibana index patterns would be for data on the production cluster.

There is also another (slightly harder) way you could configure things. You could keep your Kibana pointing to your monitoring cluster (and it would have .kibana on it), and you could use cross cluster search to get to all the indices on your production cluster (including from the Console).
Here's some info on it;
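A minimal sketch of that alternative, assuming ES 5.4+ (where cross cluster search was introduced); the hostname and the `production` alias are placeholders:

```yaml
# elasticsearch.yml on the monitoring cluster nodes (5.x syntax;
# the search.remote.* settings were renamed cluster.remote.* in later releases)
search.remote.production.seeds: ["production-node-1.example.com:9300"]
```

With that in place, remote indices are addressed with the cluster alias as a prefix, e.g. `GET production:logstash-*/_search` from the Console.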


Hi Lee,

Perils of posting late on a Friday,

I forgot to mention, I had tried that first, after reading about the introduction of the new configuration value xpack.monitoring.elasticsearch.url.

In Kibana 5.6.3, our result on first attempt was that Kibana receives data from two clusters, both production and monitoring...

...but we found that with our basic license, we are prohibited both from using monitoring with more than one cluster and from specifying which cluster is to be monitored. (We're a non-profit and buying a license is not really in scope. :frowning:) In this case, the data source defaulted to the monitoring cluster itself (rather than its store of data from the production cluster), and this was immutable via the UI.

That might have been because on startup, I had not yet disabled xpack.monitoring in the elasticsearch.yml of the monitoring cluster nodes.

I had noticed and fixed that, and restarted the entire ES cluster and Kibana... but it did not rectify the problem...

But I just re-tried this on your advice, after wiping all the indices on the monitoring cluster, and restarting Kibana again, and now it seems to be working with that configuration. Yay!

(Possibly tangentially related: Kibana did not detect when our (basic) license was posted to the monitoring cluster. I got the ack from the license acknowledgement step, but this was never reflected in Kibana, which continued to claim we had a month-long trial license.)

It is a huge relief that this setup is still possible!

Thanks for the sanity check!!!


This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.