Marvel monitoring data not showing up on separate marvel cluster

Hi everyone.
I have the following situation.

I'm running an Elasticsearch cluster with the Marvel agent installed, which is configured to send its monitoring data to a separate cluster. As far as I can observe with Kibana, the data is received by the separate cluster and stored in the .marvel-es-* indices as expected. But the Marvel app tells me:

"Waiting for Marvel Data
It appears that we have not received any data for this cluster. Marvel data is sent every 10 seconds by the agent plugin; once the data appears this page will automatically redirect to the overview page."

I enabled the Marvel agent on the Marvel cluster just to make sure there was nothing wrong with the Marvel app itself, and the monitoring data from the Marvel cluster shows up without a problem. But still nothing from my other cluster.

Both clusters run with trial licenses (different ones, as far as I can see, with different expiration dates).

Versions used:
Elasticsearch 2.2.0
Marvel 2.2.0
no Shield

Please let me know if you need additional information.

Thanks in advance.

Heya Gerd,

Are your two clusters both running on servers that sync to the same time source?

Did you recently update to 2.2.0, or is this a fresh install?


Hi Steve.
Both clusters are using the same NTP servers and are in sync to the second.
Both clusters are fresh installs with no upgrade history.


Hi Gerd,

Can you delete the .marvel indices on the monitoring cluster, then run the following request to ensure that it's actually receiving data (we're looking for document counts here rather than mere existence):

GET /_cat/indices/.marvel*?v

Also, can you run that same command on the monitored (prod) cluster? If it's getting data, then it's recording it locally.

Finally, can you show the Marvel exporter configuration that you're using on the prod cluster?
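For reference, the HTTP exporter settings on the prod cluster's nodes (in elasticsearch.yml) should look something like the sketch below; the exporter name "id1" and the host are placeholders, not taken from your setup:

marvel.agent.exporters:
  id1:
    type: http
    host: ["http://monitoring-cluster-host:9200"]

Every node in the prod cluster needs this setting and a restart for it to take effect.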


Hi Chris.
I executed the GET and found the problem myself.

health status index                 pri rep docs.count docs.deleted store.size pri.store.size
yellow open   .marvel-es-2016.03.17   1   1       5213            0    612.4kb        612.4kb
yellow open   .marvel-es-2016.03.20   1   1       8614            0    944.1kb        944.1kb
yellow open   .marvel-es-2016.03.18   1   1       8613            0        1mb            1mb
yellow open   .marvel-es-2016.03.21   1   1      25742            0        3mb            3mb
yellow open   .marvel-es-2016.03.19   1   1       8611            0   1021.4kb       1021.4kb

The .marvel-es-data index was missing. It seems that this index stores information that is crucial for interpreting the data in the .marvel-es-* indices.

I double-checked all the hosts in the production cluster and found one master node that had not been restarted after changing the Marvel exporter settings. After restarting that master node, the monitoring data showed up.
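In case it helps anyone else: to spot a node that hasn't picked up the new exporter settings, something like the following should work (assuming the settings live in elasticsearch.yml; filter_path is just response filtering and is optional):

GET /_nodes/settings?filter_path=nodes.*.settings.marvel

A node still showing the old (or no) marvel.agent.exporters settings is one that hasn't been restarted since the config change.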

Sorry I bothered everyone with such a dull mistake.

What I (think I) learned: it seems only the master nodes provide that tiny bit of additional information needed to interpret all the monitoring data in the .marvel-es-* indices.


That's true: the plugin waits for the master node to show up before giving access to the interface. I'm glad you resolved your issue.


This helped me as well... we were moving our Marvel monitoring off of our prod cluster to a separate monitoring cluster. We had deleted the old .marvel* indices, and the .marvel-es-data-1 index was not recreated until all of the nodes in our prod cluster were sending their Marvel data to the new monitoring cluster.