Marvel Screen Not Showing Data

ES 2.0
Logstash 2.0
Kibana 4.2

I installed marvel 2.0 two days ago and it was working as expected until today. Now when I go to the marvel tab in Kibana I see the following:

and when I look at the index I see:

I know data is going into the index but I don't know why the marvel screen is not showing it. When I look at the index in kibana I can see plenty of data:

I am new to marvel so I don't know where to start to troubleshoot this issue. I know it is installed on all my nodes:

Any help would be greatly appreciated.

Thanks

Can you open your browser's debugger (generally F12, or Firebug) on that page and see if there are any errors?

I don't see anything that sticks out. Still the same situation: the data is there and Kibana sees it, but the Marvel tab is not showing anything.

That seems weird though. Which Elasticsearch server is your Marvel config file pointing to? I assume it would be the proper one, just making sure you have the correct one.

I'm pointing to a domain that consists of 2 search load balancers. I am just testing Marvel for my use case, and if it works well I will create a separate health cluster to send my Beats and Marvel info to. Right now it is in the same ES cluster as my test data. That consists of 5 ES nodes, 2 masters, 2 indexers, 2 Kibana nodes, and 2 search load balancers (all VMs). I have the Marvel agent installed on all my nodes, with the exception of the Kibana nodes, where I have the Marvel plugin for Kibana.

My Kibana nodes point to the domain of the load balancers, so I assume that is where Marvel is looking for its data. The strange thing is I see data in Kibana in the Marvel index, but Marvel is not seeing it. And strangest of all, it worked for a few days and then stopped.

OK, so I have done some troubleshooting, and maybe this will help find why I'm not getting Marvel data.

Right now I am testing Marvel, so it is not on a dedicated cluster; Marvel is just being indexed in a cluster that has other test data in it. The config is the Marvel agent installed on 4 Elasticsearch nodes (data nodes) and 2 Elasticsearch nodes (search load balancers), and the Marvel plugin installed on one Kibana node (no Elasticsearch installed). On the Kibana node, kibana.yml points at my 2 search load balancers. This works great for Kibana searching the indices, but for some reason not for Marvel.
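For reference, a quick way to confirm which Elasticsearch endpoint that Kibana node (and therefore the Marvel UI) actually queries; the config path and hostname below are only assumptions for a default package install:

# On the Kibana-only node: show the endpoint Kibana talks to
grep '^elasticsearch.url' /opt/kibana/config/kibana.yml
# expected output, something like:
# elasticsearch.url: "http://search-lb.example.com:9200"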

The docs are not that clear, but do I need to be running Elasticsearch on my Kibana nodes in order for Marvel to work correctly?

Also, as stated above, I have Marvel indices in my cluster and Kibana can see them.

I just don't know why the Marvel page is blank.

You shouldn't have to have Elasticsearch installed on the Kibana node for Marvel to work. If Kibana is set up correctly to see the data, Marvel will inherit that setup from Kibana.

Do you remember if you happened to create any new mappings or templates around the time that Marvel stopped showing information that might be overriding the Marvel settings?

Also, do you have any custom configuration in the "marvel.agent" section of the elasticsearch.yml files in your nodes that you can share?
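A hedged way to check that from the command line is to ask the nodes what marvel.agent settings they actually started with, via the nodes settings API (hostname is an assumption):

curl -s 'http://localhost:9200/_nodes/settings?pretty' | grep -i marvel
# drop the grep to see the full per-node settings blocks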

Finally, if you go to the '/marvel/api/v1/clusters' URL path after your Kibana host name in the browser (e.g. http://localhost:5601/marvel/api/v1/clusters), and format the JSON response into a readable view, does that reveal any information that you can share, or might signal to you what the issue is?
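The same check can also be done with curl; python here is only used to pretty-print the JSON:

curl -s 'http://localhost:5601/marvel/api/v1/clusters' | python -m json.tool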

I will look into the info and get back Monday. I do believe I uploaded a topbeats template around that time, so maybe that is the issue.

If I did override the Marvel template, what should I do to get it back in order?

I do have one custom index template and uploaded the template for topbeats; other than that I haven't touched the index templates.
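One way to look for a template collision is to list every template and compare each one's "template" (index pattern) value against .marvel-es-*; the exact topbeats template name depends on how it was loaded, so this is just a sketch:

curl -s 'http://localhost:9200/_template?pretty'
# check each entry's "template" field for a pattern that also matches
# .marvel-es-*, or mappings/settings that would override the Marvel ones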

When I go to "http://localhost:5601/marvel/api/v1/clusters" I see the following:

When I look at my index templates I see one for ".marvel-es"

{
  ".marvel-es" : {
    "order" : 0,
    "template" : ".marvel-es-*",
    "settings" : {
      "index" : {
        "codec" : "best_compression",
        "mapper" : {
          "dynamic" : "false"
        },
        "number_of_shards" : "1",
        "number_of_replicas" : "1",
        "marvel_version" : "2.0.0"
      }
    },

When I look at my indices I see:

How can I tell if it is an index issue from here? I did absolutely upload a new index pattern for topbeats around the time all this started to happen.

Are you seeing any errors in the Kibana logs? If there are errors, it should be printing a stack trace; can you provide that? My best guess is there is some kind of error happening on the server.

Where does Kibana log? I looked at the conf file and it says stdout. There is nothing in /var/log for Kibana.

Yeah... it should be logging to stdout by default. You can set the logging.dest to a file path if you want to log to a specific file.
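A minimal sketch of that change, assuming a default package layout and using an example log path:

# in kibana.yml:
#   logging.dest: /var/log/kibana/kibana.log
# then restart Kibana and follow the file:
tail -f /var/log/kibana/kibana.log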

When I output the logs to a specific place I see this

root@OP-01-VM-723:~# cat /var/log/kibana/kibana.log
{"type":"log","@timestamp":"2015-11-16T17:36:33+00:00","tags":["status","plugin:kibana","info"],"pid":6425,"name":"plugin:kibana","state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
{"type":"log","@timestamp":"2015-11-16T17:36:33+00:00","tags":["status","plugin:elasticsearch","info"],"pid":6425,"name":"plugin:elasticsearch","state":"yellow","message":"Status changed from uninitialized to yellow - Waiting for Elasticsearch","prevState":"uninitialized","prevMsg":"uninitialized"}
{"type":"log","@timestamp":"2015-11-16T17:36:33+00:00","tags":["status","plugin:marvel","info"],"pid":6425,"name":"plugin:marvel","state":"yellow","message":"Status changed from uninitialized to yellow - Waiting for Elasticsearch","prevState":"uninitialized","prevMsg":"uninitialized"}
{"type":"log","@timestamp":"2015-11-16T17:36:33+00:00","tags":["status","plugin:kbn_vislib_vis_types","info"],"pid":6425,"name":"plugin:kbn_vislib_vis_types","state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
{"type":"log","@timestamp":"2015-11-16T17:36:33+00:00","tags":["status","plugin:markdown_vis","info"],"pid":6425,"name":"plugin:markdown_vis","state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
{"type":"log","@timestamp":"2015-11-16T17:36:34+00:00","tags":["status","plugin:metric_vis","info"],"pid":6425,"name":"plugin:metric_vis","state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
{"type":"log","@timestamp":"2015-11-16T17:36:34+00:00","tags":["status","plugin:spyModes","info"],"pid":6425,"name":"plugin:spyModes","state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
{"type":"log","@timestamp":"2015-11-16T17:36:34+00:00","tags":["status","plugin:statusPage","info"],"pid":6425,"name":"plugin:statusPage","state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
{"type":"log","@timestamp":"2015-11-16T17:36:34+00:00","tags":["status","plugin:table_vis","info"],"pid":6425,"name":"plugin:table_vis","state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
{"type":"log","@timestamp":"2015-11-16T17:36:34+00:00","tags":["listening","info"],"pid":6425,"message":"Server running at http://10.1.72.3:5601"}
{"type":"log","@timestamp":"2015-11-16T17:36:34+00:00","tags":["status","plugin:elasticsearch","info"],"pid":6425,"name":"plugin:elasticsearch","state":"green","message":"Status changed from yellow to green - Kibana index ready","prevState":"yellow","prevMsg":"Waiting for Elasticsearch"}
{"type":"log","@timestamp":"2015-11-16T17:36:34+00:00","tags":["status","plugin:marvel","info"],"pid":6425,"name":"plugin:marvel","state":"green","message":"Status changed from yellow to green - Marvel index ready","prevState":"yellow","prevMsg":"Waiting for Elasticsearch"}

and when I access the marvel dashboard I see:

{"type":"response","@timestamp":"2015-11-16T17:41:01+00:00","tags":[],"pid":6425,"method":"get","statusCode":200,"req":{"url":"/bundles/commons.bundle.js","method":"get","headers":{"host":"kibana.csp:5601","connection":"keep-alive","accept":"/","user-agent":"Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/46.0.2490.86 Safari/537.36","referer":"http://kibana.csp:5601/app/marvel","accept-encoding":"gzip, deflate, sdch","accept-language":"en-US,en;q=0.8","cookie":"_ga=GA1.2.654757858.1446664786","if-modified-since":"Fri, 13 Nov 2015 18:54:34 GMT"},"remoteAddress":"192.168.10.113","userAgent":"192.168.10.113","referer":"http://kibana.csp:5601/app/marvel"},"res":{"statusCode":200,"responseTime":1123,"contentLength":9},"message":"GET /bundles/commons.bundle.js 200 1123ms - 9.0B"}
{"type":"response","@timestamp":"2015-11-16T17:41:02+00:00","tags":[],"pid":6425,"method":"get","statusCode":200,"req":{"url":"/bundles/marvel.bundle.js","method":"get","headers":{"host":"kibana.csp:5601","connection":"keep-alive","accept":"/","user-agent":"Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/46.0.2490.86 Safari/537.36","referer":"http://kibana.csp:5601/app/marvel","accept-encoding":"gzip, deflate, sdch","accept-language":"en-US,en;q=0.8","cookie":"_ga=GA1.2.654757858.1446664786","if-modified-since":"Fri, 13 Nov 2015 18:54:34 GMT"},"remoteAddress":"192.168.10.113","userAgent":"192.168.10.113","referer":"http://kibana.csp:5601/app/marvel"},"res":{"statusCode":200,"responseTime":1014,"contentLength":9},"message":"GET /bundles/marvel.bundle.js 200 1014ms - 9.0B"}
{"type":"response","@timestamp":"2015-11-16T17:41:04+00:00","tags":[],"pid":6425,"method":"post","statusCode":200,"req":{"url":"/elasticsearch/_mget?timeout=0&ignore_unavailable=true&preference=1447695669490","method":"post","headers":{"host":"kibana.csp:5601","connection":"keep-alive","content-length":"62","accept":"application/json, text/plain, /","origin":"http://kibana.csp:5601","user-agent":"Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/46.0.2490.86 Safari/537.36","content-type":"application/json;charset=UTF-8","referer":"http://kibana.csp:5601/app/marvel","accept-encoding":"gzip, deflate","accept-language":"en-US,en;q=0.8","cookie":"_ga=GA1.2.654757858.1446664786"},"remoteAddress":"192.168.10.113","userAgent":"192.168.10.113","referer":"http://kibana.csp:5601/app/marvel"},"res":{"statusCode":200,"responseTime":35,"contentLength":9},"message":"POST /elasticsearch/_mget?timeout=0&ignore_unavailable=true&preference=1447695669490 200 35ms - 9.0B"}
{"type":"response","@timestamp":"2015-11-16T17:41:04+00:00","tags":[],"pid":6425,"method":"get","statusCode":200,"req":{"url":"/bundles/node_modules/font-awesome/fonts/fontawesome-webfont.woff2","method":"get","headers":{"host":"kibana.csp:5601","connection":"keep-alive","cache-control":"max-age=0","origin":"http://kibana.csp:5601","if-none-match":""574ea2698c03ae9477db2ea3baf460ee32f1a7ea"","if-modified-since":"Fri, 13 Nov 2015 18:54:34 GMT","user-agent":"Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/46.0.2490.86 Safari/537.36","accept":"/","referer":"http://kibana.csp:5601/app/marvel","accept-encoding":"gzip, deflate, sdch","accept-language":"en-US,en;q=0.8","cookie":"_ga=GA1.2.654757858.1446664786"},"remoteAddress":"192.168.10.113","userAgent":"192.168.10.113","referer":"http://kibana.csp:5601/app/marvel"},"res":{"statusCode":200,"responseTime":81,"contentLength":9},"message":"GET /bundles/node_modules/font-awesome/fonts/fontawesome-webfont.woff2 200 81ms - 9.0B"}
{"type":"response","@timestamp":"2015-11-16T17:41:04+00:00","tags":[],"pid":6425,"method":"get","statusCode":200,"req":{"url":"/marvel/api/v1/clusters","method":"get","headers":{"host":"kibana.csp:5601","connection":"keep-alive","accept":"application/json, text/plain, /","user-agent":"Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/46.0.2490.86 Safari/537.36","referer":"http://kibana.csp:5601/app/marvel","accept-encoding":"gzip, deflate, sdch","accept-language":"en-US,en;q=0.8","cookie":"_ga=GA1.2.654757858.1446664786"},"remoteAddress":"192.168.10.113","userAgent":"192.168.10.113","referer":"http://kibana.csp:5601/app/marvel"},"res":{"statusCode":200,"responseTime":232,"contentLength":9},"message":"GET /marvel/api/v1/clusters 200 232ms - 9.0B"}

I think your indices look fine, and I don't see any red flags in your log data.

Looks like your requests are in UTC time. You might want to double-check that the indexed Marvel data has timestamps that fall within the time ranges the logged requests are asking for. If the requests are asking for "future data," then you could get the problem you are describing.
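One hedged way to check is to compare the newest Marvel document's timestamp against the server's UTC clock; the "timestamp" field name below is what the Marvel 2.x indices use, but verify it against your own mapping:

date -u
curl -s 'http://localhost:9200/.marvel-es-*/_search?pretty' -d '
{
  "size": 1,
  "sort": [ { "timestamp": { "order": "desc" } } ]
}'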

Another thing to check: did you happen to make any changes to the client's system clock around when the problem started?

First off....THANK YOU to those who contributed to this thread. I appreciate your time and suggestions and help.

My issue was much more fundamental than what we were looking at: my cluster was stuck in red.

A simple curl to /_cluster/health?pretty showed a red status. Then /_cat/indices, searching for the RED indices and mitigating those, did the trick, and now I see Marvel again.
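Spelled out, those checks look roughly like this (hostname assumed):

curl -s 'http://localhost:9200/_cluster/health?pretty'
curl -s 'http://localhost:9200/_cat/indices?v' | grep red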

Thanks again.

Hello, I am having the exact same problem, but my separate monitoring cluster is green and the HTTP indexers are forwarding data to it. I have Kibana displaying the index data, but Marvel doesn't. Both my monitoring cluster and Kibana hosts use UTC time; I even go back a day, but I still see nothing rendered by Marvel.

Here is what I see in my Kibana log

{"type":"response","@timestamp":"2016-11-10T23:07:20+00:00","tags":[],"pid":17733,"method":"get","statusCode":304,"req":{"url":"/bundles/src/ui/public/images/elk.ico","method":"get","headers":{"host":"herd-kibana1.blah.com:5602","user-agent":"Mozilla/5.0 (Macintosh; Intel Mac OS X 10.11; rv:49.0) Gecko/20100101 Firefox/49.0","accept":"text/html,application/xhtml+xml,application/xml;q=0.9,/;q=0.8","accept-language":"en-US,en;q=0.5","accept-encoding":"gzip, deflate, br","connection":"keep-alive","if-modified-since":"Thu, 10 Nov 2016 22:47:47 GMT","if-none-match":""6e2a38f39043263e2b4385f796027d2318c3991c-gzip""},"remoteAddress":"","userAgent":""},"res":{"statusCode":304,"responseTime":4,"contentLength":9},"message":"GET /bundles/src/ui/public/images/elk.ico 304 4ms - 9.0B"}
{"type":"response","@timestamp":"2016-11-10T23:07:20+00:00","tags":[],"pid":17733,"method":"post","statusCode":200,"req":{"url":"/elasticsearch/_mget?timeout=0&ignore_unavailable=true&preference=1478819239788","method":"post","headers":{"host":"herd-kibana1.blah.com:5602","user-agent":"Mozilla/5.0 (Macintosh; Intel Mac OS X 10.11; rv:49.0) Gecko/20100101 Firefox/49.0","accept":"application/json, text/plain, /","accept-language":"en-US,en;q=0.5","accept-encoding":"gzip, deflate, br","content-type":"application/json;charset=utf-8","kbn-version":"4.5.4","referer":"https://herd-kibana1.blah:5602/app/marvel","content-length":"70","dnt":"1","connection":"keep-alive"},"remoteAddress":"","userAgent":"","referer":"https://herd-kibana1.blah:5602/app/marvel"},"res":{"statusCode":200,"responseTime":13,"contentLength":9},"message":"POST /elasticsearch/_mget?timeout=0&ignore_unavailable=true&preference=1478819239788 200 13ms - 9.0B"}

Here is the data in the cluster

curl -XGET http://herd-monitor1.blah:9201/_cat/indices?v
health status index pri rep docs.count docs.deleted store.size pri.store.size
green open .marvel-es-1-2016.11.11 1 1 79 0 268.3kb 125.5kb
green open .kibana-monitor 1 1 3 0 64.1kb 32kb
green open .marvel-es-1-2016.11.10 1 1 7487 0 1.9mb 1001.7kb

I am not sure if people have run into this problem, but here it goes.

All my nodes in the cluster had the marvel-agent plugin sending data to the monitoring cluster, but my master nodes were missing this plugin. The minute I installed it and configured them to send data to the remote cluster, Marvel started showing data.
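For anyone hitting the same thing, a sketch of what that fix involves on each master node; the plugin install commands are the standard ES 2.x ones, the paths assume a .deb/.rpm install, and the exporter host is a placeholder based on the monitoring cluster above:

/usr/share/elasticsearch/bin/plugin install license
/usr/share/elasticsearch/bin/plugin install marvel-agent
# in elasticsearch.yml on the master node, point the agent at the
# monitoring cluster:
#   marvel.agent.exporters:
#     my_monitor:
#       type: http
#       host: ["http://herd-monitor1.blah:9201"]
# then restart the node so the agent picks up the config
sudo service elasticsearch restart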
