Marvel not showing any data

monitoring

(Noémie) #1

Hello,

I have a problem with Marvel. I installed it on the same machine as Elasticsearch, Logstash and Kibana. I have a single instance of Elasticsearch running, and the node is both master and data. But I get this message when I access the Marvel panel: "Waiting for Marvel Data. It appears that we have not received any data for this cluster. Marvel data is sent every 10 seconds by the agent plugin; once the data appears this page will automatically redirect to the overview page." Reading the previous posts, I understood that Marvel access was granted once the master node had shown up, and I don't understand what's wrong with my setup.

Thanks in advance for your help,
Noemie


#2

You need to install both the license and marvel-agent plugins, and restart Elasticsearch after installation. This worked for me.
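For reference, on Elasticsearch 2.x the installation looks roughly like this (run from the Elasticsearch and Kibana home directories; paths are illustrative and may differ on your system):

    # on every Elasticsearch node, then restart the node
    bin/plugin install license
    bin/plugin install marvel-agent

    # on the Kibana side, to get the Marvel UI
    bin/kibana plugin --install elasticsearch/marvel/latest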


(Noémie) #3

Hello,
thank you for your answer. I've already tried that (and tried it again just to be sure), but I still have nothing... I'm running version 2.1.0 of marvel, license, marvel-agent and elasticsearch. Maybe this helps...


(Steve Kearns) #4

Hi Noemie,

Can you verify that you have installed license and marvel-agent on all nodes in your cluster and that they have all been restarted? If you are running on a single machine, can you please make sure you don't have a second instance of Elasticsearch running (e.g. after stopping ES, make a request to localhost:9200/ and localhost:9201/ to make sure there are no other instances running)?
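In curl terms, those checks look something like this (assuming the default HTTP ports; adjust if you changed them):

    # while your single node is running: should return the node banner
    curl localhost:9200/

    # should fail with "connection refused" if no second instance is bound to 9201
    curl localhost:9201/

    # after stopping ES, both requests should fail the same way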

Thanks,
Steve


(Noémie) #5

Hi,

I don't have another instance running, and there is just a single node in my cluster. I stopped everything and restarted it. When I curl http://localhost:9200/_cluster/health?pretty I get this answer:

    {
      "cluster_name" : "elasticsearch",
      "status" : "yellow",
      "timed_out" : false,
      "number_of_nodes" : 1,
      "number_of_data_nodes" : 1,
      "active_primary_shards" : 6,
      "active_shards" : 6,
      "relocating_shards" : 0,
      "initializing_shards" : 0,
      "unassigned_shards" : 6,
      "delayed_unassigned_shards" : 0,
      "number_of_pending_tasks" : 0,
      "number_of_in_flight_fetch" : 0,
      "task_max_waiting_in_queue_millis" : 0,
      "active_shards_percent_as_number" : 50.0
    }

I really don't know what to do...


(Tanguy) #6

Hi Noemie,

Can you please share with us the result of GET /_cat/indices?v? You should see the Marvel indices .marvel-es-data and .marvel-es-YYYY.MM.dd. If you don't see them, there might be an issue in your configuration.

Can you also check that GET /.marvel-es-data/cluster_info/_count returns 1 document? This document is indexed by the master node and contains data necessary for the user interface to work.

Finally, you should check your logs. Everything that contains [marvel.agent.*] may be interesting. Feel free to share these logs with us if you want (please remove any sensitive information first).
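For anyone following along, the two requests above can be issued with curl like this (assuming the default localhost:9200 endpoint):

    # list all indices; the .marvel-es-* indices should appear here
    curl 'localhost:9200/_cat/indices?v'

    # should report "count" : 1 once the master node has indexed its cluster_info document
    curl 'localhost:9200/.marvel-es-data/cluster_info/_count?pretty'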

-- Tanguy


(Noémie) #7

Here is the result of GET /_cat/indices?v:

health status index pri rep docs.count docs.deleted store.size pri.store.size
yellow open .kibana 1 1 3 0 25.9kb 25.9kb
yellow open wt_access-2016.03.21 5 1 104063 0 105.2mb 105.2mb

and the following error message when trying the GET /.marvel-es-data/cluster_info/_count :

{"error":{"root_cause":[{"type":"index_not_found_exception","reason":"no such index","resource.type":"index_or_alias","resource.id":".marvel-es-data","index":".marvel-es-data"}],"type":"index_not_found_exception","reason":"no such index","resource.type":"index_or_alias","resource.id":".marvel-es-data","index":".marvel-es-data"},"status":404}

which makes sense, since the index was not found by the first request.

I tried the find command though, and .marvel-es-data is found in ./elasticsearch-2.1.0/data/identityStats/nodes/0/indices/ and ./elasticsearch-2.1.0/data/elasticsearch/nodes/0/indices/.
There is nothing in the Kibana and Elasticsearch log files matching [marvel.agent.*]. It seems like it never starts... I've got this though:

[2016-03-22 11:32:49,710][INFO ][rest.suppressed ] /.marvel-es-data/cluster_info/_search Params: {index=.marvel-es-data, type=cluster_info}
[.marvel-es-data] IndexNotFoundException[no such index]

In the meantime, I uninstalled license, marvel-agent and marvel again, but it didn't change a thing...


(Tanguy) #8

It looks like the plugin did not start.

Can you enable debug logging for Marvel? You need to edit the logging.yml file and add marvel: DEBUG like this, and then restart the node:

logger:
  # log action execution errors for easier debugging
  action: DEBUG
  ...  
  marvel: DEBUG

We fixed a few bugs since 2.1.0, so if possible I also encourage you to move to the latest version (2.2.1).


(Noémie) #9

So, here is what I got once debug logging for Marvel is enabled:

[DEBUG][marvel.agent.collector.cluster] [stats] starting collector [cluster-info-collector]
[DEBUG][marvel.agent.collector.indices] [stats] starting collector [indices-stats-collector]
[DEBUG][marvel.agent.collector.cluster] [stats] starting collector [cluster-stats-collector]
[DEBUG][marvel.agent.collector.shards] [stats] starting collector [shards-collector]
[DEBUG][marvel.agent.collector.node] [stats] starting collector [node-stats-collector]
[DEBUG][marvel.agent.collector.indices] [stats] starting collector [index-recovery-collector]
[DEBUG][marvel.agent.collector.cluster] [stats] starting collector [cluster-state-collector]
[DEBUG][marvel.agent.collector.indices] [stats] starting collector [index-stats-collector]
[DEBUG][marvel.agent.exporter.local] local exporter [default_local] - waiting until gateway has recovered from disk
[DEBUG][marvel.agent.exporter.local] local exporter [default_local] - currently installed marvel template version [2.1.0] is up-to-date
[DEBUG][marvel.agent.exporter.local] local exporter [default_local] - started!
[DEBUG][marvel.agent.exporter.local] local exporter [default_local] - currently installed marvel template version [2.1.0] is up-to-date

and regarding Kibana, I've got a few "failed to delete temp file elasticsearch-2.1.0/data/identityStats/nodes/0/indices/.kibana/0/translog/translog-4479644442346135504.tlog" messages (I don't know if it is relevant here).

thank you for your help,
Noémie

