This is the same config on all the ES nodes I want to collect data from. After installing the license and agent plugins on all the ES nodes, I installed the plugin into Kibana. After some time I realized that I was collecting data from my monitoring cluster, not my ES nodes. I opened elasticsearch.yml on the monitoring cluster and added:
marvel.enabled: false
Then I deleted all indices matching .marvel-*, which of course also deleted the .marvel-es-data index. After I did this I went to Kibana, clicked on the Marvel plugin, and saw it say "Waiting for Marvel Data"; it never moves past this. I am collecting Marvel data from my ES nodes (I see the indices being created and growing) but nothing shows up in the plugin. I assume the problem is that I deleted the .marvel-es-data index, but I'm not sure.
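For reference, the cleanup amounts to something like this (host is illustrative; note the wildcard also matches .marvel-es-data):
# Run against the monitoring cluster; deletes every Marvel index, including .marvel-es-data
curl -XDELETE 'http://localhost:9200/.marvel-*'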
Everything else seems to be working. On my monitoring cluster, in the /var/lib/elasticsearch directory, I see folders for both my monitoring cluster and my ES cluster. I know it sees the data, but it's not displaying.
How do I get that index back? I've tried restarting nodes and uninstalling the plugin on the monitoring cluster, the Kibana node, and the ES nodes, with no luck.
Just to be sure, this setting is on your monitoring cluster, right? Not the one you want to monitor.
Also, you don't need to install the marvel-agent plugin on your monitoring cluster, only on the cluster you want to monitor.
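For reference, on a 2.x monitored cluster the agent side is installed per node with the standard plugin script (path assumes an RPM/DEB install):
# Run on each node of the cluster you want to monitor, then restart the node
/usr/share/elasticsearch/bin/plugin install license
/usr/share/elasticsearch/bin/plugin install marvel-agent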
What's the output of GET /_cat/shards?v on your monitoring cluster?
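Equivalently, from a shell against the monitoring cluster (host is illustrative):
curl -s 'http://localhost:9200/_cat/shards?v'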
Does that mean your monitored cluster and your monitoring cluster are installed on the same laptop? How did you install Elasticsearch? Can you check that they do not share the same cluster name or the same configuration files?
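One quick check: the root endpoint of each cluster reports its cluster_name (host is illustrative):
# Compare the "cluster_name" field across the two clusters
curl -s 'http://localhost:9200/?pretty'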
Yes, the setting is on my monitoring cluster, not the monitored one.
Thank you for the clarification on the needed plugin; I wasn't sure whether it was needed, since the docs say to put that setting on your monitoring cluster.
The results of _cat/shards:
index                 shard prirep state      docs  store ip         node
.kibana               0     r      STARTED       2 15.5kb 10.1.55.22 HEALTH_NODE_2
.kibana               0     p      STARTED       2 15.4kb 10.1.55.21 HEALTH_NODE_1
.marvel-es-2015.11.24 0     r      STARTED    6847  1.8mb 10.1.55.22 HEALTH_NODE_2
.marvel-es-2015.11.24 0     p      STARTED    6847  3.7mb 10.1.55.21 HEALTH_NODE_1
Lastly, no, my monitoring and monitored clusters are not on the same machine. They are all separate VMs. I installed ES from the RPM package, and they have very different cluster names.
I stopped all the ES nodes in my monitored cluster (to do some other maintenance), and when I restarted them Marvel was showing me data, but when I came back today I am getting the above error again.
EDIT***
Further research shows the following on my master nodes in my monitored cluster:
[2015-11-25 10:19:11,695][ERROR][marvel.agent.exporter.http] failed sending data to [http://10.1.55.22:9200/_bulk]: IOException[Error writing to server]
[2015-11-25 10:19:11,695][ERROR][marvel.agent ] [MASTER_NODE_1] background thread had an uncaught exception
java.lang.OutOfMemoryError: Java heap space
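For anyone hitting the same OutOfMemoryError, heap pressure per node can be checked via the nodes stats API (host is illustrative):
# heap_used_percent consistently in the 90s suggests the heap is too small for the workload
curl -s 'http://localhost:9200/_nodes/stats/jvm?pretty' | grep heap_used_percent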
Currently my master nodes have 32 GB of RAM, with the heap set to 24 GB in /etc/sysconfig/elasticsearch:
ES_HEAP_SIZE=24g
I'm working on getting all nodes upgraded to 64 GB of RAM, with ES_HEAP_SIZE set to 30g.
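As a sketch, the planned entry would look like this (staying at or below roughly 30g also keeps compressed object pointers enabled):
# /etc/sysconfig/elasticsearch on a 64 GB node
ES_HEAP_SIZE=30g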
Is it possible to upgrade to ES / Marvel 2.1 and Kibana 4.3? We made a number of improvements in the Marvel agent that fix some bugs that could lead to high memory usage.
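A minimal sketch of the per-node rolling-upgrade steps on an RPM install (hosts and paths are illustrative; the license plugin is reinstalled the same way as marvel-agent):
# Disable shard allocation so shards don't shuffle while the node is down
curl -XPUT 'http://localhost:9200/_cluster/settings' -d '{"transient": {"cluster.routing.allocation.enable": "none"}}'
sudo service elasticsearch stop
sudo yum update elasticsearch
# Plugins must match the new ES version, so remove and reinstall them
/usr/share/elasticsearch/bin/plugin remove marvel-agent
/usr/share/elasticsearch/bin/plugin install marvel-agent
sudo service elasticsearch start
# Re-enable allocation once the node has rejoined the cluster
curl -XPUT 'http://localhost:9200/_cluster/settings' -d '{"transient": {"cluster.routing.allocation.enable": "all"}}'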
I performed a rolling upgrade of my nodes from ES 2.0 to 2.1, Marvel 2.0 to 2.1, and Kibana 4.2 to 4.3. So far Marvel is working as expected.
I like the shard allocation section you added; it's very nice to see what is going on, and the history button is nice as well. Thanks for the help, and nice work on the product.