Hi,
first of all, congrats on the Marvel release. A step in the right direction.
I have a few questions. If I understand correctly, as of now Marvel has to
run in the cluster it collects metrics from (as a plugin). Let's call this
cluster A. For production environments it is recommended to have Marvel
store the metrics in a different cluster; let's call it B. Does this mean
that in order to browse the collected metrics (which are stored in B) the
client needs access to at least one node of A? (I assume this is necessary
to download the Kibana-based web app.) Or, to put it another way: if
cluster A is in a critical state, does the client have to find a responsive
node of A that can still serve the web app before investigating the
historical data stored in B? If so, is there any plan to ship just the web
app as a standalone package (zip, war, ...) so that the client does not
have to rely on cluster A to serve it? Am I misunderstanding the concept?
Not sure I follow. The Marvel plugin has to be installed on all nodes in
cluster A and on all nodes in cluster B (with the agent disabled). To view
the data you would open http://node_from_cluster_B:9200/_plugin/marvel, so
if something goes wrong with A, you still have access to the data.
Makes sense?
Cheers,
Boaz
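As a sketch of the two-cluster setup described above: the setting names below are from the Marvel 1.x era and should be treated as assumptions, so check the documentation for your Marvel version before relying on them.

```yaml
# elasticsearch.yml on every node of cluster A (the monitored cluster):
# the local agent collects metrics and ships them to cluster B.
marvel.agent.exporter.es.hosts: ["node_from_cluster_B:9200"]

# elasticsearch.yml on every node of cluster B (the monitoring cluster):
# the plugin is installed only to receive data and serve the web app,
# so its own agent is switched off.
marvel.agent.enabled: false
```

With this in place, the web UI would be reachable at http://node_from_cluster_B:9200/_plugin/marvel even while cluster A is unhealthy.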
On Thursday, January 30, 2014 8:27:32 AM UTC+1, Lukáš Vlček wrote:
Ah, so the plugin needs to be installed on both clusters; I missed that
part. Thanks for the clarification.
BTW, is the plugin required on both sides because the plugin on cluster B
does more than just serve the web app? Say I want to monitor cluster A and
store the data in cluster B, so I install and configure the plugin on
cluster A but do not install it on B; instead I just keep a copy of the web
app locally in case I want to access the data in cluster B. Hope my
question makes sense...
My understanding is that the jar file that is part of the plugin is only
needed on cluster A, while the _site content is only needed on cluster B.
Without the jar file on cluster B, the agent does not need to be disabled,
since there is no agent code to disable. Cluster B should still be able to
ingest the metrics, assuming Marvel is indeed using the standard API.
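A rough sketch of the split described above; the file names are illustrative assumptions, not exact Marvel artifacts:

```
plugins/marvel/
├── marvel-x.y.z.jar   # agent code: collects and ships metrics (the part cluster A needs)
└── _site/             # Kibana-based web app, served at /_plugin/marvel (the part cluster B needs)
```

On this reading, cluster B would only serve static files from _site and index whatever cluster A's agent sends it over the normal HTTP API.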