Hey Everyone,
We're trying to move from the legacy exporters over to Elastic Agent. Our data pipeline is Elastic Agent > Logstash > Kafka > Elasticsearch. I have a couple of questions and would appreciate any and all knowledge anyone is willing to share.
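For context, the Logstash stage in the middle just ships agent output on to Kafka, roughly like this (hosts, port, and topic are placeholders, not our real values):

```
# Rough shape of our Logstash pipeline: Elastic Agent in, Kafka out.
# All hosts/ports/topics below are placeholders.
input {
  elastic_agent {
    port => 5044
  }
}
output {
  kafka {
    bootstrap_servers => "kafka-1:9092,kafka-2:9092"
    topic_id => "kibana.stack_monitoring.stats-prod"
    codec => json
  }
}
```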
- Will monitoring work if the underlying data stream names are changed?
- If it is a big cluster with dedicated master, hot, warm, cold, and coordinating nodes, should I use the `scope` option for monitoring?
- If the `scope` option is selected, how do you make the agent collecting the metrics HA (highly available)?
To briefly explain the first question: we are using Kafka's Elasticsearch sink connector, which requires a type and a dataset name to be specified. As a result, the data stream name in Elasticsearch ends up as, for example, `metrics-something-kibana.stack_monitoring.stats-prod`.
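For reference, this is roughly the part of our connector config that drives the naming; the connector builds the data stream name as `{type}-{dataset}-{topic}`. The values below are made up, and the exact property keys may vary by connector version:

```
# Sketch of the Kafka Connect Elasticsearch sink settings that produce
# the data stream name; all names and URLs below are placeholders.
name=es-sink-stack-monitoring
connector.class=io.confluent.connect.elasticsearch.ElasticsearchSinkConnector
topics=kibana.stack_monitoring.stats-prod
connection.url=https://es-ingest-lb.internal:9200
data.stream.type=metrics
data.stream.dataset=something
```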
Regarding the second and third questions: what I concluded from the documentation is that you should use the `scope` option and add a load-balancer URL that balances across nodes which are not master-eligible (in our case those would be the coordinating-only nodes). But how do you make it so that, if the Elastic Agent collecting the metrics goes offline, the metrics still keep coming?
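If I'm reading the docs right, the standalone agent policy would contain something along these lines. The input structure, metricset names, and the LB URL are my guesses, so treat the keys as approximate:

```yaml
# Rough sketch of a standalone Elastic Agent input for cluster-scoped
# stack monitoring; keys and values are approximate, URL is a placeholder.
inputs:
  - type: elasticsearch/metrics
    id: elasticsearch-monitoring
    data_stream:
      namespace: prod
    streams:
      - metricsets:
          - stack_monitoring.cluster_stats
          - stack_monitoring.node_stats
        hosts:
          - "https://es-coordinating-lb.internal:9200"  # LB over coordinating-only nodes
        scope: cluster  # one agent monitors the whole cluster through the LB
        period: 10s
```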
If you were to have two agents, both collecting the same data from the same cluster, would they duplicate the metric data, or does every pipeline have a deterministic way to generate the doc `_id` field?
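If dedup turns out to be on us, I'm assuming something like Logstash's fingerprint filter could make the `_id` deterministic, so that both agents' copies collapse into one document. A minimal sketch, with field names that are pure guesses; it would only work if both agents emit identical values for the fingerprinted fields (timestamps included), which is really part of my question:

```
# Minimal sketch: derive a deterministic doc _id from fields that should
# uniquely identify a metric sample, then pass it to Kafka as the record
# key. Field names, brokers, and topic are assumptions/placeholders.
filter {
  fingerprint {
    source => ["[@timestamp]", "[data_stream][dataset]", "[elasticsearch][node][id]"]
    target => "[@metadata][doc_id]"
    method => "SHA256"
    concatenate_sources => true
  }
}
output {
  kafka {
    bootstrap_servers => "kafka-1:9092,kafka-2:9092"
    topic_id => "kibana.stack_monitoring.stats-prod"
    message_key => "%{[@metadata][doc_id]}"  # the sink connector can use the key as _id when key.ignore=false, I believe
    codec => json
  }
}
```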
Sorry for the long post; here's a cookie for those of you who made it to the end, and thanks in advance for any help!
Cheers,
Luka