Need some help with implementing Metricbeat in a production environment. I have tested it locally (ELK stack on my laptop) and it works well for collecting system-level and application-level metrics. Now we're planning to implement this setup on our production infra (~200 heavy servers) to collect system metrics and application metrics (including PostgreSQL, Cassandra, HAProxy, Tomcat, etc.).
We have a 5-node ES cluster in our RELK stack. Currently we send system logs to Logstash and then to Redis as a buffering platform. The ES cluster is in a different network, so we are not planning to send Metricbeat output directly to the ES cluster.
Please suggest the best way / best practices to achieve this. These are the options I found:

1. Set Metricbeat output to ES (not preferred, as it's in a different subnet).
2. Set Metricbeat output to Logstash.
3. Set Metricbeat output to Redis (see the sketch after this list).
4. Is there any way for a Metricbeat instance to collect data from the Metricbeat agents on the other servers, like some input mechanism?
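For context, this is roughly the Redis output I have in mind for option 3; the host and key are just placeholders for our environment, not a final config:

```
# metricbeat.yml -- sketch of option 3: push events to the existing Redis buffer
# (hostname, key and db are placeholders)
output.redis:
  hosts: ["redis-buffer.internal:6379"]
  key: "metricbeat"    # Redis list the events are pushed to
  db: 0
  timeout: 5
```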
I also tested Metricbeat's internal queue mechanism (queue.spool) with ES as the output. Will this work with all the other output types (Logstash, Redis, etc.)?
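For reference, this is roughly the spool queue config I tested locally; the path and sizes are just the values I used for the test, not tuned for production:

```
# metricbeat.yml -- spool queue settings used in the local test
# (file path and sizes are illustrative only)
queue.spool:
  file:
    path: "${path.data}/spool.dat"
    size: 512MiB
    page_size: 16KiB
  write:
    buffer_size: 10MiB
    flush.timeout: 5s
    flush.events: 1024
```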
If possible I'd suggest setting ES as the output. If the only problem with this option is the network separation, there is a solution: you can add coordinating-only nodes to the networks where Metricbeat is running and make these nodes the only ones that have access to the network where your Elastic Stack is running. This way you don't need to add extra components, and it is also easier to set up Metricbeat when it sends events directly to Elasticsearch.
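As a rough sketch, a coordinating-only node is just a node with the other roles disabled (exact settings depend on your Elasticsearch version; this is the 6.x / early 7.x style), and Metricbeat then points at those nodes. Hostnames below are placeholders:

```
# elasticsearch.yml -- coordinating-only node placed in the Metricbeat network
node.master: false
node.data: false
node.ingest: false

# metricbeat.yml on the agent servers -- send directly to the coordinating-only nodes
output.elasticsearch:
  hosts: ["coord-node-1:9200", "coord-node-2:9200"]
```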
Logstash and Redis (or Kafka) can also be valid options, but I'd only use them if you have other requirements or a more complex architecture.
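If you do go the Logstash route, the Metricbeat side is just the Logstash output, something like this (hosts are placeholders; loadbalance spreads events across the Logstash nodes):

```
# metricbeat.yml -- sketch of the Logstash output option
output.logstash:
  hosts: ["logstash-1:5044", "logstash-2:5044"]
  loadbalance: true
```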