Best way to ship Metricbeat data to multiple servers?

Hi,

We have two Kibana instances, which work independently of each other. One is for customer access with limited data, and one is for internal use with more detailed data.

We use Metricbeat to collect CPU, memory, and similar metrics and visualize them in Kibana.
Now I would like to ship the metrics from the customer's ELK server to the customer's Logstash AND to our internal Logstash.

What options do I have? It needs to be fail-safe: one system must not be affected if the other is down.

The following options come to mind:

  • clone the init.d script and change the service name, PID file, .yml file, etc.

Are there better options?

Thanks, Andreas

I'm glad you're considering decoupling the two target systems from day one!

Some options that come to mind:

  1. run multiple Metricbeat instances (see the sketch after this list)
  2. use Metricbeat with the file output and run one Filebeat instance per target (writing to disk gets you some buffering)
  3. point Metricbeat at Kafka. The Kafka model with multiple (named) consumer groups gets you buffering (with support for replaying). Kafka implicitly gets you some decoupling between the different Logstash instances, as each consumer group subscribes to exactly the same data independently of the others. What's nice about using Kafka is that both Logstash instances will receive exactly the same data (no data loss if one instance is down).
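
For option 1, a minimal sketch of what the second instance's config could look like (hostnames and paths here are hypothetical; adjust to your environment):

```yaml
# metricbeat-internal.yml: hypothetical config for a second Metricbeat instance
metricbeat.modules:
  - module: system
    metricsets: ["cpu", "memory"]
    period: 10s

# separate data directory so the two instances don't clash
path.data: /var/lib/metricbeat-internal

output.logstash:
  hosts: ["internal-logstash.example.com:5044"]
```

You'd start it with its own config file, e.g. `metricbeat -e -c /etc/metricbeat/metricbeat-internal.yml`, next to the stock instance that ships to the customer's Logstash.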

Personally I'd use Kafka, as it supports replication, load balancing, a disk-based queue (buffering), and decoupled consumer groups.
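
If you go the Kafka route, the Metricbeat side could look roughly like this (broker addresses and the topic name are made up):

```yaml
# Hypothetical sketch: ship metrics to a Kafka topic instead of Logstash
output.kafka:
  hosts: ["kafka1.example.com:9092", "kafka2.example.com:9092"]
  topic: "metricbeat"
  required_acks: 1     # wait for the partition leader to acknowledge each batch
  compression: gzip
```

Each Logstash instance then consumes the topic with its own consumer group (the `group_id` option of the Logstash kafka input), so the customer pipeline and the internal pipeline read the same data without affecting each other.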

Thanks for the answer.

I am currently trying to run multiple Metricbeat instances. I've done this with Filebeat before on Windows, where I needed to give both instances different registry files.
But I found no Metricbeat buffer file / registry setting in the config.

Is there any buffering in Metricbeat? What happens if Logstash (the target of Metricbeat) is down while Metricbeat keeps running, or if Metricbeat is restarted during the Logstash outage? Is the metric data buffered somewhere? If so, where can I configure the buffer store?

Or do I really need to buffer externally (logfile / Kafka / whatever)?

libbeat buffers events in memory (configurable via queue_size). Once the queue is full, it holds off on querying for new stats until some space is available in the buffer again.
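
A minimal sketch (assuming a 5.x-era config; check the reference docs for your version):

```yaml
# Hypothetical sketch: enlarge the in-memory event queue
queue_size: 5000   # events held in memory while the output is unavailable
```

Note this buffer is memory-only: events still in the queue are lost if the Metricbeat process itself dies.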

We're considering some enhancements to the publisher pipeline in libbeat (including disk-based persistent queues); see e.g. this ticket: https://github.com/elastic/beats/issues/575

Without a disk-based queue, you can have Metricbeat write events either to a file (file output) or to Kafka for your buffering needs.
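
For the file-output route, a minimal sketch (paths and rotation settings are made up):

```yaml
# Hypothetical sketch: spool events to disk; one local Filebeat per target
# then tails these files and ships them to its Logstash.
output.file:
  path: "/var/spool/metricbeat"
  filename: metricbeat
  rotate_every_kb: 10240   # rotate after ~10 MB
  number_of_files: 7       # keep up to 7 rotated files
```

Each Filebeat keeps its own registry, so one target can be down or fall behind without affecting the other.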
