Best way to share data between two Elasticsearch nodes located in separate data centers

I have two Elasticsearch nodes that are in two different data centres with a large geographical separation. I would like to share or consolidate the data between these two nodes.

This is a lightweight deployment: filebeat instances send data to logstash instances within the same data center, which then store the results in the single Elasticsearch node in that data center.
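To illustrate the current flow, here is roughly what the shipping configuration looks like on an application host; this is a simplified sketch with placeholder hostnames and log paths, using the older `filebeat.prospectors` syntax (newer Filebeat versions call this `filebeat.inputs`):

```yaml
# filebeat.yml on an application host in data center 1 (sketch; paths/hosts are placeholders)
filebeat.prospectors:              # "filebeat.inputs" on Filebeat 6.x and later
  - input_type: log                # "type: log" on 6.x and later
    paths:
      - /var/log/app/*.log

# Send to the Logstash instance in the same data center, which writes to the
# single local Elasticsearch node.
output.logstash:
  hosts: ["logstash.dc1.example:5044"]
```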

The options I'm looking at are:

  1. Connecting the two nodes to form an Elasticsearch cluster. Due to the latency between nodes in different data centers, I believe this option is advised against.

  2. Co-locating the two nodes in one data center. I am hesitant about this option because filebeat would then need to send data from one data center to the other, and variability in the ACKs from logstash could affect processing rates.

  3. Using tribe nodes. From my understanding this would require two additional VMs. Since this is a lightweight deployment with only two Elasticsearch nodes, I don't really like having to double the VM requirement, but if this is the proper solution it may be the best option.

What option would be best in this situation? Are there any options I may have missed?

You are correct in that option 1 is not supported, as it generally tends to result in poor performance and cluster instability.

Option 2 is what I most commonly see in these types of situations, although you ideally want to have three master-eligible nodes in your cluster, even if one of them is a small dedicated master node that does not hold data. Filebeat is good at buffering, and can simply stop reading files whenever there are issues. Unless you have a very aggressive rollover policy, Filebeat is able to handle connectivity issues without losing data. The transfer from Filebeat to a remote Logstash instance is done in batches and can be encrypted as well as compressed, which is often desired when transferring data between data centres.
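To make that a little more concrete, here is a minimal sketch of the two pieces involved. Hostnames and file paths are placeholders, and exact option names can vary between versions, so treat this as an outline rather than a drop-in configuration.

```yaml
# elasticsearch.yml on the small dedicated master-eligible node (pre-7.x style settings)
node.master: true    # eligible to be elected master
node.data: false     # holds no data, so it can run on a small VM
```

On the Filebeat side, the batched transfer to the remote Logstash instance can be compressed and sent over TLS:

```yaml
# filebeat.yml output section (sketch; host and CA path are placeholders)
output.logstash:
  hosts: ["logstash.remote-dc.example:5044"]
  bulk_max_size: 2048                  # events per batch
  compression_level: 3                 # compress each batch before sending
  ssl.certificate_authorities: ["/etc/filebeat/ca.pem"]   # TLS to Logstash
```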

Depending on the scale and volumes, there are ways to extend and enhance this architecture, but this simple form can go quite far.

Thank you for your response and valuable insight. My concern with the Option 2 scenario is that the processing rate (events/s) from the remote data center will drop if there is variability in the latency between data centers. Would variability in the latency between data centers cause filebeat to throttle the rate at which it sends data, since the rate at which logstash processes data and returns ACKs will vary? Overall I would like to know whether it is reasonable to expect that logs from both data centers could be processed at the same rate with the Option 2 architecture.
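If it helps frame the question, these are the output settings I understand control how much data Filebeat keeps in flight toward Logstash (a sketch with a placeholder hostname; defaults differ between versions):

```yaml
# filebeat.yml output section (sketch)
output.logstash:
  hosts: ["logstash.remote-dc.example:5044"]
  worker: 2            # parallel connections per Logstash host
  pipelining: 2        # batches sent asynchronously before waiting on ACKs
  bulk_max_size: 2048  # events per batch; larger batches amortise round trips
```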

Using tribe nodes may be an alternative, as it would reduce data transfer between sites at indexing time. Queries may be slower, but they typically transfer less data. It does however require additional tribe node(s), and it results in two single-node clusters with no data replication, which can make the setup less durable.
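As a rough sketch, a tribe node is an extra node whose configuration points at both clusters; the cluster names and hosts below are placeholders (and note that tribe nodes were later deprecated in favour of cross-cluster search):

```yaml
# elasticsearch.yml on the tribe node (sketch)
tribe:
  dc1:
    cluster.name: logs-dc1
    discovery.zen.ping.unicast.hosts: ["es-dc1.example:9300"]
  dc2:
    cluster.name: logs-dc2
    discovery.zen.ping.unicast.hosts: ["es-dc2.example:9300"]
```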
