ELK Stack on AWS multi-region?

My app servers are deployed across 2 AWS regions, and I want to analyze their logs.
I am planning an architecture with high availability (surviving the failure of 1 region). So I'm planning to have:

  • Filebeat on each app server,
  • 2 Redis (1 in each region),
  • 2 Logstash Indexers (1 in each region),
  • Elasticsearch cluster spanning across 2 regions,
  • 2 Kibana (1 in each region).

Picture of the architecture:

I have the following questions:

  1. Can a Filebeat instance on an app server in one region communicate with the Redis node in another region? Is this the same as setting up the redis output plugin (providing the hosts parameter with the IP addresses of the 2 Redis servers), or is there anything tricky about it?
  2. Can a Logstash indexer ingest events from the Redis server in another region? (Is this the same as providing 2 Redis input plugins for the 2 Redis servers?)
  3. Can Logstash send events to an Elasticsearch data node in another region?
  4. Can the ES cluster span 2 regions like the picture above? If I set discovery.zen.minimum_master_nodes=3, can this handle the failure of 1 region without causing a split-brain problem? And will it continue to serve well once the failed region comes back? (This is where my main concern is.)
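For question 1, the Filebeat config I have in mind is roughly the following sketch (the paths and host IPs are hypothetical placeholders; it assumes the Redis ports are reachable across the region boundary):

```yaml
# filebeat.yml -- sketch only, IPs are placeholders
filebeat.prospectors:
  - paths:
      - /var/log/app/*.log

output.redis:
  # One Redis host per region. As I understand it, Filebeat balances or
  # fails over between these hosts; it does not duplicate events to both.
  hosts: ["10.0.1.10:6379", "10.1.1.10:6379"]
  key: "filebeat"
```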

Any help is appreciated.

Have a look at the following blog post: Clustering across multiple data centers

@Christian_Dahlqvist Thanks a lot for sharing the link!

I have a use case where my app servers are in 2 regions (50% of the data in each region). But if I use just 1 ES cluster (with Redis, Logstash, Kibana) in 1 region (say EAST), will the Filebeat instances installed on the WEST servers be able to write data into Redis in EAST?

In case of disaster recovery (region failure), I want to spin up a new cluster from snapshots stored on S3. Do you have any benchmarks on the time it takes to restore?
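The restore flow I have in mind is just the snapshot API; a minimal sketch (the repository, bucket, and snapshot names are placeholders, and it assumes the S3 repository plugin is installed on every node):

```shell
# Register an S3 snapshot repository on the new cluster (bucket name is a placeholder).
curl -XPUT 'http://localhost:9200/_snapshot/my_s3_repo' -H 'Content-Type: application/json' -d '{
  "type": "s3",
  "settings": { "bucket": "my-es-snapshots", "region": "us-east-1" }
}'

# Restore a snapshot into the new cluster (snapshot name is a placeholder).
curl -XPOST 'http://localhost:9200/_snapshot/my_s3_repo/snapshot_1/_restore'
```

I assume the restore time depends mostly on data volume and S3 throughput, which is why I'm asking for benchmarks.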

Under scenario one, it was quoted:

Real-time Availability Across Geographic Regions

Here you would have your application code write to a replicated queuing system (e.g. Kafka, Redis, RabbitMQ) and have a process (e.g. Logstash) in each DC reading from the relevant queue and indexing documents into the local Elasticsearch cluster.
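If I read that right, the per-DC Logstash process reading from the local queue would look roughly like this (hosts and the Redis key are placeholders):

```conf
# Logstash pipeline sketch -- reads the local Redis list, indexes into the local cluster
input {
  redis {
    host      => "redis.local"     # the Redis node in the same DC (placeholder)
    data_type => "list"
    key       => "filebeat"
  }
}
output {
  elasticsearch {
    hosts => ["es.local:9200"]     # the local Elasticsearch cluster (placeholder)
  }
}
```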

Instead of the application code writing to replicated queuing systems, can I make Filebeat write to them, e.g. by configuring 2 Redis output plugins on each of these Filebeat instances? (These 2 Redis servers would be in different regions.)
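Concretely, something like this is what I had in mind, though from the docs it looks like Filebeat only supports a single output section, so the closest equivalent seems to be one redis output listing both servers, which balances rather than mirrors (hostnames are placeholders):

```yaml
output.redis:
  hosts: ["redis-east.example.com:6379", "redis-west.example.com:6379"]
  key: "filebeat"
  loadbalance: true   # spreads events across both hosts; does NOT send each event to both
```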

Are all your inputs file-based? If that is the case you may not need Redis at all.


So, in this case, if I remove Redis, all I need to do is send events directly from Filebeat to the Logstash indexers.
Can I provide 2 Logstash indexers as outputs for Filebeat without load balancing?

I mean, I want to set up 1 Logstash indexer in each region, so can I configure Filebeat so that each log event is sent to both Logstash indexers?
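Something like this is what I have in mind (hostnames are placeholders), though if I understand the docs, listing multiple hosts gives failover or load balancing, not duplication of every event:

```yaml
output.logstash:
  hosts: ["logstash-east.example.com:5044", "logstash-west.example.com:5044"]
  loadbalance: false   # false = use the first host and fail over to the second;
                       # true = spread events across both. Neither sends each event to both.
```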

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.