Elastic Stack Architecture Upgrade

I have been tasked by my director with coming up with a 'right size' for our corporate Elastic Stack, and I have the following question.

We are using AWS Cloud and we're growing, but we're not what you would call an extremely large installation. We ingest on average about 36,000,000 documents daily, and our index sizes run about 40 GB - 50 GB daily.

Currently we have everything, with the exception of remote Filebeat, running on a single r5.4xlarge instance, so we don't have any snapshotting happening and no warm-cold phases going. So far we've been extremely lucky in that we haven't lost that server.

What I'm wanting to do is move to:

  • 1 Coordinating Only node on the same server as Kibana
  • Dedicated master nodes, with one acting as a tiebreaker
  • 2 nodes handling ingest & hot data responsibilities
  • 2 nodes handling data_warm responsibilities
  • 1 node handling data_cold responsibilities

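For reference, the role split above maps onto the `node.roles` setting in each node's elasticsearch.yml. A minimal sketch (the exact role combinations are assumptions based on the list above; a dedicated tiebreaker is a master-eligible node with the `voting_only` role):

```yaml
# Illustrative elasticsearch.yml role settings per node type.

# Coordinating-only node (an empty roles list):
node.roles: [ ]

# Dedicated master node:
node.roles: [ master ]
# Tiebreaker (master-eligible, but never elected master):
node.roles: [ master, voting_only ]

# Ingest / hot data nodes:
node.roles: [ data_hot, data_content, ingest ]

# Warm and cold data nodes:
node.roles: [ data_warm ]
node.roles: [ data_cold ]
```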
What I've got so far is:

  • Master Nodes: r5.xlarge
  • Ingest / data_hot Nodes: r5d.4xlarge
  • data_warm Node: r5.2xlarge

I was thinking of using r5.xlarge for both the Coordinating Only node as well as the data_cold node.
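To actually move indices across those hot/warm/cold tiers you'd pair the node roles with an ILM policy. A hedged sketch, where the policy name, rollover thresholds, and the 7d/30d phase ages are placeholder values to illustrate the shape, not recommendations:

```shell
# Hypothetical ILM policy moving indices hot -> warm -> cold.
# With data tiers in use, ILM relocates indices to data_warm/data_cold
# nodes automatically as they enter each phase.
curl -X PUT "localhost:9200/_ilm/policy/logs-tiered" \
  -H 'Content-Type: application/json' -d'
{
  "policy": {
    "phases": {
      "hot": {
        "actions": {
          "rollover": { "max_primary_shard_size": "50gb", "max_age": "1d" }
        }
      },
      "warm": {
        "min_age": "7d",
        "actions": {
          "forcemerge": { "max_num_segments": 1 }
        }
      },
      "cold": {
        "min_age": "30d",
        "actions": {}
      }
    }
  }
}'
```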

Does this approach make sense?

P.S. I'm not sure if this is the right forum for this so please bear with me. I've got a good idea of what our architecture should look like just looking for some feedback.


You're suggesting a much bigger cluster than your current setup. Are you sure you need anything so complex? If you're currently doing fine on a single r5.4xlarge but just want to make it resilient, then read these docs and consider whether it would be enough to use two nodes (probably both r5.4xlarge, since that's what you currently have) plus a small dedicated tiebreaker.

But definitely start taking regular snapshots right away; that's very important.
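Since you're already on AWS, regular snapshots could look something like the following: register an S3 snapshot repository, then schedule snapshots with SLM. A sketch only; the bucket name, policy names, schedule, and retention values are placeholders (self-managed clusters also need the `repository-s3` plugin installed):

```shell
# Register an S3 snapshot repository.
# "my-es-snapshots" is a hypothetical bucket name.
curl -X PUT "localhost:9200/_snapshot/s3_backup" \
  -H 'Content-Type: application/json' -d'
{
  "type": "s3",
  "settings": { "bucket": "my-es-snapshots" }
}'

# Schedule a nightly snapshot via SLM and keep roughly 30 days of history.
curl -X PUT "localhost:9200/_slm/policy/nightly-snapshots" \
  -H 'Content-Type: application/json' -d'
{
  "schedule": "0 30 1 * * ?",
  "name": "<nightly-{now/d}>",
  "repository": "s3_backup",
  "retention": { "expire_after": "30d", "min_count": 5, "max_count": 50 }
}'
```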

Alternatively, why not go for a managed service so you don't have to worry about this kind of thing?

