Elastic Stack - how big can it really get?


Since we are preparing to start a new project based on ELK, I would be glad if some of you could share details of the largest ELK cluster you've seen or worked with:

  • How many nodes are there, and how many of them are data nodes?
  • What type of hardware is required for each node type (CPU, RAM, disks)?
  • How many GBs are ingested per day?
  • How frequent and how extensive are searches on this cluster?
  • How does the cluster perform?

Thanks in advance,

Hey @Yuval_Khalifa, have you seen the videos and slides from Elasticon? Lots of interesting information, stats, and metrics about cluster sizes, numbers of documents, amount ingested per day, etc. For example, FireEye has

  • 3.6 Petabytes of raw storage across 40 clusters
  • 700 billion indexed events in 400+ nodes
  • 300,000 events per second

The talk goes into detail about what infrastructure they are running this on.
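For a rough sense of scale, the 300,000 events/second figure can be turned into a daily ingest estimate. Note that the average event size below is a hypothetical assumption for illustration, not a number from the talk:

```python
# Back-of-the-envelope ingest estimate from the FireEye figure above.
EVENTS_PER_SEC = 300_000        # from the talk
SECONDS_PER_DAY = 86_400
AVG_EVENT_BYTES = 1_024         # assumption: ~1 KiB per indexed event

events_per_day = EVENTS_PER_SEC * SECONDS_PER_DAY
bytes_per_day = events_per_day * AVG_EVENT_BYTES
tib_per_day = bytes_per_day / 1024**4

print(f"{events_per_day:,} events/day ~ {tib_per_day:.1f} TiB/day raw")
```

At that assumed event size, 300k events/s works out to roughly 26 billion events and ~24 TiB of raw data per day, which makes the 3.6 PB / 700 billion event totals above quite plausible.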
