I would like to know the maximum tested write-per-second speed. My use case generates 15 million document events per second, where each document is approximately 3.5 KB.
As this depends heavily on your hardware (CPU power, number of cores, disk speed), your number of nodes, your degree of parallelization (number of shards and replicas), and your mapping configuration (which affects both document size and indexing speed), there is no one-size-fits-all formula.
A good way to test performance is to start with a single shard and no replicas, begin indexing, and watch for the point where search or indexing no longer meets your SLAs (search latency, indexing speed). That gives you a number for how much a single shard can handle.
If I calculate correctly, that is about 52 GB per second, or over 4 PB per day. That sounds like far too much data for Elasticsearch, given how much processing it does per document.
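The arithmetic behind that estimate can be checked in a few lines of Python (a rough sketch using decimal units, i.e. 1 GB = 10^6 KB; binary units would give slightly smaller figures):

```python
# Back-of-the-envelope check of the ingest volume quoted above.
docs_per_second = 15_000_000      # 15 million document events per second
doc_size_kb = 3.5                 # approximate document size in KB

gb_per_second = docs_per_second * doc_size_kb / 1_000_000   # KB/s -> GB/s
pb_per_day = gb_per_second * 86_400 / 1_000_000             # GB/s -> PB/day

print(f"{gb_per_second:.1f} GB/s")   # 52.5 GB/s
print(f"{pb_per_day:.2f} PB/day")    # 4.54 PB/day
```

Either way, the raw ingest volume lands in the tens-of-gigabytes-per-second, petabytes-per-day range.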
This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.