Hi, I am working on Elasticsearch and I want to scale up to an ideal setup for a 2-node or a 4-node cluster. Can someone please tell me what the ideal hardware configuration would be for:
2 Nodes:
Both nodes would hold all roles (serve requests and hold data).
The index would have 5 primary shards and 1 replica of each (see the index creation sketch after the two setups below).
The nodes would be hosted as VMs on AWS.
4 Nodes:
A dedicated hierarchical architecture: 1 master node, 1 coordinating-only node, and 2 data nodes.
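For context, this is roughly how I create the index (a sketch using the Python elasticsearch client; the index name, field, and mapping are placeholders rather than my real setup, and node roles would still be configured per node in elasticsearch.yml):

```python
from elasticsearch import Elasticsearch

# Connect to one of the nodes (placeholder URL).
es = Elasticsearch("http://localhost:9200")

# 5 primary shards, 1 replica of each -- the layout described above.
# Keyword arguments assume a recent elasticsearch-py client;
# older clients take a single body={...} dict instead.
es.indices.create(
    index="autosuggest",  # placeholder index name
    settings={
        "number_of_shards": 5,
        "number_of_replicas": 1,
    },
    mappings={
        "properties": {
            # placeholder field; my real mapping is more involved
            "suggest_text": {"type": "text"},
        }
    },
)
```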
What I want to know:
What would be the ideal hardware specs (RAM, cores, disk) for each node in each of the above configurations?
How many requests per second would my cluster be able to handle? For example, what kind of infrastructure am I looking at for a throughput of 500, 1,000, or 5,000 requests per second?
I am using Elasticsearch only for searching purposes. It's quite an extensive search, though.
@warkolm I do understand the recommendation of a minimum of 3 master-eligible nodes, but at this moment I can only scale up to the two setups defined above. Would you be able to share hardware specs for them? For example, how much RAM and how many cores should I go for to set up a stable and fast search engine?
I have roughly 10 million documents stored in 5 primary shards, with 1 replica of each.
I am using Elasticsearch for an autosuggest setup, which means roughly 1,000 requests would come in per second, and response times have to be lower than 100 ms, preferably below 50 ms. Currently, on a single node, I get a response time of around 86 ms at 100 requests per second.
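To make the load concrete, this is roughly the shape of one autosuggest request and how I measure its latency (a sketch; match_phrase_prefix, the index name, and the field are stand-ins for my actual query, and a recent elasticsearch-py client is assumed):

```python
import time
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")  # placeholder URL

def autosuggest(prefix: str):
    """One autosuggest-style request with client-side latency measurement."""
    start = time.perf_counter()
    resp = es.search(
        index="autosuggest",  # placeholder index name
        # match_phrase_prefix is a stand-in for the real suggest query
        query={"match_phrase_prefix": {"suggest_text": {"query": prefix}}},
        size=5,
    )
    elapsed_ms = (time.perf_counter() - start) * 1000
    hits = [h["_source"]["suggest_text"] for h in resp["hits"]["hits"]]
    return hits, elapsed_ms

# Each call like this needs to come back in under 100 ms (ideally under 50 ms),
# at roughly 1,000 such requests per second across the cluster.
suggestions, latency_ms = autosuggest("elas")
print(suggestions, f"{latency_ms:.1f} ms")
```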