Elastic.co hosted instance and number of nodes

I am testing a hosted instance. I don't want to use replicas, but I do want multiple nodes for better performance. In the cloud calculator (https://www.elastic.co/cloud/elasticsearch-service/pricing) I see, for example, 32GB/768GB. If I don't choose 2 data centers (HA), how is the cluster size defined? Is it just one node of 32GB/768GB, 2 nodes of 16GB/384GB, or some other configuration?

Elastic Cloud nodes run on i-series instances (if you choose the AWS version). You would be better off testing clusters of various sizes rather than assuming multiple nodes will be more beneficial.

However, my understanding is that capacity is not split into multiple nodes until it reaches the high end of what a single node's JVM heap can support.

Just because you have nodes in 2 availability zones does not mean you have to have a replica shard configured, although the cluster will naturally not be highly available if you don't. What is it you are trying to achieve? What are the requirements?

I signed up for the 14-day trial, which gives you 2 data centers and 2 nodes with replication, so I disabled replicas and now have 2 nodes without replicas.
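
For anyone else wanting to do the same, a minimal sketch using Python's requests against the standard settings and legacy index template APIs. The endpoint, credentials, and template name are placeholders, not real values:

```python
import requests

# Placeholders: substitute your deployment endpoint and credentials.
CLUSTER_URL = "https://YOUR-DEPLOYMENT.es.us-east-1.aws.found.io:9243"
AUTH = ("elastic", "YOUR-PASSWORD")

# Drop replicas on all existing indices.
requests.put(
    f"{CLUSTER_URL}/_settings",
    json={"index": {"number_of_replicas": 0}},
    auth=AUTH,
).raise_for_status()

# Also make 0 replicas the default for indices created later, via a
# (legacy) index template. Template name and pattern are examples.
requests.put(
    f"{CLUSTER_URL}/_template/no_replicas",
    json={
        "index_patterns": ["*"],
        "settings": {"number_of_replicas": 0},
    },
    auth=AUTH,
).raise_for_status()
```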

But now I want to pay for a production subscription of 32GB/768GB, and if I select that, I don't know whether it is just one node of 32/768 or 2 nodes of 16/384.

I ask because I always read everywhere that it is better to have multiple nodes for better read/write performance, and people are even using Kafka to improve indexing performance. So I'm just wondering why the same doesn't apply to a cloud instance.

My use case: I have 15,000 hosts around the world that will be sending data to this cloud instance, indexing around 3 GB per day in total.
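
For context, a rough back-of-the-envelope capacity estimate in Python. The retention window and overhead factor are assumptions for illustration, not figures from Elastic:

```python
# Back-of-the-envelope sizing (all assumptions, adjust to taste).
hosts = 15_000
daily_gb = 3.0        # total indexed data per day across all hosts
retention_days = 30   # assumed retention window
replicas = 0          # replicas disabled in this setup
overhead = 1.15       # assumed indexing/segment overhead factor

per_host_kb = daily_gb * 1024 * 1024 / hosts
stored_gb = daily_gb * retention_days * (1 + replicas) * overhead

print(f"~{per_host_kb:.0f} KB/day per host")        # ~210 KB/day per host
print(f"~{stored_gb:.0f} GB on disk after {retention_days} days")  # ~104 GB
```

At ~104 GB for a month of data, this load is well within a single 32GB/768GB node's storage, which supports the scale-up-first advice below.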

Scaling out to more nodes improves throughput, assuming you are actually adding resources. I would not expect two 16GB nodes to perform better than one 32GB node; it may actually be the opposite, as there will be more network traffic, which can slow things down.
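
If you want to verify what topology a given configuration actually gives you, you can ask the cluster itself. A minimal sketch using Python's requests against the standard _cat/nodes API (endpoint and credentials are placeholders):

```python
import requests

# Placeholders: substitute your deployment endpoint and credentials.
CLUSTER_URL = "https://YOUR-DEPLOYMENT.es.us-east-1.aws.found.io:9243"
AUTH = ("elastic", "YOUR-PASSWORD")

# _cat/nodes lists every node with its heap and RAM, so you can see
# whether you got one 32GB node or two 16GB ones.
resp = requests.get(
    f"{CLUSTER_URL}/_cat/nodes?v&h=name,heap.max,ram.max,node.role",
    auth=AUTH,
)
resp.raise_for_status()
print(resp.text)
```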

OK, I will go with just one node then! I see there is a new calculator now, and I got more confused, as you can now choose machine learning nodes and also Kibana memory from 1 to 8GB. How much memory is enough for Kibana?

Is there any guide on how much memory Kibana or an ML node needs, depending on the workload?

For Kibana the default 1GB should be fine. For ML it depends on what you are going to do. I would recommend starting small and increasing the size if needed.

Thanks, I will start with the minimum. The other question is about high availability, as it doubles the cost, and I want to run production on just one node with no replicas, although there is a warning that you can lose data. I guess it is quite unlikely that AWS loses your data completely, and we have snapshots of the data anyway.

In order to have high availability you need multiple nodes and a replica configured. This also gives additional resilience and reduces the risk of losing data. Elastic Cloud takes a snapshot of your data every 30 minutes, so even if you use a single node you will not lose all data, but you could lose any data that has not yet been snapshotted if there is a node failure and the node is restarted somewhere else.
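
A quick way to sanity-check that schedule is to list recent snapshots. A minimal sketch, assuming Elastic Cloud's built-in repository (named found-snapshots on Elastic Cloud; endpoint and credentials are placeholders):

```python
import requests

# Placeholders: substitute your deployment endpoint and credentials.
CLUSTER_URL = "https://YOUR-DEPLOYMENT.es.us-east-1.aws.found.io:9243"
AUTH = ("elastic", "YOUR-PASSWORD")
REPO = "found-snapshots"  # Elastic Cloud's built-in snapshot repository

# List the most recent snapshots, so you can see how much data would be
# at risk (anything indexed after the latest successful snapshot).
resp = requests.get(f"{CLUSTER_URL}/_snapshot/{REPO}/_all", auth=AUTH)
resp.raise_for_status()
for snap in resp.json()["snapshots"][-5:]:
    print(snap["snapshot"], snap["state"], snap["end_time"])
```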
