I'm new to Elasticsearch and started working at a company that assigned me a task to add more nodes to an existing environment.
We are using the integration between Elastic and Azure in our environment.
We currently have only one node.
I need to create more nodes, but I can't find out how to do it.
There is an option to create a deployment, but I am not sure if that is where I should go. If I create another deployment, will it be separate or will it be in the same cluster?
I had this adventure before and got some results here. Some videos on YouTube also explain how to do it, but a lot of them use SSL, which can be a bit hard to follow on the first take.
The most basic (but not at all secure) configuration I found that worked is the one below, made using 2 computers on the same network.
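Roughly, the idea is one elasticsearch.yml per machine, something like the sketch below. The node names and IP addresses (node-1/node-2, 192.168.1.10/192.168.1.11) are placeholders for your own hosts, and security is explicitly disabled, so only do this on a trusted local network.

```yaml
# elasticsearch.yml on the first machine (192.168.1.10 is a placeholder IP)
cluster.name: my-test-cluster
node.name: node-1
network.host: 192.168.1.10
discovery.seed_hosts: ["192.168.1.10", "192.168.1.11"]
cluster.initial_master_nodes: ["node-1", "node-2"]
xpack.security.enabled: false

# elasticsearch.yml on the second machine (192.168.1.11 is a placeholder IP)
cluster.name: my-test-cluster
node.name: node-2
network.host: 192.168.1.11
discovery.seed_hosts: ["192.168.1.10", "192.168.1.11"]
cluster.initial_master_nodes: ["node-1", "node-2"]
xpack.security.enabled: false
```

Both machines need the same cluster.name so they join the same cluster; discovery.seed_hosts lets them find each other, and cluster.initial_master_nodes is only used the first time the cluster forms.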
I saw this page as well. I am able to create more nodes in my local environment, but Azure is integrated with Elastic Cloud, and in that environment I can't find any way to do the same.
For example, there is no function or button that says: create a new node.
There are some options to create warm and frozen instances, and there is also an option to add 'Coordinating instances', and I have no idea what that means...
But I cannot find the option to add more nodes.
I found options to resize the current node, to add more zones, to add more memory, and also an option to add another deployment.
1 x 8GB x 2 Zones:
Note: 2+ zones is often referred to as a high availability configuration, and a primary shard and its replica will never be on the same node, so it does suit that purpose. But you can also think of it as just a 2nd / additional node... it will function just like an additional node; it is just in a 2nd zone.
There is some discussion of the value of that; there are probably some very specific use cases where it could be an advantage, but in general not so much.
In general, 3 x 60GB nodes striped across 3 AZs is pretty much equivalent to 6 x 30GB nodes striped across the 3 AZs from a compute (CPU), RAM, disk, and I/O perspective...
Thanks a lot for your reply!
In our use case, we are trying to understand why the number of requests per second that ES is able to respond to in a reasonable time doesn't seem to scale if we just move to the next size setting (which usually means doubling CPU or memory). We were wondering if just adding more nodes could be the solution (i.e. scaling horizontally, not vertically).
Are there other settings we can tune here?