Why do we have a 64 GB limit per node in a cluster?
Where are you seeing this limit?
This limit definitely exists, and is enforced even when licensed. This started somewhere between one and two years ago. We had to remove physical RAM from existing physical nodes to comply with licensing...
Elastic talks about "open, limitless XDR", but at the same time seems to impose more and more limits and license restrictions.
OK, so you are referring to a commercial limit based on how Elastic charges for nodes and clusters and not a technical one?
Not 100% sure what @Kamran_Ahmadzade meant, but I'm indeed referring to a commercial 64GB limit (which was not there a few years ago).
I will leave it to someone from Elastic to comment on this as it is about commercials and licenses.
In this blog you can see the limit per node.
Best practice has long been to keep the heap size below 32GB to benefit from compressed object pointers and to assign 50% of the memory available on the node to the Elasticsearch heap. Based on this best practice the maximum node size is 64GB RAM. Elastic Cloud, which the blog post refers to, follows these best practices, and there I believe the maximum node size is indeed 64GB. This is a convention followed by Elastic Cloud, not a technical limitation. If you host your own cluster you can have nodes with more RAM if you want to.
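To make that convention concrete: on a self-managed node the 50% / sub-32GB rule comes down to a heap setting like the one below. This is a minimal sketch assuming a 64GB node and a hypothetical override file name; recent Elasticsearch versions size the heap automatically, so an explicit override is only needed if you want to pin the value yourself.

```
# config/jvm.options.d/heap.options  (hypothetical file name, picked up from the jvm.options.d directory)
# 64GB RAM node -> roughly 50% to the JVM heap, kept just under 32GB to retain compressed object pointers
-Xms31g
-Xmx31g
```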
Not when you buy a license. Elastic will ask you to reduce RAM to 64 GB to be license compliant.
Also, you are correct that a 32 GB heap is the maximum, but additional RAM can help with the OS file-system cache, some types of aggregations, etc. The fact is that when you host on premises and install on a physical server, you have to remove RAM if you happen to have 128 GB, for example. It doesn't really make us happy customers to be forced to throw away RAM...
I believe you can still have physical nodes with 128GB RAM, but you might need to pay for two nodes due to the 64GB limit in the license agreement. This is not a physical limit, just a formula for calculating cost which encourages certain behaviour.
This has been the practice... whether it is a "best" practice is arguable. Yes, keeping heap below 32GB is a good idea. However, the 50% ratio is nothing more than a guess without knowing the use-case. Extrapolating that to 64GB total is even more questionable.
Once you get to 64GB, memory is not even a significant limiting factor to the performance of a node, making the license limitation mentioned by @willemdh even more confusing. For most use-cases, additional cores have significantly more impact than adding RAM.
What makes the 64GB license limit frustrating is that buying enterprise RAM in 8GB DIMMs adds unnecessary procurement complexity and results in more potential e-waste down the road. Buying small DIMMs is necessary due to the importance of populating all memory channels to get the full performance of the CPU. The latest EPYC processors with 12-channel memory make this even more ridiculous. Your choices, NEITHER OF WHICH IS GOOD, are:
- 12x4GB (if you can even find it) for 48GB total/24GB heap
- 12x8GB for 96GB total/31GB heap, and buy another ES license
In all of my testing the realistic maximum server size for Elasticsearch is no more than 32 cores and 128GB RAM. At this point the returns from additional resources diminish quickly. One of our lab servers has 64 cores, 256GB of RAM, and Intel Optane storage. The performance we can extract levels out at around 45% CPU utilization. Elasticsearch just doesn't seem to be able to scale vertically to take full advantage of the hardware. In all fairness, that wasn't really how it was designed.
I say all of this to point out that limiting the license to a quantity of RAM, when the realistic vertical-scale limits are already so close, just adds unnecessary organizational and operational barriers for customers trying to procure a commercial Elastic deployment. That is my irrelevant $0.02.
Thanks @rcowart for your detailed and, IMHO, not so irrelevant $0.02. Totally agree with what you are saying.