Recommendations for LLM node sizing

We’re about to implement our first LLM node (on Kubernetes) but haven’t been able to find any current sizing recommendations. Are there any general rules we can apply based on the number of events, log size, etc., or do you have any guidelines we can get hold of?

What do you mean by LLM node?

This is not a node role in Elasticsearch.

Are you talking about a Machine Learning node?

Err, yeah, my bad - ML node. Too late in the day and an abbreviation mix-up...

The sizing will depend more on the jobs that you are going to run, but the recommendation seems to be a 64 GB node for ML, per this post.
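As an aside, making a node a dedicated ML node is mostly a matter of node roles in elasticsearch.yml. A minimal sketch (the role list follows the reference manual; the memory percent shown is just the default):

```yaml
# elasticsearch.yml on the dedicated ML node
# remote_cluster_client is optional, but recommended for ML nodes
node.roles: [ ml, remote_cluster_client ]

# Cap on how much of the machine's memory the ML native processes may use;
# 30 (percent) is the default - consider raising it on an ML-only node
xpack.ml.max_machine_memory_percent: 30
```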

Just to confirm, you have a Platinum or Enterprise license, right?

Thanks @leandrojmp - I had seen that post earlier, but as it’s more than 8 years old I assumed it was outdated. If it’s still valid, then that’s what we’ll use. Oh, and yes, we’re in the process of upgrading our license from Basic to Enterprise, so we’re good to go - at least from a licensing point of view.

The memory usage will depend on which jobs you have and on your data; it is not something that you can easily estimate.

You may start with a smaller node, something like 16 GB, and increase the memory if you run into any issues.
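Since you’re on Kubernetes: if you happen to be deploying with ECK, a minimal sketch of such an ML node set could look like the following (cluster name, version, and count are placeholders - adjust to your setup):

```yaml
# Dedicated 16 GB ML nodeSet in an ECK-managed cluster
apiVersion: elasticsearch.k8s.elastic.co/v1
kind: Elasticsearch
metadata:
  name: my-cluster          # placeholder
spec:
  version: 8.13.4           # placeholder
  nodeSets:
  - name: ml
    count: 1
    config:
      node.roles: ["ml", "remote_cluster_client"]
    podTemplate:
      spec:
        containers:
        - name: elasticsearch
          resources:
            requests:
              memory: 16Gi
              cpu: 4
            limits:
              memory: 16Gi
```

With a memory limit set, recent ECK/Elasticsearch versions derive the JVM heap automatically from the container memory and node roles, leaving the rest for the ML native processes, which run outside the heap.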

Thanks, we’ll start small and grow the memory as needed; 16 GB sounds like a good starting point. Is there anything similar for CPU/cores besides the “We recommend that ML nodes have at least 4 cores and 64 GB RAM” from the article? It’s just so we can indicate to our colleagues what to expect...