How to modify total memory of existing nodes?

Hi,

I am new to Elasticsearch. I have set up a 3-node cluster following the Docker setup procedure explained in the section "Start a multi-node cluster with Docker Compose" in the help link below.

Here is my cluster node information.
n    id   ramMax hm    hp diskTotal heapCurrent
es03 Fozz 3.7gb  1.8gb 43 442.7gb   843.9mb
es01 fMbk 3.7gb  1.8gb 77 442.7gb   1.4gb
es02 TdIO 3.7gb  1.8gb 22 442.7gb   438.9mb

The default total memory for each node is ~3.7 GB. I want to increase this to 32GB for each node so I can deploy ML pretrained models. How can I do this? I am new to Elastic, so please provide step-by-step instructions.

Regards
Santhosh M

First, you do not want to set 32 GB, as that will go beyond the Java compressed object pointers (compressed oops) threshold and actually be less efficient. I would suggest setting it to 28 GB, which is a safe number.

The JVM memory size is set in the .env file

Download and save the following files in the project directory:

.env

To be clear, you will need all that memory available at startup or the nodes will fail.
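As a quick sanity check (just a suggestion, not part of the original steps), you can confirm how much RAM the host and the Docker daemon actually see:

$ free -h
$ docker info | grep -i "total memory"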

Set that value in the .env file.
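For example (a sketch, assuming you want to cap each node at 28GB), the changed line would look like this, since MEM_LIMIT is given in bytes (28 * 1024^3 = 30064771072):

MEM_LIMIT=30064771072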

Then:

docker compose down
docker compose up -d

should work.

That will preserve the data and then create the new containers with the new memory size; you will still have the data / indices you already created because the volume mounts will not get destroyed.
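If you want to double-check that the data volumes survive the recreate (a sketch, not part of the original instructions), you can list them before and after:

$ docker volume ls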

Be careful 🙂

Dear @stephenb

Thanks for the suggestions. Unfortunately there is no option to download the .env file that you wanted to share. May I please request you to upload this file again?

I already have an .env file as below.
What is the line that I need to add to increase the memory to 28GB?

# Version of Elastic products
STACK_VERSION=8.11.1

# Set the cluster name
CLUSTER_NAME=docker-cluster

# Port to expose Elasticsearch HTTP API to the host
ES_PORT=9200
#ES_PORT=127.0.0.1:9200

# Port to expose Kibana to the host
KIBANA_PORT=5601
#KIBANA_PORT=80

# Increase or decrease based on the available host memory (in bytes)
MEM_LIMIT=4073741824

# Project namespace (defaults to the current folder name if not set)
#COMPOSE_PROJECT_NAME=myproject

Looking closer, you will set MEM_LIMIT to the size of the container RAM for the Elasticsearch node. The best-practice maximum is 64GB, and Elasticsearch will then compute the optimum JVM heap size from it, which is good. So set MEM_LIMIT to 64GB.

64GB = 64*1024^3 = 68719476736

MEM_LIMIT=68719476736
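For reference (the exact numbers are my assumption, based on the defaults and the node output later in this thread): Elasticsearch auto-sizes the heap to roughly 50% of the container memory, capped just under the compressed-oops limit, so MEM_LIMIT=64GB ends up with a heap max (hm) of about 31GB.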

Warning: if you are using the 3-node Docker cluster, you will need at least 192GB of RAM on that host (3 × 64GB), plus extra for the OS, Kibana, etc.

Also, that same setting is used for Kibana, which does not need that much, so in docker-compose.yml I would just set the Kibana mem_limit to, say, 4GB.

Or create a new variable in the .env file like

KB_MEM_LIMIT=4073741824

and in the kibana section of the compose

mem_limit: ${KB_MEM_LIMIT}
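A minimal sketch of what that kibana section might look like (assuming the service name and variables from the reference docker-compose.yml and the .env above):

  kibana:
    image: docker.elastic.co/kibana/kibana:${STACK_VERSION}
    mem_limit: ${KB_MEM_LIMIT}
    ports:
      - ${KIBANA_PORT}:5601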

Hi Stephen Brown,

I was able to increase the total memory for each node by modifying the line below in the .env file used to set up my Docker containers.

Open the .env file from /nfs/ElasticData_1

Change the lines below and save the file:

# Increase or decrease based on the available host memory (in bytes)
#MEM_LIMIT=4073741824 # working
MEM_LIMIT=64g

Shut down Docker

$ docker-compose down # the memory change is not reflected with docker-compose stop

Bring up Docker

$ docker-compose up -d # does not work with docker-compose stop / docker-compose start

The memory change is not reflected with docker-compose stop and docker-compose start; the containers need to be shut down and brought back up as above.

Check the change by executing the command below in the Kibana Dev Tools Console

GET /_cat/nodes?v=true&h=n,id,ramMax,hm,hp,diskTotal,heapCurrent

n    id   ramMax hm   hp diskTotal heapCurrent
es03 Fozz 64gb   31gb 12 442.7gb   3.7gb
es01 fMbk 64gb   31gb 8  442.7gb   2.5gb
es02 TdIO 64gb   31gb 1  442.7gb   326.8mb
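For reference, the same check can be run from the host shell with curl (a sketch, assuming the security defaults and the ELASTIC_PASSWORD value from the Docker Compose guide; -k skips certificate verification, or point --cacert at the generated CA instead):

$ curl -k -u elastic:$ELASTIC_PASSWORD "https://localhost:9200/_cat/nodes?v=true&h=n,id,ramMax,hm,hp,diskTotal,heapCurrent"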

I have a follow-up question.

The pre-trained model I deployed is 4.1GB in size. When I tried setting the total memory of each node to 12GB (before settling on 64GB, I tried 12GB/node), the deployment failed with the error shown in the screenshot below. Why?

I think the message is pretty self-explanatory: there was only 3.5GB free and the model needed 4GB.

Yes, I understand the error message. But when I allocated 12GB, why is only 3.5GB free? Where did the rest go? Is there any way to check the split of how the 12GB is allocated to different purposes?

So when you allocate 12GB to a node, 50% is allocated to the JVM, so that leaves only 6GB for the JVM heap.

Then Elasticsearch uses the JVM memory for many functions; you can read about what items consume memory. If you really want to know, read this page in detail.
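If you want to see the actual split on your own cluster, one way (a sketch of mine, not from the page linked above) is to query the node stats from the Kibana Dev Tools Console; on 8.2+ there is also a dedicated ML memory stats API:

GET _nodes/stats/jvm,breaker?filter_path=nodes.*.name,nodes.*.jvm.mem,nodes.*.breakers
GET _ml/memory/_stats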


This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.