Installing ELSER Model on Docker Instance Fails

I followed the official documentation to create a single-node 8.12.2 cluster. The trial license is activated. When I attempt to deploy the elser_model_2_linux-x86_64 model, I receive the following 429 error: "Could not start deployment because no ML nodes with sufficient capacity were found".
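For reference, this is roughly how I'm starting the deployment from Dev Tools (the model ID is the one listed under my trained models; on my cluster it shows with a leading dot):

POST _ml/trained_models/.elser_model_2_linux-x86_64/deployment/_start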

WSL is set to allow 8GB of memory.

I think the relevant node stats meet the requirements, but maybe I've missed something:

"roles": [
        "data",
        "data_cold",
        "data_content",
        "data_frozen",
        "data_hot",
        "data_warm",
        "ingest",
        "master",
        "ml",
        "remote_cluster_client",
        "transform"
      ],
      "attributes": {
        "ml.allocated_processors_double": "4.0",
        "ml.allocated_processors": "4",
        "ml.machine_memory": "5368709120",
        "transform.config_version": "10.0.0",
        "xpack.installed": "true",
        "ml.config_version": "12.0.0",
        "ml.max_jvm_size": "2684354560"
      }
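For context, that snippet is roughly what the node info API returns, trimmed down to the relevant node:

GET /_nodes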

Any suggestions? Thanks!


Can you run

GET /_cluster/stats

and show the os and jvm sections, like this:

    "os": {
....
      "mem": {
        "total_in_bytes": 8326275072,
        "adjusted_total_in_bytes": 8326275072,
        "free_in_bytes": 247734272,
        "used_in_bytes": 8078540800,
        "free_percent": 3,
        "used_percent": 97
      }
    },

    "jvm": {
      "max_uptime_in_millis": 749367870,
      "versions": [
        {
          "version": "21.0.2",
          "vm_name": "OpenJDK 64-Bit Server VM",
          "vm_version": "21.0.2+13-58",
          "vm_vendor": "Oracle Corporation",
          "bundled_jdk": true,
          "using_bundled_jdk": true,
          "count": 1
        }
      ],
      "mem": {
        "heap_used_in_bytes": 1733976392,
        "heap_max_in_bytes": 4164943872
      },

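If the full response is too noisy, a filter_path parameter should trim it down to just those sections (it's a standard query option, so it ought to work on 8.12 as well):

GET /_cluster/stats?filter_path=nodes.os,nodes.jvm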

Here you go:

"mem": {
        "total_in_bytes": 5368709120,
        "adjusted_total_in_bytes": 5368709120,
        "free_in_bytes": 1440641024,
        "used_in_bytes": 3928068096,
        "free_percent": 27,
        "used_percent": 73
      }

 "jvm": {
      "max_uptime_in_millis": 9577615,
      "versions": [
        {
          "version": "21.0.2",
          "vm_name": "OpenJDK 64-Bit Server VM",
          "vm_version": "21.0.2+13-58",
          "vm_vendor": "Oracle Corporation",
          "bundled_jdk": true,
          "using_bundled_jdk": true,
          "count": 1
        }
      ],
      "mem": {
        "heap_used_in_bytes": 171326576,
        "heap_max_in_bytes": 2684354560
      },

Oh, and @Steve_Stefanovich, welcome to the community!

So what yours says is:

The total RAM available to your Docker container is 5GB.

Your total JVM heap is 2.5GB.

That RAM is shared between Elasticsearch's normal data workload and the ML node.

I think you are short on RAM.

ELSER says it needs about 2GB off heap...
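Rough back-of-the-envelope with your numbers (my estimate, not an exact formula; there is other native overhead too):

    Container RAM                  ~5.0 GB
    JVM heap (~50% of container)   -2.5 GB
    Left for OS + ML native        ~2.5 GB
    ELSER native model             ~2.0 GB  <- almost no headroom left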

I deployed my single-node Docker setup with 8GB available to the container, which results in a 4GB heap.

If you are running other stuff in your Docker environment before you start Elasticsearch, Elasticsearch will get the "leftovers", and the JVM heap is then 50% of that...

I will try with a heap your size and see what happens.
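One more thing that might help while you test: I believe 8.12 has an ML memory stats API that shows how much native memory the ML node thinks it can hand to models:

GET _ml/memory/_stats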

Thanks, Stephen. I bumped WSL to 12GB, then recreated the container using:

docker run -e ES_JAVA_OPTS="-Xms4g -Xmx4g" --name es01 --net elastic -p 9200:9200 -it -m 5GB docker.elastic.co/elasticsearch/elasticsearch:8.12.2

but I'm still facing the same issue. The node stats reflect the updated memory:

"jvm": {
      "max_uptime_in_millis": 308187,
      "versions": [
        {
          "version": "21.0.2",
          "vm_name": "OpenJDK 64-Bit Server VM",
          "vm_version": "21.0.2+13-58",
          "vm_vendor": "Oracle Corporation",
          "bundled_jdk": true,
          "using_bundled_jdk": true,
          "count": 1
        }
      ],
      "mem": {
        "heap_used_in_bytes": 931135488,
        "heap_max_in_bytes": 4294967296
      },
      "threads": 85
    },

Any other ideas on what I should be checking?

I clarified that the 2GB for the model is off heap, so it is not part of the JVM, but there still has to be room for it.

Is that saying that the whole container's memory is 5GB?

If so, that's probably still not enough.

Can you show the OS like I showed above? Always show both OS and JVM.

Sorry, I missed the "off heap" portion. I tried one more time, assigning 6GB to the container and 1GB to the JVM heap. It's still complaining.

"os": {
      "available_processors": 4,
      "allocated_processors": 4,
      "names": [
        {
          "name": "Linux",
          "count": 1
        }
      ],
      "pretty_names": [
        {
          "pretty_name": "Ubuntu 20.04.6 LTS",
          "count": 1
        }
      ],
      "architectures": [
        {
          "arch": "amd64",
          "count": 1
        }
      ],
      "mem": {
        "total_in_bytes": 6442450944,
        "adjusted_total_in_bytes": 6442450944,
        "free_in_bytes": 3937071104,
        "used_in_bytes": 2505379840,
        "free_percent": 61,
        "used_percent": 39
      }
    },
	
"jvm": {
      "max_uptime_in_millis": 205021,
      "versions": [
        {
          "version": "21.0.2",
          "vm_name": "OpenJDK 64-Bit Server VM",
          "vm_version": "21.0.2+13-58",
          "vm_vendor": "Oracle Corporation",
          "bundled_jdk": true,
          "using_bundled_jdk": true,
          "count": 1
        }
      ],
      "mem": {
        "heap_used_in_bytes": 651999304,
        "heap_max_in_bytes": 1073741824
      },
      "threads": 84
    },

Can you try without the -m 5GB?
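In other words, something like this: your earlier command with the hard memory cap dropped (keep whatever heap size you've settled on; the heap flags below are just a placeholder):

docker run -e ES_JAVA_OPTS="-Xms4g -Xmx4g" --name es01 --net elastic -p 9200:9200 -it docker.elastic.co/elasticsearch/elasticsearch:8.12.2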

That worked! Thank you for all the assistance.

