ECE RAM to Storage Ratio

I was looking at how to change the RAM-to-storage ratio in ECE. I found a couple of threads asking the same question that had links, but those links are both missing now. Is this something that can be configured? Can a different ratio also be configured for dedicated master nodes? The VMs that we can provision don't fit the 1:32 ratio.

Hi @adesouza

In 1.0 there is unfortunately no way of globally configuring the ECE RAM to storage ratio - this will be supported in a forthcoming minor release version (but probably not the next one)

Currently, if you want a different ratio you have to set it per cluster via the `overrides.quota.fs_multiplier` parameter on the advanced config page

(or using the raw metadata API)



Master nodes have no quota at all since they don't store data

1 Like

In the ECE UI my master node is showing as having 32GB of disk space assigned.

@adesouza ha fair point :slight_smile: I glossed over the details

Master nodes have a quota and are treated identically to data nodes in this regard, BUT:

  • They can't actually store data, so they never use up the quota
  • The data storage volume is shared between all the clusters, so unused quota does not affect the actual disk usage of the host

e.g. if you had a 100GB disk, you could have 20 masters all claiming 32GB each, and the actual disk usage would just be 20x the container overhead
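A quick sketch of that overcommit arithmetic (plain shell arithmetic, not an ECE command; the numbers are taken from the example above):

```shell
# 20 master nodes each claim a 32GB quota on a 100GB disk.
disk_gb=100
masters=20
quota_per_master_gb=32
total_claimed_gb=$((masters * quota_per_master_gb))
echo "claimed: ${total_claimed_gb}GB of a ${disk_gb}GB disk"
# Masters never write index data, so actual usage stays at roughly 20x the
# per-container overhead despite the claimed quota far exceeding the disk.
```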

> Currently if you want to allocate a different one you have to do it per cluster by setting the overrides.quota.fs_multiplier parameter via the advanced config page

Thanks very much for this information! Maybe I missed it in the documentation, but is there a place I can find the other overrides and their type definitions? I am currently testing this parameter with an integer type, but that is a guess without the documentation.

Thanks in advance!

1 Like

@Alex_Piggott Great, thanks for clearing that up.
With regards to the `overrides.quota.fs_multiplier` parameter, where exactly in the JSON (plan or data?) do I need to add this?

@IanGabes - apologies, currently they are undocumented internal fields that we mention only when we need to help people work around limitations in 1.0

With each minor release we are adding curated and documented models to the API for the settings that are currently only exposed via this "advanced" page

In the meantime do ask here and we'll dispense "tribal knowledge" on the fields with the proviso that they are not considered stable across version releases

The `fs_multiplier` field should be an integer and is multiplied by the cluster capacity (in GB) to give the disk quota - let me know if you need any more details on that
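For example (illustrative arithmetic only, not an ECE command; the 1GB capacity here is an assumed value, and the default multiplier follows from the 1:32 ratio mentioned at the top of the thread):

```shell
# disk quota (GB) = cluster capacity (GB) x fs_multiplier
capacity_gb=1       # an assumed 1GB cluster
fs_multiplier=32    # the default 1:32 RAM-to-storage ratio
quota_gb=$((capacity_gb * fs_multiplier))
echo "disk quota: ${quota_gb}GB"
# Setting fs_multiplier to 35 on the same cluster would yield a 35GB quota.
```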

An example of its use (inserted at the top level of the "Data" field under config/advanced):

  "overrides": {
    "quota": {
      "fs_multiplier": 35



To set `fs_multiplier`, go to the cluster's manage page; near the bottom there is a field called "Advanced Cluster Configuration", with a clickable link:

Clicking on that takes you to a page with two JSON boxes, the lower of which is marked "Data". Insert the JSON mentioned in the previous post at the top level of that lower box, and hit the "Save" button right at the bottom of the screen:


It takes effect instantly; it's not like a plan change that goes through lots of steps to complete.

1 Like


The 2 other "override" fields that have proven useful so far are:

  "resources": {
    "cpu": {
      "hard_limit": true

If you set `hard_limit` to false then the cluster gets the entire CPU of the host instead of a subset determined by its size. Obviously this is not without risk to overall platform stability

  "overrides": {
   "resources": { // (needed for 1.1+)
       "cpu": {
         "factor": 1.3

A safer way of boosting the CPU for a given cluster is by setting `factor` ... e.g. to double the CPU available, change it from 1.3 (the default) to 2.6

(At the same level as `factor` is `processors`, an integer that determines how many processors Elasticsearch believes its container has, and so changes the thread pool defaults etc - we haven't found any case where this makes a noticeable difference over the hand-curated-vs-cluster-size defaults)
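To make the scaling concrete (a rough sketch; the exact allocation formula is internal to ECE, but `factor` acts as a linear multiplier on the cluster's CPU share):

```shell
# Raising factor from the default 1.3 to 2.6 scales the CPU share linearly,
# i.e. a 2x boost.
boost=$(awk 'BEGIN { printf "%.1f", 2.6 / 1.3 }')
echo "CPU boost: ${boost}x"
```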

1 Like

I should add that apart from `hard_limit`, these fields only get applied instantly from ECE 1.0.2 onwards (with 1.0.1 and earlier they have to be associated with a plan change). Needless to say, running ECE 1.0.2 is recommended!

@Alex_Piggott I copied the `fs_multiplier` above into the Data JSON of my cluster, but nothing changed - I was expecting the disk capacity to now show 35GB, but it's still showing 32GB. I'm using 1.0.2.

@adesouza ... ah yes, sorry - one other thing I should have mentioned is that it doesn't get displayed in the overview :frowning: ... but it has been applied ... this was always a stopgap, so it doesn't have great UI support, apologies

(note: your post has prompted lots of internal discussions about how to improve this for the not-too-far-off 1.1 release, since the full solution is now expected in 1.3 ie 2018)


There is an internal API call that appears to allow you to set the default ratio:

export HOST="http://address:12400"
export PASSWORD="password of root"

#(to login)
export AC_TOKEN=$(curl -s "$HOST/api/v0.1/login" -XPOST -d "{\"username\": \"root\", \"password\": \"$PASSWORD\"}" | jq -r '.token') && export AC_AUTH="Authorization: Bearer $AC_TOKEN" && echo $AC_AUTH

#to make the change, eg here I set it to 64:
curl -XPUT --header "$AC_AUTH" "$HOST/api/v0.1/regions/ece-region/node_types/elasticsearch/default" -d "{\"node_type_id\":\"default\",\"overrides\":{\"instance_data\":{\"overrides\":{\"quota\":{\"fs_multiplier\":64}}}}}"
# {"node_type_id":"default","overrides":{"instance_data":{"overrides":{"quota":{"fs_multiplier":64}}}},"ok":true}

#To check run:
curl -XGET --header "$AC_AUTH" "$HOST/api/v0.1/regions/ece-region/node_types/elasticsearch"
# {"node_type_id":"default","overrides":{"instance_data":{"overrides":{"quota":{"fs_multiplier":64}}}},"ok":true}

Provided the cluster is provisioned with node_configuration_id: "default" (which it should be by default), it will pick up the new settings (which should also be reflected in the UI)

Note this is an unsupported feature in 1.0 (though I just had a play and it does appear to work) and will likely change in 1.1 (we will likely add a supported equivalent to replace it)!


This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.