Hot-Warm nodes - Indexes replicated across nodes

Hi,
I have 2 data nodes, one as master (hot) and the other as a data node (warm):

    authmattix x.x.x.  box_type          hot
    pukeko     x.x.x.x  box_type          warm

I have set all the indexes to number_of_replicas: 0 on the master.

Then I joined the warm data node to the cluster, but the problem is that indexes are being placed on my warm node, taking up space on its hard drive. Is this the expected behaviour, or am I missing something?

master node config:
    cluster.name: cluster01
    node.name: authmattix
    node.master: true
    node.data: true
    node.ingest: true
    cluster.remote.connect: false
    node.attr.box_type: hot

data node config:
    cluster.name: cluster01
    node.name: pukeko
    node.master: false
    node.data: true
    node.ingest: false
    cluster.remote.connect: false
    node.attr.box_type: warm

Thanks in advance for any insight on this.
Cheers,
Camilo.

Unless you are also setting index-level allocation filtering to use those box_type attributes, Elasticsearch will try to balance shards across all data nodes.
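For example, assuming the box_type attributes shown in your configs, you can pin the existing Filebeat indexes to the hot node with an allocation filter, and add the same setting to an index template so new indexes start out hot (the template name here is just a placeholder):

    PUT /filebeat-*/_settings
    {
      "index.routing.allocation.require.box_type": "hot"
    }

    PUT /_template/filebeat-hot-allocation
    {
      "index_patterns": ["filebeat-*"],
      "settings": {
        "index.routing.allocation.require.box_type": "hot"
      }
    }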

Are you using ILM?

No, I am not using ILM yet (that was my next step)

This is what I am seeing when I query the shards:

GET /_cat/shards/filebeat-7.5.1*
filebeat-7.5.1-2020.05.06 0 p STARTED 360236 155.5mb 156.62.1.187 authmattix
filebeat-7.5.1-2020.05.05 0 p STARTED 119063  47.4mb 156.62.238.6 pukeko

But when I try to relocate the index from the warm node (pukeko) to the hot node (authmattix), I get the following error:

PUT /filebeat-7.5.1-2020.05.05
{
  "settings": {
    "index.routing.allocation.require.box_type" : "hot"
  }
}

Result:

{
  "error": {
    "root_cause": [
      {
        "type": "resource_already_exists_exception",
        "reason": "index [filebeat-7.5.1-2020.05.05/fAg2fEPGSaKiPzQyDUuxFg] already exists",
        "index_uuid": "fAg2fEPGSaKiPzQyDUuxFg",
        "index": "filebeat-7.5.1-2020.05.05"
      }
    ],
    "type": "resource_already_exists_exception",
    "reason": "index [filebeat-7.5.1-2020.05.05/fAg2fEPGSaKiPzQyDUuxFg] already exists",
    "index_uuid": "fAg2fEPGSaKiPzQyDUuxFg",
    "index": "filebeat-7.5.1-2020.05.05"
  },
  "status": 400
}

How can I force the shards to stay on the hot node until it's time to relocate them to the warm node?
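The error above happens because `PUT /filebeat-7.5.1-2020.05.05` with a settings body is the create-index API, and that index already exists. To change a setting on an existing index, use the update-settings endpoint instead:

    PUT /filebeat-7.5.1-2020.05.05/_settings
    {
      "index.routing.allocation.require.box_type": "hot"
    }

`index.routing.allocation.require.*` is a dynamic setting, so it can be applied to a live index and the shard should then relocate to the hot node.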

I am using Logstash to ingest into Elasticsearch, and I've made some changes to incorporate ILM, which seems to be working fine, but even my new indexes are being placed on my warm node. Currently the ILM policy has the warm phase disabled, so I'm not sure what else to look at. I tried to relocate the indexes to the hot node and got the same error as in my previous post. Running out of ideas :confused:
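If the goal is for ILM to handle the hot-to-warm move later, a sketch of a policy with an allocate action in the warm phase might look like this (the policy name and min_age are placeholders):

    PUT /_ilm/policy/filebeat-hot-warm
    {
      "policy": {
        "phases": {
          "warm": {
            "min_age": "7d",
            "actions": {
              "allocate": {
                "require": { "box_type": "warm" }
              }
            }
          }
        }
      }
    }

Until an index enters the warm phase, an `index.routing.allocation.require.box_type: hot` setting (for example, applied via the index template) keeps it on the hot node.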

I have found the solution, in case someone faces the same problem.

I have removed the data node role from my warm and cold nodes.