You may be thinking of [allocation filtering](https://www.elastic.co/guide/en/elasticsearch/reference/current/shard-allocation-filtering.html), which uses node attributes like `node.rack`. But that's for controlling where shards are allocated.
Each individual JVM that is running will be its own node; nothing special needs to be done. So if you execute `./bin/elasticsearch` twice (with the same or different config, or from entirely different directories), you'll have two ES nodes running on that machine.
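A minimal sketch of that, assuming a standard tarball install (the node names, data paths, and ports below are illustrative, not required values):

```shell
# Start two independent nodes on one machine. Each needs its own
# data path and HTTP port so they don't collide on disk or network.
./bin/elasticsearch -E node.name=node-a -E path.data=./data-a -E http.port=9200
./bin/elasticsearch -E node.name=node-b -E path.data=./data-b -E http.port=9201
```

Whether the two processes form one cluster or two separate ones is then just a matter of their `cluster.name` and discovery settings.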
What you may be remembering is that shard allocation awareness can be useful if you have two nodes on the same machine in the same cluster. In that case, you may want to set up zones so that a primary and its replica don't land on the same physical machine (but on two different nodes... that just happen to be on one machine).
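If you ever did need that, it would look roughly like this — a hedged sketch, where the attribute name `zone` and the values `zone1`/`zone2` are arbitrary labels I'm assuming, not built-in names:

```shell
# Each node on a given physical machine declares which "zone" it lives in;
# the cluster-level awareness setting then keeps a primary and its replica
# from being allocated to nodes in the same zone.
./bin/elasticsearch -E node.attr.zone=zone1 \
    -E cluster.routing.allocation.awareness.attributes=zone
./bin/elasticsearch -E node.attr.zone=zone2 \
    -E cluster.routing.allocation.awareness.attributes=zone
```

The same settings can go in each node's `elasticsearch.yml` instead of on the command line.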
But since you're running two entirely separate clusters, you don't have to worry about any of that.