Configuring a node to only accept replica shards

I have a few hundred GB of indexes on my "production" node (just one machine in the cluster, as it's for my hobby projects). I am trying to run benchmarks for five of the indexes, each of which is about 18 GB in 4 shards. I would like to test how much faster queries and aggregations would be if I joined my gaming laptop to the cluster, but I don't want it to receive any primary shards.

As I understand it, I would lose data if the only copy of a shard (at the moment all indexes have zero replicas) were transferred to the laptop and I then stopped that node and deleted its data folder. Am I correct?
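If it helps to see the starting point, the cat shards API shows which node holds each copy and whether it is a primary (`p`) or a replica (`r`); `logs-1` below is just a placeholder index name:

```
GET _cat/shards/logs-1?v=true&h=index,shard,prirep,state,node
```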

New documents are constantly being indexed into other indexes which aren't part of this experiment.

I thought I could set "node.master" to false to block primary shards from being transferred to the node, but clearly that isn't the case: transferring starts immediately when the laptop joins the cluster. I never let it finish; instead I shut down the laptop's node to prevent possible data loss.
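For reference, this is roughly the setting in question (a sketch, not my exact config); in recent versions the legacy `node.master` flag has been replaced by `node.roles`:

```yaml
# Legacy form (pre-7.9) on the laptop:
node.master: false

# Current form: list the roles explicitly and leave "master" out.
node.roles: [ data ]

# Either way this only controls master *eligibility*; it says nothing
# about whether primary or replica shard copies are allocated to the node.
```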

Simply put: how do I prevent the laptop node from receiving primary shards? I only want to transfer replica shards to the laptop and keep all primary shards safely on the current master node.

I'm not sure whether this would be equivalent to having a setting like "cluster.routing.allocation.enable = replicas", but that value isn't supported.
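For context, `cluster.routing.allocation.enable` only accepts `all`, `primaries`, `new_primaries`, and `none`, and it applies to the whole cluster rather than to a single node, so even the closest existing value doesn't do what I'm after. A sketch of how it is normally set:

```
PUT _cluster/settings
{
  "persistent": {
    "cluster.routing.allocation.enable": "primaries"
  }
}
```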

Yes.

Short answer is: you can't.
Long answer: you can munge it with Cluster-level shard allocation and routing settings | Elasticsearch Guide [8.11] | Elastic
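Roughly, the "munging" would be shard allocation awareness: tag each node with a custom attribute (the attribute name `zone`, its values, and the index name `logs-1` below are placeholders) and give the benchmark indexes one replica so there are two copies to spread. Awareness distributes the copies across the attribute values, but it still doesn't let you pin the primary to a particular node:

```yaml
# elasticsearch.yml on the production machine:
node.attr.zone: prod

# elasticsearch.yml on the laptop:
node.attr.zone: laptop
```

```
PUT _cluster/settings
{
  "persistent": {
    "cluster.routing.allocation.awareness.attributes": "zone"
  }
}

PUT logs-1/_settings
{
  "index": { "number_of_replicas": 1 }
}
```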

Thanks for the info. Would it be appropriate to request (on GitHub) a new setting to reject primary shards on a specific node? This might be useful even in some production circumstances, such as with AWS spot instances.

Probably not, given that forced awareness exists too, but you can try!
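For completeness, forced awareness builds on the same hypothetical `zone` attribute: listing every expected value tells Elasticsearch to leave copies unassigned rather than relocating them all onto the remaining zone when one zone's nodes (the laptop) drop out:

```
PUT _cluster/settings
{
  "persistent": {
    "cluster.routing.allocation.awareness.attributes": "zone",
    "cluster.routing.allocation.awareness.force.zone.values": "prod,laptop"
  }
}
```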