Rally does not work for multiple Elasticsearch clusters

I have the following configuration to run esrally:
"target_hosts.json"

{
  "default": [
    {"host": "127.0.0.1", "port": 9200},
    {"host": "127.0.0.1", "port": 9201}
  ],
  "remote": [
    {"host": "10.1.1.1", "port": 9200},
    {"host": "10.1.1.2", "port": 9200}
  ]
}

And I connected my Elasticsearch clusters as follows:

curl -XPUT '127.0.0.1:9200/_cluster/settings' -H 'Content-Type: application/json' -d '
{
  "persistent": {
    "cluster": {
      "remote": {
        "node1": {
          "seeds": ["10.1.1.1:9300"]
        }
      }
    }
  }
}'

and the two clusters connected successfully.
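One way to double-check that the remote cluster really is connected is the remote cluster info API (GET _remote/info). A minimal sketch of inspecting its response shape; the sample response below is illustrative, but the field names (connected, num_nodes_connected, seeds) match what the API returns:

```python
# Sketch: check whether a remote cluster alias reports as connected,
# given the JSON body returned by `curl 127.0.0.1:9200/_remote/info`.
def remote_connected(remote_info: dict, alias: str) -> bool:
    """Return True if the named remote cluster is connected with at least one node."""
    info = remote_info.get(alias, {})
    return bool(info.get("connected")) and info.get("num_nodes_connected", 0) > 0

# Abridged example of a _remote/info response for the setup above.
sample = {
    "node1": {
        "connected": True,
        "num_nodes_connected": 1,
        "seeds": ["10.1.1.1:9300"],
    }
}

print(remote_connected(sample, "node1"))  # → True
```

If this returns False (or the alias is missing entirely), the remote seeds are not reachable on the transport port and the benchmark setup should be fixed before running Rally.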

I used the following rally command to get the performance of the multiple Elasticsearch clusters:

esrally --offline --track=eventdata --target-hosts="target_hosts.json" --pipeline=benchmark-only --challenge=append-no-conflicts --track-params='number_of_shards:4,number_of_replicas:0'

Rally started to benchmark the clusters but, unfortunately, it did not use the remote cluster at all; it just put all 4 shards on the local cluster and finished the benchmark.
Could you please let me know what is wrong with this configuration? Is there a link I can follow to run this benchmark?

For bulk indexing, Rally will always use the cluster specified under default.

Out of the box, the node-stats and ccr-stats telemetry devices are capable of taking advantage of the remote cluster. See the note in Advanced topics/target-hosts:

All built-in operations will use the connection to the default cluster. However, you can utilize the client connections to the additional clusters in your custom runners.

So if you need Rally to run additional operations against the remote cluster, you can use a custom runner targeting that cluster.

You can also take a look at this recipe: https://esrally.readthedocs.io/en/stable/recipes.html#testing-rally-against-ccr-clusters-using-a-remote-metric-store