Primary shard allocation on a server running multiple ES nodes

I have a cluster where I am running multiple ES nodes on a single server. The cluster is composed of 16 servers, each running 4 ES nodes. Are there any shard or index allocation awareness settings that would prevent multiple primary shards from living on the same server?

In other words, I'd like to tag the ES nodes running on a server and make sure that only one primary shard of a given index is allocated to any of the ES nodes on that server. The other ES nodes on that server could hold replicas, but not other primary shards.
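For context, this is roughly the kind of tagging I have in mind, sketched with the Python client. The attribute name server_id, the host address, and the client version are my assumptions, and as I understand it awareness only spreads copies of the same shard across the attribute values rather than keeping primaries of different shards apart:

```python
# Sketch only: elasticsearch-py 7.x style; "server_id" is a placeholder for
# however the hardware ends up being tagged.
from elasticsearch import Elasticsearch

es = Elasticsearch(["http://localhost:9200"])

# Each of the 4 nodes on a given server would carry the same tag in its
# elasticsearch.yml, e.g.:
#   node.attr.server_id: server-01   (node.server_id on older versions)

# Then the cluster is told to use that attribute for allocation awareness.
es.cluster.put_settings(body={
    "persistent": {
        "cluster.routing.allocation.awareness.attributes": "server_id"
    }
})
```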

I am seeing performance issues when bulk indexing a fairly large request, 10 MB, into an index that has multiple primary shards on one server. The performance of the server hosting those primary shards degrades. I would like to balance the shards of an index more evenly across the different hardware in the cluster. The index has 5 primary shards and 1 replica.
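To make the imbalance concrete, here is roughly how I check which node each primary of the index lands on (a sketch with the Python client; my_index and the host address are placeholders):

```python
# Sketch only: list which node holds each primary shard of the index,
# assuming elasticsearch-py 7.x call style.
from elasticsearch import Elasticsearch

es = Elasticsearch(["http://localhost:9200"])

shards = es.cat.shards(index="my_index", format="json",
                       h="index,shard,prirep,state,node")

for s in shards:
    if s["prirep"] == "p":  # primaries only
        print("shard", s["shard"], "->", s["node"])
```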

Thanks.

To prevent multiple copies of the same shard from being allocated on the same physical server, you can use cluster.routing.allocation.same_shard.host (see here). This also ensures that only a single copy of that shard is affected if the physical server crashes.
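For example, something along these lines with the Python client (just a sketch; on older releases the setting would need to go into each node's elasticsearch.yml instead):

```python
# Sketch only: enable the same-host check. In recent versions this is a
# dynamic cluster setting; on older releases set
# cluster.routing.allocation.same_shard.host: true in elasticsearch.yml.
from elasticsearch import Elasticsearch

es = Elasticsearch(["http://localhost:9200"])

es.cluster.put_settings(body={
    "persistent": {
        "cluster.routing.allocation.same_shard.host": True
    }
})
```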

Thanks. I'm not concerned with multiple copies of the same shard; we've got that under control. What I am trying to avoid is primary shards (P) P1 and P2 being placed on the same server.

Let's say that I have 4 ES nodes, N1, N2, N3 and N4, all running on server 1. I also have 4 ES nodes running on each of servers 2-16.

If P1 gets assigned to N1 on server 1, I want to prevent P2 from being assigned to N1, N2, N3, or N4 on server 1. I would like P2 to be assigned to an ES node running on a different server, say server 2, where there are also no other primary shards on any of its ES nodes, and so on.
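I realize I could presumably move an individual shard by hand with the reroute API, roughly like this (a sketch; the index name, shard number, and node names are placeholders), but I'm after something the allocator enforces on its own rather than one-off moves that rebalancing might later undo:

```python
# Sketch only: a one-off manual move of one shard's primary copy to a node
# on another server. All names and numbers here are placeholders, and the
# balancer may relocate the shard again unless rebalancing is tuned.
from elasticsearch import Elasticsearch

es = Elasticsearch(["http://localhost:9200"])

es.cluster.reroute(body={
    "commands": [
        {
            "move": {
                "index": "my_index",
                "shard": 1,          # shard whose primary currently sits on server 1
                "from_node": "N2",   # node on server 1 holding that primary
                "to_node": "N5"      # a node on a different server
            }
        }
    ]
})
```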

From what I've read, I don't think what I am trying to do is possible, but I'm just trying to see if I've missed something.

No, that's not possible. I don't understand how balancing primaries would help with indexing, though, since replicas receive the same indexing load as primaries.

Thanks for the quick response, much appreciated.

We can control where the replicas go much more easily and make sure we know where they are placed, but I wasn't sure whether anything could be done to control primary allocation specifically.

Again, thank you very much.