First question: how have you configured replication for this cluster? That has a large effect on how much data is being stored per node.
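To see why, here is a back-of-the-envelope sketch of how the replica count multiplies the shard (and data) count per node. All of the numbers below are hypothetical placeholders, not values from your cluster:

```python
# Rough illustration: each replica adds a full extra copy of every
# primary shard, so shard count per node scales with (1 + replicas).
# All numbers are made up for illustration.

def shards_per_node(indices, primaries_per_index, replicas, nodes):
    """Total shard copies in the cluster, divided evenly across data nodes."""
    total_copies = indices * primaries_per_index * (1 + replicas)
    return total_copies / nodes

# e.g. 650 indices, 1 primary shard each, 3 data nodes:
print(shards_per_node(650, 1, 0, 3))  # no replicas
print(shards_per_node(650, 1, 1, 3))  # 1 replica: double the shards per node
```

Since each of those shard copies carries its own heap overhead, going from zero replicas to one roughly doubles the per-node overhead, even though the logical data set is unchanged.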
Perhaps you have already read the blog post "How many shards should I have in my Elasticsearch cluster?", but if not it is probably worth a look.
> Each shard has data that need to be kept in memory and use heap space. This includes data structures holding information at the shard level, but also at the segment level in order to define where data reside on disk. The size of these data structures is not fixed and will vary depending on the use-case. […] The more heap space a node has, the more data and shards it can handle.
>
> Indices and shards are therefore not free from a cluster perspective, as there is some level of resource overhead for each index and shard.
In other words, you should expect shards to use heap space even when they are not actively being queried or written to. Note that your heap dump shows many Elasticsearch `CacheSegment` objects; these may simply be what your indices' mappings require.
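If you want to see where that heap is going on a per-node and per-segment basis before digging further into heap dumps, the `_cat` APIs are a lighter-weight starting point. A sketch (the column lists here are illustrative, and I am assuming you can hit the cluster's HTTP endpoint):

```
GET _cat/nodes?v&h=name,heap.percent,segments.count,segments.memory
GET _cat/segments?v&h=index,shard,segment,size,size.memory
```

The first request gives a quick per-node view of heap pressure versus segment count; the second breaks segment memory down per index and shard, which can show whether a handful of indices dominate or the overhead is spread evenly across all 650.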
The reason I asked about replication is that it looks like your shard sizes are within the guidelines recommended by the "How many shards?" blog post, but if you have enabled replication, you may have more shards per node than recommended. It's very hard to say in the abstract, since so much depends on the data and mappings.
You may be interested in Index Lifecycle Management (ILM), a feature that was released after that blog post was written. If you have "old" indices that do not need to be queried frequently, you can let the cluster "freeze" them, which reduces the amount of heap space they use. Some links:
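As a sketch of what that can look like, here is a minimal ILM policy that freezes indices once they reach a cold phase. The policy name and the `min_age` threshold are made-up placeholders, and I am assuming a 6.6+/7.x cluster where the ILM `freeze` action is available:

```
PUT _ilm/policy/old_logs_policy
{
  "policy": {
    "phases": {
      "hot": {
        "actions": {}
      },
      "cold": {
        "min_age": "30d",
        "actions": {
          "freeze": {}
        }
      }
    }
  }
}
```

Once an index managed by this policy ages past 30 days, ILM freezes it; frozen indices keep far less of their metadata on the heap, at the cost of slower queries when they are searched.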
Does this mean that your cluster's heap usage is expected behavior? In truth, I do not know. It would help to know a little bit more about your index replication policy and your use case. Do all or most of the 650 indices have the same mappings, like you would see if you were indexing logs and breaking up your data by time?
I hope some of this is helpful to you.