Cluster stuck on high JVM heap usage

I'm running Elasticsearch 2.2.0 as a single-node cluster with a 4 GB heap, 7 GB of RAM, and 2 CPU cores.
I have also configured indices.fielddata.cache.size: 40%.
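For reference, this is roughly what that setting looks like in elasticsearch.yml; the circuit-breaker line is an additional, related setting (not something from my current config) that caps field data per request:

```yaml
# elasticsearch.yml
# Evict field data once the cache reaches 40% of heap (setting from this post)
indices.fielddata.cache.size: 40%

# Related but separate: the fielddata circuit breaker, which rejects requests
# that would load more field data than this fraction of heap (default 60%)
# indices.breaker.fielddata.limit: 60%
```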

The problem: when I query something from Kibana (very simple queries), a single query works fine, but if I keep running more queries Elasticsearch gets very slow and eventually gets stuck, because JVM heap usage (as shown in Marvel) climbs to 87-95%. The same thing happens when I try to load a Kibana dashboard, and the only way out is to restart the Elasticsearch service or clear all caches.
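The "clear all caches" workaround I mean is the clear-cache API; a sketch of the calls (assuming Elasticsearch is listening on localhost:9200):

```
# Clear all caches on every index
curl -XPOST 'http://localhost:9200/_cache/clear'

# Or clear only the field data cache
curl -XPOST 'http://localhost:9200/_cache/clear?fielddata=true'
```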

Why is the heap stuck like that?

You are running out of resources. How much data do you have in the cluster?

401 indices, 1,873 shards, 107,780,287 docs, 70.19 GB of data in total.
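A quick back-of-the-envelope calculation from those numbers shows why this is a problem: the average shard is tiny, yet each shard carries fixed per-shard overhead on the heap, and all of them sit on a single 4 GB heap.

```python
# Rough arithmetic on the cluster stats quoted above.
total_gb = 70.19   # total data
shards = 1873      # total shard count
heap_gb = 4        # JVM heap on the single node

avg_shard_mb = total_gb * 1024 / shards   # ~38 MB per shard
shards_per_heap_gb = shards / heap_gb     # ~468 shards per GB of heap

print(f"average shard size: {avg_shard_mb:.1f} MB")
print(f"shards per GB of heap: {shards_per_heap_gb:.0f}")
```

Shards this small waste heap on per-shard overhead (Lucene segment metadata, cluster state, etc.) instead of data, which is why fewer, larger shards are recommended.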

But why doesn't indices.fielddata.cache.size: 40% keep the heap under control?

Field data cache is just one of the things that can take up heap.
You also have too many shards.

So look at reindexing to reduce the shard count, change your index templates so that new indices are created with fewer shards, and move your mappings over to doc values.
Otherwise you need to add more resources to Elasticsearch.
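A sketch of what such a template could look like (the template name, index pattern, and field are made-up examples, not from this thread). In 2.x, doc values are already the default for not_analyzed string fields and for numeric/date fields, so mostly this is about making sure you aren't disabling them and aren't aggregating on analyzed strings:

```
PUT /_template/fewer_shards_example
{
  "template": "logstash-*",
  "settings": {
    "number_of_shards": 1,
    "number_of_replicas": 0
  },
  "mappings": {
    "_default_": {
      "properties": {
        "user": {
          "type": "string",
          "index": "not_analyzed",
          "doc_values": true
        }
      }
    }
  }
}
```

Doc values live on disk and in the filesystem cache instead of the JVM heap, so aggregations and sorts on such fields stop competing with everything else for those 4 GB.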