Pod status stuck at Terminating when k8s node not ready

I am trying to simulate Kubernetes environment faults to test the ECK operator's self-healing ability, for example by stopping the kubelet or shutting down a node.
But the ES pod gets stuck in the Terminating status when the node goes down. Is this by design?
And is there a way for the operator to handle a k8s node failure by itself?
I know this is hard to handle fully, but we could solve part of it first.
For example, when a k8s node fails and the ES node leaves the cluster, the operator could try to schedule the pod to another k8s node after a few minutes.

If a K8s node is partitioned from the control plane for too long, the Pods that were hosted on that node are put in the "Unknown" state (except for DaemonSets, in which case it is the "NodeLost" state). I'm not sure I understand in which case they transition to a Terminating status.
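
For reference, the phase and reason of the Pods hosted on the partitioned node can be inspected with client-go (a "Terminating" Pod in kubectl output is one whose deletionTimestamp is set but which has not been removed yet). This is only a minimal sketch, assuming a local kubeconfig; the node name `worker-1` is a placeholder:

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a client from the local kubeconfig.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// List every Pod scheduled on the partitioned node and print its phase and reason.
	pods, err := clientset.CoreV1().Pods("").List(context.Background(), metav1.ListOptions{
		FieldSelector: "spec.nodeName=worker-1", // placeholder node name
	})
	if err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		fmt.Printf("%s/%s phase=%s reason=%s deletionTimestamp=%v\n",
			p.Namespace, p.Name, p.Status.Phase, p.Status.Reason, p.DeletionTimestamp)
	}
}
```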

There is no easy way to solve this problem automatically: being in an Unknown or Terminating state does not mean that the container is not running, even if the Pod has been forcibly deleted. Creating new Pods can be a costly operation on big clusters, and some extra caution should be taken when dealing with master nodes.
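
For context, "forcibly deleted" here means deleting the Pod with a zero grace period, which only removes the Pod object from the API server; the unreachable kubelet may still be running the container. A minimal client-go sketch, assuming a local kubeconfig (the Pod name below is just an example):

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a client from the local kubeconfig.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// A zero grace period removes the Pod object from the API server immediately,
	// without waiting for the unreachable kubelet to acknowledge the deletion.
	// The container may still be running on the partitioned node.
	gracePeriod := int64(0)
	err = clientset.CoreV1().Pods("default").Delete(
		context.Background(),
		"quickstart-es-default-0", // placeholder Pod name
		metav1.DeleteOptions{GracePeriodSeconds: &gracePeriod},
	)
	if err != nil {
		panic(err)
	}
	fmt.Println("pod force deleted")
}
```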

I think the Terminating state occurs because k8s tries to evict the pod but the node can no longer respond.
Maybe the ECK operator could use the _cat/nodes API to check that the node has left the cluster and that cluster health is green or yellow, then force delete the pod after a few minutes; the StatefulSet will create a new pod to replace it. A rough sketch of that check is shown below.
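
Something along these lines, as a sketch only: it assumes an unauthenticated Elasticsearch endpoint at `esURL`, and the Pod name and waiting period are placeholders (inside the operator this would go through its existing Elasticsearch client and credentials). The actual force deletion would be the client-go call shown earlier in this thread.

```go
package main

import (
	"encoding/json"
	"fmt"
	"io"
	"net/http"
	"strings"
	"time"
)

// esURL is a placeholder for the cluster's HTTP endpoint.
const esURL = "http://localhost:9200"

// nodeHasLeftCluster checks _cat/nodes to see whether the ES node is still listed.
func nodeHasLeftCluster(esNodeName string) (bool, error) {
	resp, err := http.Get(esURL + "/_cat/nodes?h=name")
	if err != nil {
		return false, err
	}
	defer resp.Body.Close()
	body, err := io.ReadAll(resp.Body)
	if err != nil {
		return false, err
	}
	for _, line := range strings.Split(string(body), "\n") {
		if strings.TrimSpace(line) == esNodeName {
			return false, nil // the node is still part of the cluster
		}
	}
	return true, nil
}

// clusterHealthAcceptable returns true when _cluster/health reports green or yellow.
func clusterHealthAcceptable() (bool, error) {
	resp, err := http.Get(esURL + "/_cluster/health")
	if err != nil {
		return false, err
	}
	defer resp.Body.Close()
	var health struct {
		Status string `json:"status"`
	}
	if err := json.NewDecoder(resp.Body).Decode(&health); err != nil {
		return false, err
	}
	return health.Status == "green" || health.Status == "yellow", nil
}

func main() {
	const stuckPod = "quickstart-es-default-0" // placeholder: with ECK the ES node name matches the Pod name
	const waitBeforeForce = 5 * time.Minute    // placeholder waiting period

	left, err := nodeHasLeftCluster(stuckPod)
	if err != nil {
		panic(err)
	}
	healthy, err := clusterHealthAcceptable()
	if err != nil {
		panic(err)
	}
	if left && healthy {
		fmt.Printf("after %s, %s could be force deleted; the StatefulSet then recreates it on a healthy k8s node\n",
			waitBeforeForce, stuckPod)
	}
}
```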