I would be grateful if someone could explain the behavior for the following scenario.
A k8s cluster with 3 master nodes and 6 worker nodes. 3 of the worker nodes are selected to run dedicated Elasticsearch masters (these nodes are sized for the data in the cluster). If one k8s node fails, does ECK reschedule the master on the available nodes? (In this case, scheduling it on a node which already has a master Pod running.)
If a k8s node crashes or fails while hosting Elasticsearch Pods, those Pods will automatically be rescheduled on other available k8s nodes.
That's as long as there is another node available with compatible scheduling constraints. If you have special affinity rules, or you are using local persistent volumes bound to a particular host, then the Pod will stay Pending.
> 3 nodes from workers are selected to run dedicated master
I guess you're specifying this through affinity rules in the podTemplate, or through a nodeSelector? If those rules allow multiple masters to run on the same host, then you should be fine.
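For reference, here is a minimal sketch of what that could look like. The nodeSet name, the node label key `node-role/es-master`, and the Elasticsearch version are all illustrative assumptions, not taken from your setup:

```yaml
# Hypothetical ECK Elasticsearch spec pinning dedicated masters to
# labeled worker nodes. Label the 3 workers first, e.g.:
#   kubectl label node <worker-name> node-role/es-master=true
apiVersion: elasticsearch.k8s.elastic.co/v1
kind: Elasticsearch
metadata:
  name: quickstart
spec:
  version: 8.12.0
  nodeSets:
  - name: masters
    count: 3
    config:
      node.roles: ["master"]
    podTemplate:
      spec:
        # A plain nodeSelector restricts masters to these 3 nodes, but it
        # does NOT prevent two master Pods from landing on the same node
        # after a failure. Add a podAntiAffinity rule (required or
        # preferred) if you want to forbid co-location instead.
        nodeSelector:
          node-role/es-master: "true"
```

With only a nodeSelector like this, a master Pod from a failed node can be rescheduled onto one of the two remaining labeled nodes, even if a master is already running there.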
Let me know if that makes sense. If not, could you share your Elasticsearch spec, in case it contains custom affinity rules?