Logstash output to Elasticsearch cluster fails when one node is down

I have configured the Logstash output with three Elasticsearch nodes that are in a cluster, as below:

        elasticsearch {
           hosts => ["http://ip1:9200","http://ip2:9200","http://ip3:9200"]
           document_id => "%{sessionid}"
           index => "index"
        }

For testing, if I manually stop one of the nodes, I get the error below:

[WARN ][logstash.outputs.elasticsearch][events] Attempted to resurrect connection to dead ES instance, but got an error. {:url=>"http://ip1:9200/", :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :error=>"Elasticsearch Unreachable: [http://ip1:9200/][Manticore::SocketException] Connection refused (Connection refused)"}
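(For reference, this WARN comes from the output plugin's connection pool periodically re-probing the dead node; as long as the other hosts stay reachable, events should still be routed to them. A sketch of the same output with the pool-related options made explicit — `sniffing` and `resurrect_delay` are documented options of the elasticsearch output plugin, and the values shown are illustrative:)

```conf
output {
  elasticsearch {
    hosts => ["http://ip1:9200","http://ip2:9200","http://ip3:9200"]
    document_id => "%{sessionid}"
    index => "index"
    sniffing => true       # discover live cluster nodes at runtime instead of using only the static list
    resurrect_delay => 5   # seconds between attempts to revive a connection marked dead
  }
}
```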

I thought that even if one of the nodes is inaccessible, the other nodes would still receive data from Logstash.

Isn't that the expected behaviour? How can I make sure events are written to the active Elasticsearch nodes, even if one of the nodes is not reachable?

A 3-node cluster is a very particular situation. How is the cluster configured? What is the `minimum_master_nodes` value? If you only have one master-eligible node and that is the one you're shutting down, you will obviously have problems reaching the cluster. If `minimum_master_nodes` is 2 and you shut down one of only two master-eligible nodes, you will have problems as well. And if all of your nodes are master eligible but `minimum_master_nodes` is 1, then shutting down the master leaves 2 remaining master-eligible nodes and you risk the split-brain phenomenon, which will cause problems as well.
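In concrete terms, for a 3-node cluster where every node is master eligible, the quorum is `(3 / 2) + 1 = 2`. A sketch of the relevant settings in each node's `elasticsearch.yml` (this uses the pre-7.x zen discovery setting; node roles shown are assumptions about your setup):

```yaml
# elasticsearch.yml, same on each of the three nodes
node.master: true   # node is master eligible
node.data: true     # node also holds data
# quorum of master-eligible nodes: floor(3 / 2) + 1 = 2
discovery.zen.minimum_master_nodes: 2
```

With this, losing any one node still leaves a quorum of 2, so the cluster can elect a master and keep accepting writes.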

I have configured all nodes as both master eligible and data nodes, with `minimum_master_nodes` set to 2. I'm in a situation where I can't use more than 3 nodes. Will that be fine?