Broken pipe errors while talking to the AWS Elasticsearch service

Hi all,

I am getting the following error:
"INFO: I/O exception (java.net.SocketException) caught when processing request to {}->http://:80: Broken pipe"
in the logstash.err file. During the same period, in the logstash.log file I see:
"{:timestamp=>"2016-10-04T14:53:35.475000+0000", :message=>"Attempted to send a bulk request to Elasticsearch configured at '["http:///"]', but Elasticsearch appears to be unreachable or down!", :error_message=>"Broken pipe", :class=>"Manticore::SocketException", :level=>:error}"

I know that the AWS service is reachable from this EC2 instance because I can curl the service's health-check endpoint. I also see indexes being created in the AWS service, but after a while no more documents are added to the index.

What can cause this error?

The AWS Elasticsearch service is version 1.5.
Logstash is version 2.3.4.1.
Configuration file:
input {
  beats {
    port => 5044
    congestion_threshold => 180
  }
}
filter {
  if [beat][name] == "Raven" {
    if [type] == "analytics" {
      json {
        source => "message"
      }
    }
  }
}
output {
  elasticsearch {
    hosts => ":80"
    manage_template => false
    index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
    document_type => "%{[@metadata][type]}"
    flush_size => 20
    workers => 4
  }
}

hosts => ":80"

Try including a hostname here? Also, are you sure it's running on port 80 rather than the usual port 9200?

I do have a valid hostname; I scrubbed it from the config. According to the AWS documentation they use HTTP and HTTPS for communication, so that would be ports 80/443. The problem I face is that I see some documents loaded into Elasticsearch, and then I get broken pipe errors every so often, 3 to 6 per minute. Also, AWS limits what you can send to it to 100 MB per request, but how do I throttle this in Logstash?
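On the throttling question: in Logstash 2.x the elasticsearch output's `flush_size` setting caps how many events go into a single bulk request, so lowering it (and reducing `workers`) is one way to keep each request well under the service's payload limit. A minimal sketch, with a placeholder hostname since yours is scrubbed:

```
output {
  elasticsearch {
    # Placeholder endpoint - substitute your actual AWS Elasticsearch domain
    hosts => ["http://my-es-domain.us-east-1.es.amazonaws.com:80"]
    manage_template => false
    index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
    # Send at most this many events per bulk request; tune it down
    # if your events are large and requests approach the size limit
    flush_size => 500
    # Fewer workers means fewer concurrent bulk requests in flight
    workers => 1
  }
}
```

Note that `flush_size` caps the event count, not the byte size, so a batch of very large events can still produce an oversized request; if that happens, lower `flush_size` further.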

Thanks for info Magnus.