I'm currently in the throes of implementation and have come across something confusing. Hopefully one of you experts can help me out.
First, my output code block, which is in a separate file from the inputs: 90-ES_output.conf
output {
  elasticsearch {
    hosts => ["roundrobindns.mydomain.local:9200"]
    sniffing => true
    manage_template => false
    index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
    document_type => "%{[@metadata][type]}"
  }
}
With this configuration, if I tail the logs, I see one of these messages every two seconds or so:
Oct 26 19:10:09 myserver logstash[9785]: [2018-10-26T19:10:09,899][WARN ][logstash.outputs.elasticsearch] Attempted to resurrect connection to dead ES instance, but got an error. {:url=>"http://127.0.0.1:9200/", :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :error=>"Elasticsearch Unreachable: [http://127.0.0.1:9200/][Manticore::SocketException] Connection refused (Connection refused)"}
Prior to that, Logstash happily starts on my default Beats port and queries my cluster of Elasticsearch boxes. Logstash functionality doesn't seem to be inhibited by this error. However, if I drop 'sniffing => true' from my code block, the tailed syslog no longer produces that error.
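For context on what I think sniffing is doing: as I understand it, the elasticsearch output's sniffer periodically calls the cluster's _nodes/http API and rebuilds its host list from each node's http.publish_address, so a node bound to loopback would surface as 127.0.0.1. Here's a minimal sketch of that extraction; the response payload, node id, and node name below are made up for illustration:

```python
import json

# Illustrative _nodes/http response; the node id and name are invented.
# A node whose HTTP layer publishes a loopback address would show up
# exactly like this in a sniffed host list.
sample_response = json.loads("""
{
  "nodes": {
    "abc123": {
      "name": "es-node-1",
      "http": { "publish_address": "127.0.0.1:9200" }
    }
  }
}
""")

def sniffed_hosts(nodes_http_response):
    """Collect each node's HTTP publish address, roughly as a sniffer would."""
    return [
        node["http"]["publish_address"]
        for node in nodes_http_response["nodes"].values()
        if "http" in node
    ]

print(sniffed_hosts(sample_response))  # ['127.0.0.1:9200']
```

If that's the mechanism, then the loopback URL in my WARN message would just be whatever publish address the cluster reports, not something Logstash invents.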
Is it by design that sniffing for new cluster nodes looks at the loopback address?