This is a brand-new ELK 5.0 install and I am a newbie. ES, Logstash, Kibana, and Metricbeat (localhost metrics) are installed on a single server with X-Pack local authentication enabled and working as expected. I am attempting to collect syslogs from remote servers where Filebeat is installed and shipping to the ELK server. Logstash shows records being received on the input and successfully passing through the filter, but nothing reaches the Elasticsearch output. Every minute, the following message is logged in the Logstash log (/var/log/logstash/logstash-plain.log):
[2016-11-02T07:02:43,814][ERROR][logstash.outputs.elasticsearch] Attempted to send a bulk request to elasticsearch, but no there are no living connections in the connection pool. Perhaps Elasticsearch is unreachable or down? {:error_message=>"No Available connections", :class=>"LogStash::Outputs::Elasticsearch::HttpClient::Pool::NoConnectionAvailableError", :will_retry_in_seconds=>64}
There is nothing pertinent in the ES log. Suggestions on where to investigate would be appreciated.
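As a sanity check, ES itself does answer on the same endpoint and credentials that the Logstash output is configured with (logstash_internal / password are the values from my beats.conf below; adjust to your own):

```shell
# Confirm Elasticsearch is reachable and the logstash_internal user can authenticate.
# Expect HTTP 200 and a cluster health JSON body; a 401 here would point at auth,
# a connection refused at networking/ES itself.
curl -u logstash_internal:password "http://localhost:9200/_cluster/health?pretty"
```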
OS and ES versions
Ubuntu 16.04.1 LTS
ES v5.0.0
LS v5.0.0
/etc/logstash/conf.d/beats.conf
input {
  beats {
    port => 5044
    ssl => true
    ssl_certificate => "/etc/pki/tls/certs/logstash-forwarder.crt"
    ssl_key => "/etc/pki/tls/private/logstash-forwarder.key"
  }
}
filter {
  if [type] == "syslog" {
    grok {
      match => { "message" => "%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_hostname} %{DATA:syslog_program}(?:\[%{POSINT:syslog_pid}\])?: %{GREEDYDATA:syslog_message}" }
      add_field => [ "received_at", "%{@timestamp}" ]
      add_field => [ "received_from", "%{host}" ]
    }
    syslog_pri { }
    date {
      match => [ "syslog_timestamp", "MMM d HH:mm:ss", "MMM dd HH:mm:ss" ]
    }
  }
}
output {
  elasticsearch {
    hosts => ["localhost:9200"]
    user => "logstash_internal"
    password => "password"
    sniffing => true
    manage_template => false
    index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
    document_type => "%{[@metadata][type]}"
  }
}
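In case it helps, this is how I have been syntax-checking the pipeline config (assuming the default .deb package install paths; adjust if yours differ):

```shell
# Logstash 5.x: validate the pipeline config and exit without starting the pipeline.
# --path.settings points at the packaged settings directory (/etc/logstash),
# which the 5.x .deb/.rpm installs require when running the binary by hand.
/usr/share/logstash/bin/logstash --path.settings /etc/logstash \
  -f /etc/logstash/conf.d/beats.conf --config.test_and_exit
```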