Logstash is losing connection to Elasticsearch

Hi,

My Logstash disconnects from Elasticsearch after about 20 minutes, even though Elasticsearch is still running and shows no errors.

Here are the logs I have:

[WARN ][logstash.outputs.elasticsearch] Marking url as dead. Last error: [LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError] Elasticsearch Unreachable: [http://192.154.107.18:9200/][Manticore::SocketTimeout] Read timed out {:url=>https://192.168.205.162:9200/, :error_message=>"Elasticsearch Unreachable: [http://192.154.107.18:9200/][Manticore::SocketTimeout] Read timed out", :error_class=>"LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError"}
[ERROR][logstash.outputs.elasticsearch] Attempted to send a bulk request to elasticsearch' but Elasticsearch appears to be unreachable or down! {:error_message=>"Elasticsearch Unreachable: [https://192.168.205.162:9200/][Manticore::SocketTimeout] Read timed out", :class=>"LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError", :will_retry_in_seconds=>2}



Logstash config:
input {
  syslog {
    port => 5000
    host => "0.0.0.0"
  }
}

filter {
  if [port] == "5000" {
    grok {
      match => { "message" => "%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_hostname} %{DATA:syslog_program}(?:\[%{POSINT:syslog_pid}\])?: %{GREEDYDATA:syslog_message}" }
      add_field => [ "received_at", "%{@timestamp}" ]
      add_field => [ "received_from", "%{host}" ]
    }
    date {
      match => [ "syslog_timestamp", "MMM  d HH:mm:ss", "MMM dd HH:mm:ss" ]
    }
  }
}

output {
  elasticsearch {
    hosts => "https://192.168.205.162:9200"
    index => "estack-test-pipeline-index"
    user => "elastic"
    password => "xxxxxx"
    cacert => "/etc/logstash/newfile.crt.pem"
    ssl_certificate_verification => false
  }
  stdout { codec => rubydebug }
}
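
Since the error is a read timeout, one thing I'm considering is raising the client timeout on the elasticsearch output. This is just a guess on my part; I'm assuming the plugin's timeout option (which I believe defaults to 60 seconds) is the relevant knob, and the value 120 below is arbitrary:

output {
  elasticsearch {
    hosts => "https://192.168.205.162:9200"
    index => "estack-test-pipeline-index"
    user => "elastic"
    password => "xxxxxx"
    cacert => "/etc/logstash/newfile.crt.pem"
    ssl_certificate_verification => false
    # Untested guess: allow slow bulk requests more time before the read times out
    timeout => 120
  }
  stdout { codec => rubydebug }
}

Is that the right option to tune, or should I be looking somewhere else for what is closing the connection?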
