I am using ES 7.9.1. I created a few ingest pipelines for small enrichments using the UI. Each pipeline works fine individually when I load data using logstash-output-elasticsearch. To apply all the enrichments together, I combined the 5 ingest pipelines into a single pipeline and used it in the logstash-output-elasticsearch plugin.
But I am getting the following error:
[2020-10-28T08:28:14,042][WARN ][logstash.outputs.elasticsearch][main][8b15d2eec22acf181b52840d4936ea5f6a231ac7026b4f6f314b06b04f139394] Marking url as dead. Last error: [LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError] Elasticsearch Unreachable: [http://elastic:xxxxxx@server_name:9200/][Manticore::SocketTimeout] Read timed out {:url=>http://elastic:xxxxxx@server_name:9200/, :error_message=>"Elasticsearch Unreachable: [http://elastic:xxxxxx@server_name:9200/][Manticore::SocketTimeout] Read timed out", :error_class=>"LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError"}
[2020-10-28T11:31:26,846][ERROR][logstash.outputs.elasticsearch][main][93fecca09435a6ec0a5624ad0c9131d67dd7f680e35db24c6150de34cfa21d51] Attempted to send a bulk request to elasticsearch' but Elasticsearch appears to be unreachable or down! {:error_message=>"Elasticsearch Unreachable: [http://elastic:xxxxxx@server_name:9200/][Manticore::SocketTimeout] Read timed out", :class=>"LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError", :will_retry_in_seconds=>2}
My Logstash configuration looks like:
output {
  elasticsearch {
    hosts    => "${ES_SERVER}"
    index    => "%{index_name}"
    pipeline => "enrichment_pipeline_name"
    user     => "${ES_USER}"
    password => "${ES_PASS}"
  }
}
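For reference, the combined pipeline is defined roughly like this (a sketch from Kibana Dev Tools; the sub-pipeline names here are placeholders, not my actual pipeline names):

```json
PUT _ingest/pipeline/enrichment_pipeline_name
{
  "description": "Chains the five enrichment pipelines in sequence",
  "processors": [
    { "pipeline": { "name": "enrichment_1" } },
    { "pipeline": { "name": "enrichment_2" } },
    { "pipeline": { "name": "enrichment_3" } },
    { "pipeline": { "name": "enrichment_4" } },
    { "pipeline": { "name": "enrichment_5" } }
  ]
}
```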
I found the article below suggesting to reduce the number of pipeline workers, but I am using the default setting and not specifying any worker setting in my job.
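If I understand the suggestion correctly, reducing the workers would look something like this (a sketch; the pipeline id and config path are assumptions for illustration):

```yaml
# config/pipelines.yml -- cap this pipeline at 2 worker threads
# (equivalent to starting Logstash with `-w 2`)
- pipeline.id: main
  pipeline.workers: 2
  path.config: "/etc/logstash/conf.d/*.conf"
```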