[logstash.outputs.elasticsearch][.monitoring-logstash] An unknown error occurred sending a bulk request to Elasticsearch

Hi,

I installed logstash 7.4.2 via rpm. Logstash accesses elasticsearch's REST API via haproxy, which load-balances between the elasticsearch nodes. TLS is passed through from the client, so there is no re-encryption on the haproxy side. Kibana uses the same load balancer URL and has been working without issues so far.
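The haproxy side is a plain TCP passthrough; roughly sketched like this (backend hostnames and timeout values here are placeholders, not my exact config):

```
frontend elasticsearch_https
    bind *:9201
    mode tcp
    timeout client 300s
    default_backend elasticsearch_nodes

backend elasticsearch_nodes
    mode tcp
    balance roundrobin
    timeout server 300s
    # TLS is not terminated here; bytes are forwarded as-is to the nodes
    server es01 es-node01:9200 check
    server es02 es-node02:9200 check
```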

I don't know if this matters, but I use self-signed certificates. All elasticsearch HTTPS endpoints use the load balancer's address as CN.
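One check I can run from the logstash host is to look at the certificate actually presented through the load balancer, using the same CA file logstash is configured with (host and CA path are from my setup; dropping the `openssl x509` pipe also shows the "Verify return code" line):

```shell
# Grab the certificate presented through the load balancer and print its
# subject, issuer and validity dates; fall back to a note if unreachable.
CERT_INFO=$(openssl s_client -connect elastic-lb.internal.de:9201 \
      -CAfile /etc/logstash/config_sets/metricbeat/certs/ca/elasticsearch/elastic.pem \
      </dev/null 2>/dev/null \
    | openssl x509 -noout -subject -issuer -dates 2>/dev/null \
    || echo "endpoint not reachable from this machine")
echo "$CERT_INFO"
```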

When I run logstash, I see the following recurring errors in the logs:

[2020-02-11T14:41:04,525][ERROR][logstash.outputs.elasticsearch][.monitoring-logstash] An unknown error occurred sending a bulk request to Elasticsearch. We will retry indefinitely {:error_message=>"Broken pipe (Write failed)", :error_class=>"Manticore::UnknownException", :backtrace=>["/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/manticore-0.6.4-java/lib/manticore/response.rb:37:in `block in initialize'", "/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/manticore-0.6.4-java/lib/manticore/response.rb:79:in `call'", "/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-output-elasticsearch-10.1.0-java/lib/logstash/outputs/elasticsearch/http_client/manticore_adapter.rb:74:in `perform_request'", "/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-output-elasticsearch-10.1.0-java/lib/logstash/outputs/elasticsearch/http_client/pool.rb:291:in `perform_request_to_url'", "/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-output-elasticsearch-10.1.0-java/lib/logstash/outputs/elasticsearch/http_client/pool.rb:278:in `block in perform_request'", "/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-output-elasticsearch-10.1.0-java/lib/logstash/outputs/elasticsearch/http_client/pool.rb:373:in `with_connection'", "/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-output-elasticsearch-10.1.0-java/lib/logstash/outputs/elasticsearch/http_client/pool.rb:277:in `perform_request'", "/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-output-elasticsearch-10.1.0-java/lib/logstash/outputs/elasticsearch/http_client/pool.rb:285:in `block in Pool'", "/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-output-elasticsearch-10.1.0-java/lib/logstash/outputs/elasticsearch/http_client.rb:143:in `bulk_send'", "/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-output-elasticsearch-10.1.0-java/lib/logstash/outputs/elasticsearch/http_client.rb:128:in `bulk'", 
"/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-output-elasticsearch-10.1.0-java/lib/logstash/outputs/elasticsearch/common.rb:296:in `safe_bulk'", "/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-output-elasticsearch-10.1.0-java/lib/logstash/outputs/elasticsearch/common.rb:201:in `submit'", "/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-output-elasticsearch-10.1.0-java/lib/logstash/outputs/elasticsearch/common.rb:169:in `retrying_submit'", "/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-output-elasticsearch-10.1.0-java/lib/logstash/outputs/elasticsearch/common.rb:38:in `multi_receive'", "org/logstash/config/ir/compiler/OutputStrategyExt.java:118:in `multi_receive'", "org/logstash/config/ir/compiler/AbstractOutputDelegatorExt.java:101:in `multi_receive'", "/usr/share/logstash/logstash-core/lib/logstash/java_pipeline.rb:243:in `block in start_workers'"]}
[2020-02-11T14:44:15,024][WARN ][logstash.outputs.elasticsearch][.monitoring-logstash] Marking url as dead. Last error: [LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError] Elasticsearch Unreachable: [https://logstash_system:xxxxxx@elastic-lb.internal.de:9201/][Manticore::ClientProtocolException] elastic-lb.internal.de:9201 failed to respond {:url=>https://logstash_system:xxxxxx@elastic-lb.internal.de:9201/, :error_message=>"Elasticsearch Unreachable: [https://logstash_system:xxxxxx@elastic-lb.internal.de:9201/][Manticore::ClientProtocolException] elastic-lb.internal.de:9201 failed to respond", :error_class=>"LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError"}
[2020-02-11T14:44:15,027][ERROR][logstash.outputs.elasticsearch][.monitoring-logstash] Attempted to send a bulk request to elasticsearch' but Elasticsearch appears to be unreachable or down! {:error_message=>"Elasticsearch Unreachable: [https://logstash_system:xxxxxx@elastic-lb.internal.de:9201/][Manticore::ClientProtocolException] elastic-lb.internal.de:9201 failed to respond", :class=>"LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError", :will_retry_in_seconds=>2}
[2020-02-11T14:44:17,034][ERROR][logstash.outputs.elasticsearch][.monitoring-logstash] Attempted to send a bulk request to elasticsearch, but no there are no living connections in the connection pool. Perhaps Elasticsearch is unreachable or down? {:error_message=>"No Available connections", :class=>"LogStash::Outputs::ElasticSearch::HttpClient::Pool::NoConnectionAvailableError", :will_retry_in_seconds=>4}
[2020-02-11T14:44:19,309][WARN ][logstash.outputs.elasticsearch][.monitoring-logstash] Restored connection to ES instance {:url=>"https://logstash_system:xxxxxx@elastic-lb.internal.de:9201/"}

The logstash_system user mentioned in the logs is the one used to push the monitoring data to elasticsearch.
The curious thing is that, despite these errors, the logstash instance still shows up in kibana's monitoring module with fresh data.

My logstash.yml is this:
node.name: elastic03-metricbeat-0
path.data: /var/lib/logstash/metricbeat/0
log.level: info
path.logs: /var/log/logstash/metricbeat/0
xpack.monitoring.enabled: ${LOGSTASH_XPACK_MONITORING_ENABLED}
xpack.monitoring.elasticsearch.username: ${monitoring_user}
xpack.monitoring.elasticsearch.password: ${monitoring_password}
xpack.monitoring.elasticsearch.hosts: ${XPACK_MONITORING_ELASTICSEARCH_HOSTS}
xpack.monitoring.elasticsearch.ssl.certificate_authority: "/etc/logstash/config_sets/metricbeat/certs/ca/elasticsearch/elastic.pem"
xpack.monitoring.elasticsearch.ssl.verification_mode: ${LOGSTASH_XPACK_MONITORING_ELASTICSEARCH_SSL_VERIFICATION_MODE}
xpack.monitoring.elasticsearch.sniffing: false
xpack.monitoring.collection.interval: ${LOGSTASH_XPACK_MONITORING_COLLECTION_INTERVAL}
xpack.monitoring.collection.pipeline.details.enabled: true

The systemd environment variables are these:

SUFFIX_WEEKLY="%{+xxxx.}w%{+ww}"
SUFFIX_MONTHLY="%{+YYYY.MM}"
SUFFIX_DAILY="%{+YYYY.MM.dd}"
GLOBAL_GROK_PATTERN_DIR="/etc/logstash/config_sets/metricbeat/pipelines/_GLOBAL_GROK_PATTERN"
ES_HOSTS="elastic-lb.internal.dtpublic.de:9201"
USE_ES_SSL="true"
ES_CA_CERT_PATH="/etc/logstash/config_sets/metricbeat/certs/ca/elasticsearch/elastic.pem"
USE_ES_OUTPUT_SSL_CERT_VERIFICATION=true
REDIS_HOST="elastic-lb.internal.dtpublic.de"
REDIS_PORT="16380"
REDIS_DB="0"
REDIS_SSL_ENABLED="true"
XPACK_MONITORING_ELASTICSEARCH_HOSTS="elastic-lb.internal.dtpublic.de:9201"
LOGSTASH_XPACK_MONITORING_ENABLED="true"
XPACK_MONITORING_ELASTICSEARCH_SSL_CERTIFICATE_AUTHORITY="/etc/logstash/config_sets/metricbeat/certs/elasticsearch/elastic.pem"
LOGSTASH_XPACK_MONITORING_ELASTICSEARCH_SSL_VERIFICATION_MODE=certificate
LOGSTASH_XPACK_MONITORING_COLLECTION_INTERVAL=10s
LOGSTASH_XPACK_MONITORING_COLLECTION_PIPELINE_DETAILS_ENABLED=true
LOGSTASH_KEYSTORE_PASS="my-secure-password"

Passwords for certificates and users are stored in keystore.
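For completeness, the keystore entries are managed with logstash-keystore, roughly like this (the key names match the `${monitoring_user}` / `${monitoring_password}` references in logstash.yml above; LOGSTASH_KEYSTORE_PASS has to be set in the environment):

```shell
# Add the monitoring credentials to the logstash keystore; each "add"
# prompts for the secret value, and "list" shows the stored key names.
/usr/share/logstash/bin/logstash-keystore add monitoring_user
/usr/share/logstash/bin/logstash-keystore add monitoring_password
/usr/share/logstash/bin/logstash-keystore list
```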

How can I test a bulk POST via curl, to see whether there are any issues outside of logstash? Or does anyone have a better idea?
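What I have pieced together so far from the docs is something like the following; I'm not sure it exercises exactly the same code path as logstash's bulk sender. Host, port, user and CA path are the ones from my setup above, the index name is a throwaway, and `<password>` is a placeholder:

```shell
# Build a minimal _bulk body: one action line plus one document line,
# NDJSON format, and the body must end with a trailing newline.
printf '%s\n' \
  '{"index":{"_index":"curl-bulk-test"}}' \
  '{"message":"bulk test via curl"}' \
  > /tmp/bulk-test.ndjson

# POST it through the same load balancer endpoint logstash uses.
curl --cacert /etc/logstash/config_sets/metricbeat/certs/ca/elasticsearch/elastic.pem \
     -u 'logstash_system:<password>' \
     -H 'Content-Type: application/x-ndjson' \
     -X POST 'https://elastic-lb.internal.de:9201/_bulk?pretty' \
     --data-binary @/tmp/bulk-test.ndjson \
  || echo "bulk request failed with curl exit code $?"
```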

Thanks, Andreas
