Attempted to send a bulk request to elasticsearch, but no there are no living connections in the connection pool

I am getting the following errors in the Logstash log.

$ tail /var/log/logstash/logstash-plain.log
[2021-06-29T01:17:53,584][ERROR][logstash.outputs.elasticsearch][main][9acc851acb29a066b989c90cdd9ba461ce676a2f7643e20b77d037fb39a78bb9] Attempted to send a bulk request to elasticsearch' but Elasticsearch appears to be unreachable or down! {:error_message=>"Elasticsearch Unreachable: [http://localhost:9200/][Manticore::SocketTimeout] Read timed out", :class=>"LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError", :will_retry_in_seconds=>2}
[2021-06-29T01:17:53,584][ERROR][logstash.outputs.elasticsearch][main][afee06b36db38f43ad7b092fc093bdf86c899e5666e20d96b6b9a55fd9cca38e] Attempted to send a bulk request to elasticsearch' but Elasticsearch appears to be unreachable or down! {:error_message=>"Elasticsearch Unreachable: [http://localhost:9200/][Manticore::SocketTimeout] Read timed out", :class=>"LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError", :will_retry_in_seconds=>2}
[2021-06-29T01:17:53,584][ERROR][logstash.outputs.elasticsearch][main][ccd81b05f3abb937797857ab927b9f8affd1efa78061a0e1992ab12345a533c6] Attempted to send a bulk request to elasticsearch' but Elasticsearch appears to be unreachable or down! {:error_message=>"Elasticsearch Unreachable: [http://localhost:9200/][Manticore::SocketTimeout] Read timed out", :class=>"LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError", :will_retry_in_seconds=>2}
[2021-06-29T01:17:53,750][ERROR][logstash.outputs.elasticsearch][main][ccd81b05f3abb937797857ab927b9f8affd1efa78061a0e1992ab12345a533c6] Attempted to send a bulk request to elasticsearch' but Elasticsearch appears to be unreachable or down! {:error_message=>"Elasticsearch Unreachable: [http://localhost:9200/][Manticore::SocketTimeout] Read timed out", :class=>"LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError", :will_retry_in_seconds=>2}
[2021-06-29T01:17:57,647][ERROR][logstash.outputs.elasticsearch][main][afee06b36db38f43ad7b092fc093bdf86c899e5666e20d96b6b9a55fd9cca38e] Attempted to send a bulk request to elasticsearch, but no there are no living connections in the connection pool. Perhaps Elasticsearch is unreachable or down? {:error_message=>"No Available connections", :class=>"LogStash::Outputs::ElasticSearch::HttpClient::Pool::NoConnectionAvailableError", :will_retry_in_seconds=>4}
[2021-06-29T01:17:57,686][WARN ][logstash.outputs.elasticsearch][main] Restored connection to ES instance {:url=>"http://localhost:9200/"}
[2021-06-29T01:17:57,686][WARN ][logstash.outputs.elasticsearch][main] Restored connection to ES instance {:url=>"http://localhost:9200/"}
[2021-06-29T01:17:57,686][WARN ][logstash.outputs.elasticsearch][main] Restored connection to ES instance {:url=>"http://localhost:9200/"}
[2021-06-29T01:18:01,259][WARN ][logstash.outputs.elasticsearch][main] Restored connection to ES instance {:url=>"http://localhost:9200/"}
[2021-06-29T01:18:01,860][ERROR][logstash.outputs.elasticsearch][main][afee06b36db38f43ad7b092fc093bdf86c899e5666e20d96b6b9a55fd9cca38e] Attempted to send a bulk request to elasticsearch, but no there are no living connections in the connection pool. Perhaps Elasticsearch is unreachable or down? {:error_message=>"No Available connections", :class=>"LogStash::Outputs::ElasticSearch::HttpClient::Pool::NoConnectionAvailableError", :will_retry_in_seconds=>8}

It says that Elasticsearch may be unreachable or down, but I don't see any such errors in the Elasticsearch log.

$ head -n 20 /var/log/elasticsearch/elasticsearch.log
[2021-06-29T00:00:00,904][INFO ][o.e.m.j.JvmGcMonitorService] [ITS-ELS-02] [gc][6566648] overhead, spent [378ms] collecting in the last [1s]
[2021-06-29T00:00:14,858][INFO ][o.e.m.j.JvmGcMonitorService] [ITS-ELS-02] [gc][6566661] overhead, spent [272ms] collecting in the last [1s]
[2021-06-29T00:00:21,864][INFO ][o.e.m.j.JvmGcMonitorService] [ITS-ELS-02] [gc][6566668] overhead, spent [371ms] collecting in the last [1s]
[2021-06-29T00:00:29,050][WARN ][o.e.m.j.JvmGcMonitorService] [ITS-ELS-02] [gc][young][6566674][304231] duration [1.2s], collections [1]/[2.1s], total [1.2s]/[1.3d], memory [10.2gb]->[8.6gb]/[16gb], all_pools {[young] [568mb]->[0b]/[0b]}{[old] [9.6gb]->[8.6gb]/[16gb]}{[survivor] [63.1mb]->[64mb]/[0b]}
[2021-06-29T00:00:29,051][WARN ][o.e.m.j.JvmGcMonitorService] [ITS-ELS-02] [gc][6566674] overhead, spent [1.2s] collecting in the last [2.1s]
[2021-06-29T00:00:33,602][WARN ][o.e.m.j.JvmGcMonitorService] [ITS-ELS-02] [gc][young][6566678][304232] duration [1s], collections [1]/[1.5s], total [1s]/[1.3d], memory [9.1gb]->[7.7gb]/[16gb], all_pools {[young] [440mb]->[8mb]/[0b]}{[old] [8.6gb]->[7.6gb]/[16gb]}{[survivor] [64mb]->[54.9mb]/[0b]}
[2021-06-29T00:00:33,606][WARN ][o.e.m.j.JvmGcMonitorService] [ITS-ELS-02] [gc][6566678] overhead, spent [1s] collecting in the last [1.5s]
[2021-06-29T00:00:38,086][WARN ][o.e.m.j.JvmGcMonitorService] [ITS-ELS-02] [gc][young][6566679][304233] duration [3.7s], collections [1]/[4.4s], total [3.7s]/[1.3d], memory [7.7gb]->[6.8gb]/[16gb], all_pools {[young] [8mb]->[8mb]/[0b]}{[old] [7.6gb]->[6.7gb]/[16gb]}{[survivor] [54.9mb]->[55.6mb]/[0b]}
[2021-06-29T00:00:38,087][WARN ][o.e.m.j.JvmGcMonitorService] [ITS-ELS-02] [gc][6566679] overhead, spent [3.7s] collecting in the last [4.4s]
[2021-06-29T00:00:53,637][WARN ][o.e.m.j.JvmGcMonitorService] [ITS-ELS-02] [gc][young][6566684][304234] duration [5.6s], collections [1]/[6.3s], total [5.6s]/[1.3d], memory [7.2gb]->[6gb]/[16gb], all_pools {[young] [472mb]->[8mb]/[0b]}{[old] [6.7gb]->[5.9gb]/[16gb]}{[survivor] [55.6mb]->[75.8mb]/[0b]}
[2021-06-29T00:00:53,639][WARN ][o.e.m.j.JvmGcMonitorService] [ITS-ELS-02] [gc][6566684] overhead, spent [5.6s] collecting in the last [6.3s]
[2021-06-29T00:00:53,638][WARN ][o.e.h.AbstractHttpServerTransport] [ITS-ELS-02] handling request [null][POST][/_bulk][Netty4HttpChannel{localAddress=/127.0.0.1:9200, remoteAddress=/127.0.0.1:41652}] took [5738ms] which is above the warn threshold of [5000ms]
[2021-06-29T00:00:53,639][WARN ][o.e.h.AbstractHttpServerTransport] [ITS-ELS-02] handling request [null][POST][/_bulk][Netty4HttpChannel{localAddress=/127.0.0.1:9200, remoteAddress=/127.0.0.1:41564}] took [5738ms] which is above the warn threshold of [5000ms]
[2021-06-29T00:01:05,778][WARN ][o.e.m.j.JvmGcMonitorService] [ITS-ELS-02] [gc][young][6566686][304235] duration [10.7s], collections [1]/[11.1s], total [10.7s]/[1.3d], memory [6.5gb]->[5.5gb]/[16gb], all_pools {[young] [504mb]->[8mb]/[0b]}{[old] [5.9gb]->[5.4gb]/[16gb]}{[survivor] [75.8mb]->[77.3mb]/[0b]}
[2021-06-29T00:01:05,789][WARN ][o.e.m.j.JvmGcMonitorService] [ITS-ELS-02] [gc][6566686] overhead, spent [10.7s] collecting in the last [11.1s]
[2021-06-29T00:01:05,801][WARN ][o.e.h.AbstractHttpServerTransport] [ITS-ELS-02] handling request [null][POST][/_bulk][Netty4HttpChannel{localAddress=/127.0.0.1:9200, remoteAddress=/127.0.0.1:41746}] took [10927ms] which is above the warn threshold of [5000ms]
[2021-06-29T00:01:13,294][WARN ][o.e.m.j.JvmGcMonitorService] [ITS-ELS-02] [gc][young][6566690][304236] duration [4.3s], collections [1]/[4.5s], total [4.3s]/[1.3d], memory [6.2gb]->[5.6gb]/[16gb], all_pools {[young] [656mb]->[0b]/[0b]}{[old] [5.4gb]->[5.5gb]/[16gb]}{[survivor] [77.3mb]->[72mb]/[0b]}
[2021-06-29T00:01:13,295][WARN ][o.e.m.j.JvmGcMonitorService] [ITS-ELS-02] [gc][6566690] overhead, spent [4.3s] collecting in the last [4.5s]
[2021-06-29T00:01:29,836][INFO ][o.e.m.j.JvmGcMonitorService] [ITS-ELS-02] [gc][6566703] overhead, spent [329ms] collecting in the last [1s]
[2021-06-29T00:02:16,462][INFO ][o.e.m.j.JvmGcMonitorService] [ITS-ELS-02] [gc][6566749] overhead, spent [695ms] collecting in the last [1.5s]
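
Around the time of the Logstash errors, the only warnings here come from the GC monitor and from /_bulk requests exceeding the 5000ms warn threshold. If it helps, heap usage and GC statistics can also be pulled from the stats APIs; the commands below are only a sketch and assume the default http://localhost:9200 endpoint.

# per-node heap usage at a glance
$ curl -s 'http://localhost:9200/_cat/nodes?v&h=name,heap.percent,heap.max,ram.percent'
# detailed JVM/GC statistics per node
$ curl -s 'http://localhost:9200/_nodes/stats/jvm?pretty'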

I can also browse to Elasticsearch (http://XXX.XXX.XXX.XXX:9200/) from a browser, and it responds as follows.

{
  "name" : "ITS-ELS-02",
  "cluster_name" : "elasticsearch",
  "cluster_uuid" : ... snip ...,
  "version" : {
    "number" : "7.12.0",
    "build_flavor" : "default",
    "build_type" : "rpm",
    "build_hash" : "78722783c38caa25a70982b5b042074cde5d3b3a",
    "build_date" : "2021-03-18T06:17:15.410153305Z",
    "build_snapshot" : false,
    "lucene_version" : "8.8.0",
    "minimum_wire_compatibility_version" : "6.8.0",
    "minimum_index_compatibility_version" : "6.0.0-beta1"
  },
  "tagline" : "You Know, for Search"
}
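
Cluster health can be checked from the same host in the same way; a quick sketch, again assuming the default port:

$ curl -s 'http://localhost:9200/_cluster/health?pretty'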

What exactly is causing this error?
What would be a good way to fix it?

What is the IP/FQDN of your Elasticsearch host?
Is it the same as the host in your Logstash output configuration?

It looks like your logstash is trying to write to localhost:9200. Is that where your elasticsearch instance is running?
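
If you are not sure where that is configured, the hosts setting lives in the pipeline configuration; the path below is the usual package default and is only an example of what to look for.

# default pipeline config directory for the RPM/DEB packages; adjust if yours differs
$ grep -rn 'hosts' /etc/logstash/conf.d/
# this should turn up a line like: hosts => ["http://localhost:9200"]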

Thank you for your answer.

Yes, as you point out, Elasticsearch and Logstash are deployed on the same server and communicate over port 9200.

$ lsof -i:9200
COMMAND     PID          USER   FD   TYPE  DEVICE SIZE/OFF NODE NAME
java        862      logstash   50u  IPv6 1877248      0t0  TCP localhost:60228->localhost:wap-wsp (ESTABLISHED)
... snip ...
java      28794 elasticsearch  307u  IPv6 1900044      0t0  TCP localhost:wap-wsp->localhost:60410 (ESTABLISHED)
... snip ...
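
(The wap-wsp in the NAME column is simply the /etc/services name for port 9200; the numeric ports can be shown instead like this.)

# -P keeps port numbers instead of service names, -n skips reverse DNS lookups
$ lsof -nP -i :9200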