Logstash-elasticsearch output plugin issue(UNEXPECTED POOL ERROR)


I have created a Docker image based on ELK 5.0.0 and used the logstash-kafka input plugin. The ELK server started without any issue, but I can't see any logs in Kibana. I am also able to curl Elasticsearch (curl "http://***.***.***.***:9200/_search?size=10&pretty=true"), but it returns only one record.

After looking in the logstash.log file, it seems there is some issue with the connection pool. Here is the log from logstash.log:

[2016-11-03T14:39:26,555][INFO ][org.apache.kafka.common.utils.AppInfoParser] Kafka commitId : a7a17cdec9eaa6c5
[2016-11-03T14:39:26,609][INFO ][org.apache.kafka.clients.consumer.internals.AbstractCoordinator] Discovered coordinator (id: 2147483646 rack: null) for group logstash.
previously assigned partitions [] for group logstash
[2016-11-03T14:39:26,610][INFO ][org.apache.kafka.clients.consumer.internals.AbstractCoordinator] (Re-)joining group logstash
[2016-11-03T14:39:26,638][INFO ][logstash.outputs.elasticsearch] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>["http://elk.marathon.mesos:9200"]}}
[2016-11-03T14:39:26,640][INFO ][logstash.outputs.elasticsearch] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["elk.marathon.mesos:9200"]}
[2016-11-03T14:39:26,641][INFO ][logstash.pipeline ] Starting pipeline {"id"=>"main", "pipeline.workers"=>12, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>5, "pipeline.max_inf
[2016-11-03T14:39:26,646][INFO ][logstash.pipeline ] Pipeline main started
[2016-11-03T14:39:26,667][INFO ][logstash.agent ] Successfully started Logstash API endpoint {:port=>9600}
[2016-11-03T14:39:31,744][INFO ][logstash.outputs.elasticsearch] Elasticsearch pool URLs updated {:changes=>{:removed=>["http://elk.marathon.mesos:9200"], :added=>[]}}
[2016-11-03T14:39:36,746][WARN ][logstash.outputs.elasticsearch] UNEXPECTED POOL ERROR {:e=>#<LogStash::Outputs::ElasticSearch::HttpClient::Pool::NoConnectionAvailableError: No Available co
[2016-11-03T14:39:36,747][WARN ][logstash.outputs.elasticsearch] Elasticsearch output attempted to sniff for new connections but cannot. No living connections are detected. Pool contains the following current URLs {:url_info=>{}}
[2016-11-03T14:39:41,748][WARN ][logstash.outputs.elasticsearch] UNEXPECTED POOL ERROR {:e=>#<LogStash::Outputs::ElasticSearch::HttpClient::Pool::NoConnectionAvailableError: No Available co

I don't see any errors in Elasticsearch. Here is elasticsearch.log:

[2016-11-03T14:39:13,607][INFO ][o.e.n.Node ] [] initializing ...
[2016-11-03T14:39:13,669][INFO ][o.e.e.NodeEnvironment ] [b6d9llv] using [1] data paths, mounts [[/var/lib/elasticsearch (/dev/mapper/centos-root)]], net usable_space [70.5gb], net total_space [117.4gb], spins? [possibly], types [xfs]
[2016-11-03T14:39:13,669][INFO ][o.e.e.NodeEnvironment ] [b6d9llv] heap size [1.9gb], compressed ordinary object pointers [true]
[2016-11-03T14:39:13,671][INFO ][o.e.n.Node ] [b6d9llv] node name [b6d9llv] derived from node ID; set [node.name] to override
[2016-11-03T14:39:13,673][INFO ][o.e.n.Node ] [b6d9llv] version[5.0.0], pid[52], build[253032b/2016-10-26T05:11:34.737Z], OS[Linux/3.10.0-327.36.1.el7.x86_64/amd64], JVM[Oracle Corporation/OpenJDK 64-Bit Server VM/1.8.0_91/25.91-b14]
[2016-11-03T14:39:15,764][INFO ][o.e.n.Node ] [b6d9llv] initialized
[2016-11-03T14:39:15,764][INFO ][o.e.n.Node ] [b6d9llv] starting ...
[2016-11-03T14:39:15,873][INFO ][o.e.t.TransportService ] [b6d9llv] publish_address {}, bound_addresses {[::]:9300}
[2016-11-03T14:39:15,876][INFO ][o.e.b.BootstrapCheck ] [b6d9llv] bound or publishing to a non-loopback or non-link-local address, enforcing bootstrap checks
[2016-11-03T14:39:18,964][INFO ][o.e.c.s.ClusterService ] [b6d9llv] new_master {b6d9llv}{b6d9llvCRsiL2Fn5WvFlFQ}{NzDCJ8zPQ8C5IuUh3WX31g}{}{}, reason: zen-disco-elected-as-master ([0] nodes joined)
[2016-11-03T14:39:18,977][INFO ][o.e.h.HttpServer ] [b6d9llv] publish_address {}, bound_addresses {[::]:9200}
[2016-11-03T14:39:18,977][INFO ][o.e.n.Node ] [b6d9llv] started
[2016-11-03T14:39:19,193][INFO ][o.e.g.GatewayService ] [b6d9llv] recovered [0] indices into cluster_state
[2016-11-03T14:39:27,966][INFO ][o.e.c.m.MetaDataCreateIndexService] [b6d9llv] [.kibana] creating index, cause [api], templates [], shards [1]/[1], mappings [server, config]
[2016-11-03T14:39:43,888][INFO ][o.e.c.m.MetaDataMappingService] [b6d9llv] [.kibana/zJpF7CFkSp2GOKsCX9hFoQ] create_mapping [index-pattern]

Any help would be appreciated.



I have the exact same problem, except I have created a VM running Elasticsearch, Kibana, and Logstash.
Elasticsearch and Kibana are working fine (I've got a few Beats sending messages directly to Elasticsearch), but Logstash gives me the same errors with the empty connection pool.

[2016-11-03T18:12:06,273][INFO ][logstash.outputs.elasticsearch] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>["http://foch-srv-log02:9200", ""]}}
[2016-11-03T18:12:06,276][INFO ][logstash.outputs.elasticsearch] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["foch-srv-log02", ""]}
[2016-11-03T18:12:06,278][INFO ][logstash.pipeline ] Starting pipeline {"id"=>"main", "pipeline.workers"=>4, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>5, "pipeline.max_inflight"=>500}
[2016-11-03T18:12:06,285][INFO ][logstash.pipeline ] Pipeline main started
[2016-11-03T18:12:06,331][INFO ][logstash.agent ] Successfully started Logstash API endpoint {:port=>9600}
[2016-11-03T18:12:11,294][INFO ][logstash.outputs.elasticsearch] Elasticsearch pool URLs updated {:changes=>{:removed=>["http://foch-srv-log02:9200", ""], :added=>[]}}
[2016-11-03T18:12:12,014][WARN ][logstash.outputs.elasticsearch] UNEXPECTED POOL ERROR {:e=>#<LogStash::Outputs::ElasticSearch::HttpClient::Pool::NoConnectionAvailableError: No Available connections>}


Can anyone help me out? It's a real roadblock, and we have a tight timeline for our POC of the ELK stack.


Still waiting to hear from someone regarding my issue.

(alex) #5

You just need to make the output look like this:

output {
  elasticsearch { hosts => ["localhost:9200"] }
}

(Magnus Bäck) #6

I suspect you have sniffing enabled but it appears Logstash doesn't want to use the URL you've provided so it removes it from the pool. Maybe the node's advertised address doesn't match the address you're trying to use, giving the appearance that the node has been removed from the cluster? Anyway, disabling sniffing should help.
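A minimal output stanza with sniffing turned off (the hostname below is a placeholder; substitute your own Elasticsearch address) would look something like:

```
output {
  elasticsearch {
    # Placeholder host; replace with your Elasticsearch address
    hosts => ["localhost:9200"]
    # Prevent Logstash from replacing the configured host list
    # with the addresses the cluster advertises
    sniffing => false
  }
}
```

With sniffing off, the plugin keeps using exactly the hosts you configured instead of the publish addresses the nodes report, which is what matters when those advertised addresses aren't resolvable from the Logstash host.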

(Tom Milden) #7

I'm also seeing this issue. I don't have sniffing enabled (according to the Logstash documentation it's disabled by default, right?), and I've made sure the node's advertised address in my Elasticsearch config matches the address used in the Logstash elasticsearch output plugin. Are we missing some obvious config option somewhere?

Logstash docker image keeps resetting elasticsearch URL

In my case sniffing was enabled so after disabling that it worked.


Here is my config:

output {
  elasticsearch {
    hosts => ["elk.###.mesos:9200"]
    sniffing => false
    manage_template => false
    index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
    document_type => "%{[@metadata][type]}"
  }
}

(Mathias) #10

I got the same problem.
I could also make logstash work by disabling sniffing.

I used to have it working with sniffing enabled, then I started to play with X-Pack, and things stopped working.
Removing X-Pack did not cure the problem.

Br Mat

(John Sobanski) #11

sniffing => false in the output stanza fixed it for me as well

(Rohini Choudhary) #12

Thanks for your solution. Worked for me.


I'm getting the same problem with 5.1.2. Using just

hosts => ["myhost1:9200"]

alone, my Logstash job sends data to Elasticsearch correctly, but using

hosts => ["myhost1:9200"]
sniffing => true

I get the above error. The cluster has multiple nodes, with a mix of data, client and master nodes.

So is this a bug, and we can't use sniffing? Or have I not configured something properly?
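One way to see what sniffing would actually discover is to ask the cluster for the HTTP publish addresses of its nodes (the hostname below is a placeholder). If those advertised addresses are not resolvable or reachable from the Logstash host, the sniffed pool ends up empty and you get exactly this error:

```
curl "http://myhost1:9200/_nodes/http?pretty"
```

Look at the `publish_address` field for each node in the response and check that it is an address the Logstash machine can reach; if not, either set `network.publish_host` on the Elasticsearch nodes accordingly or keep `sniffing => false`.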

(system) #14