Kibana with Logstash 5.0 and Elasticsearch 5.0 error

I set up the ELK stack on Ubuntu 15 and am running into some problems. The Logstash log shows:

[2016-11-08T02:00:02,607][WARN ][logstash.outputs.elasticsearch] UNEXPECTED POOL ERROR {:e=>#<LogStash::Outputs::ElasticSearch::HttpClient::Pool::NoConnectionAvailableError: No Available connections>}
[2016-11-08T02:00:02,607][ERROR][logstash.outputs.elasticsearch] Attempted to send a bulk request to elasticsearch, but no there are no living connections in the connection pool. Perhaps Elasticsearch is unreachable or down? {:error_message=>"No Available connections", :class=>"LogStash::Outputs::ElasticSearch::HttpClient::Pool::NoConnectionAvailableError", :will_retry_in_seconds=>64}

The Filebeat log shows:

2016-11-08T01:58:27+07:00 ERR Connecting error publishing events (retrying): read tcp xxxx:63855->logforwarder-server:5044: i/o timeout
2016-11-08T01:58:32+07:00 INFO Non-zero metrics in the last 30s: libbeat.logstash.publish.read_errors=1

My Logstash config:

input {
  beats {
    port => 5044
    ssl => true
    ssl_certificate => "/etc/pki/tls/certs/logstash-forwarder.crt"
    ssl_key => "/etc/pki/tls/private/logstash-forwarder.key"
  }
}

output {
  elasticsearch {
    hosts => ["localhost:9200"]
    sniffing => true
    manage_template => false
    index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
    document_type => "%{[@metadata][type]}"
  }
}

Listening ports on the server (from netstat):

tcp 0 0 0.0.0.0:9200 0.0.0.0:* LISTEN
tcp 0 0 0.0.0.0:80 0.0.0.0:* LISTEN
tcp 0 0 0.0.0.0:5044 0.0.0.0:* LISTEN
tcp 0 0 0.0.0.0:9300 0.0.0.0:* LISTEN
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN
tcp 0 0 0.0.0.0:443 0.0.0.0:* LISTEN
tcp 0 0 127.0.0.1:9600 0.0.0.0:* LISTEN
tcp 0 0 0.0.0.0:5601 0.0.0.0:* LISTEN

How do I solve this problem?

Thanks.

Can you query Elasticsearch?

ps aux | grep elastic

Also, please provide the Elasticsearch log as well.
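
For example, you could hit the cluster health endpoint directly (assuming Elasticsearch is listening on localhost:9200):

curl 'http://localhost:9200/_cluster/health?pretty'

If that returns a JSON response with a status field, the HTTP layer is at least reachable.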

Hi w0lverine,
# ps aux | grep elasticsearch
elastic+ 1361 0.7 41.5 5837196 2537096 ? Ssl 01:32 3:36 /usr/bin/java -Xms2g -Xmx2g -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=75 -XX:+UseCMSInitiatingOccupancyOnly -XX:+DisableExplicitGC -XX:+AlwaysPreTouch -server -Djava.awt.headless=true -Dfile.encoding=UTF-8 -Djna.nosys=true -Dio.netty.noUnsafe=true -Dio.netty.noKeySetOptimization=true -Dlog4j.shutdownHookEnabled=false -Dlog4j2.disable.jmx=true -Dlog4j.skipJansi=true -XX:+HeapDumpOnOutOfMemoryError -Des.path.home=/usr/share/elasticsearch -cp /usr/share/elasticsearch/lib/elasticsearch-5.0.0.jar:/usr/share/elasticsearch/lib/* org.elasticsearch.bootstrap.Elasticsearch -p /var/run/elasticsearch/elasticsearch.pid --quiet -Edefault.path.logs=/var/log/elasticsearch -Edefault.path.data=/var/lib/elasticsearch -Edefault.path.conf=/etc/elasticsearch


Elasticsearch logs:

[2016-11-08T01:28:56,224][INFO ][o.e.p.PluginsService ] [GxcaZpM] loaded module [aggs-matrix-stats]
[2016-11-08T01:28:56,224][INFO ][o.e.p.PluginsService ] [GxcaZpM] loaded module [ingest-common]
[2016-11-08T01:28:56,224][INFO ][o.e.p.PluginsService ] [GxcaZpM] loaded module [lang-expression]
[2016-11-08T01:28:56,224][INFO ][o.e.p.PluginsService ] [GxcaZpM] loaded module [lang-groovy]
[2016-11-08T01:28:56,224][INFO ][o.e.p.PluginsService ] [GxcaZpM] loaded module [lang-mustache]
[2016-11-08T01:28:56,224][INFO ][o.e.p.PluginsService ] [GxcaZpM] loaded module [lang-painless]
[2016-11-08T01:28:56,224][INFO ][o.e.p.PluginsService ] [GxcaZpM] loaded module [percolator]
[2016-11-08T01:28:56,224][INFO ][o.e.p.PluginsService ] [GxcaZpM] loaded module [reindex]
[2016-11-08T01:28:56,224][INFO ][o.e.p.PluginsService ] [GxcaZpM] loaded module [transport-netty3]
[2016-11-08T01:28:56,224][INFO ][o.e.p.PluginsService ] [GxcaZpM] loaded module [transport-netty4]
[2016-11-08T01:28:56,225][INFO ][o.e.p.PluginsService ] [GxcaZpM] no plugins loaded
[2016-11-08T01:28:59,072][INFO ][o.e.n.Node ] [GxcaZpM] initialized
[2016-11-08T01:28:59,073][INFO ][o.e.n.Node ] [GxcaZpM] starting ...
[2016-11-08T01:28:59,370][INFO ][o.e.t.TransportService ] [GxcaZpM] publish_address {ELK-IP-PUBLIC:9300}, bound_addresses {0.0.0.0:9300}
[2016-11-08T01:28:59,374][INFO ][o.e.b.BootstrapCheck ] [GxcaZpM] bound or publishing to a non-loopback or non-link-local address, enforcing bootstrap checks
[2016-11-08T01:29:02,461][INFO ][o.e.c.s.ClusterService ] [GxcaZpM] new_master {GxcaZpM}{GxcaZpMyS7Gbi3eIC1RbAA}{oMlBSsTwTFiDnRb7Y2I9Fw}{ELK-IP-PUBLIC}{ELK-IP-PUBLIC:9300}, reason: zen-disco-elected-as-master ([0] nodes joined)
[2016-11-08T01:29:02,487][INFO ][o.e.h.HttpServer ] [GxcaZpM] publish_address {ELK-IP-PUBLIC:9200}, bound_addresses {0.0.0.0:9200}
[2016-11-08T01:29:02,487][INFO ][o.e.n.Node ] [GxcaZpM] started
[2016-11-08T01:29:03,088][INFO ][o.e.g.GatewayService ] [GxcaZpM] recovered [6] indices into cluster_state
[2016-11-08T01:29:05,626][INFO ][o.e.c.r.a.AllocationService] [GxcaZpM] Cluster health status changed from [RED] to [YELLOW] (reason: [shards started [[.kibana][0]] ...]).
[2016-11-08T01:31:12,626][INFO ][o.e.n.Node ] [GxcaZpM] stopping ...
[2016-11-08T01:31:12,715][INFO ][o.e.n.Node ] [GxcaZpM] stopped
[2016-11-08T01:31:12,715][INFO ][o.e.n.Node ] [GxcaZpM] closing ...
[2016-11-08T01:31:12,727][INFO ][o.e.n.Node ] [GxcaZpM] closed
[2016-11-08T01:31:14,061][INFO ][o.e.n.Node ] initializing ...
[2016-11-08T01:31:14,162][INFO ][o.e.e.NodeEnvironment ] [GxcaZpM] using [1] data paths, mounts [[/ (/dev/mapper/ubuntu--vg-root)]], net usable_space [32.6gb], net total_space [37gb], spins? [possibly], types [ext4]
[2016-11-08T01:31:14,162][INFO ][o.e.e.NodeEnvironment ] [GxcaZpM] heap size [1.9gb], compressed ordinary object pointers [true]
[2016-11-08T01:31:14,194][INFO ][o.e.n.Node ] [GxcaZpM] node name [GxcaZpM] derived from node ID; set [node.name] to override
[2016-11-08T01:31:14,197][INFO ][o.e.n.Node ] [GxcaZpM] version[5.0.0], pid[1220], build[253032b/2016-10-26T05:11:34.737Z], OS[Linux/3.19.0-15-generic/amd64], JVM[Oracle Corporation/Java HotSpot(TM) 64-Bit Server VM/1.8.0_111/25.111-b14]

I think the problem is that your ES server advertises itself on its public address (ELK-IP-PUBLIC:9200 in your log), but you've added localhost:9200 to the connection pool. With sniffing enabled, Logstash asks the ES cluster which nodes are available, and since there's no localhost:9200 node in the cluster, Logstash removes it from the connection pool.

So either disable sniffing or list the ES server's non-loopback address instead of localhost.
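
You can see which address the cluster actually advertises by asking the nodes info API (a quick check, assuming ES answers on localhost:9200):

curl 'http://localhost:9200/_nodes/http?pretty'

The publish_address in the response is what sniffing will put into Logstash's connection pool.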

Hi magnusbaeck,

My elasticsearch.yml is not limited to localhost; I configured it to listen on all interfaces:

network.host: 0.0.0.0

# Set a custom port for HTTP:
http.port: 9200

This problem only happens with ELK 5.0; other versions work fine.

Never mind elasticsearch.yml, I'm talking about your Logstash configuration. Disable sniffing or use name-of-host:9200 instead of localhost:9200.
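
In the output section that would look roughly like one of these (a sketch based on the config above; name-of-host stands in for the ES server's real hostname):

output {
  elasticsearch {
    hosts => ["localhost:9200"]
    sniffing => false
    manage_template => false
    index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
    document_type => "%{[@metadata][type]}"
  }
}

or

output {
  elasticsearch {
    hosts => ["name-of-host:9200"]
    sniffing => true
    manage_template => false
    index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
    document_type => "%{[@metadata][type]}"
  }
}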


Hi magnusbaeck,

It doesn't work if I use name-of-host:9200, but it works fine when I disable sniffing.

Thanks,
