Logstash docker image keeps resetting elasticsearch URL

(Sfchrisgleason) #1

Hello all,

I am experimenting with bringing up an ELK stack across multiple physical machines using the Docker images (Docker running independently on each machine). I'm using the official images from Elastic, pulled with:

docker pull logstash
docker pull elasticsearch
docker pull kibana

I've gotten all three up, but whenever I try to ingest via Beats, Kibana tells me there isn't any data matching the filebeat-* index pattern.

On Logstash I keep seeing these lines when restarting:

18:30:42.309 [Ruby-0-Thread-8: /usr/share/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-7.3.8-java/lib/logstash/outputs/elasticsearch/http_client/pool.rb:136] INFO logstash.outputs.elasticsearch - Elasticsearch pool URLs updated {:changes=>{:removed=>[http://172.x.x.x:9200/], :added=>[http://192.x.x.x:9200/]}}

The 172 address is what I've got specified in the es-output for logstash. I've also added that as network.host in elasticsearch.yml inside the ES docker container.

I don't have X-Pack installed (at least I don't think so), and I've set sniffing to false.

I haven't been able to get the URL to stick.

Any ideas? Thanks ahead of time!
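(For future readers: a quick sanity check from the Logstash host is to curl the address you put in the output config. The 172.24.x.x placeholder here is the redacted address from the post; substitute the real IP.)

```shell
# Verify Elasticsearch is reachable on the address Logstash points at.
# 172.24.x.x is the redacted address from the post; replace it with yours.
# '|| true' keeps the command from aborting a script when the placeholder
# (which isn't a real host) fails to connect.
curl -s 'http://172.24.x.x:9200/' || true
```

If this returns the Elasticsearch banner JSON, the address and port mapping are fine and the problem is elsewhere.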

(Magnus Bäck) #2

I’m using the official images from elastic using:

docker pull logstash
docker pull elasticsearch
docker pull kibana

Unless you've configured your Docker Engine to use docker.elastic.co as your upstream registry, you're not using the official Elastic images.
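(For reference, pulling the images from Elastic's own registry looks roughly like this. The image paths follow the docker.elastic.co layout; the 5.6.0 tag is just an example, pick the version you need.)

```shell
# Official Elastic images live under docker.elastic.co, not Docker Hub.
# The 5.6.0 tag here is only an example version.
docker pull docker.elastic.co/elasticsearch/elasticsearch:5.6.0
docker pull docker.elastic.co/logstash/logstash:5.6.0
docker pull docker.elastic.co/kibana/kibana:5.6.0
```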

Are you restarting the containers at some point? How are you updating the configuration files?

(Sfchrisgleason) #3

Yeah, sorry, my eyes saw "elastic" in there somewhere, but it does look like they're the docker.io ones:

docker.io/elasticsearch latest 7516701e4922 13 days ago 315.5 MB

I am restarting the containers, yes. That's when I'm seeing the ES URL replacement in the logs. After Logstash says it has replaced the ES URL, I'm seeing these messages:

05:16:16.713 [Ruby-0-Thread-8: /usr/share/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-7.3.8-java/lib/logstash/outputs/elasticsearch/http_client/pool.rb:136] WARN logstash.outputs.elasticsearch - Elasticsearch output attempted to sniff for new connections but cannot. No living connections are detected. Pool contains the following current URLs {:url_info=>{>{:in_use=>0, :state=>:dead, :last_error=>#<LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError: Could not reach host Manticore::SocketException: Connection refused (Connection refused)>, :last_errored_at=>2017-09-03 18:30:47 +0000}}}
05:16:17.477 [Ruby-0-Thread-7: /usr/share/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-7.3.8-java/lib/logstash/outputs/elasticsearch/http_client/pool.rb:224] INFO logstash.outputs.elasticsearch - Running health check to see if an Elasticsearch connection is working {:healthcheck_url=>http://192.1.x.x:9200/, :path=>"/"}

Here is my Elasticsearch output:

output {
  elasticsearch {
    hosts => ["172.24.x.x:9200"]
    sniffing => false
    manage_template => false
    index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
    document_type => "%{[@metadata][type]}"
  }
}

(Magnus Bäck) #4

Okay, but how are you updating the configuration files? Are they residing in a host-mounted directory or an otherwise persistent volume? How are you starting the containers?

(Sfchrisgleason) #5

Using docker exec -it container bash

then manually updating, then committing the image so I can restart as necessary.

Here are the commands I'm using for all three products currently (though I have tried many different variations):

docker run --name elasticsearch -v "/opt/elasticsearch/esdata":/usr/share/elasticsearch/data -e network.host=172.24.x.x -p 9200:9200 -p 9300:9300 elasticsearch

sudo docker run -d --name kibana -p 5601:5601 -e ELASTICSEARCH_URL=http://172.24.x.x:9200 --net host kibana

sudo docker run -d --name="logstash" -v /etc/logstash/conf.d:/etc/logstash/conf.d:ro -p 5044:5044 -v /var/log/logstash:/host/var/log --net bridge logstash -f /etc/logstash/conf.d --debug


(Magnus Bäck) #6

Um, but you have a read-only host mount of /etc/logstash/conf.d. Are you actually able to modify files there? Are those changes visible in the host directory?
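(A :ro mount will indeed reject writes. Here's a quick probe you can run from inside the container, e.g. via docker exec -it logstash bash; the helper function name is made up for this sketch, and the path is the mount from the run command above.)

```shell
# Probe whether a mounted directory is writable from inside the container.
# The check_writable helper is just an illustrative name, not a real tool.
check_writable() {
  local dir="$1"
  if touch "$dir/.rw_probe" 2>/dev/null; then
    # Clean up the probe file and report success.
    rm -f "$dir/.rw_probe"
    echo "$dir is writable"
  else
    echo "$dir is read-only (or permission denied)"
  fi
}

check_writable /etc/logstash/conf.d
```

With the :ro flag in place you'd expect the read-only message; drop :ro, or edit the files on the host side, for changes to stick.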

(Sfchrisgleason) #7

I feel like a big dope, but I found the source of my problem (at least this one).

I accidentally created two elasticsearch outputs, and one was superseding the other with sniffing => true.

The ES errors have all gone away in Logstash now.
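(For anyone who hits the same symptom: Logstash loads every file in the config directory into one pipeline, so a quick grep shows how many elasticsearch output blocks will actually be active. The path is the conf.d mount from the docker run command earlier; adjust to yours.)

```shell
# Every file in the directory is concatenated into one pipeline, so a
# forgotten file with its own elasticsearch output also receives events.
# '|| true' because the directory may not exist outside the container.
grep -rn 'elasticsearch {' /etc/logstash/conf.d/ || true
```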

Now I just need to work out the networking issues inbound to filebeat!

Thanks Magnus!

(system) #8

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.