Logstash.outputs.elasticsearch is trying to connect to another elasticsearch

I'm following Running Logstash on Docker | Logstash Reference [5.X] | Elastic

Here is my Logstash pipeline configuration:

# cat ./usr/share/logstash/pipeline/test.conf
input {
  beats {
    port => 5044
  }
}

output {
  elasticsearch {
    hosts => "elasticsearch1:9200"
    user => "elastic"
    password => "changeme"
    manage_template => false
    index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
    document_type => "%{[@metadata][type]}"
  }
  stdout { codec => rubydebug }
}
#

and here are the logs:

[INFO ][logstash.modules.scaffold] Initializing module {:module_name=>"fb_apache", :directory=>"/usr/share/logstash/modules/fb_apache/configuration"}
[INFO ][logstash.modules.scaffold] Initializing module {:module_name=>"netflow", :directory=>"/usr/share/logstash/modules/netflow/configuration"}
[INFO ][logstash.modules.scaffold] Initializing module {:module_name=>"arcsight", :directory=>"/usr/share/logstash/vendor/bundle/jruby/1.9/gems/x-pack-5.6.1-java/modules/arcsight/configuration"}
[INFO ][logstash.setting.writabledirectory] Creating directory {:setting=>"path.queue", :path=>"/usr/share/logstash/data/queue"}
[INFO ][logstash.setting.writabledirectory] Creating directory {:setting=>"path.dead_letter_queue", :path=>"/usr/share/logstash/data/dead_letter_queue"}
[INFO ][logstash.agent           ] No persistent UUID file found. Generating new UUID {:uuid=>"5556e81c-1552-4422-81bd-59f121141b2b", :path=>"/usr/share/logstash/data/uuid"}
[INFO ][logstash.outputs.elasticsearch] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://logstash_system:xxxxxx@elasticsearch:9200/]}}
[INFO ][logstash.outputs.elasticsearch] Running health check to see if an Elasticsearch connection is working {:healthcheck_url=>http://logstash_system:xxxxxx@elasticsearch:9200/, :path=>"/"}
[WARN ][logstash.outputs.elasticsearch] Attempted to resurrect connection to dead ES instance, but got an error. {:url=>"http://logstash_system:xxxxxx@elasticsearch:9200/", :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :error=>"Elasticsearch Unreachable: [http://logstash_system:xxxxxx@elasticsearch:9200/][Manticore::ResolutionFailure] elasticsearch: Name or service not known"}
[INFO ][logstash.outputs.elasticsearch] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["http://elasticsearch:9200"]}
[INFO ][logstash.pipeline        ] Starting pipeline {"id"=>".monitoring-logstash", "pipeline.workers"=>1, "pipeline.batch.size"=>2, "pipeline.batch.delay"=>5, "pipeline.max_inflight"=>2}
[INFO ][logstash.pipeline        ] Pipeline .monitoring-logstash started
[INFO ][logstash.outputs.elasticsearch] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://elastic:xxxxxx@elasticsearch1:9200/]}}
[INFO ][logstash.outputs.elasticsearch] Running health check to see if an Elasticsearch connection is working {:healthcheck_url=>http://elastic:xxxxxx@elasticsearch1:9200/, :path=>"/"}
[WARN ][logstash.outputs.elasticsearch] Restored connection to ES instance {:url=>"http://elastic:xxxxxx@elasticsearch1:9200/"}
[INFO ][logstash.outputs.elasticsearch] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["//elasticsearch1:9200"]}
[INFO ][logstash.pipeline        ] Starting pipeline {"id"=>"main", "pipeline.workers"=>8, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>5, "pipeline.max_inflight"=>1000}
[INFO ][logstash.inputs.beats    ] Beats inputs: Starting input listener {:address=>"0.0.0.0:5044"}
[INFO ][logstash.pipeline        ] Pipeline main started
[INFO ][org.logstash.beats.Server] Starting server on port: 5044
[INFO ][logstash.agent           ] Successfully started Logstash API endpoint {:port=>9600}
[INFO ][logstash.outputs.elasticsearch] Running health check to see if an Elasticsearch connection is working {:healthcheck_url=>http://logstash_system:xxxxxx@elasticsearch:9200/, :path=>"/"}
[WARN ][logstash.outputs.elasticsearch] Attempted to resurrect connection to dead ES instance, but got an error. {:url=>"http://logstash_system:xxxxxx@elasticsearch:9200/", :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :error=>"Elasticsearch Unreachable: [http://logstash_system:xxxxxx@elasticsearch:9200/][Manticore::ResolutionFailure] elasticsearch"}
[INFO ][logstash.outputs.elasticsearch] Running health check to see if an Elasticsearch connection is working {:healthcheck_url=>http://logstash_system:xxxxxx@elasticsearch:9200/, :path=>"/"}
[WARN ][logstash.outputs.elasticsearch] Attempted to resurrect connection to dead ES instance, but got an error. {:url=>"http://logstash_system:xxxxxx@elasticsearch:9200/", :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :error=>"Elasticsearch Unreachable: [http://logstash_system:xxxxxx@elasticsearch:9200/][Manticore::ResolutionFailure] elasticsearch: Name or service not known"}
[logstash.outputs.elasticsearch] Marking url as dead. Last error: [LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError] Elasticsearch Unreachable: [http://logstash_system:xxxxxx@elasticsearch:9200/][Manticore::ResolutionFailure] elasticsearch {:url=>http://logstash_system:xxxxxx@elasticsearch:9200/, :error_message=>"Elasticsearch Unreachable: [http://logstash_system:xxxxxx@elasticsearch:9200/][Manticore::ResolutionFailure] elasticsearch", :error_class=>"LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError"}
[ERROR][logstash.outputs.elasticsearch] Attempted to send a bulk request to elasticsearch' but Elasticsearch appears to be unreachable or down! {:error_message=>"Elasticsearch Unreachable: [http://logstash_system:xxxxxx@elasticsearch:9200/][Manticore::ResolutionFailure] elasticsearch", :class=>"LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError", :will_retry_in_seconds=>2}
[WARN ][logstash.outputs.elasticsearch] UNEXPECTED POOL ERROR {:e=>#<LogStash::Outputs::ElasticSearch::HttpClient::Pool::NoConnectionAvailableError: No Available connections>}
[ERROR][logstash.outputs.elasticsearch] Attempted to send a bulk request to elasticsearch, but no there are no living connections in the connection pool. Perhaps Elasticsearch is unreachable or down? {:error_message=>"No Available connections", :class=>"LogStash::Outputs::ElasticSearch::HttpClient::Pool::NoConnectionAvailableError", :will_retry_in_seconds=>4}

Why is logstash.outputs.elasticsearch trying to connect to elasticsearch:9200 instead of elasticsearch1:9200?

Please advise.

Check the monitoring configuration in logstash.yml.

If I understood correctly (from reading Running Logstash on Docker), by providing my own volume with the pipeline configuration, I override the existing configuration...
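
For context, the kind of mount I mean looks roughly like this in docker-compose (the host path is just an illustration, not my exact layout):

services:
        logstash:
                image: docker.elastic.co/logstash/logstash:5.6.2
                volumes:
                        - ./pipeline/:/usr/share/logstash/pipeline/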

Correct, although logstash.yml is the settings file and not part of the pipeline configuration. If you provide additional details, like how you're starting the container, it'll be easier to help.

@magnusbaeck

I'm looking over Settings File | Logstash Reference [5.6] | Elastic and I'm unable to find which setting (variable) I would need to adjust to point Logstash to another Elasticsearch.

I'm getting the same errors even if I comment out my pipeline volume, like the following:

root@app11:/opt/elastic/logstash# grep -v ^# docker-compose.yml 

version: '3'
services:
        logstash:
                image: docker.elastic.co/logstash/logstash:5.6.2
                container_name: logstash11
root@app11:/opt/elastic/logstash#

This isn't the "elasticsearch" output in your pipeline configuration.

This extra Elasticsearch output comes from Logstash x-pack monitoring, which is enabled by default in our Docker image.

The implementation details may explain the behavior:

  • Logstash is an ETL data flow system
  • Exporting metrics to Elasticsearch from Logstash requires a connector to Elasticsearch
  • The Logstash Elasticsearch output plugin is a well-travelled connector to Elasticsearch
  • So, for x-pack monitoring, Logstash reuses the Elasticsearch output plugin as a library.

The impact:

  • If you have Logstash x-pack enabled, you will always have an Elasticsearch output (in a separate pipeline controlled by x-pack) that is exporting metrics to Elasticsearch.
  • This output uses logstash_system as the user for the x-pack Elasticsearch connection (see the sketch after this list).
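
As a rough sketch, that monitoring connection is driven by the x-pack settings in logstash.yml, along these lines (the values here are placeholders, not your exact setup):

xpack.monitoring.enabled: true
xpack.monitoring.elasticsearch.url: "http://elasticsearch:9200"
xpack.monitoring.elasticsearch.username: "logstash_system"
xpack.monitoring.elasticsearch.password: "changeme"

An elasticsearch host like the one above would match the logstash_system connection attempts to elasticsearch:9200 in your logs.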

Hopefully this helps explain.

@magnusbaeck, thank you for such a detailed reply.

Going by what you're saying, I went ahead and disabled x-pack monitoring:

# grep -A1 environment docker-compose.override.yml 
                environment:
                        - "XPACK_MONITORING_ENABLED=false"
# 
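
In full, the override file is essentially nothing more than this (the service name matches my docker-compose.yml above):

version: '3'
services:
        logstash:
                environment:
                        - "XPACK_MONITORING_ENABLED=false"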

and the error is no longer there...

However, if x-pack monitoring is enabled, how does one point it to an Elasticsearch host other than elasticsearch?

Thanks in advance!
