Connection Refused to Elasticsearch

I am running the ELK stack on Docker. All services start up fine. However, I am getting connection refused errors in the Logstash logs.

Here is my Logstash conf file:

input {
  file {
    path => "/tmp/*_log"
    start_position => "beginning"
  }
  elasticsearch {
    user => "logstash_internal"
    password => "logstash"
  }
}

output {
  elasticsearch {
    user => "logstash_internal"
    password => "logstash"
    hosts => ["elasticsearch:9200"]
  }
  stdout { codec => rubydebug }
}

Sample log:

[2017-11-07T00:35:28,957][INFO ][logstash.outputs.elasticsearch] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["//elasticsearch:9200"]}
[2017-11-07T00:35:28,959][INFO ][logstash.pipeline        ] Starting pipeline {"id"=>"main", "pipeline.workers"=>2, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>5, "pipeline.max_inflight"=>250}
[2017-11-07T00:35:29,216][INFO ][logstash.pipeline        ] Pipeline main started
[2017-11-07T00:35:29,279][ERROR][logstash.pipeline        ] A plugin had an unrecoverable error. Will restart this plugin.
  Plugin: <LogStash::Inputs::Elasticsearch user=>"logstash_internal", password=><password>, id=>"f266aacba8d15ed350736e5ee472c7da4536f879-2", enable_metric=>true, codec=><LogStash::Codecs::JSON id=>"json_99b755e1-2da1-4e2b-b4e0-f9b41ceeb0fa", enable_metric=>true, charset=>"UTF-8">, index=>"logstash-*", query=>"{ \"sort\": [ \"_doc\" ] }", size=>1000, scroll=>"1m", docinfo=>false, docinfo_target=>"@metadata", docinfo_fields=>["_index", "_type", "_id"], ssl=>false>
  Error: Connection refused - Connection refused
[2017-11-07T00:35:29,301][INFO ][logstash.agent           ] Successfully started Logstash API endpoint {:port=>9600}
[2017-11-07T00:35:30,284][ERROR][logstash.pipeline        ] A plugin had an unrecoverable error. Will restart this plugin.
  Plugin: <LogStash::Inputs::Elasticsearch user=>"logstash_internal", password=><password>, id=>"f266aacba8d15ed350736e5ee472c7da4536f879-2", enable_metric=>true, codec=><LogStash::Codecs::JSON id=>"json_99b755e1-2da1-4e2b-b4e0-f9b41ceeb0fa", enable_metric=>true, charset=>"UTF-8">, index=>"logstash-*", query=>"{ \"sort\": [ \"_doc\" ] }", size=>1000, scroll=>"1m", docinfo=>false, docinfo_target=>"@metadata", docinfo_fields=>["_index", "_type", "_id"], ssl=>false>

I did create the logstash_internal user and roles following the guidelines listed at https://www.elastic.co/guide/en/x-pack/current/logstash.html

I can also curl http://user:password@elasticsearch:9200 from within the Logstash container, so I know Elasticsearch is reachable.

I am not sure where the problem is.

You haven't configured hosts for the elasticsearch input, so it is probably defaulting to localhost:9200.

The issue was the missing hosts setting, as you suggested, and I was supposed to use the user I created for the reader role in the input block.
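For reference, a sketch of the corrected input block with hosts set and a separate reader-role user (the logstash_reader user name and the password here are placeholders; substitute whatever you created when following the X-Pack guide):

```
input {
  file {
    path => "/tmp/*_log"
    start_position => "beginning"
  }
  elasticsearch {
    hosts => ["elasticsearch:9200"]
    user => "logstash_reader"
    password => "reader_password"
  }
}
```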

Thank you so much!

Hi, I am getting:
[2017-11-07T12:05:52,331][WARN ][logstash.outputs.elasticsearch] Attempted to resurrect connection to dead ES instance, but got an error. {:url=>"http://10.0.2.15:9200/", :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::BadResponseCodeError, :error=>"Got response code '401' contacting Elasticsearch at URL 'http://10.0.2.15:9200/'"}

Here is my logstash.conf file:
input {
  elasticsearch {
    ...
    user => "logstash_internal"
    password => "password"
  }
}

filter {
  elasticsearch {
    ...
    user => "logstash_internal"
    password => "password"
  }
}

output {
  elasticsearch {
    ...
    user => "logstash_internal"
    password => "password"
    hosts => ["10.0.2.15:9200"]
  }
}

I have verified I can successfully authenticate using that username and password to the IP and port.

dev-user-1@ubuntu-16:~$ curl -X GET -u logstash_internal:password 'http://localhost:9200'
{
  "name" : "node-1",
  "cluster_name" : "my-application",
  "cluster_uuid" : "UQgwamurRX-KMQPZKXsWfA",
  "version" : {
    "number" : "5.6.2",
    "build_hash" : "57e20f3",
    "build_date" : "2017-09-23T13:16:45.703Z",
    "build_snapshot" : false,
    "lucene_version" : "6.6.1"
  },
  "tagline" : "You Know, for Search"
}

Any ideas?

Furthermore, when I try to load that conf file I get this:

^Croot@ubuntu-16:/usr/share/logstash# bin/logstash -f /etc/logstash/conf.d/logstash.conf
2017-11-07 11:55:00,194 main ERROR Unable to locate appender "${sys:ls.log.format}_rolling" for logger config "root"
WARNING: Could not find logstash.yml which is typically located in $LS_HOME/config or /etc/logstash. You can specify the path using --path.settings. Continuing using the defaults
Could not find log4j2 configuration at path /usr/share/logstash/config/log4j2.properties. Using default config which logs errors to the console
root@ubuntu-16:/usr/share/logstash# bin/logstash -f /etc/logstash/conf.d/logstash.conf
2017-11-07 12:11:43,932 main ERROR Unable to locate appender "${sys:ls.log.format}_rolling" for logger config "root"

For my input block, I've provided a different user which has different
privileges, e.g. one with the logstash_reader role. Take a look at the URL I provided
in my initial post. It shows you how to create this user/role.
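For anyone who hasn't set that user up yet, a minimal sketch using the 5.x X-Pack security APIs (the role name, user name, password, and privileges shown here are assumptions; adjust them to match the guide):

```
POST /_xpack/security/role/logstash_reader
{
  "indices": [
    {
      "names": [ "logstash-*" ],
      "privileges": [ "read", "view_index_metadata" ]
    }
  ]
}

POST /_xpack/security/user/logstash_reader_user
{
  "password": "changeme",
  "roles": [ "logstash_reader" ]
}
```

The elasticsearch input would then authenticate as logstash_reader_user instead of logstash_internal.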

Yes, I have that set up correctly, and I have verified I can authenticate OK.

Hi, if you look at what I entered above, you will see that the user and role were created and that I was able to successfully authenticate.

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.