Logstash errors after playing around with x-pack


(Matt Oney) #1

I did start to add x-pack, but I'm holding off for another week or two, so I commented out these lines in logstash.yml:

xpack.monitoring.enabled: true
xpack.monitoring.elasticsearch.url: http://X.X.X.X:9200

and removed any remnants of x-pack
sudo /usr/share/elasticsearch/bin/elasticsearch-plugin remove x-pack
sudo /usr/share/logstash/bin/logstash-plugin remove x-pack
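For reference, the commenting-out step can be scripted; this is a minimal sketch, assuming the config lives at /etc/logstash/logstash.yml:

```shell
# Comment out the x-pack monitoring lines by prefixing '#'.
# Demonstrated here on a sample line rather than the real file:
printf '%s\n' 'xpack.monitoring.enabled: true' \
  | sed 's/^xpack\.monitoring\./#&/'
# prints: #xpack.monitoring.enabled: true

# Against the real file (path assumed; -i.bak keeps a backup):
#   sudo sed -i.bak 's/^xpack\.monitoring\./#&/' /etc/logstash/logstash.yml
```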

and restarted all three components

but I'm still getting the errors -

[2017-10-03T15:17:45,045][WARN ][logstash.outputs.elasticsearch] Attempted to resurrect connection to dead ES instance, but got an error. {:url=>"http://10.x.x.x:9200/", :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :error=>"Elasticsearch Unreachable: [http://10.x.x.x:9200/][Manticore::SocketException] Connection refused (Connection refused)"}

Is there anything I can do?

I've been looking at various similar posts, but haven't found a fix.

Thanks!


(Yu Watanabe) #2

@mathurin68

The error seems to have nothing to do with x-pack.

Try connecting to Elasticsearch from the Logstash server with the command below and see whether you get a response.

curl -XGET 10.x.x.x:9200

Make sure you get something like:

[ywatanabe@host ~]$ curl -XGET 192.168.11.15:9200
{
  "name" : "ebBQY_o",
  "cluster_name" : "elasticsearch",
  "cluster_uuid" : "f9NOyrQuQJSrJ_fITbhlSA",
  "version" : {
    "number" : "5.4.0",
    "build_hash" : "780f8c4",
    "build_date" : "2017-04-28T17:43:27.229Z",
    "build_snapshot" : false,
    "lucene_version" : "6.5.0"
  },
  "tagline" : "You Know, for Search"
}

(Matt Oney) #3

Thank you for the response! Based on the curl troubleshooting you suggested, it turns out I did something silly, of course.

Alright, I've got it: I don't think it's listening on that IP address.
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 127.0.0.1:5601 0.0.0.0:* LISTEN 2032/node
tcp 0 0 0.0.0.0:80 0.0.0.0:* LISTEN 2278/nginx -g daemo
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN 2119/sshd
tcp6 0 0 127.0.0.1:9600 :::* LISTEN 26110/java
tcp6 0 0 ::1:9200 :::* LISTEN 23516/java
tcp6 0 0 127.0.0.1:9200 :::* LISTEN 23516/java
tcp6 0 0 :::5044 :::* LISTEN 26110/java
tcp6 0 0 10.x.x.x1:5140 :::* LISTEN 26110/java
tcp6 0 0 ::1:9300 :::* LISTEN 23516/java
tcp6 0 0 127.0.0.1:9300 :::* LISTEN 23516/java
tcp6 0 0 :::22 :::* LISTEN 2119/sshd

Is it better to leave Elasticsearch bound to localhost, or should I set it to listen on the server's IP address?

---------------------------------- Network -----------------------------------

Set the bind address to a specific IP (IPv4 or IPv6):

network.host: localhost

Set a custom port for HTTP:

http.port: 9200

For more information, consult the network module documentation.
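One quick way to read a listing like the netstat output above is to filter for the port in question; here is a small sketch against a captured sample (format assumed from `netstat -tlnp`, where column 4 is the local address a process is bound to):

```shell
# Filter a captured netstat sample for port 9200; column 4 shows the
# address Elasticsearch's HTTP listener is actually bound to.
netstat_sample='tcp6 0 0 ::1:9200 :::* LISTEN 23516/java
tcp6 0 0 127.0.0.1:9200 :::* LISTEN 23516/java
tcp6 0 0 :::5044 :::* LISTEN 26110/java'

printf '%s\n' "$netstat_sample" | awk '$4 ~ /:9200$/ {print $4}'
# prints:
# ::1:9200
# 127.0.0.1:9200
```

Both matches are loopback addresses, which is why a curl from another host to 10.x.x.x:9200 gets "Connection refused".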

The only logs going into it are from Winlogbeat on a Windows Event Forwarder server.

Thanks!


(Matt Oney) #4

Got it. I found this and it seemed to solve the problem:

Set the bind address to a specific IP (IPv4 or IPv6):

network.host: 127.0.0.1
http.host: 0.0.0.0

Set a custom port for HTTP:

http.port: 9200

For more information, consult the network module documentation.


(Yu Watanabe) #5

My thought is that, from a security point of view, it is better not to expose Elasticsearch directly to the local network or other networks, because that prevents unexpected access from other network nodes.

However, if you are setting up a production environment, you might have to set up a dedicated server just for Elasticsearch to ensure performance. In that case, you have a couple of options for securing it.

One option is application-level settings, such as:

  1. Change the ports from the defaults of 9200 and 9300
  2. Secure your Elasticsearch node with x-pack

Another option is to use a firewall, etc.
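As a sketch of the firewall option (addresses are hypothetical; iptables syntax, run as root, and adapt to your own network):

```shell
# Hypothetical firewall rules: allow only the Logstash host (10.0.0.5 here)
# to reach Elasticsearch's HTTP port, and drop everything else.
iptables -A INPUT -p tcp -s 10.0.0.5 --dport 9200 -j ACCEPT
iptables -A INPUT -p tcp --dport 9200 -j DROP
```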


(system) #6

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.