Connection to localhost:9200 refused

Hello all,

I am running a three-node cluster on version 8.0.0. The Elasticsearch nodes start without any errors,

but when I load data from Logstash, it gives me the following errors:

[2022-05-04T14:54:41,780][INFO ][logstash.outputs.elasticsearch][main] Failed to perform request {:message=>"Connect to localhost:9200 [localhost/127.0.0.1, localhost/0:0:0:0:0:0:0:1] failed: Connection refused: connect", :exception=>Manticore::SocketException, :cause=>org.apache.http.conn.HttpHostConnectException: Connect to localhost:9200 [localhost/127.0.0.1, localhost/0:0:0:0:0:0:0:1] failed: Connection refused: connect}
[2022-05-04T14:54:53,952][WARN ][logstash.outputs.elasticsearch][main] Attempted to resurrect connection to dead ES instance, but got an error {:url=>"http://localhost:9200/", :exception=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :message=>"Elasticsearch Unreachable: [http://localhost:9200/][Manticore::SocketException] Connect to localhost:9200 [localhost/127.0.0.1, localhost/0:0:0:0:0:0:0:1] failed: Connection refused: connect"}

I read a few issues and figured these errors could be related to the network.host setting in elasticsearch.yml.

Right now, network.host is set to the server's IP address.

Should it be 0.0.0.0?

Please let me know if you need more information.

Thank you,
Akhil

Hi,

Please share your config file.

Hi @ibra_013,

Here is my elasticsearch.yml for the master node:

cluster.name: Search
node.name: fs-master

path.data: E:\LandingZone\Elastic\fs-master\Data
path.logs: D:\APPS\ELK8.0.0\elasticsearch-8.0.0\logs

network.host: 7.**.**.*3  # IP address of the server where the master node is running
http.port: 9200

discovery.seed_hosts: ["7.**.**.*1", "7.**.**.*2", "7.**.**.*3"]  # IP addresses of all three nodes
cluster.initial_master_nodes: ["7.**.**.*3"]  # IP address of the server where the master node is running

xpack.security.enabled: false

Here is the Logstash config file I use for data ingestion:

input {
	stdin { 
		codec => line {
			charset => "UTF-8"
		}
	}
}

filter {
	# The fingerprint filter creates a unique identifier that is used as the document id. 
	# This creates a hash key based on the content message that is used as a unique id/key for each elasticsearch entry. 
	 
	fingerprint { 
		source => "message"
		target => "[@metadata][fingerprint]"
		method => "SHA1"
		# For the key we use the name of the index followed by the unique string on the first line of the csv data file.  
		key => "traveller_no_dups"
		base64encode => true
	}
	
	# Defines all the fields in the csv file in the order they are found.
	
    csv {
        separator => ","
		columns => [
			"SURNAME", 
			"FIRST_NAME", 
			"MIDDLE_NAME", 
			"BIRTHDATE"
		]
	}

	# Add a new DOB field that will hold the BIRTHDATE content.
	mutate {
		add_field => { "DOB" => "%{BIRTHDATE}" }
	}
	
	#	Process the birthdate as DOB. Convert the birthdate into a date value. 
	date {
		match => [ "DOB", "yyyyMMdd"]
		target => "DOB"
	}

	# Remove all fields we don't need anymore.
	mutate { 
		remove_field => [ "BIRTHDATE" ]
	}	
}

output { 
     elasticsearch {
            action => "index"
            hosts => "localhost:9200"
            index => "traveller_no_dups"
			document_id => "%{[@metadata][fingerprint]}"
       }
        stdout {codec => rubydebug}
#        stdout {}
}
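
As an aside on the fingerprint filter above: with a key set, the SHA1 method, and base64encode => true, my understanding is that it effectively computes a base64-encoded HMAC-SHA1 of the message. A minimal Ruby sketch of that idea (the sample message here is made up for illustration, not from my data):

```ruby
require 'openssl'
require 'base64'

# Assumption: keyed fingerprints are HMACs; the key below matches the
# one in the filter, while the message is an illustrative CSV line.
key     = 'traveller_no_dups'
message = 'Smith,John,Allen,19800101'

# HMAC-SHA1 of the message, then base64-encoded -- a stable, unique
# document id for deduplicating identical lines.
digest = OpenSSL::HMAC.digest(OpenSSL::Digest.new('SHA1'), key, message)
puts Base64.strict_encode64(digest)
```

The same input line always yields the same id, so re-ingesting the file overwrites documents instead of duplicating them.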

Hi,

In Logstash, you have to point the output at the IP address of Elasticsearch.

output { 
     elasticsearch {
            action => "index"
            hosts => ["http://7****:9200", "http://7***:9200", "http://7***:9200"]
            index => "traveller_no_dups"
			document_id => "%{[@metadata][fingerprint]}"
       }
        stdout {codec => rubydebug}
#        stdout {}
}

Hi @ibra_013,

Can I just put the IP address of my master node? I check Elasticsearch and Kibana status with that IP address.

Thanks,
Akhil

Hi,

If Logstash is on a different server, then you have to.
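
One note: even when Logstash runs on the same server, localhost:9200 will be refused if network.host is bound only to the machine's external IP, because Elasticsearch then no longer listens on the loopback interface. If you want both, a sketch using the documented special values (adjust to your setup):

```
# elasticsearch.yml -- bind to the loopback and the site-local address
network.host: ["_local_", "_site_"]
```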

Hi @ibra_013,

Logstash is on the same server where my master node is running.

So I will try putting my master node's IP first.

Thanks for the explanation. I really appreciate it.
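
For reference, with that change the output block would look something like this (same masked IP as in my elasticsearch.yml above, and http since xpack.security.enabled is false):

```
output {
	elasticsearch {
		action => "index"
		hosts => ["http://7.**.**.*3:9200"]
		index => "traveller_no_dups"
		document_id => "%{[@metadata][fingerprint]}"
	}
	stdout { codec => rubydebug }
}
```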