Does Logstash Close the Port After Transfer?

Does Logstash close the port after the transfer?
I frequently encounter "connection refused" errors.
I have 3 servers: 2 of them send data to a third server, which compiles the data.
The data is huge, so it takes me a day or two to transfer it.
But when I got back to it the next day, there was a connection error on Elasticsearch.
I tried using telnet to test the open port, but it says the connection was refused.

Right now I transfer the data bit by bit (about 1 Elasticsearch index per transfer). After I transfer a specific index, my connection is lost, and when I telnet to the port, the connection is refused.
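(For reference, this is roughly the same check telnet does, as a small Python sketch; the host and port are taken from the Logstash log further down, so substitute your own Elasticsearch endpoint.)

```python
import socket

# The ES endpoint from the Logstash log below -- an assumption, replace with yours.
HOST, PORT = "192.168.200.2", 10014

try:
    # socket.create_connection attempts the same TCP handshake telnet does.
    with socket.create_connection((HOST, PORT), timeout=5):
        print(f"{HOST}:{PORT} is accepting connections")
except OSError as err:
    # ConnectionRefusedError (a subclass of OSError) is the "connection refused" case.
    print(f"{HOST}:{PORT} is not reachable: {err}")
```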

Is it Logstash that is closing the port?

When this happens, I just restart my server and it works again. :thinking: :thought_balloon:

Does Logstash close the port after the transfer?

What transfer? Please be explicit.

Right now I transfer the data bit by bit (about 1 Elasticsearch index per transfer). After I transfer a specific index, my connection is lost, and when I telnet to the port, the connection is refused.

If ES dies there should be clues about it in its logs.

I transfer Elasticsearch data from one machine to another.

I don't think ES is the one that dies, since I can still send data to ES over the intranet.

I am still encountering the problem.

Just to clarify, I am transferring data to Elasticsearch via Logstash.
I am not sure whether the "Connection refused" issue arises only after the transfer.
When I run netstat, I found that the connection state is CLOSE_WAIT.
Does Logstash close the port after the transfer, or after it sits idle for a while?
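(Here is a small sketch of how I look for those CLOSE_WAIT sockets programmatically instead of reading raw netstat output; it assumes the psutil package is installed and may need root privileges to see other processes' sockets.)

```python
import psutil

# List every socket stuck in CLOSE_WAIT and which process owns it.
# CLOSE_WAIT on our side means the remote end (Elasticsearch here) already
# closed the connection, but the local process has not closed its socket yet.
for conn in psutil.net_connections(kind="inet"):
    if conn.status == psutil.CONN_CLOSE_WAIT:
        owner = psutil.Process(conn.pid).name() if conn.pid else "unknown"
        print(f"{owner}: local={conn.laddr} remote={conn.raddr} status={conn.status}")
```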

Logstash log:

[2017-09-07T15:50:38,163][WARN ][logstash.outputs.elasticsearch] Attempted to resurrect connection to dead ES instance, but got an error. {:url=>"http://192.168.200.2:10014/", :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :error=>"Elasticsearch Unreachable: [http://192.168.200.2:10014/][Manticore::SocketException] Connection refused"}
[2017-09-07T15:50:40,438][WARN ][logstash.shutdownwatcher ] {"inflight_count"=>497, "stalling_thread_info"=>{"other"=>[{"thread_id"=>20, "name"=>"[main]>worker0", "current_call"=>"[...]/vendor/bundle/jruby/1.9/gems/stud-0.0.23/lib/stud/interval.rb:89:in `sleep'"}, {"thread_id"=>21, "name"=>"[main]>worker1", "current_call"=>"[...]/vendor/bundle/jruby/1.9/gems/stud-0.0.23/lib/stud/interval.rb:89:in `sleep'"}, {"thread_id"=>22, "name"=>"[main]>worker2", "current_call"=>"[...]/vendor/bundle/jruby/1.9/gems/stud-0.0.23/lib/stud/interval.rb:89:in `sleep'"}, {"thread_id"=>23, "name"=>"[main]>worker3", "current_call"=>"[...]/vendor/bundle/jruby/1.9/gems/stud-0.0.23/lib/stud/interval.rb:89:in `sleep'"}]}}

TIA
