Elasticsearch not reachable

I am trying to load CSV data into Elasticsearch using Logstash and visualize the data in Kibana, but when I run the Logstash config file I get an "Elasticsearch not reachable" error, even though my Elasticsearch server is running fine.

Note: I can load the data into an output file using Logstash.

Here is my logstash.conf file:
input {
  beats {
    port => 5044
  }
  file {
    path => "/populationbycountry19802010millions.csv"
    start_position => "beginning"
    sincedb_path => "directory/null"
  }
}
filter {
  csv {
    separator => ","
    columns => ["Country", "1980", "1981", "1982", "1983", "1984", "1985", "1986", "1987", "1988", "1989", "1990", "1991", "1992", "1993", "1994", "1995", "1996", "1997", "1998", "1999", "2000", "2001", "2002", "2003", "2004", "2005", "2006", "2007", "2008", "2009", "2010"]
  }
  mutate { convert => ["1980", "float"] }
  mutate { convert => ["1981", "float"] }
  mutate { convert => ["1982", "float"] }
  mutate { convert => ["1983", "float"] }
  mutate { convert => ["1984", "float"] }
  mutate { convert => ["1985", "float"] }
  mutate { convert => ["1986", "float"] }
  mutate { convert => ["1987", "float"] }
  mutate { convert => ["1988", "float"] }
  mutate { convert => ["1989", "float"] }
  mutate { convert => ["1990", "float"] }
  mutate { convert => ["1991", "float"] }
  mutate { convert => ["1992", "float"] }
  mutate { convert => ["1993", "float"] }
  mutate { convert => ["1994", "float"] }
  mutate { convert => ["1995", "float"] }
  mutate { convert => ["1996", "float"] }
  mutate { convert => ["1997", "float"] }
  mutate { convert => ["1998", "float"] }
  mutate { convert => ["1999", "float"] }
  mutate { convert => ["2000", "float"] }
  mutate { convert => ["2001", "float"] }
  mutate { convert => ["2002", "float"] }
  mutate { convert => ["2003", "float"] }
  mutate { convert => ["2004", "float"] }
  mutate { convert => ["2005", "float"] }
  mutate { convert => ["2006", "float"] }
  mutate { convert => ["2007", "float"] }
  mutate { convert => ["2008", "float"] }
  mutate { convert => ["2009", "float"] }
  mutate { convert => ["2010", "float"] }
}
output {
  elasticsearch {
    hosts => ["http://localhost:9200"]
    manage_template => false
    index => "population-%{+YYYY.MM.dd}"
  }
  file {
    path => "C:/services/elasticsearch/logstash-7.3.0/config/test-data/population/outlog2.log"
  }
  stdout {
    codec => rubydebug
  }
}
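As an aside, the csv filter plus the repeated mutate/convert blocks above are equivalent to parsing each row and casting every year column to a float. A minimal Python sketch of that same transformation (column names taken from the config; it assumes the file has no header row, and it simply leaves unparseable values unchanged, which is a simplification of what mutate/convert does):

```python
import csv
import io

YEARS = [str(y) for y in range(1980, 2011)]
COLUMNS = ["Country"] + YEARS

def parse_rows(text):
    """Parse CSV text and cast every year column to float,
    mirroring the csv + mutate/convert filters in the config."""
    rows = []
    for row in csv.DictReader(io.StringIO(text), fieldnames=COLUMNS):
        for year in YEARS:
            try:
                row[year] = float(row[year])
            except (TypeError, ValueError):
                pass  # leave values that don't parse as-is (simplification)
        rows.append(row)
    return rows
```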

Here is my error log:

C:\services\elasticsearch\logstash-7.3.0\bin>logstash -f ../config/test-data/population/logstash.conf
Thread.exclusive is deprecated, use Thread::Mutex
Sending Logstash logs to C:/services/elasticsearch/logstash-7.3.0/logs which is now configured via log4j2.properties
[2019-08-16T12:21:48,929][WARN ][logstash.config.source.multilocal] Ignoring the 'pipelines.yml' file because modules or command line options are specified
[2019-08-16T12:21:48,939][INFO ][logstash.runner          ] Starting Logstash {"logstash.version"=>"7.3.0"}
[2019-08-16T12:21:50,225][INFO ][org.reflections.Reflections] Reflections took 27 ms to scan 1 urls, producing 19 keys and 39 values
[2019-08-16T12:21:52,616][INFO ][logstash.outputs.elasticsearch] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://localhost:9200/]}}
[2019-08-16T12:21:52,756][WARN ][logstash.outputs.elasticsearch] Restored connection to ES instance {:url=>"http://localhost:9200/"}
[2019-08-16T12:21:52,792][INFO ][logstash.outputs.elasticsearch] ES Output version determined {:es_version=>7}
[2019-08-16T12:21:52,795][WARN ][logstash.outputs.elasticsearch] Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>7}
[2019-08-16T12:21:52,811][INFO ][logstash.outputs.elasticsearch] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["http://localhost:9200"]}
[2019-08-16T12:21:52,897][WARN ][org.logstash.instrument.metrics.gauge.LazyDelegatingGauge] A gauge metric of an unknown type (org.jruby.specialized.RubyArrayOneObject) has been create for key: cluster_uuids. This may result in invalid serialization.  It is recommended to log an issue to the responsible developer/development team.
[2019-08-16T12:21:52,900][INFO ][logstash.javapipeline    ] Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>6, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50, "pipeline.max_inflight"=>750, :thread=>"#<Thread:0x6a01f497 run>"}
[2019-08-16T12:21:53,348][INFO ][logstash.inputs.beats    ] Beats inputs: Starting input listener {:address=>"0.0.0.0:5044"}
[2019-08-16T12:21:53,745][INFO ][logstash.javapipeline    ] Pipeline started {"pipeline.id"=>"main"}
[2019-08-16T12:21:53,838][INFO ][org.logstash.beats.Server] Starting server on port: 5044
[2019-08-16T12:21:53,840][INFO ][filewatch.observingtail  ] START, creating Discoverer, Watch with file and sincedb collections
[2019-08-16T12:21:53,847][INFO ][logstash.agent           ] Pipelines running {:count=>1, :running_pipelines=>[:main], :non_running_pipelines=>[]}
[2019-08-16T12:21:54,199][INFO ][logstash.agent           ] Successfully started Logstash API endpoint {:port=>9600}
[2019-08-16T12:23:08,742][WARN ][logstash.outputs.elasticsearch] Marking url as dead. Last error: [LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError] Elasticsearch Unreachable: [http://localhost:9200/][Manticore::SocketTimeout] Read timed out {:url=>http://localhost:9200/, :error_message=>"Elasticsearch Unreachable: [http://localhost:9200/][Manticore::SocketTimeout] Read timed out", :error_class=>"LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError"}
[2019-08-16T12:23:08,748][ERROR][logstash.outputs.elasticsearch] Attempted to send a bulk request to elasticsearch' but Elasticsearch appears to be unreachable or down! {:error_message=>"Elasticsearch Unreachable: [http://localhost:9200/][Manticore::SocketTimeout] Read timed out", :class=>"LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError", :will_retry_in_seconds=>2}
[2019-08-16T12:23:10,762][ERROR][logstash.outputs.elasticsearch] Attempted to send a bulk request to elasticsearch, but no there are no living connections in the connection pool. Perhaps Elasticsearch is unreachable or down? {:error_message=>"No Available connections", :class=>"LogStash::Outputs::ElasticSearch::HttpClient::Pool::NoConnectionAvailableError", :will_retry_in_seconds=>4}

Hi,

Is your Elasticsearch service running?
Are you able to ping or telnet your Elasticsearch node from the Logstash node?

Regards,
Harsh Bajaj

Hi,

Thanks for your reply.
Yes, the ES service is running, but I don't know how to ping the ES node from the Logstash node.
Kindly guide me.

Thanks

Happy to hear that! Your comment made my day.

I don't understand your comment :slight_smile:

Hi,

You can run the following command from the Logstash node:
ping <elasticsearch host>
For telnet, run:
telnet <IP> <port>

Regards,
Harsh Bajaj
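
In the same spirit as the ping/telnet check, a plain TCP connection test can also be scripted. A minimal sketch in Python (the host and port here are the defaults from the config earlier in the thread; this only checks that the port accepts connections, it is not a substitute for reading the Elasticsearch logs):

```python
import socket

def is_reachable(host, port, timeout=3.0):
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Check whether Elasticsearch's default HTTP port is open.
print(is_reachable("localhost", 9200))
```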


Dear, I don't use a separate Logstash node.

Hi,

Could you describe your ELK stack setup so we can understand it better?

Regards,
Harsh Bajaj


Thanks, I figured out that my connection to the Elastic cluster had been lost.
When I restarted ES, it worked. :slight_smile:

Thanks for helping me.


This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.