Elasticsearch appears unreachable or down


(Kenneth Mroz) #1

I'm getting this error in my Logstash log file:
:message=>"Attempted to send a bulk request to Elasticsearch configured at '["http://localhost:9200/"]', but Elasticsearch appears to be unreachable or down!",
Data is still flowing through to Kibana, though. I checked service elasticsearch status and it's running.

Logstash config (I've tried both 127.0.0.1 and localhost as the host):
output {
  elasticsearch { hosts => ["127.0.0.1:9200"] }
  stdout { codec => rubydebug }
}
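For reference, the bulk requests that the error message refers to are newline-delimited JSON: each event is an action line followed by a document line, with a trailing newline. A minimal sketch of building such a body (the index name is just an illustrative value, not taken from this config):

```python
import json

def build_bulk_body(index, docs):
    """Build an Elasticsearch _bulk request body: each document is
    preceded by an action line, and every line ends with a newline."""
    lines = []
    for doc in docs:
        lines.append(json.dumps({"index": {"_index": index}}))
        lines.append(json.dumps(doc))
    return "\n".join(lines) + "\n"  # the trailing newline is required

body = build_bulk_body("logstash-2015.11.18", [{"message": "hello"}])
print(body)
```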


(Mark Walkom) #2

And have you checked ES as well?


(Kenneth Mroz) #3

Yes, no error in the log.


(Mark Walkom) #4

Right, but what state is it in?


(Kenneth Mroz) #5

[2015-11-18 15:01:50,091][INFO ][discovery ] [William Stryker] elasticsearch/R3Ar2QRSR2io1HL9AwvuQw
[2015-11-18 15:01:53,116][INFO ][cluster.service ] [William Stryker] new_master {William Stryker}{R3Ar2QRSR2io1HL9AwvuQw}{127.0.0.1}{localhost/127.0.0.1:9300}, reason: zen-disco-join(elected_as_master, [0] joins received)
[2015-11-18 15:01:53,150][INFO ][http ] [William Stryker] publish_address {localhost/127.0.0.1:9200}, bound_addresses {127.0.0.1:9200}, {[::1]:9200}
[2015-11-18 15:01:53,151][INFO ][node ] [William Stryker] started
[2015-11-18 15:01:53,185][INFO ][gateway ] [William Stryker] recovered [4] indices into cluster_state


(Luca Wintergerst) #6

Can you post the output of a curl against "http://localhost:9200"?


(Kenneth Mroz) #7

{
  "name" : "William Stryker",
  "cluster_name" : "elasticsearch",
  "version" : {
    "number" : "2.0.0",
    "build_hash" : "de54438d6af8f9340d50c5c786151783ce7d6be5",
    "build_timestamp" : "2015-10-22T08:09:48Z",
    "build_snapshot" : false,
    "lucene_version" : "5.2.1"
  },
  "tagline" : "You Know, for Search"
}


(Luca Wintergerst) #8

Are there many tasks pending?
9200/_cluster/pending_tasks?pretty

Did you try to restart your cluster already?
If so, did the same problem occur again?

Did you try to restart your logstash?


(Luca Wintergerst) #9

And also, please post the output of the following:
...9200/_cluster/health?pretty=true


(Kenneth Mroz) #10

Yes, I restarted all services and nothing changed.

Output:
{
  "cluster_name" : "elasticsearch",
  "status" : "yellow",
  "timed_out" : false,
  "number_of_nodes" : 1,
  "number_of_data_nodes" : 1,
  "active_primary_shards" : 16,
  "active_shards" : 16,
  "relocating_shards" : 0,
  "initializing_shards" : 0,
  "unassigned_shards" : 16,
  "delayed_unassigned_shards" : 0,
  "number_of_pending_tasks" : 0,
  "number_of_in_flight_fetch" : 0,
  "task_max_waiting_in_queue_millis" : 0,
  "active_shards_percent_as_number" : 50.0
}
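For what it's worth, the numbers in that health output are internally consistent: a single node holds all 16 primaries but cannot assign the 16 replica copies to itself, which is why the status is yellow rather than an outage. A quick sanity check of the arithmetic, using the values above:

```python
# Values copied from the _cluster/health output above.
health = {"active_shards": 16, "unassigned_shards": 16}

total = health["active_shards"] + health["unassigned_shards"]
percent = 100.0 * health["active_shards"] / total
print(percent)  # 50.0, matching active_shards_percent_as_number
```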


(Mike Simos) #11

Try running logstash with --debug and see if you get any other error messages. Also since you're using Elasticsearch 2.0, make sure you're using Logstash 2.0. Only Logstash 2.0 is compatible with Elasticsearch 2.0.
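A trivial sketch of that compatibility rule as a major-version comparison (the Elasticsearch version comes from the curl output earlier in the thread; the Logstash version here is an illustrative value, not confirmed by the poster):

```python
def major(version):
    """Extract the major component of a dotted version string."""
    return int(version.split(".")[0])

elasticsearch_version = "2.0.0"  # from the curl output above
logstash_version = "2.0.0"       # illustrative: check your own install
compatible = major(elasticsearch_version) == major(logstash_version)
print(compatible)  # True
```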


(Kenneth Mroz) #12

Default settings used: Filter workers: 4
Starting courier input listener {:address=>"0.0.0.0:9006", :level=>:info, :file=>"logstash/inputs/courier.rb", :line=>"102", :method=>"register"}
The error reported is:
input/courier: Failed to initialise: Address already in use - bind - Address already in use

I'm getting this error, but the port in question is bound by that same Logstash instance.
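The "Address already in use" error is the OS refusing a second bind on a port that already has a listener; even the same program can hit it on restart if the old process is still holding the socket. A minimal reproduction of the underlying behaviour (the port is chosen by the OS here, not Logstash's 9006):

```python
import errno
import socket

# First socket takes a port (the OS picks a free one for us).
listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
listener.bind(("127.0.0.1", 0))
listener.listen(1)
port = listener.getsockname()[1]

# A second bind on the same port fails the way the courier input did.
intruder = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
got_eaddrinuse = False
try:
    intruder.bind(("127.0.0.1", port))
except OSError as e:
    got_eaddrinuse = (e.errno == errno.EADDRINUSE)
finally:
    intruder.close()
    listener.close()

print(got_eaddrinuse)  # True: "Address already in use"
```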


(Kenneth Mroz) #13

Any help on this issue?

