Elasticsearch appears unreachable or down

I'm getting this error in my Logstash log file:
:message=>"Attempted to send a bulk request to Elasticsearch configured at '["http://localhost:9200/"]', but Elasticsearch appears to be unreachable or down!",
Data is still flowing through to Kibana, though. I checked "service elasticsearch status" and it's running.

Logstash config:
output {
  elasticsearch { hosts => ["127.0.0.1:9200"] }   # also tried "localhost:9200"
  stdout { codec => rubydebug }
}
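As a first sanity check, a plain TCP-level probe from the Logstash host can rule out networking problems before digging into Logstash itself. This is just a sketch; the host and port mirror the config above:

```python
import socket

def can_connect(host, port, timeout=2.0):
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# e.g. can_connect("127.0.0.1", 9200) should return True if
# Elasticsearch is listening locally on its HTTP port.
```

If this returns True but Logstash still reports the node as unreachable, the problem is above the TCP layer (HTTP, auth, or plugin configuration) rather than connectivity.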

And have you checked ES as well?

Yes, no error in the log.

Right, but what state is it in?

[2015-11-18 15:01:50,091][INFO ][discovery ] [William Stryker] elasticsearch/R3Ar2QRSR2io1HL9AwvuQw
[2015-11-18 15:01:53,116][INFO ][cluster.service ] [William Stryker] new_master {William Stryker}{R3Ar2QRSR2io1HL9AwvuQw}{127.0.0.1}{localhost/127.0.0.1:9300}, reason: zen-disco-join(elected_as_master, [0] joins received)
[2015-11-18 15:01:53,150][INFO ][http ] [William Stryker] publish_address {localhost/127.0.0.1:9200}, bound_addresses {127.0.0.1:9200}, {[::1]:9200}
[2015-11-18 15:01:53,151][INFO ][node ] [William Stryker] started
[2015-11-18 15:01:53,185][INFO ][gateway ] [William Stryker] recovered [4] indices into cluster_state

Can you post the output of curl on "http://localhost:9200"?

{
  "name" : "William Stryker",
  "cluster_name" : "elasticsearch",
  "version" : {
    "number" : "2.0.0",
    "build_hash" : "de54438d6af8f9340d50c5c786151783ce7d6be5",
    "build_timestamp" : "2015-10-22T08:09:48Z",
    "build_snapshot" : false,
    "lucene_version" : "5.2.1"
  },
  "tagline" : "You Know, for Search"
}

Are there many tasks pending?
9200/_cluster/pending_tasks?pretty

Did you try to restart your cluster already?
If so, did the same problem occur again?

Did you try to restart your logstash?

And also, please post the output of the following:
...9200/_cluster/health?pretty=true

Yes, I restarted all services and nothing changed.

Output:
{
  "cluster_name" : "elasticsearch",
  "status" : "yellow",
  "timed_out" : false,
  "number_of_nodes" : 1,
  "number_of_data_nodes" : 1,
  "active_primary_shards" : 16,
  "active_shards" : 16,
  "relocating_shards" : 0,
  "initializing_shards" : 0,
  "unassigned_shards" : 16,
  "delayed_unassigned_shards" : 0,
  "number_of_pending_tasks" : 0,
  "number_of_in_flight_fetch" : 0,
  "task_max_waiting_in_queue_millis" : 0,
  "active_shards_percent_as_number" : 50.0
}
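For what it's worth, that yellow status is typical of a single-node cluster: each of the 16 primaries has a replica that can never be allocated, because a replica must live on a different node than its primary. A small sketch of that reasoning (the dict just mirrors the health output above):

```python
# Health figures copied from the _cluster/health output above.
health = {
    "status": "yellow",
    "number_of_nodes": 1,
    "active_primary_shards": 16,
    "unassigned_shards": 16,
}

def yellow_is_just_replicas(h):
    """True when a yellow status is fully explained by replicas that
    cannot be allocated on a single-node cluster (a replica never
    shares a node with its primary)."""
    return (h["status"] == "yellow"
            and h["number_of_nodes"] == 1
            and h["unassigned_shards"] == h["active_primary_shards"])
```

If that condition holds, the yellow status is harmless and unrelated to the Logstash error; it would clear by dropping replicas to 0 or adding a second node.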

Try running Logstash with --debug and see if you get any other error messages. Also, since you're using Elasticsearch 2.0, make sure you're using Logstash 2.0; only Logstash 2.0 is compatible with Elasticsearch 2.0.
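That major-version pairing can be checked mechanically. A minimal sketch of the rule stated above (the version strings are examples, not read from your install):

```python
def versions_compatible(logstash_version, es_version):
    """Logstash 2.x pairs with Elasticsearch 2.x: the major versions
    (the part before the first dot) must match."""
    return logstash_version.split(".")[0] == es_version.split(".")[0]

# versions_compatible("2.0.0", "2.0.0") -> True
# versions_compatible("1.5.4", "2.0.0") -> False
```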

Default settings used: Filter workers: 4
Starting courier input listener {:address=>"0.0.0.0:9006", :level=>:info, :file=>"logstash/inputs/courier.rb", :line=>"102", :method=>"register"}
The error reported is:
input/courier: Failed to initialise: Address already in use - bind - Address already in use

I'm getting this error, but the port is bound by that same Logstash instance.
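One way to confirm whether the bind failure is self-inflicted is to test the port directly before starting Logstash. A sketch, assuming the courier listener's port 9006 from the log above:

```python
import socket

def port_in_use(port, host="0.0.0.0"):
    """Return True if something is already bound to host:port."""
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    try:
        s.bind((host, port))
    except OSError:
        return True
    finally:
        s.close()
    return False
```

If port_in_use(9006) is True before Logstash starts, another process (possibly a stale Logstash that never fully shut down) still holds the courier port; lsof -i :9006 or netstat -tlnp will show which one.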

Any help on this issue?