Logstash error messages: "Got error to send bulk of actions" and "Failed to flush outgoing items"

I see what looks like most, if not all, of my logs displayed in Kibana, but I keep getting these messages logged in /var/log/logstash/logstash.log:

{:timestamp=>"2015-06-23T13:07:02.632000-0700", :message=>"Got error to send bulk of actions: blocked by: [SERVICE_UNAVAILABLE/1/state not recovered / initialized];[SERVICE_UNAVAILABLE/2/no master];", :level=>:error}
{:timestamp=>"2015-06-23T13:07:02.632000-0700", :message=>"Failed to flush outgoing items", :outgoing_count=>1, :exception=>org.elasticsearch.cluster.block.ClusterBlockException: blocked by: [SERVICE_UNAVAILABLE/1/state not recovered / initialized];[SERVICE_UNAVAILABLE/2/no master];, :backtrace=>["org.elasticsearch.cluster.block.ClusterBlocks.globalBlockedException(org/elasticsearch/cluster/block/ClusterBlocks.java:151)", "org.elasticsearch.cluster.block.ClusterBlocks.globalBlockedRaiseException(org/elasticsearch/cluster/block/ClusterBlocks.java:141)", "org.elasticsearch.action.bulk.TransportBulkAction.executeBulk(org/elasticsearch/action/bulk/TransportBulkAction.java:210)", "org.elasticsearch.action.bulk.TransportBulkAction.access$000(org/elasticsearch/action/bulk/TransportBulkAction.java:73)", "org.elasticsearch.action.bulk.TransportBulkAction$1.onFailure(org/elasticsearch/action/bulk/TransportBulkAction.java:148)", "org.elasticsearch.action.support.TransportAction$ThreadedActionListener$2.run(org/elasticsearch/action/support/TransportAction.java:137)", "java.util.concurrent.ThreadPoolExecutor.runWorker(java/util/concurrent/ThreadPoolExecutor.java:1142)", "java.util.concurrent.ThreadPoolExecutor$Worker.run(java/util/concurrent/ThreadPoolExecutor.java:617)", "java.lang.Thread.run(java/lang/Thread.java:745)"], :level=>:warn}

I have Logstash 1.5.1 and Elasticsearch 1.6.0 installed on Ubuntu 14.04 (the same machine). Logstash is reading from multiple files and sending the logs to Elasticsearch with this output config:

output {
  elasticsearch {
    host => "localhost"
    cluster => "kibana"
    flush_size => 2000
  }
}

I have also tried setting the protocol to transport (see the snippet below), but I still receive these error messages. I don't understand why I am getting them when what looks like all of my logs are indexed in Elasticsearch. All of these error messages say ":outgoing_count=>1"; does that mean a single log is not getting passed to Elasticsearch each time I see this message? Any assistance would be greatly appreciated.
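For reference, this is roughly the transport variant I tried; the only change from the config above is the protocol option:

output {
  elasticsearch {
    host => "localhost"
    cluster => "kibana"
    flush_size => 2000
    protocol => "transport"    # was the default (node) before
  }
}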

Levi

Have you renamed your cluster in elasticsearch.yml to match the cluster name in your Logstash configuration file?
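For example, something like this in elasticsearch.yml (on a package install the file usually lives at /etc/elasticsearch/elasticsearch.yml; adjust the path if yours differs):

# must match cluster => "kibana" in the Logstash output
cluster.name: kibana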

Yes, that has been changed, and I can see a lot of my logs in Kibana. I get those error messages once every 1-2 minutes, and they always say ":outgoing_count=>1". Could there be a single message stuck in the queue?
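In case it is useful, I can also query the cluster state directly with something like this (assuming Elasticsearch is listening on its default HTTP port, 9200), which should show whether a master is elected and whether the cluster has finished recovering:

curl 'http://localhost:9200/_cluster/health?pretty'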