Logstash to elasticsearch output error

Got error to send bulk of actions: blocked by: [SERVICE_UNAVAILABLE/1/state not recovered / initialized];[SERVICE_UNAVAILABLE/2/no master]; {:level=>:error}
Failed to flush outgoing items {:outgoing_count=>2146, :exception=>"Java::OrgElasticsearchClusterBlock::ClusterBlockException", :backtrace=>["org.elasticsearch.cluster.block.ClusterBlocks.globalBlockedException(org/elasticsearch/cluster/block/ClusterBlocks.java:151)", "org.elasticsearch.cluster.block.ClusterBlocks.globalBlockedRaiseException(org/elasticsearch/cluster/block/ClusterBlocks.java:141)", "org.elasticsearch.action.bulk.TransportBulkAction.executeBulk(org/elasticsearch/action/bulk/TransportBulkAction.java:215)", "org.elasticsearch.action.bulk.TransportBulkAction.access$000(org/elasticsearch/action/bulk/TransportBulkAction.java:67)", "org.elasticsearch.action.bulk.TransportBulkAction$1.onFailure(org/elasticsearch/action/bulk/TransportBulkAction.java:153)", "org.elasticsearch.action.support.TransportAction$ThreadedActionListener$2.run(org/elasticsearch/action/support/TransportAction.java:137)", "java.util.concurrent.ThreadPoolExecutor.runWorker(java/util/concurrent/ThreadPoolExecutor.java:1142)", "java.util.concurrent.ThreadPoolExecutor$Worker.run(java/util/concurrent/ThreadPoolExecutor.java:617)", "java.lang.Thread.run(java/lang/Thread.java:745)"], :level=>:warn}
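The `SERVICE_UNAVAILABLE/2/no master` and `state not recovered / initialized` blocks mean the Elasticsearch cluster has not elected a master (or has not finished recovering its cluster state), so it rejects every write, including Logstash's bulk requests. A quick way to confirm, assuming the default HTTP port 9200 on the host from the output config below:

```shell
# Cluster-level health: status, number of nodes, unassigned shards.
curl -s 'http://10.0.4.11:9200/_cluster/health?pretty'

# Which node (if any) is currently the elected master.
curl -s 'http://10.0.4.11:9200/_cat/master?v'
```

If the health call itself returns a `no master` error, the problem is on the Elasticsearch side (master eligibility, `discovery` settings, or a network partition), not in Logstash.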

This is my Logstash output config:

output {
  stdout {}
  elasticsearch {
    host => "10.0.4.11"
    index => "testunix_%{+YYYY.MM}"
  }
}

The input is:
input {
  kafka {
    zk_connect => "10.0.2.3:2181,10.0.2.4:2181,10.0.2.5:2181"
    topic_id => "UNIX"
    group_id => "consumer-unix"
    consumer_threads => 8
  }
}

I can see lag in Kafka:
root@kc-1:/opt/Kafka/kafka_2.10-0.10.0.1/bin# ./kafka-consumer-groups.sh --zookeeper 10.0.2.3:2181 --describe --group consumer-unix
GROUP TOPIC PARTITION CURRENT-OFFSET LOG-END-OFFSET LAG OWNER
consumer-unix UNIX 0 12890893 13015325 124432 consumer-unix_li-unix-1479201508039-bb51a81c-0
consumer-unix UNIX 1 12904844 13015349 110505 consumer-unix_li-unix-1479201508039-bb51a81c-1
consumer-unix UNIX 2 11102383 11226473 124090 consumer-unix_li-unix-1479201508039-bb51a81c-2
consumer-unix UNIX 3 12002552 12115285 112733 consumer-unix_li-unix-1479201508039-bb51a81c-3
consumer-unix UNIX 4 11993651 12115312 121661 consumer-unix_li-unix-1479201508039-bb51a81c-4
consumer-unix UNIX 5 11106496 11226476 119980 consumer-unix_li-unix-1479201508039-bb51a81c-5
consumer-unix UNIX 6 11985305 12115286 129981 consumer-unix_li-unix-1479201508039-bb51a81c-6
consumer-unix UNIX 7 11985591 12115309 129718 consumer-unix_li-unix-1479201508039-bb51a81c-7
root@kc-1:/opt/Kafka/kafka_2.10-0.10.0.1/bin#
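For a sense of scale, the LAG column above can be summed across the eight partitions; a quick sketch:

```python
# LAG values copied from the kafka-consumer-groups.sh output above
# (partitions 0 through 7 of topic UNIX, group consumer-unix).
lags = [124432, 110505, 124090, 112733, 121661, 119980, 129981, 129718]

total_lag = sum(lags)
print(total_lag)  # 973100
```

Roughly 973k events are queued in Kafka waiting for the consumer, which is consistent with the output side being blocked: Logstash cannot flush to Elasticsearch, so it stops pulling from Kafka and lag accumulates.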

Please help me understand the problem and suggest a solution.

What's the health of your ES cluster? I strongly suggest you upgrade your Logstash to at least 2.0. If that's not doable, switch the elasticsearch output to HTTP with protocol => "http".
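On Logstash 1.x the elasticsearch output can use the node/transport protocol, which joins the cluster directly and is therefore exposed to cluster-state problems like the `no master` block in the error above; the HTTP protocol only needs a reachable HTTP endpoint. A sketch of the suggested change, reusing the host and index from the question (everything else assumed unchanged):

```
output {
  stdout {}
  elasticsearch {
    host => "10.0.4.11"
    protocol => "http"    # talk to ES over its HTTP API instead of joining the cluster
    index => "testunix_%{+YYYY.MM}"
  }
}
```

Note this only changes how Logstash connects; the cluster itself still needs an elected master before any bulk writes will succeed.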

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.