Error Outputting data to kafka

I want to publish my script's data to a Kafka cluster using Logstash. For that I configured Logstash on CentOS. Here is the config file I wrote:

input {
        exec {
                command => "python /home/elastic/"
                interval => 6
                codec => "json_lines"
        }
}

output {
        kafka {
                codec => plain { format => "%{message}" }
                topic_id => "logstash"
                bootstrap_servers => "kafka-1:2181,kafka-2:2181,kafka-3:2181"
        }
}

My Kafka cluster is running elsewhere, and this forwarding Logstash machine does not have any Kafka components installed.
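One thing worth double-checking in the config above: 2181 is conventionally ZooKeeper's port, while the Kafka brokers themselves listen on 9092 by default, and the logstash-output-kafka plugin's bootstrap_servers expects broker addresses, not ZooKeeper addresses. A sketch of the output section under that assumption (hostnames kept from the post; 9092 is only the default broker port, so adjust to whatever the brokers actually listen on):

```
output {
        kafka {
                # bootstrap_servers must point at the Kafka brokers themselves,
                # not at ZooKeeper; 9092 is the default broker port.
                bootstrap_servers => "kafka-1:9092,kafka-2:9092,kafka-3:9092"
                topic_id => "logstash"
                codec => plain { format => "%{message}" }
        }
}
```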

When I try to run this file I get the following error:

log4j, [TS] WARN: Error in I/O with kafka-1/xx.xx.xx.xx Connection refused
at Method)
at Source)
at org.apache.kafka.clients.NetworkClient.poll(
at Source)

Please suggest.

Can you telnet from the LS server to the kafka one?
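The telnet check suggested above can also be scripted so all brokers are probed in one go; a minimal sketch (the kafka-1/2/3 hostnames and the port are taken from this thread, not verified values):

```python
import socket

def can_connect(host, port, timeout=3.0):
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        # Covers refused connections, timeouts, and DNS resolution failures.
        return False

# Probe each broker the Logstash output is configured with.
for host in ("kafka-1", "kafka-2", "kafka-3"):
    print(host, can_connect(host, 2181))
```

A "Connection refused" in the producer log should show up here as False for the same host, which separates a network/firewall problem from a Kafka configuration problem.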

What version of things are you running?

Thanks @warkolm.
My current configuration is:
Logstash: 2.2.0
OS: CentOS/RHEL 7 on every machine

[root@zx logstash-2.2.0]# nmap -v kafka-2 -p 2181

Starting Nmap 6.40 ( ) at 2016-03-02 00:57 EST
Initiating ARP Ping Scan at 00:57
Scanning kafka-2 [1 port]
Completed ARP Ping Scan at 00:57, 0.01s elapsed (1 total hosts)
Initiating SYN Stealth Scan at 00:57
Scanning kafka-2 [1 port]
Discovered open port 2181/tcp on xx.xx.xx.xx
Completed SYN Stealth Scan at 00:57, 0.01s elapsed (1 total ports)
Nmap scan report for kafka-2 ( xx.xx.xx.xx)
Host is up (0.00031s latency).
2181/tcp open unknown

Most likely the bootstrap servers are returning a different hostname than
you expect.

Make sure that your broker has advertised.host.name set to what the
producers should use, as that is the value returned via the bootstrap request.


My server.properties snippet is as follows:

# The port the socket server listens on

# Hostname the broker will bind to. If not set, the server will bind to all interfaces

# Hostname the broker will advertise to producers and consumers. If not set, it uses the
# value for "host.name" if configured. Otherwise, it will use the value returned from
# java.net.InetAddress.getCanonicalHostName().
All producers can ping the brokers by hostname, and the following lines are added to the /etc/hosts file:

xx.xx.xx.xx  kafka-2
xx.xx.xx.xx  kafka-3
xx.xx.xx.xx  kafka-1

I still can't figure out where the error lies.

Are you able to run the kafka console producer from the host giving you trouble?
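For reference, the console producer ships with the Kafka distribution itself; a quick smoke test from the Logstash host might look like this (Kafka 0.8.x tarball layout assumed, with the broker hostnames from this thread and 9092 as the default broker port):

```shell
# Run from the Kafka installation directory on the Logstash host.
# --broker-list takes broker addresses (default port 9092), not ZooKeeper.
bin/kafka-console-producer.sh \
    --broker-list kafka-1:9092,kafka-2:9092,kafka-3:9092 \
    --topic logstash
# Then type a test message and press Enter; check that it arrives with the
# console consumer on the Kafka side.
```

If this works but Logstash still fails, the problem is in the Logstash output configuration rather than in network reachability.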

The error implies that it couldn't reach the kafka-1 host so I'd at least check for errors when the message is being created on the host and perhaps ZK as well. Also are you seeing errors for all hosts? I'd expect that if it fails on one, it would go to the next one and keep working.

There is an issue with Kafka 0.8.2.x that blocks on sends when the broker is down. Try setting metadata_fetch_timeout_ms lower, e.g. 500, for your test.
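In Logstash config terms, that setting goes on the kafka output block; a sketch with everything else left as in the original post (only the timeout value from the suggestion above is new):

```
output {
        kafka {
                codec => plain { format => "%{message}" }
                topic_id => "logstash"
                bootstrap_servers => "kafka-1:2181,kafka-2:2181,kafka-3:2181"
                # Fail fast instead of blocking when brokers are unreachable
                # (workaround for the Kafka 0.8.2.x blocking-send issue).
                metadata_fetch_timeout_ms => 500
        }
}
```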

PS: Here is the issue I was talking about: <>

Hi, did you figure out the issue? I ran into the same issue and have been breaking my head over it for the last four hours.