Error outputting data to Kafka

I want to publish my script data to a Kafka cluster using Logstash. For that I configured Logstash on CentOS.
Here is the config file I wrote:

input {
        exec {
                command => "python /home/elastic/test.py"
                interval => 6
                codec => "json_lines"
        }
}
output {
        kafka {
                codec => plain { format => "%{message}"}
                topic_id => "logstash"
                bootstrap_servers => "kafka-1:2181,kafka-2:2181,kafka-3:2181"
        }
        stdout{
                codec => "rubydebug"
        }
}

My Kafka cluster is running elsewhere, and this forwarding Logstash machine does not have any Kafka components.

When I try to run this config I get the following error:

log4j, [TS] WARN: org.apache.kafka.common.network.Selector: Error in I/O with kafka-1/xx.xx.xx.xx
java.net.ConnectException: Connection refused
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(Unknown Source)
at org.apache.kafka.common.network.Selector.poll(Selector.java:238)
at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:192)
at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:191)
at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:122)
at java.lang.Thread.run(Unknown Source)

Please suggest.

Can you telnet from the LS server to the Kafka one?
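
For reference, a quick connectivity check from the Logstash box could look like this (using the host and port from the bootstrap_servers above; adjust if the brokers actually listen on a different port):

telnet kafka-1 2181
# or, if telnet is not installed:
nc -vz kafka-1 2181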

What version of things are you running?

Thanks @warkolm.
My current configuration is:
Logstash: 2.2.0
Kafka: 0.9.0.1
OS: CentOS/RHEL 7 on every machine

[root@zx logstash-2.2.0]# nmap -v kafka-2 -p 2181

Starting Nmap 6.40 ( http://nmap.org ) at 2016-03-02 00:57 EST
Initiating ARP Ping Scan at 00:57
Scanning kafka-2 [1 port]
Completed ARP Ping Scan at 00:57, 0.01s elapsed (1 total hosts)
Initiating SYN Stealth Scan at 00:57
Scanning kafka-2 [1 port]
Discovered open port 2181/tcp on xx.xx.xx.xx
Completed SYN Stealth Scan at 00:57, 0.01s elapsed (1 total ports)
Nmap scan report for kafka-2 (xx.xx.xx.xx)
Host is up (0.00031s latency).
PORT     STATE SERVICE
2181/tcp open  unknown

Most likely the bootstrap servers are returning a different host name than
you expect.

Make sure that your broker server.properties has host.name set to what the producers should use, as that is the value returned via the bootstrap metadata.

See http://stackoverflow.com/a/20514389

My server.properties snippet is as follows:

...
# The port the socket server listens on
port=9092

# Hostname the broker will bind to. If not set, the server will bind to all interfaces
host.name=kafka-1

# Hostname the broker will advertise to producers and consumers. If not set, it uses the
# value for "host.name" if configured.  Otherwise, it will use the value returned from
# java.net.InetAddress.getCanonicalHostName().
advertised.host.name=kafka-1
...

All producers can ping the brokers by hostname, and the following lines are added to the /etc/hosts file:

...
xx.xx.xx.xx  kafka-2
xx.xx.xx.xx  kafka-3
xx.xx.xx.xx  kafka-1
...

I still can't figure out where the error lies.

Are you able to run the Kafka console producer from the host giving you trouble?
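
For reference, that could be something along these lines, run from the Logstash box (a sketch; it assumes a plain Kafka 0.9 distribution is unpacked there, and it uses the broker port from your server.properties rather than 2181):

bin/kafka-console-producer.sh --broker-list kafka-1:9092,kafka-2:9092,kafka-3:9092 --topic logstash

If messages typed there show up on a consumer, basic connectivity to the brokers is fine and the problem is on the Logstash side.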

The error implies that it couldn't reach the kafka-1 host, so I'd at least check for errors when the message is being created on that host, and perhaps check ZK as well. Also, are you seeing errors for all hosts? I'd expect that if it fails on one, it would move on to the next one and keep working.

There is an issue with Kafka 0.8.2.x that blocks on sends when the broker is down. Try setting metadata_fetch_timeout_ms lower, to something like 500, for your test.
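
For example (a sketch; keep the rest of your kafka output block as it is and just add the timeout):

output {
        kafka {
                # ... existing settings ...
                metadata_fetch_timeout_ms => 500   # fail fast while testing
        }
}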

PS Here is the issue I was talking about: http://mail-archives.apache.org/mod_mbox/kafka-users/201507.mbox/<CAAUywg-je-pzDqVvOWndrRnpDkf42nTN5BB092ZpRiUSiY0vOw@mail.gmail.com>

Hi, did you figure out the issue? I ran into the same issue and have been breaking my head over it for the last four hours.