My Kafka cluster is running elsewhere, and this forwarding Logstash machine does not have any Kafka components installed.
When I try to run this file I get the following error:
log4j, [TS] WARN: org.apache.kafka.common.network.Selector: Error in I/O with kafka-1/xx.xx.xx.xx
java.net.ConnectException: Connection refused
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(Unknown Source)
at org.apache.kafka.common.network.Selector.poll(Selector.java:238)
at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:192)
at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:191)
at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:122)
at java.lang.Thread.run(Unknown Source)
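For context, a minimal sketch of the kind of kafka output block involved here, assuming the logstash-output-kafka plugin bundled with Logstash 2.2 (the new producer API shown in the trace); the broker list and topic are placeholders, not the actual file:
output {
  kafka {
    # placeholder broker list and topic, not the real config file
    bootstrap_servers => "kafka-1:9092,kafka-2:9092"
    topic_id => "logs"
  }
}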
Thanks @warkolm.
My current configuration is:
Logstash: 2.2.0
Kafka: 0.9.0.1
OS: CentOS/RHEL 7 on every machine
[root@zx logstash-2.2.0]# nmap -v kafka-2 -p 2181
Starting Nmap 6.40 ( http://nmap.org ) at 2016-03-02 00:57 EST
Initiating ARP Ping Scan at 00:57
Scanning kafka-2 [1 port]
Completed ARP Ping Scan at 00:57, 0.01s elapsed (1 total hosts)
Initiating SYN Stealth Scan at 00:57
Scanning kafka-2 [1 port]
Discovered open port 2181/tcp on xx.xx.xx.xx
Completed SYN Stealth Scan at 00:57, 0.01s elapsed (1 total ports)
Nmap scan report for kafka-2 ( xx.xx.xx.xx)
Host is up (0.00031s latency).
PORT STATE SERVICE
2181/tcp open unknown
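For completeness, 2181 is the ZooKeeper port; the connection refused in the trace above is on the broker port. A comparable check of the broker port (9092, per the server.properties excerpt below) would be, assuming the same nmap invocation:
nmap -v kafka-1 -p 9092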
Most likely the bootstrap servers are returning a different host name than
you expect.
Make sure that your broker's server.properties has host.name set to what
the producers should use, as that is the value returned via the bootstrap
metadata.
...
# The port the socket server listens on
port=9092
# Hostname the broker will bind to. If not set, the server will bind to all interfaces
host.name=kafka-1
# Hostname the broker will advertise to producers and consumers. If not set, it uses the
# value for "host.name" if configured. Otherwise, it will use the value returned from
# java.net.InetAddress.getCanonicalHostName().
advertised.host.name=kafka-1
...
All producers can ping the brokers by hostname, and the corresponding lines have been added to the /etc/hosts file.
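The actual entries are not reproduced here; for illustration only, they would take this form (placeholder addresses, matching the masking used above):
xx.xx.xx.xx   kafka-1
xx.xx.xx.xx   kafka-2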
Are you able to run the kafka console producer from the host giving you trouble?
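For reference, a minimal console producer test would look like this (run from a Kafka 0.9 distribution directory; "test" is a placeholder topic):
bin/kafka-console-producer.sh --broker-list kafka-1:9092 --topic test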
The error implies that it couldn't reach the kafka-1 host, so I'd at least check for errors when the message is being created on that host, and perhaps check ZK as well. Also, are you seeing errors for all hosts? I'd expect that if it fails on one, it would move on to the next one and keep working.
There is an issue with Kafka 0.8.2.x that blocks on sends when the broker is down. Try setting metadata_fetch_timeout_ms lower, e.g. to 500 for your test.
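A minimal sketch of where that setting goes in the kafka output (placeholder servers and topic; only metadata_fetch_timeout_ms is the point here):
output {
  kafka {
    bootstrap_servers => "kafka-1:9092"
    topic_id => "logs"
    # assumed: lower this from the default so a dead broker fails fast during the test
    metadata_fetch_timeout_ms => 500
  }
}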