Logstash kafka output plugin sent messages without error, but messages were not received

Just started testing the Logstash kafka output plugin; the configuration is below.

output {
  kafka {
    bootstrap_servers => "ServerA:6667"
    topic_id => "test3"
  }
}

I enabled debug logging and there are no errors, so I assume the messages were sent successfully, but they were not received on the Kafka side. I used the kafka console consumer on the broker server to read the topic from the beginning; messages sent by the console producer were read correctly, and test3 has only one partition. Here is the debug output:
{:timestamp=>"2016-10-06T17:33:06.295000+0800", :message=>"output received", :event=>{"@timestamp"=>"2016-10-06T09:32:33.105Z", "beat"=>{"hostname"=>"ServerB", "name"=>"ServerB"}, "count"=>1, "fields"=>nil, "input_type"=>"log", "message"=>"hello6", "offset"=>40, "source"=>"e:\temp\kafka\test.log", "type"=>"kafka", "@version"=>"1"}, :level=>:debug, :file=>"(eval)", :line=>"22", :method=>"output_func"}
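For reference, the consumer check was along these lines (Kafka 0.9 old consumer; the ZooKeeper host/port is a placeholder for my setup):

```sh
bin/kafka-console-consumer.sh --zookeeper ServerA:2181 \
  --topic test3 --from-beginning
```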

Not sure how to test or check further; any suggestions? Thanks in advance!

More information about the environment: there is only one Kafka broker in the cluster, running version 0.9. It uses SASL_PLAINTEXT as the security protocol, but I didn't find a parameter in the kafka output plugin to configure this. Kerberos is used in the Kafka cluster.

Logstash version is 2.2.2.

I had a look at the git code repository, and it seems that someone has added code to handle SASL and Kerberos for the kafka input plugin, but I couldn't find similar code in the output plugin. Am I correct to conclude that the Logstash kafka output plugin currently doesn't support Kerberos and SASL?

So they have added SASL_SSL support to the output plugin, but there are some gaps. The last release my teammate downloaded had a bug in the protocol if-ladder that blew up if you use SASL_SSL. It's fixed upstream, though.
A bigger problem is silent failure when your config isn't correct: not even log4j gives you errors, which is throwing me off. Not sure what's up with that.
So it works, but can silently fail.

So yeah, I'm totally hijacking this thread, but the original intent is still a valid complaint. It amounts to error handling/propagation and silent failures (a bad thing).

I hadn't read https://www.elastic.co/guide/en/logstash/current/logging.html , so I didn't know you could control the plugins' log4j loggers with the master file. Instead, I added a log4j config to LS_JAVA_OPTS to turn on console output in order to get output from the JVM Kafka client.
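The workaround looked roughly like this; the properties file path is a placeholder, and the syntax is log4j 1.x, which is what Logstash 2.x ships with:

```sh
# point the JVM at a standalone log4j config (placeholder path)
export LS_JAVA_OPTS="-Dlog4j.configuration=file:///etc/logstash/log4j-kafka.properties"
```

```properties
# /etc/logstash/log4j-kafka.properties (placeholder path)
log4j.rootLogger=WARN, console
log4j.appender.console=org.apache.log4j.ConsoleAppender
log4j.appender.console.layout=org.apache.log4j.PatternLayout
log4j.appender.console.layout.ConversionPattern=%d %p %c - %m%n
# turn up the JVM Kafka client so its errors reach the console
log4j.logger.org.apache.kafka=DEBUG
```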

I even set acks to 1, thinking that maybe it was just firing and forgetting.

But basically, if you configure your kafka output plugin incorrectly (there are so many ways...), it can silently fail if it's the right kind of incorrect. You have to blow up the producer constructor to get an error; if the producer gets built, there is no helpful output.
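I think the root cause is that the JVM producer's send() is asynchronous: it hands back a future, and unless something checks that future or registers a callback, the delivery error never surfaces. A toy sketch of the pattern (plain Python with invented names, not the actual Kafka client API):

```python
from concurrent.futures import ThreadPoolExecutor

class ToyProducer:
    """Toy stand-in for an async producer (NOT the Kafka client API):
    send() hands the record to a background thread and returns a future."""

    def __init__(self):
        self._pool = ThreadPoolExecutor(max_workers=1)

    def send(self, record, on_complete=None):
        def deliver():
            # Simulate a broker that rejects every record (e.g. wrong
            # security protocol): the error only exists inside the future.
            raise ConnectionError("broker rejected record")
        future = self._pool.submit(deliver)
        if on_complete is not None:
            # Surface the delivery error (or None on success) to the caller.
            future.add_done_callback(lambda f: on_complete(f.exception()))
        return future

producer = ToyProducer()

# Fire-and-forget: the exception stays inside the discarded future,
# so the caller sees nothing -- this is the silent failure.
producer.send("hello")

# With a callback, the failure finally becomes visible.
errors = []
producer.send("hello", on_complete=errors.append)
producer._pool.shutdown(wait=True)
print(len(errors), "delivery error(s) reported via callback")
```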

For example, if you get the broker port wrong, or specify PLAINTEXT (or don't specify a protocol) instead of SASL_SSL (in our case).
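For the record, the shape that eventually worked for us was roughly this (option names are from the current logstash-output-kafka docs; the host, paths, and password are placeholders):

```
output {
  kafka {
    bootstrap_servers => "broker:9093"              # placeholder host/port
    topic_id => "test3"
    security_protocol => "SASL_SSL"                 # not PLAINTEXT, and not omitted
    sasl_kerberos_service_name => "kafka"
    jaas_path => "/etc/logstash/kafka_jaas.conf"    # placeholder path
    kerberos_config => "/etc/krb5.conf"
    ssl_truststore_location => "/etc/logstash/truststore.jks"  # placeholder path
    ssl_truststore_password => "changeit"           # placeholder password
  }
}
```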

One of the things that kills me (aside from not getting log4j errors) is that https://www.elastic.co/guide/en/logstash/current/node-stats-api.html doesn't report errors; it says messages are going through.

Moral of the story: silent failure is a dealbreaker, even though I'm fairly in favor of using Logstash for shipping. (Filebeat doesn't support SASL_SSL.)

If this doesn't get fixed, we'll have to do something different, maybe regrettable, and maybe not elastic.co.