Logstash Kafka output storing topic offset option

Hello,

Due to some shared architecture that I have to deal with, I have been asked to find out whether the Logstash Kafka output, when writing to a Kafka topic, can save the offset in Kafka as opposed to ZooKeeper. Is there a way to do this?

Thanks in advance

Outputs don't use ZooKeeper; they just write data. Inputs on Kafka 0.9 commit
offsets to Kafka, but the brokers require ZooKeeper regardless of version.
Logstash 5 beta supports Kafka 0.9.
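
To illustrate the point above, here is a minimal 0.9-style input sketch (the broker addresses, topic, and group name are placeholders, not from this thread). There is no zk_connect at all; offsets are committed back to Kafka per consumer group:

input 0.9 (sketch)
kafka {
  bootstrap_servers => "broker1:9092,broker2:9092"   # Kafka brokers, not ZooKeeper
  topics => ["some-topic"]                           # placeholder topic name
  group_id => "some-consumer-group"                  # offsets are tracked in Kafka per group
}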

Awesome!! How stable is https://www.elastic.co/blog/logstash-5-0-0-alpha1-released compared to the beta? By the way, when do you anticipate it will be released?

Hello,

Has anyone worked with the kafka input in logstash-5.0.0?

This is what I have in my logstash-2.2.x config

kafka {
  zk_connect => "zk1.test.com:2181, zk2.test.com:2181"
  white_list => "test-logs-dc1"
  decorate_events => true
  codec => json
}

Is there a way to tell the plugin to store the offset in Kafka as opposed to in ZooKeeper? And do I need to connect to ZooKeeper at all, or can I just connect to the brokers?

  1. The Kafka 0.9 consumer/producer plugins are available in the Logstash 2.x series.

  2. I am running a Kafka 0.10 broker in production with the Logstash 0.9 consumer/producer:
    2a. logstash-input-kafka (3.0.2)
    2b. logstash-output-kafka (3.0.0)

  3. In 0.9, Kafka by default stores the consumer offsets (I assume you were referring to consumer offsets) in a Kafka topic.

  4. Like Joe said, you still need ZooKeeper for the brokers.

  5. Pay close attention to the Logstash setting names. They have changed, and people struggle with updating the setting names in their Logstash configs.
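
On point 5, the old ZooKeeper-based settings roughly map to the 0.9-style ones as sketched below (a rough mapping, with placeholder values; note white_list was a regex filter, while topics takes an explicit list):

old 0.8-era input
kafka {
  zk_connect => "zk1:2181"            # connected to ZooKeeper
  white_list => "my-topic"            # regex topic filter
  decorate_events => true
}

0.9-era input (logstash-input-kafka 3.x)
kafka {
  bootstrap_servers => "broker1:9092"   # connects to the brokers directly
  topics => ["my-topic"]                # explicit topic list, replaces white_list
  decorate_events => true
}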

Hi Allen,

Thanks for the reply. So would you suggest just updating the logstash-input-kafka and logstash-output-kafka plugins in Logstash 2.x? At this point I would rather not have to install logstash-5.0.0, as I would need to rework a number of configs.

Yes I was referring to consumer offsets.

Also, would you happen to have an example snippet where I could see how the kafka input and kafka output are set up, please?

I am getting these errors when trying to connect with logstash 5.0.0 using my config:

Unknown setting 'zk_connect' for kafka {:level=>:error}
Unknown setting 'white_list' for kafka {:level=>:error}
Unknown setting 'decorate_events' for kafka {:level=>:error}

What are the equivalent tags?

Thanks

You didn't mention which version of Kafka your broker is running.
I believe Logstash 5 ships with pre-release versions of the 0.10 Kafka input and output plugins.
I am waiting for the final release of the 0.10 plugins before I play with them, so I can't tell you that configuration.

Config pages for 0.9:
https://www.elastic.co/guide/en/logstash/master/plugins-inputs-kafka.html
https://www.elastic.co/guide/en/logstash/master/plugins-outputs-kafka.html

These are snippets of my 0.9 configs

output 0.9
kafka {
  bootstrap_servers => "ip1:9092,ip2:9092,ip3:9092"
  client_id => "serverName"
  topic_id => "topicID"
  compression_type => "snappy"
}

input 0.9
kafka {
  codec => "plain"
  topics => "topic"
  bootstrap_servers => "ip1:9092,ip2:9092,ip3:9092"
  client_id => "serverID"
  group_id => "consumerGroupID"
}

My broker is running 0.9

Is 9092 the port that ZooKeeper is listening on, or is this for the broker?

If your broker is 0.9, then the 0.10 plugins will not work. Backward compatibility (i.e., a 0.8 consumer against a 0.9 broker) works, but forward compatibility (i.e., a 0.10 consumer against a 0.9 broker) does not.

9092 is the Kafka broker port. The 0.9 consumers/producers don't need to connect to ZooKeeper.

My broker is 0.9, so in that case the logstash-2.2.2 kafka input and output plugins should be compatible with the broker, correct?

I will try to update the logstash-2.2.2 kafka input/output plugins to 0.9.

Otherwise, I was trying to use Logstash 5 alpha4.

I am running logstash 2.2.4 with logstash-input-kafka (3.0.2) and logstash-output-kafka (3.0.0), and it worked with a Kafka 0.9 broker.

So I went ahead and installed the gems that you mentioned on my logstash-2.2.2 instance. I manually wrote some test messages with kafkacat to the test topic, and read from the test topic connecting to the brokers. It worked! Thanks a lot for the help.

Logstash startup completed
{
  "message" => "test1",
  "@version" => "1",
  "@timestamp" => "2016-07-08T00:30:08.988Z"
}
{
  "message" => "test2",
  "@version" => "1",
  "@timestamp" => "2016-07-08T00:30:08.991Z"
}
{
  "message" => "test3",
  "@version" => "1",
  "@timestamp" => "2016-07-08T00:30:08.991Z"
}

No problem. Glad you are all set up.

I am running into an issue: when connecting to ZK, I was able to get all the metadata for the topic. Now, connecting to Kafka instead, I can only get metadata for the message in the queue. Is there a way to connect to ZooKeeper but tell it to store the offset in Kafka?

I guess what I'm asking is: is there a way to set decorate_events?
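
For what it's worth, decorate_events is still available in the 0.9-style input; a hedged sketch (broker and topic names are placeholders):

kafka {
  bootstrap_servers => "broker1:9092"
  topics => ["my-topic"]
  decorate_events => true   # attaches Kafka metadata (topic, partition, offset) to each event
}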