1.5.0 works from CLI, but not from Puppet elasticsearch/logstash module

I've been encountering the same problem with logstash 1.5rc4 and kafka 0.8.2. I installed logstash from the chef cookbook.

For 2 days, logstash would not read any data from kafka when started as a service, but when I started it from the command line, the data would stream in correctly.
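
In case it helps anyone comparing the two, this is roughly what running it by hand looks like. The service name and paths are assumptions based on a typical package/cookbook layout, so adjust them for your install.

# stop the service copy first so the two runs don't fight over the same consumer group
sudo service logstash stop

# run logstash in the foreground against the same config the service uses, as the same
# user the service runs as (often "logstash"); --verbose shows the kafka input
# connecting to zookeeper and rebalancing
sudo -u logstash /opt/logstash/bin/logstash agent -f /etc/logstash/conf.d/ --verbose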

I rebooted my kafka servers, and the problem appears to have mostly gone away. I suspect it was a zookeeper-related problem.
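
The old high-level consumer keeps its group state in zookeeper, so one way to sanity-check that side is to poke at it directly. This is only a sketch: the zookeeper host, install path, group id, and topic below are placeholders from my setup.

# is zookeeper answering at all?
echo ruok | nc zk1 2181

# the 0.8 high-level consumer stores partition ownership and committed offsets under
# /consumers/<group>/..., so you can see who owns what and where the offsets sit
/opt/zookeeper/bin/zkCli.sh -server zk1:2181 ls /consumers/logstash/owners/ndstreamer
/opt/zookeeper/bin/zkCli.sh -server zk1:2181 get /consumers/logstash/offsets/ndstreamer/0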

I can't reproduce it right now, but it smells like it is related to this:

http://stackoverflow.com/questions/29276912/kafka-suddenly-reset-the-consumer-offset#comment48755603_29276912

Check your kafka logs. I had the following errors in kafka about the time that logstash was acting funny:

log4j, [2015-05-18T11:51:10.419] ERROR: kafka.consumer.ZookeeperConsumerConnector: [logstash_swat-logstash01-1431970927138-f1e70431], error during syncedRebalance
kafka.common.ConsumerRebalanceFailedException: logstash_swat-logstash01-1431970927138-f1e70431 can't rebalance after 4 retries
	at kafka.consumer.ZookeeperConsumerConnector$ZKRebalancerListener.syncedRebalance(ZookeeperConsumerConnector.scala:633)
	at kafka.consumer.ZookeeperConsumerConnector$ZKRebalancerListener$$anon$1.run(ZookeeperConsumerConnector.scala:551)
log4j, [2015-05-18T11:51:10.816] ERROR: kafka.consumer.ConsumerFetcherThread: [ConsumerFetcherThread-logstash_swat-logstash01-1431971470431-2d3a852c-0-1664195530], Current offset 135524246 for partition [ndstreamer,2] out of range; reset offset to 135944171
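
If you want to see where the group was sitting when that happened, grep for those errors and check the offsets with the stock kafka tooling. The log path, kafka install dir, zookeeper host, and group id below are placeholders, and the offset-checker flags may differ slightly between kafka versions, so treat this as a sketch.

# look for rebalance failures and offset resets around the time logstash went quiet
grep -E "ConsumerRebalanceFailedException|out of range" /var/log/kafka/*.log

# per-partition committed offset, log-end offset, lag, and owner for the consumer group
# (the group id is whatever group_id your kafka input is configured with)
/opt/kafka/bin/kafka-run-class.sh kafka.tools.ConsumerOffsetChecker \
  --zookeeper zk1:2181 --group logstash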

Also, check how many kafka consumers you have. Kafka will only hand partitions to as many consumer threads as the topic has partitions; any extra threads sit idle. I set my topic to 8 partitions and give logstash 3 consumer threads, which leaves room for 5 more consumers for debugging. A quick way to check both sides is sketched below.
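
For anyone double-checking the math, here's a rough sketch. Hosts, paths, and the group id are placeholders (the topic is the one from my error log above), and the option names are the ones the kafka input plugin documents in the versions I've used, so verify them against your plugin version.

# how many partitions does the topic actually have? (kafka 0.8.x tooling)
/opt/kafka/bin/kafka-topics.sh --describe --zookeeper zk1:2181 --topic ndstreamer

# a matching kafka input: keep consumer_threads at or below the partition count
cat > kafka-input.conf <<'EOF'
input {
  kafka {
    zk_connect       => "zk1:2181"
    group_id         => "logstash"
    topic_id         => "ndstreamer"
    consumer_threads => 3   # topic has 8 partitions, so 5 slots stay free for debug consumers
  }
}
EOF

Drop something like that into /etc/logstash/conf.d/ (or wherever your puppet/chef module puts configs), and the unused partitions stay available for throwaway debug consumers in the same group.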