Strange behaviour when using both Kafka output and input plugins

Hi all,

Am I able to use both the Kafka output and input plugins on the same Logstash instance? It seems that the consumer configuration continuously loops through all the messages in Kafka forever. I'm following a structure similar to the one in this article, shown in the diagram below:

Logstash (shipper) | --> Kafka --> | Logstash (indexer)
01.conf            |               | 02.conf

My 01.conf file is my producer to Kafka, and for testing it looks like this:

input {
  stdin { }
}

output {
  kafka {
    topic_id => "topicname"
    codec => json
  }
}

This instance will be lightweight, only shipping logs to Kafka.

The configuration to consume Kafka messages (02.conf) looks like this:

input {
  kafka {
    topics => ["topicname"]
    codec => json
  }
}

output {
  file {
    path => "/var/www/test.log"
  }
  # elasticsearch { }
}

For testing, I run the following to load both confs:
bin/logstash -f /etc/logstash/conf.d/

And I get the following startup output:

Settings: Default pipeline workers: 1
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-kafka-5.0.4/vendor/jar-dependencies/runtime-jars/slf4j-log4j12-1.7.13.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-input-kafka-5.0.4/vendor/jar-dependencies/runtime-jars/slf4j-log4j12-1.7.21.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
Pipeline main started

I'm running Ubuntu 14.04, Logstash 2.4, Kafka 0.10.0.1, and both Kafka plugins (input and output) at version 5.0.4.

If I test each config separately, I don't get this problem; it only happens when I run both at the same time.

Any help would be greatly appreciated!

Thanks,
Ryan

When you have multiple files, Logstash merges them into one big config and then runs it, so every input feeds every output.
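
In this case, pointing -f at the conf.d directory means 01.conf and 02.conf are concatenated, so the pipeline behaves roughly like the single config sketched below (the settings are just the ones from your two files):

input {
  stdin { }
  kafka {
    topics => ["topicname"]
    codec => json
  }
}

output {
  # every event, including the ones just consumed from Kafka,
  # goes to both outputs, so consumed messages are written back
  # to the same topic and consumed again
  kafka {
    topic_id => "topicname"
    codec => json
  }
  file {
    path => "/var/www/test.log"
  }
}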

I see... so that's what causes the feedback loop.

What is the best way to solve this issue?

Ryan, you should be able to solve your particular problem by running two completely separate instances of Logstash (two startup scripts, two JVMs, etc.).
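
For example, something like the sketch below, where each instance is started as its own process and points at only one config file (the separate shipper/indexer directories are just an assumption on my part, not something from your setup):

# shipper instance: stdin -> Kafka
bin/logstash -f /etc/logstash/shipper/01.conf

# indexer instance: Kafka -> file, started as a separate process
bin/logstash -f /etc/logstash/indexer/02.conf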

I have the opposite problem: I want to use Logstash to read from Kafka, doctor some messages, and write them into another topic, and unfortunately I can't use that trick to avoid the multiple SLF4J bindings issue (unless I use a socket or something equally unreliable to pass events between two Logstash instances).
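
For what it's worth, that Kafka-to-Kafka flow can be expressed in one config; a minimal sketch, assuming made-up topic names and a placeholder mutate filter standing in for the actual message doctoring:

input {
  kafka {
    topics => ["source-topic"]   # hypothetical source topic
    codec => json
  }
}

filter {
  # placeholder for whatever transformation is actually needed
  mutate {
    add_field => { "processed" => "true" }
  }
}

output {
  kafka {
    topic_id => "target-topic"   # hypothetical target topic, different from the source
    codec => json
  }
}

Because the source and target topics differ, this doesn't create the feedback loop described above; the SLF4J message in Ryan's output appears to come from the input and output plugins each bundling their own slf4j-log4j12 jar, which is a separate issue.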

Hi Brian, thanks for the clarification. That's what I thought.