Logstash output to Kafka and ES too much to handle?

Hi, I'm trying to get IIS logs to flow through Logstash and output to both Kafka and Elasticsearch. The IIS logs come in through a load balancer at a total of only about 2-4k logs per minute. Logstash runs smoothly for a while after it starts up: the CPU jumps while it catches up on the initial rush of logs, then steadies and stays fairly low. But after several minutes it simply stops outputting to either Kafka or ES. No errors appear in the logs or on stdout, and the Logstash process keeps running. It's as if Logstash just gives up listening to its inputs.

Note that when I send only to ES or only to Kafka it seems to run smoothly with no issues. The issue is only when sending to both.

Current setup is just:

2 dedicated LS nodes (4 GB RAM) behind a load balancer
3 ES nodes (8 GB RAM, 4 GB heap)
3 Kafka nodes (8 GB RAM, 4 GB heap)
ES & LS v2.2
Kafka v0.9.0.0

Config setup:
Input is simply TCP over one port
Filter is doing a simple mutation (rename fields) and date
Output (I tried the kafka output both with and without the batch_size and linger_ms settings; same outcome):

output {
  if [type] == "iis_log" {
    kafka {
      topic_id => "logstash.iis"
      bootstrap_servers => "xxxx:9092"
      retries => 3
      compression_type => "snappy"
      acks => "all"
      batch_size => 65536
      linger_ms => 5
      client_id => "Logstash"
    }
    elasticsearch {
      hosts => ["xxxx:9200"]
      index => "logstash-iis-%{+YYYY.MM.dd}"
    }
  }
}
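
For context, the filter stage mentioned above looks roughly like this (the field names and date pattern are placeholders, since the actual values aren't in the post; the real config differs):

```
filter {
  if [type] == "iis_log" {
    mutate {
      # hypothetical renames; actual field names differ
      rename => { "cs-uri-stem" => "uri_stem" }
    }
    date {
      # hypothetical timestamp field/format for illustration
      match => [ "log_timestamp", "yyyy-MM-dd HH:mm:ss" ]
    }
  }
}
```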

Is anyone sending output to both ES and Kafka from the same Logstash instance like this? Are there any settings I should look at tweaking? Anywhere else I can look to see where the hiccup is occurring? It's frustrating that there's nothing in the logs. Any help would be appreciated.

Edit: I forgot to mention that we're considering changing the architecture to Logstash (no filtering) -> Kafka -> Logstash (w/ filtering) -> Elasticsearch/Kafka. Would something like that solve my issue, thanks to Kafka's buffering?
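
To illustrate, the buffered architecture I'm describing would look roughly like the sketch below: a "shipper" instance that writes raw events straight to Kafka, and an "indexer" instance that consumes from Kafka, does the filtering, and writes to Elasticsearch. The port, topic name, and hosts are placeholders, and on LS 2.2 the kafka input plugin still uses the old ZooKeeper-based consumer (hence zk_connect), so treat this as a sketch rather than a tested config:

```
# --- Shipper pipeline (no filtering): TCP in -> Kafka ---
input {
  tcp {
    port => 5044          # placeholder port
    type => "iis_log"
  }
}
output {
  kafka {
    topic_id => "logstash.iis.raw"   # placeholder topic
    bootstrap_servers => "xxxx:9092"
  }
}

# --- Indexer pipeline: Kafka in -> filter -> ES ---
input {
  kafka {
    zk_connect => "xxxx:2181"        # LS 2.2 kafka input consumes via ZooKeeper
    topic_id => "logstash.iis.raw"
    type => "iis_log"
  }
}
# (mutate/date filter goes here)
output {
  elasticsearch {
    hosts => ["xxxx:9200"]
    index => "logstash-iis-%{+YYYY.MM.dd}"
  }
}
```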