Log message truncated at 32k

Hello Experts,
We have been running Filebeat on our K8s cluster as a DaemonSet. One of our containers emits a huge amount of log output, and it seems only a fraction of it reaches Logstash.
On further debugging we found that the complete log event is around 200+ KB, but only 32 KB of it makes it to Logstash.

Our ELK stack is at version 6.2.4.

Our Filebeat config is:

filebeat.registry_file: /etc/filebeat/filebeat_registr
filebeat.config.modules:
  path: /etc/filebeat/modules.d/*.yml
  reload.enabled: false
filebeat.config.prospectors:
  enabled: true
  path: /etc/filebeat/conf.d/*.yml

logging.to_files: true
logging.files:
  path: /etc/filebeat/log
  name: filebeat
logging.level: info

filebeat.autodiscover:
  providers:
    - type: kubernetes
      combine_partials: true
      templates:
        - condition:
            not:
              equals:
                kubernetes.container.name: filebeat-test
          config:
            - type: docker
              containers.ids:
                - "${data.kubernetes.container.id}"
              exclude_lines: ["^\\s+[\\-`('.|_]"]  # drop asciiart lines
              multiline.pattern: '^[[0-9]{8}'
              multiline.negate: true
              multiline.match: after
output.logstash:
  hosts: ['logstash:5044']
  bulk_max_size: 1024

Logstash config:

input {
  beats {
    port => 5044
  }
}
filter {
  if [kubernetes][container][name] == "test-log-container" {
    ruby {
      code => "
        File.open('/usr/share/logstash/debug.log','a') { |f| f.puts event.get('message') }
      "
    }
  }
}

output {
  elasticsearch {
    hosts => ["elasticsearch:9200"]
    user => "logstash"
    password => "logstash"
    index => "logstash-%{+YYYY.MM.dd}"
  }
}

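To narrow down where the truncation happens, we can also log each message's byte size in that same ruby filter; if the size already caps out at 32 KB here, the log is being cut before Logstash (i.e. by Docker or Filebeat). A sketch of that debug filter:

    ruby {
      code => "
        msg = event.get('message').to_s
        File.open('/usr/share/logstash/debug.log','a') { |f| f.puts msg.bytesize }
      "
    }
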
Are we missing some config needed to send across the complete log? We added 'combine_partials: true', but that doesn't seem to work either.
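We also wondered whether the joining option needs to sit inside the docker input config rather than at the provider level, and whether it should be combine_partial (singular), which is how the docs for newer 6.x releases list it; it may not be available in 6.2.4 at all. A sketch of what we mean:

          config:
            - type: docker
              containers.ids:
                - "${data.kubernetes.container.id}"
              # join the partial lines that Docker's json-file driver emits
              # when a single log line exceeds 16 KB
              combine_partial: true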
Any help would be appreciated, thanks.

Hello, I've noticed the same thing for a while, also with Kubernetes/OpenShift pod logs. Messages are getting truncated, resulting in failing JSON filters. It would be nice to know what exactly is truncating these logs and how we can work around it.
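In the meantime, a workaround sketch that at least makes the truncated events visible instead of failing silently (this uses the json filter's tag_on_failure option; the added field name is just an example):

filter {
  json {
    source => "message"
    tag_on_failure => ["_jsonparsefailure"]
  }
  if "_jsonparsefailure" in [tags] {
    # keep the raw (likely truncated) message and mark it for inspection
    mutate { add_field => { "parse_error" => "possibly truncated json" } }
  }
}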