Logstash output to Kafka server

There is no option in the kafka output to treat oversized events differently from other errors.

You could use an ugly hack that estimates the size by converting the event to a string and checking its length. This drops events whose string representation is longer than 230 characters, which is only a rough proxy for the serialized byte size...

ruby {
    code => '
        # Render the event hash as a Ruby inspect-style string; its
        # character count only approximates the size of the JSON the
        # output will actually produce.
        if event.to_hash.to_s.length > 230
            event.cancel
        end
    '
}

You could write additional code to better approximate the size of the serialized data, all the way up to calling org.apache.kafka.common.serialization.StringSerializer yourself, although this has the overhead of doing the serialization twice.
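For example, a closer approximation is to serialize the event to JSON (which is what the kafka output's default json codec does) and measure the byte length of the result. A minimal sketch, keeping the illustrative 230-byte threshold; event.to_json is part of the Logstash event API, but calling it in a filter still means the event gets serialized twice:

ruby {
    code => '
        # Serialize the event the way the json codec would and measure
        # the UTF-8 byte length, which is what StringSerializer sends.
        if event.to_json.bytesize > 230
            event.cancel
        end
    '
}

This still ignores things like the record key, but it tracks the actual payload size much more closely than counting characters on the inspect string.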
