Failed to flush outgoing items, exception WebHDFS::IOError

I've been trying to ship my logs from a server via Beats through Logstash to an HDFS server in compressed format. This produces some warnings, and the logs no longer seem to reach HDFS the way they did before I enabled the compression option.

There have been some questions with logs similar to mine, but they were either resolved by tuning the Hadoop backend or don't appear to have been resolved at all.

Can you help me out with this?

My log output is:

Failed to flush outgoing items {:outgoing_count=>5000, :exception=>"WebHDFS::IOError", :backtrace=>[
"/home1/irteamsu/logstash-7.2.0/vendor/bundle/jruby/2.5.0/gems/webhdfs-0.8.0/lib/webhdfs/client_v1.rb:401:in `request'",
"/home1/irteamsu/logstash-7.2.0/vendor/bundle/jruby/2.5.0/gems/webhdfs-0.8.0/lib/webhdfs/client_v1.rb:270:in `operate_requests'",
"/home1/irteamsu/logstash-7.2.0/vendor/bundle/jruby/2.5.0/gems/webhdfs-0.8.0/lib/webhdfs/client_v1.rb:73:in `create'",
"/home1/irteamsu/logstash-7.2.0/vendor/bundle/jruby/2.5.0/gems/logstash-output-webhdfs-3.0.6/lib/logstash/outputs/webhdfs.rb:228:in `write_data'",
"/home1/irteamsu/logstash-7.2.0/vendor/bundle/jruby/2.5.0/gems/logstash-output-webhdfs-3.0.6/lib/logstash/outputs/webhdfs.rb:211:in `block in flush'",
"org/jruby/RubyHash.java:1419:in `each'",
"/home1/irteamsu/logstash-7.2.0/vendor/bundle/jruby/2.5.0/gems/logstash-output-webhdfs-3.0.6/lib/logstash/outputs/webhdfs.rb:199:in `flush'",
"/home1/irteamsu/logstash-7.2.0/vendor/bundle/jruby/2.5.0/gems/stud-0.0.23/lib/stud/buffer.rb:219:in `block in buffer_flush'",
"org/jruby/RubyHash.java:1419:in `each'",
"/home1/irteamsu/logstash-7.2.0/vendor/bundle/jruby/2.5.0/gems/stud-0.0.23/lib/stud/buffer.rb:216:in `buffer_flush'",
"/home1/irteamsu/logstash-7.2.0/vendor/bundle/jruby/2.5.0/gems/stud-0.0.23/lib/stud/buffer.rb:159:in `buffer_receive'",
"/home1/irteamsu/logstash-7.2.0/vendor/bundle/jruby/2.5.0/gems/logstash-output-webhdfs-3.0.6/lib/logstash/outputs/webhdfs.rb:182:in `receive'",
"/home1/irteamsu/logstash-7.2.0/logstash-core/lib/logstash/outputs/base.rb:89:in `block in multi_receive'",
"org/jruby/RubyArray.java:1792:in `each'",
"/home1/irteamsu/logstash-7.2.0/logstash-core/lib/logstash/outputs/base.rb:89:in `multi_receive'",
"org/logstash/config/ir/compiler/OutputStrategyExt.java:118:in `multi_receive'",
"org/logstash/config/ir/compiler/AbstractOutputDelegatorExt.java:101:in `multi_receive'",
"/home1/irteamsu/logstash-7.2.0/logstash-core/lib/logstash/java_pipeline.rb:239:in `block in start_workers'"]}

and my running configuration is:

input {
  beats {
    port => "5045"
  }
}
filter {
  grok {
    match => { "message" => "%{COMBINEDAPACHELOG}" }
  }
}
output {
  webhdfs {
    host => "10.105.178.60"
    port => 50070
    path => "/nbp-dev/taehong.kim/snappy/%{clientip}.log"
    user => "taehong.kim"

    retry_interval => 90

    # add compress option
    compression => "snappy"
  }
  webhdfs {
    host => "10.105.178.60"
    port => 50070
    path => "/nbp-dev/taehong.kim/snappy/%{+MM-dd}_writing.log.snappy"
    user => "taehong.kim"
    codec => line { format => "%{+HH:mm:ss}" }

    retry_interval => 90
  }
}
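
As a sanity check, I'm also considering swapping snappy for gzip on the first output, since gzip is the other compression value the webhdfs output plugin documents. If a config like the sketch below flushes fine, the problem would be snappy-specific (for example, the snappy gem not being available in Logstash's JRuby environment, which I believe the plugin needs for snappy); if it fails the same way, the problem is with compressed writes in general:

```
output {
  webhdfs {
    host => "10.105.178.60"
    port => 50070
    # hypothetical test path, just to keep it separate from the snappy files
    path => "/nbp-dev/taehong.kim/gzip/%{clientip}.log.gz"
    user => "taehong.kim"
    retry_interval => 90
    # gzip instead of snappy, to isolate whether only snappy fails
    compression => "gzip"
  }
}
```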

I switched to snappy just now to check whether the snappy format works, but it doesn't look like it does.