Error received using WebHDFS Logstash Output

I constantly receive the following error, with no indication of how to resolve the issue, and as a result no events are actually pushed to my HDFS. Can someone please provide some guidance?

Failed to flush outgoing items {:outgoing_count=>1, :exception=>"WebHDFS::ServerError", :backtrace=>[
  "E:/elk2/logstash-2.1.0/vendor/bundle/jruby/1.9/gems/webhdfs-0.7.4/lib/webhdfs/client_v1.rb:351:in `request'",
  "E:/elk2/logstash-2.1.0/vendor/bundle/jruby/1.9/gems/webhdfs-0.7.4/lib/webhdfs/client_v1.rb:349:in `request'",
  "E:/elk2/logstash-2.1.0/vendor/bundle/jruby/1.9/gems/webhdfs-0.7.4/lib/webhdfs/client_v1.rb:270:in `operate_requests'",
  "E:/elk2/logstash-2.1.0/vendor/bundle/jruby/1.9/gems/webhdfs-0.7.4/lib/webhdfs/client_v1.rb:73:in `create'",
  "E:/elk2/logstash-2.1.0/vendor/bundle/jruby/1.9/gems/logstash-output-webhdfs-2.0.2/lib/logstash/outputs/webhdfs.rb:184:in `write_data'",
  "E:/elk2/logstash-2.1.0/vendor/bundle/jruby/1.9/gems/logstash-output-webhdfs-2.0.2/lib/logstash/outputs/webhdfs.rb:179:in `write_data'",
  "E:/elk2/logstash-2.1.0/vendor/bundle/jruby/1.9/gems/logstash-output-webhdfs-2.0.2/lib/logstash/outputs/webhdfs.rb:169:in `flush'",
  "org/jruby/RubyHash.java:1342:in `each'",
  "E:/elk2/logstash-2.1.0/vendor/bundle/jruby/1.9/gems/logstash-output-webhdfs-2.0.2/lib/logstash/outputs/webhdfs.rb:157:in `flush'",
  "E:/elk2/logstash-2.1.0/vendor/bundle/jruby/1.9/gems/stud-0.0.22/lib/stud/buffer.rb:219:in `buffer_flush'",
  "org/jruby/RubyHash.java:1342:in `each'",
  "E:/elk2/logstash-2.1.0/vendor/bundle/jruby/1.9/gems/stud-0.0.22/lib/stud/buffer.rb:216:in `buffer_flush'",
  "E:/elk2/logstash-2.1.0/vendor/bundle/jruby/1.9/gems/stud-0.0.22/lib/stud/buffer.rb:159:in `buffer_receive'",
  "E:/elk2/logstash-2.1.0/vendor/bundle/jruby/1.9/gems/logstash-output-webhdfs-2.0.2/lib/logstash/outputs/webhdfs.rb:144:in `receive'",
  "E:/elk2/logstash-2.1.0/vendor/bundle/jruby/1.9/gems/logstash-core-2.1.0-java/lib/logstash/outputs/base.rb:81:in `handle'",
  "E:/elk2/logstash-2.1.0/vendor/bundle/jruby/1.9/gems/logstash-core-2.1.0-java/lib/logstash/outputs/base.rb:71:in `worker_setup'"
], :level=>:warn}

What does your output look like?

I have tried a variety of configurations, with the same error being given each time. The output section of my conf file is below. My Hadoop cluster is the one provided by the Oracle Big Data Lite VM, which contains Cloudera 5. FYI, I can GET files from HDFS with the RESTClient plugin for Firefox using the required URL, but I can't CREATE them (the curl equivalent of that test is shown after the config).
output {
  #webhdfs {
  #  host => "192.168.18.47"
  #  path => "/user/logstash/dt=%{+YYYY-MM-dd}/logstash-%{+HH}.log"
  #  user => "hue"
  #}

  webhdfs {
    workers => 2
    host => "192.168.18.47"
    port => 50070
    user => "oracle"
    path => "/user/logstash/dt=%{+Y}-%{+M}-%{+d}/logstash-%{+H}.log"
    flush_size => 500
    compression => "snappy"
    idle_flush_time => 10
    retry_interval => 0.5
    codec => json
  }
}
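For reference, here is roughly the equivalent test as curl commands. A WebHDFS CREATE is a two-step request: the namenode first answers with a 307 redirect to a datanode, and the file content is then sent to that redirect location. Host, port and user below are taken from my config; the file name is just an example.

# Step 1: ask the namenode for a CREATE location; expect HTTP 307 with a Location header pointing at a datanode
curl -i -X PUT "http://192.168.18.47:50070/webhdfs/v1/user/logstash/test.log?op=CREATE&user.name=oracle"

# Step 2: send the actual file content to the Location URL returned in step 1
curl -i -X PUT -T test.log "<Location-header-from-step-1>"

The second request has to reach the datanode address given in the redirect from the client machine, so that hop seems worth checking as well.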

Did you find any solution for this error?

No, I'm afraid not. I ended up changing my solution architecture rather than waste any more time trying to solve this issue.

Hi @yachtsman60 ,

Thanks for the reply.

My architecture specifies sending the same data to both Elasticsearch and HDFS: if Elasticsearch is down or hits some unavoidable problem, HDFS is the second store, so data can't be lost.

Right now I'm not able to send data to HDFS from Logstash.
Can you suggest how I can achieve this (a rough sketch of what I'm aiming for is below)? Are you doing the same, i.e. do you have the same architecture?
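For context, this is a rough sketch of the dual-output section I'm trying to get working; the hostnames, index name, HDFS path and user are placeholders, not a tested configuration:

output {
  # Primary destination: Elasticsearch (placeholder host and index)
  elasticsearch {
    hosts => ["es-host:9200"]
    index => "logstash-%{+YYYY.MM.dd}"
  }
  # Second copy of the same events: WebHDFS (placeholder namenode, user and path)
  webhdfs {
    host => "namenode-host"
    port => 50070
    user => "hdfs-user"
    path => "/user/logstash/dt=%{+YYYY-MM-dd}/logstash-%{+HH}.log"
  }
}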

cheers!
sah

Hi, we are testing out Logstash for ingesting data into Hadoop and are unfortunately facing the same issue. Any pointers on getting this to work would be great. I am able to do a curl against the WebHDFS URL, but the same is not working from Logstash.
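Next I'll try running Logstash with debug logging to see what response is actually behind the WebHDFS::ServerError; something like the following (flag and syntax from memory for Logstash 2.x, please double-check against bin/logstash --help):

# Run the pipeline with debug logging to surface the underlying request/response details
bin/logstash agent -f logstash.conf --debug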