Hi,
I'm using Logstash to write data from Elasticsearch to HDFS. Everything works fine, but when Logstash writes the data to HDFS it raises the following exception:
webhdfs write caused an exception: {"RemoteException":{"exception":"IOException","javaClassName":"java.io.IOException","message":"Failed to replace a bad datanode on the existing pipeline due to no more good datanodes being available to try.
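From searching around, this message seems to come from HDFS itself rather than from Logstash: when a datanode in the write pipeline fails, the client-side replacement policy tries to swap in another datanode and gives up if none is available, which apparently happens easily on small clusters (3 or fewer datanodes). What I'm considering, based on the Hadoop docs (untested on my side, and I'm assuming it has to go into the cluster's hdfs-site.xml, since a webhdfs write runs through the namenode/datanodes):

<property>
  <name>dfs.client.block.write.replace-datanode-on-failure.policy</name>
  <!-- options are DEFAULT, ALWAYS, NEVER; NEVER is often suggested for very small clusters.
       Alternatively, keep DEFAULT and set ...replace-datanode-on-failure.best-effort to true
       so the write continues with the remaining datanodes instead of failing. -->
  <value>NEVER</value>
</property>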
My Logstash config is the following:
input {
  elasticsearch {
    hosts => "x.x.x.x"
    index => "my_index"
  }
}

filter {
  mutate {
    # build a CSV-style message line from the document fields
    add_field => {
      "message" => "%{status},%{msisdn},%{index}"
    }
  }
}

output {
  webhdfs {
    host => "0.0.0.0"
    path => "/testLogstash/logstash-%{+yyyy-MM-dd}.log"
    user => "flume"
    flush_size => 6000
    idle_flush_time => 60
  }
  stdout {}
}
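The webhdfs output plugin also documents retry settings (retry_known_errors, retry_interval, retry_times), so another thing I could try is giving transient pipeline failures more time to recover. A sketch of just the output block, with guessed values:

output {
  webhdfs {
    host => "0.0.0.0"
    path => "/testLogstash/logstash-%{+yyyy-MM-dd}.log"
    user => "flume"
    flush_size => 6000
    idle_flush_time => 60
    retry_known_errors => true  # plugin default: retry on known webhdfs errors
    retry_interval => 2         # guessed value; the default is 0.5 seconds
    retry_times => 10           # guessed value; the default is 5
  }
}

Is the datanode-replacement policy the right place to look, or is something wrong in my Logstash config?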