Hi,
I am new to Elasticsearch, and I am trying to calculate HDFS space consumption via Kibana.
I have an fsimage exported as a CSV file from the cluster; the file sizes are in bytes. My Logstash config is as follows:
input {
  file {
    path => '/etc/logstash/scripts/fsimage.csv.*'
    start_position => "beginning"
    type => "fsimage"
  }
}
filter {
  if [type] == "fsimage" {
    csv {
      separator => "|"
      columns => [ "HDFSPath", "replication", "ModificationTime", "AccessTime", "PreferredBlockSize", "BlocksCount", "FileSize", "NSQUOTA", "DSQUOTA", "permission", "user", "group" ]
      convert => {
        'replication' => 'integer'
        'PreferredBlockSize' => 'integer'
        'BlocksCount' => 'integer'
        'FileSize' => 'integer'
        'NSQUOTA' => 'integer'
        'DSQUOTA' => 'integer'
      }
    }
  }
}
An example line from the CSV file:
XXXXXX|3|2016-12-3011:34|2016-12-3011:34|134217728|1|88807|0|0|-rw-r--r--|kbd_b9xf|hdfs
The problem is that FileSize (88807 bytes in this example) is counted only once, but in HDFS the total space consumed is FileSize * replication count. How do I calculate FileSize * replication count in the script above?
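I was thinking of adding a ruby filter after the csv filter, inside the filter block, something like the sketch below. This is just my guess, assuming the Logstash 5.x event API (event.get/event.set), and the field name "ConsumedSpace" is one I made up:

  ruby {
    # multiply the parsed FileSize by the replication factor and
    # store the result in a new field ("ConsumedSpace" is my own name)
    code => "
      fs  = event.get('FileSize')
      rep = event.get('replication')
      event.set('ConsumedSpace', fs * rep) if fs && rep
    "
  }

Would that be the right approach, or is there a better way to do this in Logstash?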