I am new to Elasticsearch. Logstash is pulling the data from the cluster, but when I view the data in Kibana, the total filesystem size differs from the actual size on the box. Any suggestions on how to standardize the filesystem sizes in Logstash?
I have to identify the maximum file size used by different directories within HDFS. For that I am using Kibana (a vertical bar visualization), but the file sizes within HDFS come in different units (MB, KB, GB, etc.). Will Kibana or Logstash convert those units? If so, how?
You should feed Elasticsearch consistent data.
If you store the sizes with mixed units, you can hack them back into a consistent form with Painless scripts at display time, but that is messy.
Do the conversion in Logstash instead, or ingest all the sizes in the same unit to begin with; it will save you a lot of work.
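As a minimal sketch of doing that in a Logstash pipeline with a `ruby` filter, assuming each event carries a field named `filesize` holding strings like "512 KB" or "1.2 GB" (the field name and the exact format are assumptions, not taken from this thread):

```
filter {
  ruby {
    code => '
      # Multipliers for the units we expect to see (assumed binary, 1024-based).
      units = { "b" => 1, "kb" => 1024, "mb" => 1024**2, "gb" => 1024**3, "tb" => 1024**4 }
      raw = event.get("filesize").to_s.strip
      # Split the value into a number and a unit, e.g. "1.2 GB" -> 1.2 and "gb".
      if raw =~ /\A([\d.]+)\s*([a-zA-Z]+)\z/
        factor = units[$2.downcase]
        # Store the normalized size in a new numeric field.
        event.set("filesize_bytes", ($1.to_f * factor).round) unless factor.nil?
      end
    '
  }
}
```

Then map `filesize_bytes` as a number and apply Kibana's Bytes formatter to it, so sums and averages are computed in a single unit.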
Thanks. How do I do that in Logstash? I have a file with sizes in MB, KB, and GB. How do I convert them all to MB and load them into Elasticsearch so that every file size uses the same unit, and Kibana can give me the sum of the file sizes in MB?
In Kibana, under the index pattern, I have set the filesize field to type number with format Bytes. Because the units are mixed, I am not getting the correct total file size, so I want to convert the sizes with different units (KB, MB, GB) to bytes. How do I do that?
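To see what that conversion looks like end to end (field names assumed, not from this thread), here is a tiny self-contained pipeline you can run with `bin/logstash -f test.conf` to check the bytes conversion before wiring it into the real one:

```
input { stdin { } }

filter {
  ruby {
    code => '
      # Assumed binary multipliers; adjust if your sizes are decimal (1000-based).
      units = { "b" => 1, "kb" => 1024, "mb" => 1024**2, "gb" => 1024**3 }
      # Each stdin line arrives in the "message" field, e.g. "1.5 GB".
      if event.get("message").to_s =~ /([\d.]+)\s*([a-zA-Z]+)/
        factor = units[$2.downcase]
        event.set("filesize_bytes", ($1.to_f * factor).round) unless factor.nil?
      end
    '
  }
}

output { stdout { codec => rubydebug } }
```

Typing `1.5 GB` should print `filesize_bytes => 1610612736`; with that field mapped as a number and formatted as Bytes, Kibana renders it back as 1.5GB and totals it correctly.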