We have a request to use ELK to access/fetch some log files stored on a MapR cluster file system (HDFS or MFS), but we don't know how or where to set up ES-Hadoop...
Any suggestions on how to fetch log files on HDFS (the way Filebeat does on a local disk) and send them to Logstash or Elasticsearch nodes?
ES-Hadoop only runs from within data processing frameworks like Hadoop's MapReduce, Hive, or Spark. If you want to use ES-Hadoop, you will need one of those frameworks installed to read the data off of HDFS/MFS before it is sent to Elasticsearch.
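For example, with Spark it could look something like the minimal PySpark sketch below (my illustration, not something from this thread). It assumes the elasticsearch-hadoop Spark connector jar is on the classpath (e.g. passed via `--jars` to `spark-submit`); the Elasticsearch host `es-node-1:9200`, the index name `hdfs-logs`, and the HDFS path are all placeholders you'd replace with your own values.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("hdfs-logs-to-es").getOrCreate()

# Read the raw log files off HDFS/MFS; each line becomes one row
# in a DataFrame with a single "value" column.
logs = spark.read.text("hdfs:///var/logs/app/*.log")

# Ship each line to Elasticsearch as a document via the ES-Hadoop
# Spark SQL data source. Host and index name are placeholders.
(
    logs.write
    .format("org.elasticsearch.spark.sql")
    .option("es.nodes", "es-node-1:9200")
    .option("es.resource", "hdfs-logs")
    .mode("append")
    .save()
)
```

In practice you'd probably also parse the lines (timestamps, log levels, etc.) into structured columns before writing, so the documents land in Elasticsearch as more than a single raw string field.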