I need to ingest logs offline into ELK for analysis. I'm a relative newbie to the ELK architecture and floundering a bit, so I would greatly appreciate any input.
My problem is that I have been sent a mass of historical log files that need to be ingested into ELK for analysis. This seems like a simple problem, but one that does not appear to receive much attention: all of the documentation about ingesting logs is geared towards the more technically demanding real-time ingestion.
Initially, it looked like if I configured Filebeat to monitor a directory, started it, and then dropped the log files into that directory, it would be able to ingest them. Unfortunately, this does not appear to be the case. There are no signs that Filebeat is seeing and processing the logs, or that it is writing any of its output to the ELK stack: Filebeat posts its heartbeat to the command line, but there is no output from the Logstash server.
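For context, the Filebeat configuration I'm working from looks roughly like the sketch below (the log directory and the Logstash host/port are placeholders, not my actual values):

```yaml
# filebeat.yml -- minimal sketch; paths and hosts are placeholders
filebeat.inputs:
  - type: log
    enabled: true
    paths:
      - /data/historical-logs/*.log   # directory the historical log files are dropped into

output.logstash:
  hosts: ["localhost:5044"]           # Logstash beats listener
```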
Logstash is otherwise receiving and processing data streams from socket connections.
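My assumption is that, in addition to the existing socket inputs, the Logstash pipeline needs a beats input for Filebeat to connect to, along the lines of the sketch below (the port and Elasticsearch host are placeholders). Please correct me if that assumption is wrong.

```
# Logstash pipeline -- minimal sketch; port and hosts are placeholders
input {
  beats {
    port => 5044
  }
}
output {
  elasticsearch {
    hosts => ["http://localhost:9200"]
  }
}
```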
What is the cleanest way to ingest log files into ELK offline?