How can I ingest pre-existing log files into Elasticsearch for analysis?

I need to ingest logs offline into ELK for analysis. I'm a relative newbie to the ELK architecture and floundering a bit, so I would greatly appreciate any input.

My problem is that I have been sent a mass of historical log files that need to be ingested into ELK for analysis. This is a simple problem, but one that does not appear to receive much attention. All of the documentation about ingesting logs is geared towards the more technically demanding real-time ingestion.

Initially, it looked like if I configured Filebeat to monitor a directory, started it, and then dropped the log files into the directory, it would ingest them. Unfortunately, this does not appear to be the case. There are no signs that Filebeat is seeing or processing the logs, or that it is writing any of its output to the ELK system: Filebeat posts its heartbeat to the command line, but there is no output from the Logstash server.
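For reference, the Filebeat setup I tried was along these lines (the directory path and Logstash host are placeholders for my actual values):

```yaml
# Minimal filebeat.yml sketch: watch a drop directory and ship to Logstash.
filebeat.inputs:
  - type: log
    paths:
      - /data/historical-logs/*.log   # placeholder drop directory
output.logstash:
  hosts: ["localhost:5044"]           # placeholder Logstash beats endpoint
```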

Logstash is receiving and processing data streams from socket connections.

What is the cleanest way to ingest log files into ELK offline?

Logstash's file input in read mode may be what you are looking for.

Read mode is relatively new, so you may need to use the latest version of Logstash, or update the file input plugin on an older Logstash version.
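A minimal pipeline sketch using the file input in read mode might look like this (the log path and Elasticsearch endpoint are assumptions; adjust them for your setup):

```
input {
  file {
    path => "/data/historical-logs/*.log"       # assumed location of the old logs
    mode => "read"                              # read files to completion instead of tailing
    file_completed_action => "log"              # record finished files rather than deleting them
    file_completed_log_path => "/tmp/completed.log"
  }
}
output {
  elasticsearch {
    hosts => ["http://localhost:9200"]          # assumed Elasticsearch endpoint
  }
}
```

Note that in read mode `file_completed_action` defaults to `delete`, so set it to `log` (with a `file_completed_log_path`) if you want to keep the original files.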


This looks like exactly what I need. Looks like I'll be upgrading my ELK stack in the near future.

Whether the analysis is real-time or historical, the ingestion process is exactly the same, so don't worry too much :slight_smile:

Unfortunately, Logstash doesn't seem to see the "dead" log files, which may be an artifact of earlier failed tests having recorded a read position somewhere.

When one switches from tail mode to read mode, the plugin still acts upon the last read positions recorded in the sincedb file.
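One way around stale positions, as a sketch, is to point `sincedb_path` at a throwaway location so the input starts with no recorded positions (the log path here is an assumption):

```
input {
  file {
    path => "/data/historical-logs/*.log"
    mode => "read"
    sincedb_path => "/dev/null"   # discard recorded positions so files are read from the start
    file_completed_action => "log"
    file_completed_log_path => "/tmp/completed.log"
  }
}
```

Alternatively, stop Logstash and delete the existing sincedb file before re-running the pipeline.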
