Hi there,
I have a set of old Apache logs that are already in an existing Elastic Stack; they were shipped to the Elasticsearch node using Filebeat with no modules enabled.
I would now like to analyse this set of old Apache logs properly. The best way I can think of is to copy the original Apache log file from the host machine to another machine, then run Filebeat there (with the Apache module enabled), which should pipe each line of this duplicate log file into a new Elasticsearch instance.
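For reference, this is roughly the module configuration I have in mind on the second machine (the log path is a placeholder for wherever my copied file lives):

```yaml
# modules.d/apache.yml -- enabled beforehand with: filebeat modules enable apache
- module: apache
  access:
    enabled: true
    # Read the copied access log instead of the default system locations
    # (placeholder path -- adjust to the actual location of the duplicate file)
    var.paths: ["/tmp/old-logs/access.log"]
  error:
    enabled: false
```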
I tried a mini test of this concept, using exactly the same idea but with just 10 lines of old Apache logs. I do not see any results in the Dashboard for the Filebeat index pattern, except for a single document, which carries the error below.
The `error.message` value is: `Provided Grok expressions do not match field value: [IP_Address - - [01/Dec/2021:00:00:00 +0000] \"\\x16\\x03\\x01\" 400 226]`

(For context, the `\x16\x03\x01` bytes look like the start of a TLS handshake sent to a plain-HTTP port, which Apache records as escaped raw bytes, so there is no normal request line for the module's Grok patterns to match.)
Questions:
- How do I make Filebeat pipe a large number of lines from a file (in my case, Apache web logs) into a fresh instance of Filebeat + Elasticsearch? If my approach is inefficient, I hope to learn a better way. (The exact commands I used for the test are sketched after this list.)
- Understandably, the Apache log line above is atypical, but it did appear in my log file. This is probably very low priority, but I guess it might be good for the module's pipeline to have a last-resort, catch-the-rest Grok pattern for such cases(?) A rough illustration of what I mean follows this list.
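For completeness on the first question, these are the commands I ran for the 10-line test (assuming a standard DEB/RPM package install of Filebeat; paths will differ for a tarball install):

```sh
filebeat modules enable apache
filebeat setup -e   # load the index template and sample dashboards
filebeat -e         # run in the foreground to watch for errors

# Between test runs I stopped Filebeat and cleared its registry, so that
# the same file would be re-read from the start on the next run
# (default data path for the DEB/RPM packages):
sudo rm -rf /var/lib/filebeat/registry
```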
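To illustrate the second question: as far as I understand, a Grok processor in an ingest pipeline tries its `patterns` list in order, so a permissive final entry appended after the module's real patterns could still capture lines like mine. A hypothetical catch-the-rest pattern (the field names here are my own guesses, not the module's actual definitions):

```
%{IPORHOST:source.address} %{DATA:ident} %{DATA:auth} \[%{HTTPDATE:apache.access.time}\] "%{DATA:apache.access.raw_request}" %{NUMBER:http.response.status_code:int} (?:%{NUMBER:http.response.body.bytes:int}|-)
```

It would not extract a method or URL (there is none in a raw TLS probe anyway), but at least the event would be indexed with its timestamp and status code instead of failing outright.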