Bulk-importing archived log files into Elasticsearch using Filebeat

Hello,

I am working in forensics and have a lot of log files, each 100MB in size.

I am uneasy using Filebeat because it throws a lot of "too many open files" errors, and I am not sure whether every file is actually being processed.

I am aware of this article: Too many open file handlers | Filebeat Reference [8.11] | Elastic

Reading this article makes me wonder whether Filebeat is the right tool to bulk-import archived files into Elasticsearch:

Filebeat keeps the file handler open in case it reaches the end of a file so that it can read new log lines in near real time.

I have no active files; every file is already closed, and I don't want Filebeat to hold the file handler open.

So is there a Beat that will process one file after another and doesn't watch for active files?

I know Filebeat has many options, but it is built as an active log file reader and file watcher, and I don't need those functions for my archived log files.

Thank you
Chr1s

See the --once option on this page:

Within the input configuration in filebeat.yml, you can configure close_eof and harvester_limit; see this page: Log input | Filebeat Reference [7.12] | Elastic
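To illustrate, here is a minimal sketch of what such a one-shot import configuration could look like. The paths, the harvester_limit value, and the Elasticsearch output are assumptions for illustration; close_eof and harvester_limit are the options mentioned above.

```yaml
# filebeat.yml - minimal sketch for a one-shot import of archived log files
filebeat.inputs:
  - type: log
    paths:
      - /data/archive/*.log        # hypothetical location of the archived files
    close_eof: true                # close each file handler as soon as EOF is reached
    harvester_limit: 4             # cap concurrent harvesters to avoid "too many open files"

output.elasticsearch:
  hosts: ["localhost:9200"]        # assumed local cluster; adjust to your setup
```

Running Filebeat with the --once flag (for example `filebeat -e --once`) then starts the configured harvesters, reads every file to EOF, and shuts down instead of continuing to watch for new lines.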


Thanks for the --once option. That and the close_eof did the trick.
