I'm new to filebeat and trying to understand how to set up a filebeat container to process the logs from several application containers, each writing logs to /app/log/*.log
Will this work?
docker run -d -v /app/log --name FB filebeat-image
docker run -d --volumes-from FB --name APP1 app-image
docker run -d --volumes-from FB --name APP2 app-image
OR
docker run -v /app/log --name APP1 app-image
docker run -v /app/log --name APP2 app-image
docker run --volumes-from APP1 --volumes-from APP2 --name FB filebeat-image
I would personally recommend the first approach because it works when containers are added and removed dynamically. With your second version, you already need to know all the containers at startup.
What I would actually recommend is to write the logs to a directory on the host machine and mount that into the filebeat container, so the logs are not "linked" to any container directly.
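A minimal sketch of that layout, assuming the apps can write to a host directory such as /var/log/myapps and that your filebeat image is called filebeat-image (the paths and image names are placeholders, not something from your setup):

# each app writes its /app/log output into its own host directory
docker run -d -v /var/log/myapps/app1:/app/log --name APP1 app-image
docker run -d -v /var/log/myapps/app2:/app/log --name APP2 app-image

# filebeat mounts the whole host directory read-only; app containers can come and go
docker run -d -v /var/log/myapps:/var/log/myapps:ro --name FB filebeat-image

In the filebeat configuration you would then point the input paths at something like /var/log/myapps/*/*.log.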
What happens to the state of the file when the filebeat container restarts? Does the newly created/restarted container resume harvesting the file from where it left off?
Example:
Suppose I start the filebeat container and it harvests 5 lines of log data from /var/log/elasticsearch/elasticsearch.log on the host machine where the container is running. The filebeat container then crashes or restarts. When the filebeat container is back, does harvesting resume from the 6th line?
This depends on where you store the registry file. If the registry file is on a volume that you reuse, then it will continue from where it left off. If the registry file disappears with the container, it will not.
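For example, assuming the official filebeat image (which keeps its data directory, including the registry, under /usr/share/filebeat/data), you could persist the registry in a named volume so a restarted container resumes at the 6th line. The volume and image names below are just illustrative:

# named volume that survives container restarts and recreation
docker volume create filebeat-data

# mount the registry/data volume plus the host log directory (read-only)
docker run -d \
  -v filebeat-data:/usr/share/filebeat/data \
  -v /var/log/elasticsearch:/var/log/elasticsearch:ro \
  --name FB filebeat-image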