Sorry if it's a naive question.
Filebeat is configured to ship data to ES directly. In case ES is offline and the Filebeat harvester finds a log to ship, will it buffer, retry, and ship?
Here is what I tried: my Docker container generated a file, Filebeat picked up the log entries and reported sending 'x' events, but ES wasn't reachable. I deleted the log file, thinking Filebeat had it buffered, and then started ES. I don't see the logs coming through.
How to handle this scenario?
Filebeat does not buffer any content. Filebeat works with back-pressure, slowing down file reading when the endpoint is not available. Filebeat does keep deleted files open for the duration of close_older (default 1h), but once the file is dropped and closed by Filebeat, its content is lost.
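As a rough sketch, the close_older window can be extended in the prospector configuration so that Filebeat holds deleted or rotated files open longer, giving ES time to come back. The paths and hosts below are placeholder assumptions, not from this thread:

```yaml
filebeat:
  prospectors:
    - paths:
        - /var/log/app/*.log   # hypothetical log location
      # Keep files open for 2h after the last read instead of the
      # default 1h, so events from deleted files can still be shipped
      # once the output becomes reachable again.
      close_older: 2h

output:
  elasticsearch:
    hosts: ["localhost:9200"]  # assumed ES endpoint
```

Note this only helps while the Filebeat process stays running; it is not persistent queueing.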
Adding persistent queueing to all Beats is a long-standing feature request, but it's currently not on the roadmap for the 5.0 release. Even with queueing to disk, the queue/buffer needs to be bounded.
Docker's logging support (besides the many plugins) is quite lacking and, in my opinion, very unsatisfying. Plus, Docker streams logs right through the docker daemon process in order to forward them to logging plugins, and some plugins make no attempt to restrict resource usage. In very big setups with hundreds of containers, that's a good way of asking for trouble.
Which logging options are supported by your application itself? Maybe forward logs via syslog, or store logs on a mounted volume?
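For illustration, both of those alternatives can be sketched in a Docker Compose fragment. The image name, host path, and syslog address are hypothetical assumptions:

```yaml
services:
  app:
    image: my-app:latest              # hypothetical image
    # Option 1: the app writes its log files to a mounted host volume,
    # which Filebeat on the host can harvest independently of the
    # container's stdout stream and lifecycle.
    volumes:
      - /var/log/my-app:/app/logs     # host path : container log dir
    # Option 2: let Docker forward stdout/stderr to a local syslog
    # daemon instead of using the default json-file driver.
    logging:
      driver: syslog
      options:
        syslog-address: "udp://localhost:514"
```

With the volume approach, deleting a file inside the container no longer races against Filebeat, since the host-side copy follows normal file semantics.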
@steffens Thanks for the explanation. This is helpful.
This topic was automatically closed after 21 days. New replies are no longer allowed.