Filebeat gets "stuck" while loading inputs from configured paths

Hi,

I'm running containerized Filebeat instances on an OpenShift cluster. MapR volumes are mounted into the Filebeat pods, and Filebeat is configured to read from those volumes.
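For reference, the inputs are configured roughly as follows (the path below is illustrative, not my actual config; the real glob points at the mounted MapR volume):

```yaml
# Illustrative filebeat.yml excerpt -- actual paths differ
filebeat.inputs:
  - type: log
    enabled: true
    paths:
      - /mapr/mycluster/apps/*/logs/*.log   # MapR volume mounted into the pod
```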

I noticed that, depending on the number of objects (directories and files), Filebeat sometimes gets "stuck" while loading inputs (last log message: "Loading Inputs: 1"). When the input-loading step completes correctly, Filebeat's stdout log prints all the configured harvest paths and concludes with the message: "Loading and starting Inputs completed. Enabled inputs: 1". However, that sometimes doesn't happen, and from my observations it seems to depend on the number of objects (in the thousands).

Because Filebeat gets "stuck" without erroring out or failing in any visible way, how can I make sure that it eventually loads the inputs? I don't want to deploy an additional layer of Filebeat instances just to read the stdout of the real ones, looking for the successful input-load message, merely to confirm that the microservice is working and doesn't need attention. Are there any tools or settings that can address this? I already looked at probing in OpenShift, but probes address only pod state, not application state.
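To illustrate the probing limitation: a probe along the lines below (names and timings are illustrative, not my actual manifest) only confirms that the Filebeat process in the container is alive; it says nothing about whether the input-loading step ever finished:

```yaml
# Illustrative liveness probe for the Filebeat container -- it only checks
# that the process is running, not that "Loading and starting Inputs
# completed" was ever logged.
livenessProbe:
  exec:
    command: ["pgrep", "-x", "filebeat"]
  initialDelaySeconds: 30
  periodSeconds: 10
```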