@solijam Please see below:
Basically, don't use a DaemonSet, as you won't be accessing the individual pods' logs that way; instead, use a persistent volume to store all the logs and mount it in each pod.
Then run a single Filebeat instance to read the files on the persistent volume.
The above should work - but I'm getting a file timeout error where new input isn't being read - hopefully somebody can point us in the right direction.
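For illustration, here's a rough sketch of the layout I mean: a shared ReadWriteMany claim that every app pod writes its log files into, plus a single Filebeat Deployment (not a DaemonSet) mounting the same claim read-only. All names, images, paths and sizes below are placeholders, and the filebeat.yml itself would come from a ConfigMap that I've left out for brevity:

    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: app-logs-pvc                  # placeholder name for the shared log volume
    spec:
      accessModes: ["ReadWriteMany"]      # several pods need to mount it at the same time
      resources:
        requests:
          storage: 5Gi
    ---
    # Application Deployment: each pod mounts the shared claim and writes its log files there
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: my-app
    spec:
      replicas: 2
      selector:
        matchLabels: { app: my-app }
      template:
        metadata:
          labels: { app: my-app }
        spec:
          containers:
            - name: my-app
              image: my-app:latest        # placeholder image
              volumeMounts:
                - name: app-logs
                  mountPath: /var/log/app # the app writes its log files under this path
          volumes:
            - name: app-logs
              persistentVolumeClaim:
                claimName: app-logs-pvc
    ---
    # Single Filebeat instance (one-replica Deployment, not a DaemonSet) reading the same volume
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: filebeat
    spec:
      replicas: 1
      selector:
        matchLabels: { app: filebeat }
      template:
        metadata:
          labels: { app: filebeat }
        spec:
          containers:
            - name: filebeat
              image: docker.elastic.co/beats/filebeat:7.17.0   # use whatever version you run
              volumeMounts:
                - name: app-logs
                  mountPath: /var/log/app
                  readOnly: true
                # filebeat.yml would be mounted here from a ConfigMap; omitted for brevity
          volumes:
            - name: app-logs
              persistentVolumeClaim:
                claimName: app-logs-pvc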
Hey,
Here is the setup I currently have half working: Kubernetes running on Amazon, with one instance of Logstash and Filebeat running on two pods. There is no DaemonSet for Filebeat, because I want to read files from a specific directory on each of the pods (the logs are stored in a mounted persistent volume, specifically Azure File storage); I'm not interested in the pods' Docker stdout. There is also a persistent volume to hold Filebeat's registry data (the per-file read offsets/deltas), so that state isn't lost across redeployments.
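To make that concrete, here is a minimal sketch of the filebeat.yml I have in mind; the paths, the input type and the Logstash host are placeholders/assumptions, and older Filebeat releases would use a log input rather than filestream:

    # filebeat.yml - minimal sketch; paths and hosts are placeholders
    filebeat.inputs:
      - type: filestream            # older Filebeat versions: "type: log"
        id: app-logs
        paths:
          - /var/log/app/*.log      # directory where the shared log volume is mounted

    # Keep Filebeat's registry (per-file read offsets) on its own persistent volume
    # so the state survives pod redeployments
    path.data: /usr/share/filebeat/data

    output.logstash:
      hosts: ["logstash:5044"]      # placeholder Logstash service name and port

With path.data on a persistent volume, Filebeat should pick up where it left off after a redeploy instead of re-reading the files from the start.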
…