I'm running into an issue where I can't collect any of my custom Kubernetes container logs using the Kubernetes integration with Elastic Agent. Elastic-provided containers appear to ship their logs correctly.
I'm looking for any info on how I may have misconfigured my setup.
Environment information:
ECK Operator in AWS EKS, with Elasticsearch, Kibana, Fleet Server and Elastic Agent.
I'm in a dev environment with two nodes in the node group, so one Elasticsearch pod and one Agent pod per node. The Fleet Server agent, Kibana, and the ECK operator run on Fargate.
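The per-node agents are deployed as a DaemonSet through the ECK Agent CRD, something along these lines (heavily trimmed; the version, names, and mounts below are placeholders rather than my exact manifest):

```yaml
apiVersion: agent.k8s.elastic.co/v1alpha1
kind: Agent
metadata:
  name: elastic-agent
spec:
  version: 8.3.3                 # placeholder version
  kibanaRef:
    name: kibana                 # placeholder name
  fleetServerRef:
    name: fleet-server           # placeholder name
  mode: fleet
  daemonSet:
    podTemplate:
      spec:
        containers:
          - name: agent
            volumeMounts:
              - name: varlog
                mountPath: /var/log
                readOnly: true
        volumes:
          - name: varlog
            hostPath:
              path: /var/log     # host path so the agent can read container logs
```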
I've verified that Fleet is running and the agents appear healthy. Filebeat ships system logs correctly.
**Possibly relevant errors in the agent logs:**
```
filestream input with ID 'filestream-kubernetes.container_logs-a0965567-8a60-49a1-895e-276e2fd2a73d' already exists, this will lead to data duplication, please use a different ID
```
Could this be because there are two agents trying to run the same filestream input? Maybe it's related to this: fix race condition when stopping inputs filestream ID bookkeeper by belimawr · Pull Request #32309 · elastic/beats · GitHub
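For reference, my understanding of that warning (an assumption on my part, not something I've confirmed in the generated policy) is that each filestream input carries an `id`, and two inputs reusing the same `id` trigger exactly this message. A minimal sketch of what I mean in standalone Filebeat terms, with placeholder ids and paths:

```yaml
filebeat.inputs:
  # Two filestream inputs sharing the same id would produce the
  # "already exists, this will lead to data duplication" warning.
  - type: filestream
    id: kubernetes-container-logs          # placeholder id
    paths:
      - /var/log/containers/*.log
  - type: filestream
    id: kubernetes-container-logs          # same id again -> duplicate-ID warning
    paths:
      - /var/log/containers/my-app-*.log
```

Since the Kubernetes integration generates these ids itself, I don't see how I'd end up with a duplicate unless the two agents (or a restarted input) are colliding, which is why I'm wondering about that PR.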
```
DEPRECATED: Log input. Use Filestream input instead.
```
Is it possible that if a log line doesn't match the JSON format the parser expects, it just gets ignored? Here's a sample log line from one of my containers:
{"log.level":"INFO","@timestamp":"2022-07-19 06:22:59,717","log.origin":{"file.name":"/opt/venv/lib/python3.9/site-packages/flask_service/__main__.py","file.line":"20"},"message":"New log level: 20","ecs.version":"1.6.0"}
Happy to provide any YAML files or other info.