Nginx Logs Can't Be Parsed Because of Symlinks

Currently I have an nginx container with Filebeat running in the background. [nginx:latest as base]

I have enabled the nginx module with filebeat modules enable nginx; my filebeat.yml has an input defined for the logs and has "symlinks: true" enabled.

When I attempt to run it, though, I get an error because the Harvester is unable to read the symlinked files [/var/log/nginx/access.log|error.log].

I have included the setup I am using for the nginx module itself and my input below. I found where the Harvester performs the check, and I am unsure whether it supports reading from symlinked files: beats/harvester.go at main · elastic/beats · GitHub, L555

My setup:
~/.modules.d/nginx.yml:

- module: nginx
  access:
    enabled: true
    var.paths: ["/var/log/nginx/access.log*"]
  error:
    enabled: true
    var.paths: ["/var/log/nginx/error.log*"]

filebeat.yml:

filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /var/log/nginx/*.log
  symlinks: true

ls -lrt /var/log/nginx :

lrwxrwxrwx 1 root root 11 Jan 11 06:31 error.log -> /dev/stderr
lrwxrwxrwx 1 root root 11 Jan 11 06:31 access.log -> /dev/stdout
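The symlinks themselves are not the problem: "symlinks: true" lets Filebeat follow them, but the resolved target must still be a regular file, and /dev/stdout and /dev/stderr are character devices. A quick sketch of the distinction, using /dev/null as a stand-in character device:

```shell
# Sketch: the symlink resolves to a character device, not a regular file.
# (/dev/null stands in for /dev/stdout, whose type depends on how the
# process's stdout is wired up.)
ln -sf /dev/null /tmp/access.log

stat -c '%F' /tmp/access.log     # the link itself: "symbolic link"
stat -L -c '%F' /tmp/access.log  # the resolved target: "character special file"
```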

Error logs:

{"file.name":"log/input.go","file.line":556},"message":"Harvester could not be started on new file: /var/log/nginx/access.log, Err: error setting up harvester: Harvester setup failed. Unexpected file opening error: Tried to open non regular file: \"Dcrw--w----\" access.log","service.name":"filebeat","input_id":"c84db31c-482a-42c6-95b7-be6e57aa822c","source_file":"/var/log/nginx/access.log","state_id":"native::3-175","finished":false,"os_id":"3-175","ecs.version":"1.6.0"}

TL;DR: I am attempting to have Filebeat read from /var/log/nginx/error.log|access.log in an nginx container. I am currently getting errors because the files are symlinked to /dev/stdout and /dev/stderr. I believe I have enabled symlink support in my input, but I still get errors from the Harvester.

Hello @Christian_Jacobs

Filebeat does not support non-regular files (such as character devices). A similar question can be found in this thread.

As an option, you can consider running Filebeat as a separate container. You can check this page for running Filebeat in Docker, or this one for running it in Kubernetes.

Thank you for the reply, @Tetiana_Kravchenko.

Unfortunately, I am working in a serverless container environment [AWS Fargate].
For the approach you are suggesting, I would create a Filebeat sidecar with a shared volume between the nginx container and Filebeat.
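For that to work, nginx would also have to write regular files into the shared volume instead of the stock image's stdout/stderr symlinks. A minimal sketch of the nginx.conf change, assuming the symlinks are removed first (the official image creates them at build time, e.g. rm /var/log/nginx/*.log in the Dockerfile) and that the stock image's "main" log_format is still defined:

```nginx
# Hypothetical fragment: write regular files into the shared volume
# instead of the image's /dev/stdout and /dev/stderr symlinks.
access_log /var/log/nginx/access.log main;
error_log  /var/log/nginx/error.log warn;
```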

In the linked Docker article, the filebeat.docker.yml file includes the below portion:

filebeat.autodiscover:
  providers:
    - type: docker

I do not believe this will work in Fargate because I do not have access to the underlying Docker daemon, but I can test it.

AWS supports the FireLens log router for Fargate, using a Fluent Bit or Fluentd sidecar to gather stdout/stderr from all containers in the same task.

Do you have a recommended setup for an AWS Fargate environment, or is the approach of creating a shared volume for all containers to write logs into, with a Filebeat sidecar reading from it, the recommended one?
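For reference, something like this is what I have in mind for the shared-volume route, as a task-definition sketch (container names are hypothetical; on Fargate a volume with no host path is backed by the task's ephemeral storage):

```json
{
  "volumes": [{ "name": "nginx-logs" }],
  "containerDefinitions": [
    {
      "name": "nginx",
      "mountPoints": [
        { "sourceVolume": "nginx-logs", "containerPath": "/var/log/nginx" }
      ]
    },
    {
      "name": "filebeat",
      "mountPoints": [
        { "sourceVolume": "nginx-logs", "containerPath": "/var/log/nginx", "readOnly": true }
      ]
    }
  ]
}
```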