Filebeat runs in its own isolated namespace under Linux, which means it has its own file system. It's almost as if it were running in its own virtual machine.
Just like you cannot see Filebeat from your nginx pods, you cannot see your nginx logs from your Filebeat pod.
So how do you monitor logs on kubernetes with Filebeat?
How would you do it if they were two different VMs?
Well, you'd have to set up some access point where the nginx logs are exposed to the Filebeat container, then configure Filebeat to read logs from that access point.
The typical way to do this is to have your nginx containers log to stdout, so that all logs get written to something like /var/lib/docker/containers on the k8s node, and then mount that dir from the host into the Filebeat container.
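That host-path mount might look like this in the Filebeat DaemonSet spec. This is just a sketch: the volume name and image tag are illustrative, and your node's log path may differ depending on the container runtime:

```yaml
# Excerpt from a Filebeat DaemonSet pod spec (volume name illustrative)
spec:
  containers:
    - name: filebeat
      image: docker.elastic.co/beats/filebeat:8.17.0
      volumeMounts:
        - name: varlibdockercontainers
          mountPath: /var/lib/docker/containers
          readOnly: true          # Filebeat only needs to read the logs
  volumes:
    - name: varlibdockercontainers
      hostPath:
        path: /var/lib/docker/containers   # where the runtime writes container logs
```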
Then use autodiscover to monitor for nginx containers, grab their IDs, and start reading their log files: Autodiscover | Filebeat Reference [8.17] | Elastic
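A minimal `filebeat.yml` autodiscover sketch along those lines. Assumptions: your container is literally named `nginx`, and logs live under the mounted `/var/lib/docker/containers` path from above:

```yaml
# filebeat.yml (sketch) - watch the cluster and attach an input
# to any container whose name contains "nginx"
filebeat.autodiscover:
  providers:
    - type: kubernetes
      templates:
        - condition:
            contains:
              kubernetes.container.name: "nginx"
          config:
            - type: container
              paths:
                # ${data.kubernetes.container.id} is filled in by autodiscover
                - /var/lib/docker/containers/${data.kubernetes.container.id}/*.log
```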
There's a previous post here with an example: Docker filebeat autodiscover not detecting nginx logs - #2 by shaunak
As an alternative to this, you could create a volume in Kubernetes that you mount into all of your nginx pods, have your nginx containers log into this shared volume, and then mount the shared volume into your Filebeat pods and set up monitoring of that shared location. This is the less preferred option for sure.
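For completeness, the shared-volume option could be sketched like this. All names here are hypothetical, and note the assumption that your cluster has a storage class supporting `ReadWriteMany`, since the volume must be mounted by multiple pods at once:

```yaml
# Sketch: a shared PVC that both nginx and Filebeat pods mount
# (names hypothetical; requires a ReadWriteMany-capable storage class)
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nginx-logs
spec:
  accessModes: ["ReadWriteMany"]   # multiple pods need this volume simultaneously
  resources:
    requests:
      storage: 1Gi
```

The nginx pods would mount `nginx-logs` at something like `/var/log/nginx` and write log files there instead of stdout; the Filebeat pods would mount the same claim (read-only) and point a `filestream` input at `/var/log/nginx/*.log`. One reason this is less preferred: you lose the per-container Kubernetes metadata that autodiscover attaches automatically, and you have to manage log rotation in the shared volume yourself.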