Sending nginx logs from other pods to an ECK Elasticsearch via Beats

I have followed the ECK quickstart and now have 3 replicas of Elasticsearch running on my k8s cluster, as well as Kibana.
The next step is to configure Filebeat and Logstash to capture the nginx logs from my other pods in the cluster.
I followed this ECK Beats Quickstart to set up a basic Filebeat, and it works fine.
Now I am trying to add the nginx module, but I can't seem to get it to access the nginx logs from the other pods. Here is the only configuration that doesn't break Filebeat on startup:

apiVersion: beat.k8s.elastic.co/v1beta1
kind: Beat
metadata:
  name: jsc-filebeat
  namespace: jsc-ns
spec:
  type: filebeat
  version: 8.17.2
  # elasticsearchRef:
  #   name: jsc-elasticsearch
  config:
    filebeat.inputs:
      - type: filestream
        id: nginx-filestream-id
        enabled: true
        paths:
          - /var/log/nginx/access.log
        fields:
          nginx: true
    output.logstash:
      hosts: ["jsc-logstash-ls-beats:5044"]
  daemonSet:
    podTemplate:
      spec:
        dnsPolicy: ClusterFirstWithHostNet
        hostNetwork: true
        securityContext:
          runAsUser: 0
        containers:
          - name: filebeat
            volumeMounts:
              - name: varlognginx
                mountPath: /var/log/nginx
        volumes:
          - name: varlognginx
            hostPath:
              path: /var/log/nginx

If I write directly into the /var/log/nginx/access.log file on the Filebeat pod, everything works fine. But what I am looking for is to also pick up the nginx log files from the other pods. I thought the DaemonSet would do that. What am I missing?

Filebeat is running in its own isolated namespace under Linux. This means that it has its own file system, almost as if it were running in its own virtual machine.

Just like you cannot see Filebeat from your nginx pods, you cannot see your nginx logs from your Filebeat pod.

So how do you monitor logs on Kubernetes with Filebeat?

How would you do it if they were two different VMs?

Well, you'd have to set up some access point where you expose the nginx logs to the Filebeat container, and then configure Filebeat to read logs from that access point.

The typical way to do this is to have your nginx containers log to stdout/stderr, so that all container logs end up on the k8s node (symlinked under /var/log/containers, with the actual files under /var/log/pods or /var/lib/docker/containers depending on the container runtime), and then mount those directories from the host into the Filebeat container.
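
As a rough, untested sketch, here is how the Beat resource from the question could mount the node's container log directories instead of /var/log/nginx (the jsc-* names come from the question; the container input and the /var/log/containers glob are my assumptions, and the autodiscover approach below is usually more robust than a filename glob):

apiVersion: beat.k8s.elastic.co/v1beta1
kind: Beat
metadata:
  name: jsc-filebeat
  namespace: jsc-ns
spec:
  type: filebeat
  version: 8.17.2
  config:
    filebeat.inputs:
      - type: container
        paths:
          # kubelet symlinks every container's stdout/stderr log here;
          # this glob only keeps logs from pods/containers with "nginx" in the name
          - /var/log/containers/*nginx*.log
    output.logstash:
      hosts: ["jsc-logstash-ls-beats:5044"]
  daemonSet:
    podTemplate:
      spec:
        securityContext:
          runAsUser: 0
        containers:
          - name: filebeat
            volumeMounts:
              - name: varlogcontainers
                mountPath: /var/log/containers
              - name: varlogpods
                mountPath: /var/log/pods
              - name: varlibdockercontainers
                mountPath: /var/lib/docker/containers
        volumes:
          # the symlinks in /var/log/containers point into the other two paths,
          # so all three host directories are mounted read-through
          - name: varlogcontainers
            hostPath:
              path: /var/log/containers
          - name: varlogpods
            hostPath:
              path: /var/log/pods
          - name: varlibdockercontainers
            hostPath:
              path: /var/lib/docker/containers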

Then use autodiscover to watch for nginx containers, grab their container IDs, and start reading their log files: Autodiscover | Filebeat Reference [8.17] | Elastic

There's a previous post here with an example: Docker filebeat autodiscover not detecting nginx logs - #2 by shaunak
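
A sketch of what that could look like under spec.config, replacing the filebeat.inputs block above (the nginx name match, the paths, and the NODE_NAME handling are my assumptions, not taken from the linked post):

filebeat.autodiscover:
  providers:
    - type: kubernetes
      # NODE_NAME has to be injected into the filebeat container via the
      # downward API (env valueFrom fieldRef: spec.nodeName) in the podTemplate
      node: ${NODE_NAME}
      templates:
        - condition:
            contains:
              kubernetes.container.name: nginx
          config:
            - module: nginx
              access:
                enabled: true
                input:
                  type: container
                  stream: stdout
                  paths:
                    # matches only the log symlink of the discovered container
                    - /var/log/containers/*${data.kubernetes.container.id}.log
              error:
                enabled: true
                input:
                  type: container
                  stream: stderr
                  paths:
                    - /var/log/containers/*${data.kubernetes.container.id}.log

This still relies on the host mounts shown in the previous sketch, and on the official nginx image's convention of sending access logs to stdout and error logs to stderr.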

As an alternative, you could create a volume in Kubernetes that you mount into all of your nginx pods, have the nginx containers log into that shared volume, and then mount the same volume into your Filebeat pods and monitor that shared location. This is definitely the less preferred option.
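
If you did go down that road, a rough sketch of the shared claim (all names here are hypothetical, and it needs storage that supports ReadWriteMany, e.g. NFS):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nginx-logs
  namespace: jsc-ns
spec:
  accessModes: ["ReadWriteMany"]
  resources:
    requests:
      storage: 1Gi

# Mount this claim at /var/log/nginx in the nginx pods (each pod should write
# to its own file name so they don't clobber each other), and in the Beat
# podTemplate replace the hostPath volume from the question with:
#   volumes:
#     - name: varlognginx
#       persistentVolumeClaim:
#         claimName: nginx-logs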