Any luck on this one? I'm having the same issue right now.
As I see it, these are the relevant parts of my docker-compose.yml for the Nginx and Filebeat containers:
nginx:
  container_name: nginx
  build: ./nginx-vts/.
  labels:
    co.elastic.logs/disable: false
    co.elastic.logs/module: nginx
    co.elastic.logs/fileset.stdout: access
    co.elastic.logs/fileset.stderr: error
filebeat:
  container_name: filebeat
  image: docker.elastic.co/beats/filebeat:7.0.0
  user: root
  volumes:
    - ${MY_DOCKER_DATA_DIR}/filebeat/nginx-access-ingest.json:/usr/share/filebeat/module/nginx/access/ingest/default.json
    - ${MY_DOCKER_DATA_DIR}/filebeat/filebeat.autodiscover.docker.yml:/usr/share/filebeat/filebeat.yml:ro
    - filebeat_data:/usr/share/filebeat/data
    - /var/run/docker.sock:/var/run/docker.sock
    - /var/lib/docker/containers/:/var/lib/docker/containers/:ro
    - /var/log/:/var/log/:ro
  environment:
    - ELASTICSEARCH_HOST=elasticsearch:9200
    - KIBANA_HOST=kibana:5601
  command: ["--strict.perms=false"]
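As a sanity check, the hint labels can be confirmed on the running container from the host (this assumes the container_name values above):

docker inspect -f '{{json .Config.Labels}}' nginx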
My filebeat.yml passes filebeat test config with "Config OK":
filebeat.config:
  modules:
    path: ${path.config}/modules.d/*.yml
    reload.enabled: true
filebeat.modules:
- module: nginx
filebeat.autodiscover:
  providers:
    - type: docker
      hints.enabled: true
      default.disable: true
output.elasticsearch:
  hosts: 'elasticsearch:9200'
setup.kibana:
  host: "kibana:5601"
I verified that the nginx module is enabled inside my filebeat container. Data is being sent to Elasticsearch, but I get the following error.message when I inspect a log entry in Kibana:
Provided Grok expressions do not match field value: [my.domain.com 192.168.1.1 - my_user [20/Apr/2019:20:21:37 +0000] \"GET /ocs/v2.php/apps/serverinfo/api/v1/info HTTP/2.0\" 200 764 \"-\" \"Go-http-client/2.0\"]
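For reference, the enabled modules can be listed from the host like this (container name as above):

docker exec filebeat filebeat modules list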
I've used the Grok Debugger in Kibana's Dev Tools to evaluate the Grok pattern I use for Nginx, which is defined in the following file:
/usr/share/filebeat/module/nginx/access/ingest/default.json
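Note that this file is only the on-disk source: as far as I can tell, Filebeat does not overwrite an ingest pipeline that already exists in Elasticsearch under the same ID, so the installed pipeline can diverge from the mounted file. The installed version can be fetched in the Kibana Dev Tools console; the ID follows the filebeat-&lt;version&gt;-&lt;module&gt;-&lt;fileset&gt;-&lt;name&gt; convention, so for 7.0.0 it should be:

GET _ingest/pipeline/filebeat-7.0.0-nginx-access-default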
My altered Nginx pattern works in the debugger. This is the content of the grok "patterns": [...] entry in my nginx access pipeline:
"\"%{IPORHOST:nginx.access.host} %{IPORHOST:nginx.access.remote_ip_list} - %{DATA:user.name} \\[%{HTTPDATE:nginx.access.time}\\] \"%{WORD:http.request.method} %{DATA:url.original} HTTP/%{NUMBER:http.version}\" %{NUMBER:http.response.status_code:long} %{NUMBER:http.response.body.bytes:long} \"%{DATA:http.request.referrer}\" \"%{DATA:user_agent.original}\"\""
The pattern I actually enter in the debugger, and which matches:
%{IPORHOST:nginx.access.host} %{IPORHOST:nginx.access.remote_ip_list} - %{DATA:user.name} \[%{HTTPDATE:nginx.access.time}\] \"%{WORD:http.request.method} %{DATA:url.original} HTTP/%{NUMBER:http.version}\" %{NUMBER:http.response.status_code:long} %{NUMBER:http.response.body.bytes:long} \"%{DATA:http.request.referrer}\" \"%{DATA:user_agent.original}\"
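To exercise the full installed pipeline rather than just the pattern, the simulate API can be fed the exact line from the error above (a sketch; this assumes the pipeline ID from the earlier snippet, and that the first grok processor reads the message field, as in the stock pipeline):

POST _ingest/pipeline/filebeat-7.0.0-nginx-access-default/_simulate
{
  "docs": [
    {
      "_source": {
        "message": "my.domain.com 192.168.1.1 - my_user [20/Apr/2019:20:21:37 +0000] \"GET /ocs/v2.php/apps/serverinfo/api/v1/info HTTP/2.0\" 200 764 \"-\" \"Go-http-client/2.0\""
      }
    }
  ]
}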
So what I'm looking for is how to debug my setup, or tips on what I'm doing wrong, so that the fields are properly "nginx-parsed" when they enter Elasticsearch.
Sorry for hijacking the thread, but I hope OP already found the answer.