Trouble with Filebeat Nginx module

Filebeat/ELK version 7.10.2

We're attempting to consume nginx-style Kong logs through Filebeat, running on the same machine as the Kong instance, using the nginx module.

The following is our nginx.yml config:

- module: nginx
  # Access logs
  access:
    enabled: true
    var.paths: ["/usr/local/kong/logs/access.log*"]
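A quick sanity check here (a sketch, assuming the filebeat binary is on the PATH and the default config locations are used) is to ask Filebeat itself to validate the configuration and confirm the module is actually enabled:

```shell
# Validate filebeat.yml and the module configs it references
filebeat test config -e

# List modules; nginx should appear under "Enabled"
filebeat modules list
```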

Our filebeat.yml looks like:

logging.level: debug

fields:
  env: "tst"

setup.kibana:
  host: ""

output.elasticsearch:
  hosts: [ "" ]
  username: "elastic"
  password: "sompassword"

  ssl.certificate_authorities: ["/etc/pki/tls/certs/ca-crt.crt"]
  ssl.certificate: "/etc/pki/tls/certs/beat-crt.crt"
  ssl.key: "/etc/pki/tls/certs/beat-key.key"

filebeat.config.modules.path: "/etc/filebeat/modules.d/*.yml"

What we're seeing is that when Filebeat starts up, it does a GET to check whether the pipeline exists:

GET /_ingest/pipeline/filebeat-7.10.2-2022.09.14-000001

and if not, it creates it with a PUT. It also appears to create the index and its alias, filebeat-7.10.2.
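To confirm which module pipelines actually got installed, we can query Elasticsearch directly (a sketch; the host placeholder, credentials, and pipeline name pattern are assumptions based on our setup above):

```shell
# List any Filebeat nginx ingest pipelines Elasticsearch knows about
curl -sk -u elastic:password \
  "https://<es-host>:9200/_ingest/pipeline/filebeat-7.10.2-nginx-*?pretty"
```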

However, when I go to create the index pattern, Kibana brings back the alias and the index, but then tells me the payload is too large. I worked around that by increasing server.maxPayloadBytes in kibana.yml, but even after that, when I attempt to look at the docs in Discover, I get a Bad Request error saying that all the shards have failed.
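To dig into the "all shards failed" error, we've been looking at the shard state directly (a sketch; host and credentials are placeholders as above):

```shell
# See which filebeat shards exist and whether any are unassigned
curl -sk -u elastic:password \
  "https://<es-host>:9200/_cat/shards/filebeat-*?v"

# Ask Elasticsearch to explain why a shard is unassigned, if one is
curl -sk -u elastic:password \
  "https://<es-host>:9200/_cluster/allocation/explain?pretty"
```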

I also cannot see any logs in the 'Logs' app, even though filebeat* is in the configuration for the log indices.

I noticed, after turning logging.level in Filebeat up to debug, that if the ELK stack had no pipeline, alias, or index, Filebeat sent a huge request to the service that appears to initialize the pipelines. Could this be the reason for the payload being too large and the shards failing?
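One thing we're considering (a sketch, assuming the standard Filebeat 7.x CLI) is running the setup phase once, up front, so the pipelines and index template are loaded before any events flow, rather than letting the first startup do both at once:

```shell
# Load only the ingest pipelines for the enabled modules
filebeat setup --pipelines --modules nginx

# Load the index template and the ILM policy / write alias
filebeat setup --index-management
```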

Any help would be greatly appreciated, as we've been looking at this for a number of days now.
