Nginx Module Dashboards "No data to display"

Hello

Newbie here. I'm running the ELK stack with all components at version 7.0.1 in an Ubuntu 18.04 environment.

My environment has nginx running in a Docker container. My filebeat process runs on the Docker host as a service.

The filebeat process ships all the nginx log messages to Elasticsearch, and they are visible on the Discover page in Kibana.

My issue is that the nginx module dashboards display no data.

My filebeat.yml:

filebeat.autodiscover:
  providers:
    - type: docker
      templates:
        - condition:
            contains:
              docker.container.image: <myrepo>/nginx-master
          config:
            - type: docker
              containers.ids:
                - "${data.docker.container.id}"
            - module: nginx
              access:
                enabled: true
              error:
                enabled: true
    - type: docker
      templates:
        - condition:
            contains:
              docker.container.image: <myrepo>/angular:develop
          config:
            - type: docker
              containers.ids:
                - "${data.docker.container.id}"
            - module: nginx
              access:
                enabled: true
              error:
                enabled: true
    - type: docker
      templates:
        - condition:
            contains:
              docker.container.image: <myrepo>/other-ui:develop
          config:
            - type: docker
              containers.ids:
                - "${data.docker.container.id}"
            - module: nginx
              access:
                enabled: true
              error:
                enabled: true
filebeat.inputs:
- type: log
  paths:
    - "/frontier/*.log"
filebeat.config.modules:
  path: ${path.config}/modules.d/*.yml
  reload.enabled: false
setup.kibana:
  host: "<my-kibana-host>:5601"
output.elasticsearch:
  hosts: "<my-elasticsearch-host>:9200"
processors:
  - add_host_metadata: ~
  - add_cloud_metadata: ~
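
For what it's worth, the three templates above differ only in the image name, so if I'm reading the autodiscover condition syntax correctly they could probably be collapsed into a single template with an or condition. An untested sketch:

filebeat.autodiscover:
  providers:
    - type: docker
      templates:
        - condition:
            or:
              - contains:
                  docker.container.image: <myrepo>/nginx-master
              - contains:
                  docker.container.image: <myrepo>/angular:develop
              - contains:
                  docker.container.image: <myrepo>/other-ui:develop
          config:
            - type: docker
              containers.ids:
                - "${data.docker.container.id}"
            - module: nginx
              access:
                enabled: true
              error:
                enabled: true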

So I'm not sure where this is falling apart. Does my nginx log have to be in some specific format?

Hi,

If you have data in Discover, at least your data are indexed.
Check the filebeat logs to see if you have parsing errors.
If so, you can use the simulate pipeline API to debug, and adjust the pipeline so it matches your logs:
https://www.elastic.co/guide/en/elasticsearch/reference/7.0/simulate-pipeline-api.html
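
For example, something like this (the pipeline name is based on your Filebeat version, and the log line is just a placeholder; adjust both to your setup):

POST _ingest/pipeline/filebeat-7.0.1-nginx-access-default/_simulate
{
  "docs": [
    { "_source": { "message": "<one raw nginx access log line>" } }
  ]
}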

Hope it helps you,

Gabriel

Thanks, this was helpful. Here is what I found.

First, I checked the filebeat logs on the machine where the nginx Docker container is running and found no parsing errors (at least not since I got the nginx logs to show up in Discover).

Next, I followed the simulate instructions. I grabbed a message from the nginx Docker container's log and used it as the "docs" argument on the Kibana Dev Tools page:

POST _ingest/pipeline/filebeat-7.0.1-nginx-access-default/_simulate
{
  "docs": [
    {
      "_source": {
        "log": "nnn.nnn.nnn.nnn - - [10/May/2019:14:45:53 +0000] \"GET /quote/api/application/list/person/nnnn?page=0\u0026size=5 HTTP/2.0\" 200 685 REF: \"https://my.company.com/quote/enrollment\" XFF: \"-\" RIP: \"-\" XFP: \"-\"\n",
        "stream": "stdout",
        "time": "2019-05-10T14:45:53.0611734Z"
      }
    }
  ]
}

The result:

{
  "docs" : [
    {
      "doc" : {
        "_index" : "_index",
        "_type" : "_doc",
        "_id" : "_id",
        "_source" : {
          "log" : """
192.168.117.152 - - [10/May/2019:14:45:53 +0000] "GET /quote/api/application/list/person/nnnn?page=0&size=5 HTTP/2.0" 200 685 REF: "https://my.company.com/quote/enrollment" XFF: "-" RIP: "-" XFP: "-"

""",
          "stream" : "stdout",
          "time" : "2019-05-10T14:45:53.0611734Z",
          "error" : {
            "message" : "field [nginx] not present as part of path [nginx.access.info]"
          }
        },
        "_ingest" : {
          "timestamp" : "2019-05-10T16:29:24.510006Z"
        }
      }
    }
  ]
}

So it looks like I do have a parsing error, but it does not show in the logs.
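
To see what the pipeline actually expects (the patterns field I ask about below), its definition can be pulled with the get pipeline API:

GET _ingest/pipeline/filebeat-7.0.1-nginx-access-default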

Questions:

  • What I put in docs._source above was cut and pasted directly from the nginx Docker container's log file (/var/lib/docker/containers/<container-id>/<container-id>-json.log). I'd like to verify that this is exactly what is getting passed to Elasticsearch. Is there a debug setting in filebeat to see what is getting sent over the wire? (See the sketch after these questions.)
  • Must the nginx log format match what's in the patterns field of filebeat-7.0.1-nginx-access-default (a dash after remote_ip_list, a pipe after user.name)?
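
For the first question, the closest thing I've found so far is Filebeat's publish debug selector, which appears to log each event as it is published (I'm not sure this is the canonical way to see exactly what goes over the wire):

filebeat -e -d "publish"

or the equivalent in filebeat.yml:

logging.level: debug
logging.selectors: ["publish"]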

Okay

So I altered my nginx log format, and now when I paste the message field into the Dev Tools page:

POST _ingest/pipeline/filebeat-7.0.1-nginx-access-default/_simulate
{
  "docs": [
    {
      "_source": {
        "message": "nnn.nnn.nnn.nnn - - [15/May/2019:14:39:02 +0000] \"GET /quote/api/billing-profile/person/2 HTTP/2.0\" 200 444 \"https://mycompany.com/quote/enrollment/application/estate/2344\" \"Mozilla/5.0 (Macintosh; Intel Mac OS X 10.14; rv:66.0) Gecko/20100101 Firefox/66.0\""
      }
    }
  ]
}

I get success:

{
  "docs" : [
    {
      "doc" : {
        "_index" : "_index",
        "_type" : "_doc",
        "_id" : "_id",
        "_source" : {
          "nginx" : {
            "access" : {
              "remote_ip_list" : [
                "nnn.nnn.nnn.nnn"
              ],
              "time" : "15/May/2019:14:39:02 +0000"
            }
          },
          "http" : {
            "request" : {
              "method" : "GET",
              "referrer" : "https://mycompany.com/quote/enrollment/application/estate/2344"
            },
...

My problem is that the nginx filebeat dashboards still show no data.

Gabriel had asked if there were parsing errors in the filebeat log. I turned on debugging (filebeat -d "*") and I see no errors. But is that where I should be looking? Is filebeat-7.0.1-nginx-access-default processed by filebeat or by elasticsearch?

If the answer is elasticsearch (and even if it's not), how do I debug elasticsearch? I've found instructions on how to set the log level:

PUT /_cluster/settings
{
  "transient": {
    "logger.org.elasticsearch.transport": "trace"
  }
}

But how do I know which Java package to set the debug level on?
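
My best guess for ingest pipeline problems would be something like the org.elasticsearch.ingest package, but I haven't confirmed that's the right one:

PUT /_cluster/settings
{
  "transient": {
    "logger.org.elasticsearch.ingest": "debug"
  }
}

In the meantime, to at least confirm whether the pipeline is being applied at index time, I'm checking whether stored documents contain the parsed fields (index pattern assumed, adjust to yours):

GET filebeat-*/_search
{
  "size": 1,
  "query": { "exists": { "field": "nginx.access.remote_ip_list" } }
}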

Hi,

After updating your nginx log format, did you check in Elasticsearch whether the indexed documents now look like the result you got from simulate?

If the data in Elasticsearch are correct but the dashboards still show nothing, check that the visualizations are linked to the correct index pattern. Also check that your index has the pipeline attached.

As for the logs: now that your data are correctly formatted, you should have data in Elasticsearch, so you can check there to confirm that all is OK.
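
For example (index name assumed, adjust to yours), this shows whether your filebeat indices exist and are receiving documents:

GET _cat/indices/filebeat-*?v

and in Kibana, under Management > Index Patterns, you can confirm that the filebeat-* index pattern the dashboards are based on actually exists.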
