Unable to see nginx response time field in Kibana

Hello Team,
Our architecture is Beats --> Logstash --> Elasticsearch --> Kibana, and the whole ELK stack is on version 7.10.2.

We are sending the nginx access logs to Logstash using the Filebeat nginx module, and on the Logstash side we are using a Logstash pipeline for parsing.

Logstash Pipeline

We then changed the nginx log format to time_combined so that the nginx response time is included in the logs.
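For reference, a minimal sketch of what such a log format might look like in nginx.conf; the exact definition below is an assumption based on the sample log line further down (the standard combined format plus $request_time appended), not our verified production config:

# Hypothetical nginx.conf snippet: combined format with $request_time
# (request processing time in seconds) appended at the end of the line.
log_format time_combined '$remote_addr - $remote_user [$time_local] '
                         '"$request" $status $body_bytes_sent '
                         '"$http_referer" "$http_user_agent" $request_time';

access_log /var/log/nginx/access.log time_combined;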

We are getting the nginx response time in the logs and want to see the same on a Kibana dashboard.

We have added one field to the default Logstash pipeline for nginx and restarted the Logstash service, and the Grok Debugger shows that the syntax is fine. Below are the nginx log line and my grok pattern.

Nginx Log:

ffff:171.12.10.145 - - [31/May/2021:06:41:30 +0000] "PUT /api/v1/devices/ping.json HTTP/1.1" 200 20 "-" "okhttp/3.2.0" 0.032

Grok Pattern:

%{IPORHOST:[nginx][access][remote_ip]} - %{DATA:[nginx][access][user_name]} \[%{HTTPDATE:[nginx][access][time]}\] \"%{WORD:[nginx][access][method]} %{DATA:[nginx][access][url]} HTTP/%{NUMBER:[nginx][access][http_version]}\" %{NUMBER:[nginx][access][response_code]} %{NUMBER:[nginx][access][body_sent][bytes]} \"%{DATA:[nginx][access][referrer]}\" \"%{DATA:[nginx][access][agent]}\" %{NUMBER:[nginx][response][time]}
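For reference, a minimal sketch of how this pattern might sit inside a Logstash pipeline config; the filter structure, the date filter, and the :float type suffix on the response time are illustrative additions, not our exact config:

# Hypothetical Logstash filter sketch: grok the access line, then parse
# the timestamp; the :float suffix indexes the response time as a number.
filter {
  grok {
    match => {
      "message" => "%{IPORHOST:[nginx][access][remote_ip]} - %{DATA:[nginx][access][user_name]} \[%{HTTPDATE:[nginx][access][time]}\] \"%{WORD:[nginx][access][method]} %{DATA:[nginx][access][url]} HTTP/%{NUMBER:[nginx][access][http_version]}\" %{NUMBER:[nginx][access][response_code]} %{NUMBER:[nginx][access][body_sent][bytes]} \"%{DATA:[nginx][access][referrer]}\" \"%{DATA:[nginx][access][agent]}\" %{NUMBER:[nginx][response][time]:float}"
    }
  }
  date {
    match => [ "[nginx][access][time]", "dd/MMM/yyyy:HH:mm:ss Z" ]
  }
}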

But the nginx.response.time field is not visible in Kibana.

Can we make changes to the default Logstash pipeline for nginx, or is that not possible?

Can you please help me with this issue?

Thank You

Hello Team,
Do we also need to make the changes in the /usr/share/filebeat/module/nginx/access/ingest/pipeline.yml file?

I have made the changes, and the grok part now looks like this:

- grok:
    field: message
    patterns:
    - (%{NGINX_HOST} )?"?(?:%{NGINX_ADDRESS_LIST:nginx.access.remote_ip_list}|%{NOTSPACE:source.address})
      - (-|%{DATA:user.name}) \[%{HTTPDATE:nginx.access.time}\] "%{DATA:nginx.access.info}"
      %{NUMBER:http.response.status_code:long} %{NUMBER:http.response.body.bytes:long}
      "(-|%{DATA:http.request.referrer})" "(-|%{DATA:user_agent.original})" %{NUMBER:nginx.response.time:float}

But I am still not getting the nginx.response.time field in Kibana.

Do we need to load the ingest pipeline again after making changes in the /usr/share/filebeat/module/nginx/access/ingest/pipeline.yml file, for example with:

filebeat setup --pipelines --modules nginx

Thank You

Hello Team,

Can you please help me?

Thank You

@Tek_Chand I will try to take a look.

There are a couple of things here: either use Logstash parsing or the Filebeat module; using both is probably not a great approach.

I would recommend using the Filebeat modules approach.

I / we can show you how to use Logstash as a pass-through if you still want to use it.

To answer this question:

The answer is yes, and I would run:
filebeat setup -e

This does all of the setup, not just the pipelines (the rest of the setup is needed for the dashboards etc.).

Second, this is what I did.

You will probably need to clean up all your old filebeat indices.

I edited my

/usr/share/filebeat/module/nginx/access/ingest/pipeline.yml

This is my grok; I just added the response time at the end.

- grok:
    field: message
    patterns:
    - (%{NGINX_HOST} )?"?(?:%{NGINX_ADDRESS_LIST:nginx.access.remote_ip_list}|%{NOTSPACE:source.address})
      - (-|%{DATA:user.name}) \[%{HTTPDATE:nginx.access.time}\] "%{DATA:nginx.access.info}"
      %{NUMBER:http.response.status_code:long} %{NUMBER:http.response.body.bytes:long}
      "(-|%{DATA:http.request.referrer})" "(-|%{DATA:user_agent.original})" %{NUMBER:nginx.response.time:float}
    pattern_definitions:
      NGINX_HOST: (?:%{IP:destination.ip}|%{NGINX_NOTSEPARATOR:destination.domain})(:%{NUMBER:destination.port})?
      NGINX_NOTSEPARATOR: "[^\t ,:]+"
      NGINX_ADDRESS_LIST: (?:%{IP}|%{WORD})("?,?\s*(?:%{IP}|%{WORD}))*
    ignore_missing: true
Then reload the ingest pipeline with either:

filebeat setup --pipelines

or

filebeat setup -e

Then I ran several of your log lines from Filebeat to Elasticsearch, and they showed up in Discover and in search with the response time field.

GET filebeat-7.13.0-2021.06.02-000001/_search

The results look good to me:

{
  "took" : 5,
  "timed_out" : false,
  "_shards" : {
    "total" : 1,
    "successful" : 1,
    "skipped" : 0,
    "failed" : 0
  },
  "hits" : {
    "total" : {
      "value" : 4,
      "relation" : "eq"
    },
    "max_score" : 1.0,
    "hits" : [
      {
        "_index" : "filebeat-7.13.0-2021.06.02-000001",
        "_type" : "_doc",
        "_id" : "3bKxznkBX-iqkb9ifLWi",
        "_score" : 1.0,
        "_source" : {
          "container" : {
            "id" : "7.13.0"
          },
          "agent" : {
            "hostname" : "ceres-2.local",
            "name" : "ceres-2.local",
            "id" : "238d2135-8007-4394-81ed-9f6bc0404b81",
            "ephemeral_id" : "eb0cae88-4e0c-4afb-9eb4-fbff2c731116",
            "type" : "filebeat",
            "version" : "7.13.0"
          },
          "nginx" : {
            "access" : { },
            "response" : {
              "time" : 0.032
            }
          },
          "log" : {
            "file" : {
              "path" : "/Users/sbrown/workspace/elastic-install/7.13.0/filebeat-7.13.0-darwin-x86_64_mod/ngnix-discuss.log"
            },
            "offset" : 0
          },
          "source" : {
            "address" : "ffff:171.12.10.145"
          },
          "fileset" : {
            "name" : "access"
          },
          "url" : {
            "original" : "/api/v1/devices/ping.json"
          },
          "input" : {
            "type" : "log"
          },
          "@timestamp" : "2021-05-31T06:41:30.000Z",
          "ecs" : {
            "version" : "1.9.0"
          },
          "service" : {
            "type" : "nginx"
          },
          "host" : {
            "hostname" : "ceres-2.local",
            "os" : {
              "build" : "20F71",
              "kernel" : "20.5.0",
              "name" : "Mac OS X",
              "type" : "macos",
              "family" : "darwin",
              "version" : "10.16",
              "platform" : "darwin"
            },
            "ip" : [
              "fe80::61:c7fc:36f7:bf94",
              "192.168.2.205",
              "fe80::852:b5ee:c5b7:7ccb",
              "192.168.2.107",
              "fe80::9468:2cff:fe51:8711",
              "fe80::9468:2cff:fe51:8711",
              "fe80::74c7:8240:4766:aaf8",
              "fe80::4c2a:30f0:cff9:beeb",
              "fe80::aede:48ff:fe00:1122"
            ],
            "name" : "ceres-2.local",
            "id" : "CB562E90-69DE-5D41-AC64-4EEDC79D5CB0",
            "mac" : [
              "8c:85:90:ae:b0:b2",
              "82:de:c3:e6:d4:05",
              "82:de:c3:e6:d4:04",
              "82:de:c3:e6:d4:01",
              "82:de:c3:e6:d4:00",
              "a0:ce:c8:51:95:38",
              "82:de:c3:e6:d4:01",
              "0e:85:90:ae:b0:b2",
              "96:68:2c:51:87:11",
              "96:68:2c:51:87:11",
              "ac:de:48:00:11:22"
            ],
            "architecture" : "x86_64"
          },
          "http" : {
            "request" : {
              "method" : "PUT"
            },
            "response" : {
              "status_code" : 200,
              "body" : {
                "bytes" : 20
              }
            },
            "version" : "1.1"
          },
          "event" : {
            "ingested" : "2021-06-02T21:46:48.810524200Z",
            "timezone" : "-07:00",
            "created" : "2021-06-02T21:46:47.629Z",
            "kind" : "event",
            "module" : "nginx",
            "category" : [
              "web"
            ],
            "type" : [
              "access"
            ],
            "dataset" : "nginx.access",
            "outcome" : "success"
          },
          "user_agent" : {
            "original" : "okhttp/3.2.0",
            "name" : "okhttp",
            "device" : {
              "name" : "Other"
            },
            "version" : "3.2.0"
          }
        }
      },
      ...

Looks fine in Discover

So my suggestion is to get this working first:

nginx log -> Filebeat -> Elasticsearch

Then come back if you want to put Logstash in the middle; in that case you will just run it as a pass-through. Look at this thread here, and see the sketch after the diagram below:

nginx log -> Filebeat -> Logstash -> Elasticsearch
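A minimal pass-through sketch, roughly following the documented pattern for using Filebeat modules with Logstash; the port, hosts, and index settings below are assumptions to adapt, not the exact config from that thread. Filebeat sets [@metadata][pipeline], and Logstash simply forwards each event to the matching ingest pipeline in Elasticsearch:

# Hypothetical Logstash pass-through pipeline: accept Beats input and
# forward events untouched, so the module's ingest pipeline still does
# the parsing in Elasticsearch.
input {
  beats {
    port => 5044
  }
}

output {
  if [@metadata][pipeline] {
    elasticsearch {
      hosts => ["http://localhost:9200"]
      manage_template => false
      index => "%{[@metadata][beat]}-%{[@metadata][version]}"
      pipeline => "%{[@metadata][pipeline]}"
    }
  } else {
    # Events without pipeline metadata are indexed without an ingest pipeline.
    elasticsearch {
      hosts => ["http://localhost:9200"]
      manage_template => false
      index => "%{[@metadata][beat]}-%{[@metadata][version]}"
    }
  }
}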

@stephenb, thank you for your help.

Everything was in place on our end, but earlier we were running the command below to load the ingest pipeline for the nginx module, and it was not working:

filebeat setup --pipelines --modules nginx

Now we used the command you suggested to load the pipeline:

filebeat setup -e

And it's working like a charm, even without deleting the old filebeat indices.

We are using Logstash as well, and we added the field nginx.response.time to the grok pattern for Logstash shown in the first post.

Thank you once again :slight_smile:
