@Tek_Chand I will try to take a look.
There are a couple of things here: either use Logstash parsing or the Filebeat module; using both is probably not a great approach.
I would recommend using the Filebeat module approach.
I / we can show you how to use Logstash as a pass-through if you still want to use it.
To answer your question: yes, and I would run

```sh
filebeat setup -e
```

This does all the setup, not just the pipeline (the full setup is needed to use the dashboards etc.).
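For context, the usual module flow is just a few commands; a minimal sketch, assuming the nginx module and Filebeat writing straight to Elasticsearch:

```sh
# Enable the nginx module (activates modules.d/nginx.yml)
filebeat modules enable nginx
# Load the index template, ILM policy, ingest pipelines, and dashboards
filebeat setup -e
# Start shipping logs, logging to stderr
filebeat -e
```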
Second, this is what I did.
You will probably need to clean up all your old Filebeat indices first.
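If you want to clear them out from Kibana Dev Tools, something like the following works; a minimal sketch, and the index pattern is an assumption, so adjust it to whatever your old indices are called (this permanently deletes data):

```
# Hypothetical pattern; match it to your old Filebeat indices before running
DELETE filebeat-7.13.0-*
```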
I edited my

`/usr/share/filebeat/module/nginx/access/ingest/pipeline.yml`

This is my grok; I just added the response time at the end of the pattern.
```yaml
- grok:
    field: message
    patterns:
    - (%{NGINX_HOST} )?"?(?:%{NGINX_ADDRESS_LIST:nginx.access.remote_ip_list}|%{NOTSPACE:source.address})
      - (-|%{DATA:user.name}) \[%{HTTPDATE:nginx.access.time}\] "%{DATA:nginx.access.info}"
      %{NUMBER:http.response.status_code:long} %{NUMBER:http.response.body.bytes:long}
      "(-|%{DATA:http.request.referrer})" "(-|%{DATA:user_agent.original})" %{NUMBER:nginx.response.time:float}
    pattern_definitions:
      NGINX_HOST: (?:%{IP:destination.ip}|%{NGINX_NOTSEPARATOR:destination.domain})(:%{NUMBER:destination.port})?
      NGINX_NOTSEPARATOR: "[^\t ,:]+"
      NGINX_ADDRESS_LIST: (?:%{IP}|%{WORD})("?,?\s*(?:%{IP}|%{WORD}))*
    ignore_missing: true
```
Then reload the pipelines with

```sh
filebeat setup --pipelines
```

or

```sh
filebeat setup -e
```
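If you want to check the edited pipeline before shipping anything, you can exercise it with the ingest `_simulate` API. This is a rough sketch: the pipeline name follows Filebeat's usual `filebeat-<version>-<module>-<fileset>-pipeline` convention, but confirm it with `GET _ingest/pipeline/filebeat-*`, and the log line is a reconstruction shaped like your sample, not an exact line:

```
# Pipeline name is an assumption; list your pipelines first to confirm it
POST _ingest/pipeline/filebeat-7.13.0-nginx-access-pipeline/_simulate
{
  "docs": [
    {
      "_source": {
        "message": "171.12.10.145 - - [31/May/2021:06:41:30 +0000] \"PUT /api/v1/devices/ping.json HTTP/1.1\" 200 20 \"-\" \"okhttp/3.2.0\" 0.032"
      }
    }
  ]
}
```

If the grok matches, the simulated document should come back with `nginx.response.time: 0.032`.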
Then I ran several of your log lines from Filebeat to Elasticsearch, and they showed up in Discover and in search with the response time field.

```
GET filebeat-7.13.0-2021.06.02-000001/_search
```

The results look good to me.
```json
{
  "took" : 5,
  "timed_out" : false,
  "_shards" : {
    "total" : 1,
    "successful" : 1,
    "skipped" : 0,
    "failed" : 0
  },
  "hits" : {
    "total" : {
      "value" : 4,
      "relation" : "eq"
    },
    "max_score" : 1.0,
    "hits" : [
      {
        "_index" : "filebeat-7.13.0-2021.06.02-000001",
        "_type" : "_doc",
        "_id" : "3bKxznkBX-iqkb9ifLWi",
        "_score" : 1.0,
        "_source" : {
          "container" : {
            "id" : "7.13.0"
          },
          "agent" : {
            "hostname" : "ceres-2.local",
            "name" : "ceres-2.local",
            "id" : "238d2135-8007-4394-81ed-9f6bc0404b81",
            "ephemeral_id" : "eb0cae88-4e0c-4afb-9eb4-fbff2c731116",
            "type" : "filebeat",
            "version" : "7.13.0"
          },
          "nginx" : {
            "access" : { },
            "response" : {
              "time" : 0.032
            }
          },
          "log" : {
            "file" : {
              "path" : "/Users/sbrown/workspace/elastic-install/7.13.0/filebeat-7.13.0-darwin-x86_64_mod/ngnix-discuss.log"
            },
            "offset" : 0
          },
          "source" : {
            "address" : "ffff:171.12.10.145"
          },
          "fileset" : {
            "name" : "access"
          },
          "url" : {
            "original" : "/api/v1/devices/ping.json"
          },
          "input" : {
            "type" : "log"
          },
          "@timestamp" : "2021-05-31T06:41:30.000Z",
          "ecs" : {
            "version" : "1.9.0"
          },
          "service" : {
            "type" : "nginx"
          },
          "host" : {
            "hostname" : "ceres-2.local",
            "os" : {
              "build" : "20F71",
              "kernel" : "20.5.0",
              "name" : "Mac OS X",
              "type" : "macos",
              "family" : "darwin",
              "version" : "10.16",
              "platform" : "darwin"
            },
            "ip" : [
              "fe80::61:c7fc:36f7:bf94",
              "192.168.2.205",
              "fe80::852:b5ee:c5b7:7ccb",
              "192.168.2.107",
              "fe80::9468:2cff:fe51:8711",
              "fe80::9468:2cff:fe51:8711",
              "fe80::74c7:8240:4766:aaf8",
              "fe80::4c2a:30f0:cff9:beeb",
              "fe80::aede:48ff:fe00:1122"
            ],
            "name" : "ceres-2.local",
            "id" : "CB562E90-69DE-5D41-AC64-4EEDC79D5CB0",
            "mac" : [
              "8c:85:90:ae:b0:b2",
              "82:de:c3:e6:d4:05",
              "82:de:c3:e6:d4:04",
              "82:de:c3:e6:d4:01",
              "82:de:c3:e6:d4:00",
              "a0:ce:c8:51:95:38",
              "82:de:c3:e6:d4:01",
              "0e:85:90:ae:b0:b2",
              "96:68:2c:51:87:11",
              "96:68:2c:51:87:11",
              "ac:de:48:00:11:22"
            ],
            "architecture" : "x86_64"
          },
          "http" : {
            "request" : {
              "method" : "PUT"
            },
            "response" : {
              "status_code" : 200,
              "body" : {
                "bytes" : 20
              }
            },
            "version" : "1.1"
          },
          "event" : {
            "ingested" : "2021-06-02T21:46:48.810524200Z",
            "timezone" : "-07:00",
            "created" : "2021-06-02T21:46:47.629Z",
            "kind" : "event",
            "module" : "nginx",
            "category" : [
              "web"
            ],
            "type" : [
              "access"
            ],
            "dataset" : "nginx.access",
            "outcome" : "success"
          },
          "user_agent" : {
            "original" : "okhttp/3.2.0",
            "name" : "okhttp",
            "device" : {
              "name" : "Other"
            },
            "version" : "3.2.0"
          }
        }
      },
```
Looks fine in Discover
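If you want to confirm in one query that documents are getting the new field, an `exists` filter works; a minimal sketch (the index pattern is an assumption, widen or narrow it as needed):

```
GET filebeat-7.13.0-*/_search
{
  "query": {
    "exists": {
      "field": "nginx.response.time"
    }
  }
}
```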
So my suggestion is to get this working first:

nginx log -> Filebeat -> Elasticsearch

Then come back if you want to put Logstash in the middle; in that case you will just run it as a pass-through (see the sketch below):

nginx log -> Filebeat -> Logstash -> Elasticsearch
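For reference, a pass-through Logstash pipeline is small; this is a minimal sketch, assuming Filebeat's output points at Logstash on port 5044 and Elasticsearch is at localhost:9200, using the `@metadata` fields Filebeat ships to route events to the usual index and ingest pipeline:

```conf
input {
  beats {
    # Filebeat's output.logstash should point at this port
    port => 5044
  }
}

output {
  elasticsearch {
    # Assumed address; change to your cluster
    hosts => ["http://localhost:9200"]
    manage_template => false
    # Keep the index name Filebeat would have used on its own
    index => "%{[@metadata][beat]}-%{[@metadata][version]}"
    # Hand each event to the module's ingest pipeline in Elasticsearch
    pipeline => "%{[@metadata][pipeline]}"
  }
}
```

With this in place, Logstash does no parsing of its own, and the module's ingest pipeline (including the response time grok above) still does all the work in Elasticsearch.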