Help with Logstash filter

Hi,
I'm new to ELK and Grok. I'm using ES 7.7 and Logstash 7.9. Application logs flow from Filebeat > Logstash > ES, and they are being indexed into ES and are viewable in Kibana. The issue is that my Logstash filter is not creating the fields that my grok pattern parses out of the message. Any help is appreciated.
Here is my LS filter:

filter {
  if [log.file.path] =~ /fluidtopics/ {
    grok {
      match => { "message" => "%{TIMESTAMP_ISO8601:date} %{LOGLEVEL:loglevel}\s+ %{DATA:session} %{DATA:tenant} - %{GREEDYDATA:logger}" }
    }
  }
}

Here is a log event:

2020-10-12 13:37:11,531 DEBUG   no-session unknown-tenant - net.antidot.fluidtopics.server.spring.filters.RouterIOLoggerFilter - Outgoing response: GET /70055/rc/api/health (18ms) -> 200

Here is the document view from Kibana:

{
  "_index": "ct-applogs-2020.10.13",
  "_type": "_doc",
  "_id": "Z6tMH3UB_FGgcNWrW-ET",
  "_version": 1,
  "_score": null,
  "_source": {
    "cloud": {
      "account": {
        "id": "774962502422"
      },
      "instance": {
        "id": "i-08b9707a1facfffe0"
      },
      "region": "us-west-2",
      "provider": "aws",
      "machine": {
        "type": "r4.large"
      },
      "availability_zone": "us-west-2a",
      "image": {
        "id": "ami-f191c989"
      }
    },
    "tags": [
      "fluidtopics",
      "beats_input_codec_plain_applied"
    ],
    "ecs": {
      "version": "1.5.0"
    },
    "@timestamp": "2020-10-13T00:11:38.027Z",
    "log": {
      "offset": 29359345,
      "file": {
        "path": "/usr/local/afs7/logs/daemon/fluidtopics.log"
      }
    },
    "@version": "1",
    "message": "2020-10-12 17:11:31,351 DEBUG   no-session unknown-tenant - net.antidot.fluidtopics.server.spring.filters.RouterIOLoggerFilter - Outgoing response: GET /70055/rc/api/health (18ms) -> 200 ",
    "host": {
      "ip": [
        "10.204.17.7"
      ],
      "name": "doc-pre-front01",
      "os": {
        "name": "Red Hat Enterprise Linux Server",
        "kernel": "3.10.0-1127.19.1.el7.x86_64",
        "version": "7.8 (Maipo)",
        "platform": "rhel",
        "family": "redhat",
        "codename": "Maipo"
      },
      "architecture": "x86_64",
      "mac": [
        "02:6b:7b:10:e2:9a"
      ],
      "containerized": false,
      "hostname": "doc-pre-front01",
      "id": "575e1f0aeee74783a22019de88a2666f"
    },
    "agent": {
      "name": "doc-pre-front01",
      "version": "7.9.2",
      "type": "filebeat",
      "hostname": "doc-pre-front01",
      "ephemeral_id": "c51f71bd-3fbc-4fbf-89ab-c93cc6dd9660",
      "id": "24b9e528-f86d-45ce-b776-c190c0e7be81"
    },
    "input": {
      "type": "log"
    }
  },
  "fields": {
    "@timestamp": [
      "2020-10-13T00:11:38.027Z"
    ]
  },
  "sort": [
    1602547898027
  ]
}

In Logstash you have to refer to a field inside a field inside a field as

if [log][file][path] =~ ...

Logstash supports periods in field names, so it needs a different syntax to refer to nested fields.
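
Applied to the filter above, a sketch of the fix would look like this (the grok pattern is unchanged; only the conditional switches to bracket notation):

filter {
  if [log][file][path] =~ /fluidtopics/ {
    grok {
      match => { "message" => "%{TIMESTAMP_ISO8601:date} %{LOGLEVEL:loglevel}\s+ %{DATA:session} %{DATA:tenant} - %{GREEDYDATA:logger}" }
    }
  }
}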

Elasticsearch initially supported periods in field names, then stopped doing so (2.x?) because of the ambiguity they introduce. Is log.file.path a reference to the path field inside a field called log.file, or a reference to the file.path field inside a field called log?

In Logstash, [log][file.path] and [log.file][path] are clearly different things. In Kibana I am not sure how folks tell them apart.
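
To make that concrete, here is a hypothetical mutate snippet (field names and values invented purely for illustration) that creates the two different structures:

filter {
  mutate {
    add_field => {
      "[log][file.path]" => "a"
      "[log.file][path]" => "b"
    }
  }
}

The first adds a field named file.path (literal period and all) nested inside the log object; the second adds a field named path inside a top-level field whose name is literally log.file.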

Then Elasticsearch started supporting them again (5.x?). I think that was because they improved the parsers enough to disambiguate those references, but I could be wrong.

Thank you @Badger! This fixed the issue!
