Log fields are not parsing as expected using grok filter

Hi All,

I am using the configuration below and expecting the logs to be parsed into individual fields.

Here is the pipeline.conf file:

input {
    beats {
        port => "5044"
    }
}
filter {
    grok {
        match => { "message" => "%{YEAR}-%{MONTHNUM}-%{MONTHDAY}[T ]%{HOUR}:%{MINUTE}(?::?%{SECOND})\| %{USERNAME:exchangeId}\| %{DATA:trackingId}\| %{NUMBER:RoundTrip:int}%{SPACE}ms\| %{NUMBER:ProxyRoundTrip:int}%{SPACE}ms\| %{NUMBER:UserInfoRoundTrip:int}%{SPACE}ms\| %{DATA:Resource}\| %{DATA:subject}\| %{DATA:authmech}\| %{DATA:scopes}\| %{IPV4:Client}\| %{WORD:method}\| %{DATA:Request_URI}\| %{INT:response_code}\| %{DATA:failedRuleType}\| %{DATA:failedRuleName}\| %{DATA:APP_Name}\| %{DATA:Resource_Name}\| %{DATA:Path_Prefix}" }
    }
    geoip {
        source => "Client"
    }
}
output {
    elasticsearch {
        hosts => [ "localhost:9200" ]
    }
}

The actual log lines look like this:

2021-06-25T08:51:38,788| ETxatokABfg2U2wVXx2ww| atid:1b9mgcaaCgpwrcgE1FLBAiF88mk| 270 ms| 212 ms| 0 ms| api.dev.only.bfco.io [] / /*:443| | OAuth| || POST| /piie-aiip/v5/aiip/account-success-constant| 201| | | Ba API| Root Resource| /* 

2021-06-25T13:02:41,254| 3rURHHJEh936dQEBMx-6yA| atid:x6UY50zGPx2L_qZmFm251FkQDiU| 160 ms| 8 ms| 0 ms| api.dev.only.bfco.io [] / /*:443| | OAuth| || GET| /piie-aiip/v5/aiip/account-success-constant/97e7a7b9-3e60-4508-a35b-d0a01ba902bb| 200| | | Ba API| Root Resource| /* 

2021-06-25T13:03:51,257| P0nH46kGVFnhZZ5iC6ZU1g| atid:y7UX49zBPy1P_wXnFm251FkQDiU| 39 ms| 2 ms| 0 ms| api.dev.only.bfco.io [] / /*:443| | OAuth| || GET| /piie-aiip/v5/aiip/account-success-constant-asu| 400| | | Ba API| Root Resource| /* 

But when I check in Kibana, the message field appears as one single field (below) and is not separated into the individual fields defined in pipeline.conf.

Below is from the Expanded document - Table view.

message         2021-06-25T13:01:03,478| XQvIx-qYtp2lP0tLcr53pQ| 
                atid:y7UX99zGPx2L_qZmFm101EkQBiU| 180 ms| 10 ms| 0 ms| api.dev.only.bfco.io 
                [] / /*:443| | OAuth| || POST| /piie-aiip/v5/aiip/account-success- 
                constant| 201| | | Ba API| Root Resource| /* 

I want to run queries on one of the fields (%{NUMBER:RoundTrip:int}).

When I check this in the Grok Debugger, it shows the expected results: I paste one of the log lines above as the sample data, apply the same grok pattern, and it produces the output below.

{
  "response_code": "201",
  "method": "POST",
  "subject": "",
  "Request_URI": "/piie-aiip/v5/aiip/account-success-constant",
  "Resource": "api.dev.only.bfco.io [] / /*:443",
  "UserInfoRoundTrip": 0,
  "APP_Name": "Ba API",
  "authmech": "OAuth",
  "Resource_Name": "Root Resource",
  "failedRuleName": "",
  "exchangeId": "XEyIx-yYtp2lP0kLcr08kP",
  "RoundTrip": 180,
  "ProxyRoundTrip": 10,
  "scopes": "",
  "Client": "",
  "Path_Prefix": "",
  "failedRuleType": "",
  "trackingId": "atid:y8YX50zGPx3L_qZmg981FkEBiU"
}

Q1. Can someone please advise what is wrong in pipeline.conf that prevents the message field from being split into the individual fields?

Q2. Also, I am not sure why the numeric fields show up as long in Index Patterns when they are cast as int in the pattern.

Thanks in Advance.

When I use that grok filter it does create all the fields. The [message] field will not be modified; grok creates the additional fields alongside it.
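As a side note, if you do not want to keep the raw line once parsing succeeds, grok supports the common remove_field option, which is only applied when the match succeeds. A sketch (the pattern placeholder stands in for the full pattern from your pipeline.conf):

```
filter {
    grok {
        match => { "message" => "...your full pattern here..." }
        # drop the raw line only on a successful match;
        # failed lines keep [message] and get a _grokparsefailure tag
        remove_field => [ "message" ]
    }
}
```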

Thank you for the prompt response. I missed that.
Any idea about Q2?

The rules for dynamic mapping turn an integer into a long. If you want a 32-bit integer rather than a 64-bit long you would have to set the mapping with an index template.
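For reference, a minimal index template sketch; the template name and index pattern here are assumptions, so adjust them to whatever your Elasticsearch output actually writes to:

```
PUT _index_template/roundtrip-ints
{
  "index_patterns": [ "logstash-*" ],
  "template": {
    "mappings": {
      "properties": {
        "RoundTrip":         { "type": "integer" },
        "ProxyRoundTrip":    { "type": "integer" },
        "UserInfoRoundTrip": { "type": "integer" }
      }
    }
  }
}
```

Explicit mappings in the template take precedence over dynamic mapping, so new indices matching the pattern will store these fields as 32-bit integers.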

Ok. Thanks.

I thought whatever we set via pipeline.conf was an explicit data-type setting that would override Elasticsearch's dynamic mapping, which is why I asked why the fields turned into long even after explicitly specifying int.

Logstash has int and float. Elasticsearch has long, integer, byte, float, double. So someone had to decide what to map an int to, and since a Logstash int might not fit in an Elasticsearch integer, it makes more sense to map it to long.
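A quick illustration of the range mismatch, in plain Python just to show the bounds involved (the example value is made up):

```python
# Elasticsearch "integer" is a signed 32-bit value; "long" is signed 64-bit.
INT_MAX = 2**31 - 1    # 2147483647
LONG_MAX = 2**63 - 1   # 9223372036854775807

# A value Logstash would happily emit as an int...
big_round_trip = 3_000_000_000

# ...overflows an Elasticsearch integer but fits comfortably in a long,
# which is why dynamic mapping picks long as the safe default.
print(big_round_trip <= INT_MAX)   # False
print(big_round_trip <= LONG_MAX)  # True
```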

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.