Filebeat 6.6 system module fields not being exported (or parsed)

Hi,

I have installed Filebeat and enabled the system module. I have also loaded the ingest pipelines manually, since the Filebeat output is sent to Logstash and from there to ES. The system dashboards were working earlier but have since stopped: the message field is no longer being parsed into separate fields.
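
For reference, with the output pointed at Logstash, the pipelines have to be loaded by running setup against ES directly; a sketch of the kind of command involved, assuming the setup command's --pipelines flag is available in this version, with <eshost> as a placeholder for the actual Elasticsearch host:

filebeat setup --pipelines --modules system -E output.logstash.enabled=false -E 'output.elasticsearch.hosts=["<eshost>:9200"]'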

Here are the before (working) and after (not working) JSON documents.

When it was working:

{
  "_index": "filebeat-6.6.0-2019.02.19",
  "_type": "doc",
  "_id": "AChrBGkBWUhMufoBrD1g",
  "_version": 1,
  "_score": null,
  "_source": {
    "offset": 67858,
    "log": {
      "file": {
        "path": "/var/log/auth.log"
      }
    },
    "prospector": {
      "type": "log"
    },
    "source": "/var/log/auth.log",
    "fileset": {
      "module": "system",
      "name": "auth"
    },
    "input": {
      "type": "log"
    },
    "@timestamp": "2019-02-19T06:22:28.000Z",
    "system": {
      "auth": {
        "hostname": "ip-xxxxxxxxxxxx",
        "sudo": {
          "tty": "pts/1",
          "pwd": "/home/ubuntu",
          "user": "root",
          "command": "/bin/echo Hay! 4.0"
        },
        "user": "ubuntu",
        "timestamp": "Feb 19 06:22:28"
      }
    },
    "meta": {
      "cloud": {
        "machine_type": "t2.micro",
        "availability_zone": "ap-south-1a",
        "instance_id": "i-xxxxxxxxxxxxxx",
        "provider": "ec2",
        "region": "ap-south-1"
      }
    },
    "host": {
      "os": {
        "codename": "bionic",
        "name": "Ubuntu",
        "family": "debian",
        "version": "18.04.1 LTS (Bionic Beaver)",
        "platform": "ubuntu"
      },
      "containerized": false,
      "name": "ip-xxxxxxxxxxxxxx",
      "id": "919577b5e29b45cdb2d",
      "architecture": "x86_64"
    },
    "beat": {
      "hostname": "ip-xxxxxxxxxxxxxxxxx",
      "name": "ip-xxxxxxxxxxxxxxxxx",
      "version": "6.6.0"
    },
    "event": {
      "dataset": "system.auth"
    }
  },
  "fields": {
    "@timestamp": [
      "2019-02-19T06:22:28.000Z"
    ]
  },
  "sort": [
    1550557348000
  ]
}

When it stopped working:

{
  "_index": "filebeat-6.6.0-2019.02.21",
  "_type": "doc",
  "_id": "Fd5nD2kBA9gNjQYzof-z",
  "_version": 1,
  "_score": null,
  "_source": {
    "message": "Feb 21 09:33:55 ip-172-31-29-94 sudo:     root : TTY=pts/1 ; PWD=/var/log ; USER=root ; COMMAND=/bin/echo Hay version 8.0!",
    "input": {
      "type": "log"
    },
    "log": {
      "file": {
        "path": "/var/log/auth.log"
      }
    },
    "beat": {
      "name": "ip-xxxxxxxxxxxxx",
      "version": "6.6.0",
      "hostname": "ip-xxxxxxxxxxxxx"
    },
    "host": {
      "os": {
        "name": "Ubuntu",
        "version": "18.04.1 LTS (Bionic Beaver)",
        "codename": "bionic",
        "platform": "ubuntu",
        "family": "debian"
      },
      "name": "ip-xxxxxxxxxxxxx",
      "containerized": false,
      "id": "919577b5e29b45c360264e490",
      "architecture": "x86_64"
    },
    "tags": [
      "beats_input_codec_plain_applied"
    ],
    "@version": "1",
    "meta": {
      "cloud": {
        "instance_id": "i-05014b6e3321",
        "machine_type": "t2.micro",
        "region": "ap-south-1",
        "provider": "ec2",
        "availability_zone": "ap-south-1a"
      }
    },
    "fileset": {
      "module": "system",
      "name": "auth"
    },
    "event": {
      "dataset": "system.auth"
    },
    "offset": 103306,
    "prospector": {
      "type": "log"
    },
    "@timestamp": "2019-02-21T09:34:02.590Z",
    "source": "/var/log/auth.log"
  },
  "fields": {
    "@timestamp": [
      "2019-02-21T09:34:02.590Z"
    ]
  },
  "highlight": {
    "message": [
      "Feb 21 09:33:55 ip-172-31-29-94 sudo:     root : TTY=pts/1 ; PWD=/var/log ; USER=root ; COMMAND=/bin/@kibana-highlighted-field@echo@/kibana-highlighted-field@ Hay version 8.0!"
    ]
  },
  "sort": [
    1550741642590
  ]
}

Apart from the message string not being parsed, another difference I see is that the second JSON has a tag called "beats_input_codec_plain_applied".

Thanks,
Chris

Could you please share your Logstash configuration, formatted using </>? Also, a few example logs that can be parsed and a few that cannot would be helpful.

Chris,

I just spent a lot of time trying to figure out basically the same issue on 6.6. After a lot of fruitless troubleshooting, I finally realized the issues plaguing me came down to a couple of things:

  1. Kibana was not on the same version as the rest of the stack (6.4.2 instead of 6.6 like the rest).
  2. Index templates were not set up/loaded correctly. Make sure the field mappings are present for the corresponding template (see the check right after this list). You can view a list of your templates with GET /_cat/templates?v&s=name.
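
To verify the mappings themselves, you can fetch the template directly; a minimal check, assuming the default Filebeat template name:

GET _template/filebeat-6.6.0

The response should include a mappings section containing the module fields (e.g. system.auth.*).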

You need to make sure the templates are pushed to ES, using the following Filebeat settings:

Please read up on these settings before you deploy them. You risk overwriting your existing index templates, field mappings, etc.

#setup.kibana:
#  host: "<kibanahost>:5601"
#  username: "user"  
#  password: "pass"


setup.dashboards.enabled: true
setup.dashboards.index: syslog-*
setup.template.enabled: true
#setup.template.overwrite: true
setup.template.name: "syslog"
setup.template.pattern: "syslog-*"
#setup.template.settings:
#  index.number_of_shards: 1
#  index.number_of_replicas: 1
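
A caveat, as far as I know: with output.logstash enabled, Filebeat does not connect to Elasticsearch at startup, so these settings alone will not push the template. One way is to load it once by running setup with a temporary ES output override; a sketch, with <eshost> as a placeholder:

filebeat setup --template -E output.logstash.enabled=false -E 'output.elasticsearch.hosts=["<eshost>:9200"]'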

Once that was resolved, I had to update the dashboard index patterns and I was good to go.

@CoreyDinkens Thanks for the input. I tried setting up the dashboards as described, but it did not make any difference.

@kvch This is my Logstash configuration:

input {
  beats {
    port => 5044
  }
}
output {
  elasticsearch {
    manage_template => false
    index => "%{[@metadata][beat]}-%{[@metadata][version]}-%{+YYYY.MM.dd}"
  }
}
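
One thing worth noting from the Elastic docs on using Filebeat modules with Logstash: their example elasticsearch output also sets a pipeline option, so that Elasticsearch runs the module's ingest pipeline on each event. A sketch, assuming a Filebeat version that populates [@metadata][pipeline] for module events (6.5+, as I understand it):

output {
  elasticsearch {
    manage_template => false
    index => "%{[@metadata][beat]}-%{[@metadata][version]}-%{+YYYY.MM.dd}"
    # ask ES to run the fileset's ingest pipeline on each event;
    # Filebeat sets [@metadata][pipeline] for module events
    pipeline => "%{[@metadata][pipeline]}"
  }
}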

Unfortunately, I don't have the raw logs that were being parsed correctly; however, I do have the JSON documents I uploaded in my first post.

Please do let me know if there is any other info that you may need. Thanks.

@ChrisOdney Apologies if I was not completely clear above; loading the Dashboards is really the last thing to do once everything else is working properly.

I was really suggesting that you ensure your index templates and field mappings have been set up properly. I believe that is the key to your fields being expanded and mapped properly.

@CoreyDinkens here is the output of the two queries you mentioned in your earlier reply.

GET _cat/templates

.watch-history-9              [.watcher-history-9*]      2147483647 
metricbeat-6.6.1              [metricbeat-6.6.1-*]       1          
.monitoring-kibana            [.monitoring-kibana-6-*]   0          6050399
.watches                      [.watches*]                2147483647 
.ml-notifications             [.ml-notifications]        0          6060099
apm-6.6.0                     [apm-6.6.0-*]              1          
.monitoring-alerts            [.monitoring-alerts-6]     0          6050399
security-index-template       [.security-*]              1000       
.triggered_watches            [.triggered_watches*]      2147483647 
.ml-state                     [.ml-state]                0          6060099
.monitoring-logstash          [.monitoring-logstash-6-*] 0          6050399
security_audit_log            [.security_audit_log*]     1000       
logstash-index-template       [.logstash]                0          
kibana_index_template:.kibana [.kibana]                  0          
.ml-anomalies-                [.ml-anomalies-*]          0          6060099
.ml-meta                      [.ml-meta]                 0          6060099
logstash                      [logstash-*]               0          60001
.monitoring-es                [.monitoring-es-6-*]       0          6050399
.ml-config                    [.ml-config]               0          6060099
filebeat-6.6.0                [filebeat-6.6.0-*]         1          
.monitoring-beats             [.monitoring-beats-6-*]    0          6050399

Output for GET /_cat/templates?v&s=name

name                          index_patterns             order      version
.ml-anomalies-                [.ml-anomalies-*]          0          6060099
.ml-config                    [.ml-config]               0          6060099
.ml-meta                      [.ml-meta]                 0          6060099
.ml-notifications             [.ml-notifications]        0          6060099
.ml-state                     [.ml-state]                0          6060099
.monitoring-alerts            [.monitoring-alerts-6]     0          6050399
.monitoring-beats             [.monitoring-beats-6-*]    0          6050399
.monitoring-es                [.monitoring-es-6-*]       0          6050399
.monitoring-kibana            [.monitoring-kibana-6-*]   0          6050399
.monitoring-logstash          [.monitoring-logstash-6-*] 0          6050399
.triggered_watches            [.triggered_watches*]      2147483647 
.watch-history-9              [.watcher-history-9*]      2147483647 
.watches                      [.watches*]                2147483647 
apm-6.6.0                     [apm-6.6.0-*]              1          
filebeat-6.6.0                [filebeat-6.6.0-*]         1          
kibana_index_template:.kibana [.kibana]                  0          
logstash                      [logstash-*]               0          60001
logstash-index-template       [.logstash]                0          
metricbeat-6.6.1              [metricbeat-6.6.1-*]       1          
security-index-template       [.security-*]              1000       
security_audit_log            [.security_audit_log*]     1000

As far as my understanding goes, templates govern the settings and mappings for indices. The problem I am facing here is that a message string is not being parsed into separate fields, which I think is more related to ES ingestion.
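
As a sanity check on the ingestion side, it should be possible to confirm the pipeline actually exists in ES; a sketch, assuming the default ID format Filebeat uses for fileset pipelines:

GET _ingest/pipeline/filebeat-6.6.0-system-auth-pipeline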

Thanks,
Chris.

@kvch Any ideas what I am doing wrong here? I am out of options and have been stuck on this for a while now.

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.