I'm trying to set up the Filebeat system module on a CentOS 7 host, with Elasticsearch, Logstash, Kibana, and Filebeat all running on the same machine. Following the module quickstart (https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-modules-quickstart.html), these are the steps I ran to install the module:
# load the module
filebeat modules enable system
# load index template and dashboards
filebeat setup -e -E output.logstash.enabled=false -E output.elasticsearch.hosts=['localhost:9200']
# load the ingest pipeline
filebeat setup --pipelines --modules system -E output.logstash.enabled=false -E output.elasticsearch.hosts=['localhost:9200']
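As a sanity check that the setup command actually created the pipelines, they can be listed straight from Elasticsearch (host and pipeline name pattern below are just the ones this setup would produce):
# list the ingest pipelines that filebeat setup loaded
curl -s 'localhost:9200/_ingest/pipeline/filebeat-7.0.1-*?pretty' | grep '"filebeat-'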
Then I configured Logstash to route events through the module's ingest pipelines like this:
input {
  beats {
    port => 5044
    ssl => false
  }
}
output {
  if [@metadata][pipeline] {
    elasticsearch {
      hosts => ["localhost:9200"]
      manage_template => false
      index => "%{[@metadata][beat]}-%{[@metadata][version]}-%{+YYYY.MM.dd}"
      pipeline => "%{[@metadata][pipeline]}"
    }
  } else {
    elasticsearch {
      hosts => ["localhost:9200"]
      manage_template => false
      index => "%{[@metadata][beat]}-%{[@metadata][version]}-%{+YYYY.MM.dd}"
    }
  }
  # debug logstash output
  file {
    path => "/tmp/logstash-debug.log"
    codec => "json_lines"
  }
}
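(The Filebeat output config isn't shown above; for reference, a minimal filebeat.yml pointing the modules at this Logstash listener would look roughly like the sketch below, with default paths and ports assumed rather than copied from my actual config.)
# filebeat.yml (sketch) - ship module events to the local Logstash listener
filebeat.config.modules:
  path: ${path.config}/modules.d/*.yml
  reload.enabled: false

output.logstash:
  hosts: ["localhost:5044"]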
I verified that Logstash is receiving and forwarding the syslog messages:
{"tags":["beats_input_codec_plain_applied"],"input":{"type":"log"},"message":"May 9 19:11:14 localhost su: (to root) centos on pts/0","ecs":{"version":"1.0.0"},"@version":"1","event":{"module":"system","dataset":"system.syslog"},"service":{"type":"system"},"agent":{"version":"7.0.1","hostname":"elastic.screen","id":"bdab0265-a370-4e22-a2e6-e7a74a3a68a5","ephemeral_id":"2a7fc335-5aa5-4351-8f63-d33e1a25a6fb","type":"filebeat"},"@timestamp":"2019-05-10T01:11:23.612Z","host":{"architecture":"x86_64","hostname":"elastic.screen","name":"elastic.screen","id":"c9b3ba37f7bb4987aec72c3230400181","containerized":true,"os":{"kernel":"3.10.0-957.12.1.el7.x86_64","codename":"Core","platform":"centos","family":"redhat","version":"7 (Core)","name":"CentOS Linux"}},"fileset":{"name":"syslog"},"log":{"file":{"path":"/var/log/messages"},"offset":489795}}
The index was created by Logstash and it has documents, but when I look at the documents in Elasticsearch, the "system" fields are empty. If I go to the Discover tab or try to view the dashboards, they show no data:
...
"system" : {
"syslog" : { }
},
...
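(That fragment is from one of the indexed documents; a single document can be pulled back for inspection with a query like the one below, the index pattern being the one the Logstash config above writes to.)
# fetch one document from the Filebeat index and inspect the parsed fields
curl -s 'localhost:9200/filebeat-7.0.1-*/_search?size=1&pretty'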
Does that mean it's a problem with the ingest pipeline? Is there something about CentOS 7 syslog that doesn't parse? Setting up other Metricbeat and Filebeat modules has been a breeze, but I've been struggling with this one.
Here is the ingest pipeline created by the setup command:
{
"filebeat-7.0.1-system-syslog-pipeline" : {
"processors" : [
{
"grok" : {
"field" : "message",
"patterns" : [
"""%{SYSLOGTIMESTAMP:system.syslog.timestamp} %{SYSLOGHOST:host.hostname} %{DATA:process.name}(?:\[%{POSINT:process.pid:long}\])?: %{GREEDYMULTILINE:system.syslog.message}""",
"%{SYSLOGTIMESTAMP:system.syslog.timestamp} %{GREEDYMULTILINE:system.syslog.message}",
"""%{TIMESTAMP_ISO8601:system.syslog.timestamp} %{SYSLOGHOST:host.hostname} %{DATA:process.name}(?:\[%{POSINT:process.pid:long}\])?: %{GREEDYMULTILINE:system.syslog.message}"""
],
"pattern_definitions" : {
"GREEDYMULTILINE" : "(.|\n)*"
},
"ignore_missing" : true
}
},
{
"remove" : {
"field" : "message"
}
},
{
"rename" : {
"field" : "system.syslog.message",
"target_field" : "message",
"ignore_missing" : true
}
},
{
"date" : {
"target_field" : "@timestamp",
"formats" : [
"MMM d HH:mm:ss",
"MMM dd HH:mm:ss",
"yyyy-MM-dd'T'HH:mm:ss.SSSSSSZZ"
],
"ignore_failure" : true,
"field" : "system.syslog.timestamp"
}
},
{
"remove" : {
"field" : "system.syslog.timestamp"
}
}
],
"on_failure" : [
{
"set" : {
"field" : "error.message",
"value" : "{{ _ingest.on_failure_message }}"
}
}
],
"description" : "Pipeline for parsing Syslog messages."
  }
}
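(For anyone reproducing this: the pipeline can also be exercised directly with the ingest simulate API, feeding it the raw message from the debug output above, to see whether the grok patterns match a CentOS 7 syslog line at all. Host and pipeline name are the ones from this setup.)
# run the raw syslog line through the pipeline and inspect what the grok produces
curl -s -H 'Content-Type: application/json' \
  'localhost:9200/_ingest/pipeline/filebeat-7.0.1-system-syslog-pipeline/_simulate?pretty' \
  -d '{
    "docs": [
      { "_source": { "message": "May 9 19:11:14 localhost su: (to root) centos on pts/0" } }
    ]
  }'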