I'm looking for some clarification on what could be causing this issue.
My syslogs are coming through just fine, but when it comes to the Nginx logs, nothing is coming through to Elasticsearch.
I believe it might have to do with the order of my pipelines, but I'm not entirely sure.
Elasticsearch, Kibana, and Logstash are v6.3.1
Filebeat is v1.2.3
My current syslogs are being parsed correctly through 10-syslog-filter, but nothing Nginx-related is coming through 11-nginx-filter. Granted, I don't have the filter section set, but I'm not even seeing the index appear in Elasticsearch.
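For reference, this is roughly the filter section I had in mind adding for Nginx (an untested sketch: it mirrors the conditional style of my syslog filter, and assumes Nginx is writing the default "combined" access-log format, which the stock COMBINEDAPACHELOG grok pattern happens to match):

filter {
  if [type] == "nginx" {
    grok {
      # COMBINEDAPACHELOG is a stock grok pattern; it also matches Nginx's default "combined" format
      match => { "message" => "%{COMBINEDAPACHELOG}" }
    }
  }
}
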
Filebeat.yml
filebeat:
  prospectors:
    -
      paths:
        - /var/log/auth.log
        - /var/log/syslog
        - /opt/rails/farad/current/log/*.log
        # - /var/log/*.log
      document_type: syslog
    -
      paths:
        - /var/log/nginx/access.log
      fields:
        nginx: true
      fields_under_root: true
      document_type: nginx
      input_type: log
  registry_file: /var/lib/filebeat/registry
output:
  logstash:
    hosts: ["hostname:5044"]
    bulk_max_size: 1024
    tls:
      certificate_authorities: ["/etc/pki/tls/certs/logstash-forwarder.crt"]
shipper:
logging:
  files:
    rotateeverybytes: 10485760 # = 10MB
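(For what it's worth, I know Filebeat 1.2.3 is far behind the 6.3.1 stack. If I were to upgrade the shipper, my understanding is the prospectors would look roughly like the sketch below under 6.x, since document_type was removed there and events get tagged via fields instead. This is only a sketch, and log_type is just a placeholder field name I made up:)

filebeat.prospectors:
  - type: log
    paths:
      - /var/log/auth.log
      - /var/log/syslog
      - /opt/rails/farad/current/log/*.log
    fields:
      log_type: syslog   # placeholder; document_type no longer exists in 6.x
    fields_under_root: true
  - type: log
    paths:
      - /var/log/nginx/access.log
    fields:
      nginx: true
      log_type: nginx
    fields_under_root: true
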
10-syslog-filter.conf
filter {
  if [type] == "syslog" {
    mutate {
      gsub => [ "message", ".auth_user_code.=>.\d+.", "auth_user_code=XXXX" ]
    }
    grok {
      match => { "message" => "%{SYSLOG5424SD:Time}%{SYSLOG5424SD:Application}%{SYSLOG5424SD:$
      remove_field => [ "RemoveMe1", "RemoveMe2", "RemoveMe3" ]
      add_field => [ "received_at", "%{@timestamp}" ]
      add_field => [ "received_from", "%{host}" ]
    }
    syslog_pri { }
    date {
      match => [ "syslog_timestamp", "MMM d HH:mm:ss", "MMM dd HH:mm:ss" ]
    }
  }
}
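For debugging, a throwaway stdin/stdout pipeline can test a grok pattern in isolation by pasting a sample log line at the prompt. The binary path below assumes the standard deb/rpm install location, and the pattern is just the first two captures from my filter above:

/usr/share/logstash/bin/logstash -e '
  input { stdin { } }
  filter {
    grok { match => { "message" => "%{SYSLOG5424SD:Time}%{SYSLOG5424SD:Application}" } }
  }
  output { stdout { codec => rubydebug } }
'
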
11-nginx-filter.conf
input {
  beats {
    port => 5044
    host => ["hostname:5044"]
  }
}
output {
  elasticsearch {
    hosts => "localhost"
    manage_template => false
    index => "NGINX-%{[@metadata][beat]}-%{[@metadata][version]}-%{+YYYY.MM.dd}"
  }
}
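Two things I'm second-guessing here: since Logstash concatenates every file in the config directory into a single pipeline, I assume every event (syslog and nginx alike) hits this one output, so routing to separate indices would need a conditional. I also realize Elasticsearch only accepts lowercase index names, so the NGINX- prefix above may be getting rejected at index time. A rough sketch of what I think the gated output would look like, assuming the type field from Filebeat survives to Logstash:

output {
  if [type] == "nginx" {
    elasticsearch {
      hosts => "localhost"
      manage_template => false
      # Elasticsearch index names must be all lowercase
      index => "nginx-%{[@metadata][beat]}-%{[@metadata][version]}-%{+YYYY.MM.dd}"
    }
  } else {
    elasticsearch {
      hosts => "localhost"
      manage_template => false
    }
  }
}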