Hi!
I've been using Logstash to parse custom logs with great success. What I'm struggling with now is using that same Logstash and Filebeat setup to ship NGINX logs. The logs do reach my Kibana instance; the problem is that the fields are named differently from what the Filebeat dashboards expect. Metricbeat worked like a charm: nothing had to be set up in the Logstash pipeline, and everything was parsed and plugged into the dashboards without a problem. Filebeat, on the other hand, is a pain. I'm sending over syslog and NGINX logs via the Filebeat modules. Neither gets parsed if I do nothing for them in the Logstash pipeline, but if I set up the pipeline specified here for these modules, https://www.elastic.co/guide/en/logstash/current/logstash-config-for-filebeat-modules.html#parsing-nginx, they get parsed, but with different field names.
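To be concrete, the nginx access branch I copied from that page groks the message into nginx.access.* fields. From memory it looks roughly like this (trimmed; the full pattern on the page also captures referrer and agent):

if [fileset][module] == "nginx" {
  if [fileset][name] == "access" {
    grok {
      match => { "message" => ["%{IPORHOST:[nginx][access][remote_ip]} - %{DATA:[nginx][access][user_name]} \[%{HTTPDATE:[nginx][access][time]}\] \"%{WORD:[nginx][access][method]} %{DATA:[nginx][access][url]} HTTP/%{NUMBER:[nginx][access][http_version]}\" %{NUMBER:[nginx][access][response_code]} %{NUMBER:[nginx][access][body_sent][bytes]}"] }
      remove_field => "message"
    }
  }
}

As far as I can tell, that is the mismatch: everything lands under nginx.access.*, while the 7.x Filebeat dashboards query ECS field names such as source.address, http.response.status_code, and url.original.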
Please help; I've been at this for two days now to no avail. If you need any more information, I'm happy to provide it.
This is the Logstash pipeline:
input {
  beats {
    port => 5044
  }
  stdin { }
}
filter {
  if [event][dataset] == "nginx.access" {
    # intentionally empty for now; nothing touches the nginx access fields here
  } else if [event][dataset] == "nginx.error" {
    # intentionally empty for now as well
  } else if [event][dataset] == "apache.access" {
    grok {
      match => { "message" => "%{COMBINEDAPACHELOG}" }
    }
    mutate {
      add_field => { "[@metadata][service_type]" => "apache-access" }
      add_field => { "[@metadata][provider_identifier]" => "%{[clientip]}" }
    }
  } else if [event][module] == "system" {
    mutate {
      add_field => { "[@metadata][service_type]" => "system" }
      add_field => { "[@metadata][provider_identifier]" => "%{[agent][hostname]}" }
    }
  } else if [fields][log_type] == "appLog" {
    json {
      source => "message"
    }
    mutate {
      add_field => { "[@metadata][service_type]" => "appLog" }
      add_field => { "[@metadata][provider_identifier]" => "%{[agent][hostname]}" }
      replace => { "[@metadata][beat]" => "applog-filebeat" }
    }
  }
if "_grokparsefailure" in [tags] {
mutate {
replace => { "[@metadata][provider_identifier]" => "grok_parse_failure" }
}
}
if ![@metadata][service_type] {
mutate {
add_field => { "[@metadata][service_type]" => "generic" }
}
}
if ![@metadata][provider_identifier] {
mutate { add_field => { "[@metadata][provider_identifier]" => "%{[agent.hostname]}" } }
}
mutate {
lowercase => [ "[@metadata][provider_identifier]" ]
lowercase => [ "[@metadata][service_type]" ]
}
}
output {
  elasticsearch {
    hosts => "elasticsearch:9200"
    user => "elastic"
    password => "admin"
    index => "%{[@metadata][beat]}-%{[@metadata][version]}-agent-for-%{[@metadata][service_type]}-%{[@metadata][provider_identifier]}"
  }
  stdout { codec => rubydebug { metadata => true } }
}
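One thing I ran into in the Filebeat docs is that module events are supposed to carry the name of their Elasticsearch ingest pipeline in @metadata, so instead of re-implementing the module parsing in Logstash, the output could apparently hand those events to the ingest pipeline, which should produce the field names the dashboards expect. Would something along these lines be the intended approach? A rough sketch, untested on my side, assuming the pipelines were loaded first with filebeat setup --pipelines --modules nginx,system:

output {
  if [@metadata][pipeline] {
    # module events (nginx, system): let the Filebeat ingest pipeline do the parsing
    elasticsearch {
      hosts => "elasticsearch:9200"
      user => "elastic"
      password => "admin"
      manage_template => false
      index => "%{[@metadata][beat]}-%{[@metadata][version]}"
      pipeline => "%{[@metadata][pipeline]}"
    }
  } else {
    # everything else keeps my custom index naming from above
    elasticsearch {
      hosts => "elasticsearch:9200"
      user => "elastic"
      password => "admin"
      index => "%{[@metadata][beat]}-%{[@metadata][version]}-agent-for-%{[@metadata][service_type]}-%{[@metadata][provider_identifier]}"
    }
  }
}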
And the filebeat.yml file:
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /var/log/banka/_general.json
  fields:
    log_type: appLog
    app: SecureBank
    app_log_type: general
- type: log
  enabled: false
  paths:
    - /var/log/banka/_global_exception.json
  fields:
    log_type: appLog
    app: SecureBank
    app_log_type: global_exception
- type: log
  enabled: true
  paths:
    - /var/log/banka/_revision.json
  fields:
    log_type: appLog
    app: SecureBank
    app_log_type: revision
filebeat.config.modules:
  path: ${path.config}/modules.d/*.yml
  reload.enabled: false
setup.template.settings:
  index.number_of_shards: 1
output.logstash:
  hosts: ["localhost:5044"]
#output.elasticsearch:
#  hosts: ["localhost:9200"]
#  username: "elastic"
#  password: "admin"
#setup.kibana:
#  host: "localhost:5601"
#  username: "elastic"
#  password: "admin"
processors:
  - add_host_metadata: ~
  - add_cloud_metadata: ~
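For completeness, the nginx and system modules are enabled through modules.d (filebeat modules enable nginx system); the nginx one is essentially the stock modules.d/nginx.yml, roughly:

- module: nginx
  access:
    enabled: true
    #var.paths: ["/var/log/nginx/access.log*"]
  error:
    enabled: true
    #var.paths: ["/var/log/nginx/error.log*"]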
Thank you to anyone who took the time to read this.