Recently I deployed Filebeat on two remote servers to gather log messages and send them to Logstash on the local server.
The connection between Filebeat and Logstash is fine, and messages have been transferring since then.
However, the filter doesn't seem to work: I still get the same pattern as before adding the filter, and there are no error messages.

My filebeat.yml configuration:
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - "/usr/logs/wso2/http_access*.*log"
  tags: ["wso2"]

filebeat.config:
  modules:
    path: ${path.config}/modules.d/*.yml
    reload.enabled: false

filebeat.autodiscover:
  providers:
    - type: docker
      hints.enabled: true

output.logstash:
  hosts: ["x.x.x.x:5000"]
And the Logstash pipeline configuration:
input { beats { port => 5000 } }

filter {
  mutate { add_field => { "test" => "I'm testing the Grok filter" } }

  if "tomcat" in [tags] {
    mutate { add_field => { "host_ip" => "x.x.x.x" } }
    mutate { add_field => { "hostname" => "test.com" } }
    grok {
      match => { "message" => "%{IPORHOST:client} %{DATA} %{DATA:user} \[%{DATA:logtimestamp} %{ISO8601_TIMEZONE:timezone}\] \"%{WORD:method} %{URIPATH:uri_path}(%{URIPARAM:params}|) %{DATA:protocol}\" %{NUMBER:code} (%{NUMBER:bytes}|%{DATA}) %{NUMBER:response_time_sec}" }
      overwrite => [ "message" ]
    }
    mutate { add_field => { "response_time" => "%{response_time_sec}" } }
    date {
      match => [ "logtimestamp", "dd/MMM/yyyy:HH:mm:ss" ]
      target => "logtimestamp"
    }
  }

  if "wso2" in [tags] {
    ....
  }
}
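For what it's worth, the grok pattern can be sanity-checked offline with a rough Python regex equivalent. The sample access-log line below is made up, and the regex only approximates the grok semantics (for instance, it folds URIPATH and URIPARAM into one `\S+`), but it shows what the pattern should capture and why the `date` match must not expect a timezone, since grok already splits the zone into its own field:

```python
import re
from datetime import datetime

# Hypothetical sample access-log line (invented for illustration)
line = ('10.0.0.1 - admin [22/Nov/2019:10:15:30 +0530] '
        '"GET /carbon/admin/login.jsp HTTP/1.1" 200 1234 0.005')

# Rough regex approximation of the grok pattern above
pattern = re.compile(
    r'(?P<client>\S+) \S+ (?P<user>\S+) '
    r'\[(?P<logtimestamp>[^\]]+) (?P<timezone>[+-]\d{4})\] '
    r'"(?P<method>\w+) (?P<uri_path>\S+) (?P<protocol>[^"]*)" '
    r'(?P<code>\d+) (?P<bytes>\d+) (?P<response_time_sec>[\d.]+)'
)

m = pattern.match(line)
print(m.group('logtimestamp'))   # 22/Nov/2019:10:15:30
print(m.group('code'))           # 200

# The captured timestamp carries no timezone, so a date pattern
# ending in Z would fail to match; this format does match:
ts = datetime.strptime(m.group('logtimestamp'), '%d/%b/%Y:%H:%M:%S')
print(ts.year)                   # 2019
```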
output {
  elasticsearch {
    hosts => ["http://x.x.x.x:9200"]
    user => "elastic"
    password => "changeme"
    index => "%{[@metadata][beat]}-%{[@metadata][version]}-%{+YYYY.MM.dd}"
  }
  stdout { codec => rubydebug }
}
As you might notice, I also added the simplest possible 'test' field, but even that doesn't show up in the output.