The issue: I have one filter that is responsible for processing Nginx error logs. I set it up the same way as the access log filter. But while the "access" filter parses logs just fine, the error filter simply skips them. There is no _grokparsefailure record in Elasticsearch and no other sign that the logs are arriving, even though they definitely are.
Configuration file:
filter {
  if [type] == "nginx_error" {
    grok {
      patterns_dir => "/etc/logstash/patterns"
      match => { "message" => "%{NGINX_ERROR}" }
      named_captures_only => true
    }
    date {
      match => [ "timestamp", "yyyy/MM/dd HH:mm:ss" ]
    }
    geoip {
      source => "nginx_clientip"
      target => "geoip"
      database => "/etc/logstash/GeoLiteCity.dat"
      add_field => [ "[geoip][coordinates]", "%{[geoip][longitude]}" ]
      add_field => [ "[geoip][coordinates]", "%{[geoip][latitude]}" ]
    }
    mutate {
      convert => [ "[geoip][coordinates]", "float" ]
    }
  }
  if [type] == "nginx_access" {
    grok {
      add_tag => [ "valid" ]
      patterns_dir => "/etc/logstash/patterns"
      match => { "message" => "%{NGINX}" }
      named_captures_only => true
    }
    date {
      match => [ "timestamp", "yyyy-MM-dd HH:mm:ss" ]
    }
    geoip {
      source => "nginx_clientip"
      target => "geoip"
      database => "/etc/logstash/GeoLiteCity.dat"
      add_field => [ "[geoip][coordinates]", "%{[geoip][longitude]}" ]
      add_field => [ "[geoip][coordinates]", "%{[geoip][latitude]}" ]
    }
    if "valid" not in [tags] {
      drop { }
    }
    mutate {
      convert => [ "[geoip][coordinates]", "float" ]
      remove_tag => [ "valid" ]
    }
  }
}
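For debugging, a minimal pipeline along these lines (just a sketch; the stdin/stdout plugins are only there to bypass Filebeat and Elasticsearch, the patterns_dir is the same as above) should show directly whether %{NGINX_ERROR} matches a pasted error line and whether a _grokparsefailure tag appears:

input {
  stdin { type => "nginx_error" }
}
filter {
  if [type] == "nginx_error" {
    grok {
      patterns_dir => "/etc/logstash/patterns"
      match => { "message" => "%{NGINX_ERROR}" }
    }
  }
}
output {
  stdout { codec => rubydebug }
}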
Patterns file in directory /etc/logstash/patterns:
NGINX %{IPORHOST:nginx_clientip} %{USER:nginx_user_ident} %{USER:nginx_user_auth} \[%{HTTPDATE:timestamp}\] "(?:%{WORD:nginx_method} %{URIPATHPARAM:nginx_request_address}(?: HTTP/%{NUMBER:nginx_http_version})?|-)" %{NUMBER:nginx_response} (?:%{NUMBER:nginx_bytes}|-) "(?:%{URI:nginx_referrer}|-)"%{GREEDYDATA}
NGINX_ERROR %{DATESTAMP:timestamp} \[%{WORD:severity}\] %{INT:pid}\#%{INT:tid}: \*%{INT:cid} %{DATA:log_message}, client: %{IPORHOST:nginx_clientip}, server: (?:%{DATA:nginx_server_name}|), request: "%{WORD:nginx_method} %{DATA:nginx_request_address} HTTP/%{NUMBER:nginx_http_version}", host: "%{IPORHOST:nginx_host}"
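As far as I can tell, NGINX_ERROR can only match lines that carry the full "client: ..., server: ..., request: ..., host: ..." tail, i.e. something shaped like this (a made-up sample line; the IP and hostname are just placeholders):

2016/08/29 13:10:01 [error] 11473#11473: *42 open() "/var/www/html/missing.png" failed (2: No such file or directory), client: 203.0.113.5, server: example.com, request: "GET /missing.png HTTP/1.1", host: "example.com"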
The only sign of life: I keep getting this strange message in the Elasticsearch logs.
Aug 29 12:59:47 elk elasticsearch[2949]: [2016-08-29 12:59:47,187][INFO ][cluster.metadata ] [Urthona] [filebeat-0015.11.11] update_mapping [nginx_error]
Also, a couple of nginx_error messages do make it into Elasticsearch, but they arrive unparsed, out of format. Here is an example of what got through.
2016/08/29 13:09:32 [notice] 11473#11473: signal process started
2016/08/16 13:06:58 [emerg] 20772#20772: invalid parameter "http://127.0.0.1:8080" in /etc/nginx/sites-enabled/myhost:23
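Both of these lines stop right after the pid#tid part, so they have none of the client/server/request/host fields that NGINX_ERROR expects. A looser fallback pattern such as the following (NGINX_ERROR_SHORT is just a name I made up for illustration) would at least capture the timestamp, severity and message for lines like these, but my main question remains why the well-formed error lines never show up at all:

NGINX_ERROR_SHORT %{DATESTAMP:timestamp} \[%{WORD:severity}\] %{INT:pid}\#%{INT:tid}: %{GREEDYDATA:log_message}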
I've racked my brain trying to figure out what's wrong. If anybody has run into this strange issue, please let me know.