I have set up an ELK stack proof of concept, and in Kibana I have one index pattern (logstash-*) using one time-field name (@timestamp). I originally set up logstash-forwarder on two dev servers, shipping only /var/log/messages and /var/log/secure. All of this works great: the syslog fields are parsed and indexed properly. However, I then configured one of the two dev servers to also send its Apache access logs (/var/log/httpd/access_log), using a grok filter as follows:
filter {
  if [type] == "apache-access" {
    grok {
      match => { "message" => "%{COMBINEDAPACHELOG}" }
    }
    # Use the request time from the log line as the event's @timestamp
    date {
      match => [ "timestamp", "dd/MMM/yyyy:HH:mm:ss Z" ]
    }
  }
}
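The filter file can be syntax-checked before restarting Logstash using the --configtest flag; the install and config paths below are assumptions based on a standard RPM layout:

# Sketch: validate the pipeline config (paths are assumptions)
/opt/logstash/bin/logstash --configtest -f /etc/logstash/conf.d/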
Here is an example Apache access_log entry from the server:
192.168.64.232 - - [14/Jul/2015:13:21:30 -0400] "GET / HTTP/1.1" 302 26 "-" "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/43.0.2357.132 Safari/537.36"
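To confirm that the grok pattern itself matches, that sample line can be piped through a bare Logstash instance; when -e supplies only a filter, Logstash falls back to a stdin input and a stdout output (again, /opt/logstash is an assumed install path):

# Sketch: test %{COMBINEDAPACHELOG} against the sample line
echo '192.168.64.232 - - [14/Jul/2015:13:21:30 -0400] "GET / HTTP/1.1" 302 26 "-" "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/43.0.2357.132 Safari/537.36"' | \
  /opt/logstash/bin/logstash -e 'filter { grok { match => { "message" => "%{COMBINEDAPACHELOG}" } } }'

If the pattern matches, fields such as clientip, verb, and response appear on the emitted event.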
However, my problem is that I cannot use any of the fields parsed out of my Apache access logs (clientip, response, verb, etc.) to create visualizations. When I expand an Apache log entry in Kibana, every parsed field shows "No cached mapping for this field, refresh your mapping from the Settings > Indices page", yet refreshing the field list there does nothing.
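To narrow down whether this is Kibana's field cache or a mapping that is genuinely missing, the live mapping can be pulled straight from Elasticsearch (assuming it listens on localhost:9200):

# Sketch: check whether the Apache fields made it into the index mapping
curl -XGET 'http://localhost:9200/logstash-*/_mapping?pretty' | grep -A 3 clientip

If clientip and friends are absent from the mapping, the grok output was never indexed; if they are present, that points at Kibana's cached field list instead.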
I am on the latest versions of Logstash (1.5.2), Elasticsearch (1.6), and Kibana (4.1.1).
Here is my logstash-forwarder config on the machine sending Apache logs:
{
  "network": {
    "servers": [ "logstash.our.domain.com:5000" ],
    "ssl ca": "/etc/pki/tls/certs/logstash-forwarder.crt",
    "timeout": 15
  },
  "files": [
    {
      "paths": [
        "/var/log/messages",
        "/var/log/secure"
      ],
      "fields": { "type": "syslog" }
    },
    {
      "paths": [
        "/var/log/httpd/access_log"
      ],
      "fields": { "type": "apache-access" }
    }
  ]
}
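For completeness, the matching input on the Logstash server is the standard lumberjack input, roughly like the sketch below (the certificate and key paths are placeholders, not my exact ones):

input {
  lumberjack {
    port            => 5000
    ssl_certificate => "/etc/pki/tls/certs/logstash-forwarder.crt"
    ssl_key         => "/etc/pki/tls/private/logstash-forwarder.key"
  }
}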
Can anyone help me figure out what is going wrong here? Is this a Kibana bug, or a problem with my Elasticsearch configuration, my filter, or something else?
Thank you much in advance!
-Drew