Hi,
I am relatively new to ELK.
I am trying to parse the Kubernetes HAProxy container logs with a grok filter in Logstash. My goal is to be able to query the HAProxy log fields easily.
On the Kubernetes side, I've successfully deployed Filebeat on every pod via a Kubernetes YAML deployment. Data is being sent to Logstash and then forwarded to Elasticsearch. From what I can see in Kibana, the Filebeat logs are flowing correctly:
I've built multiple pipelines. Each pipeline is structured as follows:
01-input.conf
input {
  beats {
    port => 5012
    ssl => false
    ssl_verify_mode => "none"
  }
}
(the port changes for every pipeline)
02-filter.conf
...
if [kubernetes][labels][service] == "haproxy" {
  grok {
    match => { "message" => "%{DATE:logdate} %{NUMBER:uriresponse} %{WORD:urimethod} %{URIPATHPARAM:requestedpath} %{WORD:uriversion} %{IP:sourceip} %{NUMBER:sourceport} %{IP:destip} %{WORD:targeturl} %{NUMBER:firstdigit} %{NUMBER:seconddigit} %{NUMBER:thirddigit} %{NUMBER:fourthdigit} %{NUMBER:fifthdigit} %{WORD:separation} %{WORD:httpstring} %{WORD:anotherurl}" }
  }
}
...
(Note: the match string and braces were unclosed in my original paste, and I've swapped %{PORT} for %{NUMBER:sourceport}, since PORT is not one of the shipped grok patterns.)
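To iterate on the pattern quickly, one option is a minimal throwaway pipeline that reads a sample row from stdin and prints every extracted field. This is only a sketch: the shortened match pattern below is illustrative and should be replaced with the full pattern from 02-filter.conf.

```
input { stdin { } }

filter {
  grok {
    # illustrative, shortened pattern - substitute the full one from 02-filter.conf
    match => { "message" => "%{HAPROXYDATE:logdate} %{NUMBER:uriresponse}" }
  }
}

output {
  # rubydebug prints all parsed fields; unmatched events get a _grokparsefailure tag
  stdout { codec => rubydebug }
}
```

Running this with `bin/logstash -f test.conf` and pasting a log line shows immediately whether the pattern matches; Kibana's Grok Debugger (under Dev Tools) can be used the same way.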
03-output.conf
output {
  if [type] != "container_clone" {
    elasticsearch {
      template_overwrite => true
      index => "logstash_7-%{+YYYY.MM.dd}"
      document_id => "%{[@metadata][fingerprint]}"
      hosts => ["https://ourmonitorip:9200"]
      user => 'logstash'
      password => 'logstash'
    }
  }
  else if [type] == "container_clone" {
    file {
      codec => line { format => "%{message}" }
      path => "/var/log/logstash/container/%{+YYYY}/%{+MM}/%{+dd}/%{env}/%{service_name}-ourmonitorip.log"
    }
  }
}
Here is an example HAProxy log row:
'15/Apr/2020:18:52:41.041','200','GET /dev/nameoftheapp/ HTTP/1.1','SO.UR.CE.IP','277','DES.TIN.ATION.IP','dev-nameoofoneappon-default','0','150','0','1','1','--','http:0AF401EBD3080AF4054 ourinternalservice.ourdomain'
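One thing I notice while writing this up: the sample row is single-quoted and comma-separated, while my grok pattern expects bare, space-separated tokens, so the pattern may never match at all. A sketch of a pattern shaped like the actual row might look like the following (assumptions: %{HAPROXYDATE} fits the timestamp format, the %{IP} captures assume real addresses in place of the redacted SO.UR.CE.IP placeholders, and the field names are just my guesses from the original pattern):

```
grok {
  # sketch only - quotes and commas are matched literally;
  # (?<name>[^']+) is an inline capture for fields that %{WORD} would reject,
  # such as "dev-nameoofoneappon-default" and "--"
  match => { "message" => "'%{HAPROXYDATE:logdate}','%{NUMBER:uriresponse}','%{WORD:urimethod} %{URIPATHPARAM:requestedpath} %{NOTSPACE:uriversion}','%{IP:sourceip}','%{NUMBER:sourceport}','%{IP:destip}','(?<targeturl>[^']+)','%{NUMBER:firstdigit}','%{NUMBER:seconddigit}','%{NUMBER:thirddigit}','%{NUMBER:fourthdigit}','%{NUMBER:fifthdigit}','(?<separation>[^']+)','%{NOTSPACE:httpstring} %{HOSTNAME:anotherurl}'" }
}
```

Since the layout is a fixed quoted, comma-separated list, the dissect or csv filter might also be a simpler fit than grok here.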
From the Kibana dashboard, I am not able to query any of the fields I declared in grok:
What am I missing?
Thank you in advance