Grokparsefailure on haproxy logs (but grok debugger result is OK in Kibana...)

Hello,

I have read various similar topics about haproxy logs in Logstash, but I am still stuck with a grokparsefailure error in my Logstash output.

My configuration is:
Logstash 7.1.1
HA-Proxy version 1.5.18

Haproxy outputs logs directly into a logstash udp input on port 5514
(I have read this topic about log format changes when haproxy sends directly to Logstash: "Haproxy format changes when sending directly", which helped me a bit, with some adjustments.)

When I test my custom grok pattern against my sample data in the Kibana Grok Debugger or at https://grokdebug.herokuapp.com/, everything is fine and my data is correctly parsed.

But when looking at the Logstash output, I always get:

{"host":"10.118.114.216","@version":"1","type":"logs","message":"<134>Jun 27 14:40:19 haproxy[32286]: 10.115.64.169:64808 [27/Jun/2019:14:40:19.059] public kibana/kibana1 56/0/0/1/57 304 303 - - ---- 2/2/1/2/0 0/0 {10.118.114.216|10.172.227.76|fr-FR,fr;q=0.8,en-US;q=0.5,en;q=0.3|http://10.118.114.216/app/kibana|Mozilla/5.0 (Windows NT 6.1; Win64; x64; rv:60.0) Gecko/20100101 Firefox/60.0} {text/css; charset=utf-8||must-revalidate|} "GET /built_assets/css/plugins/tagcloud/index.dark.css HTTP/1.1"\n","tags":["_grokparsefailure"],"@timestamp":"2019-06-27T12:40:19.118Z"}
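(Note that the `message` value above is JSON-encoded by the file output, so the quotes around the request line appear as `\"` even though the raw event only contains plain `"` characters. A minimal Python sketch of that difference, using a shortened, hypothetical haproxy line:)

```python
import json

# Shortened, hypothetical haproxy line: the raw event holds plain quotes.
raw = 'haproxy[32286]: ... "GET /app/kibana HTTP/1.1"\n'

# The Logstash file output serialises the event as JSON, which escapes them.
encoded = json.dumps(raw)

assert '\\"' not in raw    # no backslash-quote sequence in the raw message
assert '\\"' in encoded    # but the JSON output shows \" instead
```

So a pattern has to match the raw form, not the escaped form shown in the file.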

Here are my conf files:
logstash_test.conf

    input {
      udp {
        port => 5514
        type => "logs"
      }
    }

    filter {
      grok {
        patterns_dir => "/etc/logstash/conf.d/patterns"
        match => ["message", "%{LOG}"]
      }
    }

    output {
      file {
        path => "/tmp/logstash_output.txt"
      }
    }

My pattern file (in /etc/logstash/conf.d/patterns) ==> "test_haproxy_pattern.txt"

    HAPROXYTIME (?!<[0-9])%{HOUR:haproxy_hour}:%{MINUTE:haproxy_minute}(?::%{SECOND:haproxy_second})(?![0-9])

    HAPROXYDATE %{MONTHDAY:haproxy_monthday}/%{MONTH:haproxy_month}/%{YEAR:haproxy_year}:%{HAPROXYTIME:haproxy_time}.%{INT:haproxy_milliseconds}

    HAPROXYCAPTUREDREQUESTHEADERS %{DATA:request_header_host}\|%{DATA:request_header_x_forwarded_for}\|%{DATA:request_header_accept_language}\|%{DATA:request_header_referer}\|%{DATA:request_header_user_agent}

    HAPROXYCAPTUREDRESPONSEHEADERS %{DATA:response_header_content_type}\|%{DATA:response_header_content_encoding}\|%{DATA:response_header_cache_control}\|%{DATA:response_header_last_modified}

    HAPROXYHTTPBASE %{IP:client_ip}:%{INT:client_port} \[%{HAPROXYDATE:accept_date}\] %{NOTSPACE:frontend_name} %{NOTSPACE:backend_name}/%{NOTSPACE:server_name} %{INT:time_request}/%{INT:time_queue}/%{INT:time_backend_connect}/%{INT:time_backend_response}/%{NOTSPACE:time_duration} %{INT:http_status_code} %{NOTSPACE:bytes_read} %{DATA:captured_request_cookie} %{DATA:captured_response_cookie} %{NOTSPACE:termination_state} %{INT:actconn}/%{INT:feconn}/%{INT:beconn}/%{INT:srvconn}/%{NOTSPACE:retries} %{INT:srv_queue}/%{INT:backend_queue} (\{%{HAPROXYCAPTUREDREQUESTHEADERS}\})?( )?(\{%{HAPROXYCAPTUREDRESPONSEHEADERS}\})?( )?\\"(<BADREQ>|(%{WORD:http_verb} (%{URIPROTO:http_proto}://)?(?:%{USER:http_user}(?::[^@]*)?@)?(?:%{URIHOST:http_host})?(?:%{URIPATHPARAM:http_request})?( HTTP/%{NUMBER:http_version})?))?\\"?

    HAPROXYHTTP (?:%{SYSLOGTIMESTAMP:syslog_timestamp}|%{TIMESTAMP_ISO8601:timestamp8601}) %{SYSLOGPROG}: %{HAPROXYHTTPBASE}

    LOG %{SYSLOG5424PRI}%{HAPROXYHTTP}

    HAPROXYTCP (?:%{SYSLOGTIMESTAMP:syslog_timestamp}|%{TIMESTAMP_ISO8601:timestamp8601}) %{IPORHOST:syslog_server} %{SYSLOGPROG}: %{IP:client_ip}:%{INT:client_port} \[%{HAPROXYDATE:accept_date}\] %{NOTSPACE:frontend_name} %{NOTSPACE:backend_name}/%{NOTSPACE:server_name} %{INT:time_queue}/%{INT:time_backend_connect}/%{NOTSPACE:time_duration} %{NOTSPACE:bytes_read} %{NOTSPACE:termination_state} %{INT:actconn}/%{INT:feconn}/%{INT:beconn}/%{INT:srvconn}/%{NOTSPACE:retries} %{INT:srv_queue}/%{INT:backend_queue}
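(One thing worth double-checking in HAPROXYHTTPBASE: in a patterns file, `\\"` is a regex for a literal backslash followed by a quote, whereas the raw message only contains a bare `"` around the request line. If the JSON-escaped form of the message was pasted into the debugger, such a pattern would match there but fail against the live event. A quick Python sketch of that regex behaviour, with a shortened, hypothetical request segment:)

```python
import re

# Shortened, hypothetical tail of a haproxy log line: the quotes are plain.
line = 'segment "GET /app/kibana HTTP/1.1"'

# In a patterns file, \\" is a regex for: literal backslash, then a quote.
assert re.search(r'\\"', line) is None

# A bare " (as used in the stock HAPROXYHTTPBASE pattern) matches directly.
assert re.search(r'"', line) is not None
```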

Headers have been added to the haproxy.conf as mentioned in the documentation and various posts.

I would use dissect rather than grok for this. Starting with

    dissect { mapping => { "message" => '<%{pri}>%{ts} %{+ts} %{+ts} %{program}[%{}]: %{host}:%{port} [%{anotherts}] %{} %{backendname}/%{servername} %{} %{} %{} %{} %{} %{} %{} %{} %{headersAndRequest}' } }
    dissect { mapping => { "headersAndRequest" => '{%{requestHeaders}} {%{responseHeaders}} "%{request}"%{}' } }

I haven't typed in all of the field names -- %{} will match but not capture. I'll leave it to you to type in the names so that the data is captured.

Hello,

Thanks for the idea, but no.
I have to do it through grok (there is no reason to give up on grok, considering the grok debugger is OK with my pattern...).

Any other ideas on this issue?

Thank you

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.