First, I'm trying to send syslog from my HAProxy ALOHA 13.5 LTS appliance to Elasticsearch (Kibana GUI). We cannot install anything on this appliance, so Filebeat is not an option for us ... but perhaps I'm wrong?
So I decided to send the syslog data over UDP port 22514.
I have just installed ELK 8.0 on Debian 11 (fully updated).
Then I configured Logstash like this:
cat /etc/logstash/conf.d/haproxy.conf
input {
  tcp {
    port => 22514
    # type => "haproxy"
  }
  udp {
    port => 22514
    # type => "haproxy"
  }
}

filter {
  # if [type] == "haproxy" {
  grok {
    patterns_dir => "/etc/logstash/patterns"
    match => { "message" => "%{DATE_HAPROXY:haproxy_date}%{SPACE}*%{TIME_HAPROXY:haproxy_time}%{SPACE}*%{LOGLEVEL:log-level}%{SPACE}*%{IPORHOST:haproxy_server}%{SPACE}*%{SYSLOGPROG}%{SPACE}*%{PROG:syslog_service}%{SPACE}*%{IP:client_ip}:%{INT:client_port}%{SPACE}*\[%{HAPROXYDATE:accept_date}\] %{NOTSPACE:frontend_name} %{NOTSPACE:backend_name}/%{NOTSPACE:server_name} %{INT:time_request}/%{INT:time_queue}/%{INT:time_backend_connect}/%{INT:time_backend_response}/%{NOTSPACE:time_duration} %{INT:http_status_code} %{NOTSPACE:bytes_read} %{DATA:captured_request_cookie} %{DATA:captured_response_cookie} %{NOTSPACE:termination_state} %{INT:actconn}/%{INT:feconn}/%{INT:beconn}/%{INT:srvconn}/%{NOTSPACE:retries} %{INT:srv_queue}/%{INT:backend_queue} (\{%{HAPROXYCAPTUREDREQUESTHEADERS}\})?( )?(\{%{HAPROXYCAPTUREDRESPONSEHEADERS}\})?( )?\"(<BADREQ>|(%{WORD:http_verb} (%{URIPROTO:http_proto}://)?(?:%{USER:http_user}(?::[^@]*)?@)?(?:%{URIHOST:http_host})?(?:%{URIPATHPARAM:http_request})?( HTTP/%{NUMBER:http_version})?))?\""}
  }
  # }
}

output {
  elasticsearch {
    hosts => "127.0.0.1:9200"
    index => "haproxy-trafic-%{+YYYY.MM.dd}"
    user => "elastic"
    password => "xxxxxxxxxxxxxxxx"
    ssl => true
    ssl_certificate_verification => false
  }
}
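While debugging, it can help to print parsed events to stdout alongside Elasticsearch; a minimal sketch (added to the same output block, using the rubydebug codec that ships with stock Logstash):

```
output {
  # prints each event with all of its parsed fields, one readable block per event
  stdout { codec => rubydebug }
}
```

Note that when Logstash runs as a systemd service, this output lands in the journal (journalctl -u logstash), not an interactive console.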
Note: I'm using %{SPACE}* because I don't know how many spaces I might find in the log :-(
Note: the grok works perfectly in the Grok Debugger (Kibana) ... all fields are displayed properly.
The problem is that when I want to Discover my index and display a document as "JSON", I only see a single field called "message" containing all the data ... this is not good at all!!! lol.
(I noticed this in elasticsearch.log: "GrokProcessor [hostname] regular expression has redundant nested repeat operator *")
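That warning is likely caused by the %{SPACE}* constructs: the stock grok SPACE pattern is already defined as \s* (a run of zero or more whitespace characters), so adding another * on top nests two repeat operators. A hedged sketch of the simplification:

```
# %{SPACE} already matches any run of whitespace (it is defined as \s*),
# so %{SPACE}* is redundant. Replacing each %{SPACE}* with a plain %{SPACE}
# should silence the warning without changing what the pattern matches:
match => { "message" => "%{DATE_HAPROXY:haproxy_date}%{SPACE}%{TIME_HAPROXY:haproxy_time}%{SPACE}%{LOGLEVEL:log-level} ..." }
```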
Hello badger,
thank you for your answer.
It is the same as what I can see in Elasticsearch / Kibana.
The "message" part is the same field ...
Note: I wrote the output to /tmp/my_output_file.txt, because the console wasn't showing anything (I don't know where I should look for the result ...)
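For reference, a sketch of how that file output could be wired up (the path is the one from the post; the rubydebug codec is an assumption, chosen so every parsed field is visible instead of the default JSON lines):

```
output {
  file {
    path  => "/tmp/my_output_file.txt"
    codec => rubydebug   # one readable multi-line block per event
  }
}
```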