Issue with Logstash and Elasticsearch: message (from grok) not creating fields?

First, I'm trying to ship syslog from my HAProxy ALOHA 13.5 LTS appliance to Elasticsearch (Kibana GUI). We cannot install anything on this appliance, so Filebeat is not an option for us... but perhaps I'm wrong?
So I decided to send the syslog data over UDP port 22514.

I have just installed the ELK 8.0 release on Debian 11 (fully updated).

Then I configured my Logstash like this:


cat /etc/logstash/conf.d/haproxy.conf 

input {
  tcp {
    port => 22514
#    type => "haproxy"
  }
  udp {
    port => 22514
#    type => "haproxy"
  }
}

filter {
#  if [type] == "haproxy" {
    grok {
      patterns_dir => "/etc/logstash/patterns"
      match => { "message" => "%{DATE_HAPROXY:haproxy_date}%{SPACE}*%{TIME_HAPROXY:haproxy_time}%{SPACE}*%{LOGLEVEL:log-level}%{SPACE}*%{IPORHOST:haproxy_server}%{SPACE}*%{SYSLOGPROG}%{SPACE}*%{PROG:syslog_service}%{SPACE}*%{IP:client_ip}:%{INT:client_port}%{SPACE}*\[%{HAPROXYDATE:accept_date}\] %{NOTSPACE:frontend_name} %{NOTSPACE:backend_name}/%{NOTSPACE:server_name} %{INT:time_request}/%{INT:time_queue}/%{INT:time_backend_connect}/%{INT:time_backend_response}/%{NOTSPACE:time_duration} %{INT:http_status_code} %{NOTSPACE:bytes_read} %{DATA:captured_request_cookie} %{DATA:captured_response_cookie} %{NOTSPACE:termination_state} %{INT:actconn}/%{INT:feconn}/%{INT:beconn}/%{INT:srvconn}/%{NOTSPACE:retries} %{INT:srv_queue}/%{INT:backend_queue} (\{%{HAPROXYCAPTUREDREQUESTHEADERS}\})?( )?(\{%{HAPROXYCAPTUREDRESPONSEHEADERS}\})?( )?\"(<BADREQ>|(%{WORD:http_verb} (%{URIPROTO:http_proto}://)?(?:%{USER:http_user}(?::[^@]*)?@)?(?:%{URIHOST:http_host})?(?:%{URIPATHPARAM:http_request})?( HTTP/%{NUMBER:http_version})?))?\""}
    }
  }
#}

output {
  elasticsearch {
    hosts => "127.0.0.1:9200"
    index => "haproxy-trafic-%{+YYYY.MM.dd}"
    user => "elastic"
    password => "xxxxxxxxxxxxxxxx"
    ssl => true
    ssl_certificate_verification => false
  }
}

Note: I'm using %{SPACE}* because I don't know how many spaces I might find in the log :-(

Note: the grok works perfectly in the Grok Debugger (Kibana)... all fields are displayed properly.

Here are my patterns:

cat /etc/logstash/patterns/haproxy 

DATE_HAPROXY %{YEAR}-%{MONTHNUM}-%{MONTHDAY}
TIME_HAPROXY %{HOUR}:%{MINUTE}:%{SECOND}
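For illustration, a timestamp fragment that these two custom patterns would match looks like this (example values, not taken from my real logs):

2022-02-18 14:59:29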

The problem is that when I open my index in Discover and choose the "JSON" display, I only see a single field called "message" containing all the data... that is not good at all!!! lol.

(I noticed this in elasticsearch.log: "GrokProcessor [hostname] regular expression has redundant nested repeat operator *")

What does a message look like if you use

output { stdout { codec => rubydebug } }

BTW, the SPACE pattern in grok is \s*, so you do not need the * after %{SPACE}.
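For example, the start of your pattern could be written simply as:

%{DATE_HAPROXY:haproxy_date}%{SPACE}%{TIME_HAPROXY:haproxy_time}%{SPACE}%{LOGLEVEL:log-level}%{SPACE}%{IPORHOST:haproxy_server}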

Also, do you really want to do this in Logstash, or would you rather use a grok processor in an ingest pipeline in Elasticsearch?
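For reference, a minimal sketch of the ingest-pipeline alternative (the pipeline name is a placeholder and the pattern is abbreviated, not your exact config) would look something like this in Kibana Dev Tools:

PUT _ingest/pipeline/haproxy-grok
{
  "processors": [
    {
      "grok": {
        "field": "message",
        "patterns": [ "%{IP:client_ip}:%{INT:client_port} \\[%{HAPROXYDATE:accept_date}\\] ..." ],
        "pattern_definitions": {
          "DATE_HAPROXY": "%{YEAR}-%{MONTHNUM}-%{MONTHDAY}",
          "TIME_HAPROXY": "%{HOUR}:%{MINUTE}:%{SECOND}"
        }
      }
    }
  ]
}

You could then point the Logstash elasticsearch output at it with pipeline => "haproxy-grok", or set it as the index's default_pipeline.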

Hello Badger,
thank you for the answer.
The output is the same as what I see in Elasticsearch / Kibana:
the "message" part is still a single field...
Note: I wrote the output to /tmp/my_output_file.txt, because the console was not showing anything (I don't know where I'm supposed to see the result...).
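For reference, my file output looks roughly like this (a minimal sketch; the codec is left at the plugin default, which I believe is json_lines):

output {
  file {
    path => "/tmp/my_output_file.txt"
  }
}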

Same issue when viewing the output via the local file :slight_smile:

<134>haproxy[14880]: 192.134.152.190:25145 [18/Feb/2022:14:59:29.400] web-service-https~ Internet-prod/back1.local.intra 0/0/0/17/48 200 89837 - - ---- 272/272/1/1/0 0/0 {www.toto.fr} \"GET /core/assets/vendor/jquery/jquery.min.js?v=3.5.1 HTTP/1.1\" 

Here is a version with no custom patterns...

%{IP:client_ip}:%{INT:client_port} \[%{HAPROXYDATE:accept_date}\] %{NOTSPACE:frontend_name} %{NOTSPACE:backend_name}/%{NOTSPACE:server_name} %{INT:time_request}/%{INT:time_queue}/%{INT:time_backend_connect}/%{INT:time_backend_response}/%{NOTSPACE:time_duration} %{INT:http_status_code} %{NOTSPACE:bytes_read} %{DATA:captured_request_cookie} %{DATA:captured_response_cookie} %{NOTSPACE:termination_state} %{INT:actconn}/%{INT:feconn}/%{INT:beconn}/%{INT:srvconn}/%{NOTSPACE:retries} %{INT:srv_queue}/%{INT:backend_queue} \{%{HAPROXYCAPTUREDREQUESTHEADERS}\} \\"%{WORD:Method} %{URIPATHPARAM:request} HTTP/%{NUMBER:http_version}\\" 

This works in EVERY grok debugger (ELK, online, etc.), but not in Logstash.

I do not understand Logstash grok...

I always get "_grokparsefailure" in Kibana (Discover),
even though the grok itself is OK.

This

input { generator { count => 1 lines => [ '<134>haproxy[14880]: 192.134.152.190:25145 [18/Feb/2022:14:59:29.400] web-service-https~ Internet-prod/back1.local.intra 0/0/0/17/48 200 89837 - - ---- 272/272/1/1/0 0/0 {www.toto.fr} "GET /core/assets/vendor/jquery/jquery.min.js?v=3.5.1 HTTP/1.1"' ] } }
filter {
    grok { match => { "message" => '%{IP:client_ip}:%{INT:client_port} \[%{HAPROXYDATE:accept_date}\] %{NOTSPACE:frontend_name} %{NOTSPACE:backend_name}/%{NOTSPACE:server_name} %{INT:time_request}/%{INT:time_queue}/%{INT:time_backend_connect}/%{INT:time_backend_response}/%{NOTSPACE:time_duration} %{INT:http_status_code} %{NOTSPACE:bytes_read} %{DATA:captured_request_cookie} %{DATA:captured_response_cookie} %{NOTSPACE:termination_state} %{INT:actconn}/%{INT:feconn}/%{INT:beconn}/%{INT:srvconn}/%{NOTSPACE:retries} %{INT:srv_queue}/%{INT:backend_queue} {%{HAPROXYCAPTUREDREQUESTHEADERS}} "%{WORD:Method} %{URIPATHPARAM:request} HTTP/%{NUMBER:http_version}"' } }
}

works just fine in logstash, which suggests the problem is with the escapes.
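You can run a throwaway config like that from the command line to check a pattern quickly, for example (path assumed for a Debian package install):

/usr/share/logstash/bin/logstash -f /tmp/test.conf

with an output { stdout { codec => rubydebug } } block added so the parsed event is printed to the console.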

Yes, sure.
But when outputting to the file, I discovered that /tmp/my_output_file.txt (tail -f -n 100 on the file) contained

\"GET /

not

"GET /

so I need to protect it:

\\"%{WORD:Method}

How is the file output configured? It may be escaping the quotes itself.

It works! The problem was simply that I had wrapped the pattern in " (double quotes); I replaced them with ' (single quotes).

So instead of:

match => { "message" => "%{ xxxxxxxxxxxxxxxxxxxxxxxxxxxxxx "}

I now use :slight_smile:

match => { "message" => '%{xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx '}

That's it. (We need the escaping because the syslog from HAProxy ALOHA contains the literal

\"GET)

Thank you, Badger.

But: do you think we should use Filebeat (installed on the ELK server)? We cannot install Filebeat on the ALOHA HAProxy itself.
