I am a new user of the platform. My teammate has set up the ELK stack and Kibana to collect logs from our appliances.
I am creating dashboards, but I have noticed that the username is not indexed yet.
As a consequence, I can't show on a graph how many connections each user has made.
I looked at some pages suggesting that I add a grok processor to extract the username, like this: "%{TIME:timestamp}\s%{DATA:server}\s%{DATA:process}: %{DATA:log_message}[%{DATA:username}]:%{GREEDYDATA:additional_info}"
Here is an example of a log from which I want to extract the field: <29>Jul 28 10:26:33 serveur httpd: 2025-07-28 08:26:33.414Z [USERNAME]: Login_Allowed - - to=AdminConnector ip=X.X.X.X auth=LDAP group=GROUP apparently_via=API
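One thing I notice: the suggested pattern starts with %{TIME:timestamp}, while the raw line above begins with a syslog priority (<29>) and a date, so I suspect it cannot match the line from the start. If I understand grok correctly, something along these lines might be closer (the field names are only my guesses):

```
<%{INT:syslog_priority}>%{SYSLOGTIMESTAMP:syslog_timestamp} %{HOSTNAME:server} %{WORD:process}: %{TIMESTAMP_ISO8601:event_timestamp} \[%{DATA:username}\]: %{GREEDYDATA:additional_info}
```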
In the pipeline there are a lot of grok processors. I added mine at the bottom, but the test is not OK and the field is not indexed.
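In case it helps to reproduce: one way to test a pipeline outside the GUI is the _simulate API in Dev Tools, feeding it the raw line. The pipeline name my-pipeline is a placeholder for our real one, and depending on which field the grok processor reads, "message" may need to be "event.original":

```json
POST _ingest/pipeline/my-pipeline/_simulate
{
  "docs": [
    {
      "_source": {
        "message": "<29>Jul 28 10:26:33 serveur httpd: 2025-07-28 08:26:33.414Z [USERNAME]: Login_Allowed - - to=AdminConnector ip=X.X.X.X auth=LDAP group=GROUP apparently_via=API"
      }
    }
  ]
}
```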
Regarding your question: I confirm that the required information is indeed present in the event.original field, and it matches the format you described when I filter using the keyword "Login_Allowed".
When I leave the filter blank, here is the format I see:
And when I look at the index, I see the logs with the previous fields without any issues.
So when I add it, I do it in the GUI below the existing grok patterns, and I copy the format of the previous one.
^%{GREEDYDATA:infoblox_nios.log.dns.message}$
The new one: ^%{INT:syslog_priority}>%{MONTH} %{MONTHDAY} %{TIME:syslog_time} %{HOSTNAME:host} %{WORD:service}: %{TIMESTAMP_ISO8601:event_timestamp} \\[%{DATA:username}\\]: %{GREEDYDATA:log_details}$
Each pattern works individually, right?
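To check this outside Elasticsearch, I put together a small Python sketch that expands the grok tokens into a plain regex, using simplified stand-ins for the standard definitions (so it is only an approximation of real grok behaviour, not what Elasticsearch actually does). It suggests the pattern only matches once a literal < is added before %{INT:syslog_priority}, since the raw line starts with <29>:

```python
import re

# Simplified stand-ins for the standard grok definitions (approximate,
# for local testing only; the real Elasticsearch patterns are stricter).
GROK = {
    "INT": r"[+-]?\d+",
    "MONTH": r"[A-Z][a-z]{2}",
    "MONTHDAY": r"\d{1,2}",
    "TIME": r"\d{2}:\d{2}:\d{2}",
    "HOSTNAME": r"[\w.-]+",
    "WORD": r"\w+",
    "TIMESTAMP_ISO8601": r"\d{4}-\d{2}-\d{2}[T ]\d{2}:\d{2}:\d{2}(?:\.\d+)?Z?",
    "DATA": r".*?",
    "GREEDYDATA": r".*",
}

def grok_to_regex(pattern):
    # Expand %{NAME:field} / %{NAME} tokens into (named) regex groups.
    def repl(m):
        name, field = m.group(1), m.group(3)
        body = GROK[name]
        return f"(?P<{field}>{body})" if field else f"(?:{body})"
    return re.sub(r"%\{(\w+)(:(\w+))?\}", repl, pattern)

log = ("<29>Jul 28 10:26:33 serveur httpd: 2025-07-28 08:26:33.414Z [USERNAME]: "
       "Login_Allowed - - to=AdminConnector ip=X.X.X.X auth=LDAP group=GROUP "
       "apparently_via=API")

# My original attempt: no "<" before %{INT}, so it cannot match "<29>".
broken = (r"^%{INT:syslog_priority}>%{MONTH} %{MONTHDAY} %{TIME:syslog_time} "
          r"%{HOSTNAME:host} %{WORD:service}: %{TIMESTAMP_ISO8601:event_timestamp} "
          r"\[%{DATA:username}\]: %{GREEDYDATA:log_details}$")
print(re.match(grok_to_regex(broken), log))  # None

# Same pattern with a literal "<" added at the start.
fixed = "^<" + broken[1:]
m = re.match(grok_to_regex(fixed), log)
print(m.group("username"), m.group("syslog_priority"))  # USERNAME 29
```

(The `\\[` in the pipeline JSON becomes a single `\[` in the actual pattern, which is what I use here.)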
I also see that we can manually add a field in Kibana, but I'm not familiar with the script required.
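If that manual route means a runtime field, I believe the mapping would look something like this (the index name and source field are guesses on my part; the grok helper in Painless is taken from the Elastic runtime-fields documentation, and this only computes the field at query time rather than indexing it):

```json
PUT my-index/_mapping
{
  "runtime": {
    "username": {
      "type": "keyword",
      "script": {
        "source": "String u = grok('\\[%{DATA:username}\\]:').extract(doc['event.original'].value)?.username; if (u != null) emit(u);"
      }
    }
  }
}
```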
Once you see that the new pipeline is working as expected, you can update the default pipeline setting to point to it; after that, new records will be indexed with the username field.
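Something like the following, where both the index and pipeline names are placeholders for your own:

```json
PUT my-index/_settings
{
  "index.default_pipeline": "my-new-pipeline"
}
```

Note this only affects documents indexed from that point on; existing documents would need a _reindex or _update_by_query run through the pipeline.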