Infoblox - index username to create Dashboard

Hi,

I am a new user of the platform; my teammate has implemented ELK and Kibana to collect logs from our appliances.
I am creating dashboards, but I have noticed that the username is not indexed yet.

As a consequence, I can't show on a graph how many connections each user has made.

I looked at some pages that suggest adding a grok processor to extract the username, like this: "%{TIME:timestamp}\s%{DATA:server}\s%{DATA:process}: %{DATA:log_message}[%{DATA:username}]:%{GREEDYDATA:additional_info}"

Example of one log from which I want to extract the field: <29>Jul 28 10:26:33 serveur httpd: 2025-07-28 08:26:33.414Z [USERNAME]: Login_Allowed - - to=AdminConnector ip=X.X.X.X auth=LDAP group=GROUP apparently_via=API

In the pipeline there are a lot of grok processors. I added mine as the last one at the bottom, but the test is not OK: the field is not indexed.

Can you help me?

Thank you in advance.

Regards

Hello @wabd

Welcome to the community!!

Could you confirm whether the log line below matches yours?

Jul 28 10:26:33 serveur httpd: 2025-07-28 08:26:33.414Z [USERNAME]: Login_Allowed - - to=AdminConnector ip=X.X.X.X auth=LDAP group=GROUP apparently_via=API

Yes, we can create an ingest pipeline and update the default pipeline to extract the USERNAME from the log.

I have only considered the pattern up to USERNAME:

PUT _ingest/pipeline/login_event_pipeline
{
  "description": "Parse login event logs",
  "processors": [
    {
      "grok": {
        "field": "message",
        "patterns": [
          "%{MONTH} %{MONTHDAY} %{TIME:syslog_time} %{HOSTNAME:host} %{WORD:service}: %{TIMESTAMP_ISO8601:event_timestamp} \\[%{DATA:username}\\]"
        ]
      }
    },
    {
      "date": {
        "field": "event_timestamp",
        "formats": [
          "yyyy-MM-dd HH:mm:ss.SSSX"
        ]
      }
    }
  ]
}
PUT 29july/_settings
{
  "index": {
    "default_pipeline": "login_event_pipeline"
  }
}

POST 29july/_doc
{
  "message" : "Jul 29 10:26:33 serveur httpd: 2025-07-28 08:26:33.414Z [JOHN]: Login_Allowed - - to=AdminConnector ip=X.X.X.X auth=LDAP group=GROUP apparently_via=API"
}
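If you want to check the output before indexing, you can dry-run the pipeline with the Simulate Pipeline API (a sketch; the sample document reuses the message from the POST above):

```
POST _ingest/pipeline/login_event_pipeline/_simulate
{
  "docs": [
    {
      "_source": {
        "message": "Jul 29 10:26:33 serveur httpd: 2025-07-28 08:26:33.414Z [JOHN]: Login_Allowed - - to=AdminConnector ip=X.X.X.X auth=LDAP group=GROUP apparently_via=API"
      }
    }
  ]
}
```

The response should show the extracted username, host, and service fields and the parsed @timestamp, without writing anything to the index.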

Thanks!!


Hi Tortoise,

Thank you for your quick support and response.

Regarding your question: I confirm that the required information is indeed present in the event.original field, and it matches the format you described when I filter using the keyword "Login_Allowed".

When I leave the filter blank, here is the format I see:

@timestamp
Jul 29, 2025 @ 15:01:52.000
@version
1
event.original
<29>Jul 29 13:01:52 server httpd: 2025-07-29 11:01:52.156Z [USER]: Login_Allowed - - to=AdminConnector ip=X.X.X.X auth=LDAP group=GROUP apparently_via=API

After that comes the rest of the syslog message. I don't know whether you need the full raw log format to set up this username field properly.

Let me know if you need anything else.

Best regards,

Hello @wabd

Okay, so it seems the log starts with the syslog priority field. Please find the updated pipeline below and review it on your end:

PUT _ingest/pipeline/login_event_pipeline
{
  "description": "Parse login event logs",
  "processors": [
    {
      "grok": {
        "field": "event.original",
        "patterns": [
          "<%{INT:syslog_priority}>%{MONTH} %{MONTHDAY} %{TIME:syslog_time} %{HOSTNAME:host} %{WORD:service}: %{TIMESTAMP_ISO8601:event_timestamp} \\[%{DATA:username}\\]: %{GREEDYDATA:log_details}"
        ]
      }
    },
    {
      "date": {
        "field": "event_timestamp",
        "formats": [
          "yyyy-MM-dd HH:mm:ss.SSSX"
        ]
      }
    }
  ]
}
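You can verify the updated pattern against your raw line with the Simulate Pipeline API before touching the default pipeline (a sketch; the sample is the event.original value you posted):

```
POST _ingest/pipeline/login_event_pipeline/_simulate
{
  "docs": [
    {
      "_source": {
        "event": {
          "original": "<29>Jul 29 13:01:52 server httpd: 2025-07-29 11:01:52.156Z [USER]: Login_Allowed - - to=AdminConnector ip=X.X.X.X auth=LDAP group=GROUP apparently_via=API"
        }
      }
    }
  ]
}
```

If the grok matches, the response will include syslog_priority, username, and log_details as separate fields.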

Thanks!!

Hi Tortoise,

thank you for the answer.

Maybe I added the pattern in the wrong way; I got this error for the whole pipeline:

[
  {
    "set": {
      "field": "event.kind",
      "value": "pipeline_error"
    }
  },
  {
    "append": {
      "field": "error.message",
      "value": "Processor '{{{ _ingest.on_failure_processor_type }}}' {{{#_ingest.on_failure_processor_tag}}}with tag '{{{ _ingest.on_failure_processor_tag }}}' {{{/_ingest.on_failure_processor_tag}}}failed with message '{{{ _ingest.on_failure_message }}}'"
    }
  }
]

And when I look at the index, I see logs with the previous fields, without any issues.

So when I add it, I do it in the GUI below the existing grok patterns, and I copy the format of the previous one:

^%{GREEDYDATA:infoblox_nios.log.dns.message}$

The new one: ^%{INT:syslog_priority}>%{MONTH} %{MONTHDAY} %{TIME:syslog_time} %{HOSTNAME:host} %{WORD:service}: %{TIMESTAMP_ISO8601:event_timestamp} \\[%{DATA:username}\\]: %{GREEDYDATA:log_details}$

Each pattern works individually, right?

I also see that we can add fields manually in Kibana, but I'm not familiar with the script required.

Thank you again for your support.

Hello @wabd

Instead of editing the existing pipeline, maybe follow the steps below:

  1. Create a new pipeline, e.g. login_event_pipeline-new.

  2. Simulate an existing record through this pipeline to see whether it works as per your requirement.

  3. Once you see that the new pipeline is working as expected, update the index's default pipeline to point to it, after which new records will be indexed with the username field.
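In Dev Tools, those steps might look like this (a sketch; the pipeline name login_event_pipeline-new and the index name your-index are placeholders for your own):

```
# 1. Create the new pipeline with the grok/date processors shown earlier
PUT _ingest/pipeline/login_event_pipeline-new
{
  "description": "Parse login event logs",
  "processors": [
    {
      "grok": {
        "field": "event.original",
        "patterns": [
          "<%{INT:syslog_priority}>%{MONTH} %{MONTHDAY} %{TIME:syslog_time} %{HOSTNAME:host} %{WORD:service}: %{TIMESTAMP_ISO8601:event_timestamp} \\[%{DATA:username}\\]: %{GREEDYDATA:log_details}"
        ]
      }
    }
  ]
}

# 2. Simulate an existing record through it
POST _ingest/pipeline/login_event_pipeline-new/_simulate
{
  "docs": [
    {
      "_source": {
        "event": {
          "original": "<29>Jul 29 13:01:52 server httpd: 2025-07-29 11:01:52.156Z [USER]: Login_Allowed - - to=AdminConnector ip=X.X.X.X auth=LDAP group=GROUP apparently_via=API"
        }
      }
    }
  ]
}

# 3. Once the output looks right, point the index at the new pipeline
PUT your-index/_settings
{
  "index": {
    "default_pipeline": "login_event_pipeline-new"
  }
}
```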

Thanks!!