Elastic Agent Custom UDP Logs: log parsing and field mapping issue

I am currently using Elastic Agent to ship log data to Elasticsearch through Logstash. I have two integrations configured: one for Fortigate and another for Custom UDP Logs.

During the integration with Fortigate, log parsing and field mapping are functioning correctly.

However, with the Custom UDP Logs integration, which uses the logs-system.security-1.41.0 ingest pipeline, I have encountered an issue where the fields parsed from the logs are not being correctly mapped in Elasticsearch, so the data is not parsed properly.

Windows event log event.original:

<13>Aug 07 17:17:53 EDZ-1 AgentDevice=WindowsLog AgentLogFile=Security PluginVersion=WC.MSEVEN6.10.0.1.276 Source=Microsoft-Windows-Security-Auditing Computer=EDZ-Multi-S1 OriginatingComputer=EDZ-Multi-S1 User= Domain= EventID=5152 EventIDCode=5152 EventType=16 EventCategory=12809 RecordNumber=423878501 TimeGenerated=1723022212 TimeWritten=1723022212 Level=LogAlways Keywords=AuditFailure Task=SE_ADT_OBJECTACCESS_FIREWALLPACKETDROPS Opcode=Info Message=Windows 篩選平台已經封鎖封包。 應用程式資訊: 處理程序識別碼: 0 應用程式名稱: - 網路資訊: 方向: 輸入 來源位址: 192.168.3.85 來源連接埠: 9997 目的地位址: 192.168.168.24 目的地連接埠: 50168 通訊協定: 6 篩選器資訊: 篩選器執行階段識別碼: 72151 階層名稱: ICMP 錯誤 階層執行階段識別碼: 28

Logstash config:

input {
  elastic_agent {
    port => 5044
    ssl_enabled => true
    ssl_certificate_authorities => ["/etc/logstash/certs/elasticsearch-ca.pem"]
    ssl_certificate => "/etc/logstash/certs/logstash.crt"
    ssl_key => "/etc/logstash/certs/logstash.pkcs8.key"
    ssl_client_authentication => "required"
  }
}

filter {
}

output {
  elasticsearch {
    hosts => ["https://192.168.3.171:9200"]
    data_stream => "true"
    user => "elastic"
    password => "password"
    cacert => "/etc/logstash/certs/elasticsearch-ca.pem"
  }
}

What could be causing this issue? Are there any configuration steps I might have missed? I hope someone with experience can provide some guidance or suggestions.

It is not exactly an issue; the ingest pipeline was not built to parse this data, so there are no processors to parse it.

The security ingest pipeline expects Windows events to be collected by the Elastic Agent itself; it is the agent that parses the events locally to create the winlog.* fields, so in this case the parsing is almost entirely done on the agent side.

You would need to collect the Windows events using an Elastic Agent, or at least Winlogbeat.

Unfortunately, with the way you are collecting them, you will need to build a custom ingest pipeline to parse those fields.
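If it helps, the key=value structure of those forwarded events is a reasonable fit for the kv processor. A minimal sketch, not a definitive pipeline: the name windows_security_kv and the target field winlog_raw are placeholders, it assumes the syslog header has already been stripped so message starts at AgentDevice=, and the dissect step cuts off the free-text Message= part first because it contains spaces and would otherwise break the key=value split.

PUT _ingest/pipeline/windows_security_kv
{
  "description": "Sketch: parse key=value pairs from syslog-forwarded Windows events (names are placeholders)",
  "processors": [
    {
      "dissect": {
        "field": "message",
        "pattern": "%{winlog_kv} Message=%{winlog_message}",
        "ignore_failure": true
      }
    },
    {
      "kv": {
        "field": "winlog_kv",
        "field_split": " (?=[A-Za-z]+=)",
        "value_split": "=",
        "target_field": "winlog_raw",
        "ignore_failure": true
      }
    }
  ]
}

The lookahead in field_split splits only on spaces that are immediately followed by a new Key= token, so values containing spaces are less likely to be cut in half.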

I have set up index templates and custom ingest pipelines to parse these fields. However, the index template applied to the Custom UDP Logs data is always 'logs-'; for the fields to be parsed successfully, it should be the 'logs-windows-security-events-' template I created. Do you know where the problem might be? Any advice would be appreciated. Thank you.

I'm not sure; I do not use Elastic Agent for custom logs, as I prefer Logstash because of its flexibility.

But looking at the configuration you shared, there is at least one thing that is wrong: the dataset name.

You are using windows-security-events as the dataset name, but you cannot use this; there is a message below the Dataset name field that says you can't use a hyphen (-) in the dataset name.

Also, you are trying to change too many things at the same time, which makes it harder to troubleshoot and understand what is not working.

You are trying to use custom parsing, a custom mapping, and a custom dataset name all at once.

Another thing is that you enabled the Syslog Parser in the integration, but you are also trying to parse the raw message in your custom pipeline.

My suggestion is that you take some steps back and try to change things one at a time.

First, make sure that you are receiving the logs, unparsed, in the default dataset for the UDP integration; it is udp.generic, if I'm not wrong.
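For example, assuming the default namespace, a quick way to confirm events are arriving is to query the data stream from Dev Tools and look at the latest document:

GET logs-udp.generic-default/_search
{
  "size": 1,
  "sort": [{ "@timestamp": "desc" }]
}

If your policy uses a different namespace, adjust the data stream name accordingly.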

After that, change the dataset to something custom, like windows_security_alerts, and make sure that the events are arriving in that dataset.

Then enable syslog parsing and check how the message is transformed and what you need to parse.
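A convenient way to inspect that, and later to test your parsing, is the simulate API; a sketch using a shortened version of your sample event against the pipeline sketched above:

POST _ingest/pipeline/windows_security_kv/_simulate
{
  "docs": [
    {
      "_source": {
        "message": "AgentDevice=WindowsLog AgentLogFile=Security EventID=5152 EventIDCode=5152 Message=..."
      }
    }
  ]
}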

After that you can build your parsing in the custom ingest pipeline and then adjust the mappings.
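For the mapping part, recent 8.x versions let you extend an integration's index template through an @custom component template, so you don't have to replace the built-in logs-* templates. A sketch, assuming the dataset ends up named windows_security_alerts and the kv sketch above writes under winlog_raw:

PUT _component_template/logs-windows_security_alerts@custom
{
  "template": {
    "mappings": {
      "properties": {
        "winlog_raw": {
          "properties": {
            "EventID": { "type": "keyword" },
            "Computer": { "type": "keyword" }
          }
        }
      }
    }
  }
}

If I remember right, the data stream's default pipeline will also call an ingest pipeline named logs-windows_security_alerts@custom when it exists, which is a clean place to hook in your parsing.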