Windows Event Logs stop indexing after upgrade to Elastic Agent 9.3.4 — system.application, system.system, Sysmon, PowerShell, and Custom Windows logs dropped with HTTP 400 Malformed content / dataset mapping conflict; possible Beats-based serialization regression in 9.3.4
Environment
Elastic Stack: 9.3.4
Elasticsearch / Kibana: 9.3.4
Fleet Server: 9.3.4
Elastic Agent: 9.3.4 on Windows endpoints
System integration: 2.17.0
Windows integration: 3.8.3
Custom Windows Event Log integration: enabled on selected policies
Operating systems: Windows Server 2019 / Windows Server 2022, mixed locales: PL / DE / EN
Deployment model: Fleet-managed Elastic Agents, agents output directly to Elasticsearch over HTTPS. No persistent connectivity issues observed.
According to Elastic documentation, the System integration is responsible for Windows Application, System, and Security event logs, while the Windows integration collects Windows-specific logs such as Sysmon and PowerShell. This matches our configuration.
Summary
After upgrading the Elastic Stack, Fleet Server, Elastic Agents, and integrations to 9.3.4, multiple Windows Event Log datasets stopped indexing.
Affected datasets include:
system.application
system.system
windows.sysmon_operational
windows.powershell / windows.powershell_operational
winlog.winlog / Custom Windows Event Log datasets
The notable exception is system.security, which continues to ingest successfully.
According to the Elastic Agent / Filebeat logs, events from the affected streams are being rejected by Elasticsearch and dropped:
Failed to index N events in last 10s: events were dropped!
Cannot index event ... status=400 ... dropping event!
Symptoms
- HTTP 400 illegal_argument_exception: malformed bulk document content
For multiple Windows Event Log datasets, Filebeat logs show errors like:
Cannot index event ... (status=400):
{
"type": "illegal_argument_exception",
"reason": "Malformed content, found extra data after parsing: END_OBJECT"
}
Example event payload contains:
"event": {
"created": {},
"code": "4098",
"kind": "event",
"provider": "Group Policy Files",
"dataset": "system.application"
}
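For comparison (based on the ECS field definition, not taken from our logs): on a healthy agent, `event.created` is an ISO 8601 timestamp string rather than an empty object, for example:

```json
"event": {
  "created": "2026-01-15T10:23:45.123Z",
  "code": "4098",
  "kind": "event",
  "provider": "Group Policy Files",
  "dataset": "system.application"
}
```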
This appears related to the official Elastic Agent 9.3.4 known issue where Beats-based integrations incorrectly serialize timestamp fields in the event body as empty {} JSON objects. The Elastic documentation specifically mentions event.created: {} and states that this applies to Elastic Agent 9.3.4, with a fix expected in 9.3.5; 9.4.0 is not affected.
Temporary workaround under consideration
We are considering adding custom ingest pipelines to remove empty timestamp objects, for example:
PUT _ingest/pipeline/fix_agent_934_empty_timestamp_objects
{
"description": "Workaround for Elastic Agent 9.3.4 empty timestamp object bug",
"processors": [
{
"remove": {
"if": "ctx.event != null && ctx.event.created instanceof Map",
"field": "event.created",
"ignore_missing": true
}
},
{
"remove": {
"if": "ctx.event != null && ctx.event.ingested instanceof Map",
"field": "event.ingested",
"ignore_missing": true
}
}
]
}
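Before attaching it, the pipeline can be verified in Dev Tools with the `_simulate` API; the sample document below mirrors the failing payload shown above:

```json
POST _ingest/pipeline/fix_agent_934_empty_timestamp_objects/_simulate
{
  "docs": [
    {
      "_source": {
        "event": {
          "created": {},
          "code": "4098",
          "kind": "event",
          "provider": "Group Policy Files",
          "dataset": "system.application"
        }
      }
    }
  ]
}
```

In the simulated output, `event.created` should be absent while the remaining `event.*` fields are untouched.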
and then attaching this pipeline to each affected data stream, for example via the integration's `@custom` ingest pipeline hook, rather than editing the managed integration pipelines directly.
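As a sketch, attachment could use the `@custom` ingest pipelines that Fleet-managed integration pipelines invoke when present (the pipeline name below assumes the `logs-system.application` data stream; repeat per affected dataset):

```json
PUT _ingest/pipeline/logs-system.application@custom
{
  "processors": [
    {
      "pipeline": {
        "name": "fix_agent_934_empty_timestamp_objects",
        "ignore_missing_pipeline": true
      }
    }
  ]
}
```

Using `ignore_missing_pipeline` keeps ingestion working if the workaround pipeline is later deleted after upgrading to a fixed Agent version.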