Summary
When using the Elastic Docker Logging Plugin, structured JSON logs emitted by applications cannot be reliably ingested into Elasticsearch, because Docker may split large log lines before they reach the logging plugin. The plugin then wraps each fragment in a separate event under the message field, which makes the original JSON impossible to parse downstream.
This breaks a common use case: applications emitting structured JSON logs to stdout.
Logging configuration:
logging:
  driver: "elastic/elastic-logging-plugin:9.3.1"
  options:
    hosts: "http://localhost:9200"
    index: "docker-logs"
The application already emits JSON log lines. Example application log:
{
  "@timestamp": "2026-03-13T18:10:23.326Z",
  "log.level": "ERROR",
  "message": "Request generated INTERNAL error",
  "trace_id": "b24846d3-8b73-4b91-b695-8c4cc9e92b20",
  "error.stack_trace": "... large stacktrace ..."
}
Problem
The Docker daemon splits stdout lines larger than 16 KiB into multiple fragments before handing them to the logging driver. When that happens, the elastic logging plugin receives fragments instead of a full log line. Because the original JSON is split across multiple events, downstream ingest pipelines cannot parse the structured log.
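To illustrate the mechanics, the sketch below simulates the 16 KiB split and shows how a consumer could rejoin the fragments using a partial-message flag, similar in spirit to the partial metadata the daemon attaches to split messages. The function names and tuple shape are illustrative, not plugin code:

```python
CHUNK = 16 * 1024  # size at which Docker's log copier splits a line

def split_line(line: str):
    """Simulate the daemon splitting one oversized line into fragments.
    Every fragment except the last is flagged as partial."""
    frags = [line[i:i + CHUNK] for i in range(0, len(line), CHUNK)]
    return [(frag, i < len(frags) - 1) for i, frag in enumerate(frags)]

def reassemble(fragments):
    """Join (text, partial) fragments back into complete lines."""
    buf, out = [], []
    for text, partial in fragments:
        buf.append(text)
        if not partial:  # last fragment of the line: flush the buffer
            out.append("".join(buf))
            buf = []
    return out
```

A line carrying a 40 KB stack trace, for example, arrives as three fragments, and only the reassembled string is valid JSON again.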
Expected behavior
The plugin should detect large split JSON logs, reassemble the fragments, and merge the parsed JSON into the ECS envelope instead of wrapping each fragment as a string in the message field.
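The merging step could look roughly like the following sketch: if a (reassembled) line parses as a JSON object, spread its fields into the event envelope; otherwise fall back to today's behavior of storing the raw text under message. This is a hypothetical helper, not the plugin's actual API:

```python
import json

def wrap_event(raw_line: str, envelope: dict) -> dict:
    """Merge a structured JSON log line into the ECS envelope, or fall
    back to wrapping the raw text in the message field."""
    try:
        parsed = json.loads(raw_line)
        if isinstance(parsed, dict):
            # structured log: application fields win over envelope defaults
            return {**envelope, **parsed}
    except json.JSONDecodeError:
        pass
    # unstructured (or still-fragmented) line: current behavior
    return {**envelope, "message": raw_line}
```

With this shape, the example log above would keep log.level, trace_id, and error.stack_trace as top-level fields instead of arriving as an opaque string.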
Actual behavior
Large structured JSON logs are delivered as fragmented events, and structured logging is effectively broken.
Impact
Users must either:
- rely on fragile ingest pipelines to stitch fragments back together
- accept partial JSON parsing
- switch to alternative log-shipping architectures