I'm using Filebeat and Elasticsearch 7.1.0 to capture logs via the Docker input and parse them with the elasticsearch module, using a config such as:
filebeat.autodiscover:
  providers:
    # elasticsearch
    - type: docker
      labels.dedot: true
      templates:
        - condition:
            contains:
              docker.container.image: elasticsearch
          config:
            - module: elasticsearch
              audit:
                enabled: true
                input:
                  type: docker
                  containers.ids:
                    - "${data.docker.container.id}"
                  exclude_files: ['\.gz$']
                  containers.stream: stdout
                  include_lines: ['"type": "audit"']
This successfully ships logs from my Elasticsearch-based Docker services to my Elasticsearch cluster; however, I see the following error in every audit record:
"error": {
  "message": "field [@timestamp] not present as part of path [elasticsearch.audit.@timestamp]"
}
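To see why this message appears, here is a toy model of dotted-path field resolution in an ingest processor. This is my own simplification for illustration, not Elasticsearch's actual code: the point is that the lookup walks the path key by key, and fails when the `@timestamp` key is absent under `elasticsearch.audit`.

```python
# Toy model (my simplification, not Elasticsearch's implementation) of how an
# ingest processor resolves a dotted field path inside a document.
def resolve(doc: dict, path: str):
    current = doc
    for part in path.split("."):
        if not isinstance(current, dict) or part not in current:
            raise KeyError(
                f"field [{part}] not present as part of path [{path}]")
        current = current[part]
    return current

# A hypothetical audit event whose timestamp ended up under a plain
# "timestamp" key, so the "@timestamp" key the pipeline expects is missing.
event = {"elasticsearch": {"audit": {"timestamp": "2019-05-22T10:00:00Z"}}}

try:
    resolve(event, "elasticsearch.audit.@timestamp")
except KeyError as err:
    # Raises the same "not present as part of path" message Filebeat reports.
    print(err)
```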
Looking at the ingest pipeline installed into Elasticsearch by Filebeat (_ingest/pipeline/filebeat-7.1.0-elasticsearch-audit-pipeline), I believe the problem is:
{
  "date": {
    "field": "elasticsearch.audit.@timestamp",
    "target_field": "@timestamp",
    "formats": [
      "ISO8601"
    ],
    "ignore_failure": true
  }
},
{
  "remove": {
    "field": "elasticsearch.audit.@timestamp"
  }
},
These processors should instead match the corresponding lines in the server pipeline (_ingest/pipeline/filebeat-7.1.0-elasticsearch-server-pipeline):
{
  "date": {
    "field": "elasticsearch.server.timestamp",
    "target_field": "@timestamp",
    "formats": [
      "ISO8601"
    ],
    "ignore_failure": true
  }
},
{
  "remove": {
    "field": "elasticsearch.server.timestamp"
  }
},
This appears to be a problem in the Filebeat elasticsearch audit pipeline definition.
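For illustration, one plausible shape for a fix would be to rename the field first so the date/remove processors can mirror the server pipeline. The `rename` step and the `elasticsearch.audit.timestamp` field name below are my guesses, not the actual upstream patch; `ignore_missing` keeps the processors from failing on events where the field is absent:

```json
{
  "rename": {
    "field": "elasticsearch.audit.@timestamp",
    "target_field": "elasticsearch.audit.timestamp",
    "ignore_missing": true
  }
},
{
  "date": {
    "field": "elasticsearch.audit.timestamp",
    "target_field": "@timestamp",
    "formats": [
      "ISO8601"
    ],
    "ignore_failure": true
  }
},
{
  "remove": {
    "field": "elasticsearch.audit.timestamp",
    "ignore_missing": true
  }
},
```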
I think this may also explain why many of the logs processed by the elasticsearch module don't show up when using the default Filebeat-installed index template and pattern: many of the documents end up with no top-level "@timestamp" field, so they are excluded from "filebeat-*" queries, which use that field as the time filter field.
N.B. the deprecation and slowlog pipelines both appear to be correct (they follow the same pattern as the server pipeline).