I am in the process of migrating from self-managed Beats to Elastic Agent. I'm currently monitoring an application log with Filebeat, which I'm migrating to a Custom Logs integration in Elastic Agent. The log is ingesting as expected. My Elasticsearch ingest pipeline contains the following processors:
{
  "description": "Process MyApp logs",
  "processors": [
    {
      "rename": {
        "description": "Set event.created",
        "field": "@timestamp",
        "target_field": "event.created",
        "ignore_failure": true
      }
    },
    {
      "set": {
        "description": "Set event.ingested",
        "field": "event.ingested",
        "value": "{{ _ingest.timestamp }}",
        "ignore_failure": true
      }
    }
  ]
}
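To confirm the pipeline itself behaves the same in both setups, I can run it in isolation with the `_simulate` API from Kibana Dev Tools (the sample document below is a placeholder, not a real log line):

```json
POST _ingest/pipeline/_simulate
{
  "pipeline": {
    "description": "Process MyApp logs",
    "processors": [
      {
        "rename": {
          "field": "@timestamp",
          "target_field": "event.created",
          "ignore_failure": true
        }
      },
      {
        "set": {
          "field": "event.ingested",
          "value": "{{ _ingest.timestamp }}",
          "ignore_failure": true
        }
      }
    ]
  },
  "docs": [
    {
      "_source": {
        "@timestamp": "2022-08-10T12:00:00.000Z",
        "message": "example log line"
      }
    }
  ]
}
```

The simulated result shows `event.created` and `event.ingested` populated as expected, which is why I suspect the mapping rather than the pipeline.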
This works fine with Filebeat, and the pipeline also runs successfully with Elastic Agent. However, when I try to view logs ingested through Elastic Agent, I get the following error in Kibana:
{
  "took": 1802,
  "timed_out": false,
  "_shards": {
    "total": 1260,
    "successful": 1259,
    "skipped": 1230,
    "failed": 1,
    "failures": [
      {
        "shard": 0,
        "index": ".ds-logs-my_app-default-2022.08.10-000001",
        "node": "jYPwOQtFQZ-mXZpqHFCn5Q",
        "reason": {
          "type": "illegal_argument_exception",
          "reason": "error fetching [event.created]: Field [event.created] of type [keyword] doesn't support formats.",
          "caused_by": {
            "type": "illegal_argument_exception",
            "reason": "Field [event.created] of type [keyword] doesn't support formats."
          }
        }
      }
    ]
  },
  "hits": {
    "max_score": null,
    "hits": []
  }
}
I do not receive this error when viewing logs ingested through Filebeat. When I look at the index mapping, I see that both `event.created` and `event.ingested` are defined as type `keyword`. According to the Elastic Common Schema, they should be type `date`.
I'm placing this here because it seems to be an issue with either Elastic Agent itself or with the index templates it installs into Elasticsearch, rather than a Kibana issue.
Is this something I'm doing wrong, or is this a bug?