When attempting to grok the "message" field in a Filebeat pipeline from Kibana, I am getting the following error:
{
  "docs": [
    {
      "error": {
        "root_cause": [
          {
            "type": "illegal_argument_exception",
            "reason": "field [message] of type [java.util.ArrayList] cannot be cast to [java.lang.String]"
          }
        ],
        "type": "illegal_argument_exception",
        "reason": "field [message] of type [java.util.ArrayList] cannot be cast to [java.lang.String]"
      }
    }
  ]
}
The sample document I am using is:
[
  {
    "_source": {
      "message": [
        "2023-09-29 08:08:16"
      ]
    }
  }
]
If I remove the array brackets from the message, making it just:
[
  {
    "_source": {
      "message": "2023-09-29 08:08:16"
    }
  }
]
It works fine and produces the output I want:
{
  "docs": [
    {
      "doc": {
        "_index": "_index",
        "_id": "_id",
        "_version": "-3",
        "_source": {
          "message": "2023-09-29 08:08:16",
          "ftp": {
            "access": {
              "time": "2023-09-29 08:08:16"
            }
          }
        },
        "_ingest": {
          "timestamp": "2023-10-01T13:41:46.152104543Z"
        }
      }
    }
  ]
}
The brackets (i.e. the array) are the default for the Filebeat/Elasticsearch output and input, though, so I am not sure how I would remove them from the Filebeat output in order to process these messages properly.
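One idea would be to flatten the array inside the ingest pipeline itself with a join processor placed in front of the grok, roughly like this untested sketch (assuming "message" is always an array), but I do not know whether that is the right approach here:
{
  "join": {
    "field": "message",
    "separator": " "
  }
}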
The Grok pattern is simply:
%{TIMESTAMP_ISO8601:ftp.access.time}
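For reference, the test I am running from Kibana should boil down to a simulate call roughly like the one below (trimmed to just the grok step; the real pipeline has other processors copied from the IIS module):
POST _ingest/pipeline/_simulate
{
  "pipeline": {
    "processors": [
      {
        "grok": {
          "field": "message",
          "patterns": [
            "%{TIMESTAMP_ISO8601:ftp.access.time}"
          ]
        }
      }
    ]
  },
  "docs": [
    {
      "_source": {
        "message": [
          "2023-09-29 08:08:16"
        ]
      }
    }
  ]
}
With the array in _source.message it fails with the ArrayList error above; with the plain string it succeeds.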
Interesting to note: the IIS pipeline, which is a built-in module, processes these messages just fine, and my pipeline is basically a copy of it with slight modifications (custom ftp fields instead of iis).
Any input would be GREATLY appreciated.
Thanks!