Ingest pipeline error - failed to parse with all enclosed parsers

Hello everybody,

I've recently been encountering a problem with my ingest pipeline, where part of it tries to parse a date. If it fails to do so, it should continue and simply not create the fields time_log and rest_message.
This works for any message that does not contain a date: the rest of the log is still parsed. However, for some datetime formats it crashes and does not even create an entry at all. E.g. 2021-01-12 13:39:28.620+0100 is parsed, while 2021-02-09 22:15:28 is not.
The error message that I'm getting is:
"type": "illegal_argument_exception", "reason": "failed to parse date field [2021-02-09 22:15:28] with format [strict_date_optional_time||epoch_millis]", "caused_by": {"type": "date_time_parse_exception", "reason": "date_time_parse_exception: Failed to parse with all enclosed parsers"}

I thought this case would just be ignored; instead, it seems the date fails to parse and no entry is created at all. How can I make sure that either:

  1. the date is parsed, or
  2. the pipeline fails but still creates a document?
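For anyone wanting to reproduce behaviour like this without indexing real data, the _simulate endpoint runs a sample document through a pipeline and shows the result or the error. A minimal sketch (the sample message and the single processor shown are just placeholders, not the full pipeline):

```json
POST _ingest/pipeline/_simulate
{
  "pipeline": {
    "processors": [
      {
        "grok": {
          "field": "message",
          "patterns": [
            "%{TIMESTAMP_ISO8601:time_log} %{GREEDYDATA:rest_message}"
          ]
        }
      }
    ]
  },
  "docs": [
    { "_source": { "message": "2021-02-09 22:15:28 some log text" } }
  ]
}
```

Note that _simulate only exercises the pipeline itself; a failure caused by the index mapping would still only show up at actual index time.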

This is the relevant part of my pipeline:
{
  "grok": {
    "field": "message",
    "patterns": [
      "%{TIMESTAMP_ISO8601:time_log:date} %{GREEDYDATA:rest_message}"
    ],
    "ignore_failure": true
  }
},

all the best,
Glenn

Is there more to your pipeline, specifically a date processor?

Hey,

thanks a lot for your answer!
Yes, this is the entire pipeline (all the processors):

"processors": [
  {
    "set": {
      "field": "@timestamp",
      "value": "{{_ingest.timestamp}}",
      "if": "!(ctx.containsKey('@timestamp'))",
      "on_failure": [
        {
          "set": {
            "field": "error.message",
            "value": "field \"foo\" does not exist, cannot rename to \"bar\""
          }
        }
      ]
    }
  },
  {
    "grok": {
      "field": "message",
      "patterns": [
        "%{TIMESTAMP_ISO8601:time_log:date} %{GREEDYDATA:rest_message}"
      ],
      "on_failure": [
        {
          "set": {
            "field": "error.parsing",
            "value": "Could not parse timestamp of incoming message"
          }
        }
      ]
    }
  },
  {
    "grok": {
      "field": "rest_message",
      "patterns": [
        "[^{]*RAM used:%{SPACE}%{NUMBER:stats.ram_usage:float}",
        "Disk used:%{SPACE}%{NUMBER:stats.disk_usage:float}%",
        "SWAP used:%{SPACE}%{NUMBER:stats.swap_usage:float}%",
        "CPU usage:%{SPACE}%{NUMBER:stats.cpu_usage:float}%",
        "GPU usage:%{SPACE}%{NUMBER:stats.gpu_usage:float}%",
        "CPU temp:%{SPACE}%{NUMBER:stats.cpu_temp:float}",
        "GPU temp:%{SPACE}%{NUMBER:stats.gpu_temp:float}",
        "Board temp: %{NUMBER:stats.board_temp:float}",
        "Case Temp: %{NUMBER:stats.case_temp:float}",
        "Case Humidity: %{NUMBER:stats.case_humidity:float}%",
        "Router Temperature: %{NUMBER:stats.router_temp:float}°C",
        "Signal Strength: %{NUMBER:stats.signal:float}dB",
        "STATS: Video - %{NUMBER:fps.video:int} FPS; ANPR - %{NUMBER:fps.anpr:int} FPS; Vehicle Detection - %{NUMBER:fps.yolo:int} FPS",
        "DIRECTION: %{GREEDYDATA:detection.direction:string}",
        "ACCURACY: %{NUMBER:detection.accuracy:float}",
        "DETECTIONS: %{NUMBER:detection.number:integer}"
      ],
      "on_failure": [
        {
          "set": {
            "field": "warning.parsing",
            "value": "No matching pattern was found."
          }
        }
      ]
    }
  },
  {
    "date": {
      "field": "time_log",
      "target_field": "time_log",
      "formats": [
        "yyyy-MM-dd HH:mm:ss.SSSZ"
      ],
      "on_failure": [
        {
          "set": {
            "field": "error.parsing",
            "value": "Date could not be parsed"
          }
        }
      ]
    }
  },
  {
    "remove": {
      "field": "rest_message",
      "on_failure": [
        {
          "set": {
            "field": "error.parsing",
            "value": "Date could not be parsed"
          }
        }
      ]
    }
  }
]

all the best,
Glenn

Thanks.
It's better if you can format your code/logs/config using the </> button, or markdown-style backticks. It makes things easier to read, which helps us help you 🙂

It looks like this is the issue: your date processor only lists the format yyyy-MM-dd HH:mm:ss.SSSZ.

That does not match 2021-02-09 22:15:28.
You should add another pattern to the formats field that matches it, and you should be fine.
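Something like this, for example (the second format line is an assumption based on your example timestamp; adjust as needed):

```json
{
  "date": {
    "field": "time_log",
    "target_field": "time_log",
    "formats": [
      "yyyy-MM-dd HH:mm:ss.SSSZ",
      "yyyy-MM-dd HH:mm:ss"
    ]
  }
}
```

The date processor tries each entry in formats in order, so listing both the fractional-second variant and the plain one should cover both of your examples.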

Thanks!
I will try this as soon as possible. Apart from that: is there any way to catch the error or be notified?
Let's say another unsupported time format shows up in the future.
I thought such an error would be caught by the on_failure section?

all the best,
Glenn

I'm not super familiar with that, but I would imagine that if Elasticsearch cannot parse the timestamp, it can't do much other than reject it.

You might be able to just dump the event into another error index though?
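One way that could be sketched is with a pipeline-level on_failure block, which runs whenever a processor fails without a handler of its own. Setting the _index metadata field there reroutes the failed document instead of dropping it (the failed-{{_index}} naming is just an example, and the single processor shown stands in for the full pipeline):

```json
{
  "processors": [
    {
      "date": {
        "field": "time_log",
        "formats": [ "yyyy-MM-dd HH:mm:ss.SSSZ" ]
      }
    }
  ],
  "on_failure": [
    {
      "set": {
        "field": "_index",
        "value": "failed-{{_index}}"
      }
    },
    {
      "set": {
        "field": "error.message",
        "value": "{{_ingest.on_failure_message}}"
      }
    }
  ]
}
```

One caveat: this only catches failures inside the pipeline. If the document later fails at index time, e.g. because the raw time_log string conflicts with a date mapping, that happens after the pipeline has already run, so this handler won't see it.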