I set up an ingest pipeline from the Kibana Dev Tools console with this request:
PUT _ingest/pipeline/ams-log-pipeline
{
  "processors": [
    {
      "dissect": {
        "field": "message",
        "pattern": "%{@timestamp} %{logLevel} %{className} %{messageContent} %{}",
        "if": "ctx?.fields?.app_id != null && ctx.fields.app_id != 'ams-kafka-consumer'",
        "ignore_failure": true
      }
    },
    {
      "dissect": {
        "field": "message",
        "pattern": "%{logDate} %{logTime} %{logLevel} %{className} %{task} CACHE_MANAGER_RESPONSE %{kafkaTopic} %{statusCode} %{statusString}",
        "if": "ctx?.fields?.app_id != null && ctx.fields.app_id == 'ams-kafka-consumer'",
        "ignore_failure": true
      }
    }
  ],
  "on_failure": [
    {
      "set": {
        "description": "Record error information",
        "field": "error_information",
        "value": "Processor {{ _ingest.on_failure_processor_type }} with tag {{ _ingest.on_failure_processor_tag }} in pipeline {{ _ingest.on_failure_pipeline }} failed with message {{ _ingest.on_failure_message }}"
      }
    }
  ]
}
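In case it helps with reproducing this, the pipeline can be exercised with the _simulate API; the app_id and message values below are made-up placeholders, not our real log lines:

POST _ingest/pipeline/ams-log-pipeline/_simulate
{
  "docs": [
    {
      "_source": {
        "fields": {
          "app_id": "some-other-app"
        },
        "message": "2024-05-01T10:15:00Z INFO com.example.Worker started batch extra-tokens"
      }
    }
  ]
}

That simulated document should be handled by the first dissect processor, since fields.app_id is present and is not 'ams-kafka-consumer'.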
The pipeline appears to pass through documents that neither of its two dissect processors can handle, rather than failing and dropping them outright, which is what we want.
However, I see shard failures when I run queries in Kibana against documents that were ingested through this pipeline. In addition, the on_failure handler is not populating the error_information field, which I understood should record details of what went wrong in the two dissect processors.
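For concreteness, the kind of search that surfaces the failures is a simple time-range query, roughly what Kibana issues for its time filter (ams-logs-* below is a placeholder for our real index pattern):

GET ams-logs-*/_search
{
  "query": {
    "range": {
      "@timestamp": {
        "gte": "now-24h"
      }
    }
  }
}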
The shard failures mention an illegal argument exception. I know what that exception is in general, but in this case I just can't figure out what is throwing it.
What do I need to do to fix this and eliminate the shard failures?