As soon as I've recreated it, data is processed again and the errors in the log stop appearing.
I was trying to understand where the pipelines are stored/persisted and investigate there, but so far to no avail. Before attempting to solve it by rebuilding the cluster, I'm hoping to get some answers here. Any help is greatly appreciated.
Ingest pipelines are stored in the cluster state. For whatever reason,
the cluster state, including these pipelines, is not being registered in the ingest node's pipeline store.
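Since pipelines live in the cluster state metadata, you can inspect what the cluster itself thinks is stored, independently of the node-local pipeline store. A minimal sketch (assuming a node reachable on localhost:9200; adjust the host as needed):

```shell
# Show only the ingest section of the cluster state metadata,
# which is where pipeline definitions are persisted.
curl -XGET 'localhost:9200/_cluster/state/metadata?pretty&filter_path=metadata.ingest'
```

If a pipeline appears here but the ingest node still reports it as missing, that points at the cluster state not being applied to the node's pipeline store rather than at the definition itself being lost.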
I'm still trying to understand this and have made some progress:
If the cluster is set up from scratch, everything is fine.
If I add all my pipelines and then restart Elasticsearch, I'm back to the error:
java.lang.IllegalArgumentException: pipeline with id [xpack_monitoring_2] does not exist
but yet it's there:
curl -XGET 'localhost:9200/_ingest/pipeline/xpack_monitoring_2?pretty'
{
"xpack_monitoring_2" : {
"description" : "2: This is a placeholder pipeline for Monitoring API version 2 so that future versions may fix breaking changes.",
"processors" : [ ]
}
}
So I started adding my pipelines one by one and found that the error starts as soon as I add a pipeline containing a Painless script. Adding the pipeline with the Painless script renders all my pipelines unusable after a restart of the Elasticsearch ingest node, until I delete and re-add all pipelines to the running Elasticsearch server.
I'll try to simplify my setup and then post a sample script & pipeline here.
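For reference, a hypothetical minimal reproduction would be a pipeline whose only processor runs a Painless script. The pipeline id (`test_painless`) and the script body are placeholders I've made up, not taken from the thread; the `inline` key matches the 5.x script processor syntax:

```shell
# Create a minimal pipeline with a single Painless script processor.
# Requires a running cluster on localhost:9200.
curl -XPUT 'localhost:9200/_ingest/pipeline/test_painless' \
  -H 'Content-Type: application/json' -d '
{
  "description": "minimal Painless pipeline for reproducing the restart issue",
  "processors": [
    {
      "script": {
        "lang": "painless",
        "inline": "ctx.copied = ctx.message"
      }
    }
  ]
}'
```

After creating it, restarting the node should show whether the presence of the script processor alone is enough to trigger the `pipeline with id [...] does not exist` errors.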
Additionally this might be related to these topics:
We're still encountering this problem. At least once a week someone on our team runs into it, and re-creating the pipeline usually resolves it. Occasionally an environment is more stubborn and requires some combination of node restarts and deletion/recreation of the pipeline to sort things out. Again, we're on 5.1.1 and would love for someone to come up with a solution (or at least identify/recognize the problem).
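The delete-and-recreate workaround described above can be scripted. A sketch, assuming `curl` and `jq` are available and using `PIPELINE` as a placeholder id (the GET response wraps the definition in the pipeline id, so the inner object has to be extracted before re-uploading):

```shell
PIPELINE=xpack_monitoring_2

# Save the current definition before touching anything.
curl -XGET "localhost:9200/_ingest/pipeline/${PIPELINE}" > "${PIPELINE}.json"

# Extract the inner pipeline body from the wrapped response.
jq ".${PIPELINE}" "${PIPELINE}.json" > body.json

# Delete and re-create the pipeline with the same definition.
curl -XDELETE "localhost:9200/_ingest/pipeline/${PIPELINE}"
curl -XPUT "localhost:9200/_ingest/pipeline/${PIPELINE}" \
  -H 'Content-Type: application/json' -d @body.json
```

This only papers over the symptom, of course; the underlying issue of the pipeline store losing pipelines after a restart remains.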
This hit me yesterday when I added new nodes to the cluster and decided to just trust a yum update. When I ended up with mixed versions, I decided to update the other nodes as well. Now none of my pipelines are working. I had been troubleshooting from the Filebeat side, thinking the problem was there, but obviously it's not.