[Ingest Node] pipeline with id [x] does not exist (solved)

SOLVED
Just minutes before the release of 5.0.0-alpha4 I found this bug.
I tried it in `5.0.0-alpha4` and it works fine. That's nice, but it still feels like I've lost some time... :weary:

I'm using elasticsearch-5.0.0-alpha3 and kibana-5.0.0-alpha3.

Using Console I register a pipeline:

PUT _ingest/pipeline/x
{
  "description": "Ingesting Meetup events",
  "processors": [ ... ]
}

When I check if my pipeline exists:

GET _ingest/pipeline/x

It comes back the way it should:

{
  "pipelines": [
    {
      "id": "x",
      "config": {
        "description": "x",
        "processors": [ ... ]
      }
    }
  ]
}

But when I want to use it:

PUT a/b/c?pipeline=x
{
  "z": 0
}

The response is:

{
  "error": {
    "root_cause": [
      {
        "type": "illegal_argument_exception",
        "reason": "pipeline with id [x] does not exist"
      }
    ],
    "type": "illegal_argument_exception",
    "reason": "pipeline with id [x] does not exist"
  },
  "status": 400
}

I also tried the same requests with curl, but got the same result.
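
For reference, the curl equivalents I tried look roughly like this (assuming a default node on localhost:9200; the pipeline body is elided the same way as above):

curl -XPUT 'localhost:9200/_ingest/pipeline/x' -H 'Content-Type: application/json' -d '{ "description": "Ingesting Meetup events", "processors": [ ... ] }'

curl -XPUT 'localhost:9200/a/b/c?pipeline=x' -H 'Content-Type: application/json' -d '{ "z": 0 }'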

Is there something I'm missing? Or is this a bug?

Not much has changed in ingest in alpha4, so if there is some kind of bug I wouldn't expect it to be fixed by just upgrading.

I tried to reproduce the issue, but it works fine with alpha3 here (all commands return the expected results):

PUT _ingest/pipeline/x
{
  "description": "Ingesting Meetup events",
  "processors": [
    {
      "set" : {
        "field" : "y",
        "value" : 0
      }
    }
  ]
}

GET _ingest/pipeline/x

PUT a/b/c?pipeline=x
{
  "z": 0
}

GET /a/b/c

Did you run into this issue on a one-node cluster, or on a different kind of cluster with dedicated ingest nodes and dedicated data+master nodes? It would be great if you're able to reproduce the issue, to see if it is really a bug.
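
If you're not sure which of your nodes have the ingest role, something like this should show it (the node.role column should contain an i for ingest-capable nodes):

GET _cat/nodes?v&h=name,node.role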

I played around with this weird thing some more. I have a single-node cluster. The only configuration different from the default is the cluster name. I'm using the 64-bit Linux TAR.

This morning I tried it again in alpha3, and it didn't work (it couldn't find the pipeline). But I had multiple pipelines registered. I deleted them all, then ran this example again. Now it works!

Now I can't break it any more... I registered other pipelines and it all works fine.
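
For the record, the clean-up that got me back to a working state was roughly this: list what is registered, delete each pipeline by id, then re-register (shown here with x standing in for my real ids):

GET _ingest/pipeline

DELETE _ingest/pipeline/x

PUT _ingest/pipeline/x
{
  "description": "Ingesting Meetup events",
  "processors": [ ... ]
}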

Hi,

I am having the same problem using the latest Elasticsearch version.

I create a pipeline which I can GET through the API fine but then when I specify the same pipeline id on my index request I am getting the same error ("pipeline with id [***] does not exist").

Is anyone else seeing the same issue? I'm wondering if I'm doing something wrong.

On further investigation, grepping through the source code, it looks like the new pipeline isn't getting stored in the PipelineStore class for some reason (see the source link below).

Drilling down into the PipelineStore class, it appears that the store isn't updated either because the current and previous cluster states are the same, or because the current state doesn't contain the new pipeline (or the innerUpdatePipelines() method isn't called at all: https://github.com/elastic/elasticsearch/blob/master/core/src/main/java/org/elasticsearch/ingest/PipelineStore.java#LC71).
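
One thing that may help narrow this down: GET _ingest/pipeline appears to be served from the cluster state, while indexing goes through the node-local PipelineStore cache, so comparing the two shows whether the pipeline at least made it into the cluster state. Something like this (filter_path just trims the output, and I'm assuming pipelines live under metadata.ingest):

GET _ingest/pipeline/x

GET _cluster/state/metadata?filter_path=metadata.ingest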

Any input from the class's committers?

Thanks

I'm also getting this issue.

I have a pipeline which is working, but when I try to update it or add a new one with escaped characters in a grok processor, the pipeline is ignored.

The pipeline I'm adding is:

PUT _ingest/pipeline/log4net-pipeline-advanced
{
    "description": "Log4net Pipeline Advanced",
    "processors": [
      {
        "grok": {
          "field": "message",
          "patterns": ["(?m)%{TIMESTAMP_ISO8601:sourceTimestamp} %{LOGLEVEL:loglevel} %{DATA:appId} \\[%{DATA:path}\\] \\[%{DATA:referrer}\\] \\[%{DATA:userid}\\] %{DATA:module} %{DATA:method} - %{GREEDYMULTILINE:detail}"],
          "pattern_definitions": {
            "GREEDYMULTILINE": "(.|\n)*"
          }
        }
      },
      {
        "date": {
          "field": "sourceTimestamp",
          "target_field": "timestamp",
          "formats": [
            "yyyy-MM-dd HH:mm:ss,SSS"
          ],
          "timezone": "Europe/London"
        }
      },
      {
        "remove": {
          "field": [
            "sourceTimestamp",
            "message"
          ]
        }
      }
    ]
}
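
If it's the escaping in the grok pattern that's being rejected, the simulate API can at least exercise the definition without registering it. The processors block is elided here (same as above), and the sample message is just something I'd expect the pattern to match:

POST _ingest/pipeline/_simulate
{
  "pipeline": {
    "description": "Log4net Pipeline Advanced",
    "processors": [ ... ]
  },
  "docs": [
    {
      "_source": {
        "message": "2017-06-28 14:48:31,428 INFO MyApp [/some/path] [http://example.com] [user42] MyModule MyMethod - first line\nsecond line"
      }
    }
  ]
}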

I am having the exact same problem - grok with escape characters in it. I can GET the pipeline just fine, but I get the "pipeline does not exist" error when I try to actually use it, even when calling simulate.

Has anyone figured out the underlying cause and how to avoid it?

Is there an actual diagnosis or solution here? I don't see a resolution, merely an inability to reproduce. We're seeing this in many of our 5.1.1 single-node clusters. Overwriting the pipeline definition sometimes fixes the issue and sometimes does not.

I have found that simply overwriting an existing pipeline will fix the problem for all pipelines in the same cluster. So simply GET a pipeline and then PUT it back, and the problem will be solved - at least this works on 5.2.x.
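
In other words, something along these lines (my-pipeline is a made-up id; the PUT body is exactly the config object the GET returns):

GET _ingest/pipeline/my-pipeline

PUT _ingest/pipeline/my-pipeline
{
  "description": "...",
  "processors": [ ... ]
}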

We've found that this works most of the time. Unfortunately not all of the time.

We've found this issue is not isolated to our pipelines. We're seeing the same problem coming from X-Pack as well:

[{"type":"export_exception","reason":"java.lang.IllegalArgumentException: pipeline with id
xpack_monitoring_2] does not exist","caused_by":{"type":"illegal_argument_exception","reason":"pipeline with id [xpack_monitoring_2] does not exist"}},{"type":"export_exception","reason":"java.lang.Illega
rgumentException: pipeline with id [xpack_monitoring_2] does not exist","caused_by":{"type":"illegal_argument_exception","reason":"pipeline with id [xpack_monitoring_2] does not exist"}},{"type":"export_excep
on","reason":"java.lang.IllegalArgumentException: pipeline with id [xpack_monitoring_2] does not exist","caused_by":{"type":"illegal_argument_exception","reason":"pipeline with id [xpack_monitoring_2] does no
exist"}}]}},"exceptions":[{"type":"export_exception","reason":"failed to flush export bulk [default_local]","caused_by":{"type":"export_exception","reason":"bulk [default_local] reports failures when ex
rting documents","exceptions":[{"type":"export_exception","reason":"java.lang.IllegalArgumentException: pipeline with id [xpack_monitoring_2] does not exist","caused_by":{"type":"illegal_argument_exception
"reason":"pipeline with id [xpack_monitoring_2] does not exist"}},{"type":"export_exception","reason":"java.lang.IllegalArgumentException: pipeline with id [xpack_monitoring_2] does not exist","caused_by":{"
pe":"illegal_argument_exception","reason":"pipeline with id [xpack_monitoring_2] does not exist"}},{"type":"export_exception","reason":"java.lang.IllegalArgumentException: pipeline with id [xpack_monitoring_2
does not exist","caused_by":{"type":"illegal_argument_exception","reason":"pipeline with id [xpack_monitoring_2] does not exist"}}]}}]}}"}
at respond (C:\AddedApps\kibana-5.1.1-windows-x86\node_modules\elasticsearch\src\lib\transport.js:289:15)
at checkRespForFailure (C:\AddedApps\kibana-5.1.1-windows-x86\node_modules\elasticsearch\src\lib\transport.js:248:7)
at HttpConnector.<anonymous> (C:\AddedApps\kibana-5.1.1-windows-x86\node_modules\elasticsearch\src\lib\connectors\http.js:164:7)
at IncomingMessage.wrapper (C:\AddedApps\kibana-5.1.1-windows-x86\node_modules\elasticsearch\node_modules\lodash\lodash.js:4994:19)
at emitNone (events.js:91:20)
at IncomingMessage.emit (events.js:185:7)
at endReadableNT (_stream_readable.js:974:12)
at _combinedTickCallback (internal/process/next_tick.js:74:11)
at process._tickCallback (internal/process/next_tick.js:98:9)

The problem seems to be multifaceted:

a) if Elasticsearch cannot initialize a single pipeline on startup, it doesn't even try the other pipelines - a single problem brings the house down
b) Elasticsearch seems to initialize things in a bad order, i.e. pipelines before stored scripts - thus pipelines that use stored scripts always fail on startup (see the sketch below).
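
To illustrate (b) with made-up ids (this is the 5.x stored-script syntax, if I have it right): the pipeline below can only be parsed once the script it references exists in the cluster state, so if pipelines are rebuilt before scripts on startup, the pipeline parse fails.

PUT _scripts/painless/my-enrich-script
{
  "script": "ctx.y = 0"
}

PUT _ingest/pipeline/uses-stored-script
{
  "description": "References a stored script",
  "processors": [
    {
      "script": {
        "lang": "painless",
        "id": "my-enrich-script"
      }
    }
  ]
}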

On all our Elasticsearch clusters we see the following type of error in the logs on every startup:

[2017-06-28T14:48:31,428][WARN ][o.e.c.s.ClusterService ] [UTf_V3O] failed to notify ClusterStateApplier
org.elasticsearch.ElasticsearchParseException: Error updating pipeline with id [xxx]
at org.elasticsearch.ingest.PipelineStore.innerUpdatePipelines(PipelineStore.java:85) ~[elasticsearch-5.2.1.jar:5.2.1]
at org.elasticsearch.ingest.PipelineStore.applyClusterState(PipelineStore.java:68) ~[elasticsearch-5.2.1.jar:5.2.1]
at org.elasticsearch.cluster.service.ClusterService.callClusterStateAppliers(ClusterService.java:856) [elasticsearch-5.2.1.jar:5.2.1]
at org.elasticsearch.cluster.service.ClusterService.publishAndApplyChanges(ClusterService.java:810) [elasticsearch-5.2.1.jar:5.2.1]
at org.elasticsearch.cluster.service.ClusterService.runTasks(ClusterService.java:628) [elasticsearch-5.2.1.jar:5.2.1]
at org.elasticsearch.cluster.service.ClusterService$UpdateTask.run(ClusterService.java:1112) [elasticsearch-5.2.1.jar:5.2.1]
at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:527) [elasticsearch-5.2.1.jar:5.2.1]
at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.runAndClean(PrioritizedEsThreadPoolExecutor.java:238) [elasticsearch-5.2.1.jar:5.2.1]
at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.run(PrioritizedEsThreadPoolExecutor.java:201) [elasticsearch-5.2.1.jar:5.2.1]
at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source) [?:1.8.0_112]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source) [?:1.8.0_112]
at java.lang.Thread.run(Unknown Source) [?:1.8.0_112]
Caused by: org.elasticsearch.ResourceNotFoundException: Unable to find script [xxx] in cluster state
at org.elasticsearch.script.ScriptService.getScriptFromClusterState(ScriptService.java:369) ~[elasticsearch-5.2.1.jar:5.2.1]
at org.elasticsearch.script.ScriptService.compileInternal(ScriptService.java:311) ~[elasticsearch-5.2.1.jar:5.2.1]
at org.elasticsearch.script.ScriptService.compile(ScriptService.java:235) ~[elasticsearch-5.2.1.jar:5.2.1]
at org.elasticsearch.ingest.common.ScriptProcessor$Factory.create(ScriptProcessor.java:142) ~[?:?]
at org.elasticsearch.ingest.common.ScriptProcessor$Factory.create(ScriptProcessor.java:88) ~[?:?]
at org.elasticsearch.ingest.ConfigurationUtils.readProcessor(ConfigurationUtils.java:298) ~[elasticsearch-5.2.1.jar:5.2.1]
at org.elasticsearch.ingest.ConfigurationUtils.readProcessorConfigs(ConfigurationUtils.java:251) ~[elasticsearch-5.2.1.jar:5.2.1]
at org.elasticsearch.ingest.Pipeline$Factory.create(Pipeline.java:122) ~[elasticsearch-5.2.1.jar:5.2.1]
at org.elasticsearch.ingest.PipelineStore.innerUpdatePipelines(PipelineStore.java:81) ~[elasticsearch-5.2.1.jar:5.2.1]
... 11 more

The same thing happened in my environment (5.2.2).
I don't know the cause of this problem, but I'll share the solution that worked in my environment:
the problem was solved after deleting the Elasticsearch data folder.

Not sure if this was ever resolved; I am seeing this in 7.6.2 with the RPM version.
