New index not created by ingestion pipeline

Hi,
I am using elastic-agent in a k8s cluster with the Kubernetes integration. I have added a custom pipeline for Kubernetes container logs to re-route all logs from containers in a specific namespace. The following is my ingest pipeline:

PUT _ingest/pipeline/logs-kubernetes.container_logs@custom
{
  "processors": [
    {
      "reroute": {
        "namespace": [
          "{{ kubernetes.namespace }}"
        ],
        "if": "((ctx?.kubernetes?.namespace != null) && (ctx.kubernetes.namespace =='fi1-https'))"
      }
    }
  ]
}

When I test the pipeline, I can see that it is expected to create a new index. But after I save the pipeline, I don't see the new index created. Can you share some tips on how to debug this issue, and is there anything wrong with my pipeline?
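For reference, here is how I tested it with the simulate API (the sample document below is trimmed, and the field values are just examples):

POST _ingest/pipeline/logs-kubernetes.container_logs@custom/_simulate?verbose=true
{
  "docs": [
    {
      "_index": "logs-kubernetes.container_logs-default",
      "_source": {
        "kubernetes": {
          "namespace": "fi1-https"
        },
        "message": "sample container log line"
      }
    }
  ]
}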

{
  "docs": [
    {
      "processor_results": [
        {
          "processor_type": "reroute",
          "status": "success",
          "if": {
            "condition": "((ctx?.kubernetes?.namespace != null) && (ctx.kubernetes.namespace =='fi1-https'))",
            "result": true
          },
          "doc": {
            "_index": ".ds-kubernetes.container_logs-fi1-https",
            "_version": "1",
            "_id": "c8RmzJQBRUQciHblRJBL",
            "_source": {

Namespaces cannot have a - in the name; it is not allowed.

You would need to add an extra field and replace the - with a _.

Hi Leandro,
Thanks for the help. I updated my ingest pipeline as suggested, but I still see the same issue:

PUT _ingest/pipeline/logs-kubernetes.container_logs@custom
{
  "processors": [
    {
      "set": {
        "field": "k8s_elastic_namespace",
        "value": "{{ kubernetes.namespace }}",
        "if": "((ctx?.kubernetes?.namespace != null) && (ctx.kubernetes.namespace.contains('fi1-https')))",
        "ignore_failure": true
      }
    },
    {
      "gsub": {
        "field": "k8s_elastic_namespace",
        "pattern": "-",
        "replacement": "_",
        "ignore_missing": true,
        "if": "ctx.k8s_elastic_namespace != null",
        "ignore_failure": true
      }
    },
    {
      "reroute": {
        "namespace": [
          "{{k8s_elastic_namespace}}"
        ],
        "if": "ctx.k8s_elastic_namespace != null",
        "ignore_failure": true
      }
    }
  ]
}

The test pipeline output shows the proper index:

{
  "docs": [
    {
      "doc": {
        "_index": ".ds-kubernetes.container_logs-fi1_https",
        "_version": "1",
        "_id": "w4y8zJQB-OigOW8EhVWn",
        "_source": {
          "container": {
            "image": {

What do you have in the error.message field?

If the reroute processor is failing for any reason, you will have a message in this field because of the global on_failure processor.
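You can search for documents where a processor failed with something like this (the index pattern is an assumption; adjust it to wherever your container logs land):

GET logs-kubernetes.container_logs-*/_search
{
  "size": 5,
  "_source": ["error.message", "kubernetes.namespace"],
  "query": {
    "exists": {
      "field": "error.message"
    }
  }
}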

The reroute processor is pretty simple, there is not much else to configure.

The moment I add the reroute processor, all container messages from the fi1-https namespace are no longer seen. I also don't see the new index created.

You do not have any more messages from your Kubernetes cluster after adding the reroute processor?

Also, the new index would be a datastream named logs-kubernetes.container_logs-fi1_https.

Try to remove all the ignore_failure from the processors in the custom ingest pipeline to see where it fails.
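You can also check whether the target data stream was created at all (the wildcard below is just an example pattern):

GET _data_stream/logs-kubernetes.container_logs-*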

I have messages from other Kubernetes namespaces except for fi1-https. Should I look for the failures in the logs-* data view or some other data view?

Yes, if the reroute processor is failing then you will not have the data stream that you want to create with it, the log will be in the original data stream.

Looks like I am getting a security exception due to permission issues:
{"type":"security_exception","reason":"action [indices:admin/auto_create] is unauthorized for API key id of user [elastic/fleet-server] on indices [logs-kubernetes.container_logs-fi1-https], this action is granted by the index privileges [auto_configure,create_index,manage,all]"}

I created the ingest pipeline as a superuser, but still I am seeing this error.

Yeah, this is what I was thinking could be the issue.

What version of the Stack and the integration are you using?

The permissions of your user do not matter; they are not used. The Fleet-managed Elastic Agent uses API keys for each integration with pretty limited permissions.

I'm assuming that you are using an old Kubernetes integration version; these permissions are available from version 1.42.0, as you can check here.

- version: "1.42.0"
  changes:
    - description: Add permissions to reroute events to logs-*-* for container_logs datastream
      type: enhancement
      link: https://github.com/elastic/integrations/pull/6340

You cannot do this... sorry, Fleet + Agent create very strict API keys under the covers that do not allow rerouting to other data streams.

@umesh2020 What are you actually trying to accomplish?

Some integrations and data streams can already do that, which is the case for kubernetes.container_logs.

This was added in this PR.

I'm using this on production for other integrations like the Kafka Custom Logs and the Cloudwatch Logs.


Ahh yes, LOL, I am using it on one of my K8s clusters now that I'm looking... But when I saw his permission error... the Agent API keys are usually what cause that.

If you go outside of the certain bounds, that's when you'll run into trouble.

We have a multi-tenant application running in a k8s cluster. Each tenant application runs in a separate k8s namespace, so I want to store logs specific to each tenant in a separate index. Then I plan to create a separate Kibana space per tenant and provide access to only that tenant's logs.


Let me check when I get back to my desk...

And technically you're trying to create different data streams...

I think you'll want to use namespace. When I get back to my desk I can triple-check. See what I did.

What is the version of your Kubernetes integration in Fleet?

What you want to do, reroute to a different namespace, is possible, but the integration version needs to be at least 1.42.0.

OK. That's what the reroute is supposed to do, right? I am using version 1.36; I will upgrade and let you know.

So here is how I do it... I like building Composable Pipelines...

PUT _ingest/pipeline/kubernetes.container_logs@custom
{
  "processors": [
    {
      "set": {
        "field": "event.dataset",
        "ignore_empty_value": true,
        "if": "ctx?.event?.dataset == null",
        "copy_from": "data_stream.dataset"
      }
    },
    {
      "pipeline": {
        "name": "sendtoistio",
        "if": "ctx?.kubernetes?.container?.name == 'istio-proxy' || ctx?.k8s?.container?.name == 'istio-proxy'"
      }
    }
  ]
}

PUT _ingest/pipeline/sendtoistio
{
  "processors": [
    {
      "set": {
        "field": "data_stream.dataset",
        "value": "istio.access_logs"
      }
    },
    {
      "set": {
        "field": "data_stream.namespace",
        "value": "default"
      }
    },
    {
      "set": {
        "field": "event.dataset",
        "value": "{{data_stream.dataset}}"
      }
    },
    {
      "reroute": {
        "dataset": "{{data_stream.dataset}}",
        "namespace": "{{data_stream.namespace}}"
      }
    }
  ]
}
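If you want to sanity-check the routing, you can simulate the custom pipeline with a minimal document (the fields below are illustrative, and the sendtoistio pipeline must already exist); the result should show data_stream.dataset rewritten to istio.access_logs:

POST _ingest/pipeline/kubernetes.container_logs@custom/_simulate
{
  "docs": [
    {
      "_index": "logs-kubernetes.container_logs-default",
      "_source": {
        "data_stream": {
          "dataset": "kubernetes.container_logs",
          "namespace": "default"
        },
        "kubernetes": {
          "container": {
            "name": "istio-proxy"
          }
        }
      }
    }
  ]
}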

Yeah, you need to upgrade to version 1.42.0; after that, your reroute processor changing the namespace is expected to work.


Thanks Leandro and Stephen. It's working after I upgraded the Kubernetes integration to the latest version. @stephenb I will try your suggestion of building composable pipelines and update.
