Hi
I am using Elastic Agent in a k8s cluster with the Kubernetes integration. I have added a custom pipeline for Kubernetes container logs to re-route all logs from containers in a specific namespace. The following is the code for my ingest pipeline.
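A minimal sketch of such a reroute pipeline (the pipeline name, the `kubernetes.namespace` condition, and the target namespace value are illustrative assumptions, not the poster's actual configuration):

```json
PUT _ingest/pipeline/logs-kubernetes.container_logs@custom
{
  "processors": [
    {
      "reroute": {
        "if": "ctx.kubernetes?.namespace == 'fi1'",
        "namespace": ["fi1-https"]
      }
    }
  ]
}
```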
When I test the pipeline, I can see that the pipeline is expected to create a new index. But after I save the pipeline, I don't see the new index created. Can you share some tips on how to debug this issue, and whether there is anything wrong in my pipeline?
Yes, if the reroute processor is failing, then the data stream you want it to create will not exist, and the log will stay in the original data stream.
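One way to check for this is to run the pipeline through the simulate API with a sample container-log document and inspect the result for errors; a sketch (the pipeline name and document fields are assumptions for illustration):

```json
POST _ingest/pipeline/logs-kubernetes.container_logs@custom/_simulate
{
  "docs": [
    {
      "_source": {
        "kubernetes": { "namespace": "fi1" },
        "message": "sample log line"
      }
    }
  ]
}
```

If the reroute condition matched, the simulated document's target index should reflect the new data stream; otherwise the response shows the processor error.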
Looks like I am getting a security exception due to permission issues:
{"type":"security_exception","reason":"action [indices:admin/auto_create] is unauthorized for API key id of user [elastic/fleet-server] on indices [logs-kubernetes.container_logs-fi1-https], this action is granted by the index privileges [auto_configure,create_index,manage,all]"}
I created the ingest pipeline as a superuser, but I still see this error.
Yeah, this is what I was thinking could be the issue.
What version of the Stack and the integration are you using?
The permissions of your user do not matter; they are not used. A Fleet-managed Elastic Agent uses API keys with fairly limited permissions for each integration.
I'm assuming that you are using an old Kubernetes integration version; these permissions are available from version 1.42.0, as you can check here:
```yaml
- version: "1.42.0"
  changes:
    - description: Add permissions to reroute events to logs-*-* for container_logs datastream
      type: enhancement
      link: https://github.com/elastic/integrations/pull/6340
```
Ahh yes, LOL, now that I'm looking, I am using an older version on one of my K8s clusters too... But when I saw the permission error, I suspected it right away: the Agent API keys are usually what causes that.
If you go outside of certain bounds, that's when you'll run into trouble.
We have a multi-tenant application running in a k8s cluster. Each tenant's application runs in a separate k8s namespace, so I want to store logs specific to each tenant in a separate index. Then I plan to create a separate Kibana space per tenant and provide access to only that tenant's logs.
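For that design, a single reroute processor can derive the target data stream namespace from the pod's namespace instead of hard-coding one pipeline per tenant; a sketch (the field reference assumes the default Kubernetes integration field `kubernetes.namespace`, with `default` as a fallback when the field is missing):

```json
PUT _ingest/pipeline/logs-kubernetes.container_logs@custom
{
  "processors": [
    {
      "reroute": {
        "namespace": ["{{kubernetes.namespace}}", "default"]
      }
    }
  ]
}
```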
Thanks Leandro and Stephen. It's working after I upgraded the Kubernetes integration to the latest version. @stephenb I will try your suggestion of building composable pipelines and report back.