RBAC on multiple projects' RUM data

Hello Team,

We have an observability setup receiving data from multiple projects on a Basic license, and all projects' data goes to a common index for APM & RUM.

Elasticsearch - 8.4.3
Kibana - 8.4.3
APM - 8.4.3

We have given separate service names to the services of the different projects.

Problem Statement:
We need to set up RBAC so that access to APM & RUM data is restricted to the respective project, based on service name. Is there any way to set up RBAC to fulfil this requirement?


Hi @pratikshatiwari, if it's acceptable, try creating different indices and spaces for each project; that would be a much more manageable solution for multi-tenant ESaaS.
If you still want RBAC based on service name, you can create query-based roles and map them to your users. Refer: Defining roles | Elasticsearch Guide [8.6] | Elastic
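For illustration only (the role name, index patterns, and service name below are hypothetical, and note that this kind of query-based document-level security requires at least a Platinum license), such a role could be created via the security API roughly like this:

```
POST _security/role/project_a_reader
{
  "indices": [
    {
      "names": ["traces-apm*", "metrics-apm*", "logs-apm*"],
      "privileges": ["read"],
      "query": {
        "term": { "service.name": "ecommerce-eshop" }
      }
    }
  ]
}
```

Users mapped to this role would then only see documents whose service.name matches their project.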

Hi @Ayush_Mathur

Thank you for your reply

As per the first solution, it requires a separate target index for each project, but my problem is how to set a separate index name in the APM & RUM configuration within one Elasticsearch cluster.


If project A sends APM & RUM data to Elasticsearch cluster "XYZ", it should go to index "A".
If project B sends APM & RUM data to Elasticsearch cluster "XYZ", it should go to index "B".

But I am not sure how to achieve this when the cluster is the same for both projects, as APM & RUM create their own index format.

And for the second solution, achieving field-based (document-level) security requires at minimum a Platinum license, which we are not planning to purchase.

Kindly suggest


Hi @Ayush_Mathur

Also, please find below the configuration details of how I have set up my application to send APM & RUM data to the APM server.

APM Configuration as below:


"ElasticApm": {
  "ServerUrls": "https://XX.XX.XX.XX:8200",
  "ServerCert": "~/lib/cert/ca.crt",
  "VerifyServerCert": "true",
  "SecretToken": "",
  "ServiceName": "ecommerce-eshop",
  "Environment": "DEV"
}

RUM configuration appended in the HTML file as below:

<script src="~/lib/rum/elastic-apm-rum.umd.min.js"></script>
<script>
  elasticApm.init({
    serviceName: 'ecommerce-rum',
    serverUrl: 'https://XX.XX.XX.XX:8200',
    "ServerCert": "~/lib/cert/ca.crt", // note: not a documented RUM agent option; TLS trust is handled by the browser
    // "VerifyServerCert": "true",
    environment: 'DEV'
  })
</script>

I believe something similar was raised on Elastic's GitHub, which provides some alternatives for the first option: Support to create index for each service · Issue #4025 · elastic/apm-server · GitHub

Also, you can probably update the underlying index template according to your requirements: View the Elasticsearch index template | APM User Guide [8.6] | Elastic

OR create an ingest pipeline to define your index before actually storing the document in Elasticsearch: Parse data using ingest pipelines | APM User Guide [master] | Elastic

Hi @Ayush_Mathur

Thank you again for the reply. I am trying to follow the comments you shared; will you be able to help with a sample configuration showing where I need to apply the changes?

As per the shared links, I need to create an ingest pipeline in the Elasticsearch cluster, but how do I map the pipeline to my agents?

Do you have any sample configuration that I can refer to?


Hi @pratikshatiwari , you can possibly follow this section of Elastic documentation: Ingest pipelines | Elasticsearch Guide [8.6] | Elastic
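As a rough sketch (the pipeline and index names below are made up), you would create the pipeline once in the cluster and attach it on the Elasticsearch side rather than in the agents, e.g. via the index.default_pipeline index setting, so documents pass through it automatically at ingest time:

```
PUT _ingest/pipeline/route-by-service
{
  "description": "Route documents to a per-project index based on service.name",
  "processors": [
    {
      "set": {
        "if": "ctx.service?.name == 'ecommerce-eshop'",
        "field": "_index",
        "value": "project-a-apm"
      }
    }
  ]
}

PUT my-index/_settings
{
  "index.default_pipeline": "route-by-service"
}
```

For APM data streams, the default_pipeline setting is better placed in the relevant index template so that newly created backing indices pick it up as well.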

hi @Ayush_Mathur

I will try this out in my setup. At present I am not finding the correct steps to implement this approach; if you have some, that will be helpful.


Hi @pratikshatiwari, I haven't worked with Elastic Agent or APM in particular, but the basic principle behind ingest pipelines remains the same. You need to identify some field based on which you can route the log to a particular index. Once you have that, you can use a script processor in the pipeline to update ctx._index based on string matching or the existence of your field.

For instance, if I know that agent.module=apm for APM logs and agent.module=rum for RUM logs, I can create a script-processor-based pipeline like this:

    {
      "description": "decide index: apm.* or rum.*",
      "processors": [{
        "script": {
          "source": """
            if (ctx['@timestamp'] != null) {
              def year  = ctx['@timestamp'].substring(0, 4);
              def month = ctx['@timestamp'].substring(5, 7);
              def date  = ctx['@timestamp'].substring(8, 10);
              StringBuffer buff;
              if (ctx['agent.module'].equals("apm")) {
                buff = new StringBuffer('apm.'.concat(year).concat('.').concat(month).concat('.').concat(date));
              } else if (ctx['agent.module'].equals("rum")) {
                buff = new StringBuffer('rum.'.concat(year).concat('.').concat(month).concat('.').concat(date));
              }
              if (buff != null) {
                ctx._index = buff.toString();
              }
            }
          """
        }
      }]
    }
This tells Elasticsearch that the above log will be stored in the index named in ctx._index, which is updated by the pipeline. By default, ctx._index is populated by the log shipper based on the index settings defined in its configuration, and the log is therefore stored in that index.
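As a quick sanity check (the sample document values here are made up), the routing logic can be exercised without indexing anything via the simulate API, which returns the _index each test document would be written to:

```
POST _ingest/pipeline/_simulate
{
  "pipeline": {
    "processors": [{
      "script": {
        "source": "if (ctx['agent.module'] == 'rum') { ctx._index = 'rum.' + ctx['@timestamp'].substring(0, 4) + '.' + ctx['@timestamp'].substring(5, 7) + '.' + ctx['@timestamp'].substring(8, 10); }"
      }
    }]
  },
  "docs": [
    { "_source": { "@timestamp": "2023-02-01T10:00:00Z", "agent.module": "rum" } }
  ]
}
```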

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.