Logstash, RBAC and granted document level security not returning results in Kibana

Hoping someone will be able to help me out. This is my first message on this forum, and I am absolutely stuck! My apologies if long-winded, though it's the only way for me to stage the situation. Plus, I am hoping my code is formatted appropriately in this topic. Let me first mention that I have three, on-prem, multi-cluster Elastic environments residing within DEV, UAT and PROD. Permissions to access documents via Kibana are leveraging both role-based security as well as document level security. Any partner within our organization has access to Kibana by simply use their enterprise login ID and password. Once logged in, then heading over to discover, a partner chooses which index they wish to view data. Now, the only way a partner will ever see their information is if, and only if, they reside within one of three Active Directory groups which are added via a filter, mutate, "add_fields" within my logstash pipeline. I just rebuilt out DEV clusters and thought reconfigured everything correctly, and as luck would have it, partners, cannot see their log messages and documents, even though the incoming messages have the correct fields.

Incoming messages flow from Filebeat to one or more Kafka topics. Other message sources that don't use Filebeat also flow into Kafka topics, and I use Logstash to drain them. For the sake of discussion, messages are either plain text or Confluent Avro. In Logstash, I perform the usual input, filter (mutate/add_field), then output. Here is an example of my Logstash pipeline, managed through Kibana:

input {
    # Confluent Avro messages, decoded via the schema registry
    kafka {
        bootstrap_servers => "zz12345.zzzz.com:9094,zz12346.zzzz.com:9094,zz12347.zzz.com:9094"
        topics => ["zz_kaf_avro_topic", "zz_rdr_fms_1.zzzkafs1.sourcedb.zzzin1zzz.accpos"]
        decorate_events => true
        tags => ["avro"]
        id => "confluent_avro"
        codec => avro_schema_registry {
            endpoint => "http://zz123459.zzzz.com:8081"
        }
        value_deserializer_class => "org.apache.kafka.common.serialization.ByteArrayDeserializer"
    }
    # Plain-text messages
    kafka {
        bootstrap_servers => "zz12345.zzzz.com:9094,ut12346.zzzz.com:9094,zz12347.zzzz.com:9094"
        topics => ["zz_zzz_rep_holding_all_sit","zz_zzz_rep_nav_share_price_all_sit","zz_zzz_confirmed_prices_sit","zz_zzz_updated_holding_sit"]
        decorate_events => true
        tags => ["plain"]
        id => "plain_text"
    }
    # Messages shipped by Filebeat
    beats {
        port => 43600
        tags => ["filebeat_kafka"]
        id => "filebeat_kafka"
    }
}
filter {
    if "filebeat_kafka" in [tags] {
        grok {
            match => {
                "message" => "%{LOGLEVEL:loglevel}"
            }
            tag_on_failure => ["no_loglevel"]
        }
        mutate {
            add_field => {
                "[msk][acl01]" => "xxx-p-abc-user"
                "[msk][acl02]" => "xxx-p-abc-reviewer"
                "[msk][acl03]" => "xxx-p-abc-user-admin"
                "[topic]" => "%{[@metadata][kafka][topic]}"
                "[offset]" => "%{[@metadata][kafka][offset]}"
                "[partition]" => "%{[@metadata][kafka][partition]}"
            }
        }
    } else if "avro" in [tags] {
        mutate {
            add_field => {
                "[msk][acl01]" => "xxx-p-abc-user"
                "[msk][acl02]" => "xxx-p-abc-reviewer"
                "[msk][acl03]" => "xxx-p-abc-user-admin"
                "[topic]" => "%{[@metadata][kafka][topic]}"
                "[offset]" => "%{[@metadata][kafka][offset]}"
                "[partition]" => "%{[@metadata][kafka][partition]}"
            }
        }
    } else {
        mutate {
            add_field => {
                "[msk][acl01]" => "xxx-p-abc-user"
                "[msk][acl02]" => "xxx-p-abc-reviewer"
                "[msk][acl03]" => "xxx-p-abc-user-admin"
                "[topic]" => "%{[@metadata][kafka][topic]}"
                "[offset]" => "%{[@metadata][kafka][offset]}"
                "[partition]" => "%{[@metadata][kafka][partition]}"
            }
        }
    }
}
output {
    elasticsearch {
        hosts => [ "tasks.elk-zzzz-zz01234-dcaas-s4-cn:9200", "tasks.elk-zzzz-zz01235-dcaas-s4-cn:9200", "tasks.elk-zzzz-zz01236-dcaas-s4-cn:9200" ]
        index => "kafka-%{[topic]}-%{+YYYY.MM.dd}"
        user => "${PIPELINE_USER}"
        password => "${PIPELINE_PD}"
    }
}
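
For what it's worth, I can see the ACL fields on the indexed documents themselves. A quick Dev Tools check I use (the index pattern here is just an example) is:

GET kafka-*/_search
{
  "size": 1,
  "_source": [ "msk", "topic", "partition", "offset" ],
  "query": {
    "exists": { "field": "msk.acl01" }
  }
}

so I am fairly confident the pipeline side is populating [msk][acl01] through [msk][acl03] correctly.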

As mentioned above, our clusters use a combination of RBAC and document level security. Here is a snippet of my elasticsearch.yml:

xpack.security.authc:
    realms:
        file.f1:
            order: 0
        native.n1:
            order: 1
        active_directory.zzzz:
            order: 2
            enabled: true
            url: "ldap://ent.ad.zzzz.com"
            domain_name: "ent.ad.zzzz.com"
            load_balance.type: dns_round_robin
            unmapped_groups_as_roles: true
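
One thing I can do is confirm which roles Elasticsearch actually resolves for a logged-in user, since that is what the document level security template below keys off of. From Dev Tools, logged in as a test user:

GET /_security/_authenticate

or from the command line (the username here is just a placeholder):

curl -u some_ad_user "http://tasks.elk-zzzz-zz01234-dcaas-s4-cn:9200/_security/_authenticate?pretty"

The response includes a roles array; if the AD group names are not showing up there after the rebuild, that would explain why no documents come back.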

In Kibana, I have created a role called dev_user that leverages a granted documents query with read-only permissions. The built-in Kibana user is also a member of this role. The idea is that any internal user can log into Kibana, but only those users who reside within one of the three AD/LDAP groups noted in the pipeline's key-value pairs ([msk][acl01], [msk][acl02] and [msk][acl03]) should be able to view the documents. This was absolutely working before I rebuilt our DEV environment, and it still works in my UAT and PROD clusters without issue. Part of me thinks my granted document query could be written more efficiently. Another part of me thinks I should be using a different load_balance.type in the xpack.security.authc settings above; I am using dns_round_robin only because it was working before.
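
For context, I believe the role I built through the Kibana UI is roughly equivalent to this API call (the index pattern and privileges are my best recollection of the UI settings, and the query source is the template shown below):

PUT /_security/role/dev_user
{
  "indices": [
    {
      "names": [ "kafka-*" ],
      "privileges": [ "read" ],
      "query": "..."
    }
  ]
}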

Here is my granted document query:

{
  "template": {
    "source": "{
      \"bool\": {
        \"filter\": {
          \"bool\": {
            \"should\": [
              { \"terms\": { \"msk.acl01\": {{#toJson}}_user.roles{{/toJson}} } },
              { \"terms\": { \"msk.acl02\": {{#toJson}}_user.roles{{/toJson}} } },
              { \"terms\": { \"msk.acl03\": {{#toJson}}_user.roles{{/toJson}} } }
            ]
          }
        }
      }
    }"
  }
}
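
If I understand the mustache templating correctly, for a user whose resolved roles are ["xxx-p-abc-user"], the template above should render to:

{
  "bool": {
    "filter": {
      "bool": {
        "should": [
          { "terms": { "msk.acl01": ["xxx-p-abc-user"] } },
          { "terms": { "msk.acl02": ["xxx-p-abc-user"] } },
          { "terms": { "msk.acl03": ["xxx-p-abc-user"] } }
        ]
      }
    }
  }
}

so documents should only match when one of the user's role names lines up exactly with a value in msk.acl01, msk.acl02 or msk.acl03.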

Here is my GET /_security/role_mapping output. What I have never been sure of is this role mapping named xxxx_ad_generic_role_mapping; all I know is that it is present in all my other environments and clusters (created via a PUT command), and I think it just enables mappings for xxxx_ad_generic_role, kibana_user, and machine_learning_admin:

{
  "xxxx_ad_generic_role_mapping" : {
    "enabled" : true,
    "roles" : [
      "xxxx_ad_generic_role",
      "kibana_user",
      "machine_learning_admin"
    ],
    "rules" : {
      "field" : {
        "groups" : "CN=Users,CN=Builtin,DC=ent,DC=ad,DC=zzzz,DC=com"
      }
    },
    "metadata" : { }
  }
}
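
For reference, the PUT that would produce the mapping above looks like this (reconstructed from the GET output, so the body should match what exists in the cluster):

PUT /_security/role_mapping/xxxx_ad_generic_role_mapping
{
  "enabled": true,
  "roles": [ "xxxx_ad_generic_role", "kibana_user", "machine_learning_admin" ],
  "rules": {
    "field": {
      "groups": "CN=Users,CN=Builtin,DC=ent,DC=ad,DC=zzzz,DC=com"
    }
  }
}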

I've scoured this forum and other sites and I am flat-out stuck. Any guidance is appreciated.

Regards,
-Tony