Loose output permissions given to Elastic Agent

Hi all!
If, in Fleet, I configure a policy without any integration, the Elastic Agents associated with that policy receive output permissions on logs-*, metrics-*, traces-* and synthetics-*.
This means that these Elastic Agents are able to write to namespaces other than their own, undermining the separation between namespaces.
If I use namespaces to separate tenants' data, I expect that removing all integrations from a policy will not allow that tenant's agents to write into the data streams of other tenants.
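
To make the concern concrete: data stream names follow the {type}-{dataset}-{namespace} pattern, so a wildcard like logs-* spans every tenant's namespace. The data streams below are purely illustrative:

  logs-nginx.access-tenant_a   # where an agent enrolled in tenant_a's policy should write
  logs-nginx.access-tenant_b   # another tenant's data stream, also matched by logs-*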

In particular, by inspecting an Elastic Agent with no integration configured, we can see that a "_fallback" permission is added, which allows writing to all namespaces.

output_permissions:
  default:
    _elastic_agent_checks:
      cluster:
      - monitor
    _elastic_agent_monitoring:
      indices:
      - names:
        - logs-elastic_agent.apm_server-default
        privileges:
        - auto_configure
        - create_doc
      - names:
        - metrics-elastic_agent.apm_server-default
        privileges:
        - auto_configure
        - create_doc
      - names:
        - logs-elastic_agent.auditbeat-default
        privileges:
        - auto_configure
        - create_doc
      - names:
        - metrics-elastic_agent.auditbeat-default
        privileges:
        - auto_configure
        - create_doc
      - names:
        - logs-elastic_agent-default
        privileges:
        - auto_configure
        - create_doc
      - names:
        - metrics-elastic_agent.elastic_agent-default
        privileges:
        - auto_configure
        - create_doc
      - names:
        - metrics-elastic_agent.endpoint_security-default
        privileges:
        - auto_configure
        - create_doc
      - names:
        - logs-elastic_agent.endpoint_security-default
        privileges:
        - auto_configure
        - create_doc
      - names:
        - logs-elastic_agent.filebeat-default
        privileges:
        - auto_configure
        - create_doc
      - names:
        - metrics-elastic_agent.filebeat-default
        privileges:
        - auto_configure
        - create_doc
      - names:
        - logs-elastic_agent.fleet_server-default
        privileges:
        - auto_configure
        - create_doc
      - names:
        - metrics-elastic_agent.fleet_server-default
        privileges:
        - auto_configure
        - create_doc
      - names:
        - logs-elastic_agent.heartbeat-default
        privileges:
        - auto_configure
        - create_doc
      - names:
        - metrics-elastic_agent.heartbeat-default
        privileges:
        - auto_configure
        - create_doc
      - names:
        - logs-elastic_agent.metricbeat-default
        privileges:
        - auto_configure
        - create_doc
      - names:
        - metrics-elastic_agent.metricbeat-default
        privileges:
        - auto_configure
        - create_doc
      - names:
        - logs-elastic_agent.osquerybeat-default
        privileges:
        - auto_configure
        - create_doc
      - names:
        - metrics-elastic_agent.osquerybeat-default
        privileges:
        - auto_configure
        - create_doc
      - names:
        - logs-elastic_agent.packetbeat-default
        privileges:
        - auto_configure
        - create_doc
      - names:
        - metrics-elastic_agent.packetbeat-default
        privileges:
        - auto_configure
        - create_doc
    _fallback:
      cluster:
      - monitor
      indices:
      - names:
        - logs-*
        - metrics-*
        - traces-*
        - synthetics-*
        - .logs-endpoint.diagnostic.collection-*
        privileges:
        - auto_configure
        - create_doc

Hi @DamianoChini, thanks for reporting this issue.

Unfortunately, this _fallback permission is currently the expected behavior when an agent policy does not have any integrations.
We are working on fixing this by removing the concept of default permissions; see [Fleet] Reduce DEFAULT_PERMISSIONS · Issue #119562 · elastic/kibana · GitHub.
In the meantime, I strongly encourage you to add an integration to the policy to mitigate the issue.
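
Once an integration is added, the generated output permissions are scoped to that integration's data streams and the policy's namespace instead of the wildcard _fallback grant. As a rough sketch only (the package policy key and data stream names below are made up, but follow the same per-data-stream structure as the dump above), a policy using a tenant_a namespace with the System integration would end up with something along these lines:

output_permissions:
  default:
    # hypothetical key; in practice this is the package policy's ID
    system-package-policy:
      indices:
      - names:
        - logs-system.syslog-tenant_a
        - logs-system.auth-tenant_a
        privileges:
        - auto_configure
        - create_doc
      - names:
        - metrics-system.cpu-tenant_a
        - metrics-system.memory-tenant_a
        privileges:
        - auto_configure
        - create_doc

Note that the names are limited to concrete {type}-{dataset}-{namespace} combinations rather than logs-* or metrics-*.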


Ok, good! I hadn't found the open GitHub issue before;
thank you for the reply!
