Enable Elastic APM integration automatically

I've created a docker-compose file with some configuration that deploys Elasticsearch, Kibana, and Elastic Agent, all at version 8.7.0.
In the Kibana configuration file I define the policies I need under xpack.fleet.agentPolicies. With a single command my whole environment comes up and every component connects successfully. The only issue is one remaining manual step: I have to go to Kibana -> Observability -> APM -> Add Elastic APM and then fill in the Server configuration.

I want to automate this and manage it from the configuration files; I don't want to do it from the UI.

What is the way to do this? In which component, and at which path, should the configuration live?

Hi there,

You should be able to set up APM by default in kibana.yml, alongside the agent policy you already have.

First you'll need to add the APM integration:

xpack.fleet.packages:
  - name: apm
    version: latest

Next, add APM to your agent policy:

xpack.fleet.agentPolicies:
  - name: {your policy name}
    {any additional policy configuration you have set up}
    package_policies:
      - name: Your APM policy name
        package:
          name: apm
        inputs:
          - type: apm
            vars:
              - name: host
                value: "localhost:8200"
              - name: url
                value: "http://localhost:8200"

This assumes you just need to change the Host and URL fields. Everything else will be filled in with default values (they'll be what you see pre-populated in the UI).
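If you want to confirm the integration was created without opening the UI, you can also query Kibana's Fleet API once it is up; a minimal sketch, assuming Kibana on localhost:5601 and placeholder credentials:

# list package policies and check that one references the apm package
# (replace the user/password and host with whatever your deployment uses)
curl -s -u elastic:changeme "http://localhost:5601/api/fleet/package_policies"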

Thanks for replying @jen-huang, but unfortunately that doesn't work. I still can't see the integration added, and I still have to add it from the UI (Kibana -> Observability -> APM -> Add Elastic APM and then fill in the Server configuration). What am I missing here?

Thanks

This is my docker-compose file

version: '3.9'
services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:8.7.0-188e6a3a-SNAPSHOT
    container_name: container_name_elasticsearch
    ports:
      - 9200:9200
    healthcheck:
      test: ["CMD-SHELL", "curl -s http://localhost:9200/_cluster/health?wait_for_status=yellow&timeout=500ms"]
      retries: 300
      interval: 1s
    environment:
      - "ES_JAVA_OPTS=-Xms1g -Xmx1g"
      - "network.host=0.0.0.0"
      - "transport.host=127.0.0.1"
      - "http.host=0.0.0.0"
      - "cluster.routing.allocation.disk.threshold_enabled=false"
      - "discovery.type=single-node"
      - "xpack.security.authc.anonymous.roles=remote_monitoring_collector"
      - "xpack.security.authc.realms.file.file1.order=0"
      - "xpack.security.authc.realms.native.native1.order=1"
      - "xpack.security.enabled=true"
      - "xpack.license.self_generated.type=trial"
      - "xpack.security.authc.token.enabled=true"
      - "xpack.security.authc.api_key.enabled=true"
      - "logger.org.elasticsearch=${ES_LOG_LEVEL:-error}"
      - "action.destructive_requires_name=false"
      - "ELASTIC_PASSWORD=123456"
    volumes:
      - "./elasticsearch/roles.yml:/usr/share/elasticsearch/config/roles.yml"
      - "./elasticsearch/users:/usr/share/elasticsearch/config/users"
      - "./elasticsearch/users_roles:/usr/share/elasticsearch/config/users_roles"
      - "./elasticsearch/ingest-geoip:/usr/share/elasticsearch/config/ingest-geoip"

  kibana:
    image: docker.elastic.co/kibana/kibana:8.7.0-188e6a3a-SNAPSHOT
    container_name: container_name_kibana
    ports:
      - 5601:5601
    healthcheck:
      test: ["CMD-SHELL", "curl -s http://localhost:5601/api/status | grep -q 'All services are available'"]
      retries: 300
      interval: 1s
    environment:
      ELASTICSEARCH_HOSTS: '["http://elasticsearch:9200"]'
      ELASTICSEARCH_USERNAME: "${KIBANA_ES_USER:-kibana_system_user}"
      ELASTICSEARCH_PASSWORD: "${KIBANA_ES_PASS:-changeme}"
      XPACK_FLEET_AGENTS_FLEET_SERVER_HOSTS: '["https://fleet-server:8220"]'
      XPACK_FLEET_AGENTS_ELASTICSEARCH_HOSTS: '["http://elasticsearch:9200"]'
    depends_on:
      elasticsearch: { condition: service_healthy }
    volumes:
      - "./kibana/kibana.yml:/usr/share/kibana/config/kibana.yml"

  fleet-server:
    image: docker.elastic.co/beats/elastic-agent:8.7.0-188e6a3a-SNAPSHOT
    container_name: container_name_fleet_server
    ports:
      - 8220:8220
      - 8200:8200
    healthcheck:
      test: ["CMD-SHELL", "curl -s -k https://localhost:8220/api/status | grep -q 'HEALTHY'"]
      retries: 300
      interval: 1s
    environment:
      FLEET_SERVER_ENABLE: "1"
      FLEET_SERVER_POLICY_ID: "fleet-server-apm"
      FLEET_SERVER_ELASTICSEARCH_HOST: http://elasticsearch:9200
      FLEET_SERVER_ELASTICSEARCH_USERNAME: "${ES_SUPERUSER_USER:-admin}"
      FLEET_SERVER_ELASTICSEARCH_PASSWORD: "${ES_SUPERUSER_PASS:-changeme}"
      FLEET_SERVER_CERT: /etc/pki/tls/certs/fleet-server.pem
      FLEET_SERVER_CERT_KEY: /etc/pki/tls/private/fleet-server-key.pem
      FLEET_URL: https://fleet-server:8220
      KIBANA_FLEET_SETUP: "true"
      KIBANA_FLEET_HOST: "http://kibana:5601"
      KIBANA_FLEET_USERNAME: "${ES_SUPERUSER_USER:-admin}"
      KIBANA_FLEET_PASSWORD: "${ES_SUPERUSER_PASS:-changeme}"
    depends_on:
      elasticsearch: { condition: service_healthy }
      kibana: { condition: service_healthy }
    volumes:
      - "./fleet-server/certificate.pem:/etc/pki/tls/certs/fleet-server.pem"
      - "./fleet-server/key.pem:/etc/pki/tls/private/fleet-server-key.pem"

and this is my kibana.yml configuration file

server.host: 0.0.0.0
status.allowAnonymous: true
monitoring.ui.container.elasticsearch.enabled: true
telemetry.enabled: false
xpack.security.encryptionKey: fhjskloppd678ehkdfdlliverpoolfcr
xpack.encryptedSavedObjects.encryptionKey: fhjskloppd678ehkdfdlliverpoolfcr

xpack.fleet.packages:
  - name: fleet_server
    version: latest
xpack.fleet.agentPolicies:
  - name: Fleet Server (APM)
    id: fleet-server-apm
    is_default_fleet_server: true
    is_managed: false
    namespace: default
    package_policies:
      - name: fleet_server-apm
        id: default-fleet-server
        package:
          name: fleet_server
        inputs:
          - type: apm
            enabled: true
            vars:
              - name: host
                value: "0.0.0.0:8200"
              - name: url
                value: "http://0.0.0.0:8200"
              - name: enable_rum
                value: true
                frozen: true

  - name: Fleet Server2 (APM)
    id: fleet-server-apm2
    is_default_fleet_server: true
    is_managed: false
    namespace: default
    package_policies:
      - name: fleet_server-apm2
        id: default-fleet-server2
        package:
          name: fleet_server

xpack.profiling.enabled: true

It looks like you are mixing up Fleet Server and APM. Both of your agent policies are creating Fleet Server policies. Do you want both Fleet Server and APM on the same policy? If so, try:

xpack.fleet.agentPolicies:
  - name: Fleet Server (APM)
    id: fleet-server-apm
    is_default_fleet_server: true
    is_managed: false
    namespace: default
    package_policies:
      - name: Fleet Server policy
        id: default-fleet-server
        package:
          name: fleet_server
      - name: APM policy
        package:
          name: apm
        inputs:
          - type: apm
            vars:
              - name: host
                value: "localhost:8200"
              - name: url
                value: "http://localhost:8200"

Forgot to add, you also need to update the list of packages:
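Based on the snippets above, that section of your kibana.yml would look something like this (keeping fleet_server and adding apm):

xpack.fleet.packages:
  - name: fleet_server
    version: latest
  - name: apm
    version: latest

With both packages listed and both package policies on the fleet-server-apm policy, the single Elastic Agent container that runs Fleet Server should also pick up the APM input on port 8200, which your compose file already publishes.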