Fleet Server Agent with Public URL can't be enrolled

I essentially have the same issue as the person in this thread:

The OP of that thread never posted a resolution for other people reading it.

Similarly to that thread, I am trying to set up Fleet Server and Elastic Agents on my Kubernetes cluster, but instead of using the auto-generated self-signed TLS certs I want to use my own. Specifically, the Fleet Server HTTP endpoint needs to be reachable via an Ingress from outside the cluster so that agents installed outside the cluster can enroll.

However, there is currently no documentation on how to achieve this. I have tried setting xpack.fleet.agents.fleet_server.hosts to the public URL, but the Fleet Server agent fails to enroll itself, because it keeps trying to enroll against the internal service URL:

{"log.level":"info","@timestamp":"2023-03-23T05:14:42.333Z","log.origin":{"file.name":"cmd/enroll_cmd.go","file.line":475},"message":"Starting enrollment to URL: https://fleet-server-agent-http.elastic-stack.svc:8220/","ecs.version":"1.6.0"}

which of course fails, because the TLS certificate I specified is only valid for certain domains, not for the internal Kubernetes host names.
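For reference, this is how I verify which host names a certificate actually covers (a sketch; the demo cert generated here stands in for the one in my tls-secret, and the file paths are throwaway):

```shell
# Generate a demo cert with a SAN, standing in for the custom Fleet Server cert.
openssl req -x509 -newkey rsa:2048 -nodes -keyout /tmp/fleet.key -out /tmp/fleet.crt \
  -days 1 -subj "/CN=fleet-server.example.com" \
  -addext "subjectAltName=DNS:fleet-server.example.com" 2>/dev/null

# List which host names the cert is valid for. The internal service name
# (fleet-server-agent-http.elastic-stack.svc) is absent, so TLS verification
# against that URL fails.
openssl x509 -in /tmp/fleet.crt -noout -text | grep -A1 "Subject Alternative Name"
```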

My values.yaml:

eck-kibana:
  enabled: true

  # Name of the Kibana instance.
  #
  fullnameOverride: kibana
  
  spec:
    # Reference to ECK-managed Elasticsearch instance, ideally from {{ "elasticsearch.fullname" }}
    #
    elasticsearchRef:
      name: elasticsearch
    enterpriseSearchRef:
      name: enterprise-search
    http:
      service:
        spec:
          # Type of service to deploy for Kibana.
          # This deploys a load balancer in a cloud service provider, where supported.
          # 
          type: LoadBalancer

    config:
      # Note that these are specific to the namespace into which this example is installed, and are
      # using `elastic-stack` as configured here and detailed in the README when installing:
      #
      # `helm install es-kb-quickstart elastic/eck-stack -n elastic-stack`
      #
      # If installed outside of the `elastic-stack` namespace, the following 2 lines need modification.
      xpack.fleet.agents.elasticsearch.hosts: ["https://elasticsearch-es-http.elastic-stack.svc:9200"]
      xpack.fleet.agents.fleet_server.hosts: ["https://fleet-server.example.com"]
      xpack.fleet.outputs:
      - id: fleet-default-output
        name: default
        type: elasticsearch
        hosts: [ https://elasticsearch.example.com ]
        # openssl x509 -fingerprint -sha256 -noout -in tls/kibana/elasticsearch-ca.pem (colons removed)
        ca_trusted_fingerprint: <my custom TLS CA fingerprint>
        is_default: true
        is_default_monitoring: true
      xpack.fleet.packages:
      - name: system
        version: latest
      - name: elastic_agent
        version: latest
      - name: fleet_server
        version: latest
      - name: kubernetes
        version: latest
      - name: apm
        version: latest
      xpack.fleet.agentPolicies:
      - name: Fleet Server on ECK policy
        id: fleet-server
        namespace: default
        monitoring_enabled:
        - logs
        - metrics
        is_default_fleet_server: true
        package_policies:
        - name: fleet_server-1
          id: fleet_server-1
          package:
            name: fleet_server
      - name: Elastic Agent on ECK policy
        id: eck-agent
        namespace: default
        monitoring_enabled:
        - logs
        - metrics
        unenroll_timeout: 900
        is_default: true
        package_policies:
        - package:
            name: system
          name: system-1
        - package:
            name: kubernetes
          name: kubernetes-1
        - package:
            name: apm
          name: apm-1
          inputs:
            - type: apm
              enabled: true
              vars:
                - name: host
                  value: 0.0.0.0:8200

eck-agent:
  enabled: true
  spec:
    # Reference to ECK-managed Kibana instance.
    #
    kibanaRef:
      name: kibana

    elasticsearchRefs: []

    # Reference to ECK-managed Fleet instance.
    #
    fleetServerRef:
      name: fleet-server
    
    mode: fleet

    daemonSet:
      podTemplate:
        spec:
          serviceAccountName: elastic-agent
          hostNetwork: true
          dnsPolicy: ClusterFirstWithHostNet
          automountServiceAccountToken: true
          securityContext:
            runAsUser: 0
  
eck-fleet-server:
  enabled: true

  fullnameOverride: "fleet-server"

  spec:
    kibanaRef:
      name: kibana
    elasticsearchRefs:
    - name: elasticsearch

    http:
      tls:
        selfSignedCertificate:
          subjectAltNames:
          - dns: fleet-server.example.com
        certificate:
          secretName: tls-secret
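For completeness, the ca_trusted_fingerprint value above comes from the command in the inline comment, roughly like this (a sketch; a demo CA is generated here in place of my actual tls/kibana/elasticsearch-ca.pem):

```shell
# Generate a demo CA cert, standing in for tls/kibana/elasticsearch-ca.pem.
openssl req -x509 -newkey rsa:2048 -nodes -keyout /tmp/demo-ca.key \
  -out /tmp/demo-ca.pem -days 1 -subj "/CN=demo-ca" 2>/dev/null

# SHA-256 fingerprint with the colons removed, as used for ca_trusted_fingerprint.
fp=$(openssl x509 -fingerprint -sha256 -noout -in /tmp/demo-ca.pem | cut -d= -f2 | tr -d ':')
echo "$fp"
```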

How can I make a Fleet Server deployed on Kubernetes reachable by both agents inside the cluster and agents outside it?
