Connecting to remote fleet server

Configuring Fleet Server on the Kubernetes cluster that runs Elasticsearch/Kibana works fine, but when I try to use the same Fleet Server from a remote cluster, the config the agent receives points to the internal "service" URL in the Fleet Server's cluster.

Fleet server running in cluster 1: https://fleet-server-agent.namespace.svc:8220

A load balancer then makes it available at https://monitoring.example.com:8220

The remote Fleet agent connects to https://monitoring.example.com:8220

But it then receives the Fleet URL as https://fleet-server-agent.namespace.svc:8220, even though I am creating the DaemonSet on the remote cluster with an env variable pointing to https://monitoring.example.com:8220

Do I need to run a Fleet Server on every cluster I create?
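For reference, this is roughly how the remote DaemonSet points the agent at Fleet Server (a minimal sketch: the env variable names are the standard Elastic Agent container settings, while the image tag, token, and CA path are placeholders):

```yaml
# Container spec fragment for the Elastic Agent DaemonSet on cluster 2.
# FLEET_URL deliberately points at the load balancer, not the internal service.
containers:
  - name: elastic-agent
    image: docker.elastic.co/beats/elastic-agent:8.x.y   # match your stack version
    env:
      - name: FLEET_ENROLL
        value: "1"
      - name: FLEET_URL
        value: "https://monitoring.example.com:8220"
      - name: FLEET_ENROLLMENT_TOKEN
        value: "<enrollment-token>"
      # CA that signed the Fleet Server / load balancer certificate
      - name: FLEET_CA
        value: "/mnt/certs/ca.crt"
```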

Are you using ECK to host things?

Yes, ECK hosts:

  1. Elasticsearch
  2. Kibana (using ref)
  3. APM Server (using ref)
  4. Fleet Server (using refs)
  5. Elastic Agent (using refs)

Second cluster

  1. Elastic Agent (using daemonset)

Kibana Config

elasticsearch:
  hosts:
  - https://cluster-es-http.esstack.svc:9200
  serviceAccountToken: <value>
  ssl:
    certificateAuthorities: /usr/share/kibana/config/elasticsearch-certs/ca.crt
    verificationMode: certificate
monitoring:
  ui:
    container:
      elasticsearch:
        enabled: true
server:
  host: 0.0.0.0
  name: es-cluster
  publicBaseUrl: https://monitoring.example.com
  ssl:
    certificate: /mnt/elastic-internal/http-certs/tls.crt
    enabled: true
    key: /mnt/elastic-internal/http-certs/tls.key
xpack:
  encryptedSavedObjects:
    encryptionKey: <value>
  fleet:
    agentPolicies:
    - id: eck-fleet-server
      is_default_fleet_server: true
      monitoring_enabled:
      - logs
      - metrics
      name: Fleet Server on ECK policy
      namespace: fleetserver
      package_policies:
      - id: fleet_server-1
        name: fleet_server-1
        package:
          name: fleet_server
    - id: eck-agent
      is_default: true
      monitoring_enabled:
      - logs
      - metrics
      name: Elastic Agent on ECK policy
      namespace: kubernetes
      package_policies:
      - id: system-1
        name: system-1
        package:
          name: system
      - id: kubernetes-1
        name: kubernetes-1
        package:
          name: kubernetes
      unenroll_timeout: 900
    agents:
      elasticsearch:
        hosts:
        - https://monitoring.example.com:9200
      fleet_server:
        hosts:
        - https://monitoring.example.com:8220
    packages:
    - name: system
      version: latest
    - name: elastic_agent
      version: latest
    - name: fleet_server
      version: latest
    - name: kubernetes
      version: latest
    - name: aws
      version: latest
  license_management:
    ui:
      enabled: false
  reporting:
    encryptionKey: <value>
  security:
    authc:
      providers:
        basic:
          basic1:
            order: 1

To reproduce the problem:

  1. 2 Kubernetes clusters
  2. ES/Kibana/Fleet/Agent on Cluster 1
  3. Expose Fleet/Kibana on cluster 1 using loadbalancer
  4. Set up agents only on cluster 2 to use the Fleet Server on cluster 1.
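Step 3 above can be sketched as a plain LoadBalancer Service in front of the ECK-managed Fleet Server pods (illustrative only; the selector label is an assumption based on how ECK labels Agent pods, so check the labels on your own pods, and monitoring.example.com must resolve to the load balancer's external IP):

```yaml
# Exposes Fleet Server outside cluster 1 on port 8220; TLS passes
# straight through to the Fleet Server agent pods.
apiVersion: v1
kind: Service
metadata:
  name: fleet-server-external
  namespace: fleetserver
spec:
  type: LoadBalancer
  selector:
    agent.k8s.elastic.co/name: fleet-server   # assumed label; verify on your pods
  ports:
    - name: fleet
      port: 8220
      targetPort: 8220
```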

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.