Fleet Server agent not able to 'identify' Kibana secret in ECK

Environment:
We manage our own k8s environment and run our Kibana/Elasticsearch without any problems.
Filebeat and Metricbeat are working fine, no issues. Observability and Elastic APM work as expected.
Filebeat and Metricbeat run in our production/pre-production/dev clusters and ship their data to a different k8s cluster that hosts the Elastic observability solution. Things work beautifully.

We want to move towards Fleet.

We have a k8s cluster.
ECK is: docker.elastic.co/eck/eck-operator:2.7.0 | namespace elastic-system
Kibana:
docker.elastic.co/kibana/kibana:8.8.2 | namespace elksdev | exposed in our organization
Elasticsearch:
docker.elastic.co/elasticsearch/elasticsearch:8.8.2 | namespace elksdev | exposed in our organization
K8S details:
Client Version: v1.25.0
Kustomize Version: v4.5.7
Server Version: v1.25.10

Fleet-Agent in fleet mode: docker.elastic.co/beats/elastic-agent:8.5.2

Target of exercise:
Fleet Server and Elastic Agent should run on, say, a pre-production or dev cluster used for application development.

This Fleet Server (Elastic Agent in fleet mode) should connect to Elasticsearch and Kibana managed by a different cluster.
I understand we need to use spec.kibanaRef.secretName and spec.elasticsearchRefs.secretName.

Correct me if I am wrong.

apiVersion: agent.k8s.elastic.co/v1alpha1
kind: Agent
metadata:
  name: fleet-server
  namespace: elksdev
spec:
  deployment:
    podTemplate:
      metadata:
        creationTimestamp: null
      spec:
        automountServiceAccountToken: true
        containers: null
        securityContext:
          runAsUser: 0
        serviceAccountName: elastic-agent
    replicas: 1
    strategy: {}
  elasticsearchRefs:
    - name: elastic-fleet-secret
  fleetServerEnabled: true
  fleetServerRef: {}
  http:
    service:
      metadata: {}
      spec: {}
    tls:
      certificate: {}
  kibanaRef:
    secretName: kibana-fleet-secret
  mode: fleet
  policyID: eck-fleet-server
  version: 8.5.2

What is working?
When kibanaRef and elasticsearchRefs point to the Kibana and Elasticsearch resources in the same cluster, things work: the connection is established and there are no k8s events with warnings or errors.
E.g. with spec.kibanaRef.name=kibana-dev there are no issues at all; everything is green.
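
For reference, this is roughly what the working same-cluster references look like (a minimal sketch; kibana-dev and elk-dev are the resource names from my setup):

spec:
  # Working variant: both refs point at ECK-managed custom resources
  # in the same cluster and namespace, referenced by resource name.
  kibanaRef:
    name: kibana-dev
  elasticsearchRefs:
    - name: elk-dev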

Now, when I expose this kibana-dev outside the k8s cluster (via my ingress) and reference it through a secret, things do not work.

When I use secrets, things do not work:
kubectl get events --field-selector involvedObject.kind=Agent -n elksdev

LAST SEEN TYPE REASON OBJECT MESSAGE
9m45s Warning AssociationError agent/elastic-agent Failed to find referenced backend elksdev/: Elasticsearch.elasticsearch.k8s.elastic.co "" not found
3m23s Warning AssociationError agent/elastic-agent Failed to find referenced backend elksdev/: Elasticsearch.elasticsearch.k8s.elastic.co "" not found
8m23s Warning AssociationError agent/elastic-agent Association backend for kibana is not configured
2m39s Warning AssociationError agent/elastic-agent Association backend for kibana is not configured
8s Warning AssociationError agent/elastic-agent Failed to find referenced backend elksdev/: Elasticsearch.elasticsearch.k8s.elastic.co "" not found
24m Warning ReconciliationError agent/fleet-server Reconciliation error: Kibana.kibana.k8s.elastic.co "kibana-fleet-secret" not found
5m39s Warning ReconciliationError agent/fleet-server Reconciliation error: Kibana.kibana.k8s.elastic.co "kibana-fleet-secret" not found
5m2s Warning AssociationError agent/fleet-server Association backend for elasticsearch is not configured
5m2s Normal AssociationStatusChange agent/fleet-server Association status changed from [elksdev/elk-dev: Established] to [elksdev/elastic-fleet-secret: Pending]
2m39s Warning AssociationError agent/fleet-server Association backend for elasticsearch is not configured

What does my kibana-fleet-secret look like? I am using the default user 'elastic' as the username, with its password (how the secret was created is sketched after the output below).

kubectl describe secret -n elksdev kibana-fleet-secret 
Name:         kibana-fleet-secret
Namespace:    elksdev
Labels:       <none>
Annotations:  <none>

Type:  Opaque

Data
====
ca.crt:    2499 bytes
password:  24 bytes
url:       50 bytes
username:  7 bytes
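
For completeness, the secret was created roughly like this (a sketch only; the URL and password below are placeholders, the actual values come from our environment):

kubectl create secret generic kibana-fleet-secret -n elksdev \
  --from-literal=url=https://kibana-dev.example.org \
  --from-literal=username=elastic \
  --from-literal=password='<password of the elastic user>' \
  --from-file=ca.crt=./ca.crt   # CA used by the exposed Kibana endpoint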

The logs also do not give much information. I am pasting the operator log output here.

{"log.level":"info","@timestamp":"2023-09-15T06:17:58.364Z","log.logger":"manager","message":"Orphan secrets garbage collection complete","service.version":"2.7.0+0ef8d5e3","service.type":"eck","ecs.version":"1.4.0"}
{"log.level":"info","@timestamp":"2023-09-15T06:18:07.172Z","log.logger":"agent-kibana","message":"Starting reconciliation run","service.version":"2.7.0+0ef8d5e3","service.type":"eck","ecs.version":"1.4.0","iteration":"6","namespace":"elksdev","agent_name":"elastic-agent"}
{"log.level":"info","@timestamp":"2023-09-15T06:18:07.174Z","log.logger":"agent-kibana","message":"Ending reconciliation run","service.version":"2.7.0+0ef8d5e3","service.type":"eck","ecs.version":"1.4.0","iteration":"6","namespace":"elksdev","agent_name":"elastic-agent","took":0.001249477}
{"log.level":"debug","@timestamp":"2023-09-15T06:18:07.174Z","log.logger":"manager.eck-operator.events","message":"Failed to find referenced backend elksdev/: Elasticsearch.elasticsearch.k8s.elastic.co \"\" not found","service.version":"2.7.0+0ef8d5e3","service.type":"eck","ecs.version":"1.4.0","type":"Warning","object":{"kind":"Agent","namespace":"elksdev","name":"elastic-agent","uid":"bb539008-0f3c-436a-8b08-a3db8f9b3118","apiVersion":"agent.k8s.elastic.co/v1alpha1","resourceVersion":"321106320"},"reason":"AssociationError"}
{"log.level":"info","@timestamp":"2023-09-15T06:18:07.179Z","log.logger":"agent-es","message":"Starting reconciliation run","service.version":"2.7.0+0ef8d5e3","service.type":"eck","ecs.version":"1.4.0","iteration":"6","namespace":"elksdev","agent_name":"fleet-server"}
{"log.level":"info","@timestamp":"2023-09-15T06:18:07.180Z","log.logger":"agent-es","message":"Ending reconciliation run","service.version":"2.7.0+0ef8d5e3","service.type":"eck","ecs.version":"1.4.0","iteration":"6","namespace":"elksdev","agent_name":"fleet-server","took":0.000204629}
{"log.level":"debug","@timestamp":"2023-09-15T06:18:07.732Z","log.logger":"elasticsearch-observer","message":"Retrieving cluster health","service.version":"2.7.0+0ef8d5e3","service.type":"eck","ecs.version":"1.4.0","es_name":"elk-dev","namespace":"elksdev"}
{"log.level":"debug","@timestamp":"2023-09-15T06:18:07.732Z","log.logger":"elasticsearch-observer","message":"Elasticsearch HTTP request","service.version":"2.7.0+0ef8d5e3","service.type":"eck","ecs.version":"1.4.0","method":"GET","url":"https://elk-dev-es-internal-http.elksdev.svc:9200/_cluster/health","namespace":"elksdev","es_name":"elk-dev"}
{"log.level":"info","@timestamp":"2023-09-15T06:18:17.175Z","log.logger":"agent-kibana","message":"Starting reconciliation run","service.version":"2.7.0+0ef8d5e3","service.type":"eck","ecs.version":"1.4.0","iteration":"7","namespace":"elksdev","agent_name":"elastic-agent"}
{"log.level":"info","@timestamp":"2023-09-15T06:18:17.176Z","log.logger":"agent-kibana","message":"Ending reconciliation run","service.version":"2.7.0+0ef8d5e3","service.type":"eck","ecs.version":"1.4.0","iteration":"7","namespace":"elksdev","agent_name":"elastic-agent","took":0.000762825}
{"log.level":"debug","@timestamp":"2023-09-15T06:18:17.176Z","log.logger":"manager.eck-operator.events","message":"Failed to find referenced backend elksdev/: Elasticsearch.elasticsearch.k8s.elastic.co \"\" not found","service.version":"2.7.0+0ef8d5e3","service.type":"eck","ecs.version":"1.4.0","type":"Warning","object":{"kind":"Agent","namespace":"elksdev","name":"elastic-agent","uid":"bb539008-0f3c-436a-8b08-a3db8f9b3118","apiVersion":"agent.k8s.elastic.co/v1alpha1","resourceVersion":"321106320"},"reason":"AssociationError"}
{"log.level":"info","@timestamp":"2023-09-15T06:18:17.180Z","log.logger":"agent-es","message":"Starting reconciliation run","service.version":"2.7.0+0ef8d5e3","service.type":"eck","ecs.version":"1.4.0","iteration":"7","namespace":"elksdev","agent_name":"fleet-server"}
{"log.level":"info","@timestamp":"2023-09-15T06:18:17.180Z","log.logger":"agent-es","message":"Ending reconciliation run","service.version":"2.7.0+0ef8d5e3","service.type":"eck","ecs.version":"1.4.0","iteration":"7","namespace":"elksdev","agent_name":"fleet-server","took":0.000148163}
{"log.level":"debug","@timestamp":"2023-09-15T06:18:17.731Z","log.logger":"elasticsearch-observer","message":"Retrieving cluster health","service.version":"2.7.0+0ef8d5e3","service.type":"eck","ecs.version":"1.4.0","es_name":"elk-dev","namespace":"elksdev"}
{"log.level":"debug","@timestamp":"2023-09-15T06:18:17.731Z","log.logger":"elasticsearch-observer","message":"Elasticsearch HTTP request","service.version":"2.7.0+0ef8d5e3","service.type":"eck","ecs.version":"1.4.0","method":"GET","url":"https://elk-dev-es-internal-http.elksdev.svc:9200/_cluster/health","namespace":"elksdev","es_name":"elk-dev"}

Ask:
Are we doing something wrong?
I understand the published limitations do not apply in my case.

Hello Elastic,

A brief summary of the above message:

Fleet server has to communicate with Kibana and Elasticsearch.

When everything is in the same cluster and the endpoints are not referenced via URL/username/password/ca.crt (i.e. the components are referenced as custom resources), things work and Fleet Server reports green.

The moment I tell Fleet Server to connect to Kibana and Elasticsearch via URL/username/password/ca.crt (in the form of a secret), ECK cannot establish the connection between the components.
ECK says it cannot identify the secret, even though it is in the same namespace as the fleet-server manifest.

Any help to get started would be great. I guess I am making a simple mistake somewhere.

@elastic

I do not know whether I have put this under the correct category, but I am not getting a response...

Any inputs?

vinod
