Trouble configuring APM Server on K8s with ECK

Kibana version:

8.11.1

Elasticsearch version:

8.11.1

APM Server version:

8.11.1

APM Agent language and version:

Python

Browser version:

Version 121.0.6167.140 (Official Build) (64-bit)

Original install method (e.g. download page, yum, deb, from source, etc.) and version:

Installed via YAML manifests onto Kubernetes v1.24.7

Fresh install or upgraded from other version?

Fresh Install with minor changes (name)

Is there anything special in your setup? For example, are you using the Logstash or Kafka outputs? Are you using a load balancer in front of the APM Servers? Have you changed index pattern, generated custom templates, changed agent configuration etc.

I was hoping for a simple setup. We are currently on ELK 6, and with the move to Kubernetes I wanted to upgrade us to the newer version. The issue is that this is my first time setting up ECK, and I want logs/data sent from the Python apps to Elasticsearch to be viewable in Kibana, similar to what we had with ELK v6.

Description of the problem including expected versus actual behavior. Please include screenshots (if relevant):

I was expecting the simplest possible setup: deploy without SSL (not needed), point the apps to the internal FQDN within Kubernetes (svc.cluster.local:8200), and I would see the agents communicating with ECK and be able to view errors and APM data in Kibana. However, that is not the case.

Steps to reproduce:
1. Installed the ECK operator:
kubectl create -f https://download.elastic.co/downloads/eck/2.10.0/crds.yaml
kubectl apply -f https://download.elastic.co/downloads/eck/2.10.0/operator.yaml

2. Made light changes (the names) and applied the following manifests:

apiVersion: kibana.k8s.elastic.co/v1
kind: Kibana
metadata:
  name: staging-kibana8
spec:
  version: 8.11.1
  count: 1
  elasticsearchRef:
    name: staging-elk8


apiVersion: elasticsearch.k8s.elastic.co/v1
kind: Elasticsearch
metadata:
  name: staging-elk8
spec:
  version: 8.11.1
  nodeSets:
  - name: default
    count: 1
    podTemplate:
      spec:
        initContainers:
        - name: sysctl
          securityContext:
            privileged: true
            runAsUser: 0
          command: ['sh', '-c', 'sysctl -w vm.max_map_count=262144']
    volumeClaimTemplates:
    - metadata:
        name: elasticsearch-data # Do not change this name unless you set up a volume mount for the data path.
      spec:
        accessModes:
        - ReadWriteOnce
        resources:
          requests:
            storage: 50Gi
        storageClassName:

apiVersion: apm.k8s.elastic.co/v1
kind: ApmServer
metadata:
  name: staging-apmserver8
  namespace: default
spec:
  version: 8.11.1
  count: 1
  elasticsearchRef:
    name: staging-elk8
  kibanaRef:
    name: staging-kibana8

3. I'm able to port-forward and log in with the default elastic user and the password found in the secrets (see the sketch below), and I've pointed the apps at its current FQDN, since it's being tested in the default namespace (staging-elk8-es-http.default.svc.cluster.local:8200).
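For reference, this is roughly what I run to check the resources and pull the credentials; a minimal sketch assuming ECK's default generated names (the elastic-user secret and the -kb-http service follow from the names in the manifests above):

# Check that the operator has reconciled everything (health should be green)
kubectl get elasticsearch,kibana,apmserver

# Retrieve the auto-generated password for the "elastic" user
# (ECK convention: <elasticsearch-name>-es-elastic-user)
kubectl get secret staging-elk8-es-elastic-user \
  -o go-template='{{.data.elastic | base64decode}}'

# Port-forward Kibana and log in at https://localhost:5601
kubectl port-forward service/staging-kibana8-kb-http 5601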

Provide logs and/or server output (if relevant):

The Python app seems to try connecting to the stack but can't push its logging/APM data:
{"timestamp": "2024-02-08T18:57:07.461432Z", "message": "dropping flushed data due to transport failure back-off", "host": "isaac-api-b65c99448-m9z5v", "path": "/home/appuser/.pyenv/versions/3.8.11/lib/python3.8/site-packages/elasticapm/transport/base.py", "tags": [], "level": "ERROR", "logger": "elasticapm.transport", "stack_info": null, "elasticapm_transaction_id": null, "elasticapm_trace_id": null, "elasticapm_span_id": null, "elasticapm_service_name": "staging", "elasticapm_service_environment": null, "elasticapm_labels": {"transaction.id": null, "trace.id": null, "span.id": null, "service.name": "staging", "service.environment": null}}

So far it seems similar to this topic, since our Python apps are also set up using these environment variables:
ELASTIC_APM_SERVICE_NAME and ELASTIC_APM_SERVER_URL
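For completeness, this is how I currently understand those variables would need to be wired against ECK's defaults; a sketch only, assuming the operator-generated service (staging-apmserver8-apm-http) and token secret (staging-apmserver8-apm-token), and that TLS stays enabled; I haven't confirmed this works yet:

# Point the agent at the APM Server service (<apm-name>-apm-http),
# not the Elasticsearch -es-http one
export ELASTIC_APM_SERVICE_NAME=staging
export ELASTIC_APM_SERVER_URL=https://staging-apmserver8-apm-http.default.svc.cluster.local:8200

# ECK also generates a secret token in <apm-name>-apm-token (key: secret-token)
kubectl get secret staging-apmserver8-apm-token \
  -o go-template='{{index .data "secret-token" | base64decode}}'
export ELASTIC_APM_SECRET_TOKEN=<value from the secret above>

# TLS is on by default; either point the agent at the server cert (pinning)
# or, for a quick test only, skip verification
# export ELASTIC_APM_SERVER_CERT=/path/to/tls.crt
export ELASTIC_APM_VERIFY_SERVER_CERT=false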

However, I'm not sure the solution is the same. Could it be a misconfiguration?
