APM not able to connect via Fleet Integration

Kibana version: 8.3.1

Elasticsearch version: 8.3.1

APM Server version: 8.3.1

Original install method (e.g. download page, yum, deb, from source, etc.) and version:

Ubuntu 20.04 deb package

Fresh install or upgraded from other version?

This is a fresh install trying to use Fleet

Is there anything special in your setup?

There isn't anything special in our setup. We are using a self-hosted solution to monitor our Kubernetes cluster.

Description of the problem including expected versus actual behavior. Please include screenshots (if relevant):

The Fleet Server seems to be set up correctly. It is collecting log and metric data from Kubernetes.

Steps to reproduce:

  1. Followed the quick-start directions to set up Fleet
  2. Followed the quick-start directions to set up APM

Errors in browser console (if relevant):

Application log error (truncated excerpt):
{"version":"1.6.0"},"message":"APM Server transport error: intake response timeout: APM server did not respond within 10s of gzip stream finish"}

If I try to curl http://ip.ip.ip.ip:8200, it just times out.
curl http://0.0.0.0:8200 and curl http://localhost:8200 both get connection refused, as I would expect.

lsof -i :8200 does show apm-server listening.

I'm afraid I can't help much with the information provided so far. Could you share the agent's Fleet policy? Fleet > Agent policies > $POLICY_NAME > Actions > View policy

Please make sure to scrub any sensitive information, such as secret_token, URLs or API keys.

Certainly, and apologies for the incomplete report. The Elastic suite isn't my strongest skill set, and with so much changing in version 8, I feel even more out of my element.

id: 60b2da60-fd39-11ec-aceb-5dea733952f7
revision: 26
outputs:
  default:
    type: elasticsearch
    hosts:
      - 'https://[scrubbed]:9200'
    ssl.ca_trusted_fingerprint: [scrubbed]
output_permissions:
  default:
    _elastic_agent_monitoring:
      indices:
        - names:
            - logs-elastic_agent.apm_server-default
          privileges: &ref_0
            - auto_configure
            - create_doc
        - names:
            - metrics-elastic_agent.apm_server-default
          privileges: *ref_0
        - names:
            - logs-elastic_agent.filebeat-default
          privileges: *ref_0
        - names:
            - logs-elastic_agent.fleet_server-default
          privileges: *ref_0
        - names:
            - logs-elastic_agent.metricbeat-default
          privileges: *ref_0
        - names:
            - metrics-elastic_agent.cloudbeat-default
          privileges: *ref_0
        - names:
            - metrics-elastic_agent.metricbeat-default
          privileges: *ref_0
        - names:
            - logs-elastic_agent.auditbeat-default
          privileges: *ref_0
        - names:
            - logs-elastic_agent.endpoint_security-default
          privileges: *ref_0
        - names:
            - metrics-elastic_agent.elastic_agent-default
          privileges: *ref_0
        - names:
            - metrics-elastic_agent.auditbeat-default
          privileges: *ref_0
        - names:
            - logs-elastic_agent.cloudbeat-default
          privileges: *ref_0
        - names:
            - logs-elastic_agent-default
          privileges: *ref_0
        - names:
            - metrics-elastic_agent.filebeat-default
          privileges: *ref_0
        - names:
            - metrics-elastic_agent.heartbeat-default
          privileges: *ref_0
        - names:
            - logs-elastic_agent.osquerybeat-default
          privileges: *ref_0
        - names:
            - logs-elastic_agent.heartbeat-default
          privileges: *ref_0
        - names:
            - metrics-elastic_agent.fleet_server-default
          privileges: *ref_0
        - names:
            - metrics-elastic_agent.endpoint_security-default
          privileges: *ref_0
        - names:
            - metrics-elastic_agent.packetbeat-default
          privileges: *ref_0
        - names:
            - metrics-elastic_agent.osquerybeat-default
          privileges: *ref_0
        - names:
            - logs-elastic_agent.packetbeat-default
          privileges: *ref_0
    _elastic_agent_checks:
      cluster:
        - monitor
    08aa9adf-9139-4c8e-b04e-3a007f82d86f:
      indices: []
    4be7f110-9a08-44d7-880a-1914a2abbaf4:
      indices:
        - names:
            - logs-kubernetes.container_logs-default
          privileges: *ref_0
    cb01746b-870d-42ee-a969-25f78fe1c6f2:
      indices: []
    90579f48-bc42-4cd8-bccc-57714f42a72d:
      indices:
        - names:
            - logs-apm.app-default
          privileges: *ref_0
        - names:
            - metrics-apm.app.*-default
          privileges: *ref_0
        - names:
            - logs-apm.error-default
          privileges: *ref_0
        - names:
            - metrics-apm.internal-default
          privileges: *ref_0
        - names:
            - metrics-apm.profiling-default
          privileges: *ref_0
        - names:
            - traces-apm.rum-default
          privileges: *ref_0
        - names:
            - traces-apm.sampled-default
          privileges:
            - auto_configure
            - create_doc
            - maintenance
            - monitor
            - read
        - names:
            - traces-apm-default
          privileges: *ref_0
agent:
  monitoring:
    enabled: true
    use_output: default
    namespace: default
    logs: true
    metrics: true
inputs:
  - id: filestream-container-logs-4be7f110-9a08-44d7-880a-1914a2abbaf4
    name: kubernetes-all
    revision: 3
    type: filestream
    use_output: default
    meta:
      package:
        name: kubernetes
        version: 1.21.1
    data_stream:
      namespace: default
    streams:
      - id: >-
          filestream-kubernetes.container_logs-4be7f110-9a08-44d7-880a-1914a2abbaf4
        data_stream:
          dataset: kubernetes.container_logs
          type: logs
        prospector.scanner.symlinks: true
        paths:
          - '/var/log/containers/*${kubernetes.container.id}.log'
        parsers:
          - container:
              stream: all
              format: auto
  - id: 90579f48-bc42-4cd8-bccc-57714f42a72d
    name: apm-integration
    revision: 12
    type: apm
    use_output: default
    meta:
      package:
        name: apm
        version: 8.3.0
    data_stream:
      namespace: default
    apm-server:
      capture_personal_data: true
      max_connections: 0
      max_event_size: 307200
      auth:
        api_key:
          enabled: false
          limit: null
        anonymous:
          enabled: true
          allow_agent:
            - rum-js
            - js-base
            - iOS/swift
          allow_service: null
          rate_limit:
            ip_limit: 1000
            event_limit: 300
        secret_token: null
      default_service_environment: null
      shutdown_timeout: 30s
      sampling:
        tail:
          enabled: false
          policies:
            - sample_rate: 0.1
          interval: 1m
      rum:
        enabled: false
        exclude_from_grouping: ^/webpack
        allow_headers: null
        response_headers: null
        library_pattern: node_modules|bower_components|~
        allow_origins:
          - '*'
        source_mapping:
          metadata: []
      ssl:
        enabled: false
        key_passphrase: null
        certificate: null
        supported_protocols:
          - TLSv1.0
          - TLSv1.1
          - TLSv1.2
        curve_types: null
        key: null
        cipher_suites: null
      response_headers: null
      write_timeout: 30s
      pprof.enabled: false
      host: '[scrubbed]:8200'
      max_header_size: 1048576
      idle_timeout: 45s
      expvar.enabled: false
      read_timeout: 3600s
      java_attacher:
        enabled: false
        discovery-rules: null
        download-agent-version: null
      agent_config: []
fleet:
  hosts:
    - 'https://[scrubbed]:8220'

@tsbayne Thanks for sharing the agent policy.

I see you've scrubbed the host that APM Server is configured to listen on.

By default, the APM integration is configured to listen on localhost:8200, which causes APM Server to serve only requests originating from within the machine. I just want to make sure you've configured the host to listen on an IP address (or all IP addresses) reachable from outside the machine.

You could set it to :8200 or 0.0.0.0:8200 to tell APM Server to listen on all interfaces.
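For reference, the relevant fragment of the generated policy would then look roughly like this (a sketch; in Fleet you'd normally change this through the Host field in the APM integration settings rather than by editing the policy by hand):

apm-server:
  host: '0.0.0.0:8200'   # listen on all interfaces; ':8200' is equivalent
  # host: 'localhost:8200' is the default and is reachable only from the same machine

Your curl results are worth reading in that light, too: connection refused means nothing is listening on that exact address, while a plain timeout usually means the packets never reached the socket at all (for example, a firewall in between).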

Yes, the scrubbed portion is an IP.

I discovered what I was doing wrong and will post my stupidity here in case someone else can learn from it.

I was not adding the APM integration to the correct policy; the policy my Elastic Agent was actually using was a different one.
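In hindsight, the tell is right in the policy dump above: an apm input only appears in a policy once the APM integration has been added to it. The quick check is to look at which policy your agent is actually enrolled in (Fleet > Agents shows the policy for each agent) and confirm that policy contains something like:

inputs:
  - id: 90579f48-bc42-4cd8-bccc-57714f42a72d   # id of the APM integration in this policy
    type: apm                                  # absent if the integration sits on another policy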

@tsbayne: Is it OK if I join you in your stupidity?

@marclop Thank you. The solution with 0.0.0.0:8200 worked for me. I was deploying the Elastic Stack with ECK and rolling out APM Server with Fleet. With localhost:8200 it wasn't responding to requests from other pods.
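For anyone hitting the same thing on Kubernetes: other pods typically reach APM Server through a Service, so the server has to bind to an address reachable from outside its own pod, and 0.0.0.0 is the simplest choice. Below is a minimal sketch of such a Service; the name and selector label are assumptions about a typical ECK deployment, not something taken from this thread:

# Hypothetical Service routing in-cluster traffic on port 8200 to the
# Fleet-managed Elastic Agent pods that run the APM integration.
apiVersion: v1
kind: Service
metadata:
  name: apm-server
spec:
  selector:
    agent.k8s.elastic.co/name: elastic-agent   # adjust to your Agent pods' labels
  ports:
    - name: apm
      port: 8200
      targetPort: 8200

Agents in other pods can then use http://apm-server:8200 (or the fully qualified http://apm-server.<namespace>.svc:8200) as the server URL.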
