Elastic APM server http://localhost:8220/ is not available (Server returned status 400)

I cannot connect to the APM server from a Spring Boot application running on Tomcat. I ran the docker-compose file from GitHub - elastic/apm-server: APM Server, and added the following configuration to my run configuration in IntelliJ IDEA:

-javaagent:/xxx/tools/elastic/elastic-apm-agent-1.36.0.jar
-Delastic.apm.service_name=juno-local-dev
-Delastic.apm.server_url=http://localhost:8220
-Delastic.apm.environment=dev
-Delastic.apm.secret_token=
-Delastic.apm.application_packages=com.example.juno
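For context, this is roughly how the same flags look when launching the application from the command line instead of the IDE (a sketch; the jar name is hypothetical, and the agent path is kept as given above):

```
java -javaagent:/xxx/tools/elastic/elastic-apm-agent-1.36.0.jar \
     -Delastic.apm.service_name=juno-local-dev \
     -Delastic.apm.server_url=http://localhost:8220 \
     -Delastic.apm.environment=dev \
     -Delastic.apm.application_packages=com.example.juno \
     -jar juno.jar
```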

When the server starts, I can see this log:

2023-03-14 10:07:09,564 [main] INFO co.elastic.apm.agent.util.JmxUtils - Found JVM-specific OperatingSystemMXBean interface: com.sun.management.OperatingSystemMXBean
2023-03-14 10:07:09,610 [main] INFO co.elastic.apm.agent.util.JmxUtils - Found JVM-specific ThreadMXBean interface: com.sun.management.ThreadMXBean
2023-03-14 10:07:09,641 [main] INFO co.elastic.apm.agent.configuration.StartupInfo - Starting Elastic APM 1.36.0 as juno-local-dev on Java 11.0.18 Runtime version: 11.0.18+10-LTS VM version: 11.0.18+10-LTS (Amazon.com Inc.) Mac OS X 13.2.1
2023-03-14 10:07:09,642 [main] INFO co.elastic.apm.agent.configuration.StartupInfo - service_name: 'juno-local-dev' (source: Java System Properties)
2023-03-14 10:07:09,643 [main] INFO co.elastic.apm.agent.configuration.StartupInfo - environment: 'dev' (source: Java System Properties)
2023-03-14 10:07:09,643 [main] INFO co.elastic.apm.agent.configuration.StartupInfo - server_url: 'http://localhost:8220' (source: Java System Properties)
2023-03-14 10:07:09,643 [main] INFO co.elastic.apm.agent.configuration.StartupInfo - application_packages: 'com.example.juno' (source: Java System Properties)
2023-03-14 10:07:12,870 [main] INFO co.elastic.apm.agent.impl.ElasticApmTracer - Tracer switched to RUNNING state
2023-03-14 10:07:12,997 [elastic-apm-server-healthcheck] WARN co.elastic.apm.agent.report.ApmServerHealthChecker - Elastic APM server http://localhost:8220/ is not available (Server returned status 400)

Docker containers have a different view of the network than the host. To reach your local machine from inside a container, use host.docker.internal instead of localhost.

The Java application is running outside the container, in the IDE.
And port 8220 is published outside the container (see docker-compose.yml). The HTTP 400 means I did reach the Fleet endpoint in some way.
Therefore I don't think this is the problem.
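One way to check which service actually answers on each port (the endpoints below come from the healthchecks in the compose file: Fleet Server serves HTTPS on 8220 with an `/api/status` endpoint, while APM Server serves plain HTTP on 8200 and answers `GET /` with build info):

```shell
# Fleet Server: HTTPS on 8220, self-signed cert, so -k skips verification
curl -s -k https://localhost:8220/api/status

# APM Server: plain HTTP on 8200; a healthy server answers GET / with
# a small JSON payload containing its build/version information
curl -s http://localhost:8200/
```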

Hi @Jean-Rene_Robin

Test the endpoint

curl -v -X POST http://localhost:8200/ -H "Authorization: Bearer secret_token"

Is it running on HTTPS? If you installed it with Fleet as the APM integration, it probably is...

curl -v -k -X POST https://localhost:8200/ -H "Authorization: Bearer secret_token"

The -k flag skips certificate verification, in case the certificate is self-signed.

I'm sorry (newbie here), there was no error on your side. I eventually realized I had not created an APM Server at all: I was contacting the Fleet Server instead of the APM Server. I thought the APM Server was embedded in the Fleet container; at that time I had not correctly understood the architecture.

I added an APM Server container to the stack, and it worked. Thank you for your answers.
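With the apm-server service in the stack, the agent needs to point at the APM Server's port 8200 instead of Fleet Server's 8220. A sketch of the corrected run configuration (same values as above, only server_url changes):

```
-javaagent:/xxx/tools/elastic/elastic-apm-agent-1.36.0.jar
-Delastic.apm.service_name=juno-local-dev
-Delastic.apm.server_url=http://localhost:8200
-Delastic.apm.environment=dev
-Delastic.apm.application_packages=com.example.juno
```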

For reference, here is the docker-compose.yml file (using configuration files from apm-server/testing/docker at main · elastic/apm-server · GitHub, stored in a folder called configuration).

version: '3.9'
services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:8.8.0-ba3f07b2-SNAPSHOT
    ports:
      - 9200:9200
    healthcheck:
      test: ["CMD-SHELL", "curl -s 'http://localhost:9200/_cluster/health?wait_for_status=yellow&timeout=500ms'"]
      retries: 300
      interval: 1s
    environment:
      - "ES_JAVA_OPTS=-Xms1g -Xmx1g"
      - "network.host=0.0.0.0"
      - "transport.host=127.0.0.1"
      - "http.host=0.0.0.0"
      - "cluster.routing.allocation.disk.threshold_enabled=false"
      - "discovery.type=single-node"
      - "xpack.security.authc.anonymous.roles=remote_monitoring_collector"
      - "xpack.security.authc.realms.file.file1.order=0"
      - "xpack.security.authc.realms.native.native1.order=1"
      - "xpack.security.enabled=true"
      - "xpack.license.self_generated.type=trial"
      - "xpack.security.authc.token.enabled=true"
      - "xpack.security.authc.api_key.enabled=true"
      - "logger.org.elasticsearch=${ES_LOG_LEVEL:-error}"
      - "action.destructive_requires_name=false"
    volumes:
      - "./configuration/elasticsearch/roles.yml:/usr/share/elasticsearch/config/roles.yml"
      - "./configuration/elasticsearch/users:/usr/share/elasticsearch/config/users"
      - "./configuration/elasticsearch/users_roles:/usr/share/elasticsearch/config/users_roles"
      - "./configuration/elasticsearch/ingest-geoip:/usr/share/elasticsearch/config/ingest-geoip"

  kibana:
    image: docker.elastic.co/kibana/kibana:8.8.0-ba3f07b2-SNAPSHOT
    ports:
      - 5601:5601
    healthcheck:
      test: ["CMD-SHELL", "curl -s http://localhost:5601/api/status | grep -q 'All services are available'"]
      retries: 300
      interval: 1s
    environment:
      ELASTICSEARCH_HOSTS: '["http://elasticsearch:9200"]'
      ELASTICSEARCH_USERNAME: "${KIBANA_ES_USER:-kibana_system_user}"
      ELASTICSEARCH_PASSWORD: "${KIBANA_ES_PASS:-changeme}"
      XPACK_FLEET_AGENTS_FLEET_SERVER_HOSTS: '["https://fleet-server:8220"]'
      XPACK_FLEET_AGENTS_ELASTICSEARCH_HOSTS: '["http://elasticsearch:9200"]'
    depends_on:
      elasticsearch: { condition: service_healthy }
    volumes:
      - "./configuration/kibana/kibana.yml:/usr/share/kibana/config/kibana.yml"

  fleet-server:
    image: docker.elastic.co/beats/elastic-agent:8.8.0-ba3f07b2-SNAPSHOT
    ports:
      - 8220:8220
    healthcheck:
      test: ["CMD-SHELL", "curl -s -k https://localhost:8220/api/status | grep -q 'HEALTHY'"]
      retries: 300
      interval: 1s
    environment:
      FLEET_SERVER_ENABLE: "1"
      FLEET_SERVER_POLICY_ID: "fleet-server-apm"
      FLEET_SERVER_ELASTICSEARCH_HOST: http://elasticsearch:9200
      FLEET_SERVER_ELASTICSEARCH_USERNAME: "${ES_SUPERUSER_USER:-admin}"
      FLEET_SERVER_ELASTICSEARCH_PASSWORD: "${ES_SUPERUSER_PASS:-changeme}"
      FLEET_SERVER_CERT: /etc/pki/tls/certs/fleet-server.pem
      FLEET_SERVER_CERT_KEY: /etc/pki/tls/private/fleet-server-key.pem
      FLEET_URL: https://fleet-server:8220
      KIBANA_FLEET_SETUP: "true"
      KIBANA_FLEET_HOST: "http://kibana:5601"
      KIBANA_FLEET_USERNAME: "${ES_SUPERUSER_USER:-admin}"
      KIBANA_FLEET_PASSWORD: "${ES_SUPERUSER_PASS:-changeme}"
    depends_on:
      elasticsearch: { condition: service_healthy }
      kibana: { condition: service_healthy }
    volumes:
      - "./configuration/fleet-server/certificate.pem:/etc/pki/tls/certs/fleet-server.pem"
      - "./configuration/fleet-server/key.pem:/etc/pki/tls/private/fleet-server-key.pem"

  apm-server:
    image: docker.elastic.co/apm/apm-server:8.6.2
    cap_add: ["CHOWN", "DAC_OVERRIDE", "SETGID", "SETUID"]
    cap_drop: ["ALL"]
    ports:
      - 8200:8200
    command: >
      apm-server -e
        -E apm-server.rum.enabled=true
        -E setup.kibana.host=kibana:5601
        -E setup.template.settings.index.number_of_replicas=0
        -E apm-server.kibana.enabled=true
        -E apm-server.kibana.host=http://kibana:5601
        -E apm-server.kibana.username=${ES_SUPERUSER_USER:-admin}
        -E apm-server.kibana.password=${ES_SUPERUSER_PASS:-changeme}
        -E output.elasticsearch.hosts=["http://elasticsearch:9200"]
        -E output.elasticsearch.username=${KIBANA_ES_USER:-apm_server_user}
        -E output.elasticsearch.password=${KIBANA_ES_PASS:-changeme}
    healthcheck:
      interval: 10s
      retries: 12
      test: curl --write-out 'HTTP %{http_code}' --fail --silent --output /dev/null http://localhost:8200/
    depends_on:
      elasticsearch: { condition: service_healthy }
      kibana: { condition: service_healthy }

  metricbeat:
    image: docker.elastic.co/beats/metricbeat:8.8.0-ba3f07b2-SNAPSHOT
    environment:
      ELASTICSEARCH_HOSTS: '["http://elasticsearch:9200"]'
      ELASTICSEARCH_USERNAME: "${KIBANA_ES_USER:-admin}"
      ELASTICSEARCH_PASSWORD: "${KIBANA_ES_PASS:-changeme}"
    depends_on:
      elasticsearch: { condition: service_healthy }
      fleet-server: { condition: service_healthy }
    volumes:
      - "./configuration/metricbeat/elasticsearch-xpack.yml:/usr/share/metricbeat/modules.d/elasticsearch-xpack.yml"
      - "./configuration/metricbeat/apm-server.yml:/usr/share/metricbeat/modules.d/apm-server.yml"
    profiles:
      - monitoring
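A quick smoke test after bringing the stack up (ports and endpoints taken from the `ports` mappings and healthchecks in the compose file above):

```shell
# Start the stack in the background; healthchecks gate the dependencies
docker compose up -d

# APM Server should now answer on 8200 over plain HTTP
curl -s http://localhost:8200/

# Fleet Server stays on 8220 over HTTPS with a self-signed certificate
curl -s -k https://localhost:8220/api/status
```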