APM Server connecting to Elasticsearch even though only the Kafka output is enabled

Hi,
I'm using APM Server 7.1.2. I have enabled only output.kafka; all other outputs are disabled.

It's running in Kubernetes. The APM Server pod is restarting every few minutes, and I see the following logs:

2019-07-04T16:49:56.588+0530 INFO [onboarding] beater/onboarding.go:36 Publishing onboarding document
2019-07-04T16:49:57.589+0530 INFO pipeline/output.go:95 Connecting to backoff(elasticsearch(http://localhost:9200))
2019-07-04T16:49:59.189+0530 ERROR pipeline/output.go:100 Failed to connect to backoff(elasticsearch(http://localhost:9200)): Get http://localhost:9200: dial tcp 127.0.0.1:9200: connect: connection refused
2019-07-04T16:49:59.189+0530 INFO pipeline/output.go:93 Attempting to reconnect to backoff(elasticsearch(http://localhost:9200)) with 1 reconnect attempt(s)
2019-07-04T16:50:01.537+0530 ERROR pipeline/output.go:100 Failed to connect to backoff(elasticsearch(http://localhost:9200)): Get http://localhost:9200: dial tcp 127.0.0.1:9200: connect: connection refused
2019-07-04T16:50:01.537+0530 INFO pipeline/output.go:93 Attempting to reconnect to backoff(elasticsearch(http://localhost:9200)) with 2 reconnect attempt(s)
2019-07-04T16:50:09.484+0530 ERROR pipeline/output.go:100 Failed to connect to backoff(elasticsearch(http://localhost:9200)): Get http://localhost:9200: dial tcp 127.0.0.1:9200: connect: connection refused
2019-07-04T16:50:09.484+0530 INFO pipeline/output.go:93 Attempting to reconnect to backoff(elasticsearch(http://localhost:9200)) with 3 reconnect attempt(s)
2019-07-04T16:50:18.365+0530 ERROR pipeline/output.go:100 Failed to connect to backoff(elasticsearch(http://localhost:9200)): Get http://localhost:9200: dial tcp 127.0.0.1:9200: connect: connection refused
2019-07-04T16:50:18.365+0530 INFO pipeline/output.go:93 Attempting to reconnect to backoff(elasticsearch(http://localhost:9200)) with 4 reconnect attempt(s)
2019-07-04T16:50:26.511+0530 INFO [request] beater/common_handler.go:185 handled request {"request_id": "0b8bb307-8978-4135-81a6-46435c2118db", "method": "GET", "URL": "/", "content_length": 0, "remote_address": "127.0.0.1", "user-agent": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_14_5) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/75.0.3770.100 Safari/537.36", "response_code": 200}
2019-07-04T16:50:38.748+0530 INFO [beater] beater/beater.go:299 stopping apm-server... waiting maximum of 5 seconds for queues to drain
2019-07-04T16:50:38.748+0530 INFO [beater] beater/beater.go:206 Server stopped
2019-07-04T16:50:42.779+0530 ERROR pipeline/output.go:100 Failed to connect to backoff(elasticsearch(http://localhost:9200)): Get http://localhost:9200: dial tcp 127.0.0.1:9200: connect: connection refused
2019-07-04T16:50:42.779+0530 INFO pipeline/output.go:93 Attempting to reconnect to backoff(elasticsearch(http://localhost:9200)) with 5 reconnect attempt(s)
2019-07-04T16:50:43.749+0530 INFO instance/beat.go:401 apm-server stopped.

Kibana version:

Elasticsearch version: 7.x

APM Server version: 7.1.2

APM Agent language and version:

Browser version:

Original install method (e.g. download page, yum, deb, from source, etc.) and version:

Fresh install or upgraded from other version?

Is there anything special in your setup? For example, are you using the Logstash or Kafka outputs? Are you using a load balancer in front of the APM Servers? Have you changed index pattern, generated custom templates, changed agent configuration etc.

Description of the problem including expected versus actual behavior. Please include screenshots (if relevant):

Steps to reproduce:

  1. Enable only output.kafka, with all other outputs disabled.

Errors in browser console (if relevant):

Provide logs and/or server output (if relevant):

Hi and welcome to the APM forum :slight_smile:
Please provide the exact configuration you are using.

Hi, that log does not indicate that apm-server is restarting; it shows connection retries to Elasticsearch. It looks like your expected configuration is not being applied.

As my colleague Eyal says above, seeing the whole configuration you are using would help pinpoint the problem, but keep in mind that you need to explicitly disable the Elasticsearch output, e.g.:

./apm-server -e -E output.elasticsearch.enabled=false -E output.kafka.enabled=true
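The same overrides can also be placed in apm-server.yml rather than passed as flags. A minimal sketch (the Kafka broker address is an assumption; adjust it to your environment):

```yaml
# Explicitly disable the Elasticsearch output so apm-server never
# falls back to the default http://localhost:9200.
output.elasticsearch:
  enabled: false

# Send events to Kafka instead.
output.kafka:
  enabled: true
  hosts: ["kafka:9092"]   # assumed broker address; adjust as needed
```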

Juan

Thank you.

Also, I don't see logs appearing in the configured folder, even though I have enabled file logging with 0644 permissions.

Is this related to your other Discuss post, or a separate issue? Providing more information helps us help you better:

  • What is the full apm-server.yml configuration? (format it by surrounding with triple backticks ```)
  • What is the full apm-server log?
  • How is apm-server started? You mention Kubernetes; are you using a custom image?

Yes, I'm using Kubernetes. Here is my configuration:

################### APM Server Configuration #########################

############################# APM Server ######################################

apm-server:
  # Defines the host and port the server is listening on. Use "unix:/path/to.sock" to listen on a unix domain socket.
  host: "0.0.0.0:8200"

  # Maximum permitted size in bytes of a request's header accepted by the server to be processed.
  #max_header_size: 1048576

  # Maximum permitted duration for reading an entire request.
  #read_timeout: 30s

  # Maximum permitted duration for writing a response.
  #write_timeout: 30s

  # Maximum duration in seconds before releasing resources when shutting down the server.
  #shutdown_timeout: 5s

  # Maximum allowed size in bytes of a single event
  #max_event_size: 307200

  #--

  # Maximum number of new connections to accept simultaneously (0 means unlimited)
  # max_connections: 0

  # Authorization token to be checked. If a token is set here the agents must
  # send their token in the following format: Authorization: Bearer <secret-token>.
  # It is recommended to use an authorization token in combination with SSL enabled,
  # and save the token in the apm-server keystore.
  #secret_token:

  # Enable secure communication between APM agents and the server. By default ssl is disabled.
  ssl.enabled: true
  ssl.certificate: "cert.pem"
  ssl.key: "key.pem"
  # It is recommended to use the provided keystore instead of entering the passphrase in plain text.
  #ssl.key_passphrase: ""

  
#================================ General ======================================

# Internal queue configuration for buffering events to be published.
#queue:
  # Queue type by name (default 'mem')
  # The memory queue will present all available events (up to the outputs
  # bulk_max_size) to the output, the moment the output is ready to serve
  # another batch of events.
  #mem:
    # Max number of events the queue can buffer.
    #events: 4096

    # Hints the minimum number of events stored in the queue,
    # before providing a batch of events to the outputs.
    # The default value is set to 2048.
    # A value of 0 ensures events are immediately available
    # to be sent to the outputs.
    #flush.min_events: 2048

    # Maximum duration after which events are available to the outputs,
    # if the number of events stored in the queue is < flush.min_events.
    #flush.timeout: 1s

# Sets the maximum number of CPUs that can be executing simultaneously. The
# default is the number of logical CPUs available in the system.
#max_procs:


#============================== Template =====================================

# A template is used to set the mapping in Elasticsearch
# By default template loading is enabled and the template is loaded.
# These settings can be adjusted to load your own template or overwrite existing ones.

# Set to false to disable template loading.
#setup.template.enabled: true

# Template name. By default the template name is "apm-%{[observer.version]}"
# The template name and pattern has to be set in case the elasticsearch index pattern is modified.
#setup.template.name: "apm-%{[observer.version]}"

# Template pattern. By default the template pattern is "apm-%{[observer.version]}-*" to apply to the default index settings.
# The first part is the version of apm-server and then -* is used to match all daily indices.
# The template name and pattern has to be set in case the elasticsearch index pattern is modified.
#setup.template.pattern: "apm-%{[observer.version]}-*"

# Path to fields.yml file to generate the template
#setup.template.fields: "${path.config}/fields.yml"

# Overwrite existing template
#setup.template.overwrite: false

# Elasticsearch template settings
#setup.template.settings:

  # A dictionary of settings to place into the settings.index dictionary
  # of the Elasticsearch template. For more details, please check
  # https://www.elastic.co/guide/en/elasticsearch/reference/current/mapping.html
  #index:
    #number_of_shards: 1
    #codec: best_compression
    #number_of_routing_shards: 30
    #mapping.total_fields.limit: 2000

  # A dictionary of settings for the _source field. For more details, please check
  # https://www.elastic.co/guide/en/elasticsearch/reference/current/mapping-source-field.html
  #_source:
    #enabled: false


#============================= Elastic Cloud ==================================

# These settings simplify using APM Server with the Elastic Cloud (https://cloud.elastic.co/).

# The cloud.id setting overwrites the `output.elasticsearch.hosts` option.
# You can find the `cloud.id` in the Elastic Cloud web UI.
#cloud.id:

# The cloud.auth setting overwrites the `output.elasticsearch.username` and
# `output.elasticsearch.password` settings. The format is `<user>:<pass>`.
#cloud.auth:

#================================ Outputs =====================================

# Configure what output to use when sending the data collected by apm-server.

#------------------------------- Kafka output ----------------------------------
output.kafka:
  # Boolean flag to enable or disable the output module.
  enabled: true

  # The list of Kafka broker addresses from where to fetch the cluster metadata.
  # The cluster metadata contain the actual Kafka brokers events are published
  # to.
  hosts: ["kafka:9092"]

  # The Kafka topic used for produced events. The setting can be a format string
  # using any event field. To set the topic from document type use `%{[type]}`.
  topic: '%{[processor.event]}'

  # The Kafka event key setting. Use format string to create unique event key.
  # By default no event key will be generated.
  key: '%{[transaction.id]}'

#================================ Logging ======================================
logging.level: info
logging.to_files: true
logging.files:
  path: /var/log/apm-server
  name: apm-server
  keepfiles: 7
  permissions: 0644
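Since the log shows apm-server falling back to the default http://localhost:9200 Elasticsearch output, it is worth double-checking that this file is actually the one the container reads. In Kubernetes, a custom configuration only takes effect if it is mounted over the image's default file at /usr/share/apm-server/apm-server.yml. A minimal sketch, with hypothetical resource names:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: apm-server-config   # hypothetical name
data:
  apm-server.yml: |
    # paste the configuration above here
---
# In the Deployment's pod template, mount the ConfigMap over the default file:
#
# spec:
#   volumes:
#     - name: config
#       configMap:
#         name: apm-server-config
#   containers:
#     - name: apm-server
#       volumeMounts:
#         - name: config
#           mountPath: /usr/share/apm-server/apm-server.yml
#           subPath: apm-server.yml
#           readOnly: true
```

If the mount is missing or points at the wrong path, the image's bundled apm-server.yml (which targets localhost:9200) is used instead, which would explain exactly the connection-refused log lines above.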

This topic was automatically closed 20 days after the last reply. New replies are no longer allowed.