Why oh why oh why is Metricbeat failing to connect to Elasticsearch?

I have just installed Elasticsearch, Kibana and Logstash. This is my first installation, so please be kind.

ES, Kibana, Logstash and Metricbeat are all v5.4.2
OS: CentOS 6.9
Metricbeat installed from RPM
ES, Kibana and Logstash installed from tar.gz

I can log in to Kibana, I have run a couple of the tutorials (Bank, Logstash, etc.), and I can see information there.

Last night I installed Metricbeat, but it is unable to connect to Elasticsearch. The following error is spewed out in the Metricbeat log file:

{code}
2017-06-26T18:38:14+01:00 ERR Connecting error publishing events (retrying): Get https://localhost:9200: dial tcp 127.0.0.1:9200: getsockopt: connection refused
2017-06-26T18:38:40+01:00 INFO No non-zero metrics in the last 30s
{code}

I have stuck the first 300 log file lines up on Pastebin: https://pastebin.com/gak0C48E

Anyone able to point me in the right direction?

Looks like you enabled the "https" protocol, but Elasticsearch only serves HTTP. What does your metricbeat.yml look like?
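A quick way to check this from the host, in case it helps. These are plain diagnostic commands against your live node (assuming the default port), so the exact output will depend on your setup:

```shell
# If Elasticsearch is serving plain HTTP, this should return the cluster banner JSON:
curl http://localhost:9200

# ...and this should fail with an SSL/connection error if TLS is not enabled on the ES side:
curl https://localhost:9200
```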

{code}
###################### Metricbeat Configuration Example #######################

# This file is an example configuration file highlighting only the most common
# options. The metricbeat.full.yml file from the same directory contains all the
# supported options with more comments. You can use it as a reference.
#
# You can find the full configuration reference here:
# https://www.elastic.co/guide/en/beats/metricbeat/index.html

#==========================  Modules configuration ============================
metricbeat.modules:

#------------------------------- System Module -------------------------------
- module: system
  metricsets:
    # CPU stats
    - cpu

    # System Load stats
    - load

    # Per CPU core stats
    - core

    # IO stats
    - diskio

    # Per filesystem stats
    - filesystem

    # File system summary stats
    - fsstat

    # Memory stats
    - memory

    # Network stats
    - network

    # Per process stats
    - process

    # Sockets (linux only)
    - socket
  enabled: true
  period: 10s
  processes: ['.*']

#================================ General =====================================

# The name of the shipper that publishes the network data. It can be used to group
# all the transactions sent by a single shipper in the web interface.
#name: BRAVO

# The tags of the shipper are included in their own field with each
# transaction published.
#tags: ["service-X", "web-tier"]

# Optional fields that you can specify to add additional information to the
# output.
#fields:
#  env: staging

#================================ Outputs =====================================

# Configure what outputs to use when sending the data collected by the beat.
# Multiple outputs may be used.

#-------------------------- Elasticsearch output ------------------------------
output.elasticsearch:
  # Array of hosts to connect to.
  hosts: ["localhost:9200"]

  # Optional protocol and basic auth credentials.
  protocol: "https"
  username: "user"
  password: "blablabla"

#----------------------------- Logstash output --------------------------------
#output.logstash:
  # The Logstash hosts
  #hosts: ["localhost:5044"]

  # Optional SSL. By default is off.
  # List of root certificates for HTTPS server verifications
  #ssl.certificate_authorities: ["/etc/pki/root/ca.pem"]

  # Certificate for SSL client authentication
  #ssl.certificate: "/etc/pki/client/cert.pem"

  # Client Certificate Key
  #ssl.key: "/etc/pki/client/cert.key"

#================================ Logging =====================================

# Sets log level. The default log level is info.
# Available log levels are: critical, error, warning, info, debug
#logging.level: debug

# At debug level, you can selectively enable logging only for some components.
# To enable all selectors use ["*"]. Examples of other selectors are "beat",
# "publish", "service".
#logging.selectors: ["*"]
{code}

Yes, so you have https enabled:

{code}
output.elasticsearch:
  # Array of hosts to connect to.
  hosts: ["localhost:9200"]

  # Optional protocol and basic auth credentials.
  protocol: "https"
  username: "user"
  password: "blablabla"
{code}

You can either comment out the protocol line, or make sure HTTPS is enabled on the Elasticsearch side. Do you use X-Pack for TLS?
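For reference, if you do want Metricbeat to talk HTTPS, the Elasticsearch HTTP layer needs TLS enabled first. With X-Pack on 5.x that would look roughly like the following in elasticsearch.yml. This is only a sketch: the key/certificate paths are placeholders, so check the X-Pack security docs for your exact version before using it:

```yaml
# Hypothetical sketch: enable TLS on the Elasticsearch HTTP layer via X-Pack (5.x).
# All paths below are placeholders, not real files on the poster's system.
xpack.ssl.key: /etc/elasticsearch/ssl/node.key
xpack.ssl.certificate: /etc/elasticsearch/ssl/node.crt
xpack.ssl.certificate_authorities: ["/etc/elasticsearch/ssl/ca.crt"]
xpack.security.http.ssl.enabled: true
```

With that in place, Metricbeat's `protocol: "https"` would match what the server actually serves; without it, commenting out the protocol line is the simpler fix.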

Hi Tudor,

I am literally two days into ES, Logstash and Kibana. I have installed X-Pack but have no idea how to get it working for me.

I have an nginx reverse proxy connecting to Kibana so I can access Elasticsearch; nginx has my SSL config. I don't think I have set up TLS in elasticsearch.yml, but I'm off to check.
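For anyone following along, the setup described above usually means TLS terminates at nginx while Kibana (and Elasticsearch behind it) stay on plain HTTP. A hypothetical sketch of that nginx server block, with placeholder names and paths rather than the poster's actual config:

```nginx
server {
    listen 443 ssl;
    server_name kibana.example.com;               # placeholder hostname

    ssl_certificate     /etc/nginx/ssl/cert.pem;  # placeholder path
    ssl_certificate_key /etc/nginx/ssl/cert.key;  # placeholder path

    location / {
        # Kibana's default port; traffic behind nginx is plain HTTP
        proxy_pass http://localhost:5601;
    }
}
```

This is why a local Metricbeat should use plain HTTP: the HTTPS endpoint only exists at the proxy, not on Elasticsearch itself.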

My elasticsearch.yml looks like the following:

{code}
# ======================== Elasticsearch Configuration =========================
#
# NOTE: Elasticsearch comes with reasonable defaults for most settings.
#       Before you set out to tweak and tune the configuration, make sure you
#       understand what are you trying to accomplish and the consequences.
#
# The primary way of configuring a node is via this file. This template lists
# the most important settings you may want to configure for a production cluster.
#
# Please consult the documentation for further information on configuration options:
# https://www.elastic.co/guide/en/elasticsearch/reference/index.html
#
# ---------------------------------- Cluster -----------------------------------
#
# Use a descriptive name for your cluster:
#
cluster.name: bravo-cluster
#
# ------------------------------------ Node ------------------------------------
#
# Use a descriptive name for the node:
#
node.name: bravo-node-1
#
# Add custom attributes to the node:
#
#node.attr.rack: r1
#
# ----------------------------------- Paths ------------------------------------
#
# Path to directory where to store the data (separate multiple locations by comma):
#
#path.data: /path/to/data
#
# Path to log files:
#
#path.logs: /path/to/logs
#
# ----------------------------------- Memory -----------------------------------
#
# Lock the memory on startup:
#
bootstrap.memory_lock: true
#
# Make sure that the heap size is set to about half the memory available
# on the system and that the owner of the process is allowed to use this
# limit.
#
# Elasticsearch performs poorly when the system is swapping the memory.
#
# ---------------------------------- Network -----------------------------------
#
# Set the bind address to a specific IP (IPv4 or IPv6):
#
network.host: localhost
#
# Set a custom port for HTTP:
#
http.port: 9200
#
# For more information, consult the network module documentation.
#
# --------------------------------- Discovery ----------------------------------
#
# Pass an initial list of hosts to perform discovery when new node is started:
# The default list of hosts is ["127.0.0.1", "[::1]"]
#
#discovery.zen.ping.unicast.hosts: ["host1", "host2"]
#
# Prevent the "split brain" by configuring the majority of nodes (total number of master-eligible nodes / 2 + 1):
#
#discovery.zen.minimum_master_nodes: 3
#
# For more information, consult the zen discovery module documentation.
#
# ---------------------------------- Gateway -----------------------------------
#
# Block initial recovery after a full cluster restart until N nodes are started:
#
#gateway.recover_after_nodes: 3
#
# For more information, consult the gateway module documentation.
#
# ---------------------------------- Various -----------------------------------
#
# Require explicit names when deleting indices:
#
#action.destructive_requires_name: true
{code}

Then it sounds like you want to comment out the authentication part in metricbeat.yml, so that it reads like this:

{code}
output.elasticsearch:
  # Array of hosts to connect to.
  hosts: ["localhost:9200"]

  # Optional protocol and basic auth credentials.
  #protocol: "https"
  #username: "user"
  #password: "blablabla"
{code}

BOOM! And it looks like I have data.

Thanks a lot, Tudor. I will now go get that working in Packetbeat.

I just commented out the protocol line in the optional protocol part.
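In case it helps anyone else hitting this: once Metricbeat connects, you can sanity-check that data is actually arriving with a quick query against the live node. This assumes the default Beats index naming (daily `metricbeat-*` indices) and the default port:

```shell
# List indices on the cluster and filter for Metricbeat's daily indices:
curl 'http://localhost:9200/_cat/indices?v' | grep metricbeat
```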
