Unable to get Metricbeat to communicate with Elasticsearch

I'm working on upgrading the Elastic stack to the current version for my company. So, I'm testing the deployment of Elasticsearch, Kibana, Logstash, and Metricbeat. Each component is in a separate docker container but all on the same host. I believe I've successfully configured the first three components, and now I'm attempting to get Metricbeat to communicate with Elasticsearch. Unfortunately, I receive one of two errors no matter what I do.

  1. Error fetching data for metricset elasticsearch.shard: error determining if connected Elasticsearch node is master: error making http request: Get "http://localhost:9200/_nodes/_local/nodes": dial tcp [::1]:9200: connect: cannot assign requested address
  2. Failed to connect to backoff(elasticsearch( Get "

I've reviewed this post, and if I disable the elasticsearch-xpack module while leaving metricbeat.yml as it is, I receive the second error above.

At this point, I'm running in circles. So, any assistance would be most appreciated.

What version?

How did you install elasticsearch?

What are you trying to accomplish with metricbeat: sending system / host metrics, or monitoring elasticsearch?

Try running this and show the output.

metricbeat test output

Please share the output section of your metricbeat.yml
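Since metricbeat is in a container, the self-test can be run from the host with docker exec; the container name below is a placeholder, substitute whatever docker ps shows for your metricbeat container:

```shell
# Run metricbeat's output connectivity self-test inside the running container.
# "mb01" is an assumed container name; check `docker ps` for yours.
docker exec -it mb01 metricbeat test output
```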

Hi Stephen, thanks for your reply.

I pulled the standard Elastic docker containers. They are all version 8.10.4.

Below is information returned from metricbeat test output.

  parse url... OK
    parse host... OK
    dns lookup... OK
    dial up... OK
    security... WARN server's certificate chain verification is disabled
    handshake... OK
    TLS version: TLSv1.3
    dial up... OK
  talk to server... OK
  version: 8.10.4

from metricbeat.yml

  hosts: [""]
  username: <removed for security>
  password: <removed for security>
  ssl:
    enabled: true
    verification_mode: none
    ca_trusted_fingerprint: "AD553FA4B1A62094E49CF061421059379ABCCF671493A8F6759DF52D45E30E77"

Hi @james_fourth, so that is good so far.

BTW, if that is a valid fingerprint, you can take out / comment out verification_mode: none

The post you looked at is incredibly old... so I would not necessarily use that.

So, back to what you are trying to collect / accomplish with metricbeat: did you enable any modules?

It looks like you enabled the elasticsearch module or the elasticsearch-xpack module... those are used to monitor elasticsearch... is that what you are trying to do?

Thanks for the tip about the fingerprint, and I’m glad to hear the output of the test looks as good to you as it did to me. I also wondered about the age of the post I referenced, and I hadn’t seen a more recent post that appeared as relevant to the issue I’m seeing.

You are correct. I am trying to monitor elasticsearch.

The same Elasticsearch or a different one?

You should read

The same elasticsearch. I’ve read that doc, but maybe I could read through it more carefully.

Post your elasticsearch-xpack.yml

NOTE: do not have both the elasticsearch module and the elasticsearch-xpack module enabled; they will overwrite each other.
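You can check which module configs are currently enabled with metricbeat's modules subcommand (run it inside the container if that is how you deployed it):

```shell
# List enabled and disabled module configurations in modules.d.
metricbeat modules list

# If both variants are enabled, disable one of them, e.g.:
metricbeat modules disable elasticsearch
```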

you can also run

metricbeat test config

Your error shows

Get \"http://localhost:9200/_nodes/_local/nodes

But your elasticsearch is running on https, so you need to use https in the elasticsearch-xpack config and provide the CA, just like in your output section

  - module: elasticsearch
    xpack.enabled: true
    period: 10s
    hosts: [""]
    username: <removed for security>
    password: <removed for security>
    ssl:
      enabled: true
      ca_trusted_fingerprint: "AD553FA4B1A62094E49CF061421059379ABCCF671493A8F6759DF52D45E30E77"


- module: elasticsearch
  xpack.enabled: true
  period: 10s
  hosts: [""]

with the remote_monitoring_user credentials below the hosts array.

The elasticsearch module isn't enabled, only elasticsearch-xpack.

When I'm interacting with the metricbeat container, I don't see a filebeat executable available. Is that something I'm missing?

After removing the verification_mode line, I receive a handshake error after executing metricbeat test output saying the certificate is valid for other addresses, not the one I'm using. I updated metricbeat.yml to refer to localhost instead, and now the dial up test returns an error saying dial tcp connect: connection refused.

elasticsearch: https://localhost:9200...
  parse url... OK
    parse host... OK
    dns lookup... OK
    addresses:, ::1
    dial up... ERROR dial tcp connect: connection refused

Updated the elasticsearch-xpack.yml to use localhost as well and restarted the container. However, metricbeat test output now returns a dial up error of dial tcp [::1]:9200: connect: cannot assign requested address

So, in short: put the exact same connection credentials in the elasticsearch-xpack module first and try that, and try with the elastic superuser.

You have a number of things that are causing issues.

OK so you need to put this back in

verification_mode: none

Because you are using different IPs to access elasticsearch than what was used when you created the self-signed cert, there is a certificate mismatch... you can fix that later.
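When you do fix it, the usual approach is to regenerate the certificate so its subject alternative names include every name/IP clients actually connect by. A rough sketch with elasticsearch-certutil; all paths and names below are assumptions for illustration:

```shell
# Inside the elasticsearch container: create a cert whose SANs cover
# every name/IP clients will use. Names, IPs, and the CA path are examples.
bin/elasticsearch-certutil cert \
  --ca config/certs/elastic-stack-ca.p12 \
  --dns localhost --dns es01 \
  --ip 127.0.0.1 \
  --out config/certs/es01-http.p12
```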

Looks like your elasticsearch is not actually running on localhost / loopback, because it has been bound to a network interface... on your host these are separate interfaces.
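One way to check what elasticsearch is actually bound to, run from the docker host (the container name es01 matches the docker ps output, but verify):

```shell
# Show which host interface/port docker publishes for the container's 9200.
docker port es01 9200

# Or list listening sockets on the host itself.
ss -tlnp | grep 9200
```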

Are you sure that user is set up correctly? First I would try with the elastic superuser, then try the remote_monitoring_user.

I do not know what you mean ... clearly you are running

metricbeat test output

Restored verification_mode: none

docker ps outputs the following

aef6b5ab2b24        docker.elastic.co/beats/metricbeat:8.10.4              "/usr/bin/tini -- /u…"   20 hours ago        Up 7 minutes                                                                   mb01
6ca05c0d6919        docker.elastic.co/logstash/logstash:8.10.4             "/usr/local/bin/dock…"   2 days ago          Up 21 hours>5400/tcp, 5044/tcp,>9600/tcp   log01
3d68776452cb        docker.elastic.co/kibana/kibana:8.10.4                 "/bin/tini -- /usr/l…"   2 days ago          Up 21 hours>5601/tcp                                     kib01
f9fc41fa6724        docker.elastic.co/elasticsearch/elasticsearch:8.10.4   "/bin/tini -- /usr/l…"   2 days ago          Up 21 hours>9200/tcp, 9300/tcp                           es01

So, does that mean I should use that address for everything?

I was previously using the elastic superuser, I'll revert to using those credentials.

I'm not sure the remote_monitoring_user is set up correctly. I assumed it was configured by default for this kind of operation.

I can execute metricbeat test output, but if I try filebeat test output as mentioned a couple of replies back, the command isn't found. Maybe that was a typo?

Still receiving connection refused with the elastic superuser, verification_mode: none, and the host address.

Sorry my typo I meant metricbeat ... fixed

No clue. I don't know how you are running these: in Compose with a shared network, or individual containers, swarm, etc.?? The IPs / what to bind to... you need to understand your environment. This is probably more of a Docker / networking thing...

Are you following some 3rd party article?

The roles are created but not the user. I am unclear what you have done... I am only getting pieces of info...

Pardon me for not being comprehensive. I'm not using compose or swarm. I'm simply following the Elastic docs pertaining to Docker for each of the Elastic components and using the commands the docs contain. When pulling the containers and using them as default, there are already a collection of users with roles assigned.

Ok so when running separate docker containers each container is its own localhost so referencing localhost in the metricbeat container is not the localhost in the elasticsearch container.

This is docker stuff.
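One common approach, since you are not running Compose, is to put the containers on a shared user-defined docker network so they can reach each other by container name (the names es01 and mb01 match your docker ps output; the network name is arbitrary):

```shell
# Create a user-defined bridge network.
docker network create elastic

# Attach the running elasticsearch and metricbeat containers to it.
docker network connect elastic es01
docker network connect elastic mb01

# Metricbeat can then reach elasticsearch at https://es01:9200,
# i.e. hosts: ["https://es01:9200"] in metricbeat.yml and the module config.
```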

Perhaps look at

So when you want one container to communicate/connect with another container, you need to use either the IP of the host or host.docker.internal


Thanks for the reminder of a docker container being a system unto itself. With that in mind, of course localhost would refer to the container and not the host environment.

Unfortunately, I'm using Oracle Linux 7.9 as the host, and the most recent version of docker available for this distro is 19.03. That means host.docker.internal isn't available. I attempted to use the host IP in the elasticsearch-xpack.yml and metricbeat.yml files, but I'm still receiving errors saying connection refused and error determining if connected Elasticsearch node is master.

How many elasticsearch nodes? Are you running just one?

The connection refused is the key message; after that, the rest are just follow-on errors from not being able to connect, so obviously it can't determine the node type.

So here is how you debug: exec into the metricbeat container and try to curl elasticsearch using the same URL and credentials that you're using in the xpack config.
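Concretely, something like the following, assuming curl is available in the metricbeat image; the container name, credentials, and URL are placeholders for whatever is in your module config:

```shell
# Exec into the metricbeat container and probe elasticsearch directly.
# -k skips certificate verification, matching verification_mode: none.
docker exec -it mb01 \
  curl -k -u elastic:<password> https://<host-or-ip>:9200/
```

If this curl fails the same way, the problem is networking/credentials, not metricbeat itself.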

What credentials are you using in the xpack config?

I'm running a single Elasticsearch node. I'm using the elastic superuser credentials in the xpack modules.

I think I found the mistake. I hadn't added the ssl section to the elasticsearch-xpack like this...

    ssl:
      enabled: true
      verification_mode: none
      ca_trusted_fingerprint: "AD553FA4B1A62094E49CF061421059379ABCCF671493A8F6759DF52D45E30E77"

Now I'm seeing successful communication with Elasticsearch!

{"log.level":"info","@timestamp":"2023-11-10T01:32:59.859Z","log.logger":"monitoring","log.origin":{"file.name":"log/log.go","file.line":187},"message":"Non-zero metrics in the last 30s","service.name":"metricbeat","monitoring":{"metrics":{"beat":{"cgroup":{"cpuacct":{"total":{"ns":221665564}},"memory":{"mem":{"usage":{"bytes":53698560}}}},"cpu":{"system":{"ticks":230,"time":{"ms":60}},"total":{"ticks":860,"time":{"ms":220},"value":860},"user":{"ticks":630,"time":{"ms":160}}},"handles":{"limit":{"hard":1048576,"soft":1048576},"open":33},"info":{"ephemeral_id":"1889797d-aa35-4b20-aa5c-675ecf22f8bf","uptime":{"ms":90113},"version":"8.10.4"},"memstats":{"gc_next":29871416,"memory_alloc":18787440,"memory_total":103813872,"rss":120987648},"runtime":{"goroutines":165}},"libbeat":{"config":{"module":{"running":3}},"output":{"events":{"acked":246,"active":0,"batches":15,"duplicates":108,"total":354},"read":{"bytes":109852},"write":{"bytes":689262}},"pipeline":{"clients":23,"events":{"active":1,"published":354,"total":354},"queue":{"acked":354}}},"metricbeat":{"docker":{"container":{"events":12,"success":12},"cpu":{"events":12,"success":12},"diskio":{"events":12,"success":12},"info":{"events":3,"success":3},"memory":{"events":12,"success":12},"network":{"events":12,"success":12}},"elasticsearch":{"cluster_stats":{"events":3,"success":3},"enrich":{"events":3,"success":3},"index":{"events":105,"success":105},"index_recovery":{"events":45,"success":45},"index_summary":{"events":3,"success":3},"node_stats":{"events":3,"success":3},"shard":{"events":108,"success":108}},"kibana":{"cluster_actions":{"events":3,"failures":3},"cluster_rules":{"events":3,"failures":3},"node_actions":{"events":3,"failures":3},"node_rules":{"events":3,"failures":3},"stats":{"events":3,"failures":3}},"logstash":{"node":{"events":3,"failures":3},"node_stats":{"events":3,"failures":3}}},"system":{"load":{"1":0.01,"15":0.15,"5":0.13,"norm":{"1":0.005,"15":0.075,"5":0.065}}}},"ecs.version":"1.6.0"}}

I'm also trying to monitor Kibana and Logstash, and there are still errors from those two like the following...

{"log.level":"error","@timestamp":"2023-11-10T01:32:59.873Z","log.origin":{"file.name":"module/wrapper.go","file.line":256},"message":"Error fetching data for metricset logstash.node_stats: error making http request: Get \"\": http: server gave HTTP response to HTTPS client","service.name":"metricbeat","ecs.version":"1.6.0"}
{"log.level":"error","@timestamp":"2023-11-10T01:32:59.874Z","log.origin":{"file.name":"module/wrapper.go","file.line":256},"message":"Error fetching data for metricset kibana.cluster_rules: error making http request: Get \"\": http: server gave HTTP response to HTTPS client","service.name":"metricbeat","ecs.version":"1.6.0"}

That said, it does appear progress is being made because I can see Elasticsearch monitored with Metricbeat in Stack Monitoring.

Thank you for your patience and attention.

