Run metricbeat in docker


#1

Hi, I am running ELK with docker-compose. I get status green in Elasticsearch and I can also log in to Kibana.
When I run "sudo ./metricbeat -e -c metricbeat.yml" from the tar.gz I downloaded, an index is created in Elasticsearch and I can import the dashboards into Kibana. The problem is that when I try to run Metricbeat with Docker, no index is created and I get this error in the logs:

"ERROR pipeline/output.go:91 Failed to connect: Get http://elasticsearch:9200: lookup elasticsearch on 127.0.0.11:53: no such host".

Thanks.


(Mark Walkom) #2

I'm no Docker expert, but it looks like it's trying to resolve the hostname elasticsearch via a local DNS lookup and cannot.

Where does Elasticsearch run?


#3

I run "docker-compose up" from this ELK stack I downloaded:

Then, in another directory, I have a Dockerfile, a metricbeat.yml, and a docker-compose.yml, and I run "docker-compose up" there as well.

Here are my files for metricbeat:

Dockerfile

FROM docker.elastic.co/beats/metricbeat:6.4.0
COPY metricbeat.yml /usr/share/metricbeat/metricbeat.yml
USER root 
RUN chown root /usr/share/metricbeat/metricbeat.yml

docker-compose.yml

version: "3.6"

services:
  metricbeat:
    build: .
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    command: -e 
    user: root

metricbeat.yml

#==========================  Modules configuration ============================

metricbeat.config.modules:
  # Glob pattern for configuration loading
  path: ${path.config}/modules.d/*.yml

  # Set to true to enable config reloading
  reload.enabled: false

  # Period on which files under path should be checked for changes
  #reload.period: 10s

#==================== Elasticsearch template setting ==========================

setup.template.settings:
  index.number_of_shards: 1
  index.codec: best_compression
  #_source.enabled: false

#============================== Kibana =====================================

# Starting with Beats version 6.0.0, the dashboards are loaded via the Kibana API.
# This requires a Kibana endpoint configuration.
setup.kibana:

#-------------------------- Elasticsearch output ------------------------------
output.elasticsearch:
  # Array of hosts to connect to.
  hosts: ["localhost:9200"]

(Toby McLaughlin) #4

In deviantony's Compose file, Elasticsearch runs in a dedicated network called "elk". When you run docker-compose up, Docker creates a network named ${DIR}_elk, where $DIR is the name of your current directory. On my system, I'm in a directory called tmp, so:

$ docker network list
NETWORK ID          NAME                DRIVER              SCOPE
9718f7a3939d        bridge              bridge              local
c9385469dfc3        host                host                local
5c51cc95c6a9        none                null                local
efbd40c66d2c        tmp_elk             bridge              local

You need to arrange for your Metricbeat container to be in that network too, so that it can resolve the hostname elasticsearch and reach the Elasticsearch container on port 9200.
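A minimal sketch of what that could look like, assuming the stack's network ended up named tmp_elk as in the listing above (adjust the name to match your own directory):

```yaml
version: "3.6"

services:
  metricbeat:
    build: .
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    command: -e
    user: root
    networks:
      - elk

networks:
  elk:
    # Join the network that the Elasticsearch stack already created,
    # instead of letting Compose create a new one.
    external:
      name: tmp_elk
```

With the container attached to that network, metricbeat.yml should then point at the service name rather than localhost, e.g. hosts: ["elasticsearch:9200"].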


(Toby McLaughlin) #5

Alternatively, you could map port 9200 from the Elasticsearch container to the host system, which the stack's Compose file also seems to do, and then connect Metricbeat to the host network with network_mode: host.

With this approach, Metricbeat can be configured to talk to localhost (as you have it now), because localhost is then the actual host machine, and Elasticsearch is bound to port 9200 on the host.

In your current configuration, localhost is just the Metricbeat container itself, so Metricbeat is trying to talk to itself.
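For this second approach, a sketch of the Metricbeat Compose file could look like the following (assuming the Elasticsearch stack publishes port 9200 on the host; note that network_mode: host is ignored by docker stack deploy):

```yaml
version: "3.6"

services:
  metricbeat:
    build: .
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    command: -e
    user: root
    # Share the host's network namespace, so "localhost" inside the
    # container is the host machine where port 9200 is published.
    network_mode: host
```

Here your existing hosts: ["localhost:9200"] in metricbeat.yml would stay as it is.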


#6

It works perfectly now that I've specified the external network. However, I did it using docker stack; I couldn't make it work with docker-compose for some reason. Anyway, it is working now. Thanks a lot for helping me.


(system) #7

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.