Dockerizing the entire Elastic Stack

I am trying to dockerize Elasticsearch, Kibana, Logstash, Metricbeat, Packetbeat, PostgreSQL, and Redis.

My docker-compose configuration file looks like this:

version: '3'
services:
  redis:
    build: ./docker/redis

  postgresql:
    build: ./docker/postgresql
    ports:
      - "5433:5432"
    env_file:
      - .env

  graphql:
    build: .
    command: npm run start
    volumes:
      - ./logs/:/usr/app/logs/
    ports:
      - "3000:3000"
    env_file:
      - .env
    depends_on:
      - "redis"
      - "postgresql"
    links:
      - "redis"
      - "postgresql"

  elasticsearch:
    build: ./docker/elasticsearch
    container_name: elasticsearch
    networks:
      - elastic
    ports:
      - "9200:9200"
    depends_on:
      - "graphql"
    links:
      - "kibana"

  kibana:
    build: ./docker/kibana
    container_name: kibana
    ports:
      - "5601:5601"
    depends_on:
      - "graphql"
    networks:
      - elastic
    environment:
      - ELASTICSEARCH_URL=http://elasticsearch:9200

  metricbeat:
    build: ./docker/metricbeat
    depends_on:
      - "graphql"
      - "elasticsearch"
      - "kibana"
    volumes:
      - /proc:/hostfs/proc:ro
      - /sys/fs/cgroup:/hostfs/sys/fs/cgroup:ro
      - /:/hostfs:ro
    networks:
      - elastic
    environment:
      - ELASTICSEARCH_URL=http://elasticsearch:9200
    command:
      - "-system.hostfs=/hostfs"

  packetbeat:
    build: ./docker/packetbeat
    depends_on:
      - "graphql"
      - "elasticsearch"
      - "kibana"
    cap_add:
      - NET_ADMIN
    networks:
      - elastic
    environment:
      - ELASTICSEARCH_URL=http://127.0.0.1:9200

  logstash:
    build: ./docker/logstash
    ports:
      - "9600:9600"
    volumes:
      - ./logs:/usr/logs
    depends_on:
      - "graphql"
      - "elasticsearch"
      - "kibana"
    networks:
      - elastic
    environment:
      - ELASTICSEARCH_URL=http://elasticsearch:9200

networks:
  elastic:
    driver: bridge

This works perfectly: everything starts up and talks to everything else, and I can visualize data in Kibana. However, according to the Packetbeat Docker documentation https://www.elastic.co/guide/en/beats/packetbeat/master/running-on-docker.html - if I run on a bridge network, I can only capture traffic that passes through the Packetbeat container itself (is that right?). So I tried adding network_mode: host to the packetbeat service and removing it from the elastic network, so that it can capture all the traffic that comes and goes from the host machine. However, then Packetbeat fails its health check against http://elasticsearch:9200, because it is no longer on the same network and cannot resolve that hostname.
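Concretely, the change I tried looks roughly like this (a sketch only; network_mode: host is the compose key for host networking, and it is mutually exclusive with the networks: and ports: keys):

```yaml
packetbeat:
  build: ./docker/packetbeat
  cap_add:
    - NET_ADMIN
  # Run on the host's network stack so Packetbeat sees all host traffic,
  # not just traffic addressed to its own container.
  network_mode: host
  environment:
    # On the host network the container can no longer resolve the
    # "elasticsearch" service name, so it would have to reach ES through
    # a port published to the host instead.
    - ELASTICSEARCH_URL=http://localhost:9200
```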

Also, my graphql server container talks to redis and postgresql through link URLs, so graphql references them as http://redis and http://postgresql. In this case, how can I make sure that Packetbeat is listening on the correct ports so that I can collect network data from these two as well? Even though I have published port 5433 for postgresql, if I link graphql with postgresql and use postgresql as the DNS name, the traffic goes over the container port 5432 rather than the published 5433, doesn't it?
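To illustrate what I mean, I imagine the Packetbeat protocol configuration would look something like this (the port numbers here are my assumptions based on the compose file above, not a tested config):

```yaml
packetbeat.protocols:
- type: pgsql
  # graphql reaches postgresql by service name, so inside the Docker
  # network the traffic uses the container port 5432; 5433 is only the
  # port published to the host.
  ports: [5432, 5433]
- type: redis
  # Default Redis port inside the redis container.
  ports: [6379]
```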

One final question: how can I make sure the default dashboards are set up for Metricbeat and Packetbeat when the containers start? I have tried things that would work on an ordinary system, like the following, with no success.

FROM docker.elastic.co/beats/metricbeat:6.3.2

CMD ["./metricbeat", "setup", "--dashboards"]

You can set setup.dashboards.enabled to true in your configuration file, so the dashboards are loaded when your Beat starts.
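For example, in metricbeat.yml (a minimal sketch; the same settings apply to packetbeat.yml, and the kibana host assumes the service name from the compose file above):

```yaml
# Import the default Kibana dashboards when the Beat starts.
setup.dashboards.enabled: true

# The Beat needs to reach Kibana to load the dashboards.
setup.kibana:
  host: "kibana:5601"

output.elasticsearch:
  hosts: ["elasticsearch:9200"]
```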

Elastic also has a project that runs everything with docker-compose: GitHub - elastic/stack-docker (no longer maintained). Take a look at how it sets up the dashboards for each Beat.

Right. You'd need to run it in host mode to see all traffic to and from the host; this is what the elastic/stack-docker project does. And in order for Packetbeat to communicate with ES, you need to publish port 9200 from the ES container so that ES is reachable by Packetbeat as http://localhost:9200.


Hi, thanks for the reply.

I am trying out the Elastic docker-compose project and customizing it to my needs.

I have one more question though.

If I am running PostgreSQL in Docker and exposing port 5433 to the host, would I be able to get all the metrics through Beats by listening on port 5433 at localhost? (My main graphql server connects through a "link" rather than localhost:5433.) If so, how do I configure the packetbeat.yml file so that it has not only the ports defined but also the DNS? Thanks!
