Can't get data from Filebeat

Hello, I'm trying to set up ELK + Filebeat. It's my first time. If I understand correctly, it can show me filtered logs and various metrics.
I installed it in Docker (with docker-compose) and now I have these containers: elasticsearch, logstash, kibana, filebeat. Everything runs, but I can't see any data in Kibana.
docker logs [filebeat]

How can I get CPU load, memory usage, etc.?

What metrics do you want to see?
Do you have X-Pack monitoring enabled?

How can I do that? I tried adding it to the Elasticsearch Dockerfile:
FROM docker.elastic.co/elasticsearch/elasticsearch-oss:6.4.0

RUN bin/elasticsearch-plugin install x-pack

But I got:
Building elasticsearch
Step 1/2 : FROM docker.elastic.co/elasticsearch/elasticsearch-oss:6.4.0
---> a13ec2fa6275
Step 2/2 : RUN bin/elasticsearch-plugin install x-pack
---> Running in c07ca5b8ecb3
ERROR: X-Pack is not available with the oss distribution; to use X-Pack features use the default distribution

To use the X-Pack enabled version of the Elasticsearch image, just remove -oss from the image name. Then you'll get the standard distribution that includes X-Pack.
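For example, the first line of the Dockerfile above would change to:

```dockerfile
# Standard (non-OSS) distribution, which includes X-Pack out of the box
FROM docker.elastic.co/elasticsearch/elasticsearch:6.4.0
```

With this image there is no need for the `elasticsearch-plugin install x-pack` step at all.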


Looking at your original question, I'm guessing that you would like to collect system metrics from your various hosts. Is that right?

If so, you might want to add Metricbeat to your setup.
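A minimal `metricbeat.yml` for system metrics could look like the sketch below. The hostnames `elasticsearch` and `kibana` are assumptions based on typical service names in a Compose setup; adjust them to match yours.

```yaml
# Collect basic host metrics with the system module
metricbeat.modules:
  - module: system
    metricsets: ["cpu", "memory", "network", "filesystem"]
    period: 10s

# Ship metrics to Elasticsearch (service name assumed)
output.elasticsearch:
  hosts: ["elasticsearch:9200"]

# Needed so Metricbeat can load its dashboards into Kibana (service name assumed)
setup.kibana:
  host: "kibana:5601"
```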


I started it, but I'm having some trouble. After adding the metricbeat-* index pattern, I go to Discover and get an error.
In Dashboards I have the list, but a lot of them are broken,

and the others are empty. Visualize shows "No Data" too.

It looks like you are making progress. It's hard to help without seeing your Docker Compose file. Please share it if you'd like to.

You can also try our comprehensive Compose example over here. It might contain some ideas that help.
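If the Metricbeat dashboards show up broken or empty, they may simply not have been loaded into Kibana yet. One way to load them (a sketch, assuming the service names `elasticsearch` and `kibana` from a Compose setup like the one discussed here) is:

```shell
# Load Metricbeat's bundled index pattern and dashboards into Kibana.
# The -E flags override the connection settings at run time;
# service names are assumptions from the Compose setup.
docker-compose exec metricbeat metricbeat setup \
  -E output.elasticsearch.hosts='["elasticsearch:9200"]' \
  -E setup.kibana.host=kibana:5601
```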

version: '2'

services:

  elasticsearch:
    build:
      context: elasticsearch/
    volumes:
      - ./elasticsearch/config/elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml:ro
    ports:
      - "9200:9200"
      - "9300:9300"
    environment:
      ES_JAVA_OPTS: "-Xmx256m -Xms256m"
    networks:
      - elk

  logstash:
    build:
      context: logstash/
    volumes:
      - ./logstash/config/logstash.yml:/usr/share/logstash/config/logstash.yml:ro
      - ./logstash/pipeline:/usr/share/logstash/pipeline:ro
    ports:
      - "5000:5000"
    environment:
      LS_JAVA_OPTS: "-Xmx256m -Xms256m"
    networks:
      - elk
    depends_on:
      - elasticsearch

  kibana:
    build:
      context: kibana/
    volumes:
      - ./kibana/config/:/usr/share/kibana/config:ro
    ports:
      - "5601:5601"
    networks:
      - elk
    depends_on:
      - elasticsearch

  filebeat:
    hostname: filebeat
    # ** To build the image, you need to specify your own Docker Hub account:
    # image: bcoste/filebeat:latest
    build:
      context: ./filebeat
    volumes:
      # needed to persist Filebeat tracking data:
      - "filebeat_data:/usr/share/filebeat/data:rw"
      # needed to access all Docker logs (read only):
      - "/var/lib/docker/containers:/usr/share/dockerlogs/data:ro"
      # needed to access additional information about containers:
      - "/var/run/docker.sock:/var/run/docker.sock"
      - "/var/log:/var/log"
    networks:
      - elk

  metricbeat:
    image: docker.elastic.co/beats/metricbeat:6.4.0
    # https://github.com/docker/swarmkit/issues/1951
    hostname: "metricbeat"
    user: root
    networks:
      - elk
    volumes:
      - ./metricbeat/metricbeat.yml:/usr/share/metricbeat/metricbeat.yml
      - /proc:/hostfs/proc:ro
      - /sys/fs/cgroup:/hostfs/sys/fs/cgroup:ro
      - /:/hostfs:ro
      - /var/run/docker.sock:/var/run/docker.sock
      - ./metricbeat:/usr/share/metricbeat/data
    environment:
      - ELASTICSEARCH_HOST=elasticsearch
      - KIBANA_HOST=kibana
    # disable strict permission checks
    command: ["--strict.perms=false", "-system.hostfs=/hostfs"]
networks:
  elk:
    driver: bridge
volumes:
# create a persistent volume for Filebeat
  filebeat_data:

Each Dockerfile looks like this:
FROM docker.elastic.co/elasticsearch/elasticsearch:6.4.0

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.