Importing dashboards into Elasticsearch

Hello everyone,

I'm looking for a bit of help on how to import my dashboards through Filebeat into ELK running with Docker on my computer. When using a normal ELK setup (without Docker), I imported my dashboards with the following command in the terminal:

./filebeat -c filebeat_dashboards.yml
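
As far as I know, the explicit `setup` subcommand does the same thing on recent versions:

./filebeat setup --dashboards -c filebeat_dashboards.yml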

1. This is how the .yml file looks:
---------------------------------------------------------------------------------------------------
#============================== Dashboards =====================================
# These settings control loading the sample dashboards to the Kibana index. Loading
# the dashboards is disabled by default and can be enabled either by setting the
# options here or by using the `setup` command.
setup.dashboards.enabled: true

# The directory from where to read the dashboards. The default is the `kibana`
# folder in the home path.
setup.dashboards.directory: ./h_dashboard/

#============================== Kibana =====================================

# Starting with Beats version 6.0.0, the dashboards are loaded via the Kibana API.
# This requires a Kibana endpoint configuration.
setup.kibana:

  # Kibana Host
  # Scheme and port can be left out and will be set to the default (http and 5601)
  # In case you specify an additional path, the scheme is required: http://localhost:5601/path
  # IPv6 addresses should always be defined as: https://[2001:db8::1]:5601
  host: "localhost:5601"

  # Kibana Space ID
  # ID of the Kibana Space into which the dashboards should be loaded. By default,
  # the Default Space will be used.
  #space.id: "hmm3"

-------------------------------------------------------------------------------------------------
2. Now, this is how my docker-compose file looks:
-----------------------------------------------------------------------------------------------------
version: "2.1"
services:
  # The environment variable "ELASTIC_VERSION" is used throughout this file to
  # specify the version of the images to run. The default is set in the
  # '.env' file in this folder. It can be overridden with any normal
  # technique for setting environment variables, for example:
  #
  #   ELASTIC_VERSION=5.5.1 docker-compose up
  #
  # Additionally, the user can control:
  #   * the total memory assigned to the ES container through the variable ES_MEM_LIMIT e.g. ES_MEM_LIMIT=2g
  #   * the memory assigned to the ES JVM through the variable ES_JVM_HEAP e.g. ES_JVM_HEAP=1024m
  #   * the password used for the elastic, logstash_system and kibana accounts through the variable ES_PASSWORD
  #   * the mysql root password through the var MYSQL_ROOT_PASSWORD
  #   * the default index pattern used in kibana via the var DEFAULT_INDEX_PATTERN
  # REF: https://docs.docker.com/compose/compose-file/#variable-substitution
  #                    
  elasticsearch:
    build:
      context: config/elasticsearch/
    container_name: elasticsearch
    hostname: elasticsearch
    environment:
      - http.host=0.0.0.0
      - transport.host=127.0.0.1
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms${ES_JVM_HEAP} -Xmx${ES_JVM_HEAP}"
    mem_limit: ${ES_MEM_LIMIT}
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - ./config/elasticsearch/elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml
      - esdata:/usr/share/elasticsearch/data
    #Port 9200 is available on the host
    ports: ['9200:9200']
    #Healthcheck to confirm availability of ES. Other containers wait on this.
    healthcheck:
      test: ["CMD", "curl","-s" ,"-f", "-u", "elastic:${ES_PASSWORD}", "http://localhost:9200/_cat/health"]
    #Internal network for the containers
    networks: ['stack']
  kibana:
    build:
      context: config/kibana/
    container_name: kibana
    hostname: kibana
    volumes:
      - ./config/kibana/kibana.yml:/usr/share/kibana/config/kibana.yml
    #Port 5601 accessible on the host
    ports: ['5601:5601']
    networks: ['stack']
    #We don't start Kibana until the ES instance is ready
    depends_on: ['elasticsearch']
    environment:
      - "ELASTICSEARCH_PASSWORD=${ES_PASSWORD}"
    healthcheck:
      test: ["CMD", "curl", "-s", "-f", "http://localhost:5601/login"]
      retries: 6
  #Filebeat container
  filebeat:
    build:
      context: config/beats/filebeat
    container_name: filebeat
    hostname: filebeat
    user: root
    volumes:
      #Mount the filebeat configuration so users can make edits
      - ./config/beats/filebeat/dashboards/filebeat.yml:/usr/share/filebeat/filebeat.yml:ro
      #Mount the prospectors directory. Users can in turn add prospectors to this directory and they will be dynamically loaded
      #- ./config/beats/filebeat/prospectors.d/:/usr/share/filebeat/prospectors.d/
      #Mount the test volume
      #- ./.kitchen/logs/:/usr/share/filebeat/logs/
      #Named volume fbdata. This is used to persist the registry file between restarts, so as to avoid data duplication
      - fbdata:/usr/share/filebeat/data/
    networks: ['stack']
    command: filebeat -e -strict.perms=false
    restart: on-failure
    depends_on: ['elasticsearch','kibana']

  #Configure Stack container. This short lived container configures the stack once Elasticsearch is available.
  #More specifically, using a script it sets passwords and sets a default index pattern.
  configure_stack:
    container_name: configure_stack
    image: docker.elastic.co/beats/metricbeat:7.6.2
    volumes: ['./init/configure-stack.sh:/usr/local/bin/configure-stack.sh:ro']
    command: ['/bin/bash', '-c', 'cat /usr/local/bin/configure-stack.sh | tr -d "\r" | bash']
    networks: ['stack']
    environment: ['ELASTIC_VERSION=7.6.2','ES_PASSWORD=${ES_PASSWORD}','DEFAULT_INDEX_PATTERN=${DEFAULT_INDEX_PATTERN}']
    depends_on: ['elasticsearch','kibana']
volumes:
  #Es data
  esdata:
    driver: local
  #Filebeat data i.e. registry file
  fbdata:
    driver: local
networks: {stack: {}}

Well, I would like to do something similar now that the stack is running in Docker containers.

I hope somebody can help me.

Hey @nb03briceno,

Assuming that you want to run everything with Docker Compose, I see you have a short-lived container to configure the stack. You could use a similar pattern to run `filebeat setup`.

Something like this:

  install_dashboards:
    container_name: install_dashboards
    image: docker.elastic.co/beats/filebeat:7.6.2
    volumes:
      - './init/install-dashboards.sh:/usr/local/bin/install-dashboards.sh:ro'
      - './filebeat_dashboards.yml:/usr/share/filebeat/filebeat.yml'
    command: ['/bin/bash', '-c', 'cat /usr/local/bin/install-dashboards.sh | tr -d "\r" | bash']
    networks: ['stack']
    environment: ['ELASTIC_VERSION=7.6.2','ES_PASSWORD=${ES_PASSWORD}','DEFAULT_INDEX_PATTERN=${DEFAULT_INDEX_PATTERN}']
    depends_on: ['elasticsearch','kibana']
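
One thing to keep in mind: inside the compose network, Kibana and Elasticsearch are reachable by their service names, not localhost. So the mounted filebeat_dashboards.yml would need something like this (assuming the elastic user and the ES_PASSWORD variable from your compose file):

setup.dashboards.enabled: true
setup.dashboards.directory: ./h_dashboard/

setup.kibana:
  host: "kibana:5601"

output.elasticsearch:
  hosts: ["elasticsearch:9200"]
  username: "elastic"
  password: "${ES_PASSWORD}"

The dashboards are loaded through the Kibana API, but as far as I remember the setup command still expects an Elasticsearch output to be configured.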

The install-dashboards.sh script should check for the availability of Kibana and Elasticsearch, and then run `filebeat setup --dashboards -c /usr/share/filebeat/filebeat.yml`.
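
Here is a minimal sketch of what install-dashboards.sh could look like, assuming the elastic user and the ES_PASSWORD variable passed in through the compose environment (the wait loops are just an illustration, mirroring what your configure-stack.sh presumably does):

#!/bin/bash
set -e

# Wait until Elasticsearch answers on the internal network.
until curl -s -f -u "elastic:${ES_PASSWORD}" "http://elasticsearch:9200/_cat/health" > /dev/null; do
  echo "Waiting for Elasticsearch..."
  sleep 5
done

# Wait until Kibana is up and serving its status API.
until curl -s -f "http://kibana:5601/api/status" > /dev/null; do
  echo "Waiting for Kibana..."
  sleep 5
done

# Load the dashboards through the Kibana API and exit.
filebeat setup --dashboards -c /usr/share/filebeat/filebeat.yml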
