Good evening,
I'm doing an internship in computer science (so I'm just getting started with Docker, K8s, and the Elastic stack), and I've been asked to set up a log collection environment using the Elastic stack on ECK.
For now, I'm practicing locally with just docker-compose, Filebeat, Elasticsearch, and Kibana (I'll decide later whether Logstash is relevant). My work environment is:
- Windows 10 Pro with Docker Desktop
- WSL / Ubuntu 18.04 / Terminator
- Elastic Stack version 7.7.0
 
I'm in the following situation:
- two nginx containers (I generate traffic by refreshing the welcome page with F5 or Ctrl+F5)
- two Filebeat containers
- two Elasticsearch containers
- one Kibana container
 
Here is the docker-compose.yml file:
version: '3.3'
services:
  nginx1:
    container_name: nginx_app1
    image: nginx
    volumes:
      - /c/nginx/serv_1/logs:/var/log/nginx
    ports:
      - 80:80
    networks:
      - elk_net
  nginx2:
    container_name: nginx_app2
    image: nginx
    volumes:
      - /c/nginx/serv_2/logs:/var/log/nginx
    ports:
      - 81:80
    networks:
      - elk_net
  filebeat1:
    build:
      context: filebeat1/
    container_name: filebeat1
    hostname: filebeat1
    volumes:
      - /c/nginx/serv_1/logs:/usr/share/filebeat/nginxlogs:ro
      - /var/run/docker.sock:/var/run/docker.sock
    links:
      - es01
    depends_on:
      - es01
    networks:
      - elk_net
  filebeat2:
    build:
      context: filebeat2/
    container_name: filebeat2
    hostname: filebeat2
    volumes:
      - /c/nginx/serv_2/logs:/usr/share/filebeat/nginxlogs:ro
      - /var/run/docker.sock:/var/run/docker.sock
    links:
      - es01
    depends_on:
      - es01
    networks:
      - elk_net
  es01:
    build:
      context: elasticsearch/es1/
    hostname: elasticsearch
    container_name: elasticsearch
    environment:
      - node.name=es01
      - cluster.name=es-docker-cluster
      - discovery.seed_hosts=es02
      - cluster.initial_master_nodes=es01,es02
      - bootstrap.memory_lock=true
      - xpack.security.enabled=false
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    volumes:
      - /c/es/es-data1:/usr/share/elasticsearch/data
    ulimits:
      memlock:
        soft: -1
        hard: -1
    ports:
      - 9200:9200
      - 9300:9300
    networks:
      - elk_net
  es02:
    build:
      context: elasticsearch/es2/
    hostname: elasticsearch2
    container_name: elasticsearch2
    environment:
      - node.name=es02
      - cluster.name=es-docker-cluster
      - discovery.seed_hosts=es01
      - cluster.initial_master_nodes=es01,es02
      - bootstrap.memory_lock=true
      - xpack.security.enabled=false
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    volumes:
      - /c/es/es-data2:/usr/share/elasticsearch/data
    ulimits:
      memlock:
        soft: -1
        hard: -1
    networks:
      - elk_net
  kibana:
    image: docker.elastic.co/kibana/kibana:7.7.0
    container_name: kibana
    environment:
      - "LOGGING_QUIET=true"
    links:
      - es01
    depends_on:
      - es01
    ports:
      - 5601:5601
    networks:
      - elk_net
networks:
  elk_net:
    driver: bridge
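As an aside, I copied the `links:` option from an older example; my understanding is that it is a legacy feature and that the user-defined `elk_net` network already provides DNS resolution between services, so each service could probably be trimmed to something like this (untested sketch, `filebeat1` shown):

```yaml
# Sketch: drop legacy "links:" and rely on the elk_net network for name
# resolution; "depends_on" still controls startup order (my assumption).
filebeat1:
  build:
    context: filebeat1/
  container_name: filebeat1
  hostname: filebeat1
  volumes:
    - /c/nginx/serv_1/logs:/usr/share/filebeat/nginxlogs:ro
  depends_on:
    - es01
  networks:
    - elk_net
```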
And here are my filebeat.yml and elasticsearch.yml files:
1st filebeat.yml file:
filebeat.config:
  modules:
    path: /usr/share/modules.d/*.yml
    reload.enabled: false
filebeat.modules:
  - module: nginx
    access:
      var.paths: ["/usr/share/filebeat/nginxlogs/access.log"]
    error:
      var.paths: ["/usr/share/filebeat/nginxlogs/error.log"]
output.elasticsearch:
  hosts: ["elasticsearch:9200"]
  index: "filebeat1-%{[beat.version]}-%{+yyyy.MM.dd}"
setup.template.name: "filebeat1"
setup.template.pattern: "filebeat1-*"
setup.kibana:
  host: "http://kibana:5601"  # the kibana container, not localhost, since this runs inside the Filebeat container
2nd filebeat.yml file:
filebeat.config:
  modules:
    path: /usr/share/modules.d/*.yml
    reload.enabled: false
filebeat.modules:
  - module: nginx
    access:
      var.paths: ["/usr/share/filebeat/nginxlogs/access.log"]
    error:
      var.paths: ["/usr/share/filebeat/nginxlogs/error.log"]
output.elasticsearch:
  hosts: ["elasticsearch:9200"]
  index: "filebeat2-%{[beat.version]}-%{+yyyy.MM.dd}"
setup.template.name: "filebeat2"
setup.template.pattern: "filebeat2-*"
setup.kibana:
  host: "http://kibana:5601"  # the kibana container, not localhost, since this runs inside the Filebeat container
1st elasticsearch.yml file:
path.data: /var/lib/elasticsearch
path.logs: /var/log/elasticsearch
bootstrap.memory_lock: true
network.host: 0.0.0.0
http.port: 9200
xpack.security.enabled: false
discovery.zen.minimum_master_nodes: 2
2nd elasticsearch.yml file:
path.data: /var/lib/elasticsearch
path.logs: /var/log/elasticsearch
bootstrap.memory_lock: true
network.host: 0.0.0.0
xpack.security.enabled: false
discovery.zen.minimum_master_nodes: 2
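One thing I'm unsure about: `discovery.zen.minimum_master_nodes` appears in both elasticsearch.yml files, but from what I've read it is deprecated and ignored since 7.0, replaced by the settings I already pass through the compose environment. If I moved them into the files instead, I believe it would look like this (assumption on my part):

```yaml
# 7.x discovery settings (zen minimum_master_nodes is ignored since 7.0):
cluster.initial_master_nodes: ["es01", "es02"]
discovery.seed_hosts: ["es01", "es02"]
```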
kibana.yml file:
server.port: 5601
server.host: localhost
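In fact, since the kibana service uses the stock image without mounting this file, I suspect it isn't applied at all. If I do mount it, I believe `server.host` would need to listen beyond localhost for the published port 5601 to be reachable from the host, for example (untested):

```yaml
# Sketch of a kibana.yml I would mount into the container (assumptions:
# 0.0.0.0 so the published port works; "elasticsearch" resolves to es01):
server.port: 5601
server.host: "0.0.0.0"
elasticsearch.hosts: ["http://elasticsearch:9200"]
```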
At first glance, everything seems to work, BUT:
- I can't create a separate index per nginx server: all logs (access and error) from both nginx servers end up in the same Elasticsearch index. I tried setting different index names in the two filebeat.yml files under output.elasticsearch, but it has no effect. Worse, the only index I get in ES and Kibana is simply named filebeat-7.7.0, without the date in its name.
- From one day to the next, new logs accumulate in that same index (I would have liked a new index per day and per nginx server).
- Overall, I'm not at all sure the architecture of the stack I propose is correct. I opted for the Filebeat module configuration because I couldn't get the whole thing working with the input configuration or with autodiscover. I'm also not at all sure this is the right way to set up a 2-node ES cluster.
 
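From what I've read in the Filebeat docs, I suspect index lifecycle management is the culprit for the first two points: in 7.x, when ILM is enabled (the default with the Elasticsearch output), the `output.elasticsearch.index` setting is ignored and events go to the ILM write alias `filebeat-7.7.0`, which matches exactly what I'm seeing. Also, the `beat.version` field I use in the index pattern was renamed `agent.version` in 7.x. A sketch of the change I'm considering for each filebeat.yml (untested, based on my reading of the docs):

```yaml
# Disable ILM so the custom index name is honored (my assumption):
setup.ilm.enabled: false
output.elasticsearch:
  hosts: ["elasticsearch:9200"]
  # agent.version replaced beat.version in 7.x
  index: "filebeat1-%{[agent.version]}-%{+yyyy.MM.dd}"
setup.template.name: "filebeat1"
setup.template.pattern: "filebeat1-*"
```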
Later, I'll have to:
- add other Beats components (Metricbeat...)
- maybe add Logstash if this component turns out to be useful
- finally, move everything to ECK
 
Do you have any suggestions to improve all this, please? I'll probably have more questions later, as there are a lot of parameters I don't understand yet.
Thank you in advance for your help.