Elastic + Filebeat + nginx + Kibana docker compose

Let's try in Russian first.

Hi all, I'm running Elastic via docker compose. I have two questions right now:

1. Why is nginx producing a huge volume of logs? After creating the filebeat-* index pattern, I go to the logs in Discover and they keep arriving endlessly. I don't understand where the problem could be; maybe I made a mistake somewhere in the configuration.
2. I read somewhere that it can't be done, but still: is there any way to automate creation of the filebeat-* index pattern so that it is created by itself? I did manage to set up the policy and alias in filebeat.yml.

The code for the whole configuration is below:

filebeat.yml

filebeat.inputs:
- type: log
  id: logs
  paths:
    - /var/log/nginx/*.log
#  scan_frequency: 30s # set the scan frequency to 30 seconds
  processors:
    - drop_fields:  # drop these fields from the log events
        fields:
          - message
          - host.name
          - ecs
          - log
          - _id
          - _index
          - _score
          - _type
          - agent.hostname
          - agent.type
          - agent.version
          - ecs.version
          - input.type
          - suricata.eve.timestamp
          - agent.id
          - index
          - type
          - score
          - id
          - source
          - _source

output.elasticsearch:
  hosts: ["HOST:9200"]  # Указываем адрес и порт Elasticsearch
  username: "elastic"  # Указываем имя пользователя для доступа к Elasticsearch
  password: "changeme"  # Указываем пароль пользователя для доступа к Elasticsearch
  index: "filebeat-alias-%{+YYYY.MM.dd}"  # Используем алиас для индекса с динамическим форматированием даты

setup.kibana:
  host: "http://HOST:5601"  # Указываем адрес и порт Kibana
  username: "elastic"  # Указываем имя пользователя для доступа к Kibana
  password: "changeme"  # Указываем пароль пользователя для доступа к Kibana

setup.template.name: "filebeat"  # index template name
setup.template.pattern: "filebeat-*"  # index pattern the template is applied to

setup.ilm.enabled: auto  # enable index lifecycle management (ILM)
setup.ilm.rollover_alias: "filebeat-alias"  # write alias that rollover points at the new index
setup.ilm.pattern: "{now/d}-000001"  # naming pattern for new backing indices, using date math
setup.ilm.policy_name: "my_ilm_policy"  # name of the ILM policy
setup.ilm.policy_file: "/etc/filebeat/ilm_policy.json"  # path to the policy definition file
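
On question 2: one way to automate creation of the filebeat-* index pattern is to let Filebeat load its own Kibana assets at startup instead of creating the pattern by hand. A minimal sketch, appended to the filebeat.yml above, assuming Kibana is reachable at the setup.kibana host (the setup.dashboards.* options exist in Filebeat 7.x):

setup.dashboards.enabled: true  # load the dashboards and the index pattern into Kibana on startup
setup.dashboards.index: "filebeat-alias-*"  # match the customized index name above

A one-shot run of "filebeat setup --dashboards" achieves the same result without repeating the work on every restart.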

.env 

ELK_VERSION=7.3.1


elasticsearch.yml

---
## Default Elasticsearch configuration from Elasticsearch base image.
## https://github.com/elastic/elasticsearch/blob/master/distribution/docker/src/docker/config/elasticsearch.yml
#
cluster.name: "docker-cluster"
network.host: HOST

## Use single node discovery in order to disable production mode and avoid bootstrap checks
## see https://www.elastic.co/guide/en/elasticsearch/reference/current/bootstrap-checks.html
#
discovery.type: single-node

## X-Pack settings
## see https://www.elastic.co/guide/en/elasticsearch/reference/current/setup-xpack.html
#
xpack.license.self_generated.type: trial
xpack.security.enabled: true
xpack.monitoring.collection.enabled: true


site.conf (nginx)

server {
    listen       80;
    server_name  _;

    gzip on;
    gzip_disable "msie6";

    gzip_comp_level 6;
    gzip_min_length 1100;
    gzip_buffers 16 8k;
    gzip_proxied any;
    gzip_types
    text/plain
    text/css
    text/xml
    text/javascript
    application/javascript
    application/x-javascript
    application/json
    application/xml
    application/rss+xml
    image/svg+xml;

    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header Host $http_host;
    proxy_set_header X-NginX-Proxy true;
    proxy_redirect off;

    location / {
      proxy_pass http://HOST:3000;
    }
}


kibana.yml

---
## Default Kibana configuration from Kibana base image.
## https://github.com/elastic/kibana/blob/master/src/dev/build/tasks/os_packages/docker_generator/templates/kibana_yml.template.js
#
server.name: kibana
server.host: "0"
elasticsearch.hosts: [ "http://HOST:9200" ]
xpack.monitoring.ui.container.elasticsearch.enabled: true

## X-Pack security credentials
#
elasticsearch.username: elastic
elasticsearch.password: changeme


docker-compose.yml

version: '3.2'

services:
  elasticsearch:
    build:
      context: elasticsearch/
      args:
        ELK_VERSION: $ELK_VERSION
    volumes:
      - type: bind
        source: ./elasticsearch/config/elasticsearch.yml
        target: /usr/share/elasticsearch/config/elasticsearch.yml
        read_only: true
      - type: volume
        source: elasticsearch
        target: /usr/share/elasticsearch/data  # default data path of the official image
 #   ports:
 #     - "9200:9200"
 #     - "9300:9300"
    environment:
      ES_JAVA_OPTS: "-Xmx256m -Xms256m"
      ELASTIC_PASSWORD: changeme
    network_mode: "host"
#    command: sh script.sh   # place for a command that runs your script
 #   networks:
 #     - host

#  logstash:
#    build:
#      context: logstash/
#      args:
#        ELK_VERSION: $ELK_VERSION
#    volumes:
#      - type: bind
#        source: ./logstash/config/logstash.yml
#        target: /usr/share/logstash/config/logstash.yml
#        read_only: true
#      - type: bind
#        source: ./logstash/pipeline
#        target: /usr/share/logstash/pipeline
#        read_only: true
# #   ports:
# #     - "5000:5000"
# #     - "9600:9600"
# #   expose: 
# #     - "5044"
#    environment:
#      LS_JAVA_OPTS: "-Xmx256m -Xms256m"
#    network_mode: "host"
# #   networks:
# #     - host
#    depends_on:
#      - elasticsearch

  kibana:
    build:
      context: kibana/
      args:
        ELK_VERSION: $ELK_VERSION
    volumes:
      - type: bind
        source: ./kibana/config/kibana.yml
        target: /usr/share/kibana/config/kibana.yml
        read_only: true
 #   ports:
 #     - "5601:5601"
    network_mode: "host"
 #  networks:
 #     - host
    depends_on:
      - elasticsearch

  app:
    build: ./app
    volumes:
      - ./app/:/usr/src/app
      - /usr/src/app/node_modules/ # keep the image's node_modules instead of the host's
    command: npm start
 #   ports:
 #     - "3000:3000"
    network_mode: "host"
#  networks:
 #     - host

  nginx:
    build: ./nginx
    volumes:
      - ./nginx/config:/etc/nginx/conf.d
      - ./nginx/log:/var/log/nginx
 #   ports:
 #     - "80:80"
 #     - "443:443"
 #   links:
 #     - app:app
 #   depends_on: 
 #     - app
    network_mode: "host"
 #   networks:
 #     - host

  filebeat:
    build: ./filebeat
    entrypoint: "filebeat -e -strict.perms=false"
    volumes:
      - ./filebeat/config/filebeat.yml:/usr/share/filebeat/filebeat.yml
      - ./nginx/log:/var/log/nginx
      - ./filebeat/config/ilm_policy.json:/etc/filebeat/ilm_policy.json
    network_mode: "host"
 #   networks:
 #     - host
    depends_on:
 #     - app
      - nginx
#      - logstash
      - elasticsearch
      - kibana
#    links: 
#      - logstash

#networks:
#  elk:
#    driver: bridge

volumes:
  elasticsearch:
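
A likely contributor to question 1: the filebeat service above has no volume for Filebeat's registry (kept under /usr/share/filebeat/data in the official image), so every time the container is recreated the registry is lost and all files matched by /var/log/nginx/*.log are shipped again from the beginning, which shows up in Discover as an endless stream of duplicates. A sketch of the addition, assuming the default data path:

  filebeat:
    volumes:
      - filebeat-registry:/usr/share/filebeat/data  # persist the registry across container restarts

volumes:
  elasticsearch:
  filebeat-registry: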


ilm_policy.json

{
  "policy": {               
    "phases": {                
      "hot": {
        "min_age": "0ms",
        "actions": {
          "rollover": {
            "max_size": "50gb",
            "max_age": "1d"
          }
        }
      },
      "delete": {
        "min_age": "4d",
        "actions": {
          "delete": {}
        }
      }
    }
  }
}
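
For reference, with the setup.ilm.* settings in filebeat.yml, Filebeat bootstraps the first backing index on startup, named filebeat-alias-<current date>-000001 and carrying the write alias. A sketch of the alias block it creates (done by Filebeat itself, not something to configure by hand):

{
  "aliases": {
    "filebeat-alias": {
      "is_write_index": true
    }
  }
}

The rollover action in the hot phase then creates -000002 and moves the write alias to it, so output.elasticsearch keeps writing to filebeat-alias while older indices age into the delete phase after 4 days.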

My folder:

app
docker-compose.yml
elasticsearch
filebeat
kibana
LICENSE
logstash
nginx

@v.popov
Apologies, I took a quick look at that and it's way too complex to just debug.

My suggestion would be to simplify it greatly: just run Logstash on its own, have it read from a file, test your config, and isolate everything else.

There's way too much going on in that config, with nginx and everything else, to try to debug.

But this isn't happening in one file; it's spread across different files. Each block of code is labeled with the file it belongs to. If I understood correctly, question 2 needs to be fixed in filebeat, but I don't know what is responsible for question 1, which is why I'm asking for help.

Hi @v.popov

Apologies, I don't read Cyrillic (I think this is what I got from Google Translate).

So yes, you can create a custom filebeat index / template, etc... BUT here is what I would do...

1st, I would get filebeat working the way you want before you try to dockerize it. Just download the .tar.gz and get steps 2 and 3 working first, then dockerize it.

2nd, I would just get filebeat working without changing the index name, ILM policy, etc... use all the defaults and see if you can load the data (a minimal sketch follows step 3 below).

3rd, then I would try to create your custom index, alias, ILM alias, etc.
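
For step 2, a minimal sketch of filebeat.yml with everything left at its defaults, only the input path and credentials filled in (HOST is a placeholder, as in the original post):

filebeat.inputs:
- type: log
  paths:
    - /var/log/nginx/*.log

output.elasticsearch:
  hosts: ["HOST:9200"]
  username: "elastic"
  password: "changeme"

setup.kibana:
  host: "http://HOST:5601"

If data loads into the default filebeat-7.3.1-* indices with this, layer the custom index, alias, and ILM settings back on one at a time.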

Note that you are pretty far off on that (and it actually adds little value).

I will need to look up all the proper config for that... on such an old version.

Perhaps look at this thread.

I think this will show what you need to do.

It looks like you are using 7.3, which is a truly ancient version.

Also, let's keep this in one thread and not use the other one...

As for putting nginx in the middle, I will not be able to help with that...

So let's break the problem down into pieces...

Thank you very much, I'll definitely take a look! Regarding the version, it will need to be changed; I took the old one purely for testing.
