Hello,
I have the following configuration file for filebeat:
# filebeat.yml for harvesting docker logfiles
#=========================== Filebeat inputs =============================
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - "/var/log/*.log"
  module:
    - system
  fields_under_root: true
#==================== Elasticsearch template setting ==========================
setup.ilm.enabled: true
setup.ilm.rollover_alias: "dms-logs"
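# Note: assumption on my part, not verified -- I think the index template may
# also need to be pointed at the rollover alias, along these lines:
#setup.template.name: "dms-logs"
#setup.template.pattern: "dms-logs-*"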
#================================ General =====================================
# The name of the Beat. If this option is empty, the hostname of the server is used.
# The name is included as the agent.name field in each published transaction.
# You can use the name to group all transactions sent by a single Beat.
#name: "{{ inventory_hostname }}"
# The tags of the shipper are included in their own field with each
# transaction published.
# tags: ["dev"]
# Optional fields that you can specify to add additional information to the output.
#fields:
# env: dev
#================================ Outputs =====================================
# Configure what output to use when sending the data collected by the beat.
#-------------------------- Elasticsearch output ------------------------------
output.elasticsearch:
  hosts: ["dev-logging.my_domain-webtools.eu:9200"]
#================================ Processors =====================================
processors:
  # - add_docker_metadata: ~
  - decode_json_fields:
      fields: ["message"]
      target: "app"
      overwrite_keys: true
      add_error_key: true
  # If the fields from the JSON logs above are not supposed to end up under the
  # key "alf", the "file" field has to be dropped, otherwise it blows up.
  # - drop_fields:
  #     when:
  #       contains:
  #         container.image.name: "adb-eakte"
  #     fields: ["file"]
  #     ignore_missing: true
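# My understanding of the decode_json_fields processor above, as an
# illustrative example (the sample field contents here are made up by me):
#   before:  message: '{"level":"INFO","msg":"search finished"}'
#   after:   app.level: "INFO"
#            app.msg:   "search finished"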
logging.level: info
logging.to_syslog: false
logging.to_files: true
logging.to_stderr: true
# Autodiscover, in case only specific Docker containers should be scanned
filebeat.autodiscover:
  providers:
    - type: docker
      templates:
        - condition:
            contains:
              docker.container.image: nginx
          config:
            - type: container
              paths:
                - '/var/lib/docker/containers/${data.docker.container.id}/*.log'
            - module: nginx
              access:
                enabled: true
                containers:
                  stream: "stdout"
              error:
                enabled: true
                containers:
                  stream: "stderr"
        - condition:
            contains:
              docker.container.image: postgres
          config:
            - type: container
              paths:
                - '/var/lib/docker/containers/${data.docker.container.id}/*.log'
            - module: postgresql
        - condition:
            equals:
              docker.container.labels:
                eu.my_domainkom.application: "dms"
          config:
            - type: container
              paths:
                - '/var/lib/docker/containers/${data.docker.container.id}/*.log'
setup.kibana:
  # Scheme and port can be left out and will be set to the default (http and 5601)
  # In case you specify an additional path, the scheme is required: http://localhost:5601/path
  host: "http://dev-logging.my_domain-webtools.eu"
setup.dashboards.enabled: true
I have two questions:
- Why does Filebeat create a default 'filebeat' index whenever I run setup?
- Are the module directives correctly added in the configuration? For instance, for nginx I do get logs scraped from the nginx container, but in Discover I see the whole message as a single string, like:
some_ip_address - - [16/Sep/2020:12:31:12 +0000] "https://dms-dev.breitbandausschreibungen.de" "POST /alfresco/api/-default-/public/search/versions/1/search HTTP/1.1" 200 122 "-" "-"
Shouldn't these chunks be parsed and separated into IP, HTTP method, URL and so on? Where should I be able to see that, and at what point do I create the separation? (See below for a sketch of the fields I expected.)
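For reference, this is roughly what I expected to see in Discover, with the message split into individual fields. The field names below are my assumption based on my reading of the ECS field reference, not something I actually get:

source.ip: some_ip_address
http.request.method: "POST"
url.original: "/alfresco/api/-default-/public/search/versions/1/search"
http.response.status_code: 200
http.response.body.bytes: 122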
Thanks!