Filebeat - unknown index name/nginx log formatting

Hello,

I have the following configuration file for filebeat:

    # filebeat.yml for harvesting docker logfiles
    #=========================== Filebeat inputs =============================
    filebeat.inputs:
    - type: log
      enabled: true
      paths:
       - "/var/log/*.log"
      module:
        - system
      fields_under_root: true

    #==================== Elasticsearch template setting ==========================
    setup.ilm.enabled: true
    setup.ilm.rollover_alias: "dms-logs"

    #================================ General =====================================

    # The name of the Beat. If this option is empty, the hostname of the server is used.
    # The name is included as the agent.name field in each published transaction.
    # You can use the name to group all transactions sent by a single Beat.
    #name: "{{ inventory_hostname }}"

    # The tags of the shipper are included in their own field with each
    # transaction published.
    # tags: ["dev"]

    # Optional fields that you can specify to add additional information to the output.
    #fields:
    #  env: dev


    #================================ Outputs =====================================

    # Configure what output to use when sending the data collected by the beat.

    #-------------------------- Elasticsearch output ------------------------------
    output.elasticsearch:
      hosts: ["dev-logging.my_domain-webtools.eu:9200"]

    #================================ Processors =====================================
    processors:
    #  - add_docker_metadata: ~
      - decode_json_fields:
          fields: ["message"]
          target: "app"
          overwrite_keys: true
          add_error_key: true
    # if fields from the JSON logs above are not supposed to end up under the key "alf", the "file" field has to be dropped, otherwise it blows up
    #  - drop_fields:
    #      when:
    #        contains:
    #          container.image.name: "adb-eakte"
    #      fields: ["file"]
    #      ignore_missing: true

    logging.level: info
    logging.to_syslog: false
    logging.to_files: true
    logging.to_stderr: true

    # Autodiscovery, in case only individual Docker containers should be scanned

    filebeat.autodiscover:
      providers:
        - type: docker
          templates:
            - condition:
                contains:
                  docker.container.image: nginx
              config:
                - type: container
                  paths:
                    - '/var/lib/docker/containers/${data.docker.container.id}/*.log'
                - module: nginx
                  access:
                    enabled: true
                    containers:
                      stream: "stdout"
                  error:
                    enabled: true
                    containers:
                      stream: "stderr"
            - condition:
                contains:
                  docker.container.image: postgres
              config:
                - type: container
                  paths:
                    - '/var/lib/docker/containers/${data.docker.container.id}/*.log'
                - module: postgresql
            - condition:
                equals:
                  docker.container.labels:
                    eu.my_domainkom.application: "dms"
              config:
                - type: container
                  paths:
                    - '/var/lib/docker/containers/${data.docker.container.id}/*.log'

    setup.kibana:
    # Scheme and port can be left out and will be set to the default (http and 5601)
    # In case you specify an additional path, the scheme is required: http://localhost:5601/path
      host: "http://dev-logging.my_domain-webtools.eu"

    setup.dashboards.enabled: true

I have two questions:

  1. Why does Filebeat create a default 'filebeat' index whenever I run setup?
  2. Are the module directives correctly added in the configuration? For instance, for nginx I get logs scraped from the nginx container, but in Discover I see a whole message like:
	some_ip_address - - [16/Sep/2020:12:31:12 +0000] "https://dms-dev.breitbandausschreibungen.de" "POST /alfresco/api/-default-/public/search/versions/1/search HTTP/1.1" 200 122 "-" "-"

Shouldn't these chunks be parsed and separated into IP, HTTP method, URL and so on?
Or where should I be able to see that? At what point does the separation happen?
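For illustration, the nginx module does this splitting server-side with a grok pattern in its ingest pipeline. A rough Python sketch of the same idea, using a simplified, hypothetical regex (not the module's actual pattern) against the custom log format shown above:

```python
import re

# One raw access-log line, as seen in Discover.
LINE = ('some_ip_address - - [16/Sep/2020:12:31:12 +0000] '
        '"https://dms-dev.breitbandausschreibungen.de" '
        '"POST /alfresco/api/-default-/public/search/versions/1/search HTTP/1.1" '
        '200 122 "-" "-"')

# Simplified pattern for this log format; the real pipeline uses grok
# and maps the captures onto ECS fields such as http.request.method.
PATTERN = re.compile(
    r'(?P<remote_ip>\S+) - (?P<user>\S+) '
    r'\[(?P<timestamp>[^\]]+)\] '
    r'"(?P<host>[^"]*)" '
    r'"(?P<method>\S+) (?P<url>\S+) (?P<http_version>[^"]+)" '
    r'(?P<status>\d+) (?P<body_bytes>\d+)'
)

fields = PATTERN.match(LINE).groupdict()
print(fields['remote_ip'], fields['method'], fields['url'], fields['status'])
```

If the stored documents contain only the raw `message` and none of these discrete fields, the module's ingest pipeline is probably not being applied to those events.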

Thanks!

  1. Why does Filebeat create a default 'filebeat' index whenever I run setup?

I don't understand your question. During the setup phase all indices and ingest pipelines must be installed.
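One thing worth checking (an assumption, since the setup output isn't shown): with `setup.dashboards.enabled: true`, setup loads the stock dashboards and their default `filebeat-*` index pattern, so a `filebeat` artifact can appear even though the output writes to a custom rollover alias. A sketch of aligning the setup phase with the custom name:

```yaml
# Sketch, not a verified fix: keep ILM writing to the custom alias.
setup.ilm.enabled: true
setup.ilm.rollover_alias: "dms-logs"
# With ILM enabled these are usually derived from the rollover alias,
# but they can be set explicitly:
setup.template.name: "dms-logs"
setup.template.pattern: "dms-logs-*"
# Point the loaded dashboards at the custom indices instead of filebeat-*:
setup.dashboards.index: "dms-logs-*"
```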

  2. Are the module directives correctly added in the configuration? For instance, for nginx I get logs scraped from the nginx container, but in Discover I see a whole message like:

Did you check documents stored in indices? Do they contain proper fields?

  1. The index that I'm using is called "dms-logs-*"; that is what I'm querying and currently using.
    But during setup it also adds the "filebeat" index. I would like to remove that.

  2. I'm not sure how I can query the documents in the indices by the proper fields. Perhaps you could give me a hand with that.
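Once the module's pipeline has split `message` into fields, the documents can be filtered on those fields. A hypothetical sketch of a Query DSL body you could send to `dms-logs-*/_search` (field names follow ECS, as the Filebeat nginx module uses them; adjust to what your documents actually contain):

```python
import json

# Build a bool query that filters nginx access events by method and
# status code, returning only the parsed fields.
query = {
    "query": {
        "bool": {
            "filter": [
                {"term": {"http.request.method": "POST"}},
                {"range": {"http.response.status_code": {"gte": 200, "lt": 300}}},
            ]
        }
    },
    "_source": ["source.ip", "http.request.method", "url.original",
                "http.response.status_code"],
}

print(json.dumps(query, indent=2))
```

The printed JSON can be pasted into Kibana Dev Tools as the body of `GET dms-logs-*/_search`. If the query returns nothing while documents exist, inspect one document in Discover to see which fields are actually present.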

Thanks!

Is there an unwritten rule on this forum? I've noticed this on several occasions: helping only people who kind of already know the answer?

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.