Dockerized Filebeat not sending logs to Logstash

Hello!

I'm attempting to set up Filebeat to send logs to Logstash. Everything is running in Docker, so I'm using the autodiscover feature. I'm running v6.3.2 across the board.

I can see the project-trackr-api- index from Logstash in Kibana, but a Filebeat index is never created. I also have an Elastic APM server running in this same Docker configuration that sends data directly to Elasticsearch without any problem.

Anyone have any idea what I'm doing wrong?

Thanks!

filebeat.yml

filebeat.registry_file: /usr/share/filebeat/data/registry

filebeat.autodiscover:
  providers:
    - type: docker
      templates:
        - condition:
            contains:
              docker.container.name: project_trackr_*
          config:
            - type: docker
              containers.ids:
                - "${data.docker.container.id}"
              exclude_lines: ["^\\s+[\\-`('.|_]"]  # drop asciiart lines

filebeat.inputs:
  - type: docker
    containers.ids:
      - "*"


setup.kibana.host: "kibana:5601"

output.logstash:
  hosts: ["logstash:5044"]

logstash.conf

input {
    udp {
        host => "0.0.0.0"
        port => 5044
        codec => json_lines
        type => "projecttrackr-api"
    }
    beats {
        host => "0.0.0.0"
        port => 5044
    }
}

output {
    if [type] == "projecttrackr-api" {
        elasticsearch {
            hosts => "elasticsearch:9200"
            manage_template => false
            index => "projecttrackr-api-%{+YYYY.MM.dd}"
        }
    }
    else {
        elasticsearch {
            hosts => "elasticsearch:9200"
            manage_template => false
            index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
            document_type => "%{[@metadata][type]}"
        }
    }
}

docker-compose.yml

  filebeat:
    command: filebeat -e --strict.perms=false
    container_name: "${COMPOSE_PROJECT_NAME}filebeat"
    depends_on:
      - elasticsearch
      - api
      - master
    image: "docker.elastic.co/beats/filebeat:${ELASTIC_VERSION}"
    restart: on-failure
    user: root
    volumes:
      - ./docker/filebeat/config/filebeat.yml:/usr/share/filebeat/filebeat.yml
      - /var/run/docker.sock:/var/run/docker.sock
      - fbdata:/usr/share/filebeat/data/

  logstash:
    container_name: "${COMPOSE_PROJECT_NAME}logstash"
    build: docker/logstash
    volumes:
      - ./docker/logstash/config/logstash.yml:/usr/share/logstash/config/logstash.yml
      - ./docker/logstash/pipeline:/usr/share/logstash/pipeline
    environment:
      LS_JAVA_OPTS: "-Xmx256m -Xms256m"
    depends_on:
      - elasticsearch

When I start Filebeat, I can see that there are no harvesters open:

"filebeat": {"harvester":{"open_files":0,"running":0}}

Here's some other output from startup:

2018-08-09T18:19:22.479Z	INFO	[beat]	instance/beat.go:761	Process info	{"system_info": {"process": {"capabilities": {"inheritable":["chown","dac_override","fowner","fsetid","kill","setgid","setuid","setpcap","net_bind_service","net_raw","sys_chroot","mknod","audit_write","setfcap"],"permitted":["chown","dac_override","fowner","fsetid","kill","setgid","setuid","setpcap","net_bind_service","net_raw","sys_chroot","mknod","audit_write","setfcap"],"effective":["chown","dac_override","fowner","fsetid","kill","setgid","setuid","setpcap","net_bind_service","net_raw","sys_chroot","mknod","audit_write","setfcap"],"bounding":["chown","dac_override","fowner","fsetid","kill","setgid","setuid","setpcap","net_bind_service","net_raw","sys_chroot","mknod","audit_write","setfcap"],"ambient":null}, "cwd": "/usr/share/filebeat", "exe": "/usr/share/filebeat/filebeat", "name": "filebeat", "pid": 1, "ppid": 0, "seccomp": {"mode":"filter"}, "start_time": "2018-08-09T18:19:21.560Z"}}}
2018-08-09T18:19:22.480Z	INFO	instance/beat.go:225	Setup Beat: filebeat; Version: 6.3.2
2018-08-09T18:19:22.488Z	INFO	pipeline/module.go:81	Beat name: b2143bb44a14
2018-08-09T18:19:22.489Z	INFO	[monitoring]	log/log.go:97	Starting metrics logging every 30s
2018-08-09T18:19:22.489Z	INFO	instance/beat.go:315	filebeat start running.
2018-08-09T18:19:22.489Z	INFO	registrar/registrar.go:117	Loading registrar data from /usr/share/filebeat/data/registry
2018-08-09T18:19:22.489Z	INFO	registrar/registrar.go:124	States Loaded from registrar: 13
2018-08-09T18:19:22.489Z	WARN	beater/filebeat.go:354	Filebeat is unable to load the Ingest Node pipelines for the configured modules because the Elasticsearch output is not configured/enabled. If you have already loaded the Ingest Node pipelines or are using Logstash pipelines, you can ignore this warning.
2018-08-09T18:19:22.489Z	INFO	crawler/crawler.go:48	Loading Inputs: 1
2018-08-09T18:19:22.490Z	WARN	[cfgwarn]	docker/input.go:29	EXPERIMENTAL: Docker input is enabled.
2018-08-09T18:19:22.490Z	INFO	log/input.go:118	Configured paths: [/var/lib/docker/containers/*/*.log]
2018-08-09T18:19:22.490Z	INFO	input/input.go:88	Starting input of type: docker; ID: 2482010575008624874 
2018-08-09T18:19:22.490Z	INFO	crawler/crawler.go:82	Loading and starting Inputs completed. Enabled inputs: 1
2018-08-09T18:19:22.490Z	WARN	[cfgwarn]	docker/docker.go:34	BETA: The docker autodiscover is beta
2018-08-09T18:19:22.496Z	INFO	autodiscover/autodiscover.go:76	Starting autodiscover manager

I think the problem is that you don't mount the directory containing the Docker logs into your container. You have to mount /var/lib/docker/containers, since that's where Filebeat's docker input reads the container log files from (you can see it in your startup output: `Configured paths: [/var/lib/docker/containers/*/*.log]`).
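
Something like this on the filebeat service, keeping the rest of your compose file as-is (the `:ro` flag is my own addition and is optional; Filebeat only needs read access to the logs):

```yaml
  filebeat:
    volumes:
      - ./docker/filebeat/config/filebeat.yml:/usr/share/filebeat/filebeat.yml
      - /var/run/docker.sock:/var/run/docker.sock
      # Mount the host directory where Docker writes container logs
      # (*-json.log files), so the docker input can harvest them:
      - /var/lib/docker/containers:/var/lib/docker/containers:ro
      - fbdata:/usr/share/filebeat/data/
```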

Yes! Thank you, that works.

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.