Greetings,
I have been trying to get Filebeat running on a Swarm cluster, with what looked (to me!) like a fairly basic configuration.
Using Docker's gelf logging driver to send container logs straight into Logstash works well, roughly as sketched below.
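A minimal sketch of that gelf logging block, from memory (the address placeholder matches the 12201/udp port published on logstash in the compose file below):

logging:
  driver: gelf
  options:
    gelf-address: "udp://<swarm ip>:12201"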
Filebeat, however, does not work so well: I am stuck on the connection from Filebeat to its output, whichever output I use. I keep getting messages such as:
Failed to connect to backoff(whatever(http://localhost:XXX)): Get http://localhost:XXXX: dial tcp 127.0.0.1:XXXX: connect: connection refused
I have tried switching the output between Logstash and Elasticsearch, both published as services on the same dedicated network, but both ran into the same problem.
I have also tried switching the host from 'localhost' to '127.0.0.1' and to '<swarm ip>', but I keep getting connection refused; the variants are sketched just below.
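Concretely, the output variants I tried in filebeat.yml looked like this (ports as published in the compose file below; only one hosts line active at a time):

output.logstash:
  hosts: ["localhost:5044"]      # attempt 1
  #hosts: ["127.0.0.1:5044"]     # attempt 2
  #hosts: ["<swarm ip>:5044"]    # attempt 3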
I have also tried changing the inputs (autodiscover, prospectors, plain log inputs; see the commented-out attempts in the filebeat.yml below), but the problem does not seem related to the input. It looks like it is related to the output... and I cannot figure out how to proceed from here!
My compose file is:
version: "3.4"
services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:6.4.2
    ports:
      - "9200:9200"
    networks:
      - elk-net
    volumes:
      - "/var/opt/data/flat/gerdce/shared/logs/elastic:/usr/share/elasticsearch/data"
    logging:
      driver: json-file
  logstash:
    image: docker.elastic.co/logstash/logstash:6.4.2
    networks:
      - elk-net
    ports:
      - "12201:12201/udp"
      - "5044:5044"
      - "9600:9600"
    volumes:
      - "/var/opt/data/flat/gerdce/shared/logs/logstash/conf/logstash.yml:/usr/share/logstash/logstash.yml"
      - "/var/opt/data/flat/gerdce/shared/logs/logstash/conf/logstash.conf:/usr/share/logstash/pipeline/logstash.conf"
    logging:
      driver: json-file
    depends_on:
      - elasticsearch
  kibana:
    image: docker.elastic.co/kibana/kibana:6.4.2
    ports:
      - "5601:5601"
    networks:
      - elk-net
    volumes:
      - "/var/opt/data/flat/gerdce/shared/logs/kibana/conf/kibana.yml:/usr/share/kibana/config/kibana.yml"
    logging:
      driver: json-file
    depends_on:
      - elasticsearch
  filebeat:
    image: docker.elastic.co/beats/filebeat:6.4.2
    user: root
    networks:
      - elk-net
    volumes:
      - /var/lib/docker/containers:/var/lib/docker/containers:rw   # logs from containers
      - /var/run/docker.sock:/var/run/docker.sock:ro               # metadata via the Docker socket
      - filebeat_registry:/usr/share/filebeat/data                 # registry; keeps track of shipped logs
      - /var/lib/docker/volumes:/var/lib/docker/volumes:ro         # logs written to volumes by containers
    configs:
      - source: fb_config
        target: /usr/share/filebeat/filebeat.yml
    deploy:
      mode: global
    depends_on:
      - logstash

configs:
  fb_config:
    file: /var/opt/data/flat/gerdce/shared/logs/filebeat/conf/filebeat.yml

networks:
  elk-net:
    external: true

volumes:
  filebeat_registry:
    driver: 'local'
logstash.yml:
config.debug: true
http.host: 0.0.0.0
log.level: debug
node.name: dvgerdrh1
path.config: /usr/share/logstash/pipeline
logstash.conf:
input {
  beats {
    port => 5044
  }
}
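As shown, the pipeline only declares an input. To check whether events reach Logstash at all, I suppose a stdout output could be added; a minimal sketch, not part of my current config:

output {
  stdout {
    codec => rubydebug   # dump incoming events to the Logstash log
  }
}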
filebeat.yml (with my many attempts!):
#filebeat.autodiscover:
#  providers:
#    - type: docker
#      templates:
#        - condition:
#            contains:
#              docker.container.image: nginx
#          config:
#            - type: docker
#              containers.ids:
#                - ${data.docker.container.id}

#filebeat.inputs:
#- type: log
#  paths:
#    - '/var/lib/docker/containers/*/*.log'

filebeat.prospectors:
- type: log
  paths:
    - '/var/lib/docker/containers/*/*.log'
  json.message_key: log
  json.keys_under_root: true

processors:
- add_docker_metadata: ~
#processors:
#  - add_docker_metadata:
#      host: "unix:///var/run/docker.sock"

logging.metrics.enabled: false

output.logstash:
  hosts: ["localhost:5044"]
  bulk_max_size: 4096
#output.elasticsearch:
#  hosts: ["localhost:9200"]
Additional information:
- Docker version 18.06.0-ce, build 0ffa825
- docker-compose version 1.14.0, build c7bdf9e
- swarm mode activated
- elk-net is created with an "overlay" driver (roughly the command shown below)
- I know I should use configs instead of volumes
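For reference, the network was created beforehand with something like this (from memory, so the exact flags are approximate):

docker network create --driver overlay elk-net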
Any idea what I am doing wrong?