Filebeat : "Failed to connect to backoff"


(Roger L.) #1

Greetings,

I have been trying to get Filebeat running on a Swarm cluster, with what looked like a fairly basic configuration (to me, at least!).

Sending container logs into Logstash via the gelf logging driver works well, but it does not work so well with Filebeat: I am stuck on the Filebeat-to-output connection.
I keep getting messages such as:

Failed to connect to backoff(whatever(http://localhost:XXX)): Get http://localhost:XXXX: dial tcp 127.0.0.1:XXXX: connect: connection refused

I have tried switching between Logstash and Elasticsearch, both published as services on the same dedicated network, but both hit the same problem.

I have tried switching the host from 'localhost' to '127.0.0.1' and '<swarm ip>', but I keep getting connection refused.

I have also tried changing the inputs (autodiscover, prospectors, basic inputs), but this does not seem to be input-related. It looks like an output problem... I just can't figure out how to proceed from here!

My compose is :

version: "3.4"

services:

  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:6.4.2
    ports:
     - "9200:9200"
    networks:
     - elk-net
    volumes:
     - "/var/opt/data/flat/gerdce/shared/logs/elastic:/usr/share/elasticsearch/data"
    logging:
      driver: json-file

  logstash:
    image: docker.elastic.co/logstash/logstash:6.4.2
    networks:
     - elk-net
    ports:
     - "12201:12201/udp"
     - "5044:5044"
     - "9600:9600"
    volumes:
     - "/var/opt/data/flat/gerdce/shared/logs/logstash/conf/logstash.yml:/usr/share/logstash/logstash.yml"
     - "/var/opt/data/flat/gerdce/shared/logs/logstash/conf/logstash.conf:/usr/share/logstash/pipeline/logstash.conf"
    logging:
      driver: json-file
    depends_on:
    - elasticsearch

  kibana:
    image: docker.elastic.co/kibana/kibana:6.4.2
    ports:
     - "5601:5601"
    networks:
     - elk-net
    volumes:
     - "/var/opt/data/flat/gerdce/shared/logs/kibana/conf/kibana.yml:/usr/share/kibana/config/kibana.yml"
    logging:
      driver: json-file
    depends_on:
    - elasticsearch


  filebeat:
    image: docker.elastic.co/beats/filebeat:6.4.2
    user: root
    networks:
     - elk-net
    volumes:
    - /var/lib/docker/containers:/var/lib/docker/containers:rw # logs from container
    - /var/run/docker.sock:/var/run/docker.sock:ro # get that metadata
    - filebeat_registry:/usr/share/filebeat/data # log registry; log cache
    - /var/lib/docker/volumes:/var/lib/docker/volumes:ro # log volumes from container
    configs:
      - source: fb_config
        target: /usr/share/filebeat/filebeat.yml
    deploy:
      mode: global
    depends_on:
    - logstash


configs:
  fb_config:
    file: /var/opt/data/flat/gerdce/shared/logs/filebeat/conf/filebeat.yml

networks:
  elk-net:
    external: true

volumes:
  filebeat_registry:
    driver: 'local'

logstash.yml:

config.debug: true
http.host: 0.0.0.0
log.level: debug
node.name: dvgerdrh1
path.config: /usr/share/logstash/pipeline       

logstash.conf:

input {
  beats {
    port => 5044
  }
}
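
For reference, a pipeline usually pairs this beats input with an output stanza; a minimal sketch (my assumption, not part of the original post), addressing the Elasticsearch service by its compose service name:

```conf
# Hypothetical output stanza: forward events received from Beats to the
# Elasticsearch service, resolved by service name on the elk-net network.
output {
  elasticsearch {
    hosts => ["elasticsearch:9200"]
  }
}
```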

filebeat.yml (with my many attempts!):

#filebeat.autodiscover:
#  providers:
#    - type: docker
#      templates:
#        - condition:
#            contains:
#              docker.container.image: nginx
#          config:
#          - type: docker
#            containers.ids:
#            - ${data.docker.container.id}

#filebeat.inputs:
#- type: log
#  paths:
#  - '/var/lib/docker/containers/*/*.log'

filebeat.prospectors:
- type: log
  paths:
   - '/var/lib/docker/containers/*/*.log'
  json.message_key: log
  json.keys_under_root: true
  processors:
  - add_docker_metadata: ~

#processors:
# - add_docker_metadata:
#     host: "unix:///var/run/docker.sock"

logging.metrics.enabled: false


output.logstash:
  hosts: ["localhost:5044"]
  bulk_max_size: 4096

#output.elasticsearch:
#  hosts: ["localhost:9200"]
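
As an aside, on Filebeat 6.x the dedicated docker input is an alternative to tailing the JSON log files by hand; a minimal sketch (an assumption on my part, not something from this setup):

```yaml
# Hypothetical alternative for Filebeat 6.x: the docker input reads the
# container JSON logs itself, so the json.* options above are not needed.
filebeat.inputs:
- type: docker
  containers.ids:
  - '*'
  processors:
  - add_docker_metadata: ~
```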

Additional information:

  • Docker version 18.06.0-ce, build 0ffa825
  • docker-compose version 1.14.0, build c7bdf9e
  • swarm mode activated
  • elk-net is created with an "overlay" driver
  • I know I should use configs instead of volumes

Any idea on what I am doing wrong?


(Noémi Ványi) #2

This is a network error. Are you sure the output is reachable from the container Filebeat runs inside? Do you see any errors in the logs of the output?


(Roger L.) #3

Hi,

The output (Logstash) should be reachable, at least from any service on the elk-net network.
A service (emilevauge/whoami) deployed on the same network is able to reach Logstash.

A telnet to localhost 5044 shows that it is reachable (from an outside point of view, though).

And I don't see any errors in either the Elasticsearch or Logstash logs, nor any network-related warnings.


(Roger L.) #4

New development:

If I add the protocol, - "5044:5044/tcp", to Logstash, remove the elk-net network, and configure Filebeat to publish to [<swarm ip>:5044]... it eventually works!
It also sounds like I cannot connect over TCP to a container on the same Docker network... or at least not the way I expected!

docker-compose.yml

... 
 logstash:
    image: docker.elastic.co/logstash/logstash:6.4.2
    ports:
     - "12201:12201/udp"
     - "5044:5044/tcp"
     - "9600:9600"
...

filebeat.yml

...
output.logstash:
  hosts: ["<swarm ip>:5044"]
  bulk_max_size: 4096
...
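
For the record, the likely reason 'localhost' failed: inside the Filebeat container, 127.0.0.1 resolves to the Filebeat container itself, not to the host or to Logstash. A hedged sketch of an alternative that should also work while keeping elk-net, since Swarm's embedded DNS resolves compose service names on a shared overlay network (not verified in this thread):

```yaml
# Hypothetical: address Logstash by its service name on the shared overlay
# network, instead of localhost (which points at the Filebeat container
# itself, hence the "connection refused").
output.logstash:
  hosts: ["logstash:5044"]
  bulk_max_size: 4096
```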

(system) #5

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.