Filebeat is running but not sending logs to Logstash

I'm trying to install Filebeat 7.2.0 on my machine.
Filebeat starts fine, but it doesn't send any logs to Logstash.

When I look at the Filebeat logs with journalctl -u filebeat -e, I see:

2019-07-03T16:35:32.698+0200        DEBUG        [input]        input/input.go:152        Run input
 2019-07-03T16:35:32.698+0200        DEBUG        [input]        log/input.go:187        Start next scan
 2019-07-03T16:35:32.699+0200        DEBUG        [input]        log/input.go:417        Check file for harvesting: /var/lib/docker/volumes/logger_logs/_data/frontend_proxy.log
 2019-07-03T16:35:32.699+0200        DEBUG        [input]        log/input.go:507        Update existing file for harvesting: /var/lib/docker/volumes/logger_logs/_data/frontend_proxy.log, offset: 1357956
 2019-07-03T16:35:32.699+0200        DEBUG        [input]        log/input.go:559        Harvester for file is still running: /var/lib/docker/volumes/logger_logs/_data/frontend_proxy.log
 2019-07-03T16:35:32.699+0200        DEBUG        [input]        log/input.go:208        input states cleaned up. Before: 1, After: 1, Pending: 0
 2019-07-03T16:35:34.183+0200        ERROR        pipeline/output.go:100        Failed to connect to backoff(async(tcp://[http://192.168.93.11:5044]:5044)): lookup http://192.168.93.11:5044: no such host
 2019-07-03T16:35:34.183+0200        INFO        pipeline/output.go:93        Attempting to reconnect to backoff(async(tcp://[http://192.168.93.11:5044]:5044)) with 101 reconnect attempt(s)
 2019-07-03T16:35:34.183+0200        DEBUG        [logstash]        logstash/async.go:111        connect
 2019-07-03T16:35:34.183+0200        WARN        transport/tcp.go:53        DNS lookup failure "http://192.168.93.11:5044": lookup http://192.168.93.11:5044: no such host
 2019-07-03T16:35:42.701+0200        DEBUG        [input]        input/input.go:152        Run input
 2019-07-03T16:35:42.701+0200        DEBUG        [input]        log/input.go:187        Start next scan
 2019-07-03T16:35:42.701+0200        DEBUG        [input]        log/input.go:417        Check file for harvesting: /var/lib/docker/volumes/logger_logs/_data/frontend_proxy.log
 2019-07-03T16:35:42.701+0200        DEBUG        [input]        log/input.go:507        Update existing file for harvesting: /var/lib/docker/volumes/logger_logs/_data/frontend_proxy.log, offset: 1357956
 2019-07-03T16:35:42.701+0200        DEBUG        [input]        log/input.go:559        Harvester for file is still running: /var/lib/docker/volumes/logger_logs/_data/frontend_proxy.log
 2019-07-03T16:35:42.701+0200        DEBUG        [input]        log/input.go:208        input states cleaned up. Before: 1, After: 1, Pending: 0
 2019-07-03T16:35:52.701+0200        DEBUG        [input]        input/input.go:152        Run input
 2019-07-03T16:35:52.701+0200        DEBUG        [input]        log/input.go:187        Start next scan
 2019-07-03T16:35:52.701+0200        DEBUG        [input]        log/input.go:417        Check file for harvesting: /var/lib/docker/volumes/logger_logs/_data/frontend_proxy.log
 2019-07-03T16:35:52.701+0200        DEBUG        [input]        log/input.go:507        Update existing file for harvesting: /var/lib/docker/volumes/logger_logs/_data/frontend_proxy.log, offset: 1357956
 2019-07-03T16:35:52.701+0200        DEBUG        [input]        log/input.go:559        Harvester for file is still running: /var/lib/docker/volumes/logger_logs/_data/frontend_proxy.log
 2019-07-03T16:35:52.701+0200        DEBUG        [input]        log/input.go:208        input states cleaned up. Before: 1, After: 1, Pending: 0
 2019-07-03T16:35:53.804+0200        INFO        [monitoring]        log/log.go:145        Non-zero metrics in the last 30s        {"monitoring": {"metrics": {"beat":{"cpu":{"system":{"ticks":540,"time":{"ms":2}},"total":{"ticks":2670,"time":{"ms":31},"value":2670},"user":{"ticks":2130,"time":{"ms":29}}},"handles":{"limit":{"hard":4096,"soft":1024},"open":6},"info":{"ephemeral_id":"3abedc7c-fa01-4fd2-a674-b7547afbe133","uptime":{"ms":4353031}},"memstats":{"gc_next":33968416,"memory_alloc":17008152,"memory_total":149989304},"runtime":{"goroutines":24}},"filebeat":{"harvester":{"open_files":1,"running":1}},"libbeat":{"config":{"module":{"running":0}},"pipeline":{"clients":1,"events":{"active":4117,"retry":2048}}},"registrar":{"states":{"current":2}},"system":{"load":{"1":3.34,"15":3.17,"5":3.17,"norm":{"1":0.835,"15":0.7925,"5":0.7925}}}}}}


I don't understand this error:
Failed to connect to backoff(async(tcp://[http://192.168.93.11:5044]:5044)): lookup http://192.168.93.11:5044: no such host

I can ping the machine where Logstash is running, and I know port 5044 is open.
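
For reference, a quick way to verify both of those from the Filebeat host (assuming the nc utility is available):

# check that the Logstash host answers ping
ping -c 3 192.168.93.11
# check that TCP port 5044 accepts connections (-v verbose, -z scan without sending data)
nc -vz 192.168.93.11 5044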

Here is my filebeat.yml:

#=========================== Filebeat inputs =============================

filebeat.inputs:

- type: log

  # Change to true to enable this input configuration.
  enabled: true

  # Paths that should be crawled and fetched. Glob based paths.
  paths:
    - /var/lib/docker/volumes/logger_logs/_data/frontend_proxy.log
    #- c:\programdata\elasticsearch\logs\*
  fields:
    log_type: frontend_proxy

#============================= Filebeat modules ===============================

filebeat.config.modules:
  # Glob pattern for configuration loading
  path: ${path.config}/modules.d/*.yml

  # Set to true to enable config reloading
  reload.enabled: false

  # Period on which files under path should be checked for changes
  #reload.period: 10s

#==================== Elasticsearch template setting ==========================

setup.template.settings:
  index.number_of_shards: 1

#============================== Dashboards =====================================
setup.dashboards.enabled: true

#============================== Kibana =====================================

# Starting with Beats version 6.0.0, the dashboards are loaded via the Kibana API.
# This requires a Kibana endpoint configuration.
setup.kibana:

  host: "192.168.93.11:5601"

#================================ Outputs =====================================
#-------------------------- Elasticsearch output ------------------------------
#output.elasticsearch:
  # Array of hosts to connect to.
  #hosts: ["http://192.168.93.11:9200"]

#----------------------------- Logstash output --------------------------------
output.logstash:
  # The Logstash hosts
  hosts: ["http://192.168.93.11:5044"]

#================================ Processors =====================================

# Configure processors to enhance or manipulate events generated by the beat.

processors:
  - add_host_metadata: ~
  - add_cloud_metadata: ~

#================================ Logging =====================================
logging.level: debug

logging.selectors: ["*"]
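
For reference, a minimal Logstash pipeline on 192.168.93.11 that would accept this connection might look like the sketch below (an illustration only; the actual pipeline isn't shown in the thread, and the Elasticsearch host is assumed from the commented-out output above):

input {
  # Beats input listening on the port Filebeat is configured to reach
  beats {
    port => 5044
  }
}

filter {
  # the custom field set in filebeat.yml arrives under [fields]
  if [fields][log_type] == "frontend_proxy" {
    mutate { add_tag => ["frontend_proxy"] }
  }
}

output {
  # Elasticsearch host assumed from the commented-out output above
  elasticsearch {
    hosts => ["http://192.168.93.11:9200"]
  }
}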

Thanks in advance.


Hi there @Momo,

I'm very sorry, I don't speak French. It looks like you are configuring HTTP, but Logstash's Beats input speaks the Lumberjack protocol over plain TCP, so the hosts entries should be host:port pairs rather than URLs. Filebeat is treating the whole string http://192.168.93.11:5044 as a hostname, which is why the DNS lookup fails with "no such host".

Can you remove the http:// prefix from your configuration?

output.logstash:
  # The Logstash hosts
  hosts: ["192.168.93.11:5044"]

Hi @pmercado,

It works. Thanks a lot for your help.

Sorry for my late reply.

