Filebeat multiple outputs

Hello,

I have a working filebeat that ships logs to logstash and then on to elasticsearch.

I would like to enable the filebeat haproxy module so that it sends the haproxy logs to elasticsearch, but when I run the command: filebeat setup -e

I get the following error:

Exiting: error unpacking config data: more than one namespace configured accessing 'output' (source:'/etc/filebeat/filebeat.yml')

I saw that you cannot have multiple outputs with filebeat.

What should I do in this case?

My goal is to take advantage of the haproxy dashboards and the mappings that already ship with the module.

Below is the content of my filebeat.yml file:

#================================ Outputs =====================================
# Configure what output to use when sending the data collected by the beat.

#-------------------------- Elasticsearch output ------------------------------
output.elasticsearch:
  # Array of hosts to connect to.
  hosts: ["http://xxx.xxx.xx.xx:9200"]

#----------------------------- Logstash output --------------------------------
output.logstash:
  # The Logstash hosts
  hosts: ["xxx.xxx.xx.xx:5044"]

Thanks in advance.

Which version is this?

To be honest, I don't know where this message comes from. Maybe @ruflin knows?

My filebeat version is 6.5.2.

I read this here:

Is it possible to share the entire configuration?

(sorry for my French)

Hi,

@ruflin

Here is the content of my filebeat.yml file:

#=========================== Filebeat inputs =============================

filebeat.inputs:

- type: log
  # Change to true to enable this input configuration.
  enabled: true
  paths:
    - /var/lib/docker/volumes/logger_logs/_data/frontend_proxy.log
  fields:  {log_type: frontend_proxy}
  
- type: log
  paths:
    - /var/lib/docker/volumes/logger_logs/_data/transcity_rpe.log
  fields:  {log_type: transcity_rpe}
  ### Multiline options

  # The regexp Pattern that has to be matched. The example pattern matches all lines starting with [
  multiline.pattern: '^.+\]: 20[0-9]{2,2}-[0-9]{2,2}-[0-9]{2,2}T[0-9]{2,2}:[0-9]{2,2}:[0-9]{2,2}\.[0-9]{3,3}\+[0-9]{2,2}:[0-9]{2,2}'
  # Defines if the pattern set under pattern should be negated or not. Default is false.
  multiline.negate: true
  multiline.match: after
  
- type: log
  paths:
    - /var/lib/docker/volumes/logger_logs/_data/transcity_daq.log
  fields:  {log_type: transcity_daq}
  multiline.pattern: '^.+\]: 20[0-9]{2,2}-[0-9]{2,2}-[0-9]{2,2}T[0-9]{2,2}:[0-9]{2,2}:[0-9]{2,2}\.[0-9]{3,3}\+[0-9]{2,2}:[0-9]{2,2}'
  multiline.negate: true
  multiline.match: after
  
- type: log
  paths:
    - /var/lib/docker/volumes/logger_logs/_data/transcity_acm.log
  fields:  {log_type: transcity_acm}
  multiline.pattern: '^.+\]: 20[0-9]{2,2}-[0-9]{2,2}-[0-9]{2,2}T[0-9]{2,2}:[0-9]{2,2}:[0-9]{2,2}\.[0-9]{3,3}\+[0-9]{2,2}:[0-9]{2,2}'
  multiline.negate: true
  multiline.match: after

- type: log
  paths:
    - /var/lib/docker/volumes/logger_logs/_data/transcity_alm.log
  fields:  {log_type: transcity_alm}
  multiline.pattern: '^.+\]: 20[0-9]{2,2}-[0-9]{2,2}-[0-9]{2,2}T[0-9]{2,2}:[0-9]{2,2}:[0-9]{2,2}\.[0-9]{3,3}\+[0-9]{2,2}:[0-9]{2,2}'
  multiline.negate: true
  multiline.match: after

#============================= Filebeat modules ===============================

filebeat.config.modules:
  # Glob pattern for configuration loading
  path: ${path.config}/modules.d/*.yml

  # Set to true to enable config reloading
  reload.enabled: true

  # Period on which files under path should be checked for changes
  reload.period: 10s

#==================== Elasticsearch template setting ==========================
setup.template.settings:
  index.number_of_shards: 3
  #index.codec: best_compression
  #_source.enabled: false
#============================== Dashboards =====================================
setup.dashboards.enabled: true
#============================== Kibana =====================================
# Starting with Beats version 6.0.0, the dashboards are loaded via the Kibana API.
# This requires a Kibana endpoint configuration.
setup.kibana:
  host: "xxx.xxx.xx.xx:5601"
#================================ Outputs =====================================

#-------------------------- Elasticsearch output ------------------------------
output.elasticsearch:
  # Array of hosts to connect to.
  hosts: ["http://xxx.xxx.xx.xx:9200"]
#----------------------------- Logstash output --------------------------------
output.logstash:
  # The Logstash hosts
  hosts: ["xxx.xxx.xx.xx:5044"]
#================================ Processors =====================================
# Configure processors to enhance or manipulate events generated by the beat.
processors:
  - add_host_metadata: ~
  - add_cloud_metadata: ~
#================================ Logging =====================================
#============================== Xpack Monitoring ===============================
#xpack.monitoring.elasticsearch:
xpack.monitoring:
  enabled: true
  elasticsearch:
    hosts: ["http://xxx.xxx.xx.xx:9200"]

I'll switch to English to prevent misunderstandings: Beats only supports one output enabled at a time. Can you comment out either the logstash or the elasticsearch output and see if it works? Strangely, I would have expected a different error to be shown.
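A minimal sketch of that change in filebeat.yml, keeping Logstash and commenting out Elasticsearch (hosts are the placeholders from the config above):

#-------------------------- Elasticsearch output ------------------------------
#output.elasticsearch:
#  hosts: ["http://xxx.xxx.xx.xx:9200"]

#----------------------------- Logstash output --------------------------------
output.logstash:
  hosts: ["xxx.xxx.xx.xx:5044"]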

When I comment out the elasticsearch output, it works: the data is inserted into elasticsearch via logstash and I can see my data in kibana.
At first I used only logstash, which worked perfectly, but I uncommented the elasticsearch output to be able to use the haproxy module.

You can only use one output at a time. If you comment out Logstash, it should also work.
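The reverse of the earlier sketch, and the variant that filebeat setup needs, since setup loads templates and dashboards through an Elasticsearch output (same placeholder hosts):

#-------------------------- Elasticsearch output ------------------------------
output.elasticsearch:
  hosts: ["http://xxx.xxx.xx.xx:9200"]

#----------------------------- Logstash output --------------------------------
#output.logstash:
#  hosts: ["xxx.xxx.xx.xx:5044"]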

The haproxy logs are sent to elasticsearch, because I can see them in the kibana logs tab, but they are not parsed.
See the attachment.

No data is displayed in the default haproxy dashboard example (Traffic volume [Metricbeat HAProxy]).

If you want the messages to get parsed, you need to use the haproxy module: https://www.elastic.co/guide/en/beats/filebeat/7.0/filebeat-module-haproxy.html Currently I think you are grabbing the data with a plain input, which has no knowledge of the processing.
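The usual sequence, sketched with the same commands used elsewhere in this thread (run with the elasticsearch output as the only enabled output):

# enable the module, reload templates and dashboards, then restart
filebeat modules enable haproxy
filebeat setup -e
service filebeat restart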

The haproxy module is enabled:

[root@tup-perf4-rmq1 ~]# filebeat modules list
Enabled:
haproxy

filebeat is started:

[root@tup-perf4-rmq1 ~]# service filebeat start
Starting filebeat (via systemctl):                         [  OK  ]
[root@tup-perf4-rmq1 ~]# sudo journalctl -u filebeat
-- Logs begin at sam. 2019-04-27 05:53:24 CEST, end at lun. 2019-04-29 17:50:20 CEST. --
avril 29 17:46:50 tup-perf4-rmq1 systemd[1]: Stopping Filebeat sends log files to Logstash or directly to Elasticsearch....
avril 29 17:46:50 tup-perf4-rmq1 systemd[1]: Stopped Filebeat sends log files to Logstash or directly to Elasticsearch..
avril 29 17:50:01 tup-perf4-rmq1 systemd[1]: Started Filebeat sends log files to Logstash or directly to Elasticsearch..

Example from my haproxy log file:

Apr 29 15:23:58 d6f148871977 local0.info haproxy[1]: unix:1 [29/Apr/2019:15:23:58.981] frontend.https.devices~ https.devices.daq/https.devices.daq.1 0/0/1/2/3 202 557 - - ---- 1357/1/0/0/0 0/0 "POST /daq/devices/transactions/batch HTTP/1.1"
Apr 29 15:23:58 d6f148871977 local0.info haproxy[1]: unix:1 [29/Apr/2019:15:23:58.986] frontend.https.devices~ https.devices.tpg/https.devices.tpg.1 0/0/0/5/5 202 543 - - ---- 1357/1/0/0/0 0/0 "POST /tpg/medias/batch HTTP/1.1"
Apr 29 15:23:58 d6f148871977 local0.info haproxy[1]: unix:1 [29/Apr/2019:15:23:58.992] frontend.https.devices~ https.devices.daq/https.devices.daq.2 0/0/1/3/4 202 557 - - ---- 1357/1/0/0/0 0/0 "POST /daq/devices/transactions/batch HTTP/1.1"
Apr 29 15:23:59 d6f148871977 local0.info haproxy[1]: unix:1 [29/Apr/2019:15:23:58.997] frontend.https.devices~ https.devices.daq/https.devices.daq.1 0/0/1/2/4 202 557 - - ---- 1357/1/0/0/0 0/0 "POST /daq/devices/transactions/batch HTTP/1.1"
Apr 29 15:23:59 d6f148871977 local0.info haproxy[1]: unix:1 [29/Apr/2019:15:23:59.001] frontend.https.devices~ https.devices.daq/https.devices.daq.2 0/0/1/2/4 202 557 - - ---- 1357/1/0/0/0 0/0 "POST /daq/devices/transactions/batch HTTP/1.1"
Apr 29 15:23:59 d6f148871977 local0.info haproxy[1]: unix:1 [29/Apr/2019:15:23:59.005] frontend.https.devices~ https.devices.daq/https.devices.daq.1 0/0/1/4/5 202 557 - - ---- 1357/1/0/0/0 0/0 "POST /daq/devices/transactions/batch HTTP/1.1"
Apr 29 15:23:59 d6f148871977 local0.info haproxy[1]: unix:1 [29/Apr/2019:15:23:59.010] frontend.https.devices~ https.devices.tpg/https.devices.tpg.2 0/0/1/3/4 202 543 - - ---- 1357/1/0/0/0 0/0 "POST /tpg/medias/batch HTTP/1.1"

Content of the haproxy module config:

- module: haproxy
  # All logs
  log:
    enabled: true

    # Set which input to use between syslog (default) or file.
    var.input: "file"

    # Set custom paths for the log files. If left empty,
    # Filebeat will choose the paths depending on your OS.
    var.paths: ["/root/tools/log_haproxy_for_filebeat.log"]

I don't see what's missing.

Thanks for your help.
