Filebeat sends logs to Kafka with the haproxy module

Hello!
I am currently having a problem loading the haproxy module to parse logs and send them to my Kafka servers.
The system module has been enabled and verified using `filebeat modules list`.

filebeat version 7.13.3 (amd64), libbeat 7.13.3
haproxy module (`/etc/filebeat/modules.d/haproxy.yml`) config:

```yaml
- module: haproxy
  # All logs
  log:
    enabled: true
    var.input: "file"
    var.paths: ["/var/log/haproxy.log"]
```
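As a side note (not from the original thread), a module config like the one above can be sanity-checked with Filebeat's built-in `test` subcommands before chasing parsing issues:

```shell
# Validate the config file syntax and settings
filebeat test config

# Verify that Filebeat can reach the configured output (Kafka here)
filebeat test output
```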

But I still get each message line as plain text, not in JSON format.

Expected result: parsed logs.

`filebeat modules list` should show the haproxy module as enabled, not system. This is more or less how the output of `filebeat modules list` should look:

```
Enabled:
haproxy

Disabled:
apache
auditd
elasticsearch
icinga
iis
kafka
kibana
logstash
mongodb
mysql
nats
nginx
osquery
pensando
postgresql
redis
santa
system
traefik
```
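If haproxy shows up under Disabled, it can be switched on from the CLI (a sketch; this assumes the default deb/rpm layout where configs live in `/etc/filebeat/modules.d`):

```shell
# Enable the haproxy module and disable system
filebeat modules enable haproxy
filebeat modules disable system

# Verify the change
filebeat modules list
```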

On another note, could you also paste your Filebeat config file formatted in markdown, please? I edited your message to format it.

Thank you for the reply. That was my mistake about the module name; I meant the haproxy module, of course.

```
filebeat modules list
Enabled:
haproxy

Disabled:
activemq
apache
```
```yaml
path.home: /usr/share/filebeat
path.config: /etc/filebeat
path.data: /var/lib/filebeat

filebeat.config.modules:
  path: ${path.config}/modules.d/*.yml
  reload.enabled: false

max_procs: 1

processors:
  - add_fields:
      target: ""
      fields:
        env: "test"

# https://www.elastic.co/guide/en/beats/filebeat/master/configuring-internal-queue.html
queue.mem:
  events: 4096
  flush:
    min_events: 2048
    timeout: 1s

output.kafka:
  hosts:
    - server1:9092
    - server2:9092
  topic: test-logs
  client_id: test_logs
  partition.round_robin:
    reachable_only: true
  required_acks: 1
  compression: lz4
  max_message_bytes: 5000000

logging.level: info
logging.to_files: true
logging.files:
  name: filebeat.log
  path: /var/log/filebeat
  keepfiles: 7
  permissions: 0755
```

What do you mean exactly? Are you expecting parsed logs to be sent to Kafka? Filebeat doesn't parse the HAProxy logs itself; all of the parsing is done in the Elasticsearch ingest pipeline.
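Since the Kafka output bypasses Elasticsearch, one option (an assumption on my part, not something from this thread) is to have Logstash consume from Kafka and hand each event to the module's ingest pipeline on the Elasticsearch side. The Elasticsearch host below is a placeholder; the pipeline name follows Filebeat's `filebeat-<version>-<module>-<fileset>-pipeline` convention, so double-check the exact name in your cluster with `GET _ingest/pipeline`:

```conf
input {
  kafka {
    bootstrap_servers => "server1:9092,server2:9092"
    topics => ["test-logs"]
    codec => "json"   # Filebeat writes JSON-encoded events to Kafka
  }
}

output {
  elasticsearch {
    hosts => ["http://elasticsearch:9200"]          # placeholder host
    pipeline => "filebeat-7.13.3-haproxy-log-pipeline"  # verify the exact name
  }
}
```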

Yes, I am expecting parsed logs from the Filebeat haproxy module.
I assumed this module would act as a `filebeat.inputs` entry, and that Filebeat would then send parsed logs to Kafka.
How is this module supposed to work?

So a module is a grouping of Filebeat inputs and processors, Elasticsearch ingest pipelines, Kibana dashboards, Elasticsearch field mappings, and so on. Some modules do more processing on the Filebeat side, but most do the majority of their processing in the Elasticsearch ingest pipeline, and that is also how it will be with the Elastic Agent. For those modules, the events output by Filebeat will be unparsed.
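To make the "unparsed" point concrete: a Kafka consumer receives the raw HAProxy line in the event's `message` field and has to parse it downstream. A minimal sketch (not the official ingest pipeline; field names, the regex, and the sample line are my own illustrative assumptions) of extracting a few fields from an HAProxy HTTP-mode log line:

```python
import re

# Illustrative pattern for an HAProxy HTTP-mode log line; the real ingest
# pipeline handles many more formats and edge cases than this sketch does.
HAPROXY_HTTP = re.compile(
    r'haproxy\[\d+\]: '
    r'(?P<client_ip>[\d.]+):(?P<client_port>\d+) '
    r'\[(?P<accept_date>[^\]]+)\] '
    r'(?P<frontend>\S+) (?P<backend>[^/]+)/(?P<server>\S+) '
    r'(?P<timers>[\d/+-]+) '
    r'(?P<status>\d{3}) (?P<bytes>\d+) '
    r'.*"(?P<method>\S+) (?P<path>\S+)'
)

def parse_haproxy_line(line):
    """Return a dict of extracted fields, or None if the line doesn't match."""
    m = HAPROXY_HTTP.search(line)
    return m.groupdict() if m else None

# Hypothetical raw line, as it might arrive in a Kafka message's "message" field
sample = ('Jul 20 10:00:00 lb1 haproxy[2345]: 10.0.0.1:51234 '
          '[20/Jul/2021:10:00:00.123] fe_http be_app/srv1 0/0/1/2/3 '
          '200 1024 - - ---- 1/1/0/0/0 0/0 "GET /health HTTP/1.1"')

print(parse_haproxy_line(sample))
```

In practice you would use the module's Elasticsearch ingest pipeline (or a Logstash grok filter) rather than hand-rolling a regex; this only illustrates that the parsing burden lands on the consumer when Filebeat ships to Kafka.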


Ok, thank you.

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.