Multiple Filebeat Instances

Hello,
I set up two Filebeat instances on a Linux server: one for syslog and the PANW module, and the other for the F5 module.
The syslog/PANW Filebeat was the first one; I changed the index to a different name, but it automatically created a data stream.
To set up the second instance, I created an additional systemd entry, copied the original etc directory, and adjusted filebeat.yml (I only changed the index name and disabled syslog) and enabled the F5 module. On starting the second instance, it creates an index without ILM and does not create a data stream.
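For reference, the second systemd unit looks roughly like this (the unit name and paths here are placeholders, not necessarily the exact ones I used):

```ini
# /etc/systemd/system/filebeat-f5.service -- sketch of the second unit
[Unit]
Description=Filebeat F5 instance
After=network-online.target
Wants=network-online.target

[Service]
ExecStart=/usr/share/filebeat/bin/filebeat \
    -c /etc/filebeat-f5/filebeat.yml \
    --path.config /etc/filebeat-f5 \
    --path.data /var/lib/filebeat-f5 \
    --path.logs /var/log/filebeat-f5
Restart=always

[Install]
WantedBy=multi-user.target
```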
Any idea why the second instance does not create a data stream?

Thanks in advance!
Regards Boris

Is there a reason you are running an instance per module? A single instance should handle this.

Hello Mark,

yes, there is. Since the update to 8.7.0, the Filebeat F5 module has been sending less data than before the update (verified by sending the F5 data to Logstash in parallel).
I just wanted to separate the F5 module so that it has its own logs and its own index for the data coming from the F5, to investigate this issue further.

Kind regards
Boris

@warkolm There are so many reasons why multiple agents are needed on a host. One example, applicable to the Elastic ecosystem itself: a customer typically needs to forward Elasticsearch / Logstash / Kibana logs and metrics to a separate monitoring cluster. This is not possible in general, as there is already a set of agents running on the node to index system logs and metrics.

@BoKu We did some custom scripting with Ansible to achieve this, but it has been a pain, to be honest.

Elastic should really support multiple outputs, or provide a supported way to install and manage multiple identical agents on a system. Even Elastic Agent is very limited in that way; I'm not even sure it's possible with Agent (e.g. sending system logs to the production cluster and Elastic logs and metrics to a separate monitoring cluster).


@willemdh

I think the team is looking at defining an output per integration; perhaps you should add your thoughts to this public issue...

I also see it listed on our internal tracking issues.

Your feedback is timely, as I think they are gathering feedback right now... this is just a discussion forum; your voice is more likely to be heard in an issue.

BTW, this is for Elastic Agent; I would not expect this to be backported to standalone Filebeat.


My second thought on this, when I got it working (it has been a while), was to first do a completely separate tar.gz install for the two Beats and get it all working correctly.

Then I worked on getting them into systemctl etc.

I suspect you may be stepping on the data path... maybe... maybe not.
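Each Filebeat instance keeps its registry and state under `path.data`, so two instances sharing one data directory will interfere with each other. A sketch of what the second instance's filebeat.yml would need (the directory names are just examples):

```yaml
# Give the second instance its own data and log directories;
# the defaults would collide with the first instance.
path.data: /var/lib/filebeat-f5
path.logs: /var/log/filebeat-f5
```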


Hello, and thanks for your replies,

but I had another initial question. The second instance is running and working, BUT it creates an index without ILM instead of a data stream.
I need an individual index (something like filebeat-f5-%{[agent.version]}) with ILM, or a working data stream.

Regards Boris

You will need to share your entire filebeat.yml and whatever module config you are using; otherwise we are just guessing.

Please show the exact current configuration and results
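For example, something like this in Kibana Dev Tools would show what actually got created (the `filebeat-f5-*` pattern here is just a guess at your naming):

```
GET _data_stream/filebeat-*

GET _cat/indices/filebeat-f5-*?v
```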

Here is the filebeat.yml

###################### Filebeat Configuration Example #########################

#=========================== Filebeat inputs =============================

filebeat.inputs:

- type: log

  # Change to true to enable this input configuration.
  enabled: false

  ## Paths that should be crawled and fetched. Glob based paths.
  #paths:
  #  - /var/log/httpd/*log
  #  #- c:\programdata\elasticsearch\logs\*
  #tags: [ "httpd_rh_log" ]

- type: syslog
  enabled: false
  protocol.udp.host: "0.0.0.0:514"
  tags: [ "syslog", "udp" ]

- type: syslog
  enabled: false
  protocol.tcp.host: "0.0.0.0:514"
  tags: [ "syslog", "tcp" ]

#============================= Filebeat modules ===============================

filebeat.config.modules:
  # Glob pattern for configuration loading
  path: ${path.config}/modules.d/*.yml

  # Set to true to enable config reloading
  reload.enabled: false

  # Period on which files under path should be checked for changes
  #reload.period: 10s

#==================== Elasticsearch template setting ==========================

setup.template.settings:
  index.number_of_shards: 1
  #index.codec: best_compression
  #_source.enabled: false


#================================ General =====================================

tags: ["f5"]

# Optional fields that you can specify to add additional information to the
# output.
fields:
  env: prod

#============================== Kibana =====================================

# Starting with Beats version 6.0.0, the dashboards are loaded via the Kibana API.
# This requires a Kibana endpoint configuration.
setup.kibana:

  # Kibana Host
  # Scheme and port can be left out and will be set to the default (http and 5601)
  # In case you specify an additional path, the scheme is required: http://localhost:5601/path
  # IPv6 addresses should always be defined as: https://[2001:db8::1]:5601
  #host: "https://localhost:5601"
  

  # Kibana Space ID
  # ID of the Kibana Space into which the dashboards should be loaded. By default,
  # the Default Space will be used.
  #space.id: ""


#============================= Elastic Cloud ==================================


#================================ Outputs =====================================


#-------------------------- Elasticsearch output ------------------------------
output.elasticsearch:
  # Array of hosts to connect to.
  hosts: ["5_HOSTS:9200"]

  # Optional protocol and basic auth credentials.
  protocol: "https"
  username: "beatwriter"
  password: "${PWD}"
  ssl.certificate_authorities: ["/path/to/CA_cert"]

  #index: "filebeat-f5-%{[agent.version]}"
  index: "filebeat-f5-%{[agent.version]}"
  #setup.template.name: "filebeat"
  #setup.template.pattern: "filebeat"

#----------------------------- Logstash output --------------------------------

#================================ Setup ==========================================
setup.template.name: "filebeat-f5-%{[beat.version]}"
setup.template.pattern: "filebeat-f5-%{[beat.version]}-*"

setup.ilm.enabled: true
setup.ilm.rollover_alias: "filebeat-f5-%{[agent.version]}"
setup.ilm.policy_name: "filebeat"

#================================ Processors =====================================

processors:
  - add_host_metadata: ~
  - add_cloud_metadata: ~
  - add_docker_metadata: ~
  - add_kubernetes_metadata: ~

#================================ Logging =====================================
logging:
  level: warning
  to_files: true
  to_syslog: false
  json: true
  files:
    path: '/var/log/filebeat-f5'
    name: 'filebeat'
    keepfiles: '3'
    permissions: '0644'


#============================== X-Pack Monitoring ===============================


#================================= Migration ==================================

# This allows to enable 6.7 migration aliases
#migration.6_to_7.enabled: true

and here is the module config, f5.yml:

# Module: f5
# Docs: https://www.elastic.co/guide/en/beats/filebeat/7.16/filebeat-module-f5.html

- module: f5
  bigipapm:
    enabled: true

    # Set which input to use between udp (default), tcp or file.
    var.input: udp
    var.syslog_host: 0.0.0.0
    var.syslog_port: 9504

    # Set paths for the log files when file input is used.
    # var.paths:

    # Toggle output of non-ECS fields (default true).
    var.rsa_fields: true
    var.keep_raw_fields: true

    # Set custom timezone offset.
    # "local" (default) for system timezone.
    # "+02:00" for GMT+02:00
    # var.tz_offset: local

  bigipafm:
    enabled: true

    # Set which input to use between udp (default), tcp or file.
    # var.input: udp
    # var.syslog_host: localhost
    # var.syslog_port: 9528

    # Set paths for the log files when file input is used.
    # var.paths:

    # Toggle output of non-ECS fields (default true).
    # var.rsa_fields: true

    # Set custom timezone offset.
    # "local" (default) for system timezone.
    # "+02:00" for GMT+02:00
    # var.tz_offset: local

What version are you on?

And what resulting index/data stream is created with this configuration?
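If a plain index was created, something like this (Dev Tools syntax; the index pattern and policy name are assumed from your config) would show whether any ILM policy is attached:

```
GET filebeat-f5-*/_ilm/explain

GET _ilm/policy/filebeat
```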

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.