Using Filebeat with Logstash breaks Metricbeat

I'm using the default configuration for Filebeat and a standard Logstash configuration that takes input from Beats and outputs to Elasticsearch.

Metricbeat works fine, but as soon as I start an instance of Filebeat, the Metricbeat Infrastructure visualization breaks.

Could it be an index I need to change? Is there some special setup I'm missing? Thanks in advance; this has been breaking our system for weeks.

Metricbeat and Filebeat should be largely independent. Can you share the configurations that are causing trouble (filebeat.yml and metricbeat.yml)?

Sure.

filebeat.yml:

filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /my/log/path/log.out
filebeat.config:
  modules:
    enabled: false
    path: modules.d/*.yml

#output.elasticsearch.hosts: ["ESHOST1:9200", "ESHOST2:9200", "ESHOST3:9200"]
output.logstash.hosts: ["LSHOST:5044"]

setup.kibana:
  host: "KHOST: 5601"
xpack.monitoring.enabled: true
xpack.monitoring.elasticsearch:
  hosts: ["ESHOST1:9200", "ESHOST2:9200", "ESHOST3:9200"]

metricbeat.yml:

metricbeat.max_start_delay: 10s

metricbeat.modules:
- module: system
  metricsets:
    - cpu
    - load
    - memory
    - network
    - process
    - process_summary
    - uptime
    - socket_summary
  period: 10s
  process.include_top_n:
    by_cpu: 5
    by_memory: 5
- module: system
  period: 1m
  metricsets:
    - filesystem
    - fsstat
  processors:
  - drop_event.when.regexp:
      system.filesystem.mount_point: '^/(sys|cgroup|proc|dev|etc|host|lib)($|/)'

- module: system
  period: 15m
  metricsets:
    - uptime

- module: system
  period: 10s
  metricsets: ["diskio"]

- module: docker
  period: 10s
  hosts: ["unix:///var/run/docker.sock"]

#output.elasticsearch.hosts: ["ESHOST1:9200", "ESHOST2:9200", "ESHOST3:9200"]
output.logstash.hosts: ["LSHOST:5044"]

setup.kibana:
  host: "KHOST:5601"

xpack.monitoring.enabled: true
xpack.monitoring.elasticsearch:
  hosts: ["ESHOST1:9200","ESHOST2:9200", "ESHOST3:9200"]

Even after stopping all Filebeat instances and deleting the indices, the Metricbeat data still doesn't show up in Kibana. I'm also unable to delete the template using curl; it doesn't actually get deleted, even though the request returns {"acknowledged":true}.
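
(For context, the kind of calls involved look roughly like this; the host and the name patterns are placeholders, not exactly what I ran:)

# attempt to delete the metricbeat indices
curl -XDELETE "http://ESHOST1:9200/metricbeat-*"
# attempt to delete the metricbeat index template
curl -XDELETE "http://ESHOST1:9200/_template/metricbeat-*"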

To be clear: if I never set up or start any Filebeat instances, Metricbeat works perfectly.

Ah, can you share your Logstash configuration as well? (Have you tried sending directly to Elasticsearch rather than Logstash?) If Logstash is somehow routing Filebeat data to the same place as Metricbeat data, that could definitely break visualizations :slight_smile:
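
For example, a Logstash output that explicitly splits the two Beats into separate indices could look roughly like this — just a sketch, with the host as a placeholder, not necessarily what you need:

output {
  if [@metadata][beat] == "filebeat" {
    elasticsearch {
      hosts => ["ESHOST1:9200"]
      index => "filebeat-%{+YYYY.MM.dd}"
    }
  } else {
    elasticsearch {
      hosts => ["ESHOST1:9200"]
      index => "metricbeat-%{+YYYY.MM.dd}"
    }
  }
}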

Hi Carson,

Why don't you send the events from Filebeat and Metricbeat to different ES indices?

https://www.elastic.co/guide/en/beats/filebeat/current/logstash-output.html

You can use the index setting to control the index name.
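
If I'm reading that page right, a rough sketch in filebeat.yml would be something like this (the custom name is just an example):

output.logstash:
  hosts: ["LSHOST:5044"]
  # root name that Logstash receives as [@metadata][beat]; defaults to "filebeat"
  index: "filebeat-custom"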

I believe I am using different indexes, with this configuration:
beats-filter.conf:

input {
  beats {
    port => 5044
  }
}

output {
  elasticsearch {
    hosts => ["ESHOST1:9200", "ESHOST2:9200", "ESHOST3:9200"]
    manage_template => false
    index => "%{[@metadata][beat]}-%{[@metadata][version]}-%{+YYYY.MM.dd}"
  }
}

If I don't actually need Logstash (I think I can get away with an ingest node + grok processor instead), I could try sending directly to Elasticsearch.
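
Something roughly like this in filebeat.yml, I think (the pipeline name is just a placeholder for whatever ingest pipeline would replace the Logstash grok step):

#output.logstash.hosts: ["LSHOST:5044"]
output.elasticsearch:
  hosts: ["ESHOST1:9200", "ESHOST2:9200", "ESHOST3:9200"]
  # hypothetical ingest pipeline to do the grok parsing
  pipeline: "my-grok-pipeline"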

Thanks for sharing the Logstash config. It looks perfectly fine to me.
I suggest you go to Kibana and look at the input data for the metricbeat index.
There shouldn't be any conflict between Filebeat and Metricbeat running together and sending data to Logstash.

How do I go about doing that? I'm not sure what you mean by "look at the input data"

Update: deleting the index using curl seemed to work, and for some reason the new index is not broken. A very strange issue that I'll continue to monitor.

If anyone has further ideas as to what's happening, that would be helpful.
