Good morning,

I have installed and started the Filebeat (5.5) agent on a set of servers and I am seeing very high CPU usage from Filebeat. I recently added the settings max_proc: 2 and scan_frequency: 30s because I read in another post that this would help, but it has not.

Thanks.
My configs are as follows:
###################### Filebeat Configuration Example #########################
# This file is an example configuration file highlighting only the most common
# options. The filebeat.full.yml file from the same directory contains all the
# supported options with more comments. You can use it as a reference.
#
# You can find the full configuration reference here:
# https://www.elastic.co/guide/en/beats/filebeat/index.html
#=========================== Filebeat prospectors =============================
filebeat.prospectors:
# Each - is a prospector. Most options can be set at the prospector level, so
# you can use different prospectors for various configurations.
# Below are the prospector specific configurations.
# Paths that should be crawled and fetched. Glob based paths.
- input_type: log
  paths:
    - /some/logs/path/*.log
  exclude_files: ['.fish.log$']
  close_inactive: 10m
  max_proc: 2
  scan_frequency: 30s
  multiline.pattern: '^[0-9]{2}:[0-9]{2}:[0-9]{2}.[0-9]{3}'
  multiline.negate: true
  multiline.match: after
  fields_under_root: true
  fields:
    service: my_service
- input_type: log
  paths:
    - /some/logs/path/*.log
  exclude_files: ['.fish.log$']
  close_inactive: 10m
  max_proc: 2
  scan_frequency: 30s
  multiline.pattern: '^[0-9]{2}:[0-9]{2}:[0-9]{2}.[0-9]{3}'
  multiline.negate: true
  multiline.match: after
  fields_under_root: true
  fields:
    service: my_service
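For reference, here is a sketch of how I now suspect these options are meant to be laid out. This is an assumption on my part from reading the 5.x reference: `max_procs` (with an `s`) appears to be a top-level setting rather than a per-prospector option, while `scan_frequency` stays at the prospector level. I may be wrong about this:

```yaml
# Sketch only -- assumes max_procs (plural) is a top-level option,
# not a per-prospector one, and that scan_frequency belongs at the
# prospector level.
max_procs: 2

filebeat.prospectors:
- input_type: log
  paths:
    - /some/logs/path/*.log
  scan_frequency: 30s
```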
The main filebeat.yml:
#=========================== Filebeat prospectors ==============================
filebeat:
  prospectors: []
  config_dir: "/my/log/path"
#================================= General =====================================
# The name of the shipper that publishes the network data. It can be used to group
# all the transactions sent by a single shipper in the web interface.
#name:
#tags: ["service-X", "web-tier"]
#fields:
# AppID:
#================================= Outputs =====================================
#------------------------------- File output -----------------------------------
#output.file:
# path: "/var/filebeat/data"
# filename: logdata
# rotate_every_kb: 10000
# number_of_files: 3
#----------------------------- Kafka output --------------------------------
output.kafka:
  # initial brokers for reading cluster metadata
  hosts: ["SET of HOSTS"]

  # message topic selection + partitioning
  # topic: "PR106659-OMH0-DEV-test01"
  topic: "MY_TOPIC"
  client_id: ClientA

  # partition.round_robin:
  #   reachable_only: false
  # required_acks: 1
  # compression: gzip
  # max_message_bytes: 1000000
#--------------------------- Elasticsearch output ------------------------------
#output.elasticsearch:
#  hosts: ["localhost:9200"]
# Optional protocol and basic auth credentials
#protocol: "https"
#username: "elastic"
#password: ""
#================================= Logging =====================================
# Sets log level. The default log level is info.
# Available log levels are: critical, error, warning, info, debug
logging.level: info
# At debug level, you can selectively enable logging only for some components.
# To enable all selectors use ["*"]. Examples of other selectors are "beat",
# "publish", "service".
#logging.selectors: ["*"]
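In case it is useful, I am also planning to turn on debug logging temporarily to watch the scan/harvest activity. A sketch of what I intend to try; the selector names are my assumption based on the component names that appear in Filebeat's own log lines:

```yaml
# Temporary diagnostics only -- debug logging is verbose.
# Selector names ("prospector", "harvester") are assumptions based on
# the component tags seen in Filebeat's log output.
logging.level: debug
logging.selectors: ["prospector", "harvester"]
```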