Unable to start filebeat

I get the error below when I try to start filebeat.

ā— filebeat.service - filebeat
Loaded: loaded (/usr/lib/systemd/system/filebeat.service; disabled; vendor preset: disabled)
Active: failed (Result: start-limit) since Wed 2018-12-05 06:15:07 EST; 1h 3min ago
Docs: https://www.elastic.co/guide/en/beats/filebeat/current/index.html
Process: 29065 ExecStart=/usr/share/filebeat/bin/filebeat -c /etc/filebeat/filebeat.yml -path.home /usr/share/filebeat -path.config /etc/filebeat -path.data /var/lib/filebeat -path.logs /var/log/filebeat (code=exited, status=1/FAILURE)
Main PID: 29065 (code=exited, status=1/FAILURE)

Dec 05 06:15:07 aeaqa1 systemd[1]: Unit filebeat.service entered failed state.
Dec 05 06:15:07 aeaqa1 systemd[1]: filebeat.service failed.
Dec 05 06:15:07 aeaqa1 systemd[1]: filebeat.service holdoff time over, scheduling restart.
Dec 05 06:15:07 aeaqa1 systemd[1]: start request repeated too quickly for filebeat.service
Dec 05 06:15:07 aeaqa1 systemd[1]: Failed to start filebeat.
Dec 05 06:15:07 aeaqa1 systemd[1]: Unit filebeat.service entered failed state.
Dec 05 06:15:07 aeaqa1 systemd[1]: filebeat.service failed.
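
For what it's worth, `Result: start-limit` means systemd stopped retrying after several rapid failures, so the real error from filebeat itself is usually a few lines further up in the journal. On a systemd host you can surface it and clear the counter like this (a sketch; the commands may need sudo):

```shell
# Show the most recent filebeat journal entries, including the actual exit error
journalctl -u filebeat --no-pager -n 50

# Clear the start-limit state so the unit can be started again
systemctl reset-failed filebeat
systemctl start filebeat
```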

Here is the YAML file:

    #=========================== Filebeat prospectors =============================

    filebeat.prospectors:

    # Each - is a prospector. Most options can be set at the prospector level, so
    # you can use different prospectors for various configurations.
    # Below are the prospector specific configurations.

    - type: log

      # Change to true to enable this prospector configuration.
      enabled: false

      # Paths that should be crawled and fetched. Glob based paths.
      paths:
        - /u01/wildfly-log/*.log
        #- c:\programdata\elasticsearch\logs\*

      # Exclude lines. A list of regular expressions to match. It drops the lines that are
      # matching any regular expression from the list.
      #exclude_lines: ['^DBG']

      # Include lines. A list of regular expressions to match. It exports the lines that are
      # matching any regular expression from the list.
      #include_lines: ['^ERR', '^WARN']

      # Exclude files. A list of regular expressions to match. Filebeat drops the files that
      # are matching any regular expression from the list. By default, no files are dropped.
      #exclude_files: ['.gz$']

      # Optional additional fields. These fields can be freely picked
      # to add additional information to the crawled log files for filtering
      fields:
        log_type: errorlog

      #  level: debug
      #  review: 1
    - type: log
      enabled: true
      paths:
        - /u01/wildfly9-log/*.log
      fields:
        log_type: serverlog

      ### Multiline options

      # Mutiline can be used for log messages spanning multiple lines. This is common
      # for Java Stack Traces or C-Line Continuation

      # The regexp Pattern that has to be matched. The example pattern matches all lines starting with [
      #multiline.pattern: ^\[

      # Defines if the pattern set under pattern should be negated or not. Default is false.
      #multiline.negate: false

      # Match can be set to "after" or "before". It is used to define if lines should be append to a pattern
      # that was (not) matched before or after or as long as a pattern is not matched based on negate.
      # Note: After is the equivalent to previous and before is the equivalent to to next in Logstash
      #multiline.match: after


    #============================= Filebeat modules ===============================

    filebeat.config.modules:
      # Glob pattern for configuration loading
      path: ${path.config}/modules.d/*.yml

      # Set to true to enable config reloading
      reload.enabled: false

      # Period on which files under path should be checked for changes
      #reload.period: 10s

    #==================== Elasticsearch template setting ==========================

    setup.template.settings:
      index.number_of_shards: 3
      #index.codec: best_compression
      #_source.enabled: false

    #============================== Dashboards =====================================
    # These settings control loading the sample dashboards to the Kibana index. Loading
    # the dashboards is disabled by default and can be enabled either by setting the
    # options here, or by using the `-setup` CLI flag or the `setup` command.
    #setup.dashboards.enabled: false

    # The URL from where to download the dashboards archive. By default this URL
    # has a value which is computed based on the Beat name and version. For released
    # versions, this URL points to the dashboard archive on the artifacts.elastic.co
    # website.
    #setup.dashboards.url:

    #============================== Kibana =====================================

    # Starting with Beats version 6.0.0, the dashboards are loaded via the Kibana API.
    # This requires a Kibana endpoint configuration.
    setup.kibana:

      # Kibana Host
      # Scheme and port can be left out and will be set to the default (http and 5601)
      # In case you specify and additional path, the scheme is required: http://localhost:5601/path
      # IPv6 addresses should always be defined as: https://[2001:db8::1]:5601
      #host: "localhost:5601"

    #================================ Outputs =====================================

    # Configure what output to use when sending the data collected by the beat.

    #-------------------------- Elasticsearch output ------------------------------
    #output.elasticsearch:
      # Array of hosts to connect to.
    #  hosts: ["192.168.22.22:9200"]

      # Optional protocol and basic auth credentials.
      #protocol: "https"
      #username: "elastic"
      #password: "changeme"

    #----------------------------- Logstash output --------------------------------
    output.logstash:
      # The Logstash hosts
      hosts: ["192.168.22.22:5044"]

      # Optional SSL. By default is off.
      # List of root certificates for HTTPS server verifications
      #ssl.certificate_authorities: ["/etc/pki/root/ca.pem"]

      # Certificate for SSL client authentication
      #ssl.certificate: "/etc/pki/client/cert.pem"

      # Client Certificate Key
      #ssl.key: "/etc/pki/client/cert.key"

Hi @Chandana,

It looks like there is an indentation problem in your filebeat.yml file.

Could you please share the filebeat logs or the messages log for further investigation?
filebeat logs: /var/log/filebeat
messages log: /var/log/messages
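
A quick way to check for YAML/indentation mistakes is filebeat's built-in config test (config path assumed from the systemd unit above):

```shell
# Prints "Config OK" on success; exits non-zero and points at the
# offending line if the YAML is malformed
filebeat test config -c /etc/filebeat/filebeat.yml
```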

Regards,
Harsh Bajaj

Here is the log file I see:

2018-12-06T04:10:29.181-0500	INFO	[monitoring]	log/log.go:124	Non-zero metrics in the last 30s	{"monitoring": {"metrics": {"beat":{"cpu":{"system":{"ticks":1220,"time":1229},"total":{"ticks":5490,"time":5501,"value":5490},"user":{"ticks":4270,"time":4272}},"info":{"ephemeral_id":"a0c68062-989b-4f28-aaa7-287aa1788fbe","uptime":{"ms":91680007}},"memstats":{"gc_next":4194304,"memory_alloc":1962984,"memory_total":574737176}},"filebeat":{"harvester":{"open_files":0,"running":0}},"libbeat":{"config":{"module":{"running":0}},"pipeline":{"clients":0,"events":{"active":0}}},"registrar":{"states":{"current":0}},"system":{"load":{"1":0.12,"15":0.06,"5":0.07,"norm":{"1":0.03,"15":0.015,"5":0.0175}}}}}}
2018-12-06T04:10:59.181-0500	INFO	[monitoring]	log/log.go:124	Non-zero metrics in the last 30s	{"monitoring": {"metrics": {"beat":{"cpu":{"system":{"ticks":1230,"time":1230},"total":{"ticks":5500,"time":5505,"value":5500},"user":{"ticks":4270,"time":4275}},"info":{"ephemeral_id":"a0c68062-989b-4f28-aaa7-287aa1788fbe","uptime":{"ms":91710007}},"memstats":{"gc_next":4194304,"memory_alloc":1319136,"memory_total":574842760}},"filebeat":{"harvester":{"open_files":0,"running":0}},"libbeat":{"config":{"module":{"running":0}},"pipeline":{"clients":0,"events":{"active":0}}},"registrar":{"states":{"current":0}},"system":{"load":{"1":0.07,"15":0.06,"5":0.06,"norm":{"1":0.0175,"15":0.015,"5":0.015}}}}}}
2018-12-06T04:11:29.181-0500	INFO	[monitoring]	log/log.go:124	Non-zero metrics in the last 30s	{"monitoring": {"metrics": {"beat":{"cpu":{"system":{"ticks":1230,"time":1231},"total":{"ticks":5500,"time":5506,"value":5500},"user":{"ticks":4270,"time":4275}},"info":{"ephemeral_id":"a0c68062-989b-4f28-aaa7-287aa1788fbe","uptime":{"ms":91740007}},"memstats":{"gc_next":4194304,"memory_alloc":1513720,"memory_total":575037344}},"filebeat":{"harvester":{"open_files":0,"running":0}},"libbeat":{"config":{"module":{"running":0}},"pipeline":{"clients":0,"events":{"active":0}}},"registrar":{"states":{"current":0}},"system":{"load":{"1":0.04,"15":0.05,"5":0.05,"norm":{"1":0.01,"15":0.0125,"5":0.0125}}}}}}
2018-12-06T04:11:55.317-0500	INFO	beater/filebeat.go:323	Stopping filebeat
2018-12-06T04:11:55.317-0500	INFO	crawler/crawler.go:109	Stopping Crawler
2018-12-06T04:11:55.317-0500	INFO	crawler/crawler.go:119	Stopping 0 prospectors
2018-12-06T04:11:55.317-0500	INFO	cfgfile/reload.go:222	Dynamic config reloader stopped
2018-12-06T04:11:55.317-0500	INFO	crawler/crawler.go:135	Crawler stopped
2018-12-06T04:11:55.317-0500	INFO	registrar/registrar.go:210	Stopping Registrar
2018-12-06T04:11:55.317-0500	INFO	registrar/registrar.go:165	Ending Registrar
2018-12-06T04:11:55.331-0500	INFO	instance/beat.go:308	filebeat stopped.
2018-12-06T04:11:55.331-0500	INFO	[monitoring]	log/log.go:132	Total non-zero metrics	{"monitoring": {"metrics": {"beat":{"cpu":{"system":{"ticks":1230,"time":1233},"total":{"ticks":5500,"time":5508,"value":5500},"user":{"ticks":4270,"time":4275}},"info":{"ephemeral_id":"a0c68062-989b-4f28-aaa7-287aa1788fbe","uptime":{"ms":91766157}},"memstats":{"gc_next":4194304,"memory_alloc":1705488,"memory_total":575229112,"rss":12468224}},"filebeat":{"harvester":{"open_files":0,"running":0}},"libbeat":{"config":{"module":{"running":0},"reloads":1},"output":{"type":"elasticsearch"},"pipeline":{"clients":0,"events":{"active":0}}},"registrar":{"states":{"current":0},"writes":2},"system":{"cpu":{"cores":4},"load":{"1":0.03,"15":0.05,"5":0.05,"norm":{"1":0.0075,"15":0.0125,"5":0.0125}}}}}}
2018-12-06T04:11:55.331-0500	INFO	[monitoring]	log/log.go:133	Uptime: 25h29m26.158117529s
2018-12-06T04:11:55.331-0500	INFO	[monitoring]	log/log.go:110	Stopping metrics logging.

I replaced it with the default YML file and tried to start filebeat again. It gives me the same error. :frowning:

Hi @Chandana
Please paste the default filebeat.yml file and /var/log/messages so that I can identify the issue.

Thanks,
Harsh

@harshbajaj16 Here is the default yml file.

    ###################### Filebeat Configuration Example #########################

    # This file is an example configuration file highlighting only the most common
    # options. The filebeat.reference.yml file from the same directory contains all the
    # supported options with more comments. You can use it as a reference.
    #
    # You can find the full configuration reference here:
    # https://www.elastic.co/guide/en/beats/filebeat/index.html

    # For more available modules and options, please see the filebeat.reference.yml sample
    # configuration file.

    #=========================== Filebeat prospectors =============================

    filebeat.prospectors:

    # Each - is a prospector. Most options can be set at the prospector level, so
    # you can use different prospectors for various configurations.
    # Below are the prospector specific configurations.

    - type: log

      # Change to true to enable this prospector configuration.
      enabled: false

      # Paths that should be crawled and fetched. Glob based paths.
      paths:
        - /var/log/*.log
        #- c:\programdata\elasticsearch\logs\*

      # Exclude lines. A list of regular expressions to match. It drops the lines that are
      # matching any regular expression from the list.
      #exclude_lines: ['^DBG']

      # Include lines. A list of regular expressions to match. It exports the lines that are
      # matching any regular expression from the list.
      #include_lines: ['^ERR', '^WARN']

      # Exclude files. A list of regular expressions to match. Filebeat drops the files that
      # are matching any regular expression from the list. By default, no files are dropped.
      #exclude_files: ['.gz$']

      # Optional additional fields. These fields can be freely picked
      # to add additional information to the crawled log files for filtering
      #fields:
      #  level: debug
      #  review: 1

      ### Multiline options

      # Mutiline can be used for log messages spanning multiple lines. This is common
      # for Java Stack Traces or C-Line Continuation

      # The regexp Pattern that has to be matched. The example pattern matches all lines starting with [
      #multiline.pattern: ^\[

      # Defines if the pattern set under pattern should be negated or not. Default is false.
      #multiline.negate: false

      # Match can be set to "after" or "before". It is used to define if lines should be append to a pattern
      # that was (not) matched before or after or as long as a pattern is not matched based on negate.
      # Note: After is the equivalent to previous and before is the equivalent to to next in Logstash
      #multiline.match: after


    #============================= Filebeat modules ===============================

    filebeat.config.modules:
      # Glob pattern for configuration loading
      path: ${path.config}/modules.d/*.yml

      # Set to true to enable config reloading
      reload.enabled: false

      # Period on which files under path should be checked for changes
      #reload.period: 10s

    #==================== Elasticsearch template setting ==========================

    setup.template.settings:
      index.number_of_shards: 3
      #index.codec: best_compression
      #_source.enabled: false

    #================================ General =====================================

    # The name of the shipper that publishes the network data. It can be used to group
    # all the transactions sent by a single shipper in the web interface.
    #name:

    # The tags of the shipper are included in their own field with each
    # transaction published.
    #tags: ["service-X", "web-tier"]

    # Optional fields that you can specify to add additional information to the
    # output.
    #fields:
    #  env: staging


    #============================== Dashboards =====================================
    # These settings control loading the sample dashboards to the Kibana index. Loading
    # the dashboards is disabled by default and can be enabled either by setting the
    # options here, or by using the `-setup` CLI flag or the `setup` command.
    #setup.dashboards.enabled: false

    # The URL from where to download the dashboards archive. By default this URL
    # has a value which is computed based on the Beat name and version. For released
    # versions, this URL points to the dashboard archive on the artifacts.elastic.co
    # website.
    #setup.dashboards.url:

    #============================== Kibana =====================================

    # Starting with Beats version 6.0.0, the dashboards are loaded via the Kibana API.
    # This requires a Kibana endpoint configuration.
    setup.kibana:

      # Kibana Host
      # Scheme and port can be left out and will be set to the default (http and 5601)
      # In case you specify and additional path, the scheme is required: http://localhost:5601/path
      # IPv6 addresses should always be defined as: https://[2001:db8::1]:5601
      #host: "localhost:5601"

    #============================= Elastic Cloud ==================================

    # These settings simplify using filebeat with the Elastic Cloud (https://cloud.elastic.co/).

    # The cloud.id setting overwrites the `output.elasticsearch.hosts` and
    # `setup.kibana.host` options.
    # You can find the `cloud.id` in the Elastic Cloud web UI.
    #cloud.id:

    # The cloud.auth setting overwrites the `output.elasticsearch.username` and
    # `output.elasticsearch.password` settings. The format is `<user>:<pass>`.
    #cloud.auth:

I was unable to paste it in a single post, hence split it into two.

    #================================ Outputs =====================================

    # Configure what output to use when sending the data collected by the beat.

    #-------------------------- Elasticsearch output ------------------------------
    output.elasticsearch:
      # Array of hosts to connect to.
      hosts: ["localhost:9200"]

      # Optional protocol and basic auth credentials.
      #protocol: "https"
      #username: "elastic"
      #password: "changeme"

    #----------------------------- Logstash output --------------------------------
    #output.logstash:
      # The Logstash hosts
      #hosts: ["localhost:5044"]

      # Optional SSL. By default is off.
      # List of root certificates for HTTPS server verifications
      #ssl.certificate_authorities: ["/etc/pki/root/ca.pem"]

      # Certificate for SSL client authentication
      #ssl.certificate: "/etc/pki/client/cert.pem"

      # Client Certificate Key
      #ssl.key: "/etc/pki/client/cert.key"

    #================================ Logging =====================================

    # Sets log level. The default log level is info.
    # Available log levels are: error, warning, info, debug
    #logging.level: debug

    # At debug level, you can selectively enable logging only for some components.
    # To enable all selectors use ["*"]. Examples of other selectors are "beat",
    # "publish", "service".
    #logging.selectors: ["*"]

    #============================== Xpack Monitoring ===============================
    # filebeat can export internal metrics to a central Elasticsearch monitoring
    # cluster.  This requires xpack monitoring to be enabled in Elasticsearch.  The
    # reporting is disabled by default.

    # Set to true to enable the monitoring reporter.
    #xpack.monitoring.enabled: false

    # Uncomment to send the metrics to Elasticsearch. Most settings from the
    # Elasticsearch output are accepted here as well. Any setting that is not set is
    # automatically inherited from the Elasticsearch output configuration, so if you
    # have the Elasticsearch output configured, you can simply uncomment the
    # following line.
    #xpack.monitoring.elasticsearch:

Hi @Chandana,

Please change the `enabled` parameter of the first prospector to `true` and restart the filebeat service.
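
For clarity, the parameter in question is presumably the `enabled: false` line under the first `- type: log` prospector in the config above; after editing, restart and check the service:

```shell
# Assumed fix: in /etc/filebeat/filebeat.yml, under the first "- type: log"
# prospector, change:
#   enabled: false
# to:
#   enabled: true
sudo systemctl restart filebeat
sudo systemctl status filebeat --no-pager
```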

Thanks,
Harsh Bajaj

@harshbajaj16
No Luck :frowning:

Here is the message I see in /var/log/messages:

Dec  6 04:58:11 aeaqfundxnode1 systemd: Started filebeat.

Dec  6 04:58:11 aeaqfundxnode1 systemd: Starting filebeat...

Dec  6 04:58:11 aeaqfundxnode1 filebeat: Exiting: error loading config file: config file ("/etc/filebeat/filebeat.yml") must be owned by the beat user (uid=0) or root

Dec  6 04:58:11 aeaqfundxnode1 systemd: filebeat.service: main process exited, code=exited, status=1/FAILURE

Dec  6 04:58:11 aeaqfundxnode1 systemd: Unit filebeat.service entered failed state.

Dec  6 04:58:11 aeaqfundxnode1 systemd: filebeat.service failed.

Dec  6 04:58:11 aeaqfundxnode1 systemd: filebeat.service holdoff time over, scheduling restart.

Dec  6 04:58:11 aeaqfundxnode1 systemd: Started filebeat.

Dec  6 04:58:11 aeaqfundxnode1 systemd: Starting filebeat...

Dec  6 04:58:11 aeaqfundxnode1 filebeat: Exiting: error loading config file: config file ("/etc/filebeat/filebeat.yml") must be owned by the beat user (uid=0) or root

Hi @Chandana

Please check the ownership and permissions of the filebeat.yml file, as shown in the logs above: it must be owned by root (or the user filebeat runs as).
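
Based on the `must be owned by the beat user (uid=0) or root` message, a likely fix (a sketch, assuming the stock RPM/DEB install paths) is to restore root ownership of the config file:

```shell
# Check who currently owns the config file
ls -l /etc/filebeat/filebeat.yml

# Restore root ownership; the systemd unit runs filebeat as root (uid=0)
sudo chown root:root /etc/filebeat/filebeat.yml

sudo systemctl restart filebeat
```

If I remember correctly, filebeat can also be started with `-strict.perms=false` to relax these checks, but fixing the ownership is the cleaner route.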

Regards,
Harsh Bajaj


This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.