ERROR instance/beat.go:1027 Exiting: 1 error: error loading config file: invalid config: yaml: line 85: did not find expected key
Exiting: 1 error: error loading config file: invalid config: yaml: line 85: did not find expected key

I have configured my filebeat.yml as follows:

###################### Filebeat Configuration Example #########################

# This file is an example configuration file highlighting only the most common
# options. The filebeat.reference.yml file from the same directory contains all the
# supported options with more comments. You can use it as a reference.
#
# You can find the full configuration reference here:
# https://www.elastic.co/guide/en/beats/filebeat/index.html

# For more available modules and options, please see the filebeat.reference.yml sample
# configuration file.

# ============================== Filebeat inputs ===============================

filebeat.inputs:

# Each - is an input. Most options can be set at the input level, so
# you can use different inputs for various configurations.
# Below are the input specific configurations.

# filestream is an input for collecting log messages from files.
- type: filestream

  # Unique ID among all inputs, an ID is required.
  id: my-filestream-id

  # Change to true to enable this input configuration.
  enabled: true

  # Paths that should be crawled and fetched. Glob based paths.
  paths:
    - /var/log/*.log
    #- c:\programdata\elasticsearch\logs\*

  # Exclude lines. A list of regular expressions to match. It drops the lines that are
  # matching any regular expression from the list.
  #exclude_lines: ['^DBG']

  # Include lines. A list of regular expressions to match. It exports the lines that are
  # matching any regular expression from the list.
  #include_lines: ['^ERR', '^WARN']

  # Exclude files. A list of regular expressions to match. Filebeat drops the files that
  # are matching any regular expression from the list. By default, no files are dropped.
  #prospector.scanner.exclude_files: ['.gz$']

  # Optional additional fields. These fields can be freely picked
  # to add additional information to the crawled log files for filtering
  #fields:
  #  level: debug
  #  review: 1

# ============================== Filebeat modules ==============================

filebeat.config.modules:
  # Glob pattern for configuration loading
  path: ${path.config}/modules.d/*.yml

  # Set to true to enable config reloading
  reload.enabled: false

  # Period on which files under path should be checked for changes
  #reload.period: 10s

# ======================= Elasticsearch template setting =======================

setup.template.settings:
  index.number_of_shards: 1
  #index.codec: best_compression
  #_source.enabled: false


# ================================== General ===================================

# The name of the shipper that publishes the network data. It can be used to group
# all the transactions sent by a single shipper in the web interface.
#name:

# The tags of the shipper are included in their own field with each
# transaction published.
#tags: ["service-X", "web-tier"]

# Optional fields that you can specify to add additional information to the output.

#fields:
# env: staging

# ================================= Dashboards =================================
# These settings control loading the sample dashboards to the Kibana index. Loading
# the dashboards is disabled by default and can be enabled either by setting the
# options here or by using the `setup` command.
setup.dashboards.enabled: true

# The URL from where to download the dashboards archive. By default this URL
# has a value which is computed based on the Beat name and version. For released
# versions, this URL points to the dashboard archive on the artifacts.elastic.co
# website.
#setup.dashboards.url:

# =================================== Kibana ===================================

# Starting with Beats version 6.0.0, the dashboards are loaded via the Kibana API.
# This requires a Kibana endpoint configuration.
setup.kibana:

  # Kibana Host
  # Scheme and port can be left out and will be set to the default (http and 5601)
  # In case you specify and additional path, the scheme is required: http://localhost:5601/path
  # IPv6 addresses should always be defined as: https://[2001:db8::1]:5601
  host: "192.168.2.18:5601"

  # Kibana Space ID
  # ID of the Kibana Space into which the dashboards should be loaded. By default,
  # the Default Space will be used.
  #space.id:

# =============================== Elastic Cloud ================================

# These settings simplify using Filebeat with the Elastic Cloud (https://cloud.elastic.co/).

# The cloud.id setting overwrites the `output.elasticsearch.hosts` and
# `setup.kibana.host` options.
# You can find the `cloud.id` in the Elastic Cloud web UI.
#cloud.id:

# The cloud.auth setting overwrites the `output.elasticsearch.username` and
# `output.elasticsearch.password` settings. The format is `<user>:<pass>`.
#cloud.auth:

# ================================== Outputs ===================================

# Configure what output to use when sending the data collected by the beat.

# ---------------------------- Elasticsearch Output ----------------------------
output.elasticsearch:
  # Array of hosts to connect to.
  hosts: ["192.168.2.18:9200"]

  # Protocol - either `http` (default) or `https`.
  #protocol: "https"

  # Authentication credentials - either API key or username/password.
  #api_key: "id:api_key"
  #username: "elastic"
  #password: "changeme"

# ------------------------------ Logstash Output -------------------------------
#output.logstash:
  # The Logstash hosts
  #hosts: ["localhost:5044"]

  # Optional SSL. By default is off.
  # List of root certificates for HTTPS server verifications
  #ssl.certificate_authorities: ["/etc/pki/root/ca.pem"]

  # Certificate for SSL client authentication
  #ssl.certificate: "/etc/pki/client/cert.pem"

  # Client Certificate Key
  #ssl.key: "/etc/pki/client/cert.key"

# ================================= Processors =================================
processors:
  - add_host_metadata:
      when.not.contains.tags: forwarded
  - add_cloud_metadata: ~
  - add_docker_metadata: ~
  - add_kubernetes_metadata: ~
# ================================== Logging ===================================

# Sets log level. The default log level is info.
# Available log levels are: error, warning, info, debug
#logging.level: debug

# At debug level, you can selectively enable logging only for some components.
# To enable all selectors use ["*"]. Examples of other selectors are "beat",
# "publisher", "service".
#logging.selectors: ["*"]

# ============================= X-Pack Monitoring ==============================
# Filebeat can export internal metrics to a central Elasticsearch monitoring
# cluster.  This requires xpack monitoring to be enabled in Elasticsearch.  The
# reporting is disabled by default.

# Set to true to enable the monitoring reporter.
#monitoring.enabled: false

# Sets the UUID of the Elasticsearch cluster under which monitoring data for this
# Filebeat instance will appear in the Stack Monitoring UI. If output.elasticsearch
# is enabled, the UUID is derived from the Elasticsearch cluster referenced by output.elasticsearch.
#monitoring.cluster_uuid:

# Uncomment to send the metrics to Elasticsearch. Most settings from the
# Elasticsearch output are accepted here as well.
# Note that the settings should point to your Elasticsearch *monitoring* cluster.
# Any setting that is not set is automatically inherited from the Elasticsearch
# output configuration, so if you have the Elasticsearch output configured such
# that it is pointing to your Elasticsearch monitoring cluster, you can simply
# uncomment the following line.
#monitoring.elasticsearch:

# ============================== Instrumentation ===============================

# Instrumentation support for the filebeat.
#instrumentation:
    # Set to true to enable instrumentation of filebeat.
    #enabled: false

    # Environment in which filebeat is running on (eg: staging, production, etc.)
    #environment: ""

    # APM Server hosts to report instrumentation results to.
    #hosts:
    #  - http://localhost:8200

    # API Key for the APM Server(s).
    # If api_key is set then secret_token will be ignored.
    #api_key:

    # Secret token for the APM Server(s).
    #secret_token:


# ================================= Migration ==================================

# This allows to enable 6.7 migration aliases
#migration.6_to_7.enabled: true

Unfortunately, when I try running the setup:

$ filebeat setup -e 


2023-06-04T15:24:50.447-0400    INFO    [index-management]      idxmgmt/std.go:296      Loaded index template.
2023-06-04T15:24:50.456-0400    INFO    [index-management.ilm]  ilm/std.go:126  Index Alias filebeat-7.17.10 exists already.
Index setup finished.
Loading dashboards (Kibana must be running and reachable)
2023-06-04T15:24:50.456-0400    INFO    kibana/client.go:180    Kibana url: http://192.168.2.18:5601
2023-06-04T15:24:50.604-0400    INFO    [add_cloud_metadata]    add_cloud_metadata/add_cloud_metadata.go:101    add_cloud_metadata: hosting provider type not detected.
2023-06-04T15:24:52.867-0400    INFO    kibana/client.go:180    Kibana url: http://192.168.2.18:5601                                        
2023-06-04T15:26:16.877-0400    INFO    instance/beat.go:881    Kibana dashboards successfully loaded.                                  
Loaded dashboards                                                                                                                           
2023-06-04T15:26:16.878-0400    WARN    [cfgwarn]       instance/beat.go:606    DEPRECATED: Setting up ML using Filebeat is going to be removed. Please use the ML app to setup jobs. Will be removed in version: 8.0.0
Setting up ML using setup --machine-learning is going to be removed in 8.0.0. Please use the ML app instead.                                
See more: https://www.elastic.co/guide/en/machine-learning/current/index.html    
It is not possble to load ML jobs into an Elasticsearch 8.0.0 or newer using the Beat.                                                      
2023-06-04T15:26:16.878-0400    INFO    [esclientleg]   eslegclient/connection.go:105   elasticsearch url: http://192.168.2.18:9200
2023-06-04T15:26:16.903-0400    INFO    [esclientleg]   eslegclient/connection.go:285   Attempting to connect to Elasticsearch version 7.17.0
2023-06-04T15:26:16.903-0400    INFO    kibana/client.go:180    Kibana url: http://192.168.2.18:5601                                        
2023-06-04T15:26:17.034-0400    WARN    fileset/modules.go:463  X-Pack Machine Learning is not enabled                                      
2023-06-04T15:26:17.079-0400    WARN    fileset/modules.go:463  X-Pack Machine Learning is not enabled                                      
2023-06-04T15:26:17.127-0400    WARN    fileset/modules.go:463  X-Pack Machine Learning is not enabled                                      
2023-06-04T15:26:17.128-0400    ERROR   instance/beat.go:1027   Exiting: 1 error: error loading config file: invalid config: yaml: line 85: did not find expected key
Exiting: 1 error: error loading config file: invalid config: yaml: line 85: did not find expected key

However, the dashboard shows that it loaded successfully, but there are still no logs on the dashboard:

 
┌──(root㉿kali)-[/var/log/filebeat]
└─# filebeat setup --dashboards
Loading dashboards (Kibana must be running and reachable)
Loaded dashboards


Similarly, Auditbeat works perfectly using the same Elasticsearch server host information:



2023-06-04T15:41:20.930-0400    INFO    [index-management]      idxmgmt/std.go:260      Auto ILM enable success.
2023-06-04T15:41:20.939-0400    INFO    [index-management.ilm]  ilm/std.go:170  ILM policy auditbeat exists already.
2023-06-04T15:41:20.941-0400    INFO    [index-management]      idxmgmt/std.go:396      Set setup.template.name to '{auditbeat-7.17.10 {now/d}-000001}' as ILM is enabled.
2023-06-04T15:41:20.944-0400    INFO    [index-management]      idxmgmt/std.go:401      Set setup.template.pattern to 'auditbeat-7.17.10-*' as ILM is enabled.
2023-06-04T15:41:20.947-0400    INFO    [index-management]      idxmgmt/std.go:435      Set settings.index.lifecycle.rollover_alias in template to {auditbeat-7.17.10 {now/d}-000001} as ILM is enabled.
2023-06-04T15:41:20.951-0400    INFO    [index-management]      idxmgmt/std.go:439      Set settings.index.lifecycle.name in template to {auditbeat {"policy":{"phases":{"hot":{"actions":{"rollover":{"max_age":"30d","max_size":"50gb"}}}}}}} as ILM is enabled.
2023-06-04T15:41:20.958-0400    INFO    template/load.go:197    Existing template will be overwritten, as overwrite is enabled.
2023-06-04T15:41:21.187-0400    INFO    template/load.go:131    Try loading template auditbeat-7.17.10 to Elasticsearch
2023-06-04T15:41:21.267-0400    INFO    template/load.go:123    Template with name "auditbeat-7.17.10" loaded.
2023-06-04T15:41:21.267-0400    INFO    [index-management]      idxmgmt/std.go:296      Loaded index template.
2023-06-04T15:41:21.297-0400    INFO    [index-management.ilm]  ilm/std.go:126  Index Alias auditbeat-7.17.10 exists already.
Index setup finished.
Loading dashboards (Kibana must be running and reachable)
2023-06-04T15:41:21.297-0400    INFO    kibana/client.go:180    Kibana url: http://192.168.2.18:5601
2023-06-04T15:41:21.706-0400    INFO    kibana/client.go:180    Kibana url: http://192.168.2.18:5601
2023-06-04T15:41:23.818-0400    INFO    [add_cloud_metadata]    add_cloud_metadata/add_cloud_metadata.go:101    add_cloud_metadata: hosting provider type not detected.
2023-06-04T15:41:33.983-0400    INFO    instance/beat.go:881    Kibana dashboards successfully loaded.
Loaded dashboards

Hi @yash2

That error indicates a syntax error in the filebeat.yml or perhaps one of the modules.d/*.yml files (or did you put a stray .yml file in that directory? It will get read).

However, I just tested the filebeat.yml you showed and it looks fine.

How did you install?

Did you enable any filebeat modules?

Are you sure you saved it :slight_smile:

I just tested it; that yml is fine... so perhaps there is another .yml file being read.
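
If it helps, Filebeat can sanity-check the main config itself; a minimal sketch, assuming the default deb layout (note it validates filebeat.yml, not necessarily every module yml):

$ sudo filebeat test config -c /etc/filebeat/filebeat.yml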

Filebeat expects Elasticsearch to be the same version as or newer than the Beat.
You can lift the version restriction by setting allow_older_versions to true (the default is allow_older_versions: false).

Do you have different versions of FB and ES? If you do, try setting allow_older_versions: true in filebeat.yml.
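
For example, it would sit under the Elasticsearch output section of filebeat.yml (a sketch only, reusing the host value from your config):

output.elasticsearch:
  hosts: ["192.168.2.18:9200"]
  # allow a Beat that is newer than the cluster to connect
  allow_older_versions: true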

Here is the list of modules under the directory /etc/filebeat/modules.d:

┌──(root㉿kali)-[/etc/filebeat/modules.d]

└─# ls  

activemq.yml.disabled          haproxy.yml.disabled          osquery.yml.disabled
apache.yml.disabled            ibmmq.yml.disabled            panw.yml.disabled
auditd.yml.disabled            icinga.yml.disabled           pensando.yml.disabled
awsfargate.yml.disabled        iis.yml.disabled              postgresql.yml.disabled
aws.yml.disabled               imperva.yml.disabled          proofpoint.yml.disabled
azure.yml.disabled             infoblox.yml.disabled         rabbitmq.yml.disabled
barracuda.yml.disabled         iptables.yml.disabled         radware.yml.disabled
bluecoat.yml.disabled          juniper.yml.disabled          redis.yml.disabled
cef.yml.disabled               kafka.yml.disabled            santa.yml.disabled
checkpoint.yml.disabled        kibana.yml.disabled           snort.yml.disabled
cisco.yml.disabled             logstash.yml.disabled         snyk.yml.disabled
coredns.yml.disabled           microsoft.yml.disabled        sonicwall.yml.disabled
crowdstrike.yml.disabled       misp.yml.disabled             sophos.yml.disabled
cyberarkpas.yml.disabled       mongodb.yml.disabled          squid.yml.disabled
cyberark.yml.disabled          mssql.yml.disabled            #  suricata.yml
cylance.yml.disabled           mysqlenterprise.yml.disabled  system.yml.disabled
elasticsearch.yml.disabled     mysql.yml.disabled            threatintel.yml.disabled
envoyproxy.yml.disabled        nats.yml.disabled             tomcat.yml.disabled
f5.yml.disabled                netflow.yml.disabled          traefik.yml.disabled
fortinet.yml.disabled          netscout.yml.disabled        #zeek.yml
gcp.yml.disabled             # nginx.yml                          
googlecloud.yml.disabled       o365.yml.disabled             zoom.yml.disabled
google_workspace.yml.disabled  okta.yml.disabled             zscaler.yml.disabled
gsuite.yml.disabled            oracle.yml.disabled
 



  $ sudo apt install -y filebeat

Reading package lists... Done                                                        
Building dependency tree... Done
Reading state information... Done                                                    
The following packages were automatically installed and are no longer required:
bluez-firmware debugedit dh-elpa-helper docutils-common figlet finger firebird3.0-common firebird3.0-common-doc firmware-ath9k-htc firmware-atheros firmware-brcm80211
firmware-intel-sound firmware-iwlwifi firmware-libertas firmware-realtek firmware-sof-signed firmware-ti-connectivity firmware-zd1211 




┌──(root㉿kali)-[/home/kali]           
                                                    
└─# dpkg  -s filebeat                                                                      
Package: filebeat                                                                          
Status: install ok installed                                                               
Priority: extra                                                                            
Section: default                                                                           
Installed-Size: 129158                                                                     
Maintainer: <@2a31438e1743>                                                                
Architecture: amd64                          
Version: 7.17.10                             
Conffiles:                                   
 /etc/filebeat/fields.yml 38e878e82081430520b99597


Here is my zeek.yml file:

# Module: zeek
# Docs: https://www.elastic.co/guide/en/beats/filebeat/7.17/filebeat-module-zeek.html

- module: zeek
  capture_loss:
    enabled: true
  connection:
    enabled: true
  dce_rpc:
    enabled: true
  dhcp:
    enabled: true
  dnp3:
    enabled: true
  dns:
    enabled: true
  dpd:
    enabled: true
  files:
    enabled: true
  ftp:
    enabled: true
  http:
    enabled: true
  intel:
    enabled: true
  irc:
    enabled: true
  kerberos:
    enabled: true
  modbus:
    enabled: true
  mysql:
    enabled: true
  notice:
    enabled: true
  ntp:
    enabled: true
  ntlm:
    enabled: true
  ocsp:
    enabled: true
  pe:
    enabled: true
  radius:
    enabled: true
  rdp:
    enabled: true
  rfb:
    enabled: true
  signature:
    enabled: true
  sip:
    enabled: true
  smb_cmd:
    enabled: true
  smb_files:
    enabled: true
  smb_mapping:
    enabled: true
  smtp:
    enabled: true
  snmp:
    enabled: true
  socks:
    enabled: true
  ssh:
    enabled: true
  ssl:
    enabled: true
  stats:
    enabled: true
  syslog:
    enabled: true
  traceroute:
    enabled: true
  tunnel:
    enabled: true
  weird:
    enabled: true
  x509:
    enabled: true

    # Set custom paths for the log files. If left empty,
    # Filebeat will choose the paths depending on your OS.
      var.paths: [/var/log/zeek/logs/current/*.log]

Here is my suricata.yml file:
# Module: suricata
# Docs: https://www.elastic.co/guide/en/beats/filebeat/7.17/filebeat-module-suricata.html

- module: suricata
  # All logs
  eve:
    enabled: true

    # Set custom paths for the log files. If left empty,
    # Filebeat will choose the paths depending on your OS.
    #var.paths:

Here is the result after disabling the zeek.yml file:


┌──(root㉿kali)-[/etc/filebeat/modules.d]                                                                                                                                              

└─# sudo filebeat modules disable nginx                                                                                                                                                

Disabled nginx                   




Index setup finished.                                                                                                                                                                  
Loading dashboards (Kibana must be running and reachable)                                                                                                                              
2023-06-05T14:13:20.601-0400    INFO    kibana/client.go:180    Kibana url: http://192.168.2.18:5601                                                                                   
2023-06-05T14:13:23.486-0400    INFO    kibana/client.go:180    Kibana url: http://192.168.2.18:5601                                                                                   
2023-06-05T14:14:45.408-0400    INFO    instance/beat.go:881    Kibana dashboards successfully loaded.                                                                                 
Loaded dashboards                                                                                                                                                                      
2023-06-05T14:14:45.408-0400    WARN    [cfgwarn]       instance/beat.go:606    DEPRECATED: Setting up ML using Filebeat is going to be removed. Please use the ML app to setup jobs. Will be removed in version: 8.0.0
Setting up ML using setup --machine-learning is going to be removed in 8.0.0. Please use the ML app instead.                                                                           
See more: https://www.elastic.co/guide/en/machine-learning/current/index.html                                                                                                          
It is not possble to load ML jobs into an Elasticsearch 8.0.0 or newer using the Beat.                                                                                                 
2023-06-05T14:14:45.408-0400    INFO    [esclientleg]   eslegclient/connection.go:105   elasticsearch url: http://192.168.2.18:9200                                                    
2023-06-05T14:14:45.446-0400    INFO    [esclientleg]   eslegclient/connection.go:285   Attempting to connect to Elasticsearch version 7.17.0                                          
2023-06-05T14:14:45.446-0400    INFO    kibana/client.go:180    Kibana url: http://192.168.2.18:5601                                                                                   
2023-06-05T14:14:45.607-0400    WARN    fileset/modules.go:463  X-Pack Machine Learning is not enabled
2023-06-05T14:14:45.636-0400    WARN    fileset/modules.go:463  X-Pack Machine Learning is not enabled                                                         
2023-06-05T14:14:45.636-0400    ERROR   instance/beat.go:1027   Exiting: 1 error: error loading config file: invalid config: yaml: line 85: did not find expected key                  
Exiting: 1 error: error loading config file: invalid config: yaml: line 85: did not find expected key

Not sure exactly what that nomenclature means, but there is perhaps bad syntax in one of those... as they get concatenated...

Basically @yash2, you have a syntax error in your .yml. Open them in a .yml editor; it should help.

OR disable all those modules and then see if you have the same error.
Then enable them one at a time until you find the bad .yml.
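
Something along these lines should do it (a sketch, assuming the deb install under /etc/filebeat):

# disable every currently enabled module by moving it out of the *.yml glob
cd /etc/filebeat/modules.d
for f in *.yml; do sudo mv "$f" "$f.disabled"; done

# re-run setup, then re-enable modules one at a time, e.g.:
sudo filebeat setup -e
sudo filebeat modules enable suricata
sudo filebeat setup -e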

Disabled all the mentioned modules and tried again; still the same error:



Loading dashboards (Kibana must be running and reachable)                                                                                                                              
2023-06-05T14:28:58.076-0400    INFO    kibana/client.go:180    Kibana url: http://192.168.2.18:5601                                                                                   
2023-06-05T14:28:58.865-0400    INFO    [add_cloud_metadata]    add_cloud_metadata/add_cloud_metadata.go:101    add_cloud_metadata: hosting provider type not detected.                
2023-06-05T14:29:00.643-0400    INFO    kibana/client.go:180    Kibana url: http://192.168.2.18:5601                                                                                   
2023-06-05T14:30:20.838-0400    INFO    instance/beat.go:881    Kibana dashboards successfully loaded.                                                                                 
Loaded dashboards                                                                                                                                                                      
2023-06-05T14:30:20.838-0400    WARN    [cfgwarn]       instance/beat.go:606    DEPRECATED: Setting up ML using Filebeat is going to be removed. Please use the ML app to setup jobs. Will be removed in version: 8.0.0
Setting up ML using setup --machine-learning is going to be removed in 8.0.0. Please use the ML app instead.                                                                           
See more: https://www.elastic.co/guide/en/machine-learning/current/index.html                                                                                                          
It is not possble to load ML jobs into an Elasticsearch 8.0.0 or newer using the Beat.                                                                                                 
2023-06-05T14:30:20.838-0400    INFO    [esclientleg]   eslegclient/connection.go:105   elasticsearch url: http://192.168.2.18:9200                                                    
2023-06-05T14:30:20.870-0400    INFO    [esclientleg]   eslegclient/connection.go:285   Attempting to connect to Elasticsearch version 7.17.0                                          
2023-06-05T14:30:20.870-0400    INFO    kibana/client.go:180    Kibana url: http://192.168.2.18:5601                                                                                   
2023-06-05T14:30:20.959-0400    WARN    fileset/modules.go:463  X-Pack Machine Learning is not enabled                                                                                 
2023-06-05T14:30:20.960-0400    ERROR   instance/beat.go:1027   Exiting: 1 error: error loading config file: invalid config: yaml: line 85: did not find expected key                  
Exiting: 1 error: error loading config file: invalid config: yaml: line 85: did not find expected key

As a last resort, I feel that I am going to switch to the Filebeat Docker image.

@yash2 ... you have a syntax error... it has nothing to do with Docker or not....

Exactly what version of filebeat?

Here is the filebeat.yml I ran against filebeat-7.17.10. I just changed your IP addresses, and it ran fine:

###################### Filebeat Configuration Example #########################

# This file is an example configuration file highlighting only the most common
# options. The filebeat.reference.yml file from the same directory contains all the
# supported options with more comments. You can use it as a reference.
#
# You can find the full configuration reference here:
# https://www.elastic.co/guide/en/beats/filebeat/index.html

# For more available modules and options, please see the filebeat.reference.yml sample
# configuration file.

# ============================== Filebeat inputs ===============================

filebeat.inputs:

# Each - is an input. Most options can be set at the input level, so
# you can use different inputs for various configurations.
# Below are the input specific configurations.

# filestream is an input for collecting log messages from files.
- type: filestream

  # Unique ID among all inputs, an ID is required.
  id: my-filestream-id

  # Change to true to enable this input configuration.
  enabled: true

  # Paths that should be crawled and fetched. Glob based paths.
  paths:
    - /var/log/*.log
    #- c:\programdata\elasticsearch\logs\*

  # Exclude lines. A list of regular expressions to match. It drops the lines that are
  # matching any regular expression from the list.
  #exclude_lines: ['^DBG']

  # Include lines. A list of regular expressions to match. It exports the lines that are
  # matching any regular expression from the list.
  #include_lines: ['^ERR', '^WARN']

  # Exclude files. A list of regular expressions to match. Filebeat drops the files that
  # are matching any regular expression from the list. By default, no files are dropped.
  #prospector.scanner.exclude_files: ['.gz$']

  # Optional additional fields. These fields can be freely picked
  # to add additional information to the crawled log files for filtering
  #fields:
  #  level: debug
  #  review: 1

# ============================== Filebeat modules ==============================

filebeat.config.modules:
  # Glob pattern for configuration loading
  path: ${path.config}/modules.d/*.yml

  # Set to true to enable config reloading
  reload.enabled: false

  # Period on which files under path should be checked for changes
  #reload.period: 10s

# ======================= Elasticsearch template setting =======================

setup.template.settings:
  index.number_of_shards: 1
  #index.codec: best_compression
  #_source.enabled: false


# ================================== General ===================================

# The name of the shipper that publishes the network data. It can be used to group
# all the transactions sent by a single shipper in the web interface.
#name:

# The tags of the shipper are included in their own field with each
# transaction published.
#tags: ["service-X", "web-tier"]

# Optional fields that you can specify to add additional information to the output.

#fields:
# env: staging

# ================================= Dashboards =================================
# These settings control loading the sample dashboards to the Kibana index. Loading
# the dashboards is disabled by default and can be enabled either by setting the
# options here or by using the `setup` command.
setup.dashboards.enabled: true

# The URL from where to download the dashboards archive. By default this URL
# has a value which is computed based on the Beat name and version. For released
# versions, this URL points to the dashboard archive on the artifacts.elastic.co
# website.
#setup.dashboards.url:

# =================================== Kibana ===================================

# Starting with Beats version 6.0.0, the dashboards are loaded via the Kibana API.
# This requires a Kibana endpoint configuration.
setup.kibana:

  # Kibana Host
  # Scheme and port can be left out and will be set to the default (http and 5601)
  # In case you specify and additional path, the scheme is required: http://localhost:5601/path
  # IPv6 addresses should always be defined as: https://[2001:db8::1]:5601
  host: "localhost:5601"

  # Kibana Space ID
  # ID of the Kibana Space into which the dashboards should be loaded. By default,
  # the Default Space will be used.
  #space.id:

# =============================== Elastic Cloud ================================

# These settings simplify using Filebeat with the Elastic Cloud (https://cloud.elastic.co/).

# The cloud.id setting overwrites the `output.elasticsearch.hosts` and
# `setup.kibana.host` options.
# You can find the `cloud.id` in the Elastic Cloud web UI.
#cloud.id:

# The cloud.auth setting overwrites the `output.elasticsearch.username` and
# `output.elasticsearch.password` settings. The format is `<user>:<pass>`.
#cloud.auth:

# ================================== Outputs ===================================

# Configure what output to use when sending the data collected by the beat.

# ---------------------------- Elasticsearch Output ----------------------------
output.elasticsearch:
  # Array of hosts to connect to.
  hosts: ["localhost:9200"]

  # Protocol - either `http` (default) or `https`.
  #protocol: "https"

  # Authentication credentials - either API key or username/password.
  #api_key: "id:api_key"
  #username: "elastic"
  #password: "changeme"

# ------------------------------ Logstash Output -------------------------------
#output.logstash:
  # The Logstash hosts
  #hosts: ["localhost:5044"]

  # Optional SSL. By default is off.
  # List of root certificates for HTTPS server verifications
  #ssl.certificate_authorities: ["/etc/pki/root/ca.pem"]

  # Certificate for SSL client authentication
  #ssl.certificate: "/etc/pki/client/cert.pem"

  # Client Certificate Key
  #ssl.key: "/etc/pki/client/cert.key"

# ================================= Processors =================================
processors:
  - add_host_metadata:
      when.not.contains.tags: forwarded
  - add_cloud_metadata: ~
  - add_docker_metadata: ~
  - add_kubernetes_metadata: ~
# ================================== Logging ===================================

# Sets log level. The default log level is info.
# Available log levels are: error, warning, info, debug
#logging.level: debug

# At debug level, you can selectively enable logging only for some components.
# To enable all selectors use ["*"]. Examples of other selectors are "beat",
# "publisher", "service".
#logging.selectors: ["*"]

# ============================= X-Pack Monitoring ==============================
# Filebeat can export internal metrics to a central Elasticsearch monitoring
# cluster.  This requires xpack monitoring to be enabled in Elasticsearch.  The
# reporting is disabled by default.

# Set to true to enable the monitoring reporter.
#monitoring.enabled: false

# Sets the UUID of the Elasticsearch cluster under which monitoring data for this
# Filebeat instance will appear in the Stack Monitoring UI. If output.elasticsearch
# is enabled, the UUID is derived from the Elasticsearch cluster referenced by output.elasticsearch.
#monitoring.cluster_uuid:

# Uncomment to send the metrics to Elasticsearch. Most settings from the
# Elasticsearch output are accepted here as well.
# Note that the settings should point to your Elasticsearch *monitoring* cluster.
# Any setting that is not set is automatically inherited from the Elasticsearch
# output configuration, so if you have the Elasticsearch output configured such
# that it is pointing to your Elasticsearch monitoring cluster, you can simply
# uncomment the following line.
#monitoring.elasticsearch:

# ============================== Instrumentation ===============================

# Instrumentation support for the filebeat.
#instrumentation:
    # Set to true to enable instrumentation of filebeat.
    #enabled: false

    # Environment in which filebeat is running on (eg: staging, production, etc.)
    #environment: ""

    # APM Server hosts to report instrumentation results to.
    #hosts:
    #  - http://localhost:8200

    # API Key for the APM Server(s).
    # If api_key is set then secret_token will be ignored.
    #api_key:

    # Secret token for the APM Server(s).
    #secret_token:


# ================================= Migration ==================================

# This allows to enable 6.7 migration aliases
#migration.6_to_7.enabled: true

These are the results with a fresh install. Did you edit some other files?

Can you try a fresh install?

hyperion:filebeat-7.17.10-darwin-x86_64 sbrown$ ./filebeat setup -e filebeat-discuss.yml 
2023-06-05T12:43:23.252-0700    INFO    instance/beat.go:698    Home path: [/Users/sbrown/workspace/elastic-install/tmp/filebeat-7.17.10-darwin-x86_64] Config path: [/Users/sbrown/workspace/elastic-install/tmp/filebeat-7.17.10-darwin-x86_64] Data path: [/Users/sbrown/workspace/elastic-install/tmp/filebeat-7.17.10-darwin-x86_64/data] Logs path: [/Users/sbrown/workspace/elastic-install/tmp/filebeat-7.17.10-darwin-x86_64/logs] Hostfs Path: [/]
2023-06-05T12:43:23.292-0700    INFO    instance/beat.go:706    Beat ID: 8cd86d58-e132-4000-9870-1b338421d927
2023-06-05T12:43:26.297-0700    WARN    [add_cloud_metadata]    add_cloud_metadata/provider_aws_ec2.go:79       read token request for getting IMDSv2 token returns empty: Put "http://169.254.169.254/latest/api/token": context deadline exceeded (Client.Timeout exceeded while awaiting headers). No token in the metadata request will be used.
2023-06-05T12:43:26.327-0700    INFO    [beat]  instance/beat.go:1052   Beat info       {"system_info": {"beat": {"path": {"config": "/Users/sbrown/workspace/elastic-install/tmp/filebeat-7.17.10-darwin-x86_64", "data": "/Users/sbrown/workspace/elastic-install/tmp/filebeat-7.17.10-darwin-x86_64/data", "home": "/Users/sbrown/workspace/elastic-install/tmp/filebeat-7.17.10-darwin-x86_64", "logs": "/Users/sbrown/workspace/elastic-install/tmp/filebeat-7.17.10-darwin-x86_64/logs"}, "type": "filebeat", "uuid": "8cd86d58-e132-4000-9870-1b338421d927"}}}
2023-06-05T12:43:26.327-0700    INFO    [beat]  instance/beat.go:1061   Build info      {"system_info": {"build": {"commit": "78a342312954e587301b653093954ff7ee4d4f2b", "libbeat": "7.17.10", "time": "2023-04-23T09:00:42.000Z", "version": "7.17.10"}}}
2023-06-05T12:43:26.327-0700    INFO    [beat]  instance/beat.go:1064   Go runtime info {"system_info": {"go": {"os":"darwin","arch":"amd64","max_procs":16,"version":"go1.19.7"}}}
2023-06-05T12:43:26.328-0700    INFO    [beat]  instance/beat.go:1070   Host info       {"system_info": {"host": {"architecture":"x86_64","boot_time":"2023-05-22T07:31:40.769773-07:00","name":"hyperion","ip":["127.0.0.1","::1","fe80::1","fe80::aede:48ff:fe00:1122","fe80::845:5c5f:d2ee:5f2c","192.168.2.108","fe80::3040:a6ff:febe:7182","fe80::3040:a6ff:febe:7182","fe80::45c2:7c31:4a79:a1ba","fe80::10af:f79a:93ef:7d16","fe80::ce81:b1c:bd2c:69e"],"kernel_version":"22.5.0","mac":["ac:de:48:00:11:22","82:b2:58:49:30:04","82:b2:58:49:30:05","82:b2:58:49:30:01","82:b2:58:49:30:00","a0:ce:c8:51:95:38","82:b2:58:49:30:01","7e:52:30:9c:ef:e0","5c:52:30:9c:ef:e0","32:40:a6:be:71:82","32:40:a6:be:71:82"],"os":{"type":"macos","family":"darwin","platform":"darwin","name":"macOS","version":"13.4","major":13,"minor":4,"patch":0,"build":"22F66"},"timezone":"PDT","timezone_offset_sec":-25200,"id":"9E46F076-B7F1-53AA-921B-C2F983746B79"}}}
2023-06-05T12:43:26.328-0700    INFO    [beat]  instance/beat.go:1099   Process info    {"system_info": {"process": {"cwd": "/Users/sbrown/workspace/elastic-install/tmp/filebeat-7.17.10-darwin-x86_64", "exe": "./filebeat", "name": "filebeat", "pid": 96334, "ppid": 86983, "start_time": "2023-06-05T12:43:21.031-0700"}}}
2023-06-05T12:43:26.328-0700    INFO    instance/beat.go:292    Setup Beat: filebeat; Version: 7.17.10
2023-06-05T12:43:26.328-0700    INFO    [index-management]      idxmgmt/std.go:184      Set output.elasticsearch.index to 'filebeat-7.17.10' as ILM is enabled.
2023-06-05T12:43:26.329-0700    INFO    [esclientleg]   eslegclient/connection.go:105   elasticsearch url: http://localhost:9200
2023-06-05T12:43:26.330-0700    INFO    [publisher]     pipeline/module.go:113  Beat name: hyperion
2023-06-05T12:43:26.332-0700    INFO    [esclientleg]   eslegclient/connection.go:105   elasticsearch url: http://localhost:9200
2023-06-05T12:43:26.337-0700    INFO    [esclientleg]   eslegclient/connection.go:285   Attempting to connect to Elasticsearch version 8.8.0
Overwriting ILM policy is disabled. Set `setup.ilm.overwrite: true` for enabling.

2023-06-05T12:43:26.339-0700    INFO    [index-management]      idxmgmt/std.go:260      Auto ILM enable success.
2023-06-05T12:43:26.341-0700    INFO    [index-management.ilm]  ilm/std.go:170  ILM policy filebeat exists already.
2023-06-05T12:43:26.341-0700    INFO    [index-management]      idxmgmt/std.go:396      Set setup.template.name to '{filebeat-7.17.10 {now/d}-000001}' as ILM is enabled.
2023-06-05T12:43:26.341-0700    INFO    [index-management]      idxmgmt/std.go:401      Set setup.template.pattern to 'filebeat-7.17.10-*' as ILM is enabled.
2023-06-05T12:43:26.341-0700    INFO    [index-management]      idxmgmt/std.go:435      Set settings.index.lifecycle.rollover_alias in template to {filebeat-7.17.10 {now/d}-000001} as ILM is enabled.
2023-06-05T12:43:26.341-0700    INFO    [index-management]      idxmgmt/std.go:439      Set settings.index.lifecycle.name in template to {filebeat {"policy":{"phases":{"hot":{"actions":{"rollover":{"max_age":"30d","max_size":"50gb"}}}}}}} as ILM is enabled.
2023-06-05T12:43:26.343-0700    INFO    template/load.go:197    Existing template will be overwritten, as overwrite is enabled.
2023-06-05T12:43:27.232-0700    INFO    template/load.go:131    Try loading template filebeat-7.17.10 to Elasticsearch
2023-06-05T12:43:27.398-0700    INFO    template/load.go:123    Template with name "filebeat-7.17.10" loaded.
2023-06-05T12:43:27.398-0700    INFO    [index-management]      idxmgmt/std.go:296      Loaded index template.
2023-06-05T12:43:27.596-0700    INFO    [index-management.ilm]  ilm/std.go:140  Index Alias filebeat-7.17.10 successfully created.
Index setup finished.
Loading dashboards (Kibana must be running and reachable)
2023-06-05T12:43:27.597-0700    INFO    kibana/client.go:180    Kibana url: http://localhost:5601
2023-06-05T12:43:28.715-0700    INFO    kibana/client.go:180    Kibana url: http://localhost:5601
2023-06-05T12:43:29.298-0700    INFO    [add_cloud_metadata]    add_cloud_metadata/add_cloud_metadata.go:101    add_cloud_metadata: hosting provider type not detected.
2023-06-05T12:44:46.369-0700    INFO    instance/beat.go:881    Kibana dashboards successfully loaded.
Loaded dashboards
2023-06-05T12:44:46.369-0700    WARN    [cfgwarn]       instance/beat.go:606    DEPRECATED: Setting up ML using Filebeat is going to be removed. Please use the ML app to setup jobs. Will be removed in version: 8.0.0
Setting up ML using setup --machine-learning is going to be removed in 8.0.0. Please use the ML app instead.
See more: https://www.elastic.co/guide/en/machine-learning/current/index.html
It is not possble to load ML jobs into an Elasticsearch 8.0.0 or newer using the Beat.
2023-06-05T12:44:46.369-0700    INFO    [esclientleg]   eslegclient/connection.go:105   elasticsearch url: http://localhost:9200
2023-06-05T12:44:46.372-0700    INFO    [esclientleg]   eslegclient/connection.go:285   Attempting to connect to Elasticsearch version 8.8.0
2023-06-05T12:44:46.372-0700    INFO    kibana/client.go:180    Kibana url: http://localhost:5601
2023-06-05T12:44:46.378-0700    INFO    fileset/modules.go:454  Skipping loading machine learning jobs because of Elasticsearch version is too new.
It must be 7.x for setting up it using Beats. Please use the Machine Learning UI in Kibana.
Loaded machine learning job configurations
2023-06-05T12:44:46.380-0700    INFO    [esclientleg]   eslegclient/connection.go:105   elasticsearch url: http://localhost:9200
2023-06-05T12:44:46.383-0700    INFO    [esclientleg]   eslegclient/connection.go:285   Attempting to connect to Elasticsearch version 8.8.0
2023-06-05T12:44:46.383-0700    INFO    cfgfile/reload.go:262   Loading of config files completed.
Loaded Ingest pipelines
hyperion:filebeat-7.17.10-darwin-x86_64 sbrown$ 

@stephenb can you check the FB line? Maybe the config param allow_older_versions: true is missing in filebeat.yml?

ERROR instance/beat.go:1027 Exiting: 1 error

Also, there is a topic where a wrong module param caused an issue. However, all modules are disabled in yash's case.

Hi @Rios

The link to the code is the wrong version; here is the correct version, 7.17.0, which is the generic error handler, not the allow_older_versions check (you did not select the correct label of the code).

There is still simply a bad .yml somewhere; the most likely places are listed below (a find command to enumerate them follows the list):

filebeat.yml
*.yml
modules.d/*.yml
module/**/*.yml
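
To enumerate everything that could be picked up, something like this should work (paths assumed for the deb install):

$ sudo find /etc/filebeat /usr/share/filebeat/module -name '*.yml' | sort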

OR some other ymls, like those under the module directory, were touched / changed. I have seen that before; it can specifically cause filebeat setup -e to fail.

Uninstalling and reinstalling with a package manager may not clean up these files, as they are considered configuration files.
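
For example, on a Debian-based install, purge (rather than remove) drops the conffiles as well, and any leftover directories still need cleaning by hand (paths assumed):

$ sudo apt purge filebeat
$ sudo rm -rf /etc/filebeat /var/lib/filebeat /usr/share/filebeat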

When @yash2 wants to provide more detail and tests, perhaps we can help.

Exact Version of Elasticsearch

Confirm they have tried the version of the filebeat.yml I have provided.

Completely clean / remove all of Filebeat, then reinstall, etc. Go in completely clean and manually clean all the directories, etc.

Also, they can try running filebeat -e without the setup; that will help isolate things. If that runs and setup does not, that will tell us something.
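
i.e. something like this (config path assumed for the deb install):

$ sudo filebeat -e -c /etc/filebeat/filebeat.yml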

Try a completely separate tar.gz install to test.

There is something basic going on ... which can sometimes be hard to find.

We can take a look... with more data...

There is bad yml somewhere...


Good point. Another possibility is to set the debug log level.

Also, FB can simply be run from the tar.gz https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-7.17.10-linux-x86_64.tar.gz as a process.
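
For instance, a one-off run with all debug selectors enabled, or the equivalent setting in filebeat.yml (a sketch):

$ sudo filebeat setup -e -d "*"

# or, in filebeat.yml:
logging.level: debug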

Hey guys, this is what worked while setting this up:

┌──(root㉿kali)-[/home/kali]

└─# service filebeat stop        



# Removing all the "filebeat" components:

┌──(root㉿kali)-[/home/kali]


└─# sudo rm -rf /usr/share/filebeat
                                       


# Removing all the filebeat configuration:

    $ sudo rm -rf /etc/filebeat




# The best option available to us would be to uninstall the existing package and use the link below to load in the new filebeat .tar.gz package for installation;


- Let's proceed with unpacking this filebeat .tar.gz file:



┌──(root㉿kali)-[/opt]

└─# tar -xzf filebeat-7.17.10-linux-x86_64.tar.gz




# Move into the extracted directory:


┌──(root㉿kali)-[/opt]

└─# cd  filebeat-7.17.10-linux-x86_64 
                                                


# We'll now move the extracted filebeat files to their appropriate location:


┌──(root㉿kali)-[/opt/filebeat-7.17.10-linux-x86_64]                  

└─# ls           

fields.yml  filebeat  filebeat.reference.yml  filebeat.yml  kibana  LICENSE.txt  module  modules.d  NOTICE.txt  README.md        



# Copying the extracted Filebeat files:


┌──(root㉿kali)-[/opt/filebeat-7.17.10-linux-x86_64]

└─# sudo cp -R * /usr/share/filebeat        




# Let's "mkdir" a directory named "/etc/filebeat", and create a "filebeat.yml file" within this path : 



  $ mkdir /etc/filebeat



# Create a "file" named filebeat.yml (Simply "copy and paste" the same "configuration filebeat.yml" from previous installation) : 


  $ gedit filebeat.yml



# We're now good to run filebeat:



  $ service filebeat start



  # Then let's run the filebeat setup:


- Let's head into the filebeat directory:

    $ cd  /usr/share/filebeat 


Running the filebeat setup:


    $ ./filebeat setup -e 

    2023-06-07T13:49:29.192-0400    INFO    template/load.go:123    Template with name "filebeat-7.17.10" loaded.
2023-06-07T13:49:29.192-0400    INFO    [index-management]      idxmgmt/std.go:296      Loaded index template.
2023-06-07T13:49:29.198-0400    INFO    [index-management.ilm]  ilm/std.go:126  Index Alias filebeat-7.17.10 exists already.
Index setup finished.
Loading dashboards (Kibana must be running and reachable)
2023-06-07T13:49:29.198-0400    INFO    kibana/client.go:180    Kibana url: http://192.168.2.18:5601
2023-06-07T13:49:30.087-0400    INFO    [add_cloud_metadata]    add_cloud_metadata/add_cloud_metadata.go:101    add_cloud_metadata: hosting provider type not detected.
2023-06-07T13:49:31.871-0400    INFO    kibana/client.go:180    Kibana url: http://192.168.2.18:5601
2023-06-07T13:50:50.796-0400    INFO    instance/beat.go:881    Kibana dashboards successfully loaded.
Loaded dashboards
2023-06-07T13:50:50.796-0400    WARN    [cfgwarn]       instance/beat.go:606    DEPRECATED: Setting up ML using Filebeat is going to be removed. Please use the ML app to setup jobs. Will be removed in version: 8.0.0
Setting up ML using setup --machine-learning is going to be removed in 8.0.0. Please use the ML app instead.
See more: https://www.elastic.co/guide/en/machine-learning/current/index.html
It is not possble to load ML jobs into an Elasticsearch 8.0.0 or newer using the Beat.
2023-06-07T13:50:50.796-0400    INFO    [esclientleg]   eslegclient/connection.go:105   elasticsearch url: http://192.168.2.18:9200
2023-06-07T13:50:50.800-0400    INFO    [esclientleg]   eslegclient/connection.go:285   Attempting to connect to Elasticsearch version 7.17.0
2023-06-07T13:50:50.801-0400    INFO    kibana/client.go:180    Kibana url: http://192.168.2.18:5601
2023-06-07T13:50:50.850-0400    WARN    fileset/modules.go:463  X-Pack Machine Learning is not enabled
Loaded machine learning job configurations
2023-06-07T13:50:50.851-0400    INFO    [esclientleg]   eslegclient/connection.go:105   elasticsearch url: http://192.168.2.18:9200
2023-06-07T13:50:50.857-0400    INFO    [esclientleg]   eslegclient/connection.go:285   Attempting to connect to Elasticsearch version 7.17.0
2023-06-07T13:50:50.858-0400    INFO    cfgfile/reload.go:262   Loading of config files completed.

Unfortunately, no logs are being pulled up yet in Discover...

Here is the error message, however, after going through the logs:

$ journalctl --unit filebeat.service 



Jun 07 14:02:11 kali (filebeat)[73097]: filebeat.service: Failed at step EXEC spawning /usr/share/filebeat/bin/filebeat: No such file or directory
Jun 07 14:02:11 kali systemd[1]: filebeat.service: Main process exited, code=exited, status=203/EXEC
Jun 07 14:02:11 kali systemd[1]: filebeat.service: Failed with result 'exit-code'.
Jun 07 14:02:11 kali systemd[1]: filebeat.service: Scheduled restart job, restart counter is at 3.
Jun 07 14:02:11 kali systemd[1]: Stopped filebeat.service - Filebeat sends log files to Logstash or directly to Elasticsearch..
Jun 07 14:02:12 kali (filebeat)[73107]: filebeat.service: Failed to locate executable /usr/share/filebeat/bin/filebeat: No such file or directory
Jun 07 14:02:12 kali systemd[1]: Started filebeat.service - Filebeat sends log files to Logstash or directly to Elasticsearch..
Jun 07 14:02:12 kali (filebeat)[73107]: filebeat.service: Failed at step EXEC spawning /usr/share/filebeat/bin/filebeat: No such file or directory
Jun 07 14:02:12 kali systemd[1]: filebeat.service: Main process exited, code=exited, status=203/EXEC
Jun 07 14:02:12 kali systemd[1]: filebeat.service: Failed with result 'exit-code'.
Jun 07 14:02:12 kali systemd[1]: filebeat.service: Scheduled restart job, restart counter is at 4.
Jun 07 14:02:12 kali systemd[1]: Stopped filebeat.service - Filebeat sends log files to Logstash or directly to Elasticsearch..
Jun 07 14:02:12 kali systemd[1]: Started filebeat.service - Filebeat sends log files to Logstash or directly to Elasticsearch..
Jun 07 14:02:12 kali (filebeat)[73108]: filebeat.service: Failed to locate executable /usr/share/filebeat/bin/filebeat: No such file or directory
Jun 07 14:02:12 kali (filebeat)[73108]: filebeat.service: Failed at step EXEC spawning /usr/share/filebeat/bin/filebeat: No such file or directory
Jun 07 14:02:12 kali systemd[1]: filebeat.service: Main process exited, code=exited, status=203/EXEC
Jun 07 14:02:12 kali systemd[1]: filebeat.service: Failed with result 'exit-code'.
Jun 07 14:02:12 kali systemd[1]: filebeat.service: Scheduled restart job, restart counter is at 5.
Jun 07 14:02:12 kali systemd[1]: Stopped filebeat.service - Filebeat sends log files to Logstash or directly to Elasticsearch..
Jun 07 14:02:12 kali systemd[1]: filebeat.service: Start request repeated too quickly.
Jun 07 14:02:12 kali systemd[1]: filebeat.service: Failed with result 'exit-code'.
Jun 07 14:02:12 kali systemd[1]: Failed to start filebeat.service - Filebeat sends log files to Logstash or directly to Elasticsearch..
Jun 07 14:02:30 kali (filebeat)[73279]: filebeat.service: Failed to locate executable /usr/share/filebeat/bin/filebeat: No such file or directory
Jun 07 14:02:30 kali (filebeat)[73279]: filebeat.service: Failed at step EXEC spawning /usr/share/filebeat/bin/filebeat: No such file or directory
Jun 07 14:02:30 kali systemd[1]: Started filebeat.service - Filebeat sends log files to Logstash or directly to Elasticsearch..
Jun 07 14:02:30 kali systemd[1]: filebeat.service: Main process exited, code=exited, status=203/EXEC
Jun 07 14:02:30 kali systemd[1]: filebeat.service: Failed with result 'exit-code'.
Jun 07 14:02:30 kali systemd[1]: filebeat.service: Scheduled restart job, restart counter is at 1.
Jun 07 14:02:30 kali systemd[1]: Stopped filebeat.service - Filebeat sends log files to Logstash or directly to Elasticsearch..
Jun 07 14:02:30 kali (filebeat)[73282]: filebeat.service: Failed to locate executable /usr/share/filebeat/bin/filebeat: No such file or directory
Jun 07 14:02:30 kali (filebeat)[73282]: filebeat.service: Failed at step EXEC spawning /usr/share/filebeat/bin/filebeat: No such file or directory
Jun 07 14:02:30 kali systemd[1]: Started filebeat.service - Filebeat sends log files to Logstash or directly to Elasticsearch..
Jun 07 14:02:30 kali systemd[1]: filebeat.service: Main process exited, code=exited, status=203/EXEC
Jun 07 14:02:30 kali systemd[1]: filebeat.service: Failed with result 'exit-code'.
Jun 07 14:02:30 kali systemd[1]: filebeat.service: Scheduled restart job, restart counter is at 2.
Jun 07 14:02:30 kali systemd[1]: Stopped filebeat.service - Filebeat sends log files to Logstash or directly to Elasticsearch..
Jun 07 14:02:30 kali systemd[1]: Started filebeat.service - Filebeat sends log files to Logstash or directly to Elasticsearch..
Jun 07 14:02:30 kali systemd[1]: filebeat.service: Main process exited, code=exited, status=203/EXEC
Jun 07 14:02:30 kali systemd[1]: filebeat.service: Failed with result 'exit-code'.
Jun 07 14:02:30 kali systemd[1]: filebeat.service: Scheduled restart job, restart counter is at 3.
Jun 07 14:02:30 kali systemd[1]: Stopped filebeat.service - Filebeat sends log files to Logstash or directly to Elasticsearch..
Jun 07 14:02:31 kali (filebeat)[73284]: filebeat.service: Failed to locate executable /usr/share/filebeat/bin/filebeat: No such file or directory
Jun 07 14:02:31 kali systemd[1]: Started filebeat.service - Filebeat sends log files to Logstash or directly to Elasticsearch..
Jun 07 14:02:31 kali (filebeat)[73284]: filebeat.service: Failed at step EXEC spawning /usr/share/filebeat/bin/filebeat: No such file or directory
Jun 07 14:02:31 kali systemd[1]: filebeat.service: Main process exited, code=exited, status=203/EXEC
Jun 07 14:02:31 kali systemd[1]: filebeat.service: Failed with result 'exit-code'.
Jun 07 14:02:31 kali systemd[1]: filebeat.service: Scheduled restart job, restart counter is at 4.
Jun 07 14:02:31 kali systemd[1]: Stopped filebeat.service - Filebeat sends log files to Logstash or directly to Elasticsearch..
Jun 07 14:02:31 kali systemd[1]: Started filebeat.service - Filebeat sends log files to Logstash or directly to Elasticsearch..
Jun 07 14:02:31 kali systemd[1]: filebeat.service: Main process exited, code=exited, status=203/EXEC
Jun 07 14:02:31 kali systemd[1]: filebeat.service: Failed with result 'exit-code'.
Jun 07 14:02:31 kali systemd[1]: filebeat.service: Scheduled restart job, restart counter is at 5.
Jun 07 14:02:31 kali systemd[1]: Stopped filebeat.service - Filebeat sends log files to Logstash or directly to Elasticsearch..
Jun 07 14:02:31 kali systemd[1]: filebeat.service: Start request repeated too quickly.
Jun 07 14:02:31 kali systemd[1]: filebeat.service: Failed with result 'exit-code'.
Jun 07 14:02:31 kali systemd[1]: Failed to start filebeat.service - Filebeat sends log files to Logstash or directly to Elasticsearch..

Logstash is currently disabled on my OS:

┌──(root㉿kali)-[/usr/share/filebeat/module]
└─# service logstash status
○ logstash.service - logstash
     Loaded: loaded (/etc/systemd/system/logstash.service; disabled; preset: disabled)
     Active: inactive (dead)

Is there any solution for this, please?

Actually, the bin directory is missing under /usr/share/filebeat/, and I'm not sure how to proceed from there:

┌──(root㉿kali)-[/usr/share/filebeat]
└─# ls           
data  fields.yml  filebeat  filebeat.reference.yml  filebeat.yml  kibana  LICENSE.txt  module  modules.d  NOTICE.txt  README.md
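
So the binary is actually there as /usr/share/filebeat/filebeat, while the service expects /usr/share/filebeat/bin/filebeat. Assuming this install came from the official .deb package, I can compare against what the package is supposed to ship:

dpkg -L filebeat | grep bin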

@yash2

I/we did not want you to move all the tar.gz files around and try to build up the package install directory... that is exactly what we did not want to test...

I just wanted you to download the tar.gz,
un-tar it,
then run the commands from that directory to test.

Unfortunately with all you did, I have no idea where the issue is...

Again we are just trying to find some bad .yml....

Could you just remove / uninstall everything... everything

Remove the package, remove the directory where you put everything... we are just trying to test.

Could you simply, from your home directory:

mkdir tmp

cd tmp

curl -L -O https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-7.17.10-linux-x86_64.tar.gz

tar xzvf filebeat-7.17.10-linux-x86_64.tar.gz

cd filebeat-7.17.10-linux-x86_64

# EDIT filebeat.yml: only change the kibana and elasticsearch hosts

./filebeat setup -e

./filebeat -e

That is all we want you to do and let us know the results....

The tar.gz uses only its own independent sub-directories and is useful for quick testing, nothing more.

If you remove/uninstall, /etc/filebeat will most likely stay. Rename it temporarily.
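
A minimal sketch of that cleanup, assuming a Debian-based install and using .bak purely as an example name:

apt-get remove filebeat
mv /etc/filebeat /etc/filebeat.bak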

Here you go, sir: the setup works, and the ingest pipelines have loaded.

So here are my steps, as you suggested:

┌──(root㉿kali)-[/home/kali]
└─# cd ~/tmp

┌──(root㉿kali)-[~/tmp]
└─# ls
filebeat-7.17.10-linux-x86_64  filebeat-7.17.10-linux-x86_64.tar.gz

┌──(root㉿kali)-[~/tmp]
└─# cd filebeat-7.17.10-linux-x86_64

┌──(root㉿kali)-[~/tmp/filebeat-7.17.10-linux-x86_64]
└─# ls
fields.yml              filebeat.yml  module      README.md
filebeat                kibana        modules.d
filebeat.reference.yml  LICENSE.txt   NOTICE.txt


2023-06-07T18:17:18.556-0400    INFO    template/load.go:197    Existing template will be overwritten, as overwrite is enabled.
2023-06-07T18:17:21.182-0400    INFO    [add_cloud_metadata]    add_cloud_metadata/add_cloud_metadata.go:101    add_cloud_metadata: hosting provider type not detected.
2023-06-07T18:17:24.631-0400    INFO    template/load.go:131    Try loading template filebeat-7.17.10 to Elasticsearch
2023-06-07T18:17:24.999-0400    INFO    template/load.go:123    Template with name "filebeat-7.17.10" loaded.
2023-06-07T18:17:24.999-0400    INFO    [index-management]      idxmgmt/std.go:296      Loaded index template.
2023-06-07T18:17:25.060-0400    INFO    [index-management.ilm]  ilm/std.go:126  Index Alias filebeat-7.17.10 exists already.
Index setup finished.
Loading dashboards (Kibana must be running and reachable)
2023-06-07T18:17:25.060-0400    INFO    kibana/client.go:180    Kibana url: http://192.168.2.18:5601
2023-06-07T18:17:31.994-0400    INFO    kibana/client.go:180    Kibana url: http://192.168.2.18:5601
 2023-06-07T18:19:17.408-0400   INFO    instance/beat.go:881    Kibana dashboards successfully loaded.
Loaded dashboards
2023-06-07T18:19:17.408-0400    WARN    [cfgwarn]       instance/beat.go:606  DEPRECATED: Setting up ML using Filebeat is going to be removed. Please use the ML app to setup jobs. Will be removed in version: 8.0.0
Setting up ML using setup --machine-learning is going to be removed in 8.0.0. Please use the ML app instead.
See more: https://www.elastic.co/guide/en/machine-learning/current/index.html
It is not possble to load ML jobs into an Elasticsearch 8.0.0 or newer using the Beat.
2023-06-07T18:19:17.408-0400    INFO    [esclientleg]   eslegclient/connection.go:105  elasticsearch url: http://192.168.2.18:9200
2023-06-07T18:19:17.447-0400    INFO    [esclientleg]   eslegclient/connection.go:285  Attempting to connect to Elasticsearch version 7.17.0
2023-06-07T18:19:17.450-0400    INFO    kibana/client.go:180    Kibana url: http://192.168.2.18:5601
2023-06-07T18:19:17.663-0400    WARN    fileset/modules.go:463  X-Pack Machine Learning is not enabled
Loaded machine learning job configurations
2023-06-07T18:19:17.727-0400    INFO    [esclientleg]   eslegclient/connection.go:105  elasticsearch url: http://192.168.2.18:9200
2023-06-07T18:19:17.751-0400    INFO    [esclientleg]   eslegclient/connection.go:285  Attempting to connect to Elasticsearch version 7.17.0
2023-06-07T18:19:17.752-0400    INFO    cfgfile/reload.go:262   Loading of config files completed.
Loaded Ingest pipelines


The setup went through successfully. I got sidetracked earlier with /usr/share/filebeat, but here I was
using https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-7.17.10-linux-x86_64.tar.gz as mentioned. Still, even after the setup ran successfully, no logs are being pulled into Discover. However, a new index pattern is being generated, filebeat-*.
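
For reference, these are the checks I can run next to confirm whether any documents actually reached Elasticsearch and whether the input paths contain data (host and port taken from the setup output above):

curl -s 'http://192.168.2.18:9200/filebeat-*/_count?pretty'
ls -l /var/log/*.log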

Sorry I do not follow....

Did you enable any filebeat inputs in filebeat.yml? If not, nothing will be loaded.

Where / how did you start filebeat? You did not show the command ...

I am having trouble following because you do not show the command and the console output.

Apologies, I cannot follow... I gave simple commands, and I am not sure why you are trying to run from a different directory... and not showing the output... did you run ./filebeat from that directory, etc.?

I cannot help if you do not follow the instructions... perhaps @Rios or someone else can help.

My last try... Please use this as your filebeat.yml, just changing the hosts,
and run the following command from the filebeat directory you un-tarred in.
Show the command and the output.

./filebeat -e

with this as your filebeat.yml just setting your hosts...

filebeat.inputs:
- type: filestream
  id: my-filestream-id
  enabled: true
  paths:
    - /var/log/*.log

filebeat.config.modules:
  path: ${path.config}/modules.d/*.yml
  reload.enabled: false

setup.template.settings:
  index.number_of_shards: 1
 
setup.kibana:
  host: "localhost:5601"

output.elasticsearch:
  hosts: ["localhost:9200"]
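
Once it is running, something like this should show whether events are landing (the host here just mirrors the example value above; swap in yours):

curl -s 'http://localhost:9200/_cat/indices/filebeat-*?v'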

Still working with the same filebeat configuration, where all the settings are set as per your request... No modules enabled, still working with Filebeat 7.17.10. I'm still on the same page with you, and I have been following your steps from the very beginning...