Filebeat - Elasticsearch output is not configured

Hi team,

I'm facing an issue when setting up Filebeat 6.5.0 (testing this version ahead of an upcoming upgrade) on a Windows machine. I know Filebeat isn't the best fit on Windows, but I am just sending plain-text logs from a specific folder to test it.

Basically, I want to send some logs to a Logstash server, not directly to Elasticsearch.

The issue is that after commenting out the Elasticsearch output and adding "setup.template.enabled: false" to filebeat.yml, I still get this error when running ".\filebeat.exe setup -e":

2021-09-08T09:11:53.728Z        DEBUG   [publish]       pipeline/consumer.go:137        start pipeline event consumer
2021-09-08T09:11:53.728Z        INFO    [publisher]     pipeline/module.go:110  Beat name: XX-XX-XXX-XX
2021-09-08T09:11:53.730Z        ERROR   instance/beat.go:824    Exiting: Template loading requested but the Elasticsearch output is not configured/enabled
Exiting: Template loading requested but the Elasticsearch output is not configured/enabled

I am sending test logs from C:\logs\events:

filebeat.inputs:

# Each - is an input. Most options can be set at the input level, so
# you can use different inputs for various configurations.
# Below are the input specific configurations.

- type: log

  # Change to true to enable this input configuration.
  enabled: true

  # Paths that should be crawled and fetched. Glob based paths.
  paths:
    #- /var/log/*.log
    - C:\logs\events

I also enabled the system module with ".\filebeat.exe modules enable system".

Elasticsearch template disabled:

#==================== Elasticsearch template setting ==========================

setup.template.enabled: false
#================================ General =====================================

Elasticsearch output disabled:

#-------------------------- Elasticsearch output ------------------------------
# output.elasticsearch:
  # Array of hosts to connect to.
#  hosts: ["localhost:9200"]

  # Optional protocol and basic auth credentials.
  #protocol: "https"
  #username: "elastic"
  #password: "changeme"

and the Logstash output enabled:

output.logstash:
  # The Logstash hosts
  hosts: ["10.2.10.124:5044"]

  # Optional SSL. By default is off.
  # List of root certificates for HTTPS server verifications
  #ssl.certificate_authorities: ["/etc/pki/root/ca.pem"]

  # Certificate for SSL client authentication
  #ssl.certificate: "/etc/pki/client/cert.pem"

  # Client Certificate Key
  #ssl.key: "/etc/pki/client/cert.key"

On the Logstash server side, I included the input for Beats:

input {
  beats {
    port => 5044
  }
}

and connectivity between the Filebeat machine and Logstash is OK.
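For reference, the connectivity check was just a TCP port test from the Filebeat host, something like the following (Test-NetConnection ships with recent Windows):

Test-NetConnection 10.2.10.124 -Port 5044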

Any idea about this error?

Thank you.

Welcome to our community!

6.5 is EOL; you should really be running 7.14 as the latest, or 6.8 if you need 6.X.

It'd be useful if you could please post your entire filebeat.yml.


I'm testing on version 6.5.0 ahead of an upcoming upgrade; the whole ELK stack is running 6.5.0 now.

This is my filebeat.yml:

###################### Filebeat Configuration Example #########################

# This file is an example configuration file highlighting only the most common
# options. The filebeat.reference.yml file from the same directory contains all the
# supported options with more comments. You can use it as a reference.
#
# You can find the full configuration reference here:
# https://www.elastic.co/guide/en/beats/filebeat/index.html

# For more available modules and options, please see the filebeat.reference.yml sample
# configuration file.

#=========================== Filebeat inputs =============================

filebeat.inputs:

# Each - is an input. Most options can be set at the input level, so
# you can use different inputs for various configurations.
# Below are the input specific configurations.

- type: log

  # Change to true to enable this input configuration.
  enabled: true

  # Paths that should be crawled and fetched. Glob based paths.
  paths:
    #- /var/log/*.log
    - C:\logs\events

  # Exclude lines. A list of regular expressions to match. It drops the lines that are
  # matching any regular expression from the list.
  #exclude_lines: ['^DBG']

  # Include lines. A list of regular expressions to match. It exports the lines that are
  # matching any regular expression from the list.
  #include_lines: ['^ERR', '^WARN']

  # Exclude files. A list of regular expressions to match. Filebeat drops the files that
  # are matching any regular expression from the list. By default, no files are dropped.
  #exclude_files: ['.gz$']

  # Optional additional fields. These fields can be freely picked
  # to add additional information to the crawled log files for filtering
  #fields:
  #  level: debug
  #  review: 1

  ### Multiline options

  # Multiline can be used for log messages spanning multiple lines. This is common
  # for Java Stack Traces or C-Line Continuation

  # The regexp Pattern that has to be matched. The example pattern matches all lines starting with [
  #multiline.pattern: ^\[

  # Defines if the pattern set under pattern should be negated or not. Default is false.
  #multiline.negate: false

  # Match can be set to "after" or "before". It is used to define if lines should be append to a pattern
  # that was (not) matched before or after or as long as a pattern is not matched based on negate.
  # Note: After is the equivalent to previous and before is the equivalent to to next in Logstash
  #multiline.match: after


#============================= Filebeat modules ===============================

filebeat.config.modules:
  # Glob pattern for configuration loading
  path: ${path.config}/modules.d/*.yml

  # Set to true to enable config reloading
  reload.enabled: false

  # Period on which files under path should be checked for changes
  #reload.period: 10s

#==================== Elasticsearch template setting ==========================

setup.template.enabled: false
#================================ General =====================================

# The name of the shipper that publishes the network data. It can be used to group
# all the transactions sent by a single shipper in the web interface.
#name:

# The tags of the shipper are included in their own field with each
# transaction published.
#tags: ["service-X", "web-tier"]

# Optional fields that you can specify to add additional information to the
# output.
#fields:
#  env: staging


#============================== Dashboards =====================================
# These settings control loading the sample dashboards to the Kibana index. Loading
# the dashboards is disabled by default and can be enabled either by setting the
# options here, or by using the `-setup` CLI flag or the `setup` command.
#setup.dashboards.enabled: false

# The URL from where to download the dashboards archive. By default this URL
# has a value which is computed based on the Beat name and version. For released
# versions, this URL points to the dashboard archive on the artifacts.elastic.co
# website.
#setup.dashboards.url:

#============================== Kibana =====================================

# Starting with Beats version 6.0.0, the dashboards are loaded via the Kibana API.
# This requires a Kibana endpoint configuration.
#setup.kibana:

  # Kibana Host
  # Scheme and port can be left out and will be set to the default (http and 5601)
  # In case you specify and additional path, the scheme is required: http://localhost:5601/path
  # IPv6 addresses should always be defined as: https://[2001:db8::1]:5601
  #host: "10.154.18.189:5601"

  # Kibana Space ID
  # ID of the Kibana Space into which the dashboards should be loaded. By default,
  # the Default Space will be used.
  #space.id:

#============================= Elastic Cloud ==================================

# These settings simplify using filebeat with the Elastic Cloud (https://cloud.elastic.co/).

# The cloud.id setting overwrites the `output.elasticsearch.hosts` and
# `setup.kibana.host` options.
# You can find the `cloud.id` in the Elastic Cloud web UI.
#cloud.id:

# The cloud.auth setting overwrites the `output.elasticsearch.username` and
# `output.elasticsearch.password` settings. The format is `<user>:<pass>`.
#cloud.auth:

#================================ Outputs =====================================

# Configure what output to use when sending the data collected by the beat.

#-------------------------- Elasticsearch output ------------------------------
# output.elasticsearch:
  # Array of hosts to connect to.
#  hosts: ["localhost:9200"]

  # Optional protocol and basic auth credentials.
  #protocol: "https"
  #username: "elastic"
  #password: "changeme"

#----------------------------- Logstash output --------------------------------
output.logstash:
  # The Logstash hosts
  hosts: ["10.2.10.124:5044"]

  # Optional SSL. By default is off.
  # List of root certificates for HTTPS server verifications
  #ssl.certificate_authorities: ["/etc/pki/root/ca.pem"]

  # Certificate for SSL client authentication
  #ssl.certificate: "/etc/pki/client/cert.pem"

  # Client Certificate Key
  #ssl.key: "/etc/pki/client/cert.key"

#================================ Processors =====================================

# Configure processors to enhance or manipulate events generated by the beat.

processors:
  - add_host_metadata: ~
  - add_cloud_metadata: ~

#================================ Logging =====================================

# Sets log level. The default log level is info.
# Available log levels are: error, warning, info, debug
logging.level: debug

# At debug level, you can selectively enable logging only for some components.
# To enable all selectors use ["*"]. Examples of other selectors are "beat",
# "publish", "service".
#logging.selectors: ["*"]

#============================== Xpack Monitoring ===============================
# filebeat can export internal metrics to a central Elasticsearch monitoring
# cluster.  This requires xpack monitoring to be enabled in Elasticsearch.  The
# reporting is disabled by default.

# Set to true to enable the monitoring reporter.
#xpack.monitoring.enabled: false

# Uncomment to send the metrics to Elasticsearch. Most settings from the
# Elasticsearch output are accepted here as well. Any setting that is not set is
# automatically inherited from the Elasticsearch output configuration, so if you
# have the Elasticsearch output configured, you can simply uncomment the
# following line.
#xpack.monitoring.elasticsearch:

And this is the "system" file in the modules.d folder, which I enabled just in case:

- module: system
  # Syslog
  syslog:
    enabled: true

    # Set custom paths for the log files. If left empty,
    # Filebeat will choose the paths depending on your OS.
    #var.paths:

    # Convert the timestamp to UTC. Requires Elasticsearch >= 6.1.
    #var.convert_timezone: false

  # Authorization logs
  auth:
    enabled: true

    # Set custom paths for the log files. If left empty,
    # Filebeat will choose the paths depending on your OS.
    #var.paths:

    # Convert the timestamp to UTC. Requires Elasticsearch >= 6.1.
    #var.convert_timezone: false

In case it is useful, this is what I get when installing Filebeat from PowerShell:

PS C:\Program Files\Filebeat> .\install-service-filebeat.ps1


__GENUS          : 2
__CLASS          : __PARAMETERS
__SUPERCLASS     :
__DYNASTY        : __PARAMETERS
__RELPATH        :
__PROPERTY_COUNT : 1
__DERIVATION     : {}
__SERVER         :
__NAMESPACE      :
__PATH           :
ReturnValue      : 5
PSComputerName   :

__GENUS          : 2
__CLASS          : __PARAMETERS
__SUPERCLASS     :
__DYNASTY        : __PARAMETERS
__RELPATH        :
__PROPERTY_COUNT : 1
__DERIVATION     : {}
__SERVER         :
__NAMESPACE      :
__PATH           :
ReturnValue      : 0
PSComputerName   :

Status      : Stopped
Name        : filebeat
DisplayName : filebeat

Also, I tried to set up the index manually as explained in the link below, but I get an error about the "--index-management" option in both PowerShell and cmd (perhaps because that flag doesn't exist yet in 6.5; earlier releases use "setup --template" instead).

However, if I remove "--index-management", the index command seems to work, but then I get an error about Kibana's connection... I had updated the yml before running the command to include Kibana's IP:

Load the Elasticsearch index template | Winlogbeat Reference [7.14] | Elastic

#============================== Kibana =====================================

# Starting with Beats version 6.0.0, the dashboards are loaded via the Kibana API.
# This requires a Kibana endpoint configuration.
setup.kibana:

  # Kibana Host
  # Scheme and port can be left out and will be set to the default (http and 5601)
  # In case you specify and additional path, the scheme is required: http://localhost:5601/path
  # IPv6 addresses should always be defined as: https://[2001:db8::1]:5601
  host: "10.2.10.121:5601"

  # Kibana Space ID
  # ID of the Kibana Space into which the dashboards should be loaded. By default,
  # the Default Space will be used.
  #space.id:
.\filebeat.exe setup --index-management -E output.logstash.enabled=false -E 'output.elasticsearch.hosts=["10.2.10.122:9200"]'

But I always get the same error when running:

".\filebeat.exe setup -e"

message:

2021-09-08T09:11:53.728Z        DEBUG   [publish]       pipeline/consumer.go:137        start pipeline event consumer
2021-09-08T09:11:53.728Z        INFO    [publisher]     pipeline/module.go:110  Beat name: XX-XX-XXX-XX
2021-09-08T09:11:53.730Z        ERROR   instance/beat.go:824    Exiting: Template loading requested but the Elasticsearch output is not configured/enabled
Exiting: Template loading requested but the Elasticsearch output is not configured/enabled

Hi @X_T, welcome to the community.

In order to run setup, output.elasticsearch must be configured in filebeat.yml and output.logstash must be commented out.

Setup loads artifacts (templates etc.) directly into Elasticsearch from Filebeat.

After you run setup, comment the Elasticsearch output back out and then configure output.logstash again.
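A handy shortcut is to override the outputs only for the setup run with -E flags, so filebeat.yml itself can stay pointed at Logstash. A sketch, assuming Elasticsearch on 10.2.10.122 and Kibana on 10.2.10.121:

# One-off setup run: disable the Logstash output and point directly at
# Elasticsearch/Kibana for this command only; filebeat.yml is untouched.
.\filebeat.exe setup -e `
  -E output.logstash.enabled=false `
  -E 'output.elasticsearch.hosts=["10.2.10.122:9200"]' `
  -E 'setup.kibana.host="10.2.10.121:5601"'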

Also, 6.5 is VERY old; you should upgrade your entire stack ASAP...


Thanks for your answer, stephenb. Before upgrading I need to perform some tests.

I commented out the Logstash output and also specified the Elasticsearch and Kibana IPs, because it seems it needed Kibana's IP as well.

###################### Filebeat Configuration Example #########################

# This file is an example configuration file highlighting only the most common
# options. The filebeat.reference.yml file from the same directory contains all the
# supported options with more comments. You can use it as a reference.
#
# You can find the full configuration reference here:
# https://www.elastic.co/guide/en/beats/filebeat/index.html

# For more available modules and options, please see the filebeat.reference.yml sample
# configuration file.

#=========================== Filebeat inputs =============================

filebeat.inputs:

# Each - is an input. Most options can be set at the input level, so
# you can use different inputs for various configurations.
# Below are the input specific configurations.

- type: log

  # Change to true to enable this input configuration.
  enabled: true

  # Paths that should be crawled and fetched. Glob based paths.
  paths:
    #- /var/log/*.log
    - C:\logs\events

  # Exclude lines. A list of regular expressions to match. It drops the lines that are
  # matching any regular expression from the list.
  #exclude_lines: ['^DBG']

  # Include lines. A list of regular expressions to match. It exports the lines that are
  # matching any regular expression from the list.
  #include_lines: ['^ERR', '^WARN']

  # Exclude files. A list of regular expressions to match. Filebeat drops the files that
  # are matching any regular expression from the list. By default, no files are dropped.
  #exclude_files: ['.gz$']

  # Optional additional fields. These fields can be freely picked
  # to add additional information to the crawled log files for filtering
  #fields:
  #  level: debug
  #  review: 1

  ### Multiline options

  # Multiline can be used for log messages spanning multiple lines. This is common
  # for Java Stack Traces or C-Line Continuation

  # The regexp Pattern that has to be matched. The example pattern matches all lines starting with [
  #multiline.pattern: ^\[

  # Defines if the pattern set under pattern should be negated or not. Default is false.
  #multiline.negate: false

  # Match can be set to "after" or "before". It is used to define if lines should be append to a pattern
  # that was (not) matched before or after or as long as a pattern is not matched based on negate.
  # Note: After is the equivalent to previous and before is the equivalent to to next in Logstash
  #multiline.match: after


#============================= Filebeat modules ===============================

filebeat.config.modules:
  # Glob pattern for configuration loading
  path: ${path.config}/modules.d/*.yml

  # Set to true to enable config reloading
  reload.enabled: false

  # Period on which files under path should be checked for changes
  #reload.period: 10s

#==================== Elasticsearch template setting ==========================

setup.template.settings:
  index.number_of_shards: 3
  #index.codec: best_compression
  #_source.enabled: false

#================================ General =====================================

# The name of the shipper that publishes the network data. It can be used to group
# all the transactions sent by a single shipper in the web interface.
#name:

# The tags of the shipper are included in their own field with each
# transaction published.
#tags: ["service-X", "web-tier"]

# Optional fields that you can specify to add additional information to the
# output.
#fields:
#  env: staging


#============================== Dashboards =====================================
# These settings control loading the sample dashboards to the Kibana index. Loading
# the dashboards is disabled by default and can be enabled either by setting the
# options here, or by using the `-setup` CLI flag or the `setup` command.
#setup.dashboards.enabled: false

# The URL from where to download the dashboards archive. By default this URL
# has a value which is computed based on the Beat name and version. For released
# versions, this URL points to the dashboard archive on the artifacts.elastic.co
# website.
#setup.dashboards.url:

#============================== Kibana =====================================

# Starting with Beats version 6.0.0, the dashboards are loaded via the Kibana API.
# This requires a Kibana endpoint configuration.
setup.kibana:

  # Kibana Host
  # Scheme and port can be left out and will be set to the default (http and 5601)
  # In case you specify and additional path, the scheme is required: http://localhost:5601/path
  # IPv6 addresses should always be defined as: https://[2001:db8::1]:5601
  host: "10.2.10.121:5601"

  # Kibana Space ID
  # ID of the Kibana Space into which the dashboards should be loaded. By default,
  # the Default Space will be used.
  #space.id:

#============================= Elastic Cloud ==================================

# These settings simplify using filebeat with the Elastic Cloud (https://cloud.elastic.co/).

# The cloud.id setting overwrites the `output.elasticsearch.hosts` and
# `setup.kibana.host` options.
# You can find the `cloud.id` in the Elastic Cloud web UI.
#cloud.id:

# The cloud.auth setting overwrites the `output.elasticsearch.username` and
# `output.elasticsearch.password` settings. The format is `<user>:<pass>`.
#cloud.auth:

#================================ Outputs =====================================

# Configure what output to use when sending the data collected by the beat.

#-------------------------- Elasticsearch output ------------------------------
output.elasticsearch:
  # Array of hosts to connect to.
  hosts: ["10.2.10.122:9200"]

  # Optional protocol and basic auth credentials.
  #protocol: "https"
  #username: "elastic"
  #password: "changeme"

#----------------------------- Logstash output --------------------------------
#output.logstash:
  # The Logstash hosts
  #hosts: ["localhost:5044"]

  # Optional SSL. By default is off.
  # List of root certificates for HTTPS server verifications
  #ssl.certificate_authorities: ["/etc/pki/root/ca.pem"]

  # Certificate for SSL client authentication
  #ssl.certificate: "/etc/pki/client/cert.pem"

  # Client Certificate Key
  #ssl.key: "/etc/pki/client/cert.key"

#================================ Processors =====================================

# Configure processors to enhance or manipulate events generated by the beat.

processors:
  - add_host_metadata: ~
  - add_cloud_metadata: ~

#================================ Logging =====================================

# Sets log level. The default log level is info.
# Available log levels are: error, warning, info, debug
logging.level: debug

# At debug level, you can selectively enable logging only for some components.
# To enable all selectors use ["*"]. Examples of other selectors are "beat",
# "publish", "service".
#logging.selectors: ["*"]

#============================== Xpack Monitoring ===============================
# filebeat can export internal metrics to a central Elasticsearch monitoring
# cluster.  This requires xpack monitoring to be enabled in Elasticsearch.  The
# reporting is disabled by default.

# Set to true to enable the monitoring reporter.
#xpack.monitoring.enabled: false

# Uncomment to send the metrics to Elasticsearch. Most settings from the
# Elasticsearch output are accepted here as well. Any setting that is not set is
# automatically inherited from the Elasticsearch output configuration, so if you
# have the Elasticsearch output configured, you can simply uncomment the
# following line.
#xpack.monitoring.elasticsearch:

But I am getting this error when running:

.\filebeat.exe setup -e
2021-09-08T14:29:43.242Z        INFO    template/load.go:129    Template already exists and will not be overwritten.
Loaded index template
Loading dashboards (Kibana must be running and reachable)
2021-09-08T14:29:43.245Z        INFO    elasticsearch/client.go:163     Elasticsearch url: http://10.2.10.122:9200
2021-09-08T14:29:43.245Z        DEBUG   [elasticsearch] elasticsearch/client.go:688     ES Ping(url=http://10.2.10.122:9200)
2021-09-08T14:29:43.343Z        DEBUG   [elasticsearch] elasticsearch/client.go:711     Ping status code: 200
2021-09-08T14:29:43.343Z        INFO    elasticsearch/client.go:712     Connected to Elasticsearch version 6.5.0
2021-09-08T14:29:43.345Z        DEBUG   [dashboards]    dashboards/es_loader.go:329     Initialize the Elasticsearch 6.5.0 loader
2021-09-08T14:29:43.345Z        DEBUG   [dashboards]    dashboards/es_loader.go:329     Elasticsearch URL http://10.2.10.122:9200
2021-09-08T14:29:43.346Z        INFO    kibana/client.go:118    Kibana url: http://10.2.10.121:5601
2021-09-08T14:29:44.472Z        ERROR   instance/beat.go:824    Exiting: fail to create the Kibana loader: Error creating Kibana client: Error creating Kibana client: fail to get the Kibana version: HTTP GET request to /api/status fails: fail to execute the HTTP GET request: Get http://10.2.10.121:5601/api/status: dial tcp 10.2.10.121:5601: connectex: No connection could be made because the target machine actively refused it.. Response: .
Exiting: fail to create the Kibana loader: Error creating Kibana client: Error creating Kibana client: fail to get the Kibana version: HTTP GET request to /api/status fails: fail to execute the HTTP GET request: Get http://10.2.10.121:5601/api/status: dial tcp 10.2.10.121:5601: connectex: No connection could be made because the target machine actively refused it.. Response: .
PS C:\Program Files\Filebeat>

And also when running:

.\filebeat.exe setup -E output.logstash.enabled=false -E 'output.elasticsearch.hosts=["10.2.10.122:9200"]'

Loaded index template
Loading dashboards (Kibana must be running and reachable)
Exiting: fail to create the Kibana loader: Error creating Kibana client: Error creating Kibana client: fail to get the Kibana version: HTTP GET request to /api/status fails: fail to execute the HTTP GET request: Get http://10.2.10.121:5601/api/status: dial tcp 10.2.10.121:5601: connectex: No connection could be made because the target machine actively refused it.. Response: .

Kibana is behind an nginx proxy, so when I access Kibana's URL (http://10.2.10.121:80) it asks for a username and password. Could this be the issue here? If so, how can I add the username and password to the yml file?

What version of Filebeat?

Yes, setup needs direct access to the Kibana API. If it's behind nginx you will need to allow authentication and access to the API endpoints. I cannot help you with nginx, though.
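If you keep basic auth on the proxy, Beats can usually pass credentials to Kibana from the config; a minimal sketch for filebeat.yml (the values below are placeholders, not your real credentials):

setup.kibana:
  host: "10.2.10.121:5601"
  # Hypothetical basic-auth credentials expected by the proxy/Kibana
  username: "elastic"
  password: "changeme"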


Filebeat 6.5.0

OK, I've taken the nginx proxy out and allowed every IP in my kibana.yml file.

After that I ran the command again, and the index setup worked:

.\filebeat.exe setup -E output.logstash.enabled=false -E 'output.elasticsearch.hosts=["10.2.10.122:9200"]'

Loaded index template
Loading dashboards (Kibana must be running and reachable)
Loaded dashboards
Loaded machine learning job configurations
PS C:\Program Files\Filebeat>

However, when I run the command:

.\filebeat.exe setup -e

I get the same issue again (I modified the yml file once more to comment out the Elasticsearch output and Kibana info, leaving only the Logstash output as at the beginning):

2021-09-08T15:04:59.886Z        ERROR   instance/beat.go:824    Exiting: Template loading requested but the Elasticsearch output is not configured/enabled
Exiting: Template loading requested but the Elasticsearch output is not configured/enabled

Any idea about this?

After you successfully run setup and have pointed the output back to Logstash, run Filebeat in normal mode: take out "setup" and run this:

.\filebeat.exe -e
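Once that runs cleanly in the foreground, you can start the Windows service that install-service-filebeat.ps1 created earlier instead (a sketch):

# Start the installed service and confirm it reports Running
Start-Service filebeat
Get-Service filebeat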


Hi stephenb,

It seems the command is running fine now. I had to change the modules path in filebeat.yml, since the Windows path differs a bit from Linux.

PS C:\Program Files\Filebeat> .\filebeat.exe -e
2021-09-09T07:53:17.139Z        INFO    instance/beat.go:616    Home path: [C:\Program Files\Filebeat] Config path: [C:\Program Files\Filebeat] Data path: [C:\Program Files\Filebeat\data] Logs path: [C:\Program Files\Filebeat\logs]
2021-09-09T07:53:17.210Z        DEBUG   [beat]  instance/beat.go:653    Beat metadata path: C:\Program Files\Filebeat\data\meta.json
2021-09-09T07:53:17.212Z        INFO    instance/beat.go:623    Beat UUID: a340f7e6-8e32-4e44-89ab-067f5c7ef0c2
2021-09-09T07:53:17.212Z        DEBUG   [seccomp]       seccomp/seccomp.go:88   Syscall filtering is only supported on Linux
2021-09-09T07:53:17.213Z        INFO    [beat]  instance/beat.go:849    Beat info       {"system_info": {"beat": {"path": {"config": "C:\\Program Files\\Filebeat", "data": "C:\\Program Files\\Filebeat\\data", "home": "C:\\Program Files\\Filebeat", "logs": "C:\\Program Files\\Filebeat\\logs"}, "type": "filebeat", "uuid": "a340f7e6-8e32-4e44-89ab-067f5c7ef0c2"}}}
2021-09-09T07:53:17.213Z        INFO    [beat]  instance/beat.go:858    Build info      {"system_info": {"build": {"commit": "ff5b9b3db49856a25b5eda133b6997f2157a4910", "libbeat": "6.5.0", "time": "2018-11-09T17:54:46.000Z", "version": "6.5.0"}}}
2021-09-09T07:53:17.214Z        INFO    [beat]  instance/beat.go:861    Go runtime info {"system_info": {"go": {"os":"windows","arch":"amd64","max_procs":4,"version":"go1.10.3"}}}
2021-09-09T07:53:17.229Z        INFO    [beat]  instance/beat.go:865    Host info       {"system_info": {"host": {"architecture":"x86_64","boot_time":"2021-07-22T10:18:43.9Z","name":"S-AZ-DP-DV1-01","ip":["fe80::1016:724d:3ec0:5990/64","10.200.114.7/23","::1/128","127.0.0.1/8"],"kernel_version":"10.0.17763.2061 (WinBuild.160101.0800)","mac":["00:0d:3a:c6:16:e5"],"os":{"family":"windows","platform":"windows","name":"Windows Server 2019 Datacenter","version":"10.0","major":10,"minor":0,"patch":0,"build":"17763.2061"},"timezone":"GMT","timezone_offset_sec":0,"id":"aa806102-4200-4cab-98a0-c146fd518233"}}}
2021-09-09T07:53:17.237Z        INFO    [beat]  instance/beat.go:894    Process info    {"system_info": {"process": {"cwd": "C:\\Program Files\\Filebeat", "exe": "C:\\Program Files\\Filebeat\\filebeat.exe", "name": "filebeat.exe", "pid": 9068, "ppid": 4308, "start_time": "2021-09-09T07:53:17.038Z"}}}
2021-09-09T07:53:17.237Z        INFO    instance/beat.go:302    Setup Beat: filebeat; Version: 6.5.0
2021-09-09T07:53:17.238Z        DEBUG   [beat]  instance/beat.go:323    Initializing output plugins
2021-09-09T07:53:17.248Z        DEBUG   [filters]       add_cloud_metadata/add_cloud_metadata.go:160    add_cloud_metadata: starting to fetch metadata, timeout=3s
2021-09-09T07:53:17.260Z        DEBUG   [filters]       add_cloud_metadata/add_cloud_metadata.go:192    add_cloud_metadata: received disposition for openstack after 11.9991ms. result=[provider:openstack, error=failed with http status code 404, metadata={}]
2021-09-09T07:53:17.260Z        DEBUG   [filters]       add_cloud_metadata/add_cloud_metadata.go:192    add_cloud_metadata: received disposition for gce after 11.9991ms. result=[provider:gce, error=failed with http status code 404, metadata={}]
2021-09-09T07:53:17.261Z        DEBUG   [filters]       add_cloud_metadata/add_cloud_metadata.go:192    add_cloud_metadata: received disposition for ec2 after 13.001ms. result=[provider:ec2, error=failed with http status code 404, metadata={}]
2021-09-09T07:53:17.264Z        DEBUG   [filters]       add_cloud_metadata/add_cloud_metadata.go:192    add_cloud_metadata: received disposition for digitalocean after 16.0018ms. result=[provider:digitalocean, error=failed with http status code 400, metadata={}]
2021-09-09T07:53:17.275Z        DEBUG   [filters]       add_cloud_metadata/add_cloud_metadata.go:192    add_cloud_metadata: received disposition for az after 26.9979ms. result=[provider:az, error=<nil>, metadata={"instance_id":"7c997f8c-e76a-41cd-a2d4-a7629a7bb3af","instance_name":"S-AZ-DP-DV1-01","machine_type":"Standard_F4s_v2","provider":"az","region":"westus2"}]
2021-09-09T07:53:17.275Z        DEBUG   [filters]       add_cloud_metadata/add_cloud_metadata.go:163    add_cloud_metadata: fetchMetadata ran for 26.9979ms
2021-09-09T07:53:17.276Z        INFO    add_cloud_metadata/add_cloud_metadata.go:323    add_cloud_metadata: hosting provider type detected as az, metadata={"instance_id":"7c997f8c-e76a-41cd-a2d4-a7629a7bb3af","instance_name":"S-AZ-DP-DV1-01","machine_type":"Standard_F4s_v2","provider":"az","region":"westus2"}
2021-09-09T07:53:17.277Z        DEBUG   [processors]    processors/processor.go:66      Processors: add_host_metadata=[netinfo.enabled=[false]], add_cloud_metadata={"instance_id":"7c997f8c-e76a-41cd-a2d4-a7629a7bb3af","instance_name":"S-AZ-DP-DV1-01","machine_type":"Standard_F4s_v2","provider":"az","region":"westus2"}
2021-09-09T07:53:17.277Z        DEBUG   [publish]       pipeline/consumer.go:137        start pipeline event consumer
2021-09-09T07:53:17.278Z        INFO    [publisher]     pipeline/module.go:110  Beat name: S-AZ-DP-DV1-01
2021-09-09T07:53:17.279Z        INFO    instance/beat.go:424    filebeat start running.
2021-09-09T07:53:17.279Z        INFO    [monitoring]    log/log.go:117  Starting metrics logging every 30s
2021-09-09T07:53:17.279Z        DEBUG   [service]       service/service_windows.go:68   Windows is interactive: true
2021-09-09T07:53:17.280Z        DEBUG   [registrar]     registrar/registrar.go:114      Registry file set to: C:\Program Files\Filebeat\data\registry
2021-09-09T07:53:17.281Z        INFO    registrar/registrar.go:134      Loading registrar data from C:\Program Files\Filebeat\data\registry
2021-09-09T07:53:17.283Z        INFO    registrar/registrar.go:141      States Loaded from registrar: 0
2021-09-09T07:53:17.284Z        DEBUG   [registrar]     registrar/registrar.go:267      Starting Registrar
2021-09-09T07:53:17.284Z        WARN    beater/filebeat.go:374  Filebeat is unable to load the Ingest Node pipelines for the configured modules because the Elasticsearch output is not configured/enabled. If you have already loaded the Ingest Node pipelines or are using Logstash pipelines, you can ignore this warning.
2021-09-09T07:53:17.285Z        INFO    crawler/crawler.go:72   Loading Inputs: 1
2021-09-09T07:53:17.285Z        DEBUG   [processors]    processors/processor.go:66      Processors:
2021-09-09T07:53:17.286Z        DEBUG   [input] log/config.go:200       recursive glob enabled
2021-09-09T07:53:17.286Z        DEBUG   [input] log/input.go:147        exclude_files: []. Number of stats: 0
2021-09-09T07:53:17.287Z        DEBUG   [input] log/input.go:168        input with previous states loaded: 0
2021-09-09T07:53:17.287Z        INFO    log/input.go:138        Configured paths: [C:\logs\events]
2021-09-09T07:53:17.287Z        INFO    input/input.go:114      Starting input of type: log; ID: 9660679015794530681
2021-09-09T07:53:17.288Z        DEBUG   [input] log/input.go:174        Start next scan
2021-09-09T07:53:17.288Z        DEBUG   [cfgfile]       cfgfile/reload.go:118   Checking module configs from: C:\Program Files\Filebeat\modules.d\
2021-09-09T07:53:17.288Z        DEBUG   [input] log/input.go:195        input states cleaned up. Before: 0, After: 0, Pending: 0
2021-09-09T07:53:17.289Z        DEBUG   [cfgfile]       cfgfile/reload.go:132   Number of module configs found: 0
2021-09-09T07:53:17.290Z        INFO    crawler/crawler.go:106  Loading and starting Inputs completed. Enabled inputs: 1
2021-09-09T07:53:17.290Z        INFO    cfgfile/reload.go:150   Config reloader started
2021-09-09T07:53:17.291Z        DEBUG   [cfgfile]       cfgfile/reload.go:176   Scan for new config files
2021-09-09T07:53:17.291Z        DEBUG   [cfgfile]       cfgfile/reload.go:195   Number of module configs found: 0
2021-09-09T07:53:17.292Z        DEBUG   [reload]        cfgfile/list.go:62      Starting reload procedure, current runners: 0
2021-09-09T07:53:17.294Z        DEBUG   [reload]        cfgfile/list.go:80      Start list: 0, Stop list: 0
2021-09-09T07:53:17.294Z        INFO    cfgfile/reload.go:205   Loading of config files completed.
2021-09-09T07:53:27.289Z        DEBUG   [input] input/input.go:152      Run input
2021-09-09T07:53:30.854Z        DEBUG   [input] log/input.go:174        Start next scan
2021-09-09T07:53:30.858Z        DEBUG   [input] log/input.go:195        input states cleaned up. Before: 0, After: 0, Pending: 0
2021-09-09T07:53:40.859Z        DEBUG   [input] input/input.go:152      Run input
2021-09-09T07:53:40.859Z        DEBUG   [input] log/input.go:174        Start next scan
2021-09-09T07:53:40.864Z        DEBUG   [input] log/input.go:195        input states cleaned up. Before: 0, After: 0, Pending: 0
2021-09-09T07:53:47.367Z        INFO    [monitoring]    log/log.go:144  Non-zero metrics in the last 30s        {"monitoring": {"metrics": {"beat":{"cpu":{"system":{"ticks":156,"time":{"ms":171}},"total":{"ticks":218,"time":{"ms":264},"value":0},"user":{"ticks":62,"time":{"ms":93}}},"handles":{"open":246},"info":{"ephemeral_id":"1144f910-0276-47cb-bff7-e58321fc8c38","uptime":{"ms":30293}},"memstats":{"gc_next":4194304,"memory_alloc":2677944,"memory_total":4117064,"rss":23953408}},"filebeat":{"harvester":{"open_files":0,"running":0}},"libbeat":{"config":{"module":{"running":0},"reloads":1},"output":{"type":"logstash"},"pipeline":{"clients":1,"events":{"active":0}}},"registrar":{"states":{"current":0}},"system":{"cpu":{"cores":4}}}}}
2021-09-09T07:53:50.866Z        DEBUG   [input] input/input.go:152      Run input
2021-09-09T07:53:50.866Z        DEBUG   [input] log/input.go:174        Start next scan
2021-09-09T07:53:50.868Z        DEBUG   [input] log/input.go:195        input states cleaned up. Before: 0, After: 0, Pending: 0
2021-09-09T07:54:00.870Z        DEBUG   [input] input/input.go:152      Run input
2021-09-09T07:54:00.870Z        DEBUG   [input] log/input.go:174        Start next scan
2021-09-09T07:54:00.872Z        DEBUG   [input] log/input.go:195        input states cleaned up. Before: 0, After: 0, Pending: 0
2021-09-09T07:54:10.874Z        DEBUG   [input] input/input.go:152      Run input
2021-09-09T07:54:10.874Z        DEBUG   [input] log/input.go:174        Start next scan
2021-09-09T07:54:10.876Z        DEBUG   [input] log/input.go:195        input states cleaned up. Before: 0, After: 0, Pending: 0
2021-09-09T07:54:17.283Z        INFO    [monitoring]    log/log.go:144  Non-zero metrics in the last 30s        {"monitoring": {"metrics": {"beat":{"cpu":{"system":{"ticks":171},"total":{"ticks":264,"value":264},"user":{"ticks":93}},"handles":{"open":246},"info":{"ephemeral_id":"1144f910-0276-47cb-bff7-e58321fc8c38","uptime":{"ms":60208}},"memstats":{"gc_next":4194304,"memory_alloc":2727416,"memory_total":4166536,"rss":28672}},"filebeat":{"harvester":{"open_files":0,"running":0}},"libbeat":{"config":{"module":{"running":0}},"pipeline":{"clients":1,"events":{"active":0}}},"registrar":{"states":{"current":0}}}}}
2021-09-09T07:54:20.878Z        DEBUG   [input] input/input.go:152      Run input
2021-09-09T07:54:20.878Z        DEBUG   [input] log/input.go:174        Start next scan
2021-09-09T07:54:20.880Z        DEBUG   [input] log/input.go:195        input states scan
2021-09-09T07:55:40.908Z        DEBUG   [input] log/input.go:195        input states cleaned up. Before: 0, After: 0, Pending: 0
2021-09-09T07:55:47.284Z        INFO    [monitoring]    log/log.go:144  Non-zero metrics in the last 30s        {"monitoring": {"metrics": {"beat":{"cpu":{"system":{"ticks":187},"total":{"ticks":296,"value":296},"user":{"ticks":109}},"handles":{"open":247},"info":{"ephemeral_id":"1144f910-0276-47cb-bff7-e58321fc8c38","uptime":{"ms":150208}},"memstats":{"gc_next":4194304,"memory_alloc":1634024,"memory_total":4346128,"rss":-8192}},"filebeat":{"harvester":{"open_files":0,"running":0}},"libbeat":{"config":{"module":{"running":0}},"pipeline":{"clients":1,"events":{"active":0}}},"registrar":{"states":{"current":0}}}}}

However, in Kibana I cannot see any index like "filebeat" or similar; I can only see "logstash" indices from different dates. And when I create an index pattern in Kibana, for example "logstash-2021.05.09", I cannot retrieve any data from it.

Is this because Filebeat on Windows is sending logs to Logstash in binary form, and Logstash is passing them on to Elasticsearch as binaries?

Thanks in advance.

They aren't binaries, but yes, it's because they are going via Logstash.
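The index name comes from the elasticsearch output in your Logstash pipeline, not from Filebeat. A typical Beats pipeline that keeps each Beat's own index naming looks like this (a sketch, assuming Elasticsearch on 10.2.10.122):

output {
  elasticsearch {
    hosts => ["10.2.10.122:9200"]
    # Yields e.g. filebeat-6.5.0-2021.09.09 instead of logstash-*
    index => "%{[@metadata][beat]}-%{[@metadata][version]}-%{+YYYY.MM.dd}"
  }
}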


OK, I installed Winlogbeat on another server and can see a winlogbeat index in Kibana. Theoretically Winlogbeat is sending through Logstash too, so that's OK, I think?

Also my second question here is:

Do I have to do the initial setup for every Elasticsearch node in the cluster? For now I only did the initial setup against one node (specifying the Elasticsearch master node IP in the output section of winlogbeat.yml). What happens if, for example, the master node dies and I never ran the initial setup against the rest of the nodes? The stack is ELK 6.5.0.

No, once.

But if you upgrade, definitely run it again.
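The templates and dashboards live in the cluster itself, so they survive any single node going down. For day-to-day output resilience you can also list several nodes so the Beat can fail over; a sketch (the second address is hypothetical):

output.elasticsearch:
  # Any reachable node works; setup artifacts are stored cluster-wide
  hosts: ["10.2.10.122:9200", "10.2.10.123:9200"]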


OK, thanks.

The issue is already resolved.

Regards,
