Not at all @shaunak. First of all, please excuse me for the delay.
As requested, below is the content of my filebeat.yml:
#=========================== Filebeat inputs =============================
- type: log
# Change to true to enable this input configuration.
# Paths that should be crawled and fetched. Glob based paths.
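For reference, an enabled version of this input section might look like the following. This is only a sketch; the path shown is illustrative, not taken from my actual file:

```
filebeat.inputs:
- type: log
  # Change to true to enable this input configuration.
  enabled: true
  # Paths that should be crawled and fetched. Glob based paths.
  paths:
    - /var/log/*.log
```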
#============================= Filebeat modules ===============================
# Glob pattern for configuration loading
# Set to true to enable config reloading
# Period on which files under path should be checked for changes
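The default modules section that goes with these comments would normally read roughly as follows (values shown are the shipped 6.x defaults, not custom settings of mine):

```
filebeat.config.modules:
  # Glob pattern for configuration loading
  path: ${path.config}/modules.d/*.yml
  # Set to true to enable config reloading
  reload.enabled: false
  # Period on which files under path should be checked for changes
  #reload.period: 10s
```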
#==================== Elasticsearch template setting ==========================
#================================ General =====================================
# The name of the shipper that publishes the network data. It can be used to group
# all the transactions sent by a single shipper in the web interface.
# The tags of the shipper are included in their own field with each
# transaction published.
#============================== Dashboards =====================================
# These settings control loading the sample dashboards to the Kibana index. Loading
# the dashboards is disabled by default and can be enabled either by setting the
# options here or by using the `setup` command.
# The URL from where to download the dashboards archive. By default this URL
# has a value which is computed based on the Beat name and version. For released
# versions, this URL points to the dashboard archive on the artifacts.elastic.co
# website.
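Enabling dashboard loading from the config file, rather than via the setup command, would be a single line (a sketch, not something currently set in my file):

```
setup.dashboards.enabled: true
```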
#============================== Kibana =====================================
# Starting with Beats version 6.0.0, the dashboards are loaded via the Kibana API.
# This requires a Kibana endpoint configuration.
# Kibana Host
# Scheme and port can be left out and will be set to the default (http and 5601)
# In case you specify an additional path, the scheme is required: http://localhost:5601/path
# IPv6 addresses should always be defined as: https://[2001:db8::1]:5601
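An enabled Kibana section matching these comments might look like this (host value is the default and is illustrative):

```
setup.kibana:
  host: "localhost:5601"
```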
#================================ Outputs =====================================
#-------------------------- Elasticsearch output ------------------------------
# Array of hosts to connect to.
# Enable ILM (beta) to use index lifecycle management instead of daily indices.
# Optional protocol and basic auth credentials.
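For completeness, an enabled Elasticsearch output would look roughly like the sketch below (host and credentials are placeholders, not my settings; in my case this output stays commented out because I ship via Logstash):

```
output.elasticsearch:
  # Array of hosts to connect to.
  hosts: ["localhost:9200"]
  # Optional protocol and basic auth credentials.
  #protocol: "https"
  #username: "elastic"
  #password: "changeme"
```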
#----------------------------- Logstash output --------------------------------
# The Logstash hosts
# Optional SSL. By default is off.
# List of root certificates for HTTPS server verifications
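Since I am sending events via Logstash, the enabled part of my output configuration would correspond to something like this (host is illustrative):

```
output.logstash:
  # The Logstash hosts
  hosts: ["localhost:5044"]
  # Optional SSL. By default is off.
  # List of root certificates for HTTPS server verifications
  #ssl.certificate_authorities: ["/etc/pki/root/ca.pem"]
```

Note that only one output section can be enabled at a time, which is why the Elasticsearch output above remains commented out.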
#================================ Processors =====================================
# Configure processors to enhance or manipulate events generated by the beat.
- add_host_metadata: ~
- add_cloud_metadata: ~
#================================ Logging =====================================
# Sets log level. The default log level is info.
# Available log levels are: error, warning, info, debug
# At debug level, you can selectively enable logging only for some components.
# To enable all selectors use ["*"]. Examples of other selectors are "beat",
# "publish", "service".
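Turning on debug logging for a specific component would look like this (selector chosen only as an example):

```
logging.level: debug
logging.selectors: ["publish"]
```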
#============================== Xpack Monitoring ===============================
# filebeat can export internal metrics to a central Elasticsearch monitoring
# cluster. This requires xpack monitoring to be enabled in Elasticsearch. The
# reporting is disabled by default.
# Set to true to enable the monitoring reporter.
# Uncomment to send the metrics to Elasticsearch. Most settings from the
# Elasticsearch output are accepted here as well. Any setting that is not set is
# automatically inherited from the Elasticsearch output configuration, so if you
# have the Elasticsearch output configured, you can simply uncomment the
# following line.
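The enabled form of this monitoring section, following the comments above, would be (a sketch of the 6.x-style settings, inheriting connection details from the Elasticsearch output):

```
xpack.monitoring.enabled: true
#xpack.monitoring.elasticsearch:
```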
#================================= Migration ==================================
# This allows enabling 6.7 migration aliases.
Yes, I can see the index-pattern demo-001 in Kibana. I would like to mention that I am sending events via Logstash. (The reason the lines are commented out is that I recently tried the `filebeat setup --dashboards` command.) My index pattern was created manually by me, not pushed via Filebeat.