Filebeat Output Configuration Error

Hi all. I am using Elasticsearch/Kibana/Filebeat version 8.15.1. I am setting up the Filebeat modules.d\system.yml file and trying to run it to ingest logs in a directory, but I am getting an error:

"Exiting: no outputs are defined, please define one under the output section","service.name":"filebeat","ecs.version":"1.6.0"}
Exiting: no outputs are defined, please define one under the output section

Below are the output sections of both the filebeat.yml and system.yml files.

Can someone please tell me what is not correctly configured?

# =================================== Kibana ===================================

# Starting with Beats version 6.0.0, the dashboards are loaded via the Kibana API.
# This requires a Kibana endpoint configuration.
- setup.kibana: 
  host: "https://localhost:5601"


  # Kibana Space ID
  # ID of the Kibana Space into which the dashboards should be loaded. By default,
  # the Default Space will be used.
  #space.id:

# =============================== Elastic Cloud ================================

# These settings simplify using Filebeat with the Elastic Cloud (https://cloud.elastic.co/).

# The cloud.id setting overwrites the `output.elasticsearch.hosts` and
# `setup.kibana.host` options.
# You can find the `cloud.id` in the Elastic Cloud web UI.
#cloud.id:

# The cloud.auth setting overwrites the `output.elasticsearch.username` and
# `output.elasticsearch.password` settings. The format is `<user>:<pass>`.
#cloud.auth:

# ================================== Outputs ===================================

# Configure what output to use when sending the data collected by the beat.

# ---------------------------- Elasticsearch Output ----------------------------
- output.elasticsearch:
  # Array of hosts to connect to.
  hosts: ["https://localhost:9200", "https://127.0.0.1:9200"]
   
  

  # Performance preset - one of "balanced", "throughput", "scale",
  # "latency", or "custom".
  #preset: balanced

  # Protocol - either `http` (default) or `https`.
  #protocol: "https"

  # Authentication credentials - either API key or username/password.
  #api_key: "id:api_key"
  username: "elastic"
  password: "changeme"

Here is the output section of the system.yml file:

# =================================== Kibana ===================================

# Starting with Beats version 6.0.0, the dashboards are loaded via the Kibana API.
# This requires a Kibana endpoint configuration.
- setup.kibana: 
   host: https://localhost:5601
    

  # Kibana Host
  # Scheme and port can be left out and will be set to the default (http and 5601)
  # In case you specify and additional path, the scheme is required: http://localhost:5601/path
  # IPv6 addresses should always be defined as: https://[2001:db8::1]:5601
  

  # Kibana Space ID
  # ID of the Kibana Space into which the dashboards should be loaded. By default,
  # the Default Space will be used.
  #space.id:

# =============================== Elastic Cloud ================================

# These settings simplify using Filebeat with the Elastic Cloud (https://cloud.elastic.co/).

# The cloud.id setting overwrites the `output.elasticsearch.hosts` and
# `setup.kibana.host` options.
# You can find the `cloud.id` in the Elastic Cloud web UI.
#cloud.id:

# The cloud.auth setting overwrites the `output.elasticsearch.username` and
# `output.elasticsearch.password` settings. The format is `<user>:<pass>`.
#cloud.auth:

# ================================== Outputs ===================================

# Configure what output to use when sending the data collected by the beat.

# ---------------------------- Elasticsearch Output ----------------------------
- output.elasticsearch:
  # Array of hosts to connect to.
  hosts: ["https://localhost:9200", "https://127.0.0.1:9200"]
  username: "elastic"
  password: "changeme"

Hi @dfir

Bad syntax: there is no `-` there. Not sure where you got your original filebeat.yml.

These are top-level YAML items.

It should be:

setup.kibana: 
  host: "https://localhost:5601"
....
output.elasticsearch:
  # Array of hosts to connect to.
  hosts: ["https://localhost:9200", "https://127.0.0.1:9200"]
  username: "elastic"
  password: "changeme"

When I try it like that, I get an error too.

Exiting: error loading config file: yaml: line 37: did not find expected '-' indicator

The above error shows up when running:

filebeat.exe -e -c C:\Program Files\Filebeat\modules.d\system.yml

I'm trying to run it now with just filebeat.exe -e -c C:\Program Files\Filebeat\Filebeat.yml

Hi @dfir

You'll have to post your entire filebeat.yml... and your entire system.yml.

The second file you posted is not the system.yml; not sure where you got it or if that is just a copy-and-paste error...

All the active .yml files get concatenated together, so they all have to be correct.

YAML requires being careful.

The modules.d/system.yml looks like this:

# Module: system
# Docs: https://www.elastic.co/guide/en/beats/filebeat/main/filebeat-module-system.html

- module: system
  # Syslog
  syslog:
    enabled: false

    # Set custom paths for the log files. If left empty,
    # Filebeat will choose the paths depending on your OS.
    #var.paths:

  # Authorization logs
  auth:
    enabled: false

    # Set custom paths for the log files. If left empty,
    # Filebeat will choose the paths depending on your OS.
    #var.paths:

BTW, you also need to run setup... I think you need to start over and follow the quick start guide...

And instead of nginx, you want system...
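
For reference, the setup step on a Windows zip install is usually run from the Filebeat directory, something like:

.\filebeat.exe setup -e

That loads the index template and the Kibana dashboards (plus ingest pipelines for any enabled modules), and -e prints the logs to the console.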

I followed the quick start guide a few times already. I always get stuck at the same place. I already ran the setup.

Here is the system.yml

# Module: system
# Docs: https://www.elastic.co/guide/en/beats/filebeat/main/filebeat-module-system.html

- module: system
  # Syslog
  syslog:
      enabled: true 
      var.paths: ["D:/01-evidence/123/log/syslog*", "D:/01-evidence/ABZ3542/log/dmesg*"]
  auth:
      enabled: true 
      var.paths: ["D:/01-evidence/123/log/auth.log*"]
  btmp: 
      enabled: true 
      var.paths: ["D:/01-evidence/123/log/btmp*"]
  utmp: 
      enabled: true 
      var.paths: ["D:/01-evidence/123/log/utmp*"]
  wtmp: 
      enabled: true 
      var.paths: ["D:/01-evidence/123/log/wtmp*"]      
- module: auditd
  log:
      enabled: true 
      var.paths: ["D:/01-evidence/123/log/audit/audit.log*"]   
- module: postgresql
  log:
      enabled: true 
      var.paths: ["D:/01-evidence/123/log/postgresql/postgresql*.log"] 


    # Set custom paths for the log files. If left empty,
    # Filebeat will choose the paths depending on your OS.
    #var.paths:
# =================================== Kibana ===================================

# Starting with Beats version 6.0.0, the dashboards are loaded via the Kibana API.
# This requires a Kibana endpoint configuration.
setup.kibana: 
  host: "https://localhost:5601"
    

  # Kibana Host
  # Scheme and port can be left out and will be set to the default (http and 5601)
  # In case you specify and additional path, the scheme is required: http://localhost:5601/path
  # IPv6 addresses should always be defined as: https://[2001:db8::1]:5601
  

  # Kibana Space ID
  # ID of the Kibana Space into which the dashboards should be loaded. By default,
  # the Default Space will be used.
  #space.id:

# =============================== Elastic Cloud ================================

# These settings simplify using Filebeat with the Elastic Cloud (https://cloud.elastic.co/).

# The cloud.id setting overwrites the `output.elasticsearch.hosts` and
# `setup.kibana.host` options.
# You can find the `cloud.id` in the Elastic Cloud web UI.
#cloud.id:

# The cloud.auth setting overwrites the `output.elasticsearch.username` and
# `output.elasticsearch.password` settings. The format is `<user>:<pass>`.
#cloud.auth:

# ================================== Outputs ===================================

# Configure what output to use when sending the data collected by the beat.

# ---------------------------- Elasticsearch Output ----------------------------
output.elasticsearch:
  # Array of hosts to connect to.
  hosts: ["https://localhost:9200", "https://127.0.0.1:9200"]

  # Performance preset - one of "balanced", "throughput", "scale",
  # "latency", or "custom".
  # Protocol - either `http` (default) or `https`.
  #protocol: "https"

  # Authentication credentials - either API key or username/password.
  #api_key: "id:api_key"
  username: "elastic"
  password: "change"
  ssl.verification_mode: none

# ------------------------------ Logstash Output -------------------------------
#output.logstash:
  # The Logstash hosts
  #hosts: ["localhost:5044"]

  # Optional SSL. By default is off.
  # List of root certificates for HTTPS server verifications
  #ssl.certificate_authorities: ["/etc/pki/root/ca.pem"]

  # Certificate for SSL client authentication
  #ssl.certificate: "/etc/pki/client/cert.pem"

  # Client Certificate Key
  #ssl.key: "/etc/pki/client/cert.key"

Here is the filebeat.yml

###################### Filebeat Configuration Example #########################

# This file is an example configuration file highlighting only the most common
# options. The filebeat.reference.yml file from the same directory contains all the
# supported options with more comments. You can use it as a reference.
#
# You can find the full configuration reference here:
# https://www.elastic.co/guide/en/beats/filebeat/index.html

# For more available modules and options, please see the filebeat.reference.yml sample
# configuration file.

# ============================== Filebeat inputs ===============================

filebeat.inputs:

# Each - is an input. Most options can be set at the input level, so
# you can use different inputs for various configurations.
# Below are the input-specific configurations.

# filestream is an input for collecting log messages from files.
- type: log

  # Unique ID among all inputs, an ID is required.
  id: my-filestream-id

  # Change to true to enable this input configuration.
  enabled: false 

  # Paths that should be crawled and fetched. Glob based paths.
  paths:
    #- /var/log/*.log
    - D:\01-evidence\123\log\*.log

  # Exclude lines. A list of regular expressions to match. It drops the lines that are
  # matching any regular expression from the list.
  # Line filtering happens after the parsers pipeline. If you would like to filter lines
  # before parsers, use include_message parser.
  #exclude_lines: ['^DBG']

  # Include lines. A list of regular expressions to match. It exports the lines that are
  # matching any regular expression from the list.
  # Line filtering happens after the parsers pipeline. If you would like to filter lines
  # before parsers, use include_message parser.
  #include_lines: ['^ERR', '^WARN']

  # Exclude files. A list of regular expressions to match. Filebeat drops the files that
  # are matching any regular expression from the list. By default, no files are dropped.
  #prospector.scanner.exclude_files: ['.gz$']

  # Optional additional fields. These fields can be freely picked
  # to add additional information to the crawled log files for filtering
  #fields:
  #  level: debug
  #  review: 1

# ============================== Filebeat modules ==============================

filebeat.config.modules:
  # Glob pattern for configuration loading
  filebeat.modules:
  - module: system
    syslog:
      enabled: true 
      path: ["D:/01-evidence/123/log/syslog*", "D:/01-evidence/123/log/dmesg*"]
    auth:
      enabled: true 
      path: ["D:/01-evidence/123/log/auth.log*"]
    btmp: 
      enabled: true 
      path: ["D:/01-evidence/123/log/btmp*"]
    utmp: 
      enabled: true 
      path: ["D:/01-evidence/123/log/utmp*"]
    wtmp: 
      enabled: true 
      path: ["D:/01-evidence/123/log/wtmp*"]      
  - module: auditd
    log:
      enabled: true 
      path: ["D:/01-evidence/123/log/audit/audit.log*"]   
  - module: postgresql
    log:
      enabled: true 
      path: ["D:/01-evidence/123/log/postgresql/postgresql*.log"] 

  # Set to true to enable config reloading
  reload.enabled: true

  # Period on which files under path should be checked for changes
  #reload.period: 10s

# ======================= Elasticsearch template setting =======================

setup.template.settings:
  index.number_of_shards: 10
  #index.codec: best_compression
  #_source.enabled: false


# ================================== General ===================================

# The name of the shipper that publishes the network data. It can be used to group
# all the transactions sent by a single shipper in the web interface.
#name:

# The tags of the shipper are included in their field with each
# transaction published.
#tags: ["service-X", "web-tier"]

# Optional fields that you can specify to add additional information to the
# output.
#fields:
#  env: staging

# ================================= Dashboards =================================
# These settings control loading the sample dashboards to the Kibana index. Loading
# the dashboards is disabled by default and can be enabled either by setting the
# options here or by using the `setup` command.
#setup.dashboards.enabled: false

# The URL from where to download the dashboard archive. By default, this URL
# has a value that is computed based on the Beat name and version. For released
# versions, this URL points to the dashboard archive on the artifacts.elastic.co
# website.
#setup.dashboards.url:

# =================================== Kibana ===================================

# Starting with Beats version 6.0.0, the dashboards are loaded via the Kibana API.
# This requires a Kibana endpoint configuration.
setup.kibana: 
  host: "https://localhost:5601"


  # Kibana Space ID
  # ID of the Kibana Space into which the dashboards should be loaded. By default,
  # the Default Space will be used.
  #space.id:

# =============================== Elastic Cloud ================================

# These settings simplify using Filebeat with the Elastic Cloud (https://cloud.elastic.co/).

# The cloud.id setting overwrites the `output.elasticsearch.hosts` and
# `setup.kibana.host` options.
# You can find the `cloud.id` in the Elastic Cloud web UI.
#cloud.id:

# The cloud.auth setting overwrites the `output.elasticsearch.username` and
# `output.elasticsearch.password` settings. The format is `<user>:<pass>`.
#cloud.auth:

# ================================== Outputs ===================================

# Configure what output to use when sending the data collected by the beat.

# ---------------------------- Elasticsearch Output ----------------------------
output.elasticsearch:
  # Array of hosts to connect to.
  hosts: ["https://localhost:9200", "https://127.0.0.1:9200"]
   
  

  # Performance preset - one of "balanced", "throughput", "scale",
  # "latency", or "custom".
  #preset: balanced

  # Protocol - either `http` (default) or `https`.
  #protocol: "https"

  # Authentication credentials - either API key or username/password.
  #api_key: "id:api_key"
  username: "elastic"
  password: "change"
  ssl.verification_mode: none

# ------------------------------ Logstash Output -------------------------------
#output.logstash:
  # The Logstash hosts
  #hosts: ["localhost:5044"]

  # Optional SSL. By default is off.
  # List of root certificates for HTTPS server verifications
  #ssl.certificate_authorities: ["/etc/pki/root/ca.pem"]

  # Certificate for SSL client authentication
  #ssl.certificate: "/etc/pki/client/cert.pem"

  # Client Certificate Key
  #ssl.key: "/etc/pki/client/cert.key"

# ================================= Processors =================================
processors:
  - add_host_metadata:
      when.not.contains.tags: forwarded
  - add_cloud_metadata: ~
  - add_docker_metadata: ~
  - add_kubernetes_metadata: ~

# ================================== Logging ===================================

# Sets log level. The default log level is info.
# Available log levels are: error, warning, info, debug
#logging.level: debug

# At debug level, you can selectively enable logging only for some components.
# To enable all selectors, use ["*"]. Examples of other selectors are "beat",
# "publisher", "service".
#logging.selectors: ["*"]

# ============================= X-Pack Monitoring ==============================
# Filebeat can export internal metrics to a central Elasticsearch monitoring
# cluster.  This requires xpack monitoring to be enabled in Elasticsearch.  The
# reporting is disabled by default.

# Set to true to enable the monitoring reporter.
#monitoring.enabled: false

# Sets the UUID of the Elasticsearch cluster under which monitoring data for this
# Filebeat instance will appear in the Stack Monitoring UI. If output.elasticsearch
# is enabled, the UUID is derived from the Elasticsearch cluster referenced by output.elasticsearch.
#monitoring.cluster_uuid:

# Uncomment to send the metrics to Elasticsearch. Most settings from the
# Elasticsearch outputs are accepted here as well.
# Note that the settings should point to your Elasticsearch *monitoring* cluster.
# Any setting that is not set is automatically inherited from the Elasticsearch
# output configuration, so if you have the Elasticsearch output configured such
# that it is pointing to your Elasticsearch monitoring cluster, you can simply
# uncomment the following line.
#monitoring.elasticsearch:

# ============================== Instrumentation ===============================

# Instrumentation support for the filebeat.
#instrumentation:
    # Set to true to enable instrumentation of filebeat.
    #enabled: false

    # Environment in which filebeat is running on (eg: staging, production, etc.)
    #environment: ""

    # APM Server hosts to report instrumentation results to.
    #hosts:
    #  - http://localhost:8200

    # API Key for the APM Server(s).
    # If api_key is set then secret_token will be ignored.
    #api_key:

    # Secret token for the APM Server(s).
    #secret_token:


# ================================= Migration ==================================

# This allows to enable 6.7 migration aliases
#migration.6_to_7.enabled: true


And my nginx .yml file is not enabled at all. I only enabled system, not nginx.

So, looking at the system.yml you posted, everything is enabled: false.

Where do the system.yml modules get enabled? In the filebeat.yml or the system.yml?

Step 1 of the guide: unzip, rename, and run the install-service PowerShell script. Done exactly like that.

Step 2 of the guide: the output.elasticsearch section. I get errors when mine is set up like that; the ca_trusted_fingerprint causes an error. I need to turn SSL verification to none in order to get no errors.

Step 3: I enabled the system module only, not nginx.

Step 3 also says to enable at least one fileset in the modules.d file, which I believe I have done in the system.yml above.

I have run the setup -e, which sets up dashboards and adds a bunch of columns into Discover.

I start the Filebeat service and nothing gets loaded into Elasticsearch.

It seems like my issue is YAML config, but that does not even make sense, because when the YAML is incorrect I get a LOT of errors.

I was just showing you an example of the system.yml. Yes, you need to enable it.

You would follow the same pattern with system as you would with nginx. These are generic documents, but the pattern applies.
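
To be concrete about "enable it": the module itself is usually enabled from the command line, which just renames system.yml.disabled to system.yml under modules.d; the individual filesets are then turned on with enabled: true inside that file.

.\filebeat.exe modules enable system
.\filebeat.exe modules list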

But more importantly, why are you putting the entire filebeat.yml back inside the system.yml? That's not how it works. There are no instructions to put the rest of the filebeat.yml in the system.yml...
Are you following something else?

Also, where are you getting these other variables?

The entire system module should look something like this, with your paths in it:

- module: system
  syslog:
    enabled: true
    var.paths: ["/path/to/log/syslog*"]
  auth:
    enabled: true
    var.paths: ["/path/to/log/auth.log*"]

My suggestion is to just turn on one fileset, like syslog, in the system.yml.

Do not put anything else in there.
Make sure your syntax is correct.
I'm not sure exactly what you're trying to accomplish... or what instructions you're following. Nothing in the quick start tells you to add the filebeat.yml back to the system module or add a bunch of additional variables.

Also, how are you starting filebeat? How did you install it?

If you're struggling like this, I would suggest using the zip and just starting Filebeat in the foreground (not as a service) so you can watch the debug logs.

You can add -d "*" at the end of the command line and it will create a tremendous amount of debug logs that you can look through.
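
For example, from the Filebeat directory that would look something like:

.\filebeat.exe -e -c filebeat.yml -d "*"

(-e sends the logs to stderr so you can watch them live, and -d "*" turns on every debug selector.)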

What errors do you get when you think the YAML is correct?

Here is what I would do...

I would start over from scratch

Uninstall everything and reinstall it

I would not enable any modules

I would then run the test config and test output commands on Filebeat to make sure the filebeat.yml works: the config is correct and it can connect to Elasticsearch.

See command here
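
On a Windows zip install, those would look something like:

.\filebeat.exe test config -e
.\filebeat.exe test output -e

test config only validates the YAML; test output actually tries to connect to the configured Elasticsearch hosts, so it will surface TLS and authentication problems.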

Then, in the filebeat.yml, I would enable that top filestream input and just put a log file somewhere and have Filebeat read it... It will only read the file once, so you'll need to add more lines to it if you want to see more.
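
As a minimal sketch (the id and the path are placeholders; point it at wherever you drop your test file):

filebeat.inputs:
- type: filestream
  id: my-test-id
  enabled: true
  paths:
    - C:\temp\test\*.log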

Once you get all that working, I would then try the system module... just enable syslog and put in your paths; don't move it, don't add anything else to it like the file you showed above... (It seems like you are reading somewhere that you can combine them... which you can if you are an advanced user, but I would get the basics working first.)

That would be my suggestion

There are also troubleshooting and debug help in the docs.

I will give it a shot and let you know how it goes.

Thanks

When Filebeat is running, would the window be scrolling like Logstash?

I made the changes to the system.yml,

added the correct path to the filebeat.yml,

and I ran the test config and test output: all OKs.

Now I'm running filebeat run -e. It's not giving me an error, but it's also not scrolling either.

Here are my filebeat.yml and system.yml.

OK, so a few issues.

I am trying to ingest standard auth.log(s) and syslogs from a Linux system.

I see in the Discover tab that Filebeat has ingested things; however, the entire contents are in the message column, and the @timestamp is the ingest time, not the time from the file.

What needs to happen so the time is correct and data from the file ingests into the correct columns instead of all in one column?

Also, the Filebeat dashboards were imported, but they are all empty even though Discover has events.

Here are my filebeat.yml and system.yml. I am not sure why it's ingesting like this.

###################### Filebeat Configuration Example #########################

# This file is an example configuration file highlighting only the most common
# options. The filebeat.reference.yml file from the same directory contains all the
# supported options with more comments. You can use it as a reference.
#
# You can find the full configuration reference here:
# https://www.elastic.co/guide/en/beats/filebeat/index.html

# For more available modules and options, please see the filebeat.reference.yml sample
# configuration file.

# ============================== Filebeat inputs ===============================

filebeat.inputs:

# Each - is an input. Most options can be set at the input level, so
# you can use different inputs for various configurations.
# Below are the input-specific configurations.

# filestream is an input for collecting log messages from files.
- type: log

  # Unique ID among all inputs, an ID is required.
  id: my-filestream-id

  # Change to true to enable this input configuration.
  enabled: true 

  # Paths that should be crawled and fetched. Glob based paths.
  paths:
    #- /var/log/*.log
    - "/D:/01-evidence/122/logs*"

  # Exclude lines. A list of regular expressions to match. It drops the lines that are
  # matching any regular expression from the list.
  # Line filtering happens after the parsers pipeline. If you would like to filter lines
  # before parsers, use include_message parser.
  #exclude_lines: ['^DBG']

  # Include lines. A list of regular expressions to match. It exports the lines that are
  # matching any regular expression from the list.
  # Line filtering happens after the parsers pipeline. If you would like to filter lines
  # before parsers, use include_message parser.
  #include_lines: ['^ERR', '^WARN']

  # Exclude files. A list of regular expressions to match. Filebeat drops the files that
  # are matching any regular expression from the list. By default, no files are dropped.
  #prospector.scanner.exclude_files: ['.gz$']

  # Optional additional fields. These fields can be freely picked
  # to add additional information to the crawled log files for filtering
  #fields:
  #  level: debug
  #  review: 1

# ============================== Filebeat modules ==============================

filebeat.config.modules:
  # Glob pattern for configuration loading
  filebeat.modules:
  - module: system
    syslog:
      enabled: false 
      var.paths: ["/D:/01-evidence/122/logs*"]
  # Set to true to enable config reloading
  reload.enabled: true

  # Period on which files under path should be checked for changes
  #reload.period: 10s

# ======================= Elasticsearch template setting =======================

setup.template.settings:
  index.number_of_shards: 10
  #index.codec: best_compression
  #_source.enabled: false


# ================================== General ===================================

# The name of the shipper that publishes the network data. It can be used to group
# all the transactions sent by a single shipper in the web interface.
#name:

# The tags of the shipper are included in their field with each
# transaction published.
#tags: ["service-X", "web-tier"]

# Optional fields that you can specify to add additional information to the
# output.
#fields:
#  env: staging

# ================================= Dashboards =================================
# These settings control loading the sample dashboards to the Kibana index. Loading
# the dashboards is disabled by default and can be enabled either by setting the
# options here or by using the `setup` command.
#setup.dashboards.enabled: false

# The URL from where to download the dashboard archive. By default, this URL
# has a value that is computed based on the Beat name and version. For released
# versions, this URL points to the dashboard archive on the artifacts.elastic.co
# website.
#setup.dashboards.url:

# =================================== Kibana ===================================

# Starting with Beats version 6.0.0, the dashboards are loaded via the Kibana API.
# This requires a Kibana endpoint configuration.
setup.kibana: 
  host: "https://localhost:5601"


  # Kibana Space ID
  # ID of the Kibana Space into which the dashboards should be loaded. By default,
  # the Default Space will be used.
  #space.id:

# =============================== Elastic Cloud ================================

# These settings simplify using Filebeat with the Elastic Cloud (https://cloud.elastic.co/).

# The cloud.id setting overwrites the `output.elasticsearch.hosts` and
# `setup.kibana.host` options.
# You can find the `cloud.id` in the Elastic Cloud web UI.
#cloud.id:

# The cloud.auth setting overwrites the `output.elasticsearch.username` and
# `output.elasticsearch.password` settings. The format is `<user>:<pass>`.
#cloud.auth:

# ================================== Outputs ===================================

# Configure what output to use when sending the data collected by the beat.

# ---------------------------- Elasticsearch Output ----------------------------
output.elasticsearch:
  # Array of hosts to connect to.
  hosts: ["https://localhost:9200", "https://127.0.0.1:9200"]
   
  

  # Performance preset - one of "balanced", "throughput", "scale",
  # "latency", or "custom".
  #preset: balanced

  # Protocol - either `http` (default) or `https`.
  #protocol: "https"

  # Authentication credentials - either API key or username/password.
  #api_key: "id:api_key"
  username: "elastic"
  password: "change"
  ssl.verification_mode: none

# ------------------------------ Logstash Output -------------------------------
#output.logstash:
  # The Logstash hosts
  #hosts: ["localhost:5044"]

  # Optional SSL. By default is off.
  # List of root certificates for HTTPS server verifications
  #ssl.certificate_authorities: ["/etc/pki/root/ca.pem"]

  # Certificate for SSL client authentication
  #ssl.certificate: "/etc/pki/client/cert.pem"

  # Client Certificate Key
  #ssl.key: "/etc/pki/client/cert.key"

# ================================= Processors =================================
processors:
  - add_host_metadata:
      when.not.contains.tags: forwarded
  - add_cloud_metadata: ~
  - add_docker_metadata: ~
  - add_kubernetes_metadata: ~

# ================================== Logging ===================================

# Sets log level. The default log level is info.
# Available log levels are: error, warning, info, debug
#logging.level: debug

# At debug level, you can selectively enable logging only for some components.
# To enable all selectors, use ["*"]. Examples of other selectors are "beat",
# "publisher", "service".
#logging.selectors: ["*"]

# ============================= X-Pack Monitoring ==============================
# Filebeat can export internal metrics to a central Elasticsearch monitoring
# cluster.  This requires xpack monitoring to be enabled in Elasticsearch.  The
# reporting is disabled by default.

# Set to true to enable the monitoring reporter.
#monitoring.enabled: false

# Sets the UUID of the Elasticsearch cluster under which monitoring data for this
# Filebeat instance will appear in the Stack Monitoring UI. If output.elasticsearch
# is enabled, the UUID is derived from the Elasticsearch cluster referenced by output.elasticsearch.
#monitoring.cluster_uuid:

# Uncomment to send the metrics to Elasticsearch. Most settings from the
# Elasticsearch outputs are accepted here as well.
# Note that the settings should point to your Elasticsearch *monitoring* cluster.
# Any setting that is not set is automatically inherited from the Elasticsearch
# output configuration, so if you have the Elasticsearch output configured such
# that it is pointing to your Elasticsearch monitoring cluster, you can simply
# uncomment the following line.
#monitoring.elasticsearch:

# ============================== Instrumentation ===============================

# Instrumentation support for the filebeat.
#instrumentation:
    # Set to true to enable instrumentation of filebeat.
    #enabled: false

    # Environment in which filebeat is running on (eg: staging, production, etc.)
    #environment: ""

    # APM Server hosts to report instrumentation results to.
    #hosts:
    #  - http://localhost:8200

    # API Key for the APM Server(s).
    # If api_key is set then secret_token will be ignored.
    #api_key:

    # Secret token for the APM Server(s).
    #secret_token:


# ================================= Migration ==================================

# This allows to enable 6.7 migration aliases
#migration.6_to_7.enabled: true

# Module: system
# Docs: https://www.elastic.co/guide/en/beats/filebeat/main/filebeat-module-system.html

- module: system
  # Syslog
  syslog:
    enabled: true
    ["/D:/01-evidence/122/logs*"]

    # Set custom paths for the log files. If left empty,
    # Filebeat will choose the paths depending on your OS.
    #var.paths:

  # Authorization logs
  auth:
    enabled: true
    ["/D:/01-evidence/122/logs*"]

    # Set custom paths for the log files. If left empty,
    # Filebeat will choose the paths depending on your OS.
    #var.paths:

I am confused: you are saying Linux logs,

but you are giving... Windows paths...

Take a look here...

Definitely not correct ^^^

see here

So I have Linux logs in a zip directory that were extracted from a compromised Linux system.

I have a Windows machine that is my forensics machine, where I am trying to ingest these logs, which I have copied to that 01-evidence directory.

So... when I am using a Windows machine with Elasticsearch, Filebeat, and Kibana installed, and ingesting these system logs, which slash (\ or /) should be used in my paths?

Probably something like...

['D:\01-evidence\122\logs*']


So my path should be like this, correct?

['/D:/01-evidence/122/logs*']

I'll give it a shot.

I changed the path; now when I run Filebeat I just see "scan for new config files." Nothing is being ingested.

Not sure why this is so complicated, but I don't think it should be. I must be missing something.

Hey @dfir

It really isn't...

Usually, when I run into somebody struggling, it is a combination of:

  • Making assumptions...
  • Not reading the docs carefully
  • and/or not following the suggestions and providing the requested results

So I made a number of recommendations... not sure if you actually followed them.

So I have no clue what you have and have not done...

You did not provide any logs... so it is really hard for me to help; not sure what to do...

Also, Filebeat will load a file only once ... so if it thinks it has already loaded it... it will not load the file again... unless you clean up the registry data directory... see here
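
On the zip install used in this thread, the registry lives under the data path (C:\Program Files\Filebeat\data by default), so with the service stopped, clearing it would be something like:

Remove-Item -Recurse 'C:\Program Files\Filebeat\data\registry'

After that, Filebeat treats every file as new and re-ingests it.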

So I am happy to try one more time... if you follow the steps I gave above and provide all the commands and outputs, etc.

Then you can go to Kibana -> Dev Tools

GET _cat/indices/filebeat-*/?v

GET filebeat-*/_search <<< Fixed

perhaps we can get past this

OK, here is what I did:

  1. Uninstalled the Filebeat service.
  2. Deleted the Filebeat directories.
  3. Re-unzipped Filebeat.
  4. Renamed it to Filebeat.
  5. Placed it in C:\Program Files.
  6. Installed the Filebeat service via PowerShell.
  7. Configured the output.elasticsearch section (see filebeat.yml below).
  8. Skipped enabling the system module and ran the setup assets.
  9. Ensured a .log file was in the directory in the path of filebeat.yml.
  10. Started the Filebeat service.
  11. Ran the two CAT commands. See images. The second one produces an error.
  12. However, when I go into Discover in Kibana there are over 1 million documents, but all of the data is in one column called message.
  13. The below are the only logs there.

I am noticing that there is no filebeat index? Should there be?
Would the filestream input read the log and just throw everything into one column?
I do see data, but it's not clean since it is all in message.

Would you suggest, now that data is there, I try the system module?


###################### Filebeat Configuration Example #########################

# This file is an example configuration file highlighting only the most common
# options. The filebeat.reference.yml file from the same directory contains all the
# supported options with more comments. You can use it as a reference.
#
# You can find the full configuration reference here:
# https://www.elastic.co/guide/en/beats/filebeat/index.html

# For more available modules and options, please see the filebeat.reference.yml sample
# configuration file.

# ============================== Filebeat inputs ===============================

filebeat.inputs:

# Each - is an input. Most options can be set at the input level, so
# you can use different inputs for various configurations.
# Below are the input-specific configurations.

# filestream is an input for collecting log messages from files.
- type: filestream

  # Unique ID among all inputs, an ID is required.
  id: my-filestream-id

  # Change to true to enable this input configuration.
  enabled: true

  # Paths that should be crawled and fetched. Glob based paths.
  paths:
    #- /var/log/*.log
    - D:\01-evidence\ABZ3542\logs\*

  # Exclude lines. A list of regular expressions to match. It drops the lines that are
  # matching any regular expression from the list.
  # Line filtering happens after the parsers pipeline. If you would like to filter lines
  # before parsers, use include_message parser.
  #exclude_lines: ['^DBG']

  # Include lines. A list of regular expressions to match. It exports the lines that are
  # matching any regular expression from the list.
  # Line filtering happens after the parsers pipeline. If you would like to filter lines
  # before parsers, use include_message parser.
  #include_lines: ['^ERR', '^WARN']

  # Exclude files. A list of regular expressions to match. Filebeat drops the files that
  # are matching any regular expression from the list. By default, no files are dropped.
  #prospector.scanner.exclude_files: ['.gz$']

  # Optional additional fields. These fields can be freely picked
  # to add additional information to the crawled log files for filtering
  #fields:
  #  level: debug
  #  review: 1

# ============================== Filebeat modules ==============================

filebeat.config.modules:
  # Glob pattern for configuration loading
  path: ${path.config}/modules.d/*.yml

  # Set to true to enable config reloading
  reload.enabled: false

  # Period on which files under path should be checked for changes
  #reload.period: 10s

# ======================= Elasticsearch template setting =======================

setup.template.settings:
  index.number_of_shards: 1
  #index.codec: best_compression
  #_source.enabled: false


# ================================== General ===================================

# The name of the shipper that publishes the network data. It can be used to group
# all the transactions sent by a single shipper in the web interface.
#name:

# The tags of the shipper are included in their field with each
# transaction published.
#tags: ["service-X", "web-tier"]

# Optional fields that you can specify to add additional information to the
# output.
#fields:
#  env: staging

# ================================= Dashboards =================================
# These settings control loading the sample dashboards to the Kibana index. Loading
# the dashboards is disabled by default and can be enabled either by setting the
# options here or by using the `setup` command.
#setup.dashboards.enabled: false

# The URL from where to download the dashboard archive. By default, this URL
# has a value that is computed based on the Beat name and version. For released
# versions, this URL points to the dashboard archive on the artifacts.elastic.co
# website.
#setup.dashboards.url:

# =================================== Kibana ===================================

# Starting with Beats version 6.0.0, the dashboards are loaded via the Kibana API.
# This requires a Kibana endpoint configuration.
setup.kibana:

  # Kibana Host
  # Scheme and port can be left out and will be set to the default (http and 5601)
  # In case you specify and additional path, the scheme is required: http://localhost:5601/path
  # IPv6 addresses should always be defined as: https://[2001:db8::1]:5601
  #host: "localhost:5601"

  # Kibana Space ID
  # ID of the Kibana Space into which the dashboards should be loaded. By default,
  # the Default Space will be used.
  #space.id:

# =============================== Elastic Cloud ================================

# These settings simplify using Filebeat with the Elastic Cloud (https://cloud.elastic.co/).

# The cloud.id setting overwrites the `output.elasticsearch.hosts` and
# `setup.kibana.host` options.
# You can find the `cloud.id` in the Elastic Cloud web UI.
#cloud.id:

# The cloud.auth setting overwrites the `output.elasticsearch.username` and
# `output.elasticsearch.password` settings. The format is `<user>:<pass>`.
#cloud.auth:

# ================================== Outputs ===================================

# Configure what output to use when sending the data collected by the beat.

# ---------------------------- Elasticsearch Output ----------------------------
output.elasticsearch:
  # Array of hosts to connect to.
  hosts: ["https://localhost:9200"]

  # Performance preset - one of "balanced", "throughput", "scale",
  # "latency", or "custom".
  preset: balanced

  # Protocol - either `http` (default) or `https`.
  #protocol: "https"

  # Authentication credentials - either API key or username/password.
  #api_key: "id:api_key"
  username: "elastic"
  password: "change"
  ssl:
    enabled: true 
    ca_trusted_fingerprint: "c3e389296dce804f4a21b1ad82780b0e0dd745ea95ef9c17c811a004f5d2c7e4"

# ------------------------------ Logstash Output -------------------------------
#output.logstash:
  # The Logstash hosts
  #hosts: ["localhost:5044"]

  # Optional SSL. By default is off.
  # List of root certificates for HTTPS server verifications
  #ssl.certificate_authorities: ["/etc/pki/root/ca.pem"]

  # Certificate for SSL client authentication
  #ssl.certificate: "/etc/pki/client/cert.pem"

  # Client Certificate Key
  #ssl.key: "/etc/pki/client/cert.key"

# ================================= Processors =================================
processors:
  - add_host_metadata:
      when.not.contains.tags: forwarded
  - add_cloud_metadata: ~
  - add_docker_metadata: ~
  - add_kubernetes_metadata: ~

# ================================== Logging ===================================

# Sets log level. The default log level is info.
# Available log levels are: error, warning, info, debug
#logging.level: debug

# At debug level, you can selectively enable logging only for some components.
# To enable all selectors, use ["*"]. Examples of other selectors are "beat",
# "publisher", "service".
#logging.selectors: ["*"]

# ============================= X-Pack Monitoring ==============================
# Filebeat can export internal metrics to a central Elasticsearch monitoring
# cluster.  This requires xpack monitoring to be enabled in Elasticsearch.  The
# reporting is disabled by default.

# Set to true to enable the monitoring reporter.
#monitoring.enabled: false

# Sets the UUID of the Elasticsearch cluster under which monitoring data for this
# Filebeat instance will appear in the Stack Monitoring UI. If output.elasticsearch
# is enabled, the UUID is derived from the Elasticsearch cluster referenced by output.elasticsearch.
#monitoring.cluster_uuid:

# Uncomment to send the metrics to Elasticsearch. Most settings from the
# Elasticsearch outputs are accepted here as well.
# Note that the settings should point to your Elasticsearch *monitoring* cluster.
# Any setting that is not set is automatically inherited from the Elasticsearch
# output configuration, so if you have the Elasticsearch output configured such
# that it is pointing to your Elasticsearch monitoring cluster, you can simply
# uncomment the following line.
#monitoring.elasticsearch:

# ============================== Instrumentation ===============================

# Instrumentation support for the filebeat.
#instrumentation:
    # Set to true to enable instrumentation of filebeat.
    #enabled: false

    # Environment in which filebeat is running on (eg: staging, production, etc.)
    #environment: ""

    # APM Server hosts to report instrumentation results to.
    #hosts:
    #  - http://localhost:8200

    # API Key for the APM Server(s).
    # If api_key is set then secret_token will be ignored.
    #api_key:

    # Secret token for the APM Server(s).
    #secret_token:


# ================================= Migration ==================================

# This allows to enable 6.7 migration aliases
#migration.6_to_7.enabled: true

{"log.level":"info","@timestamp":"2024-09-09T20:26:31.458Z","log.origin":{"function":"github.com/elastic/beats/v7/libbeat/cmd/instance.(*Beat).configure","file.name":"instance/beat.go","file.line":828},"message":"Home path: [C:\\Program Files\\Filebeat] Config path: [C:\\Program Files\\Filebeat] Data path: [C:\\Program Files\\Filebeat\\data] Logs path: [C:\\Program Files\\Filebeat\\logs]","service.name":"filebeat","ecs.version":"1.6.0"}
{"log.level":"info","@timestamp":"2024-09-09T20:26:31.458Z","log.origin":{"function":"github.com/elastic/beats/v7/libbeat/cmd/instance.(*Beat).configure","file.name":"instance/beat.go","file.line":836},"message":"Beat ID: 7bcfa795-e6f1-4e1b-a2c4-b70411ab0df2","service.name":"filebeat","ecs.version":"1.6.0"}
{"log.level":"error","@timestamp":"2024-09-09T20:26:31.474Z","log.logger":"add_cloud_metadata","log.origin":{"function":"github.com/elastic/beats/v7/libbeat/processors/add_cloud_metadata.(*addCloudMetadata).fetchMetadata","file.name":"add_cloud_metadata/providers.go","file.line":190},"message":"add_cloud_metadata: received error failed with http status code 401","service.name":"filebeat","ecs.version":"1.6.0"}
{"log.level":"error","@timestamp":"2024-09-09T20:26:31.474Z","log.logger":"add_cloud_metadata","log.origin":{"function":"github.com/elastic/beats/v7/libbeat/processors/add_cloud_metadata.(*addCloudMetadata).fetchMetadata","file.name":"add_cloud_metadata/providers.go","file.line":190},"message":"add_cloud_metadata: received error failed with http status code 401","service.name":"filebeat","ecs.version":"1.6.0"}
{"log.level":"error","@timestamp":"2024-09-09T20:26:31.474Z","log.logger":"add_cloud_metadata","log.origin":{"function":"github.com/elastic/beats/v7/libbeat/processors/add_cloud_metadata.(*addCloudMetadata).fetchMetadata","file.name":"add_cloud_metadata/providers.go","file.line":190},"message":"add_cloud_metadata: received error failed with http status code 401","service.name":"filebeat","ecs.version":"1.6.0"}
{"log.level":"error","@timestamp":"2024-09-09T20:26:31.474Z","log.logger":"add_cloud_metadata","log.origin":{"function":"github.com/elastic/beats/v7/libbeat/processors/add_cloud_metadata.(*addCloudMetadata).fetchMetadata","file.name":"add_cloud_metadata/providers.go","file.line":190},"message":"add_cloud_metadata: received error failed with http status code 401","service.name":"filebeat","ecs.version":"1.6.0"}
{"log.level":"error","@timestamp":"2024-09-09T20:26:31.474Z","log.logger":"add_cloud_metadata","log.origin":{"function":"github.com/elastic/beats/v7/libbeat/processors/add_cloud_metadata.(*addCloudMetadata).fetchMetadata","file.name":"add_cloud_metadata/providers.go","file.line":190},"message":"add_cloud_metadata: received error failed with http status code 401","service.name":"filebeat","ecs.version":"1.6.0"}
{"log.level":"warn","@timestamp":"2024-09-09T20:26:31.490Z","log.logger":"add_cloud_metadata","log.origin":{"function":"github.com/elastic/beats/v7/libbeat/processors/add_cloud_metadata.fetchRawProviderMetadata","file.name":"add_cloud_metadata/provider_aws_ec2.go","file.line":108},"message":"error fetching cluster name metadata: error fetching EC2 Tags: operation error EC2: DescribeTags, get identity: get credentials: failed to refresh cached credentials, no EC2 IMDS role found, operation error ec2imds: GetMetadata, http response error StatusCode: 404, request to EC2 IMDS failed.","service.name":"filebeat","ecs.version":"1.6.0"}
{"log.level":"info","@timestamp":"2024-09-09T20:26:31.490Z","log.logger":"add_cloud_metadata","log.origin":{"function":"github.com/elastic/beats/v7/libbeat/processors/add_cloud_metadata.(*addCloudMetadata).init.func1","file.name":"add_cloud_metadata/add_cloud_metadata.go","file.line":104},"message":"add_cloud_metadata: hosting provider type detected as aws, metadata={\"cloud\":{\"account\":{\"id\":\"606565331724\"},\"availability_zone\":\"us-east-1b\",\"image\":{\"id\":\"ami-0e6760ea2851c035a\"},\"instance\":{\"id\":\"i-0600759f4d3851ef9\"},\"machine\":{\"type\":\"r5.4xlarge\"},\"provider\":\"aws\",\"region\":\"us-east-1\",\"service\":{\"name\":\"EC2\"}}}","service.name":"filebeat","ecs.version":"1.6.0"}

{"log.level":"info","@timestamp":"2024-09-09T20:26:49.587Z","log.origin":{"function":"github.com/elastic/beats/v7/libbeat/cmd/instance.(*Beat).configure","file.name":"instance/beat.go","file.line":828},"message":"Home path: [C:\\Program Files\\Filebeat] Config path: [C:\\Program Files\\Filebeat] Data path: [C:\\Program Files\\Filebeat\\data] Logs path: [C:\\Program Files\\Filebeat\\logs]","service.name":"filebeat","ecs.version":"1.6.0"}
{"log.level":"info","@timestamp":"2024-09-09T20:26:49.595Z","log.origin":{"function":"github.com/elastic/beats/v7/libbeat/cmd/instance.(*Beat).configure","file.name":"instance/beat.go","file.line":836},"message":"Beat ID: 7bcfa795-e6f1-4e1b-a2c4-b70411ab0df2","service.name":"filebeat","ecs.version":"1.6.0"}
{"log.level":"info","@timestamp":"2024-09-09T20:26:49.611Z","log.logger":"beat","log.origin":{"function":"github.com/elastic/beats/v7/libbeat/cmd/instance.logSystemInfo","file.name":"instance/beat.go","file.line":1385},"message":"Beat info","service.name":"filebeat","system_info":{"beat":{"path":{"config":"C:\\Program Files\\Filebeat","data":"C:\\Program Files\\Filebeat\\data","home":"C:\\Program Files\\Filebeat","logs":"C:\\Program Files\\Filebeat\\logs"},"type":"filebeat","uuid":"7bcfa795-e6f1-4e1b-a2c4-b70411ab0df2"},"ecs.version":"1.6.0"}}
{"log.level":"info","@timestamp":"2024-09-09T20:26:49.611Z","log.logger":"beat","log.origin":{"function":"github.com/elastic/beats/v7/libbeat/cmd/instance.logSystemInfo","file.name":"instance/beat.go","file.line":1394},"message":"Build info","service.name":"filebeat","system_info":{"build":{"commit":"88cc526a2d3e52dcbfa52c9dd25eb09ed95470e4","libbeat":"8.15.1","time":"2024-09-02T08:36:21.000Z","version":"8.15.1"},"ecs.version":"1.6.0"}}
{"log.level":"info","@timestamp":"2024-09-09T20:26:49.611Z","log.logger":"beat","log.origin":{"function":"github.com/elastic/beats/v7/libbeat/cmd/instance.logSystemInfo","file.name":"instance/beat.go","file.line":1397},"message":"Go runtime info","service.name":"filebeat","system_info":{"go":{"os":"windows","arch":"amd64","max_procs":16,"version":"go1.22.6"},"ecs.version":"1.6.0"}}
{"log.level":"error","@timestamp":"2024-09-09T20:26:49.613Z","log.logger":"add_cloud_metadata","log.origin":{"function":"github.com/elastic/beats/v7/libbeat/processors/add_cloud_metadata.(*addCloudMetadata).fetchMetadata","file.name":"add_cloud_metadata/providers.go","file.line":190},"message":"add_cloud_metadata: received error failed with http status code 401","service.name":"filebeat","ecs.version":"1.6.0"}
{"log.level":"error","@timestamp":"2024-09-09T20:26:49.614Z","log.logger":"add_cloud_metadata","log.origin":{"function":"github.com/elastic/beats/v7/libbeat/processors/add_cloud_metadata.(*addCloudMetadata).fetchMetadata","file.name":"add_cloud_metadata/providers.go","file.line":190},"message":"add_cloud_metadata: received error failed with http status code 401","service.name":"filebeat","ecs.version":"1.6.0"}
{"log.level":"error","@timestamp":"2024-09-09T20:26:49.614Z","log.logger":"add_cloud_metadata","log.origin":{"function":"github.com/elastic/beats/v7/libbeat/processors/add_cloud_metadata.(*addCloudMetadata).fetchMetadata","file.name":"add_cloud_metadata/providers.go","file.line":190},"message":"add_cloud_metadata: received error failed with http status code 401","service.name":"filebeat","ecs.version":"1.6.0"}
{"log.level":"error","@timestamp":"2024-09-09T20:26:49.614Z","log.logger":"add_cloud_metadata","log.origin":{"function":"github.com/elastic/beats/v7/libbeat/processors/add_cloud_metadata.(*addCloudMetadata).fetchMetadata","file.name":"add_cloud_metadata/providers.go","file.line":190},"message":"add_cloud_metadata: received error failed with http status code 401","service.name":"filebeat","ecs.version":"1.6.0"}
{"log.level":"error","@timestamp":"2024-09-09T20:26:49.614Z","log.logger":"add_cloud_metadata","log.origin":{"function":"github.com/elastic/beats/v7/libbeat/processors/add_cloud_metadata.(*addCloudMetadata).fetchMetadata","file.name":"add_cloud_metadata/providers.go","file.line":190},"message":"add_cloud_metadata: received error failed with http status code 401","service.name":"filebeat","ecs.version":"1.6.0"}
{"log.level":"info","@timestamp":"2024-09-09T20:26:49.615Z","log.logger":"beat","log.origin":{"function":"github.com/elastic/beats/v7/libbeat/cmd/instance.logSystemInfo","file.name":"instance/beat.go","file.line":1403},"message":"Host info","service.name":"filebeat","system_info":{"host":{"architecture":"x86_64","native_architecture":"x86_64","boot_time":"2024-09-06T12:39:58Z","name":"ec2amaz-368u98e","ip":["fe80::ac1e:27e5:ff97:130d","10.194.107.122","::1","127.0.0.1"],"kernel_version":"10.0.20348.1249 (WinBuild.160101.0800)","mac":["0e:a3:a8:5b:8e:f5"],"os":{"type":"windows","family":"windows","platform":"windows","name":"Windows Server 2022 Datacenter","version":"10.0","major":10,"minor":0,"patch":0,"build":"20348.1249"},"timezone":"UTC","timezone_offset_sec":0,"id":"44405d94-c710-497b-8476-4d25391fad78"},"ecs.version":"1.6.0"}}
{"log.level":"info","@timestamp":"2024-09-09T20:26:49.615Z","log.logger":"beat","log.origin":{"function":"github.com/elastic/beats/v7/libbeat/cmd/instance.logSystemInfo","file.name":"instance/beat.go","file.line":1432},"message":"Process info","service.name":"filebeat","system_info":{"process":{"cwd":"C:\\Program Files\\Filebeat","exe":"C:\\Program Files\\Filebeat\\filebeat.exe","name":"filebeat.exe","pid":5484,"ppid":13424,"start_time":"2024-09-09T20:26:49.479Z"},"ecs.version":"1.6.0"}}
{"log.level":"info","@timestamp":"2024-09-09T20:26:49.615Z","log.origin":{"function":"github.com/elastic/beats/v7/libbeat/cmd/instance.(*Beat).createBeater","file.name":"instance/beat.go","file.line":341},"message":"Setup Beat: filebeat; Version: 8.15.1","service.name":"filebeat","ecs.version":"1.6.0"}
{"log.level":"warn","@timestamp":"2024-09-09T20:26:49.616Z","log.logger":"add_cloud_metadata","log.origin":{"function":"github.com/elastic/beats/v7/libbeat/processors/add_cloud_metadata.fetchRawProviderMetadata","file.name":"add_cloud_metadata/provider_aws_ec2.go","file.line":108},"message":"error fetching cluster name metadata: error fetching EC2 Tags: operation error EC2: DescribeTags, get identity: get credentials: failed to refresh cached credentials, no EC2 IMDS role found, operation error ec2imds: GetMetadata, http response error StatusCode: 404, request to EC2 IMDS failed.","service.name":"filebeat","ecs.version":"1.6.0"}
{"log.level":"info","@timestamp":"2024-09-09T20:26:49.616Z","log.logger":"add_cloud_metadata","log.origin":{"function":"github.com/elastic/beats/v7/libbeat/processors/add_cloud_metadata.(*addCloudMetadata).init.func1","file.name":"add_cloud_metadata/add_cloud_metadata.go","file.line":104},"message":"add_cloud_metadata: hosting provider type detected as aws, metadata={\"cloud\":{\"account\":{\"id\":\"606565331724\"},\"availability_zone\":\"us-east-1b\",\"image\":{\"id\":\"ami-0e6760ea2851c035a\"},\"instance\":{\"id\":\"i-0600759f4d3851ef9\"},\"machine\":{\"type\":\"r5.4xlarge\"},\"provider\":\"aws\",\"region\":\"us-east-1\",\"service\":{\"name\":\"EC2\"}}}","service.name":"filebeat","ecs.version":"1.6.0"}
{"log.level":"info","@timestamp":"2024-09-09T20:26:49.625Z","log.logger":"elasticsearch","log.origin":{"function":"github.com/elastic/beats/v7/libbeat/outputs/elasticsearch.makeES","file.name":"elasticsearch/elasticsearch.go","file.line":63},"message":"Applying performance preset 'balanced': {\n  \"bulk_max_size\": 1600,\n  \"compression_level\": 1,\n  \"idle_connection_timeout\": \"3s\",\n  \"queue\": {\n    \"mem\": {\n      \"events\": 3200,\n      \"flush\": {\n        \"min_events\": 1600,\n        \"timeout\": \"10s\"\n      }\n    }\n  },\n  \"worker\": 1\n}","service.name":"filebeat","ecs.version":"1.6.0"}
{"log.level":"warn","@timestamp":"2024-09-09T20:26:49.625Z","log.logger":"elasticsearch","log.origin":{"function":"github.com/elastic/beats/v7/libbeat/outputs/elasticsearch.makeES","file.name":"elasticsearch/elasticsearch.go","file.line":66},"message":"Performance preset 'balanced' overrides user setting for field 'bulk_max_size'","service.name":"filebeat","ecs.version":"1.6.0"}
{"log.level":"info","@timestamp":"2024-09-09T20:26:49.626Z","log.logger":"esclientleg","log.origin":{"function":"github.com/elastic/beats/v7/libbeat/esleg/eslegclient.NewConnection","file.name":"eslegclient/connection.go","file.line":133},"message":"elasticsearch url: https://localhost:9200","service.name":"filebeat","ecs.version":"1.6.0"}
{"log.level":"info","@timestamp":"2024-09-09T20:26:49.626Z","log.logger":"publisher","log.origin":{"function":"github.com/elastic/beats/v7/libbeat/publisher/pipeline.LoadWithSettings","file.name":"pipeline/module.go","file.line":105},"message":"Beat name: EC2AMAZ-368U98E","service.name":"filebeat","ecs.version":"1.6.0"}
{"log.level":"info","@timestamp":"2024-09-09T20:26:49.626Z","log.logger":"modules","log.origin":{"function":"github.com/elastic/beats/v7/filebeat/fileset.newModuleRegistry","file.name":"fileset/modules.go","file.line":136},"message":"Enabled modules/filesets: ","service.name":"filebeat","ecs.version":"1.6.0"}

{"log.level":"info","@timestamp":"2024-09-09T20:26:54.992Z","log.origin":{"function":"github.com/elastic/beats/v7/libbeat/cmd/instance.(*Beat).configure","file.name":"instance/beat.go","file.line":828},"message":"Home path: [C:\\Program Files\\Filebeat] Config path: [C:\\Program Files\\Filebeat] Data path: [C:\\Program Files\\Filebeat\\data] Logs path: [C:\\Program Files\\Filebeat\\logs]","service.name":"filebeat","ecs.version":"1.6.0"}
{"log.level":"info","@timestamp":"2024-09-09T20:26:54.997Z","log.origin":{"function":"github.com/elastic/beats/v7/libbeat/cmd/instance.(*Beat).configure","file.name":"instance/beat.go","file.line":836},"message":"Beat ID: 7bcfa795-e6f1-4e1b-a2c4-b70411ab0df2","service.name":"filebeat","ecs.version":"1.6.0"}
{"log.level":"info","@timestamp":"2024-09-09T20:26:55.010Z","log.logger":"elasticsearch","log.origin":{"function":"github.com/elastic/beats/v7/libbeat/outputs/elasticsearch.makeES","file.name":"elasticsearch/elasticsearch.go","file.line":63},"message":"Applying performance preset 'balanced': {\n  \"bulk_max_size\": 1600,\n  \"compression_level\": 1,\n  \"idle_connection_timeout\": \"3s\",\n  \"queue\": {\n    \"mem\": {\n      \"events\": 3200,\n      \"flush\": {\n        \"min_events\": 1600,\n        \"timeout\": \"10s\"\n      }\n    }\n  },\n  \"worker\": 1\n}","service.name":"filebeat","ecs.version":"1.6.0"}
{"log.level":"warn","@timestamp":"2024-09-09T20:26:55.010Z","log.logger":"elasticsearch","log.origin":{"function":"github.com/elastic/beats/v7/libbeat/outputs/elasticsearch.makeES","file.name":"elasticsearch/elasticsearch.go","file.line":66},"message":"Performance preset 'balanced' overrides user setting for field 'bulk_max_size'","service.name":"filebeat","ecs.version":"1.6.0"}
{"log.level":"info","@timestamp":"2024-09-09T20:26:55.010Z","log.logger":"esclientleg","log.origin":{"function":"github.com/elastic/beats/v7/libbeat/esleg/eslegclient.NewConnection","file.name":"eslegclient/connection.go","file.line":133},"message":"elasticsearch url: https://localhost:9200","service.name":"filebeat","ecs.version":"1.6.0"}
{"log.level":"error","@timestamp":"2024-09-09T20:26:55.012Z","log.logger":"add_cloud_metadata","log.origin":{"function":"github.com/elastic/beats/v7/libbeat/processors/add_cloud_metadata.(*addCloudMetadata).fetchMetadata","file.name":"add_cloud_metadata/providers.go","file.line":190},"message":"add_cloud_metadata: received error failed with http status code 401","service.name":"filebeat","ecs.version":"1.6.0"}
{"log.level":"error","@timestamp":"2024-09-09T20:26:55.012Z","log.logger":"add_cloud_metadata","log.origin":{"function":"github.com/elastic/beats/v7/libbeat/processors/add_cloud_metadata.(*addCloudMetadata).fetchMetadata","file.name":"add_cloud_metadata/providers.go","file.line":190},"message":"add_cloud_metadata: received error failed with http status code 401","service.name":"filebeat","ecs.version":"1.6.0"}
{"log.level":"error","@timestamp":"2024-09-09T20:26:55.013Z","log.logger":"add_cloud_metadata","log.origin":{"function":"github.com/elastic/beats/v7/libbeat/processors/add_cloud_metadata.(*addCloudMetadata).fetchMetadata","file.name":"add_cloud_metadata/providers.go","file.line":190},"message":"add_cloud_metadata: received error failed with http status code 401","service.name":"filebeat","ecs.version":"1.6.0"}
{"log.level":"error","@timestamp":"2024-09-09T20:26:55.013Z","log.logger":"add_cloud_metadata","log.origin":{"function":"github.com/elastic/beats/v7/libbeat/processors/add_cloud_metadata.(*addCloudMetadata).fetchMetadata","file.name":"add_cloud_metadata/providers.go","file.line":190},"message":"add_cloud_metadata: received error failed with http status code 401","service.name":"filebeat","ecs.version":"1.6.0"}
{"log.level":"error","@timestamp":"2024-09-09T20:26:55.013Z","log.logger":"add_cloud_metadata","log.origin":{"function":"github.com/elastic/beats/v7/libbeat/processors/add_cloud_metadata.(*addCloudMetadata).fetchMetadata","file.name":"add_cloud_metadata/providers.go","file.line":190},"message":"add_cloud_metadata: received error failed with http status code 401","service.name":"filebeat","ecs.version":"1.6.0"}
{"log.level":"warn","@timestamp":"2024-09-09T20:26:55.015Z","log.logger":"add_cloud_metadata","log.origin":{"function":"github.com/elastic/beats/v7/libbeat/processors/add_cloud_metadata.fetchRawProviderMetadata","file.name":"add_cloud_metadata/provider_aws_ec2.go","file.line":108},"message":"error fetching cluster name metadata: error fetching EC2 Tags: operation error EC2: DescribeTags, get identity: get credentials: failed to refresh cached credentials, no EC2 IMDS role found, operation error ec2imds: GetMetadata, http response error StatusCode: 404, request to EC2 IMDS failed.","service.name":"filebeat","ecs.version":"1.6.0"}
{"log.level":"info","@timestamp":"2024-09-09T20:26:55.015Z","log.logger":"add_cloud_metadata","log.origin":{"function":"github.com/elastic/beats/v7/libbeat/processors/add_cloud_metadata.(*addCloudMetadata).init.func1","file.name":"add_cloud_metadata/add_cloud_metadata.go","file.line":104},"message":"add_cloud_metadata: hosting provider type detected as aws, metadata={\"cloud\":{\"account\":{\"id\":\"606565331724\"},\"availability_zone\":\"us-east-1b\",\"image\":{\"id\":\"ami-0e6760ea2851c035a\"},\"instance\":{\"id\":\"i-0600759f4d3851ef9\"},\"machine\":{\"type\":\"r5.4xlarge\"},\"provider\":\"aws\",\"region\":\"us-east-1\",\"service\":{\"name\":\"EC2\"}}}","service.name":"filebeat","ecs.version":"1.6.0"}
{"log.level":"info","@timestamp":"2024-09-09T20:26:55.028Z","log.logger":"tls","log.origin":{"function":"github.com/elastic/elastic-agent-libs/transport/tlscommon.trustRootCA","file.name":"tlscommon/tls_config.go","file.line":179},"message":"'ca_trusted_fingerprint' set, looking for matching fingerprints","service.name":"filebeat","ecs.version":"1.6.0"}
{"log.level":"info","@timestamp":"2024-09-09T20:26:55.028Z","log.logger":"tls","log.origin":{"function":"github.com/elastic/elastic-agent-libs/transport/tlscommon.trustRootCA","file.name":"tlscommon/tls_config.go","file.line":199},"message":"CA certificate matching 'ca_trusted_fingerprint' found, adding it to 'certificate_authorities'","service.name":"filebeat","ecs.version":"1.6.0"}
{"log.level":"info","@timestamp":"2024-09-09T20:26:55.044Z","log.logger":"tls","log.origin":{"function":"github.com/elastic/elastic-agent-libs/transport/tlscommon.trustRootCA","file.name":"tlscommon/tls_config.go","file.line":179},"message":"'ca_trusted_fingerprint' set, looking for matching fingerprints","service.name":"filebeat","ecs.version":"1.6.0"}
{"log.level":"info","@timestamp":"2024-09-09T20:26:55.044Z","log.logger":"tls","log.origin":{"function":"github.com/elastic/elastic-agent-libs/transport/tlscommon.trustRootCA","file.name":"tlscommon/tls_config.go","file.line":199},"message":"CA certificate matching 'ca_trusted_fingerprint' found, adding it to 'certificate_authorities'","service.name":"filebeat","ecs.version":"1.6.0"}
{"log.level":"info","@timestamp":"2024-09-09T20:26:55.046Z","log.logger":"esclientleg","log.origin":{"function":"github.com/elastic/beats/v7/libbeat/esleg/eslegclient.(*Connection).Ping","file.name":"eslegclient/connection.go","file.line":322},"message":"Attempting to connect to Elasticsearch version 8.15.1 (default)","service.name":"filebeat","ecs.version":"1.6.0"}