Elasticsearch output of Filebeat is empty, Kibana displays HTTP error 400 Bad Request

So I installed Elasticsearch (v8.12.0) for Windows as per the guide given here: Elasticsearch Installation Guide.

Similarly, I installed Kibana (v8.12.0) for Windows as per the guide given here: Kibana Installation Guide.

I connected the two using the enrollment token generated when Elasticsearch is first started, and made note of the password plus the HTTP CA certificate SHA-256 fingerprint. Both Elasticsearch and Kibana run without issue locally on my laptop, on localhost ports 9200 and 5601 respectively.
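
For reference, if the enrollment token expires, a new one can be generated with the tool bundled with Elasticsearch (a minimal sketch; the install path is an assumption):

# Run from the Elasticsearch installation directory (path assumed)
cd "C:\Program Files\elasticsearch-8.12.0"
.\bin\elasticsearch-create-enrollment-token.bat -s kibana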

The problem came up when I installed Filebeat (v8.12.0) for Windows and set its output to Elasticsearch.

I followed the guide given here: Filebeat Installation Guide.
I finished steps 1, 2, and 3, but I'm having trouble with steps 4 and 5. When I run this command from step 4 in PowerShell,

filebeat setup -e

the Kibana console window repeatedly shows this error: [ERROR][http] 400 Bad Request (the same line appears in the Kibana logs further down).

After a few minutes, it stops showing the error, and I move on to step 5. Once I do step 5, I open Kibana on localhost:5601, and on the Discover landing page I see all the fields for Filebeat, but they are all empty, i.e., no log data from Filebeat has loaded in.

I can't tell if the log data is being sent or not, and if it is, I don't know why it's not showing up in Kibana. If anyone can help me with this, I'd really appreciate it. I apologize in advance if I've missed anything, as this is my first time working with and installing the ELK stack.

Here is the Elasticsearch output section of my filebeat.yml:

output.elasticsearch:
  # Array of hosts to connect to.
  hosts: ["https://localhost:9200"]

  # Performance preset - one of "balanced", "throughput", "scale",
  # "latency", or "custom".
  preset: balanced

  # Protocol - either `http` (default) or `https`.
  protocol: "https"

  # Authentication credentials - either API key or username/password.
  #api_key: "id:api_key"
  username: "elastic"
  password: "my-password"
  ssl:
    enabled: true 
    ca_trusted_fingerprint: "sha 256 fingerprint here"
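
Before running setup, the output connection can be sanity-checked from the Filebeat directory (a sketch; the install path is an assumption):

# Verify the config parses and the Elasticsearch output is reachable
cd "C:\Program Files\Filebeat"
.\filebeat.exe test config
.\filebeat.exe test output

test output confirms the host, the TLS trust established via ca_trusted_fingerprint, and the credentials before any data is sent.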

Can you share the full log from Filebeat and the full log from Kibana when this happens?

Certainly.
Here are the logs from before I run:

 .\filebeat.exe setup -e

Kibana logs:

[2024-02-02T20:18:18.639+05:30][INFO ][status] Kibana is now available (was degraded)
[2024-02-02T20:18:43.621+05:30][INFO ][plugins.security.routes] Logging in with provider "basic" (basic)

Filebeat logs:

{"log.level":"info","@timestamp":"2024-02-02T20:23:22.626+0530","log.origin":{"function":"github.com/elastic/beats/v7/libbeat/cmd/instance.(*Beat).configure","file.name":"instance/beat.go","file.line":811},"message":"Home path: [C:\\Program Files\\Filebeat] Config path: [C:\\Program Files\\Filebeat] Data path: [C:\\Program Files\\Filebeat\\data] Logs path: [C:\\Program Files\\Filebeat\\logs]","service.name":"filebeat","ecs.version":"1.6.0"}
{"log.level":"info","@timestamp":"2024-02-02T20:23:22.626+0530","log.origin":{"function":"github.com/elastic/beats/v7/libbeat/cmd/instance.(*Beat).configure","file.name":"instance/beat.go","file.line":819},"message":"Beat ID: 06ea7489-c448-456c-bc73-cf3185b9d146","service.name":"filebeat","ecs.version":"1.6.0"}
{"log.level":"info","@timestamp":"2024-02-02T20:23:22.648+0530","log.logger":"beat","log.origin":{"function":"github.com/elastic/beats/v7/libbeat/cmd/instance.logSystemInfo","file.name":"instance/beat.go","file.line":1337},"message":"Beat info","service.name":"filebeat","system_info":{"beat":{"path":{"config":"C:\\Program Files\\Filebeat","data":"C:\\Program Files\\Filebeat\\data","home":"C:\\Program Files\\Filebeat","logs":"C:\\Program Files\\Filebeat\\logs"},"type":"filebeat","uuid":"06ea7489-c448-456c-bc73-cf3185b9d146"},"ecs.version":"1.6.0"}}
{"log.level":"info","@timestamp":"2024-02-02T20:23:22.648+0530","log.logger":"beat","log.origin":{"function":"github.com/elastic/beats/v7/libbeat/cmd/instance.logSystemInfo","file.name":"instance/beat.go","file.line":1346},"message":"Build info","service.name":"filebeat","system_info":{"build":{"commit":"27c592782c25906c968a41f0a6d8b1955790c8c5","libbeat":"8.12.0","time":"2024-01-10T21:05:10.000Z","version":"8.12.0"},"ecs.version":"1.6.0"}}
...
{"log.level":"info","@timestamp":"2024-02-02T20:23:24.383+0530","log.logger":"kibana","log.origin":{"function":"github.com/elastic/elastic-agent-libs/kibana.NewClientWithConfigDefault","file.name":"kibana/client.go","file.line":182},"message":"Kibana url: http://localhost:5601","service.name":"filebeat","ecs.version":"1.6.0"}
{"log.level":"info","@timestamp":"2024-02-02T20:23:25.649+0530","log.logger":"add_cloud_metadata","log.origin":{"function":"github.com/elastic/beats/v7/libbeat/processors/add_cloud_metadata.(*addCloudMetadata).init.func1","file.name":"add_cloud_metadata/add_cloud_metadata.go","file.line":100},"message":"add_cloud_metadata: hosting provider type not detected.","service.name":"filebeat","ecs.version":"1.6.0"}

Here are the logs after the error message shows up:

Kibana logs:

[2024-02-02T20:25:07.212+05:30][ERROR][http] 400 Bad Request
[2024-02-02T20:33:16.831+05:30][INFO ][plugins.fleet] Fleet Usage: {"agents_enabled":true,"agents":{"total_enrolled":0,"healthy":0,"unhealthy":0,"offline":0,"inactive":0,"unenrolled":0,"total_all_statuses":0,"updating":0},"fleet_server":{"total_all_statuses":0,"total_enrolled":0,"healthy":0,"unhealthy":0,"offline":0,"updating":0,"num_host_urls":0}}

Filebeat logs:

{"log.level":"info","@timestamp":"2024-02-02T20:25:08.401+0530","log.origin":{"function":"github.com/elastic/beats/v7/libbeat/cmd/instance.(*Beat).loadDashboards","file.name":"instance/beat.go","file.line":1024},"message":"Kibana dashboards successfully loaded.","service.name":"filebeat","ecs.version":"1.6.0"}
Loaded dashboards
{"log.level":"info","@timestamp":"2024-02-02T20:25:08.402+0530","log.logger":"esclientleg","log.origin":{"function":"github.com/elastic/beats/v7/libbeat/esleg/eslegclient.NewConnection","file.name":"eslegclient/connection.go","file.line":122},"message":"elasticsearch url: https://localhost:9200","service.name":"filebeat","ecs.version":"1.6.0"}
{"log.level":"info","@timestamp":"2024-02-02T20:25:08.435+0530","log.logger":"tls","log.origin":{"function":"github.com/elastic/elastic-agent-libs/transport/tlscommon.trustRootCA","file.name":"tlscommon/tls_config.go","file.line":179},"message":"'ca_trusted_fingerprint' set, looking for matching fingerprints","service.name":"filebeat","ecs.version":"1.6.0"}
{"log.level":"info","@timestamp":"2024-02-02T20:25:08.435+0530","log.logger":"tls","log.origin":{"function":"github.com/elastic/elastic-agent-libs/transport/tlscommon.trustRootCA","file.name":"tlscommon/tls_config.go","file.line":199},"message":"CA certificate matching 'ca_trusted_fingerprint' found, adding it to 'certificate_authorities'","service.name":"filebeat","ecs.version":"1.6.0"}
...
{"log.level":"info","@timestamp":"2024-02-02T20:25:15.515+0530","log.logger":"modules","log.origin":{"function":"github.com/elastic/beats/v7/filebeat/fileset.LoadPipeline","file.name":"fileset/pipelines.go","file.line":135},"message":"Elasticsearch pipeline loaded.","service.name":"filebeat","pipeline":"filebeat-8.12.0-mongodb-log-pipeline-plaintext","ecs.version":"1.6.0"}
{"log.level":"info","@timestamp":"2024-02-02T20:25:15.516+0530","log.logger":"modules","log.origin":{"function":"github.com/elastic/beats/v7/filebeat/fileset.LoadPipeline","file.name":"fileset/pipelines.go","file.line":135},"message":"Elasticsearch pipeline loaded.","service.name":"filebeat","pipeline":"filebeat-8.12.0-mongodb-log-pipeline-json","ecs.version":"1.6.0"}
{"log.level":"error","@timestamp":"2024-02-02T20:25:15.516+0530","log.origin":{"function":"github.com/elastic/beats/v7/libbeat/cfgfile.(*Reloader).Load","file.name":"cfgfile/reload.go","file.line":255},"message":"Error loading configuration files: 1 error: Unable to hash given config: missing field accessing '0.audit' (source:'C:\\Program Files\\Filebeat\\modules.d\\gcp.yml.disabled')","service.name":"filebeat","ecs.version":"1.6.0"}
{"log.level":"info","@timestamp":"2024-02-02T20:25:15.516+0530","log.logger":"load","log.origin":{"function":"github.com/elastic/beats/v7/libbeat/cfgfile.(*RunnerList).Stop","file.name":"cfgfile/list.go","file.line":188},"message":"Stopping 68 runners ...","service.name":"filebeat","ecs.version":"1.6.0"}
Loaded Ingest pipelines

Hope this helps!


Does the Kibana log just say Bad Request, or is there more info on those log lines?

Is Kibana working otherwise? Can you make a visualization, save a search, etc.?

The only info other than the error messages is the normal startup logs for Kibana:

[2024-02-02T21:20:51.628+05:30][INFO ][root] Kibana is starting
[2024-02-02T21:20:51.698+05:30][INFO ][node] Kibana process configured with roles: [background_tasks, ui]
[2024-02-02T21:21:49.519+05:30][INFO ][plugins-service] Plugin "cloudChat" is disabled.
[2024-02-02T21:21:49.527+05:30][INFO ][plugins-service] Plugin "cloudExperiments" is disabled.
[2024-02-02T21:21:49.528+05:30][INFO ][plugins-service] Plugin "cloudFullStory" is disabled.
[2024-02-02T21:21:49.528+05:30][INFO ][plugins-service] Plugin "cloudGainsight" is disabled.
[2024-02-02T21:21:49.854+05:30][INFO ][plugins-service] Plugin "profilingDataAccess" is disabled.
[2024-02-02T21:21:49.855+05:30][INFO ][plugins-service] Plugin "profiling" is disabled.
[2024-02-02T21:21:49.936+05:30][INFO ][plugins-service] Plugin "securitySolutionServerless" is disabled.
[2024-02-02T21:21:49.937+05:30][INFO ][plugins-service] Plugin "serverless" is disabled.
[2024-02-02T21:21:49.938+05:30][INFO ][plugins-service] Plugin "serverlessObservability" is disabled.
[2024-02-02T21:21:49.939+05:30][INFO ][plugins-service] Plugin "serverlessSearch" is disabled.
[2024-02-02T21:21:50.383+05:30][INFO ][http.server.Preboot] http server running at http://localhost:5601

And yes, Kibana works fine otherwise. I can create visualizations and save searches.


Sorry -- for the log lines that say Bad Request, is that really the full log line? Or is there more info about the request in the line?

Not sure if I can add any value, but I think I'm experiencing the same issue.
I tried to set up Filebeat using filebeat.exe setup --dashboards,
and Kibana output a lot of [ERROR][http] 400 Bad Request errors while the Filebeat command executed.
I have disabled Elasticsearch security.

I set up a proxy between Filebeat and Kibana to find out what was going on.
It's a GET request to http://localhost:5601/api/saved_objects/_find that returns status 400, with the response:

{
    "error": "Bad Request",
    "message": "[request query.type]: expected at least one defined value but got [undefined]",
    "statusCode": 400
}

Not sure if it has any bad consequences.
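
For anyone who wants to reproduce it by hand (a sketch; assumes Kibana on localhost:5601 with security disabled, and uses the curl.exe bundled with recent Windows):

# Omitting the required "type" query parameter returns the 400 above
curl.exe -i "http://localhost:5601/api/saved_objects/_find"
# Supplying a type returns 200
curl.exe -i "http://localhost:5601/api/saved_objects/_find?type=dashboard"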


Unfortunately, there is no other info about the request in the log line. It shows just the timestamp and the error message.


Can you please share the rest of your filebeat config?

I think I know what is causing these errors, but they wouldn't explain why you see no data in Elasticsearch, so I think it makes sense to proceed with troubleshooting your inputs.

I've created an issue for the error messages in the meantime.

Absolutely. Here's the full filebeat.yml file:

###################### Filebeat Configuration Example #########################

# This file is an example configuration file highlighting only the most common
# options. The filebeat.reference.yml file from the same directory contains all the
# supported options with more comments. You can use it as a reference.
#
# You can find the full configuration reference here:
# https://www.elastic.co/guide/en/beats/filebeat/index.html

# For more available modules and options, please see the filebeat.reference.yml sample
# configuration file.

# ============================== Filebeat inputs ===============================

filebeat.inputs:

# Each - is an input. Most options can be set at the input level, so
# you can use different inputs for various configurations.
# Below are the input-specific configurations.

# filestream is an input for collecting log messages from files.
- type: filestream

  # Unique ID among all inputs, an ID is required.
  id: my-filestream-id

  # Change to true to enable this input configuration.
  enabled: false

  # Paths that should be crawled and fetched. Glob based paths.
  paths:
    - /var/log/*.log
    #- c:\programdata\elasticsearch\logs\*

  # Exclude lines. A list of regular expressions to match. It drops the lines that are
  # matching any regular expression from the list.
  # Line filtering happens after the parsers pipeline. If you would like to filter lines
  # before parsers, use include_message parser.
  #exclude_lines: ['^DBG']

  # Include lines. A list of regular expressions to match. It exports the lines that are
  # matching any regular expression from the list.
  # Line filtering happens after the parsers pipeline. If you would like to filter lines
  # before parsers, use include_message parser.
  #include_lines: ['^ERR', '^WARN']

  # Exclude files. A list of regular expressions to match. Filebeat drops the files that
  # are matching any regular expression from the list. By default, no files are dropped.
  #prospector.scanner.exclude_files: ['.gz$']

  # Optional additional fields. These fields can be freely picked
  # to add additional information to the crawled log files for filtering
  #fields:
  #  level: debug
  #  review: 1

# ============================== Filebeat modules ==============================

filebeat.config.modules:
  # Glob pattern for configuration loading
  path: ${path.config}/modules.d/*.yml

  # Set to true to enable config reloading
  reload.enabled: false

  # Period on which files under path should be checked for changes
  #reload.period: 10s

# ======================= Elasticsearch template setting =======================

setup.template.settings:
  index.number_of_shards: 1
  #index.codec: best_compression
  #_source.enabled: false


# ================================== General ===================================

# The name of the shipper that publishes the network data. It can be used to group
# all the transactions sent by a single shipper in the web interface.
#name:

# The tags of the shipper are included in their field with each
# transaction published.
#tags: ["service-X", "web-tier"]

# Optional fields that you can specify to add additional information to the
# output.
#fields:
#  env: staging

# ================================= Dashboards =================================
# These settings control loading the sample dashboards to the Kibana index. Loading
# the dashboards is disabled by default and can be enabled either by setting the
# options here or by using the `setup` command.
#setup.dashboards.enabled: false

# The URL from where to download the dashboard archive. By default, this URL
# has a value that is computed based on the Beat name and version. For released
# versions, this URL points to the dashboard archive on the artifacts.elastic.co
# website.
#setup.dashboards.url:

# =================================== Kibana ===================================

# Starting with Beats version 6.0.0, the dashboards are loaded via the Kibana API.
# This requires a Kibana endpoint configuration.
setup.kibana:

  # Kibana Host
  # Scheme and port can be left out and will be set to the default (http and 5601)
  # In case you specify an additional path, the scheme is required: http://localhost:5601/path
  # IPv6 addresses should always be defined as: https://[2001:db8::1]:5601
  host: "localhost:5601"

  # Kibana Space ID
  # ID of the Kibana Space into which the dashboards should be loaded. By default,
  # the Default Space will be used.
  #space.id:

# =============================== Elastic Cloud ================================

# These settings simplify using Filebeat with the Elastic Cloud (https://cloud.elastic.co/).

# The cloud.id setting overwrites the `output.elasticsearch.hosts` and
# `setup.kibana.host` options.
# You can find the `cloud.id` in the Elastic Cloud web UI.
#cloud.id:

# The cloud.auth setting overwrites the `output.elasticsearch.username` and
# `output.elasticsearch.password` settings. The format is `<user>:<pass>`.
#cloud.auth:

# ================================== Outputs ===================================

# Configure what output to use when sending the data collected by the beat.

# ---------------------------- Elasticsearch Output ----------------------------
output.elasticsearch:
  # Array of hosts to connect to.
  hosts: ['https://myESHost:9200']

  # Performance preset - one of "balanced", "throughput", "scale",
  # "latency", or "custom".
  preset: balanced

  # Protocol - either `http` (default) or `https`.
  protocol: "https"

  # Authentication credentials - either API key or username/password.
  #api_key: "id:api_key"
  username: "elastic"
  password: "my password here"
  ssl:
    enabled: true
    ca_trusted_fingerprint: "my SHA fingerprint here"

# ------------------------------ Logstash Output -------------------------------
#output.logstash:
  # The Logstash hosts
  #hosts: ["localhost:5044"]

  # Optional SSL. By default is off.
  # List of root certificates for HTTPS server verifications
  #ssl.certificate_authorities: ["/etc/pki/root/ca.pem"]

  # Certificate for SSL client authentication
  #ssl.certificate: "/etc/pki/client/cert.pem"

  # Client Certificate Key
  #ssl.key: "/etc/pki/client/cert.key"

# ================================= Processors =================================
processors:
  - add_host_metadata:
      when.not.contains.tags: forwarded
  - add_cloud_metadata: ~
  - add_docker_metadata: ~
  - add_kubernetes_metadata: ~

# ================================== Logging ===================================

# Sets log level. The default log level is info.
# Available log levels are: error, warning, info, debug
#logging.level: debug

# At debug level, you can selectively enable logging only for some components.
# To enable all selectors, use ["*"]. Examples of other selectors are "beat",
# "publisher", "service".
#logging.selectors: ["*"]

# ============================= X-Pack Monitoring ==============================
# Filebeat can export internal metrics to a central Elasticsearch monitoring
# cluster.  This requires xpack monitoring to be enabled in Elasticsearch.  The
# reporting is disabled by default.

# Set to true to enable the monitoring reporter.
#monitoring.enabled: false

# Sets the UUID of the Elasticsearch cluster under which monitoring data for this
# Filebeat instance will appear in the Stack Monitoring UI. If output.elasticsearch
# is enabled, the UUID is derived from the Elasticsearch cluster referenced by output.elasticsearch.
#monitoring.cluster_uuid:

# Uncomment to send the metrics to Elasticsearch. Most settings from the
# Elasticsearch outputs are accepted here as well.
# Note that the settings should point to your Elasticsearch *monitoring* cluster.
# Any setting that is not set is automatically inherited from the Elasticsearch
# output configuration, so if you have the Elasticsearch output configured such
# that it is pointing to your Elasticsearch monitoring cluster, you can simply
# uncomment the following line.
#monitoring.elasticsearch:

# ============================== Instrumentation ===============================

# Instrumentation support for the filebeat.
#instrumentation:
    # Set to true to enable instrumentation of filebeat.
    #enabled: false

    # Environment in which filebeat is running on (eg: staging, production, etc.)
    #environment: ""

    # APM Server hosts to report instrumentation results to.
    #hosts:
    #  - http://localhost:8200

    # API Key for the APM Server(s).
    # If api_key is set then secret_token will be ignored.
    #api_key:

    # Secret token for the APM Server(s).
    #secret_token:


# ================================= Migration ==================================

# This allows to enable 6.7 migration aliases
#migration.6_to_7.enabled: true


Your input is disabled; can you toggle enabled to true?
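
For clarity, that's this part of filebeat.yml (a minimal sketch; the Windows path comes from the commented example in your own config):

filebeat.inputs:
- type: filestream
  id: my-filestream-id
  # Change this from false to true so the input actually runs
  enabled: true
  paths:
    # On Windows, /var/log/*.log will rarely match anything; point this at real files
    - c:\programdata\elasticsearch\logs\*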

I enabled it and ran it again. All the fields are still empty.

Can you check the index list in Stack Monitoring and verify that 1) the Filebeat indices exist and 2) they show as having zero documents?

Can you share a screenshot of this in Stack Monitoring if possible?
Can you also share the full Filebeat config and the full Filebeat log from the latest run?
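
If Stack Monitoring is awkward, the same check can be done against the REST API (a sketch; assumes the elastic superuser, with -k skipping certificate verification for brevity):

# You will be prompted for the elastic password
curl.exe -k -u elastic "https://localhost:9200/_cat/indices/filebeat-*?v"

An empty response means the Filebeat data stream was never created; a row with a docs.count of 0 means it exists but nothing has been indexed.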

Sure, here is the image:

I have expanded the time range, but it doesn't help.

I'm sorry, but I don't understand what is meant by the full Filebeat config. I have already sent filebeat.yml, and I haven't edited any other files.

After running .\filebeat.exe setup -e, the full Filebeat log is too long, so I tried to include the important logs in my third reply. As far as I can tell, the other logs are more or less repetitions of the ones in that reply.

Thanks!

Can you run filebeat run -e and share the log?

Hi! Sorry for the late reply. Here are the logs:

{"log.level":"info","@timestamp":"2024-02-12T10:32:46.989+0530","log.origin":{"function":"github.com/elastic/beats/v7/libbeat/cmd/instance.(*Beat).configure","file.name":"instance/beat.go","file.line":811},"message":"Home path: [C:\\Program Files\\Filebeat] Config path: [C:\\Program Files\\Filebeat] Data path: [C:\\Program Files\\Filebeat\\data] Logs path: [C:\\Program Files\\Filebeat\\logs]","service.name":"filebeat","ecs.version":"1.6.0"}
"log.level":"info","@timestamp":"2024-02-12T10:32:47.038+0530","log.logger":"beat","log.origin":{"function":"github.com/elastic/beats/v7/libbeat/cmd/instance.logSystemInfo","file.name":"instance/beat.go","file.line":1346},"message":"Build
info","service.name":"filebeat","system_info":{"build":{"commit":"27c592782c25906c968a41f0a6d8b1955790c8c5","libbeat":"8.12.0","time":"2024-01-10T21:05:10.000Z","version":"8.12.0"},"ecs.version":"1.6.0"}}
{"log.level":"error","@timestamp":"2024-02-12T10:32:47.039+0530","log.logger":"add_cloud_metadata","log.origin":{"function":"github.com/elastic/beats/v7/libbeat/processors/add_cloud_metadata.(*addCloudMetadata).fetchMetadata","file.name":"add_cloud_metadata/providers.go","file.line":173},"message":"add_cloud_metadata: received error failed requesting openstack metadata: Get \"https://169.254.169.254/2009-04-04/meta-data/instance-id\": dial tcp 169.254.169.254:443: connectex: A socket operation was attempted to an unreachable network.","service.name":"filebeat","ecs.version":"1.6.0"}
{"log.level":"info","@timestamp":"2024-02-12T10:32:47.056+0530","log.logger":"beat","log.origin":{"function":"github.com/elastic/beats/v7/libbeat/cmd/instance.logSystemInfo","file.name":"instance/beat.go","file.line":1384},"message":"Process info","service.name":"filebeat","system_info":{"process":{"cwd":"C:\\Program Files\\Filebeat","exe":"C:\\Program Files\\Filebeat\\filebeat.exe","name":"filebeat.exe","pid":29272,"ppid":27928,"start_time":"2024-02-12T10:32:46.904+0530"},"ecs.version":"1.6.0"}}
{"log.level":"info","@timestamp":"2024-02-12T10:32:47.056+0530","log.origin":{"function":"github.com/elastic/beats/v7/libbeat/cmd/instance.(*Beat).createBeater","file.name":"instance/beat.go","file.line":334},"message":"Setup Beat: filebeat; Version: 8.12.0","service.name":"filebeat","ecs.version":"1.6.0"}
{"log.level":"info","@timestamp":"2024-02-12T10:32:47.089+0530","log.logger":"elasticsearch","log.origin":{"function":"github.com/elastic/beats/v7/libbeat/outputs/elasticsearch.makeES","file.name":"elasticsearch/elasticsearch.go","file.line":63},"message":"Applying performance preset 'balanced': {\n  \"bulk_max_size\": 1600,\n  \"compression_level\": 1,\n  \"idle_connection_timeout\": \"3s\",\n  \"queue\": {\n    \"mem\": {\n      \"events\": 3200,\n      \"flush\": {\n        \"min_events\": 1600,\n        \"timeout\": \"10s\"\n      }\n    }\n  },\n  \"worker\": 1\n}","service.name":"filebeat","ecs.version":"1.6.0"}
{"log.level":"info","@timestamp":"2024-02-12T10:32:47.090+0530","log.origin":{"function":"github.com/elastic/beats/v7/libbeat/cmd/instance.(*Beat).launch","file.name":"instance/beat.go","file.line":520},"message":"filebeat start running.","service.name":"filebeat","ecs.version":"1.6.0"}
{"log.level":"info","@timestamp":"2024-02-12T10:32:47.090+0530","log.logger":"monitoring","log.origin":{"function":"github.com/elastic/beats/v7/libbeat/monitoring/report/log.(*reporter).snapshotLoop","file.name":"log/log.go","file.line":145},"message":"Starting metrics logging every 30s","service.name":"filebeat","ecs.version":"1.6.0"}
{"log.level":"info","@timestamp":"2024-02-12T10:32:47.095+0530","log.origin":{"function":"github.com/elastic/beats/v7/libbeat/statestore/backend/memlog.openStore","file.name":"memlog/store.go","file.line":134},"message":"Finished loading transaction log file for 'C:\\Program Files\\Filebeat\\data\\registry\\filebeat'. Active transaction id=0","service.name":"filebeat","ecs.version":"1.6.0"}
{"log.level":"info","@timestamp":"2024-02-12T10:32:47.099+0530","log.logger":"crawler","log.origin":{"function":"github.com/elastic/beats/v7/filebeat/beater.(*crawler).startInput","file.name":"beater/crawler.go","file.line":148},"message":"Starting input (ID: 11337388005444501392)","service.name":"filebeat","ecs.version":"1.6.0"}
{"log.level":"info","@timestamp":"2024-02-12T10:32:47.099+0530","log.logger":"input.filestream","log.origin":{"function":"github.com/elastic/beats/v7/filebeat/input/v2/compat.(*runner).Start.func1","file.name":"compat/compat.go","file.line":121},"message":"Input 'filestream' starting","service.name":"filebeat","id":"my-filestream-id","ecs.version":"1.6.0"}
{"log.level":"info","@timestamp":"2024-02-12T10:32:47.111+0530","log.logger":"input","log.origin":{"function":"github.com/elastic/beats/v7/filebeat/input/log.NewInput","file.name":"log/input.go","file.line":174},"message":"Configured paths: [C:\\var\\log\\nginx\\access.log*]","service.name":"filebeat","input_id":"bc0e119a-66d0-47f7-993d-be13c506dd6e","ecs.version":"1.6.0"}
{"log.level":"info","@timestamp":"2024-02-12T10:33:47.093+0530","log.logger":"monitoring","log.origin":{"function":"github.com/elastic/beats/v7/libbeat/monitoring/report/log.(*reporter).logSnapshot","file.name":"log/log.go","file.line":187},"message":"Non-zero metrics in the last 30s","service.name":"filebeat","monitoring":{"metrics":{"beat":{"cpu":{"system":{"ticks":31},"total":{"ticks":62,"value":62},"user":{"ticks":31}},"info":{"ephemeral_id":"067e8f89-af4b-4733-8b61-53a193f1ceb1","uptime":{"ms":60161},"version":"8.12.0"},"memstats":{"gc_next":35821104,"memory_alloc":22853016,"memory_total":63349760,"rss":78270464},"runtime":{"goroutines":36}},"filebeat":{"events":{"active":0},"harvester":{"open_files":0,"running":0}},"libbeat":{"config":{"module":{"running":1}},"output":{"events":{"active":0}},"pipeline":{"clients":1,"events":{"active":0}}},"registrar":{"states":{"current":0}},"system":{"handles":{"open":-1}}},"ecs.version":"1.6.0"}}

I assumed there was an issue with Filebeat trying to collect the logs, so I changed the paths in my nginx.yml file to a custom path, but that did not help.

It looks like it doesn't see any files to monitor. Can you run Filebeat with filebeat -e -d "*" and share the output?
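
For example, from PowerShell (a sketch; the install path is an assumption), keeping a copy of the debug output in a file:

cd "C:\Program Files\Filebeat"
# -e logs to stderr; -d "*" enables all debug selectors
.\filebeat.exe -e -d "*" 2>&1 | Tee-Object -FilePath .\filebeat-debug.log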

Sure. Here are the message fields of the output logs:

message : filebeat start running.   
message : Windows is interactive: true   
message : isFile(C:\\Program Files\\Filebeat\\data\\registry) -> false   
message : isFile() -> false   
message : isDir(C:\\Program Files\\Filebeat\\data\\registry\\filebeat) -> true   
message : isFile(C:\\Program Files\\Filebeat\\data\\registry\\filebeat\\meta.json) -> true   
message : Registry type '1' found   
message : Finished loading transaction log file for 'C:\\Program Files\\Filebeat\\data\\registry\\filebeat'. Active transaction id=0   
message : Finished loading transaction log file for 'C:\\Program Files\\Filebeat\\data\\registry\\filebeat'. Active transaction id=0   
message : creating new InputManager   
message : States Loaded from registrar: 0   
message : Starting Registrar   
message : Loading Inputs: 1   
message : starting input keys present on the config: [filebeat.inputs.0.enabled filebeat.inputs.0.id filebeat.inputs.0.paths.0 filebeat.inputs.0.type]   
message : recursive glob enabled   
message : file identity is set to native   
message : Starting input (ID: 11337388005444501392)   
message : Checking module configs from: C:\\Program Files\\Filebeat/modules.d/*.yml   
message : Input 'filestream' starting   
message : Load config from file: C:\\Program Files\\Filebeat\\modules.d\\nginx.yml   
message : Number of module configs found: 1   
message : registering   
message : Enabled modules/filesets: nginx (access)   
message : Starting prospector   
message : Start next scan   
message : File scan complete   
message : DEPRECATED: Log input. Use Filestream input instead.   
message : recursive glob enabled   
message : exclude_files: [(?-s:.)gz(?-m:$)]. Number of states: 0   
message : input with previous states loaded: 0   
message : Configured paths: [C:\\Program Files\\nginx-1.25.3\\logs]   
message : Loading and starting Inputs completed. Enabled inputs: 1   
message : Config reloader started   
message : Scan for new config files   
message : Load config from file: C:\\Program Files\\Filebeat\\modules.d\\nginx.yml   
message : Number of module configs found: 1   
message : Starting reload procedure current runners: 0   
message : Start list: 1 Stop list: 0   
message : Enabled modules/filesets: nginx (access)   
message : recursive glob enabled   
message : Generated new processors: add_locale=[format=offset] add_fields={\ ecs\ :{\ version\ :\ 1.12.0\ }}   
message : exclude_files: [(?-s:.)gz(?-m:$)]. Number of states: 0   
message : input with previous states loaded: 0   
message : Configured paths: [C:\\Program Files\\nginx-1.25.3\\logs]   
message : Starting runner: nginx (access)
message : Pipeline already exists in Elasticsearch.   
message : Loading of config files completed.   
message : Start next scan   
message : Skipping directory: C:\\Program Files\\nginx-1.25.3\\logs   
message : input states cleaned up. Before: 0 After: 0 Pending: 0   
message : Start next scan   
message : File scan complete
message : Error reading from connection: read tcp myESHost:64282->myESHost:9200: use of closed network connection   
message : Run input   
message : Start next scan   
message : Skipping directory: C:\\Program Files\\nginx-1.25.3\\logs   
message : input states cleaned up. Before: 0 After: 0 Pending: 0   
message : Start next scan   
message : File scan complete

Can you share a screenshot of Explorer showing the files that exist in that nginx logs folder?

It looks like Filebeat is either 1) not seeing any files in that directory or 2) believes it has already ingested the files.
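
One detail that stands out in the debug output above: Configured paths: [C:\Program Files\nginx-1.25.3\logs] followed by Skipping directory. That suggests the module path points at the logs directory itself rather than at files inside it. A sketch of modules.d/nginx.yml using file globs instead (the exact filenames are an assumption based on the default nginx layout):

- module: nginx
  access:
    enabled: true
    # The glob must match files, not the directory itself (filenames assumed)
    var.paths: ["C:/Program Files/nginx-1.25.3/logs/access.log*"]
  error:
    enabled: true
    var.paths: ["C:/Program Files/nginx-1.25.3/logs/error.log*"]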

Certainly. Here's the screenshot: