Long delay for logs to appear in the Discover tab

Hello, I am using Filebeat to ship my HAProxy logs to the ELK stack server (192.168.1.3).

It looks like I have tried everything:

Reinstalling the ELK stack
Reinstalling Filebeat
Restarting both servers
Deleting the registry and adding filebeat.registry_file: ${path.data}/registry3 to filebeat.yml (see the sketch just below)
Adding ignore_older: 5m to the haproxy module config (haproxy.yml)
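
If I'm reading the Filebeat 6.x docs right, filebeat.registry_file is a top-level setting in filebeat.yml (the default is ${path.data}/registry), so the line I added sits at the top level, outside any input - a minimal sketch, assuming Filebeat 6.x:

# Top-level setting in filebeat.yml, not nested under filebeat.inputs
filebeat.registry_file: ${path.data}/registry3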

Here's my full haproxy.yml -

- module: haproxy
  log:
    enabled: true
    var.paths: ["/var/log/haproxy.log"]
    var.input: "file"
    ignore_older: 5m
    worker: 4
    bulk_max_size: 500
    spooler_size: 4096
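
Side note: rereading the module docs, I suspect ignore_older needs to go under an input: block, and that worker, bulk_max_size and spooler_size aren't fileset settings at all - the first two look like output options, and spooler_size looks like the old 5.x spooler setting. If that's right, the module file would shrink to something like this sketch (with the output tuning moved into filebeat.yml, as shown after my full config below):

- module: haproxy
  log:
    enabled: true
    var.paths: ["/var/log/haproxy.log"]
    var.input: "file"
    # Input-level overrides go under an input: block in module configs
    input:
      ignore_older: 5m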

And of course the filebeat.yml -

#=========================== Filebeat inputs =============================

filebeat.inputs:

# Each - is an input. Most options can be set at the input level, so
# you can use different inputs for various configurations.
# Below are the input specific configurations.

- type: log

  # Change to true to enable this input configuration.
  enabled: false

  # Paths that should be crawled and fetched. Glob based paths.
  paths:
    #- /var/log/*.log
    #- c:\programdata\elasticsearch\logs\*
    #- /var/log/haproxy.log
filebeat.registry_file: ${path.data}/registry3


#============================= Filebeat modules ===============================

filebeat.config.modules:
  # Glob pattern for configuration loading
  path: ${path.config}/modules.d/*.yml

  # Set to true to enable config reloading
  reload.enabled: false

  # Period on which files under path should be checked for changes
  #reload.period: 10s

#==================== Elasticsearch template setting ==========================

setup.template.settings:
  index.number_of_shards: 3
  #index.codec: best_compression
  #_source.enabled: false

#================================ General =====================================

# The name of the shipper that publishes the network data. It can be used to group
# all the transactions sent by a single shipper in the web interface.
#name:

# The tags of the shipper are included in their own field with each
# transaction published.
#tags: ["service-X", "web-tier"]

# Optional fields that you can specify to add additional information to the
# output.
#fields:
#  env: staging


#============================== Dashboards =====================================
# These settings control loading the sample dashboards to the Kibana index. Loading
# the dashboards is disabled by default and can be enabled either by setting the
# options here, or by using the `-setup` CLI flag or the `setup` command.
#setup.dashboards.enabled: false

# The URL from where to download the dashboards archive. By default this URL
# has a value which is computed based on the Beat name and version. For released
# versions, this URL points to the dashboard archive on the artifacts.elastic.co
# website.
#setup.dashboards.url:

#============================== Kibana =====================================

# Starting with Beats version 6.0.0, the dashboards are loaded via the Kibana API.
# This requires a Kibana endpoint configuration.
setup.kibana:

  # Kibana Host
  # Scheme and port can be left out and will be set to the default (http and 5601)
  # In case you specify an additional path, the scheme is required: http://localhost:5601/path
  # IPv6 addresses should always be defined as: https://[2001:db8::1]:5601
  #host: "localhost:5601"

  # Kibana Space ID
  # ID of the Kibana Space into which the dashboards should be loaded. By default,
  # the Default Space will be used.
  #space.id:

#============================= Elastic Cloud ==================================

# These settings simplify using filebeat with the Elastic Cloud (https://cloud.elastic.co/).

# The cloud.id setting overwrites the `output.elasticsearch.hosts` and
# `setup.kibana.host` options.
# You can find the `cloud.id` in the Elastic Cloud web UI.
#cloud.id:

# The cloud.auth setting overwrites the `output.elasticsearch.username` and
# `output.elasticsearch.password` settings. The format is `<user>:<pass>`.
#cloud.auth:
#cloud.id: "kobra:dXMtZWFzdC0xLmF3cy5mb3VuZC5pbyRmZWZiMDFhMzdkNDk0MTJlOWM5Y2E2ODlmY2VkNDdjYiQ3NjJhZWYwNjQ5NGQ0YzI5OTQ1NzFmNjE2MTQwOTBlNg=="
#cloud.auth: "elastic:hKavlsPYFsHuqTzt4SgmC9RC"

#================================ Outputs =====================================

# Configure what output to use when sending the data collected by the beat.

#-------------------------- Elasticsearch output ------------------------------
#output.elasticsearch:
  # Array of hosts to connect to.
# hosts: ["192.168.1.3:9300"]
# hosts: ["192.168.1.3:9200"]

  # Enable ILM (beta) to use index lifecycle management instead of daily indices.
  #ilm.enabled: false

  # Optional protocol and basic auth credentials.
  #protocol: "https"
  #username: "elastic"
  #password: "changeme"

output.elasticsearch:
  hosts: ["192.168.1.3:9200"]
  username: "admin"
  password: "test22_"

Here's the result - Discover returns no documents for my time range. (screenshot omitted)

The dates on both servers are identical.

Really feeling desperate at this point. Please help.

P.S. After starting Filebeat and running curl '192.168.1.3:9200/_cat/indices?v', I can see the document count going up.


If you click on the "Inspect" button, you can get the exact request made to Elasticsearch. Maybe you could try manually sending that query in Dev Tools and make sense of why it's returning no documents?
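
For example, something like this in Dev Tools would show the newest event that actually got indexed, so you can check whether its @timestamp falls inside the time range Discover is querying (adjust the index pattern if yours is different):

GET filebeat-*/_search
{
  "size": 1,
  "sort": [ { "@timestamp": "desc" } ]
}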
