[Unresolved] Absolutely nothing shows in any [Filebeat] Kibana Dashboards ("No results found")

Host: Debian 9
ELK Stack version: 6.6.0
Sending logs through Elasticsearch or Logstash: Elasticsearch (hopefully Logstash later)

Summary:
I get "No results found :neutral_face:" on every [Filebeat] Dashboard in Kibana. I would appreciate any guidance in getting results to appear. Please give me the exact file path of any logs/files you need me to show you. Thank you!

/etc/filebeat/filebeat.yml (Part 1 of 2):

###################### Filebeat Configuration Example #########################

# This file is an example configuration file highlighting only the most common
# options. The filebeat.reference.yml file from the same directory contains all the
# supported options with more comments. You can use it as a reference.
#
# You can find the full configuration reference here:
# https://www.elastic.co/guide/en/beats/filebeat/index.html

# For more available modules and options, please see the filebeat.reference.yml sample
# configuration file.

#=========================== Filebeat inputs =============================

filebeat.inputs:

# Each - is an input. Most options can be set at the input level, so
# you can use different inputs for various configurations.
# Below are the input specific configurations.

- type: log

  # Change to true to enable this input configuration.
  enabled: true

  # Paths that should be crawled and fetched. Glob based paths.
  paths:
    - /var/log/*.log
    #- c:\programdata\elasticsearch\logs\*

  # Exclude lines. A list of regular expressions to match. It drops the lines that are
  # matching any regular expression from the list.
  #exclude_lines: ['^DBG']

  # Include lines. A list of regular expressions to match. It exports the lines that are
  # matching any regular expression from the list.
  #include_lines: ['^ERR', '^WARN']

  # Exclude files. A list of regular expressions to match. Filebeat drops the files that
  # are matching any regular expression from the list. By default, no files are dropped.
  #exclude_files: ['.gz$']

  # Optional additional fields. These fields can be freely picked
  # to add additional information to the crawled log files for filtering
  #fields:
  #  level: debug
  #  review: 1

  ### Multiline options

  # Multiline can be used for log messages spanning multiple lines. This is common
  # for Java Stack Traces or C-Line Continuation


  # The regexp Pattern that has to be matched. The example pattern matches all lines starting with [
  #multiline.pattern: ^\[

  # Defines if the pattern set under pattern should be negated or not. Default is false.
  #multiline.negate: false

  # Match can be set to "after" or "before". It is used to define if lines should be append to a pattern
  # that was (not) matched before or after or as long as a pattern is not matched based on negate.
  # Note: After is the equivalent to previous and before is the equivalent to to next in Logstash
  #multiline.match: after


#============================= Filebeat modules ===============================

filebeat.config.modules:
  # Glob pattern for configuration loading
  path: ${path.config}/modules.d/*.yml

  # Set to true to enable config reloading
  reload.enabled: false

  # Period on which files under path should be checked for changes
  #reload.period: 10s

#==================== Elasticsearch template setting ==========================

setup.template.settings:
  index.number_of_shards: 3
  #index.codec: best_compression
  #_source.enabled: false

#================================ General =====================================

# The name of the shipper that publishes the network data. It can be used to group
# all the transactions sent by a single shipper in the web interface.
#name:

# The tags of the shipper are included in their own field with each
# transaction published.
#tags: ["service-X", "web-tier"]

# Optional fields that you can specify to add additional information to the
# output.
#fields:
#  env: staging


#============================== Dashboards =====================================
# These settings control loading the sample dashboards to the Kibana index. Loading
# the dashboards is disabled by default and can be enabled either by setting the
# options here, or by using the `-setup` CLI flag or the `setup` command.
#setup.dashboards.enabled: false

# The URL from where to download the dashboards archive. By default this URL
# has a value which is computed based on the Beat name and version. For released
# versions, this URL points to the dashboard archive on the artifacts.elastic.co
# website.
#setup.dashboards.url:

#============================== Kibana =====================================

# Starting with Beats version 6.0.0, the dashboards are loaded via the Kibana API.
# This requires a Kibana endpoint configuration.
setup.kibana:

  # Kibana Host
  # Scheme and port can be left out and will be set to the default (http and 5601)
  # In case you specify and additional path, the scheme is required: http://localhost:5601/path
  # IPv6 addresses should always be defined as: https://[2001:db8::1]:5601
  #host: "localhost:5601"

  # Kibana Space ID
  # ID of the Kibana Space into which the dashboards should be loaded. By default,
  # the Default Space will be used.
  #space.id:

#============================= Elastic Cloud ==================================

# These settings simplify using filebeat with the Elastic Cloud (https://cloud.elastic.co/).

# The cloud.id setting overwrites the `output.elasticsearch.hosts` and
# `setup.kibana.host` options.
# You can find the `cloud.id` in the Elastic Cloud web UI.
#cloud.id:

# The cloud.auth setting overwrites the `output.elasticsearch.username` and
# `output.elasticsearch.password` settings. The format is `<user>:<pass>`.
#cloud.auth:

/etc/filebeat/filebeat.yml (Part 2 of 2):

#================================ Outputs =====================================


# Configure what output to use when sending the data collected by the beat.

#-------------------------- Elasticsearch output ------------------------------
output.elasticsearch:
  # Array of hosts to connect to.
  hosts: ["localhost:9200"]

  # Enabled ilm (beta) to use index lifecycle management instead daily indices.
  #ilm.enabled: false

  # Optional protocol and basic auth credentials.
  #protocol: "https"
  #username: "elastic"
  #password: "changeme"

#----------------------------- Logstash output --------------------------------
#output.logstash:
  # The Logstash hosts
  #hosts: ["localhost:5044"]

  # Optional SSL. By default is off.
  # List of root certificates for HTTPS server verifications
  #ssl.certificate_authorities: ["/etc/pki/root/ca.pem"]

  # Certificate for SSL client authentication
  #ssl.certificate: "/etc/pki/client/cert.pem"

  # Client Certificate Key
  #ssl.key: "/etc/pki/client/cert.key"

#================================ Processors =====================================

# Configure processors to enhance or manipulate events generated by the beat.

processors:
  - add_host_metadata: ~
  - add_cloud_metadata: ~

#================================ Logging =====================================

# Sets log level. The default log level is info.
# Available log levels are: error, warning, info, debug
#logging.level: debug

# At debug level, you can selectively enable logging only for some components.
# To enable all selectors use ["*"]. Examples of other selectors are "beat",
# "publish", "service".
#logging.selectors: ["*"]

#============================== Xpack Monitoring ===============================
# filebeat can export internal metrics to a central Elasticsearch monitoring
# cluster.  This requires xpack monitoring to be enabled in Elasticsearch.  The
# reporting is disabled by default.

# Set to true to enable the monitoring reporter.
#xpack.monitoring.enabled: false

# Uncomment to send the metrics to Elasticsearch. Most settings from the
# Elasticsearch output are accepted here as well. Any setting that is not set is
# automatically inherited from the Elasticsearch output configuration, so if you
# have the Elasticsearch output configured, you can simply uncomment the
# following line.
#xpack.monitoring.elasticsearch:

What is the output from the _cat/indices?v API?

I entered cat/indices?v in the >_ query bar while viewing the [Filebeat System] Sudo commands dashboard.

The first time I ran it, I got this message three times:

Error in visualization

Request to Elasticsearch failed: {"error":{"root_cause":[{"type":"query_shard_exception","reason":"Failed to parse query [cat/indices?v]","index_uuid":"67XNw1yiQpCxkwyztUJr2Q","index":"filebeat-6.6.0-2019.02.02"},{"type":"query_shard_exception","reason":"Failed to parse query [cat/indices?v]","index_uuid":"JphjXbVBQd6JpMPuQkSnGw","index":"filebeat-6.6.0-2019.02.03"}],"type":"search_phase_execution_exception","reason":"all shards failed","phase":"query","grouped":true,"failed_shards":[{"shard":0,"index":"filebeat-6.6.0-2019.02.02","node":"8dMqAA81SnCjXVDoTOAYnA","reason":{"type":"query_shard_exception","reason":"Failed to parse query [cat/indices?v]","index_uuid":"67XNw1yiQpCxkwyztUJr2Q","index":"filebeat-6.6.0-2019.02.02","caused_by":{"type":"parse_exception","reason":"Cannot parse 'cat/indices?v': Lexical error at line 1, column 14. Encountered: <EOF> after : \"/indices?v\"","caused_by":{"type":"token_mgr_error","reason":"Lexical error at line 1, column 14. Encountered: <EOF> after : \"/indices?v\""}}}},{"shard":0,"index":"filebeat-6.6.0-2019.02.03","node":"8dMqAA81SnCjXVDoTOAYnA","reason":{"type":"query_shard_exception","reason":"Failed to parse query [cat/indices?v]","index_uuid":"JphjXbVBQd6JpMPuQkSnGw","index":"filebeat-6.6.0-2019.02.03","caused_by":{"type":"parse_exception","reason":"Cannot parse 'cat/indices?v': Lexical error at line 1, column 14. Encountered: <EOF> after : \"/indices?v\"","caused_by":{"type":"token_mgr_error","reason":"Lexical error at line 1, column 14. Encountered: <EOF> after : \"/indices?v\""}}}}]},"status":400}

Then I refreshed the page and tried again. Now I only get this message, three times, instead:
3 of 6 shards failed

(I'm totally new to everything about this. I may not have done what you asked me to do.)

Head over to Dev Tools and run it again, https://www.elastic.co/guide/en/kibana/current/console-kibana.html has more info.
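(A dashboard's search bar parses Lucene query syntax, so a REST path like cat/indices?v gets treated as a search query; that's where the lexical error came from.) If you'd rather stay on the command line, the equivalent from a shell on the host would be something like this, assuming Elasticsearch is listening on localhost:9200:

curl -X GET "localhost:9200/_cat/indices?v"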

OK, I ran GET /_cat/indices?v in the Kibana Dev Tools > Console and got the following output:

health status index                       uuid                   pri rep docs.count docs.deleted store.size pri.store.size
yellow open   auditbeat-6.6.0-2019.02.01  _usBl_rQTR-8UhsH9enOvg   3   1       8506            0      3.1mb          3.1mb
yellow open   metricbeat-2019.02.02       rChqHA_PRtKfJKWc1WThZQ   5   1      93892            0     46.9mb         46.9mb
yellow open   metricbeat-2019.02.01       GoZrn6znRGSPYeRy2RdCZg   5   1     100154            0     50.5mb         50.5mb
yellow open   metricbeat-6.6.0-2019.02.02 BtAHv-mITfmxrv46CrCGxw   1   1      20848            0      5.3mb          5.3mb
yellow open   filebeat-6.6.0-2019.02.03   JphjXbVBQd6JpMPuQkSnGw   3   1        483            0    536.4kb        536.4kb
yellow open   metricbeat-6.6.0-2019.02.03 FJYpd9iAQb6FnYDE5fFlcw   1   1      10924            0      2.9mb          2.9mb
yellow open   filebeat-6.6.0-2019.02.02   67XNw1yiQpCxkwyztUJr2Q   3   1      22808            0      6.9mb          6.9mb
yellow open   auditbeat-6.6.0-2019.01.31  cHFn9tUbTSWx_TkVJPInHw   3   1      16515            0      6.6mb          6.6mb
yellow open   metricbeat-6.6.0-2019.01.31 55_05mJlR6KdnKzUZO0oog   1   1      44368            0     11.5mb         11.5mb
green  open   .kibana_1                   -Wt46ABhTz2zDx0_9imrrQ   1   0        372            2    714.1kb        714.1kb
yellow open   auditbeat-6.6.0-2019.02.02  blBEXVOtQgC1t8-DPUGijQ   3   1      15352            0      5.2mb          5.2mb
yellow open   metricbeat-2019.01.31       E0NLqouuTI-vXHcOig7qxQ   5   1      32336            0     17.5mb         17.5mb
yellow open   auditbeat-6.6.0-2019.02.03  4ADbbsgASZCB5too--nGsw   3   1       1439            0    887.7kb        887.7kb

All the beat indices have a "yellow" status...
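Side note, from what I've read: yellow health on a single node just means each index wants a replica shard and there is no second node to put it on; the data itself is fine. If that's all it is here, a sketch to clear it on a one-node dev cluster (run from Dev Tools) would be:

PUT /filebeat-*/_settings
{
  "index": { "number_of_replicas": 0 }
}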

It should be showing something; you can see there are filebeat indices for the last two days. What if you change your time frame to the last 24 hours?

Therein lies the confusion.

It was happening to a colleague of mine as well, who set up his 6.6.0 ELK stack a few days ago. Another colleague of ours who is familiar with the ELK stack looked at our config for about 20 minutes and couldn't understand either why nothing was showing in the dashboards.

Did you enable the system module in filebeat?

Yessir

root@XX:/etc/filebeat# filebeat modules list
Enabled:
logstash
system

Disabled:
apache2
auditd
elasticsearch
haproxy
icinga
iis
kafka
kibana
mongodb
mysql
nginx
osquery
postgresql
redis
suricata
traefik

Created an Ubuntu 18 machine instead, with the same problem: no Filebeat dashboard results in Kibana, although results do show in the Discover tab with the filebeat-* index pattern.

Another 6.6.0..

Just tried with elasticsearch, kibana, and filebeat all on 6.5.4 (Debian 9), with the same problem.

OK, let's take a step back here.

Are you following any docs? If so, which ones? If not, what steps are you taking?

There's no single guide that works for me, so I have to mix the official one(s) from here with blogs elsewhere on the net. Each guide I find seems to be missing some step I need that another one provides. I'll just create a new Debian 9 VM and give you each command I run.

NOTE: I'm using Google Cloud Platform to host my VMs, which sit behind firewall rules I need to configure to get things to work (particularly opening TCP ports 5601 [Kibana] and 9200 [Elasticsearch] to my IP).
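
The firewall rule I add looks something like this (rule name and source IP are placeholders of mine):

gcloud compute firewall-rules create allow-elk \
    --allow=tcp:5601,tcp:9200 \
    --source-ranges=203.0.113.4/32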

  1. Install JVM
    A Java runtime is a prerequisite for Elasticsearch and Logstash, and I plan to eventually use Logstash. This blog post and this GitHub issue indicate that the newest Java runtime (sudo apt-get install default-jre) is not currently compatible with Logstash, so you must install version 8. I therefore install version 8 with the command given in the blog post linked above:

    sudo apt install openjdk-8-jre apt-transport-https wget nginx
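
    To double-check which Java ended up active (the exact version string will differ):

    java -version
    # first line should start with: openjdk version "1.8.0_...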

  2. Install & Start Elasticsearch
    I install Elasticsearch through the DEB install instructions on elastic.co:
    sudo wget -qO - https://artifacts.elastic.co/GPG-KEY-elasticsearch | sudo apt-key add -

    sudo apt-get install apt-transport-https

    echo "deb https://artifacts.elastic.co/packages/6.x/apt stable main" | sudo tee -a /etc/apt/sources.list.d/elastic-6.x.list

    sudo apt-get update && sudo apt-get install elasticsearch

    sudo /bin/systemctl daemon-reload

    sudo /bin/systemctl enable elasticsearch.service

    sudo systemctl start elasticsearch.service

  2a. Confirm Elasticsearch is running
    curl -X GET "localhost:9200/"
    Output:

{
  "name" : "SWlFGNk",
  "cluster_name" : "elasticsearch",
  "cluster_uuid" : "dmQl0cSWTLqa62rsneMYHA",
  "version" : {
    "number" : "6.6.0",
    "build_flavor" : "default",
    "build_type" : "deb",
    "build_hash" : "a9861f4",
    "build_date" : "2019-01-24T11:27:09.439740Z",
    "build_snapshot" : false,
    "lucene_version" : "7.6.0",
    "minimum_wire_compatibility_version" : "5.6.0",
    "minimum_index_compatibility_version" : "5.0.0"
  },
  "tagline" : "You Know, for Search"
}
  3. Install & Start Kibana
    I install Kibana through the DEB install instructions on elastic.co:
    sudo apt-get update && sudo apt-get install kibana

    sudo /bin/systemctl daemon-reload

    sudo /bin/systemctl enable kibana.service

    sudo systemctl start kibana.service

  4. Configure Kibana
    At this point, my_ip:5601 takes me to "This site can't be reached" (ERR_CONNECTION_REFUSED). I must make the change explained under "Install Kibana" in this blog post in order to access the Kibana UI:

    sudo nano /etc/kibana/kibana.yml

    File changes:
    1:
    #server.port: 5601 --> server.port: 5601
    2:
    #server.host: "localhost" --> server.host: "0.0.0.0"

  5. Restart Kibana service
    After restarting Kibana, I can access its UI through my_ip:5601:

    sudo systemctl restart kibana

  6. Install Filebeat
    sudo apt-get install filebeat

    sudo /bin/systemctl daemon-reload

    sudo /bin/systemctl enable filebeat.service

  7. Configure Filebeat
    sudo nano /etc/filebeat/filebeat.yml

    File changes:
    1:
    enabled: false --> enabled: true
    (the line under "# Change to true to enable this input configuration." in the filebeat.inputs section)

  8. Set up Filebeat
    sudo filebeat setup
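
    (filebeat setup needs both Elasticsearch and Kibana reachable, since it loads the index template into Elasticsearch and the dashboards into Kibana; if only the dashboards need reloading later, sudo filebeat setup --dashboards should do just that part.)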

  9. Start Filebeat
    sudo systemctl start filebeat
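
    A quick sanity check that events are actually arriving (assuming the default localhost:9200):

    curl "localhost:9200/_cat/indices/filebeat-*?v"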

  10. View Kibana Dashboard: [Filebeat System] Syslog dashboard

:neutral_face:
No results found


First, thank you for the detail, and for formatting things. It really does help :smiley:

A few things;

  • I can't see that you have enabled a module for filebeat in your detailed steps, so it won't be processing any files. This doesn't match your filebeat modules list above, so I just wanted to double-check that.
  • in Step 7 you set an input to true, which input was this?
  • What do the filebeat logs show?

No problem; I hate messiness and I assume other people do too, so I try to keep things clean (and StackOverflow people bully me when I have bad formatting). Thanks for sticking with me on this.

  1. Oops, I forgot the modules. I just now enabled system and elasticsearch and restarted filebeat, elasticsearch, and kibana:

    sudo filebeat modules enable system

    sudo filebeat modules enable elasticsearch

    sudo systemctl restart filebeat

    sudo systemctl restart elasticsearch

    sudo systemctl restart kibana
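
    Restarting filebeat before elasticsearch leaves a window where filebeat can't connect. A sketch that avoids the race on a systemd host:

    sudo systemctl restart elasticsearch
    # wait until Elasticsearch answers before bouncing Filebeat
    until curl -s localhost:9200 >/dev/null; do sleep 2; done
    sudo systemctl restart filebeat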

  2. Here's more context of what I set to true in /etc/filebeat/filebeat.yml:

    #=========================== Filebeat inputs =============================
    
     filebeat.inputs:
    
     # Each - is an input. Most options can be set at the input level, so
     # you can use different inputs for various configurations.
     # Below are the input specific configurations.
    
     - type: log
    
       # Change to true to enable this input configuration.
       enabled: true
    
       # Paths that should be crawled and fetched. Glob based paths.
       paths:
         - /var/log/*.log
         #- c:\programdata\elasticsearch\logs\*
    
  3. Here are all the ERROR lines I found in /var/log/filebeat/filebeat:

    2019-02-04T02:41:18.699Z        ERROR   fileset/factory.go:105  Error creating input: Can only start an input when all related states are finished: {Id:393257-2049 Finished:false Fileinfo:0xc4204692b0 Source:/var/log/auth.log Offset:16059 Timestamp:2019-02-04 02:41:18.687378819 +0000 UTC m=+0.054671057 TTL:-1ns Type:log Meta:map[] FileStateOS:393257-2049}
    2019-02-04T02:41:18.699Z        ERROR   [reload]        cfgfile/list.go:96      Error creating runner from config: Can only start an input when all related states are finished: {Id:393257-2049 Finished:false Fileinfo:0xc4204692b0  Source:/var/log/auth.log Offset:16059 Timestamp:2019-02-04 02:41:18.687378819 +0000 UTC m=+0.054671057 TTL:-1ns Type:log Meta:map[] FileStateOS:393257-2049}
    2019-02-04T02:41:38.632Z        ERROR   elasticsearch/client.go:319     Failed to perform any bulk index operations: Post http://localhost:9200/_bulk: dial tcp [::1]:9200: connect: connection refused
    2019-02-04T02:41:39.710Z        ERROR   pipeline/output.go:121  Failed to publish events: Post http://localhost:9200/_bulk: dial tcp [::1]:9200: connect: connection refused
    2019-02-04T02:41:43.562Z        ERROR   pipeline/output.go:100  Failed to connect to backoff(elasticsearch(http://localhost:9200)): Get http://localhost:9200: dial tcp [::1]:9200: connect: connection refused
    2019-02-04T02:41:49.632Z        ERROR   pipeline/output.go:100  Failed to connect to backoff(elasticsearch(http://localhost:9200)): Get http://localhost:9200: dial tcp 127.0.0.1:9200: connect: connection refused
    2019-02-04T02:41:58.853Z        ERROR   pipeline/output.go:100  Failed to connect to backoff(elasticsearch(http://localhost:9200)): Get http://localhost:9200: dial tcp 127.0.0.1:9200: connect: connection refused
    2019-02-04T02:42:24.576Z        ERROR   pipeline/output.go:100  Failed to connect to backoff(elasticsearch(http://localhost:9200)): Get http://localhost:9200: dial tcp 127.0.0.1:9200: connect: connection refused
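
    Reading these back, there seem to be two separate things going on. The connection-refused lines match the restart order above: filebeat came back up before elasticsearch did, and they should stop once Elasticsearch is reachable again. The "Can only start an input when all related states are finished" lines look like the raw log input and the system module both harvesting /var/log/auth.log. A sketch of one way to remove the overlap in filebeat.yml (the exclude regex is mine, not from any doc):

    filebeat.inputs:
    - type: log
      enabled: true
      paths:
        - /var/log/*.log
      # the system module already reads auth.log, so keep the raw input off it
      exclude_files: ['auth\.log$']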

Whatever. Just follow DigitalOcean's tutorial, guys. They often have the most competent and straightforward server tutorials out there for a given issue. Their setup sends logs through Logstash:

(I just made a new Ubuntu 16 VM)

I haven't installed more beats yet, but the Filebeat dashboards are actually showing up.

I would like to add that I've followed the entirety of that DigitalOcean guide. I have also scoured all other documentation that I can find.

From what I've found, the problem is that Filebeat and Logstash don't play well together, but creating that filter in your Logstash config is supposed to fix it. I've added that filter and am still getting "No results found :|" on the Filebeat Syslog dashboard, and on every other Filebeat dashboard.
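
For reference, the filter I'm talking about is the tutorial's syslog grok filter. A trimmed sketch of its shape (the actual file in the guide is longer; field names follow the Filebeat system module):

# /etc/logstash/conf.d/10-syslog-filter.conf
filter {
  if [fileset][module] == "system" and [fileset][name] == "syslog" {
    grok {
      match => { "message" => "%{SYSLOGTIMESTAMP:[system][syslog][timestamp]} %{SYSLOGHOST:[system][syslog][hostname]} %{DATA:[system][syslog][program]}(?:\[%{POSINT:[system][syslog][pid]}\])?: %{GREEDYDATA:[system][syslog][message]}" }
    }
    date {
      match => [ "[system][syslog][timestamp]", "MMM  d HH:mm:ss", "MMM dd HH:mm:ss" ]
    }
  }
}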

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.