Kibana dashboard: No results found

Hi,

I've installed Elasticsearch and Kibana on a server, and Filebeat on an Ubuntu client.

For a reason (I couldn't find a logstash index, didn't know it would be named after filebeat, and have set Logstash aside for now) I deleted the filebeat index and restarted Filebeat.

I can see the logs coming in, but there is no data in the dashboard tables.

Steps taken to solve this:
1- I've deleted all filebeat indices from Index Management, plus the index pattern.
2- Deleted all saved objects.
3- Restarted Filebeat.

It passed, but with a lot of messages like this:

"42a0a200-937d-11ea-acab-57651ac68b99" is not a configured index pattern ID

Showing the default index pattern: "filebeat-*" (caf59a70-937d-11ea-acab-57651ac68b99)

"caf59a70-937d-11ea-acab-57651ac68b99" is not a configured index pattern ID

Showing the default index pattern: "filebeat-*" (filebeat-*)

"f9673f30-9427-11ea-acab-57651ac68b99" is not a configured index pattern ID

Showing the default index pattern: "filebeat-*" (c19c1370-942a-11ea-acab-57651ac68b99)

Still no luck on this.

Any help with this, please?

Regards.


Seems like the beat setup routine is still stuck in some kind of invalid state. If there is nothing of value stored in the Kibana instance, you could stop Kibana and the beat, delete the .kibana alias and all .kibana_* indices, then start Kibana again and when it's up and running, start the beat again.

Sorry, I'm new to ELK. Would this command delete the alias and indices as you advise?

curl -XDELETE 'http://192.168.2.220:9200/kibana-*'

Please let me know.

It's kibana_*, but otherwise yes

Still no luck, but now I can see two filebeat index patterns:

How can we investigate this further?

  • Do you have multiple beats running?
  • Could you share the config for your beats and the command you ran?
  • What version of Kibana are you running?
  • Just to make sure - this is the order of the steps:
    • stopping Kibana and beats
    • deleting the kibana indices
    • starting Kibana again (verifying everything is gone)
    • starting beats
  • I'm not 100% sure, but maybe the beat installation is holding local state somehow? Could you clear all temporary directories? (A sketch of what I mean follows this list.)
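
For reference, clearing the beat's local state could look roughly like this. This is only a sketch, assuming the default data path of the DEB/RPM package (/var/lib/filebeat); note that deleting the registry makes Filebeat re-read all files from the beginning:

sudo systemctl stop filebeat
sudo rm -rf /var/lib/filebeat/registry   # the registry holds Filebeat's read offsets
sudo systemctl start filebeat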

1- No, I don't have multiple beats.

2- Here is the config for the beat:

###################### Filebeat Configuration Example #########################

# This file is an example configuration file highlighting only the most common
# options. The filebeat.reference.yml file from the same directory contains all the
# supported options with more comments. You can use it as a reference.
#
# You can find the full configuration reference here:
# https://www.elastic.co/guide/en/beats/filebeat/index.html

# For more available modules and options, please see the filebeat.reference.yml sample
# configuration file.

#=========================== Filebeat inputs =============================

filebeat.inputs:

# Each - is an input. Most options can be set at the input level, so
# you can use different inputs for various configurations.
# Below are the input specific configurations.

- type: log

  # Change to true to enable this input configuration.
  enabled: false

  # Paths that should be crawled and fetched. Glob based paths.
  paths:
    - /var/log/*.log
    #- c:\programdata\elasticsearch\logs\*

  # Exclude lines. A list of regular expressions to match. It drops the lines that are
  # matching any regular expression from the list.
  #exclude_lines: ['^DBG']

  # Include lines. A list of regular expressions to match. It exports the lines that are
  # matching any regular expression from the list.
  #include_lines: ['^ERR', '^WARN']

  # Exclude files. A list of regular expressions to match. Filebeat drops the files that
  # are matching any regular expression from the list. By default, no files are dropped.
  #exclude_files: ['.gz$']

  # Optional additional fields. These fields can be freely picked
  # to add additional information to the crawled log files for filtering
  #fields:
  #  level: debug
  #  review: 1

  ### Multiline options

  # Multiline can be used for log messages spanning multiple lines. This is common
  # for Java Stack Traces or C-Line Continuation

  # The regexp Pattern that has to be matched. The example pattern matches all lines starting with [
  #multiline.pattern: ^\[

  # Defines if the pattern set under pattern should be negated or not. Default is false.
  #multiline.negate: false

  # Match can be set to "after" or "before". It is used to define if lines should be appended to a pattern
  # that was (not) matched before or after, or as long as a pattern is not matched based on negate.
  # Note: After is the equivalent to previous and before is the equivalent to next in Logstash
  #multiline.match: after


#============================= Filebeat modules ===============================

filebeat.config.modules:
  # Glob pattern for configuration loading
  path: ${path.config}/modules.d/*.yml

  # Set to true to enable config reloading
  reload.enabled: false

  # Period on which files under path should be checked for changes
  #reload.period: 10s

#==================== Elasticsearch template setting ==========================

setup.template.settings:
  index.number_of_shards: 1
  #index.codec: best_compression
  #_source.enabled: false

#================================ General =====================================

# The name of the shipper that publishes the network data. It can be used to group
# all the transactions sent by a single shipper in the web interface.
#name:

# The tags of the shipper are included in their own field with each
# transaction published.
#tags: ["service-X", "web-tier"]

# Optional fields that you can specify to add additional information to the
# output.
#fields:
#  env: staging


#============================== Dashboards =====================================
# These settings control loading the sample dashboards to the Kibana index. Loading
# the dashboards is disabled by default and can be enabled either by setting the
# options here or by using the `setup` command.
#setup.dashboards.enabled: false

# The URL from where to download the dashboards archive. By default this URL
# has a value which is computed based on the Beat name and version. For released
# versions, this URL points to the dashboard archive on the artifacts.elastic.co
# website.
#setup.dashboards.url:

#============================== Kibana =====================================

# Starting with Beats version 6.0.0, the dashboards are loaded via the Kibana API.
# This requires a Kibana endpoint configuration.
setup.kibana:

  # Kibana Host
  # Scheme and port can be left out and will be set to the default (http and 5601)
  # In case you specify an additional path, the scheme is required: http://localhost:5601/path
  # IPv6 addresses should always be defined as: https://[2001:db8::1]:5601
  host: "192.168.2.220:5601"

  # Kibana Space ID
  # ID of the Kibana Space into which the dashboards should be loaded. By default,
  # the Default Space will be used.
  #space.id:

#============================= Elastic Cloud ==================================

# These settings simplify using Filebeat with the Elastic Cloud (https://cloud.elastic.co/).

# The cloud.id setting overwrites the `output.elasticsearch.hosts` and
# `setup.kibana.host` options.
# You can find the `cloud.id` in the Elastic Cloud web UI.
#cloud.id:

# The cloud.auth setting overwrites the `output.elasticsearch.username` and
# `output.elasticsearch.password` settings. The format is `<user>:<pass>`.
#cloud.auth:

#================================ Outputs =====================================

# Configure what output to use when sending the data collected by the beat.

#-------------------------- Elasticsearch output ------------------------------
output.elasticsearch:
  # Array of hosts to connect to.
  hosts: ["http://192.168.2.220:9200"]

  # Protocol - either `http` (default) or `https`.
  #protocol: "https"

  # Authentication credentials - either API key or username/password.
  #api_key: "id:api_key"
  #username: "elastic"
  #password: "changeme"

#----------------------------- Logstash output --------------------------------
#output.logstash:
  # The Logstash hosts
  #hosts: ["localhost:5044"]

  # Optional SSL. By default is off.
  # List of root certificates for HTTPS server verifications
  #ssl.certificate_authorities: ["/etc/pki/root/ca.pem"]

  # Certificate for SSL client authentication
  #ssl.certificate: "/etc/pki/client/cert.pem"

  # Client Certificate Key
  #ssl.key: "/etc/pki/client/cert.key"

#================================ Processors =====================================

# Configure processors to enhance or manipulate events generated by the beat.

processors:
  - add_host_metadata: ~
  - add_cloud_metadata: ~
  - add_docker_metadata: ~
  - add_kubernetes_metadata: ~

#================================ Logging =====================================

# Sets log level. The default log level is info.
# Available log levels are: error, warning, info, debug
#logging.level: debug

# At debug level, you can selectively enable logging only for some components.
# To enable all selectors use ["*"]. Examples of other selectors are "beat",
# "publish", "service".
#logging.selectors: ["*"]

#============================== X-Pack Monitoring ===============================
# filebeat can export internal metrics to a central Elasticsearch monitoring
# cluster.  This requires xpack monitoring to be enabled in Elasticsearch.  The
# reporting is disabled by default.

# Set to true to enable the monitoring reporter.
#monitoring.enabled: false

# Sets the UUID of the Elasticsearch cluster under which monitoring data for this
# Filebeat instance will appear in the Stack Monitoring UI. If output.elasticsearch
# is enabled, the UUID is derived from the Elasticsearch cluster referenced by output.elasticsearch.
#monitoring.cluster_uuid:

# Uncomment to send the metrics to Elasticsearch. Most settings from the
# Elasticsearch output are accepted here as well.
# Note that the settings should point to your Elasticsearch *monitoring* cluster.
# Any setting that is not set is automatically inherited from the Elasticsearch
# output configuration, so if you have the Elasticsearch output configured such
# that it is pointing to your Elasticsearch monitoring cluster, you can simply
# uncomment the following line.
#monitoring.elasticsearch:

#================================= Migration ==================================

# This allows to enable 6.7 migration aliases
#migration.6_to_7.enabled: true

3- Kibana version 7.6.2

4- Order of steps:

Configured kibana.yml:

server.port: 5601

# Specifies the address to which the Kibana server will bind. IP addresses and host names are both valid values.
# The default is 'localhost', which usually means remote machines will not be able to connect.
# To allow connections from remote users, set this parameter to a non-loopback address.
server.host: "192.168.2.220"

# Enables you to specify a path to mount Kibana at if you are running behind a proxy.
# Use the `server.rewriteBasePath` setting to tell Kibana if it should remove the basePath
# from requests it receives, and to prevent a deprecation warning at startup.
# This setting cannot end in a slash.
#server.basePath: ""

# Specifies whether Kibana should rewrite requests that are prefixed with
# `server.basePath` or require that they are rewritten by your reverse proxy.
# This setting was effectively always `false` before Kibana 6.3 and will
# default to `true` starting in Kibana 7.0.
#server.rewriteBasePath: false

# The maximum payload size in bytes for incoming server requests.
#server.maxPayloadBytes: 1048576

# The Kibana server's name.  This is used for display purposes.
#server.name: "your-hostname"

# The URLs of the Elasticsearch instances to use for all your queries.
elasticsearch.hosts: ["http://192.168.2.220:9200"]

(Nothing changed after this part in kibana.yml)

5- I'm not 100% sure, but maybe the beat installation is holding local state somehow? Could you clear all temporary directories?

I've reverted the machine to a base Ubuntu installation, installed Filebeat, and set the config file as listed above.

Thanks

Ah, I think I see your problem: it's .kibana* (notice the dot in front). Could you try to remove those as well? If you start Kibana after doing this, there should be no index patterns and no other saved objects listed. Sorry for the confusion.
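
Roughly like this, using the host from your config (note the quotes, so the shell doesn't expand the asterisk):

sudo systemctl stop kibana                           # and stop filebeat on the client
curl -XDELETE 'http://192.168.2.220:9200/.kibana*'   # removes the .kibana alias and all .kibana_* indices
sudo systemctl start kibana                          # once it's up again, start filebeat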

Hi,

Thanks for following up. I've deleted the .kibana indices:

  • I've stopped Kibana and beats
  • deleted the .kibana indices: curl -XDELETE 'http://192.168.2.220:9200/.kibana*'
  • started Kibana again (verified everything is gone)
  • started beats

But unfortunately the filebeat index is not being created anymore.

I've reverted the beats Ubuntu machine, reinstalled Filebeat, configured the .yml, and started it, but no filebeat index appears in Kibana.

Here is the filebeat.yml

###################### Filebeat Configuration Example #########################

#Removed for character limit
#=========================== Filebeat inputs =============================

filebeat.inputs:

# Each - is an input. Most options can be set at the input level, so
# you can use different inputs for various configurations.
# Below are the input specific configurations.

- type: log

  # Change to true to enable this input configuration.
  enabled: false

  # Paths that should be crawled and fetched. Glob based paths.
  paths:
    - /var/log/*.log
    #- c:\programdata\elasticsearch\logs\*

# Removed for char limit

  ### Multiline options

  # Multiline can be used for log messages spanning multiple lines. This is common
  # for Java Stack Traces or C-Line Continuation

  # The regexp Pattern that has to be matched. The example pattern matches all lines starting with [
  #multiline.pattern: ^\[

  # Defines if the pattern set under pattern should be negated or not. Default is false.
  #multiline.negate: false

  # Match can be set to "after" or "before". It is used to define if lines should be appended to a pattern
  # that was (not) matched before or after, or as long as a pattern is not matched based on negate.
  # Note: After is the equivalent to previous and before is the equivalent to next in Logstash
  #multiline.match: after


#============================= Filebeat modules ===============================

filebeat.config.modules:
  # Glob pattern for configuration loading
  path: ${path.config}/modules.d/*.yml

  # Set to true to enable config reloading
  reload.enabled: false

  # Period on which files under path should be checked for changes
  #reload.period: 10s

#==================== Elasticsearch template setting ==========================

setup.template.settings:
  index.number_of_shards: 1
  #index.codec: best_compression
  #_source.enabled: false

#================================ General =====================================

# The name of the shipper that publishes the network data. It can be used to group
# all the transactions sent by a single shipper in the web interface.
#name:

# The tags of the shipper are included in their own field with each
# transaction published.
#tags: ["service-X", "web-tier"]

# Optional fields that you can specify to add additional information to the
# output.
#fields:
#  env: staging


#============================== Dashboards =====================================
#Removed for char limit in discussion 

#============================== Kibana =====================================

# Starting with Beats version 6.0.0, the dashboards are loaded via the Kibana API.
# This requires a Kibana endpoint configuration.
setup.kibana:

  # Kibana Host
  # Scheme and port can be left out and will be set to the default (http and 5601)
  # In case you specify an additional path, the scheme is required: http://localhost:5601/path
  # IPv6 addresses should always be defined as: https://[2001:db8::1]:5601
  host: "http://192.168.2.220:5601"

  # Kibana Space ID
  # ID of the Kibana Space into which the dashboards should be loaded. By default,
  # the Default Space will be used.
  #space.id:

#============================= Elastic Cloud ==================================

# These settings simplify using Filebeat with the Elastic Cloud (https://cloud.elastic.co/).

# The cloud.id setting overwrites the `output.elasticsearch.hosts` and
# `setup.kibana.host` options.
# You can find the `cloud.id` in the Elastic Cloud web UI.
#cloud.id:

# The cloud.auth setting overwrites the `output.elasticsearch.username` and
# `output.elasticsearch.password` settings. The format is `<user>:<pass>`.
#cloud.auth:

#================================ Outputs =====================================

# Configure what output to use when sending the data collected by the beat.

#-------------------------- Elasticsearch output ------------------------------
output.elasticsearch:
  # Array of hosts to connect to.
  hosts: ["http://192.168.2.220:9200"]

  # Protocol - either `http` (default) or `https`.
  #protocol: "https"

  # Authentication credentials - either API key or username/password.
  #api_key: "id:api_key"
  #username: "elastic"
  #password: "changeme"

(Nothing changed after this part in filebeat.yml)

Also, here is the output of the indices from the dev console:

Here are some of the logs from Filebeat:

amdin@amdin-virtual-machine:~$ journalctl --unit=filebeat -f
-- Logs begin at Wed 2020-03-25 12:38:30 EET. --
May 13 11:05:21 amdin-virtual-machine filebeat[3405]: 2020-05-13T11:05:21.846+0300        INFO        [monitoring]        log/log.go:145        Non-zero metrics in the last 30s        {"monitoring": {"metrics": {"beat":{"cpu":{"system":{"ticks":40},"total":{"ticks":280,"time":{"ms":16},"value":280},"user":{"ticks":240,"time":{"ms":16}}},"handles":{"limit":{"hard":4096,"soft":1024},"open":9},"info":{"ephemeral_id":"37ea3a38-a1b6-45ae-bf4b-3dbf20ed2eeb","uptime":{"ms":900039}},"memstats":{"gc_next":9688320,"memory_alloc":4852784,"memory_total":22531448},"runtime":{"goroutines":16}},"filebeat":{"harvester":{"open_files":0,"running":0}},"libbeat":{"config":{"module":{"running":0}},"pipeline":{"clients":0,"events":{"active":0}}},"registrar":{"states":{"current":0}},"system":{"load":{"1":0.05,"15":0.17,"5":0.07,"norm":{"1":0.05,"15":0.17,"5":0.07}}}}}}

Here is the status of kibana on ELK server:

ubuntu@elk:~$ sudo service kibana status
[sudo] password for ubuntu:
● kibana.service - Kibana
     Loaded: loaded (/etc/systemd/system/kibana.service; enabled; vendor preset: enabled)
     Active: active (running) since Wed 2020-05-13 10:17:36 EEST; 54min ago
   Main PID: 770 (node)
      Tasks: 11 (limit: 19004)
     Memory: 821.4M
     CGroup: /system.slice/kibana.service
             └─770 /usr/share/kibana/bin/../node/bin/node /usr/share/kibana/bin/../src/cli -c /etc/kibana/kibana.yml

May 13 11:10:08 elk kibana[770]: {"type":"response","@timestamp":"2020-05-13T08:10:08Z","tags":[],"pid":770,"method":"post","statusCode":200,"req":{"url":"/api/index_management/indices/reload","method":"post","headers":{"host":"192.168.2.220:5601","connection":"keep-ali>




Here are the logs from kibana on server:

ubuntu@elk:~$ journalctl --unit=kibana -f
-- Logs begin at Tue 2020-05-05 11:26:52 EEST. --
May 13 11:11:38 elk kibana[770]: {"type":"response","@timestamp":"2020-05-13T08:11:38Z","tags":["access:console"],"pid":770,"method":"post","statusCode":200,"req":{"url":"/api/console/proxy?path=_aliases&method=GET","method":"post","headers":{"host":"192.168.2.220:5601","connection":"keep-alive","content-length":"0","accept":"text/plain, */*; q=0.01","kbn-version":"7.6.2","user-agent":"Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/81.0.4044.138 Safari/537.36","origin":"http://192.168.2.220:5601","referer":"http://192.168.2.220:5601/app/kibana","accept-encoding":"gzip, deflate","accept-language":"en-US,en;q=0.9"},"remoteAddress":"192.168.2.52","userAgent":"192.168.2.52","referer":"http://192.168.2.220:5601/app/kibana"},"res":{"statusCode":200,"responseTime":14,"contentLength":9},"message":"POST /api/console/proxy?path=_aliases&method=GET 200 14ms - 9.0B"}
May 13 11:11:38 elk kibana[770]: {"type":"response","@timestamp":"2020-05-13T08:11:38Z","tags":["access:console"],"pid":770,"method":"post","statusCode":200,"req":{"url":"/api/console/proxy?path=_mapping&method=GET","method":"post","headers":{"host":"192.168.2.220:5601","connection":"keep-alive","content-length":"0","accept":"text/plain, */*; q=0.01","kbn-version":"7.6.2","user-agent":"Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/81.0.4044.138 Safari/537.36","origin":"http://192.168.2.220:5601","referer":"http://192.168.2.220:5601/app/kibana","accept-encoding":"gzip, deflate","accept-language":"en-US,en;q=0.9"},"remoteAddress":"192.168.2.52","userAgent":"192.168.2.52","referer":"http://192.168.2.220:5601/app/kibana"},"res":{"statusCode":200,"responseTime":29,"contentLength":9},"message":"POST /api/console/proxy?path=_mapping&method=GET 200 29ms - 9.0B"}
May 13 11:11:38 elk kibana[770]: {"type":"response","@timestamp":"2020-05-13T08:11:38Z","tags":[],"pid":770,"method":"post","statusCode":200,"req":{"url":"/api/index_management/indices/reload","method":"post","headers":{"host":"192.168.2.220:5601","connection":"keep-alive","content-length":"17","kbn-version":"7.6.2","user-agent":"Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/81.0.4044.138 Safari/537.36","content-type":"application/json","accept":"*/*","origin":"http://192.168.2.220:5601","referer":"http://192.168.2.220:5601/app/kibana","accept-encoding":"gzip, deflate","accept-language":"en-US,en;q=0.9"},"remoteAddress":"192.168.2.52","userAgent":"192.168.2.52","referer":"http://192.168.2.220:5601/app/kibana"},"res":{"statusCode":200,"responseTime":14,"contentLength":9},"message":"POST /api/index_management/indices/reload 200 14ms - 9.0B"}
May 13 11:11:38 elk kibana[770]: {"type":"response","@timestamp":"2020-05-13T08:11:38Z","tags":["access:console"],"pid":770,"method":"post","statusCode":200,"req":{"url":"/api/console/proxy?path=_template&method=GET","method":"post","headers":{"host":"192.168.2.220:5601","connection":"keep-alive","content-length":"0","accept":"text/plain, */*; q=0.01","kbn-version":"7.6.2","user-agent":"Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/81.0.4044.138 Safari/537.36","origin":"http://192.168.2.220:5601","referer":"http://192.168.2.220:5601/app/kibana","accept-encoding":"gzip, deflate","accept-language":"en-US,en;q=0.9"},"remoteAddress":"192.168.2.52","userAgent":"192.168.2.52","referer":"http://192.168.2.220:5601/app/kibana"},"res":{"statusCode":200,"responseTime":31,"contentLength":9},"message":"POST /api/console/proxy?path=_template&method=GET 200 31ms - 9.0B"}

Here is the status of elasticsearch on server:

ubuntu@elk:~$ sudo service elasticsearch status
● elasticsearch.service - Elasticsearch
     Loaded: loaded (/lib/systemd/system/elasticsearch.service; enabled; vendor preset: enabled)
    Drop-In: /etc/systemd/system/elasticsearch.service.d
             └─override.conf
     Active: active (running) since Wed 2020-05-13 10:17:55 EEST; 56min ago
       Docs: http://www.elastic.co
   Main PID: 763 (java)
      Tasks: 110 (limit: 19004)
     Memory: 8.8G
     CGroup: /system.slice/elasticsearch.service
             ├─ 763 /usr/share/elasticsearch/jdk/bin/java -Des.networkaddress.cache.ttl=60 -Des.networkaddress.cache.negative.ttl=10 -XX:+AlwaysPreTouch -Xss1m -Djava.awt.headless=true -Dfile.encoding=UTF-8 -Djna.nosys=true -XX:-OmitStackTraceInFastThrow -Dio.netty.noUn>
             └─1064 /usr/share/elasticsearch/modules/x-pack-ml/platform/linux-x86_64/bin/controller

May 13 10:51:51 elk elasticsearch[763]: ]*)))?(InnoDB_queue_wait: (?<NUMBER:mysql.slowlog.innodb.queue_wait.sec:float>(?:(?:(?<![0-9.+-])(?>[+-]?(?:(?:[0-9]+(?:\.[0-9]+)?)|(?:\.[0-9]+))))))(?:([ #
May 13 10:51:51 elk elasticsearch[763]: ]*)))?(InnoDB_pages_distinct: (?<NUMBER:mysql.slowlog.innodb.pages_distinct:long>(?:(?:(?<![0-9.+-])(?>[+-]?(?:(?:[0-9]+(?:\.[0-9]+)?)|(?:\.[0-9]+))))))(?:([ #
May 13 10:51:51 elk elasticsearch[763]: ]*)))?(Log_slow_rate_type: (?<WORD:mysql.slowlog.log_slow_rate_type>\b\w+\b)(?:([ #
May 13 10:51:51 elk elasticsearch[763]: ]*)))?(Log_slow_rate_limit: (?<NUMBER:mysql.slowlog.log_slow_rate_limit:long>(?:(?:(?<![0-9.+-])(?>[+-]?(?:(?:[0-9]+(?:\.[0-9]+)?)|(?:\.[0-9]+))))))(?:([ #
May 13 10:51:51 elk elasticsearch[763]: ]*)))?(?:(# explain:.*
May 13 10:51:51 elk elasticsearch[763]: |#\s*
May 13 10:51:51 elk elasticsearch[763]: )*)?(use (?<WORD:mysql.slowlog.schema>\b\w+\b);
May 13 10:51:51 elk elasticsearch[763]: )?SET timestamp=(?<NUMBER:mysql.slowlog.timestamp:long>(?:(?:(?<![0-9.+-])(?>[+-]?(?:(?:[0-9]+(?:\.[0-9]+)?)|(?:\.[0-9]+))))));
May 13 10:51:51 elk elasticsearch[763]: (?<GREEDYMULTILINE:mysql.slowlog.query>(.|
May 13 10:51:51 elk elasticsearch[763]: )*)/

Please help with this; I've been struggling for days.

Regards,

It looks like Filebeat is not sending data to Elasticsearch. Please check the logs of the beat; they will likely contain a hint about the problem.
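
For example, something along these lines on the client (assuming the DEB package with systemd):

journalctl --unit=filebeat -f   # follow the live log and watch for errors
sudo filebeat test config       # validate filebeat.yml
sudo filebeat test output       # check the connection to Elasticsearch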

Hi,

Yes, I've checked filebeat.yml and changed this part to true, and now the logs are sent to Elasticsearch:

- type: log

  # Change to true to enable this input configuration.
  enabled: true   # was false
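
A quick way to confirm events are actually arriving is to list the Filebeat indices on the Elasticsearch host, for example:

curl 'http://192.168.2.220:9200/_cat/indices/filebeat-*?v'   # docs.count should be growing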

But still the tables don't show anything, and no events show up in the SIEM app either.

1- Is this because I loaded the dashboards myself (sudo filebeat setup --dashboards) when they would have been set up automatically, so there was no need to do it from my side?

2- Is this because I didn't enable any Filebeat modules? Are they a must?

Or is there anything else?

Here is part of the filebeat logs:

amdin@amdin-virtual-machine:~$ journalctl --unit=filebeat -f
-- Logs begin at Wed 2020-03-25 12:38:30 EET. --
May 13 12:17:22 amdin-virtual-machine filebeat[4920]: 2020-05-13T12:17:22.296+0300        INFO        [index-management.ilm]        ilm/std.go:139        do not generate ilm policy: exists=true, overwrite=false
May 13 12:17:22 amdin-virtual-machine filebeat[4920]: 2020-05-13T12:17:22.296+0300        INFO        [index-management]        idxmgmt/std.go:271        ILM policy successfully loaded.
May 13 12:17:22 amdin-virtual-machine filebeat[4920]: 2020-05-13T12:17:22.296+0300        INFO        [index-management]        idxmgmt/std.go:410        Set setup.template.name to '{filebeat-7.6.2 {now/d}-000001}' as ILM is enabled.
May 13 12:17:22 amdin-virtual-machine filebeat[4920]: 2020-05-13T12:17:22.296+0300        INFO        [index-management]        idxmgmt/std.go:415        Set setup.template.pattern to 'filebeat-7.6.2-*' as ILM is enabled.
May 13 12:17:22 amdin-virtual-machine filebeat[4920]: 2020-05-13T12:17:22.297+0300        INFO        [index-management]        idxmgmt/std.go:449        Set settings.index.lifecycle.rollover_alias in template to {filebeat-7.6.2 {now/d}-000001} as ILM is enabled.
May 13 12:17:22 amdin-virtual-machine filebeat[4920]: 2020-05-13T12:17:22.297+0300        INFO        [index-management]        idxmgmt/std.go:453        Set settings.index.lifecycle.name in template to {filebeat {"policy":{"phases":{"hot":{"actions":{"rollover":{"max_age":"30d","max_size":"50gb"}}}}}}} as ILM is enabled.
May 13 12:17:22 amdin-virtual-machine filebeat[4920]: 2020-05-13T12:17:22.299+0300        INFO        template/load.go:89        Template filebeat-7.6.2 already exists and will not be overwritten.
May 13 12:17:22 amdin-virtual-machine filebeat[4920]: 2020-05-13T12:17:22.300+0300        INFO        [index-management]        idxmgmt/std.go:295        Loaded index template.
May 13 12:17:22 amdin-virtual-machine filebeat[4920]: 2020-05-13T12:17:22.644+0300        INFO        [index-management]        idxmgmt/std.go:306        Write alias successfully generated.
May 13 12:17:22 amdin-virtual-machine filebeat[4920]: 2020-05-13T12:17:22.664+0300        INFO        pipeline/output.go:105        Connection to backoff(elasticsearch(http://192.168.2.220:9200)) established
May 13 12:17:48 amdin-virtual-machine filebeat[4920]: 2020-05-13T12:17:48.251+0300        INFO        [monitoring]        log/log.go:145        Non-zero metrics in the last 30s        {"monitoring": {"metrics": {"beat":{"cpu":{"system":{"ticks":160,"time":{"ms":165}},"total":{"ticks":290,"time":{"ms":298},"value":290},"user":{"ticks":130,"time":{"ms":133}}},"handles":{"limit":{"hard":4096,"soft":1024},"open":24},"info":{"ephemeral_id":"94251c67-a6e0-4d82-9e6a-a39d2d673bcd","uptime":{"ms":30055}},"memstats":{"gc_next":22033904,"memory_alloc":14765896,"memory_total":32108248,"rss":58568704},"runtime":{"goroutines":94}},"filebeat":{"events":{"added":1896,"done":1896},"harvester":{"files":{"0172a8b4-4475-42ee-adc8-db2470ecdb4b":{"last_event_published_time":"2020-05-13T12:17:21.266Z","last_event_timestamp":"2020-05-13T12:17:21.264Z","name":"/var/log/vmware-vmtoolsd-root.log","read_offset":522,"size":522,"start_time":"2020-05-13T12:17:18.264Z"},"32240ca2-74c7-4f54-9f3e-fc1c34304173":{"last_event_published_time":"2020-05-13T12:17:29.277Z","last_event_timestamp":"2020-05-13T12:17:29.277Z","name":"/var/log/auth.log","read_offset":5376,"size":5032,"start_time":"2020-05-13T12:17:18.266Z"},"45c6754f-a096-4f5a-905d-f1357834f969":{"last_event_published_time":"2020-05-13T12:17:21.297Z","last_event_timestamp":"2020-05-13T12:17:21.296Z","name":"/var/log/vmware-vmsvc-root.1.log","read_offset":10155,"size":10155,"start_time":"2020-05-13T12:17:18.246Z"},"877f3d78-e374-44f9-9d21-2a738bc28e5e":{"last_event_published_time":"2020-05-13T12:17:21.292Z","last_event_timestamp":"2020-05-13T12:17:21.292Z","name":"/var/log/fontconfig.log","read_offset":5873,"size":5873,"start_time":"2020-05-13T12:17:18.266Z"},"8c84e524-a0d4-4b34-9b91-637d04838925":{"last_event_published_time":"2020-05-13T12:17:21.250Z","last_event_timestamp":"2020-05-13T12:17:21.250Z","name":"/var/log/vmware-network.log","read_offset":3211,"size":3211,"start_time":"2020-05-13T12:17:18.265Z"},"987901bf-bda9-4c2c-a7ce-db52450b4c54":{"last_event_published_time":"2020-05-13T12:17:21.304Z","last_event_timestamp":"2020-05-13T12:17:21.304Z","name":"/var/log/bootstrap.log","read_offset":56751,"size":56751,"start_time":"2020-05-13T12:17:18.268Z"},"a4a04d7d-ab34-45f5-aa07-3294edd56066":{"last_event_published_time":"2020-05-13T12:17:36.302Z","last_event_timestamp":"2020-05-13T12:17:36.302Z","name":"/var/log/vmware-vmsvc-root.log","read_offset":31095,"size":30941,"start_time":"2020-05-13T12:17:18.261Z"},"a4ca167e-0d07-4952-9c2b-db52d716b6cb":{"last_event_published_time":"2020-05-13T12:17:21.253Z","last_event_timestamp":"2020-05-13T12:17:21.252Z","name":"/var/log/vmware-network.2.log","read_offset":685,"size":685,"start_time":"2020-05-13T12:17:18.264Z"},"a7e321af-51d6-4d17-8a31-f7794121ec4f":{"last_event_published_time":"2020-05-13T12:17:21.294Z","last_event_timestamp":"2020-05-13T12:17:21.294Z","name":"/var/log/vmware-network.1.log","read_offset":3211,"size":3211,"start_time":"2020-05-13T12:17:18.268Z"},"af44e01c-1e0b-4e3f-aaec-7e07a1b639c6":{"last_event_published_time":"2020-05-13T12:17:21.245Z","last_event_timestamp":"2020-05-13T12:17:21.245Z","name":"/var/log/kern.log","read_offset":5636,"size":5636,"start_time":"2020-05-13T12:17:18.266Z"},"b34efc54-ca12-4fd5-b42a-a4301786817b":{"last_event_published_time":"2020-05-13T12:17:21.289Z","last_event_timestamp":"2020-05-13T12:17:21.289Z","name":"/var/log/gpu-manager.log","read_offset":1163,"size":1163,"start_time":"2020-05-13T12:17:18.246Z"},"b4ddd1d8-84a7-4b40-8796-f4cd600364e9":{"last_event_published_time":"2020-05-13T12:17:21.289Z","last_event_timestamp":"2020-05-13T12:17:21.289Z","name":"/var/log/dpkg.log","read_offset":1535,"size":1535,"start_time":"2020-05-13T12:17:18.260Z"},"bcaee23f-de79-4729-878d-b0c049c97e3d":{"last_event_published_time":"2020-05-13T12:17:21.304Z","last_event_timestamp":"2020-05-13T12:17:21.304Z","name":"/var/log/vmware-vmsvc-root.2.log","read_offset":6113,"size":6113,"start_time":"2020-05-13T12:17:18.260Z"},"cded127f-e23c-444b-a074-1c6336b0ada1":{"last_event_published_time":"2020-05-13T12:17:21.284Z","last_event_timestamp":"2020-05-13T12:17:21.282Z","name":"/var/log/alternatives.log","read_offset":3485,"size":3485,"start_time":"2020-05-13T12:17:18.246Z"}},"open_files":14,"running":14,"started":14}},"libbeat":{"config":{"module":{"running":0},"reloads":1,"scans":1},"output":{"events":{"acked":1818,"batches":39,"total":1818},"read":{"bytes":26755},"type":"elasticsearch","write":{"bytes":1411798}},"pipeline":{"clients":1,"events":{"active":0,"filtered":78,"published":1818,"retry":50,"total":1896},"queue":{"acked":1818}}},"registrar":{"states":{"current":14,"update":1896},"writes":{"success":52,"total":52}},"system":{"cpu":{"cores":1},"load":{"1":0.32,"15":0.04,"5":0.14,"norm":{"1":0.32,"15":0.04,"5":0.14}}}}}}

Your help is appreciated.

Thanks

It looks like Kibana is in a consistent state again (yay for that), but the data you are expecting is not available. I'm no expert on filebeat, but maybe the right modules are not enabled or there are no logs containing this data?

As this part of your problem seems to be related to the configuration of the beat, it's probably best to ask this question in the Beats forum.
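
If the dashboards do turn out to expect module data, enabling one would look roughly like this on the client (a sketch using the system module as an example):

sudo filebeat modules enable system   # parse syslog/auth logs into structured fields
sudo filebeat setup                   # reload the index template and dashboards
sudo systemctl restart filebeat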

OK, many thanks for your help.

Sorry, is this dashboard specific to using modules? Is that the reason I'm not getting values?

What dashboard should I use for non-module input? I only enabled the general log paths:

paths:
  - /var/log/*.log

Regards,


I'm not sure, as this is Beats-specific. I'm sure you will get help with that in the Beats forum as well.

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.