No matching indices found: no indices match pattern filebeat-*

Hi All,
I am getting this error in Kibana: Filebeat has been set up, but Kibana is unable to fetch the data.
Kindly check my filebeat.yml and my Logstash input config.
Please help me, it's important.
This is my Filebeat log file:
2018-07-12T16:42:52.524+0530 INFO registrar/registrar.go:127 States Loaded from registrar: 0
2018-07-12T16:42:52.525+0530 WARN beater/filebeat.go:354 Filebeat is unable to load the Ingest Node pipelines for the configured modules because the Elasticsearch output is not configured/enabled. If you have already loaded the Ingest Node pipelines or are using Logstash pipelines, you can ignore this warning.
2018-07-12T16:42:52.525+0530 INFO crawler/crawler.go:48 Loading Inputs: 1
2018-07-12T16:42:52.525+0530 INFO crawler/crawler.go:82 Loading and starting Inputs completed. Enabled inputs: 0
2018-07-12T16:42:52.525+0530 INFO cfgfile/reload.go:122 Config reloader started
2018-07-12T16:42:52.525+0530 INFO cfgfile/reload.go:214 Loading of config files completed.
2018-07-12T16:42:53.943+0530 INFO [monitoring] log/log.go:124 Non-zero metrics in the last
Filebeat.yml

###################### Filebeat Configuration Example #########################

# This file is an example configuration file highlighting only the most common
# options. The filebeat.reference.yml file from the same directory contains all the
# supported options with more comments. You can use it as a reference.
#
# You can find the full configuration reference here:
# https://www.elastic.co/guide/en/beats/filebeat/index.html

# For more available modules and options, please see the filebeat.reference.yml sample
# configuration file.
#=========================== Filebeat inputs =============================
filebeat.inputs:
# Each - is an input. Most options can be set at the input level, so
# you can use different inputs for various configurations.
# Below are the input specific configurations.
- type: log
  # Change to true to enable this input configuration.
  enabled: true
  # Paths that should be crawled and fetched. Glob based paths.
  paths:
    - /var/log/*.log
    #- c:\programdata\elasticsearch\logs\*
  # Exclude lines. A list of regular expressions to match. It drops the lines that are
  # matching any regular expression from the list.
  #exclude_lines: ['^DBG']
  # Include lines. A list of regular expressions to match. It exports the lines that are
  # matching any regular expression from the list.
  #include_lines: ['^ERR', '^WARN']
  # Exclude files. A list of regular expressions to match. Filebeat drops the files that
  # are matching any regular expression from the list. By default, no files are dropped.
  #exclude_files: ['.gz$']
  # Optional additional fields. These fields can be freely picked
  # to add additional information to the crawled log files for filtering
  #fields:
  #  level: debug
  #  review: 1
  ### Multiline options
  # Multiline can be used for log messages spanning multiple lines. This is common
  # for Java Stack Traces or C-Line Continuation
  # The regexp Pattern that has to be matched. The example pattern matches all lines starting with [
  #multiline.pattern: ^\[
  # Defines if the pattern set under pattern should be negated or not. Default is false.
  #multiline.negate: false
  # Match can be set to "after" or "before". It is used to define if lines should be appended to a pattern
  # that was (not) matched before or after, or as long as a pattern is not matched based on negate.
  # Note: After is the equivalent to previous and before is the equivalent to next in Logstash
  #multiline.match: after
#============================= Filebeat modules ===============================
filebeat.config.modules:
  # Glob pattern for configuration loading
  path: ${path.config}/modules.d/*.yml
  # Set to true to enable config reloading
  reload.enabled: false
  # Period on which files under path should be checked for changes
  #reload.period: 10s
#==================== Elasticsearch template setting ==========================
setup.template.settings:
  index.number_of_shards: 3
  #index.codec: best_compression
  #_source.enabled: false
#================================ General =====================================
# The name of the shipper that publishes the network data. It can be used to group
# all the transactions sent by a single shipper in the web interface.
#name:
# The tags of the shipper are included in their own field with each
# transaction published.
#tags: ["service-X", "web-tier"]
# Optional fields that you can specify to add additional information to the
# output.
#fields:
#  env: staging
#============================== Dashboards =====================================
# These settings control loading the sample dashboards to the Kibana index. Loading
# the dashboards is disabled by default and can be enabled either by setting the
# options here, or by using the `-setup` CLI flag or the `setup` command.
setup.dashboards.enabled: true
# The URL from where to download the dashboards archive. By default this URL
# has a value which is computed based on the Beat name and version. For released
# versions, this URL points to the dashboard archive on the artifacts.elastic.co
# website.
#setup.dashboards.url:
#============================== Kibana =====================================
# Starting with Beats version 6.0.0, the dashboards are loaded via the Kibana API.
# This requires a Kibana endpoint configuration.
setup.kibana:
  # Kibana Host
  # Scheme and port can be left out and will be set to the default (http and 5601)
  # In case you specify an additional path, the scheme is required: http://localhost:5601/path
  # IPv6 addresses should always be defined as: https://[2001:db8::1]:5601
  host: "192.168.2.230:5601"
#================================ Outputs =====================================
# Configure what output to use when sending the data collected by the beat.
#-------------------------- Elasticsearch output ------------------------------
#output.elasticsearch:
  # Array of hosts to connect to.
  #hosts: ["192.168.2.230:9200"]
  # Optional protocol and basic auth credentials.
  #protocol: "https"
  #username: "elastic"
  #password: "changeme"
#----------------------------- Logstash output --------------------------------
output.logstash:
  # The Logstash hosts
  hosts: ["192.168.2.230:5044"]

Kindly check the above and suggest a solution.
Thanks in advance for any suggestions.

Based on the logs it seems Filebeat doesn't find any data under the defined paths. Did you start Filebeat before? What log files are you trying to tail?
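You can check what the glob in your config actually matches, for example (assuming the paths section from your filebeat.yml above):

ls -l /var/log/*.log

If the files listed there haven't received any new lines since Filebeat last read them, nothing new will be shipped.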

Thanks for the reply, bro.
Basically I am new to this; for now I just kept *.log since that should ship every log, right?
If I am wrong, correct me and suggest how to resolve this problem.

Now I am getting this log from Filebeat:
ppid": 4216, "seccomp": {"mode":""}, "start_time": "2018-07-13T11:52:20.760+0530"}}}
2018-07-13T11:52:21.568+0530 INFO instance/beat.go:225 Setup Beat: filebeat; Version: 6.3.1
2018-07-13T11:52:21.568+0530 INFO pipeline/module.go:81 Beat name: ganghadhar
2018-07-13T11:52:21.568+0530 INFO [monitoring] log/log.go:97 Starting metrics logging every 30s
2018-07-13T11:52:21.568+0530 INFO kibana/client.go:90 Kibana url: http://192.168.2.230:5601
2018-07-13T11:52:49.623+0530 INFO instance/beat.go:607 Kibana dashboards successfully loaded.
2018-07-13T11:52:49.623+0530 INFO instance/beat.go:315 filebeat start running.
2018-07-13T11:52:49.623+0530 INFO registrar/registrar.go:116 Loading registrar data from /var/lib/filebeat/registry
2018-07-13T11:52:49.623+0530 INFO registrar/registrar.go:127 States Loaded from registrar: 16
2018-07-13T11:52:49.623+0530 WARN beater/filebeat.go:354 Filebeat is unable to load the Ingest Node pipelines for the configured modules because the Elasticsearch output is not configured/enabled. If you have already loaded the Ingest Node pipelines or are using Logstash pipelines, you can ignore this warning.
2018-07-13T11:52:49.623+0530 INFO crawler/crawler.go:48 Loading Inputs: 1
2018-07-13T11:52:50.199+0530 INFO log/input.go:113 Configured paths: [/var/log/*.log]
2018-07-13T11:52:50.199+0530 INFO input/input.go:88 Starting input of type: log; ID: 11204088409762598069
2018-07-13T11:52:50.199+0530 INFO crawler/crawler.go:82 Loading and starting Inputs completed. Enabled inputs: 1
2018-07-13T11:52:50.199+0530 INFO cfgfile/reload.go:122 Config reloader started
2018-07-13T11:52:50.200+0530 INFO cfgfile/reload.go:214 Loading of config files completed.
2018-07-13T11:52:50.241+0530 INFO log/harvester.go:228 Harvester started for file: /var/log/Xorg.0.log
2018-07-13T11:52:50.283+0530 INFO log/harvester.go:228 Harvester started for file: /var/log/Xorg.1.log

Please reply, it's urgent.

Bro, Logstash is not receiving the logs, and on Kibana it's showing as:

Based on this line: 2018-07-13T11:52:49.623+0530 INFO registrar/registrar.go:127 States Loaded from registrar: 16, it seems you started Filebeat before, and it is now continuing to read your files from where it left off. I assume the reason you don't see any logs is that there aren't any new log lines. Unfortunately, the log excerpt you sent above covers less than 30s. Every 30s some stats are printed, and they show how many events were shipped.

If you want to start shipping the logs from scratch again, you have to remove the registry file inside the data directory.
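For example, something along these lines, assuming Filebeat runs as a systemd service and uses the registry path shown in your log (/var/lib/filebeat/registry):

sudo systemctl stop filebeat         # stop Filebeat first so the registry isn't rewritten while you delete it
sudo rm /var/lib/filebeat/registry   # remove the registry so all files are read from the beginning again
sudo systemctl start filebeat        # start Filebeat again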

Can you also share your logstash config?
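You could also check whether any filebeat index has been created in Elasticsearch at all, for example (a quick check, assuming the Elasticsearch host from your setup):

curl 'http://192.168.2.230:9200/_cat/indices/filebeat-*?v'

If that returns nothing, no events have reached Elasticsearch yet, which would explain why Kibana finds no indices matching filebeat-*.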

Thanks for the reply again. Where do I remove the registry file? Could you please tell me the location for deleting it?
This is my Logstash config file:

input {
  beats {
    port => 5044
  }
}

# The filter part of this file is commented out to indicate that it
# is optional.
filter {
}

output {
  elasticsearch {
    hosts => "192.168.2.230:9200"
    manage_template => false
    index => "%{[@metadata][beat]}-%{[@metadata][version]}-%{+YYYY.MM.dd}"
  }
}

Hey Ruflin,
I just removed the registry file and restarted the Filebeat service.
Now I am getting this on Kibana.

2018-07-13T14:51:10.540+0530 INFO crawler/crawler.go:48 Loading Inputs: 1
2018-07-13T14:51:10.541+0530 INFO log/input.go:113 Configured paths: [/var/log/*.log]
2018-07-13T14:51:10.541+0530 INFO input/input.go:88 Starting input of type: log; ID: 11204088409762598069
2018-07-13T14:51:10.541+0530 INFO crawler/crawler.go:82 Loading and starting Inputs completed. Enabled inputs: 1
2018-07-13T14:51:10.541+0530 INFO cfgfile/reload.go:122 Config reloader started
2018-07-13T14:51:10.542+0530 INFO cfgfile/reload.go:214 Loading of config files completed.
2018-07-13T14:51:10.543+0530 INFO log/harvester.go:228 Harvester started for file: /var/log/pm-suspend.log
2018-07-13T14:51:10.553+0530 INFO log/harvester.go:228 Harvester started for file: /var/log/wpa_supplicant.log
2018-07-13T14:51:10.553+0530 INFO log/harvester.go:228 Harvester started for file: /var/log/Xorg.9.log
2018-07-13T14:51:10.554+0530 INFO log/harvester.go:228 Harvester started for file: /var/log/Xorg.1.log
2018-07-13T14:51:10.649+0530 INFO log/harvester.go:228 Harvester started for file: /var/log/anaconda.storage.log
2018-07-13T14:51:10.715+0530 INFO log/harvester.go:228 Harvester started for file: /var/log/anaconda.ifcfg.log
2018-07-13T14:51:10.716+0530 INFO log/harvester.go:228 Harvester started for file: /var/log/spice-vdagent.log
2018-07-13T14:51:10.716+0530 INFO log/harvester.go:228 Harvester started for file: /var/log/anaconda.log
2018-07-13T14:51:10.716+0530 INFO log/harvester.go:228 Harvester started for file: /var/log/Xorg.0.log
2018-07-13T14:51:10.716+0530 INFO log/harvester.go:228 Harvester started for file: /var/log/pm-powersave.log
2018-07-13T14:51:10.716+0530 INFO log/harvester.go:228 Harvester started for file: /var/log/anaconda.yum.log
2018-07-13T14:51:10.716+0530 INFO log/harvester.go:228 Harvester started for file: /var/log/dracut.log
2018-07-13T14:51:10.717+0530 INFO log/harvester.go:228 Harvester started for file: /var/log/yum.log
2018-07-13T14:51:10.717+0530 INFO log/harvester.go:228 Harvester started for file: /var/log/boot.log
2018-07-13T14:51:10.717+0530 INFO log/harvester.go:228 Harvester started for file: /var/log/Xorg.2.log

Error in the Filebeat log file:

2018-07-13T14:55:06.749+0530 ERROR logstash/async.go:235 Failed to publish events caused by: write tcp 192.168.5.66:43840->192.168.2.230:5044: write: connection reset by peer
2018-07-13T14:55:07.749+0530 ERROR pipeline/output.go:92 Failed to publish events: write tcp 192.168.5.66:43840->192.168.2.230:5044: write: connection reset by peer

Ok, sounds like the data flow is working now.

The error you see seems to indicate that sometimes there are errors connecting to Logstash. Is there something like a load balancer in between Beats and Logstash? Try running the command filebeat test output.

logstash: 192.168.2.230:5044...
connection...
parse host... OK
dns lookup... OK
addresses: 192.168.2.230
dial up... OK
TLS... WARN secure connection disabled
talk to server... OK
I have installed Nginx, is it necessary?

You have nginx between Beats and LS? Best to send the data directly to LS; this should also get rid of the problem.

So I have to remove the nginx server, right?

That would be my recommendation. What is the reason you put it there?

On some other site it was mentioned that we must set up a reverse proxy to allow external access, so for that reason I installed and configured it.

Could you please suggest how to add n number of clients, each with a different Filebeat index name? I have configured a client PC for transferring logs, but I am unable to locate the client's logs in the Kibana dashboard.
Could you please suggest how to add more clients?
I am able to see only two beats; could you please suggest how to resolve this?
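What I had in mind is something like the following, adding a custom field on each client and using it in the Logstash index name. This is only a rough sketch based on what I read (the field name client_name is just an example), so please correct me if it is the wrong approach.

In each client's filebeat.yml input:

- type: log
  enabled: true
  paths:
    - /var/log/*.log
  fields:
    client_name: client01   # example per-client identifier, kept lowercase for index naming

And in the Logstash elasticsearch output:

index => "%{[fields][client_name]}-%{[@metadata][beat]}-%{+YYYY.MM.dd}"

I understand I would then also need a matching index pattern for each prefix in Kibana.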
Logs of the Filebeat client:

2018-07-17T12:14:50.723+0530 INFO crawler/crawler.go:48 Loading Inputs: 1
2018-07-17T12:14:50.724+0530 INFO log/input.go:113 Configured paths: [/var/log/*.log]
2018-07-17T12:14:50.724+0530 INFO input/input.go:88 Starting input of type: log; ID: 11204088409762598069
2018-07-17T12:14:50.739+0530 INFO crawler/crawler.go:82 Loading and starting Inputs completed. Enabled inputs: 1
2018-07-17T12:14:50.739+0530 INFO cfgfile/reload.go:122 Config reloader started
2018-07-17T12:14:50.739+0530 INFO cfgfile/reload.go:214 Loading of config files completed.
2018-07-17T12:14:50.836+0530 INFO log/harvester.go:228 Harvester started for file: /var/log/alternatives.log
2018-07-17T12:14:50.837+0530 INFO log/harvester.go:228 Harvester started for file: /var/log/bootstrap.log
2018-07-17T12:14:50.836+0530 INFO log/harvester.go:228 Harvester started for file: /var/log/Xorg.1.log
2018-07-17T12:14:50.836+0530 INFO log/harvester.go:228 Harvester started for file: /var/log/fontconfig.log
2018-07-17T12:14:51.132+0530 INFO log/harvester.go:228 Harvester started for file: /var/log/apport.log
2018-07-17T12:14:51.132+0530 INFO log/harvester.go:228 Harvester started for file: /var/log/boot.log
2018-07-17T12:14:51.132+0530 INFO log/harvester.go:228 Harvester started for file: /var/log/dpkg.log
2018-07-17T12:14:51.133+0530 INFO log/harvester.go:228 Harvester started for file: /var/log/gpu-manager.log
2018-07-17T12:14:51.133+0530 INFO log/harvester.go:228 Harvester started for file: /var/log/kern.log
2018-07-17T12:14:51.133+0530 INFO log/harvester.go:228 Harvester started for file: /var/log/xrdp-sesman.log
2018-07-17T12:14:51.133+0530 INFO log/harvester.go:228 Harvester started for file: /var/log/Xorg.0.log
2018-07-17T12:14:51.133+0530 INFO log/harvester.go:228 Harvester started for file: /var/log/Xorg.2.log
2018-07-17T12:14:51.133+0530 INFO log/harvester.go:228 Harvester started for file: /var/log/auth.log

So the initial problem is resolved? If yes, best open a new topic to not confuse the two.

Okay, thanks for the suggestion, I am going to open a new topic.

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.