Hi all,
I am getting an error in Kibana: the Filebeat index has been created, but Kibana is unable to fetch any data.
Kindly check my filebeat.yml and my Logstash input config below.
Please help; it's important.
This is my Filebeat log file:
2018-07-12T16:42:52.524+0530 INFO registrar/registrar.go:127 States Loaded from registrar: 0
2018-07-12T16:42:52.525+0530 WARN beater/filebeat.go:354 Filebeat is unable to load the Ingest Node pipelines for the configured modules because the Elasticsearch output is not configured/enabled. If you have already loaded the Ingest Node pipelines or are using Logstash pipelines, you can ignore this warning.
2018-07-12T16:42:52.525+0530 INFO crawler/crawler.go:48 Loading Inputs: 1
2018-07-12T16:42:52.525+0530 INFO crawler/crawler.go:82 Loading and starting Inputs completed. Enabled inputs: 0
2018-07-12T16:42:52.525+0530 INFO cfgfile/reload.go:122 Config reloader started
2018-07-12T16:42:52.525+0530 INFO cfgfile/reload.go:214 Loading of config files completed.
2018-07-12T16:42:53.943+0530 INFO [monitoring] log/log.go:124 Non-zero metrics in the last
filebeat.yml:
###################### Filebeat Configuration Example #########################
# This file is an example configuration file highlighting only the most common
# options. The filebeat.reference.yml file from the same directory contains all the
# supported options with more comments. You can use it as a reference.
#
# You can find the full configuration reference here:
# https://www.elastic.co/guide/en/beats/filebeat/index.html
# For more available modules and options, please see the filebeat.reference.yml sample
# configuration file.
#=========================== Filebeat inputs =============================
filebeat.inputs:

# Each - is an input. Most options can be set at the input level, so
# you can use different inputs for various configurations.
# Below are the input-specific configurations.

- type: log

  # Change to true to enable this input configuration.
  enabled: true

  # Paths that should be crawled and fetched. Glob based paths.
  paths:
    - /var/log/*.log
    #- c:\programdata\elasticsearch\logs\*

  # Exclude lines. A list of regular expressions to match. It drops the lines that
  # match any regular expression from the list.
  #exclude_lines: ['^DBG']

  # Include lines. A list of regular expressions to match. It exports the lines that
  # match any regular expression from the list.
  #include_lines: ['^ERR', '^WARN']

  # Exclude files. A list of regular expressions to match. Filebeat drops the files
  # that match any regular expression from the list. By default, no files are dropped.
  #exclude_files: ['.gz$']

  # Optional additional fields. These fields can be freely picked
  # to add additional information to the crawled log files for filtering.
  #fields:
  #  level: debug
  #  review: 1

  ### Multiline options

  # Multiline can be used for log messages spanning multiple lines. This is common
  # for Java stack traces or C-line continuation.

  # The regexp pattern that has to be matched. The example pattern matches all lines starting with [
  #multiline.pattern: ^\[

  # Defines if the pattern set under pattern should be negated or not. Default is false.
  #multiline.negate: false

  # Match can be set to "after" or "before". It is used to define if lines should be appended to a pattern
  # that was (not) matched before or after, or as long as a pattern is not matched, based on negate.
  # Note: "after" is the equivalent of "previous" and "before" is the equivalent of "next" in Logstash.
  #multiline.match: after
#============================= Filebeat modules ===============================
filebeat.config.modules:
  # Glob pattern for configuration loading
  path: ${path.config}/modules.d/*.yml

  # Set to true to enable config reloading
  reload.enabled: false

  # Period on which files under path should be checked for changes
  #reload.period: 10s
#==================== Elasticsearch template setting ==========================
setup.template.settings:
  index.number_of_shards: 3
  #index.codec: best_compression
  #_source.enabled: false
#================================ General =====================================
# The name of the shipper that publishes the network data. It can be used to group
# all the transactions sent by a single shipper in the web interface.
#name:
# The tags of the shipper are included in their own field with each
# transaction published.
#tags: ["service-X", "web-tier"]
# Optional fields that you can specify to add additional information to the
# output.
#fields:
# env: staging
#============================== Dashboards =====================================
# These settings control loading the sample dashboards to the Kibana index. Loading
# the dashboards is disabled by default and can be enabled either by setting the
# options here, or by using the `-setup` CLI flag or the `setup` command.
setup.dashboards.enabled: true
# The URL from where to download the dashboards archive. By default this URL
# has a value which is computed based on the Beat name and version. For released
# versions, this URL points to the dashboard archive on the artifacts.elastic.co
# website.
#setup.dashboards.url:
#============================== Kibana =====================================
# Starting with Beats version 6.0.0, the dashboards are loaded via the Kibana API.
# This requires a Kibana endpoint configuration.
setup.kibana:

  # Kibana Host
  # Scheme and port can be left out and will be set to the default (http and 5601).
  # In case you specify an additional path, the scheme is required: http://localhost:5601/path
  # IPv6 addresses should always be defined as: https://[2001:db8::1]:5601
  host: "192.168.2.230:5601"
#================================ Outputs =====================================
# Configure what output to use when sending the data collected by the beat.
#-------------------------- Elasticsearch output ------------------------------
#output.elasticsearch:
  # Array of hosts to connect to.
  #hosts: ["192.168.2.230:9200"]

  # Optional protocol and basic auth credentials.
  #protocol: "https"
  #username: "elastic"
  #password: "changeme"

#----------------------------- Logstash output --------------------------------
output.logstash:
  # The Logstash hosts
  hosts: ["192.168.2.230:5044"]
Kindly check the above and give us a solution.
Thanks in advance for the suggestions.
Based on the logs, it seems Filebeat doesn't find any data under the defined paths. Did you start Filebeat before? Which log files are you trying to tail?
Thanks for the reply!
Basically I am new to this; for now I just kept *.log, since that is going to ship every log, right?
If I am wrong, correct me and give me a solution to resolve this problem.
Now I am getting this log in Filebeat:
ppid": 4216, "seccomp": {"mode":""}, "start_time": "2018-07-13T11:52:20.760+0530"}}}
2018-07-13T11:52:21.568+0530 INFO instance/beat.go:225 Setup Beat: filebeat; Version: 6.3.1
2018-07-13T11:52:21.568+0530 INFO pipeline/module.go:81 Beat name: ganghadhar
2018-07-13T11:52:21.568+0530 INFO [monitoring] log/log.go:97 Starting metrics logging every 30s
2018-07-13T11:52:21.568+0530 INFO kibana/client.go:90 Kibana url: http://192.168.2.230:5601
2018-07-13T11:52:49.623+0530 INFO instance/beat.go:607 Kibana dashboards successfully loaded.
2018-07-13T11:52:49.623+0530 INFO instance/beat.go:315 filebeat start running.
2018-07-13T11:52:49.623+0530 INFO registrar/registrar.go:116 Loading registrar data from /var/lib/filebeat/registry
2018-07-13T11:52:49.623+0530 INFO registrar/registrar.go:127 States Loaded from registrar: 16
2018-07-13T11:52:49.623+0530 WARN beater/filebeat.go:354 Filebeat is unable to load the Ingest Node pipelines for the configured modules because the Elasticsearch output is not configured/enabled. If you have already loaded the Ingest Node pipelines or are using Logstash pipelines, you can ignore this warning.
2018-07-13T11:52:49.623+0530 INFO crawler/crawler.go:48 Loading Inputs: 1
2018-07-13T11:52:50.199+0530 INFO log/input.go:113 Configured paths: [/var/log/*.log]
2018-07-13T11:52:50.199+0530 INFO input/input.go:88 Starting input of type: log; ID: 11204088409762598069
2018-07-13T11:52:50.199+0530 INFO crawler/crawler.go:82 Loading and starting Inputs completed. Enabled inputs: 1
2018-07-13T11:52:50.199+0530 INFO cfgfile/reload.go:122 Config reloader started
2018-07-13T11:52:50.200+0530 INFO cfgfile/reload.go:214 Loading of config files completed.
2018-07-13T11:52:50.241+0530 INFO log/harvester.go:228 Harvester started for file: /var/log/Xorg.0.log
2018-07-13T11:52:50.283+0530 INFO log/harvester.go:228 Harvester started for file: /var/log/Xorg.1.log
Based on this line, 2018-07-13T11:52:49.623+0530 INFO registrar/registrar.go:127 States Loaded from registrar: 16, it seems you started Filebeat before and it is now continuing to read your files. I assume the reason you don't see any logs is that there are no new log lines. Unfortunately, the log excerpt you sent above covers less than 30 seconds; every 30 seconds some stats are printed, and they show how many log lines were shipped.
If you want to start shipping logs from scratch again, you have to remove the registry file inside the data directory.
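On a standard package install, that would look roughly like this (a sketch; the registry path is the one reported by the registrar line in your log above, and the systemd service name assumes the official Filebeat package):

sudo systemctl stop filebeat
# Registry path as shown in the log above; adjust if your data directory differs.
sudo rm /var/lib/filebeat/registry
sudo systemctl start filebeat

Filebeat will then re-read all files matching the configured paths from the beginning.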
Thanks for the reply again. Where can I remove the registry file? Could you please tell me the location for deleting it?
This is my Logstash config file:

input {
  beats {
    port => 5044
  }
}

# The filter part of this file is commented out to indicate that it is optional.
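For reference, a complete minimal pipeline with an Elasticsearch output could look like the sketch below. The host 192.168.2.230:9200 is assumed from the commented-out Elasticsearch output in filebeat.yml above, and the index pattern is the usual Beats-to-Logstash convention:

input {
  beats {
    port => 5044
  }
}

# No filters are required just to get events into Elasticsearch.

output {
  elasticsearch {
    hosts => ["192.168.2.230:9200"]
    index => "%{[@metadata][beat]}-%{[@metadata][version]}-%{+YYYY.MM.dd}"
  }
}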
The error you see seems to indicate that there are sometimes errors connecting to Logstash. Is there something like a load balancer in between Beats and Logstash? Try running the command filebeat test output.
logstash: 192.168.2.230:5044...
connection...
parse host... OK
dns lookup... OK
addresses: 192.168.2.230
dial up... OK
TLS... WARN secure connection disabled
talk to server... OK
I have Nginx installed; is it necessary?
Could you please suggest how to add any number of clients, each with a different Filebeat index name? I have configured a client PC for transferring logs, but I am unable to locate the client's logs in the Kibana dashboard.
Could you please suggest how to add clients?
I am able to see only two beats; could you please suggest how to resolve this?
These are the logs of the Filebeat client:
2018-07-17T12:14:50.723+0530 INFO crawler/crawler.go:48 Loading Inputs: 1
2018-07-17T12:14:50.724+0530 INFO log/input.go:113 Configured paths: [/var/log/*.log]
2018-07-17T12:14:50.724+0530 INFO input/input.go:88 Starting input of type: log; ID: 11204088409762598069
2018-07-17T12:14:50.739+0530 INFO crawler/crawler.go:82 Loading and starting Inputs completed. Enabled inputs: 1
2018-07-17T12:14:50.739+0530 INFO cfgfile/reload.go:122 Config reloader started
2018-07-17T12:14:50.739+0530 INFO cfgfile/reload.go:214 Loading of config files completed.
2018-07-17T12:14:50.836+0530 INFO log/harvester.go:228 Harvester started for file: /var/log/alternatives.log
2018-07-17T12:14:50.837+0530 INFO log/harvester.go:228 Harvester started for file: /var/log/bootstrap.log
2018-07-17T12:14:50.836+0530 INFO log/harvester.go:228 Harvester started for file: /var/log/Xorg.1.log
2018-07-17T12:14:50.836+0530 INFO log/harvester.go:228 Harvester started for file: /var/log/fontconfig.log
2018-07-17T12:14:51.132+0530 INFO log/harvester.go:228 Harvester started for file: /var/log/apport.log
2018-07-17T12:14:51.132+0530 INFO log/harvester.go:228 Harvester started for file: /var/log/boot.log
2018-07-17T12:14:51.132+0530 INFO log/harvester.go:228 Harvester started for file: /var/log/dpkg.log
2018-07-17T12:14:51.133+0530 INFO log/harvester.go:228 Harvester started for file: /var/log/gpu-manager.log
2018-07-17T12:14:51.133+0530 INFO log/harvester.go:228 Harvester started for file: /var/log/kern.log
2018-07-17T12:14:51.133+0530 INFO log/harvester.go:228 Harvester started for file: /var/log/xrdp-sesman.log
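To give each client its own index, one option (a sketch; the field name client and the value pc1 are placeholders, not taken from your configs) is to tag each client in its filebeat.yml:

fields:
  client: pc1
fields_under_root: true

and then reference that field in the Logstash Elasticsearch output:

output {
  elasticsearch {
    hosts => ["192.168.2.230:9200"]
    index => "filebeat-%{client}-%{+YYYY.MM.dd}"
  }
}

Each client then writes to its own daily index, e.g. filebeat-pc1-2018.07.17, which you can match with separate index patterns in Kibana.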