****** filebeat configuration on client server **************************
[root@omsappbuild filebeat]# cat filebeat.yml
################### Filebeat Configuration Example #########################
############################# Filebeat ######################################
filebeat:
  # List of prospectors to fetch data.
  prospectors:
    # Each - is a prospector. Below are the prospector specific configurations
    -
      # Paths that should be crawled and fetched. Glob based paths.
      # To fetch all ".log" files from a specific level of subdirectories
      # /var/log/*/*.log can be used.
      # For each file found under this path, a harvester is started.
      # Make sure no file is defined twice as this can lead to unexpected behaviour.
      paths:
        - /var/log/secure
        - /var/log/messages
        # - /var/log/*.log
        #- c:\programdata\elasticsearch\logs\*

      # Configure the file encoding for reading files with international characters
      # following the W3C recommendation for HTML5 (http://www.w3.org/TR/encoding).
      # Some sample encodings:
      #   plain, utf-8, utf-16be-bom, utf-16be, utf-16le, big5, gb18030, gbk,
      #   hz-gb-2312, euc-kr, euc-jp, iso-2022-jp, shift-jis, ...
      #encoding: plain

      # Type of the files. Based on this the way the file is read is decided.
      # The different types cannot be mixed in one prospector.
      #
      # Possible options are:
      # * log: Reads every line of the log file (default)
      # * stdin: Reads the standard in
      input_type: syslog

      # Exclude files. A list of regular expressions to match. Filebeat drops the files that
      # are matching any regular expression from the list. By default, no files are dropped.
      # exclude_files: [".gz$"]

      # Optional additional fields. These fields can be freely picked
      # to add additional information to the crawled log files for filtering
      #fields:
      #  level: debug
      #  review: 1

      # Set to true to store the additional fields as top level fields instead
      # of under the "fields" sub-dictionary. In case of name conflicts with the
      # fields added by Filebeat itself, the custom fields overwrite the default
      # fields.
      #fields_under_root: false

      # Ignore files which were modified more than the defined timespan in the past.
      # In case all files on your system must be read you can set this value very large.
      # Time strings like 2h (2 hours), 5m (5 minutes) can be used.
      #ignore_older: 0

      # Close older closes the file handler for files which were not modified
      # for longer than close_older.
      # Time strings like 2h (2 hours), 5m (5 minutes) can be used.
      #close_older: 1h

      # Type to be published in the 'type' field. For Elasticsearch output,
      # the type defines the document type these entries should be stored
      # in. Default: log
      #document_type: log

      # Scan frequency in seconds.
      # How often these files should be checked for changes. In case it is set
      # to 0s, it is done as often as possible. Default: 10s
      #scan_frequency: 10s

      # Defines the buffer size every harvester uses when fetching the file
      #harvester_buffer_size: 16384
# Kibana is served by a back end server. This setting specifies the port to use.
# server.port: 5601
# This setting specifies the IP address of the back end server.
server.host: "elkserver_private_ip"
# Enables you to specify a path to mount Kibana at if you are running behind a proxy. This setting
# cannot end in a slash.
# server.basePath: ""
# The URL of the Elasticsearch instance to use for all your queries.
elasticsearch.url: "http://elkserver_private_ip:9200"
# kibana.defaultAppId: "discover"
# If your Elasticsearch is protected with basic authentication, these settings provide
# the username and password that the Kibana server uses to perform maintenance on the Kibana
# index at startup. Your Kibana users still need to authenticate with Elasticsearch, which
# is proxied through the Kibana server.
elasticsearch.username: "kibana4-server"
elasticsearch.password: "xxxx"
# Paths to the PEM-format SSL certificate and SSL key files, respectively. These
# files enable SSL for outgoing requests from the Kibana server to the browser.
server.ssl.cert: absolute path of certificate
server.ssl.key: absolute path of key
shield.encryptionKey: 'xxxx'
Please let me know if you need further info, and help me get this issue resolved.
@Ruflin I ran Filebeat on the client servers whose logs are to be monitored. I got a few errors about spacing in the YAML file, but I fixed them and was able to restart Filebeat.
e.g. error message: Loading config file error: YAML config parsing failed on /etc/filebeat/filebeat.yml: yaml: line 212: did not find expected key. Exiting.
I even tried with Topbeat and got the same error. Not sure why.
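(For reference, this kind of spacing problem can be caught without a full restart, assuming the Filebeat 1.x -configtest flag and the default config path:)

# Parse the config and exit without shipping anything (Filebeat 1.x).
filebeat -configtest -c /etc/filebeat/filebeat.yml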
curl -XGET 'http://privateip:9200/filebeat-*/_search?pretty' -u es_admin
Enter host password for user 'es_admin':
{
  "error" : {
    "root_cause" : [ {
      "type" : "index_not_found_exception",
      "reason" : "no such index",
      "index" : "[filebeat-*]"
    } ],
    "type" : "index_not_found_exception",
    "reason" : "no such index",
    "index" : "[filebeat-*]"
  },
  "status" : 404
}
I don't see any logstash output configured in your Filebeat configuration file.
You have two elasticsearch outputs in your Logstash configuration, but only one of them is configured with your Shield username and password.
The elasticsearch output that is configured with a username and password is missing options like index and document_type.
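For the first point, a minimal logstash output section in filebeat.yml could look roughly like this (host and port are placeholders for your Logstash server; TLS is left off for the initial test):

output:
  logstash:
    # Placeholder address of the Logstash server; 5044 is the conventional Beats port.
    hosts: ["logstash_private_ip:5044"]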
If this is your first time setting up Beats -> Logstash -> Elasticsearch, I really recommend starting simple and incrementally building up the configuration. It will be much easier to debug.
Setup Filebeat to Logstash in isolation without TLS. Disable any elasticsearch outputs and use only the stdout output. Once you are seeing data on the Logstash console then add in TLS between Filebeat and Logstash. And then add in your filters.
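A bare-bones Logstash pipeline for that first step might be something like this (the port is an assumption and just needs to match the Filebeat logstash output):

input {
  beats {
    # Listen on the same port that Filebeat's logstash output points at.
    port => 5044
  }
}
output {
  # Print each event to the console so you can confirm data is flowing.
  stdout { codec => rubydebug }
}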
Then with Filebeat to Logstash working, setup your elasticsearch output from Logstash.
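At that point, the elasticsearch output that carries your Shield credentials could look roughly like the sketch below; the user, password, and host are placeholders, and the index/document_type values follow the usual Beats-via-Logstash convention of reading the metadata added by the beats input:

output {
  elasticsearch {
    hosts => ["elkserver_private_ip:9200"]
    user => "logstash_writer"    # placeholder Shield user with write access
    password => "xxxx"           # placeholder password
    # Route events into per-day Beats indices using the metadata set by the beats input.
    index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
    document_type => "%{[@metadata][type]}"
  }
}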
After it's all working, you can stop Filebeat, delete the Filebeat registry, delete any indices created in Elasticsearch, and then restart Filebeat to reindex the logs with your final setup.
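In concrete terms, that cleanup could be something like this (the registry path is the default for the RPM/DEB packages, so check registry_file in your config; the index pattern and credentials are placeholders):

service filebeat stop
# Default registry location for packaged installs; adjust if registry_file points elsewhere.
rm /var/lib/filebeat/registry
# Drop any indices already created during testing.
curl -XDELETE 'http://elkserver_private_ip:9200/filebeat-*' -u es_admin
service filebeat start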