Couldn't see the logs in Elasticsearch and Kibana


#1

When we send the logs using Filebeat, does it create a separate index for Filebeat in Elasticsearch and Kibana? Or do we need to install the Beats dashboards to create the index?


(Magnus Bäck) #2

Or do we need to install the Beats dashboards to create the index?

No. With the default configuration, Filebeat loads the index template itself, and the filebeat-YYYY.MM.DD index is created automatically as soon as events arrive; the Beats dashboards only provide sample visualizations.
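
With the defaults, the relevant output settings look like this (a sketch assembled from the template settings quoted later in this thread, not the poster's exact file):

    output:
      elasticsearch:
        hosts: ["50.40.30.125:9200"]
        # Optional index name. The default is "filebeat" and generates
        # daily filebeat-YYYY.MM.DD indices as events arrive.
        #index: "filebeat"
        # Template loading; this produces the 'filebeat' template seen in the log in #3.
        template:
          name: "filebeat"
          path: "filebeat.template.json"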


#3

/14 18:16:59.148519 geolite.go:24: INFO GeoIP disabled: No paths were set under output.geoip.paths
/14 18:16:59.182519 output.go:164: INFO Loading template enabled. Trying to load template: filebeat.template.json

/14 18:16:59.237519 client.go:266: INFO Elasticsearch template with name 'filebeat' loaded
/14 18:16:59.237519 outputs.go:126: INFO Activated elasticsearch as output plugin.
/14 18:16:59.238519 publish.go:232: DBG Create output worker
/14 18:16:59.238519 publish.go:274: DBG No output is defined to store the topology. The server fields might not be filled.
/14 18:16:59.248519 async.go:78: INFO Flush Interval set to: 1s
/14 18:16:59.249519 async.go:84: INFO Max Bulk Size set to: 50
/14 18:16:59.249519 async.go:92: DBG create bulk processing worker (interval=1s, bulk size=50)
/14 18:16:59.251519 beat.go:147: INFO Init Beat: filebeat; Version: 1.2.3
/14 18:16:59.253519 beat.go:173: INFO filebeat sucessfully setup. Start running.
/14 18:16:59.254519 registrar.go:68: INFO Registry file set to: C:\ProgramData\filebeat\registry
/14 18:16:59.257519 registrar.go:80: INFO Loading registrar data from C:\ProgramData\filebeat\registry
/14 18:16:59.257519 prospector.go:133: INFO Set ignore_older duration to 0
/14 18:16:59.258519 prospector.go:133: INFO Set close_older duration to 1h0m0s
/14 18:16:59.258519 prospector.go:133: INFO Set scan_frequency duration to 10s
/14 18:16:59.259519 prospector.go:93: INFO Input type set to: log
/14 18:16:59.260519 prospector.go:133: INFO Set backoff duration to 1s
/14 18:16:59.261519 prospector.go:133: INFO Set max_backoff duration to 10s
/14 18:16:59.262519 prospector.go:113: INFO force_close_file is disabled
/14 18:16:59.262519 prospector.go:143: INFO Starting prospector of type: log
/14 18:16:59.258519 spooler.go:77: INFO Starting spooler: spool_size: 2048; idle_timeout: 5s
/14 18:16:59.265519 crawler.go:78: INFO All prospectors initialised with 0 states to persist
/14 18:16:59.265519 registrar.go:87: INFO Starting Registrar
/14 18:16:59.266519 publish.go:88: INFO Start sending events to output
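
Worth noting in this output: the template loads and the prospector starts, but there are no "Harvester started for file: ..." lines (the message Filebeat 1.x prints for each file it begins reading). Their absence suggests the path glob matched no files, which points at the configuration rather than at Elasticsearch.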


#4

I don't see any index created on the ELK server.
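
A quick way to check whether Filebeat created anything at all is Elasticsearch's cat API (using the host from the configuration below):

    curl 'http://50.40.30.125:9200/_cat/indices?v'

If no filebeat-* index shows up there, the events never reached Elasticsearch.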


#5
filebeat:

  prospectors:
    # Each - is a prospector. Below are the prospector specific configurations
    -
      paths:
        #- /var/log/*.log
         - C:\filebeattesting\log\*.log
output:

  ### Elasticsearch as output
  elasticsearch:
    hosts: ["50.40.30.125:9200"]`indent preformatted text by 4 spaces`

(Magnus Bäck) #6

Please edit your post and format your configuration as code with the </> button so that we can see the indentation. The indentation matters in YAML.


#7


############################# Filebeat ######################################
filebeat:

# List of prospectors to fetch data.

prospectors:
# Each - is a prospector. Below are the prospector specific configurations
-
# Paths that should be crawled and fetched. Glob based paths.
# To fetch all ".log" files from a specific level of subdirectories
# /var/log/*/*.log can be used.
# For each file found under this path, a harvester is started.
# Make sure no file is defined twice as this can lead to unexpected behaviour.
paths:
#- /var/log/*.log
- C:\filebeattesting\log\*.log
#- c:\programdata\elasticsearch\logs\*

  # Configure the file encoding for reading files with international characters
  # following the W3C recommendation for HTML5 (http://www.w3.org/TR/encoding).
  # Some sample encodings:
  #   plain, utf-8, utf-16be-bom, utf-16be, utf-16le, big5, gb18030, gbk,
  #    hz-gb-2312, euc-kr, euc-jp, iso-2022-jp, shift-jis, ...
  #encoding: plain

  # Type of the files. Based on this the way the file is read is decided.
  # The different types cannot be mixed in one prospector
  #
  # Possible options are:
  # * log: Reads every line of the log file (default)
  # * stdin: Reads the standard in
  input_type: log

  # Exclude lines. A list of regular expressions to match. It drops the lines that are
  # matching any regular expression from the list. The include_lines is called before
  # exclude_lines. By default, no lines are dropped.
  # exclude_lines: ["^DBG"]

  # Include lines. A list of regular expressions to match. It exports the lines that are
  # matching any regular expression from the list. The include_lines is called before
  # exclude_lines. By default, all the lines are exported.
  # include_lines: ["^ERR", "^WARN"]

  # Exclude files. A list of regular expressions to match. Filebeat drops the files that
  # are matching any regular expression from the list. By default, no files are dropped.
  # exclude_files: [".gz$"]

  # Optional additional fields. These field can be freely picked
  # to add additional information to the crawled log files for filtering
  #fields:
  #  level: debug
  #  review: 1

  # Set to true to store the additional fields as top level fields instead
  # of under the "fields" sub-dictionary. In case of name conflicts with the
  # fields added by Filebeat itself, the custom fields overwrite the default
  # fields.
  #fields_under_root: false

  # Ignore files which were modified more than the defined timespan in the past.
  # In case all files on your system must be read you can set this value very large.
  # Time strings like 2h (2 hours), 5m (5 minutes) can be used.
  #ignore_older: 0

registry_file: "C:/ProgramData/filebeat/registry"

output:

### Elasticsearch as output

elasticsearch:
hosts: ["50.40.30.125:9200"]

# Optional protocol and basic auth credentials.
#protocol: "https"
#username: "admin"
#password: "s3cr3t"

# Number of workers per Elasticsearch host.
#worker: 1

# Optional index name. The default is "filebeat" and generates
# [filebeat-]YYYY.MM.DD keys.
#index: "filebeat"

# A template is used to set the mapping in Elasticsearch
# By default template loading is disabled and no template is loaded.
# These settings can be adjusted to load your own template or overwrite existing ones
template:

  # Template name. By default the template name is filebeat.
  name: "filebeat"

  # Path to template file
  path: "filebeat.template.json"`indent preformatted text by 4 spaces`

(Magnus Bäck) #8

Your post is still mangled so that we can't see exactly what your file looks like.


#9

filebeat:

  prospectors:
    # Each - is a prospector. Below are the prospector specific configurations
    -
      paths:
        #- /var/log/*.log
         - C:\filebeattesting\log\*.log
output:

  ### Elasticsearch as output
  elasticsearch:
    hosts: ["50.40.30.125:9200"]`indent preformatted text by 4 spaces`

(Steffen Siering) #10

Put your path in single quotes, or use \\ or / instead.

e.g.

  prospectors:
    # Each - is a prospector. Below are the prospector specific configurations
    -
      paths:
        #- /var/log/*.log
         - 'C:\filebeattesting\log\*.log'

YAML interprets the \ character as an escape, basically removing it.
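
For completeness, a minimal sketch of the corrected file from #5 with all three spellings (the commented-out alternatives are equivalent; pick one):

    filebeat:
      prospectors:
        # Each - is a prospector. Below are the prospector specific configurations
        -
          paths:
            # Single quotes: backslashes are taken literally
            - 'C:\filebeattesting\log\*.log'
            # Double quotes: every backslash must be escaped
            #- "C:\\filebeattesting\\log\\*.log"
            # Forward slashes also work for Windows paths
            #- C:/filebeattesting/log/*.log
    output:
      elasticsearch:
        hosts: ["50.40.30.125:9200"]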


(system) #11

This topic was automatically closed after 21 days. New replies are no longer allowed.