Good day,
I'm new to Elastic. I have a single-node Elasticsearch cluster running, and Winlogbeat is shipping data to it without issues, but when I try to configure Filebeat to send data to Elasticsearch it does not seem to communicate. See the filebeat.yml test log below.
2016/07/26 18:55:33.924412 geolite.go:24: INFO GeoIP disabled: No paths were set under output.geoip.paths
2016/07/26 18:55:33.924412 logstash.go:106: INFO Max Retries set to: 3
2016/07/26 18:55:33.924412 client.go:100: DBG connect
2016/07/26 18:55:33.925417 outputs.go:126: INFO Activated logstash as output plugin.
2016/07/26 18:55:33.925417 file.go:39: INFO File output base filename set to: filebeat
2016/07/26 18:55:33.925417 file.go:50: INFO Rotate every bytes set to: 10485760
2016/07/26 18:55:33.925417 file.go:57: INFO Number of files set to: 7
2016/07/26 18:55:33.925417 outputs.go:126: INFO Activated file as output plugin.
2016/07/26 18:55:33.925417 publish.go:232: DBG Create output worker
2016/07/26 18:55:33.925417 publish.go:232: DBG Create output worker
2016/07/26 18:55:33.925417 publish.go:274: DBG No output is defined to store the topology. The server fields might not be filled.
2016/07/26 18:55:33.925917 publish.go:288: INFO Publisher name: FLMIR-ACABRERA
2016/07/26 18:55:33.933982 async.go:78: INFO Flush Interval set to: 1s
2016/07/26 18:55:33.933982 async.go:84: INFO Max Bulk Size set to: 2048
2016/07/26 18:55:33.933982 async.go:92: DBG create bulk processing worker (interval=1s, bulk size=2048)
2016/07/26 18:55:33.933982 async.go:78: INFO Flush Interval set to: -1ms
2016/07/26 18:55:33.933982 async.go:84: INFO Max Bulk Size set to: -1
2016/07/26 18:55:33.933982 beat.go:147: INFO Init Beat: filebeat; Version: 1.2.3
2016/07/26 18:55:33.934990 beat.go:173: INFO filebeat sucessfully setup. Start running.
2016/07/26 18:55:33.934990 registrar.go:68: INFO Registry file set to: C:\ProgramData\filebeat\registry
2016/07/26 18:55:33.934990 registrar.go:80: INFO Loading registrar data from C:\ProgramData\filebeat\registry
2016/07/26 18:55:33.934990 spooler.go:44: DBG Set idleTimeoutDuration to 5s
2016/07/26 18:55:33.934990 crawler.go:38: DBG File Configs: [C:\Filebeat\log*.txt]
2016/07/26 18:55:33.934990 prospector.go:133: INFO Set ignore_older duration to 0
2016/07/26 18:55:33.934990 prospector.go:133: INFO Set close_older duration to 1h0m0s
2016/07/26 18:55:33.934990 prospector.go:133: INFO Set scan_frequency duration to 10s
2016/07/26 18:55:33.934990 prospector.go:93: INFO Input type set to: log
2016/07/26 18:55:33.934990 prospector.go:133: INFO Set backoff duration to 1s
2016/07/26 18:55:33.934990 prospector.go:133: INFO Set max_backoff duration to 10s
2016/07/26 18:55:33.934990 prospector.go:113: INFO force_close_file is disabled
2016/07/26 18:55:33.934990 crawler.go:58: DBG Waiting for 1 prospectors to initialise
2016/07/26 18:55:33.934990 prospector.go:143: INFO Starting prospector of type: log
2016/07/26 18:55:33.934990 prospector.go:161: DBG exclude_files: []
2016/07/26 18:55:33.934990 prospector.go:261: DBG scan path C:\Filebeat\log*.txt
2016/07/26 18:55:33.934990 prospector.go:261: DBG scan path C:\Filebeat\log*.txt
2016/07/26 18:55:33.934990 crawler.go:65: DBG No pending prospectors. Finishing setup
2016/07/26 18:55:33.934990 crawler.go:78: INFO All prospectors initialised with 0 states to persist
2016/07/26 18:55:33.934990 registrar.go:87: INFO Starting Registrar
2016/07/26 18:55:33.934990 publish.go:88: INFO Start sending events to output
2016/07/26 18:55:33.934990 service_windows.go:49: DBG Windows is interactive: true
2016/07/26 18:55:33.935989 spooler.go:77: INFO Starting spooler: spool_size: 2048; idle_timeout: 5s
2016/07/26 18:55:36.436962 spooler.go:97: DBG Flushing spooler because of timeout. Events flushed: 0
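The log ends with "All prospectors initialised with 0 states to persist" and "Events flushed: 0", so one thing worth checking is whether the scan path glob actually matches any files on disk. A quick stand-alone check (a Python sketch I wrote for this; the pattern is copied from the log output above, adjust as needed):

```python
import glob

def files_matching(pattern):
    """Return the paths matched by the same glob pattern Filebeat scans."""
    return glob.glob(pattern)

# Pattern taken from the "scan path" line in the log above.
# An empty result means Filebeat has nothing to pick up.
print(files_matching(r"C:\Filebeat\log*.txt"))
```

If this prints an empty list, Filebeat starts fine but never has an event to send, which would also explain the "Events flushed: 0" line.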
And this is my Filebeat configuration file, condensed so it can fit:
filebeat:
  # List of prospectors to fetch data.
  prospectors:
    -
      paths:
        - C:\Filebeat\log\*.txt
      encoding: utf-8
      input_type: log
      document_type: log
  registry_file: "C:/ProgramData/filebeat/registry"
output:
  ### Elasticsearch as output
  elasticsearch:
    hosts: ["192.168.110.100:9200"]
    index: "filebeat"
  ### File as output
  file:
    # Path to the directory where to save the generated files. The option is mandatory.
    path: C:\filebeat\Logs\outPut
    # Name of the generated files. The default is `filebeat` and it generates files: `filebeat`, `filebeat.1`, `filebeat.2`, etc.
    filename: filebeat
logging:
  files:
    rotateeverybytes: 10485760 # = 10MB
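Since a single mis-typed top-level key (for example `utput:` instead of `output:`) silently changes which outputs Filebeat activates, I now run a quick sanity check on the file before starting the service. This is a hypothetical stdlib-only helper, not part of Filebeat; the expected section names assume a Filebeat 1.x layout:

```python
# Hypothetical sanity check for a Beats 1.x YAML config: verify the
# expected top-level sections are present before starting the service.

EXPECTED_SECTIONS = {"filebeat", "output", "logging"}

def top_level_keys(config_text):
    """Return the set of unindented 'key:' names found in the config text."""
    keys = set()
    for line in config_text.splitlines():
        stripped = line.rstrip()
        # Top-level keys start in column 0, are not comments, and end with ':'.
        if stripped and not line[0].isspace() \
                and not stripped.startswith("#") and stripped.endswith(":"):
            keys.add(stripped[:-1])
    return keys

def missing_sections(config_text):
    """Report expected sections that the config does not define."""
    return EXPECTED_SECTIONS - top_level_keys(config_text)

# A typo like 'utput:' shows up as a missing 'output' section.
print(missing_sections("filebeat:\nutput:\nlogging:\n"))
```

It is only a string-level check, but it catches the class of error where a whole output section disappears because its key name is wrong.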
###################### Winlogbeat Configuration Example ##########################
# This file is an example configuration file highlighting only the most common
# options. The winlogbeat.full.yml file from the same directory contains all the
# supported options with more comments. You can use it as a reference.
#
# You can find the full configuration reference here:
# https://www.elastic.co/guide/en/beats/winlogbeat/index.html
#======================= Winlogbeat specific options ==========================
# event_logs specifies a list of event logs to monitor as well as any
# accompanying options. The YAML data type of event_logs is a list of
# dictionaries.
#
# The supported keys are name (required), tags, fields, fields_under_root,
# forwarded, ignore_older, level, event_id, provider, and include_xml. Please
# visit the documentation for the complete details of each option.
# https://go.es.io/WinlogbeatConfig
winlogbeat.event_logs:
  - name: Application
  - name: Security
  - name: System
  - name: Active Directory
  - name: DFS Replication
  - name: Directory Service
  - name: DNS Server
  - name: File Replication Service
  - name: Key Management Service
#================================ General =====================================
# The name of the shipper that publishes the network data. It can be used to group
# all the transactions sent by a single shipper in the web interface.
#name:
# The tags of the shipper are included in their own field with each
# transaction published.
tags: ["DC"]
# Optional fields that you can specify to add additional information to the
# output.
#fields:
# env: staging
#================================ Outputs =====================================
# Configure what outputs to use when sending the data collected by the beat.
# Multiple outputs may be used.
#-------------------------- Elasticsearch output ------------------------------
#output.elasticsearch:
  # Array of hosts to connect to.
  #hosts: ["192.168.110.100:9200"]
  # Template name. By default the template name is winlogbeat.
  #template.name: "winlogbeat"
  # Path to template file
  #template.path: C:\winlogbeat5\winlogbeat.template.json
  # Overwrite existing template
  #template.overwrite: True
#----------------------------- Logstash output --------------------------------
output.logstash:
  # The Logstash hosts
  hosts: ["192.168.110.100:5044"]
  # Template name. By default the template name is winlogbeat.
  template.name: "winlogbeat"
  # Overwrite existing template
  template.overwrite: false
  # Path to template file
  template.path: C:\winlogbeat5\winlogbeat.template.json

  # Optional TLS. By default is off.
  # List of root certificates for HTTPS server verifications
  #tls.certificate_authorities: ["/etc/pki/root/ca.pem"]
  # Certificate for TLS client authentication
  #tls.certificate: "/etc/pki/client/cert.pem"
  # Client Certificate Key
  #tls.certificate_key: "/etc/pki/client/cert.key"
#================================ Logging =====================================
# Sets log level. The default log level is error.
# Available log levels are: critical, error, warning, info, debug
logging.level: debug