Hi!
I'm running ELK 6.3.2 (Logstash, Elasticsearch, Kibana) together with Filebeat. All the services are running with no errors. I can see that Filebeat recognizes my configured path and picks up the new log files, but no indices show up in Elasticsearch/Kibana.
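For reference, this is how I have been checking on the Elasticsearch side (assuming Filebeat's default filebeat-* index naming; I run this with curl, but the same requests work from Kibana's Dev Tools):

```
# List all indices; this is where I would expect a filebeat-6.3.2-<date> index
curl "http://localhost:9200/_cat/indices?v"

# Filter to Filebeat indices only
curl "http://localhost:9200/_cat/indices/filebeat-*?v"
```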
Portion of my filebeat.yml:
```
filebeat.inputs:

# Each - is an input. Most options can be set at the input level, so
# you can use different inputs for various configurations.
# Below are the input specific configurations.

- type: log

  # Change to true to enable this input configuration.
  enabled: true

  # Paths that should be crawled and fetched. Glob based paths.
  paths:
    #- /var/log/*.log
    #- c:\programdata\elasticsearch\logs\*
    - c:\queued\*.log

filebeat.config.modules:
  # Glob pattern for configuration loading
  path: ${path.config}/modules.d/*.yml

  # Set to true to enable config reloading
  reload.enabled: false

output.elasticsearch:
  # Array of hosts to connect to.
  hosts: ["localhost:9200"]
```
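I can also double-check the configuration and the connection to Elasticsearch with Filebeat's test subcommands (my understanding of how they work in 6.x, run from the Filebeat home directory):

```
# Validate filebeat.yml
.\filebeat.exe test config -c .\filebeat.yml

# Check that output.elasticsearch (localhost:9200) is reachable
.\filebeat.exe test output -c .\filebeat.yml
```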
Below is a portion of my Filebeat log. I have 80+ files in my log folder:
```
2018-11-13T13:40:39.169-0700 INFO instance/beat.go:492 Home path: [C:\elk-6.3.2\filebeat] Config path: [C:\elk-6.3.2\filebeat] Data path: [C:\ProgramData\filebeat] Logs path: [C:\ProgramData\filebeat\logs]
2018-11-13T13:40:39.414-0700 INFO instance/beat.go:499 Beat UUID: 4622d2b3-f73b-44f7-8018-4bbbbc84943d
2018-11-13T13:40:39.414-0700 INFO [beat] instance/beat.go:716 Beat info {"system_info": {"beat": {"path": {"config": "C:\elk-6.3.2\filebeat", "data": "C:\ProgramData\filebeat", "home": "C:\elk-6.3.2\filebeat", "logs": "C:\ProgramData\filebeat\logs"}, "type": "filebeat", "uuid": "4622d2b3-f73b-44f7-8018-4bbbbc84943d"}}}
2018-11-13T13:40:39.414-0700 INFO [beat] instance/beat.go:725 Build info {"system_info": {"build": {"commit": "45a9a9e1561b6c540e94211ebe03d18abcacae55", "libbeat": "6.3.2", "time": "2018-07-20T04:17:39.000Z", "version": "6.3.2"}}}
2018-11-13T13:40:39.458-0700 INFO [beat] instance/beat.go:728 Go runtime info {"system_info": {"go": {"os":"windows","arch":"amd64","max_procs":4,"version":"go1.9.4"}}}
2018-11-13T13:40:39.620-0700 INFO instance/beat.go:225 Setup Beat: filebeat; Version: 6.3.2
2018-11-13T13:40:39.647-0700 INFO elasticsearch/client.go:145 Elasticsearch url: http://localhost:9200
2018-11-13T13:40:39.658-0700 INFO pipeline/module.go:81 Beat name: DNNNSM6
2018-11-13T13:40:39.685-0700 INFO instance/beat.go:315 filebeat start running.
2018-11-13T13:40:39.685-0700 INFO [monitoring] log/log.go:97 Starting metrics logging every 30s
2018-11-13T13:40:39.685-0700 INFO registrar/registrar.go:117 Loading registrar data from C:\ProgramData\filebeat\registry
2018-11-13T13:40:39.687-0700 INFO registrar/registrar.go:124 States Loaded from registrar: 83
2018-11-13T13:40:39.687-0700 INFO crawler/crawler.go:48 Loading Inputs: 1
2018-11-13T13:40:50.492-0700 INFO log/input.go:118 Configured paths: [c:\queued\*.log]
2018-11-13T13:40:50.492-0700 INFO input/input.go:88 Starting input of type: log; ID: 10762938565121747981
2018-11-13T13:40:50.545-0700 INFO log/harvester.go:228 Harvester started for file: c:\queued\dnnnsm6_2018-11-10_12-54-55.log
2018-11-13T13:40:50.583-0700 INFO crawler/crawler.go:82 Loading and starting Inputs completed. Enabled inputs: 1
2018-11-13T13:40:50.583-0700 INFO cfgfile/reload.go:122 Config reloader started
2018-11-13T13:40:50.647-0700 INFO cfgfile/reload.go:214 Loading of config files completed.
2018-11-13T13:40:50.651-0700 INFO log/harvester.go:228 Harvester started for file: c:\queued\dnnnsm6_2018-11-10_12-56-55.log
2018-11-13T13:40:51.040-0700 INFO log/harvester.go:228 Harvester started for file: c:\queued\dnnnsm6_2018-11-11_11-45-54.log
2018-11-13T13:45:09.719-0700 INFO [monitoring] log/log.go:124 Non-zero metrics in the last 30s {"monitoring": {"metrics": {"beat":{"cpu":{"system":{"ticks":1796,"time":{"ms":125}},"total":{"ticks":2342,"time":{"ms":125},"value":2342},"user":{"ticks":546}},"info":{"ephemeral_id":"38d47951-bccf-48f3-a46b-6de80d791474","uptime":{"ms":271970}},"memstats":{"gc_next":6127920,"memory_alloc":4371592,"memory_total":27829632,"rss":8192}},"filebeat":{"harvester":{"open_files":42,"running":42}},"libbeat":{"config":{"module":{"running":0}},"pipeline":{"clients":1,"events":{"active":0}}},"registrar":{"states":{"current":83}}}}}
2018-11-13T13:45:39.690-0700 INFO [monitoring] log/log.go:124 Non-zero metrics in the last 30s {"monitoring": {"metrics": {"beat":{"cpu":{"system":{"ticks":1953,"time":{"ms":157}},"total":{"ticks":2531,"time":{"ms":189},"value":2531},"user":{"ticks":578,"time":{"ms":32}}},"info":{"ephemeral_id":"38d47951-bccf-48f3-a46b-6de80d791474","uptime":{"ms":301941}},"memstats":{"gc_next":6375184,"memory_alloc":3822384,"memory_total":29787472,"rss":12288}},"filebeat":{"harvester":{"open_files":42,"running":42}},"libbeat":{"config":{"module":{"running":0}},"pipeline":{"clients":1,"events":{"active":0}}},"registrar":{"states":{"current":83}}}}}
2018-11-13T13:45:56.259-0700 INFO log/harvester.go:253 File is inactive: c:\queued\dnnnsm6_2018-11-10_12-54-55.log. Closing because close_inactive of 5m0s reached.
2018-11-13T13:45:56.262-0700 INFO log/harvester.go:253 File is inactive: c:\queued\dnnnsm6_2018-11-10_12-56-55.log. Closing because close_inactive of 5m0s reached.
2018-11-13T13:45:56.503-0700 INFO log/harvester.go:253 File is inactive: c:\queued\dnnnsm6_2018-11-11_11-45-54.log. Closing because close_inactive of 5m0s reached.
```
This sequence of harvesters starting for the files, the periodic monitoring metrics, and the files being closed for inactivity repeats throughout the log, yet no index ever appears in Kibana or Elasticsearch.
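If it helps narrow things down, I can stop the service and re-run Filebeat in the foreground with publish debugging enabled to see whether any events are actually being sent to the output (this is my understanding of the -e and -d flags; please correct me if there is a better way):

```
# Stop the Filebeat service first, then run in the foreground from the Filebeat home directory
cd C:\elk-6.3.2\filebeat
.\filebeat.exe -e -d "publish" -c .\filebeat.yml
```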
What am I missing here? Please help!
Thanks!