I've scoured the internet for the next step, but I'm stumped: I can't get my NGINX logs into my Elastic Cloud hosted Kibana instance. I'm running Ubuntu with the deb install of Filebeat (no Docker for Filebeat itself; all install locations are default), and /var/log/filebeat shows no errors.
I have configured Filebeat to monitor: System, NGINX and Docker.
Docker and system entries are coming through just fine; they show up in the Discover panel under filebeat-*, but any nginx search returns 0 records.
Tailing the Filebeat log I can see it successfully connect to Elastic Cloud, find the access and error logs, and attach a harvester to each, but watching the publish events, the nginx entries never appear.
Example access log entry from NGINX:
38.11.23.212 - - [04/May/2018:14:17:35 +0100] "GET /images/icons/favicon-32x32.png?v=2 HTTP/1.1" 200 1871 "https://www.mysite.com/en" "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/55.0.2883.0 Safari/537.36"
Partial output of the Filebeat service log (/var/log/filebeat/filebeat) after a restart:
2018-05-04T14:34:53.313+0100 INFO log/prospector.go:111 Configured paths: [/var/log/messages* /var/log/syslog*]
2018-05-04T14:34:53.313+0100 INFO cfgfile/reload.go:258 Starting 2 runners ...
2018-05-04T14:34:53.313+0100 INFO elasticsearch/client.go:145 Elasticsearch url: https://3f9eb2f0a10df75d64fe38ea87.us-east-1.aws.found.io:443
2018-05-04T14:34:57.279+0100 INFO elasticsearch/client.go:690 Connected to Elasticsearch version 6.2.4
2018-05-04T14:34:57.579+0100 INFO elasticsearch/client.go:145 Elasticsearch url: https://3f9eb2f0a10df75d64fe38ea87.us-east-1.aws.found.io:443
2018-05-04T14:34:57.961+0100 INFO elasticsearch/client.go:690 Connected to Elasticsearch version 6.2.4
2018-05-04T14:34:58.161+0100 INFO cfgfile/reload.go:219 Loading of config files completed.
2018-05-04T14:34:58.162+0100 INFO log/harvester.go:216 Harvester started for file: /var/log/syslog
2018-05-04T14:35:08.162+0100 INFO log/harvester.go:216 Harvester started for file: /var/log/auth.log
2018-05-04T14:35:23.263+0100 INFO [monitoring] log/log.go:124 Non-zero metrics in the last 30s {"monitoring": {"metrics": {"beat":{"cpu":{"system":{"ticks":40,"time":44},"total":{"ticks":250,"time":260,"value":250},"user":{"ticks":210,"time":216}},"info":{"ephemeral_id":"96a6c6c2-f447-467c-8555-2f5cf4191","uptime":{"ms":30016}},"memstats":{"gc_next":5751376,"memory_alloc":2941032,"memory_total":19559744,"rss":24596480}},"filebeat":{"events":{"added":31,"done":31},"harvester":{"open_files":2,"running":2,"started":2}},"libbeat":{"config":{"module":{"running":2,"starts":2},"reloads":1},"output":{"type":"elasticsearch"},"pipeline":{"clients":9,"events":{"active":0,"filtered":31,"total":31}}},"registrar":{"states":{"current":14,"update":31},"writes":31},"system":{"cpu":{"cores":8},"load":{"1":0,"15":0.07,"5":0.05,"norm":{"1":0,"15":0.0088,"5":0.0063}}}}}}
2018-05-04T14:35:27.582+0100 INFO log/harvester.go:216 Harvester started for file: /var/log/nginx/access.log
2018-05-04T14:35:33.276+0100 INFO log/harvester.go:216 Harvester started for file: /var/lib/docker/containers/c05c69485dc9c1b71a4f4ea608c62a012c99c3d52a75ceee3e7b70cf2a478/c05ccfc69485dc9c1b71a4f4ea608c62a012c99c3d52ceee3e7b70cf2a478-json.log
2018-05-04T14:35:34.688+0100 INFO elasticsearch/client.go:690 Connected to Elasticsearch version 6.2.4
2018-05-04T14:35:34.787+0100 INFO template/load.go:73 Template already exists and will not be overwritten.
2018-05-04T14:35:43.276+0100 INFO log/harvester.go:216 Harvester started for file: /var/lib/docker/containers/87812af90758ac12805909b5c929160586039ec4eb3c03d870c34f61a1b/878122c6f8af90758ac12805909b5c929160039ec4eb3c03d870c34f61a1b-json.log
2018-05-04T14:35:53.263+0100 INFO [monitoring] log/log.go:124 Non-zero metrics in the last 30s {"monitoring": {"metrics": {"beat":{"cpu":{"system":{"ticks":70,"time":80},"total":{"ticks":390,"time":400,"value":390},"user":{"ticks":320,"time":320}},"info":{"ephemeral_id":"96a6c6c2-f447-467c-8555-2f5cf4191","uptime":{"ms":60017}},"memstats":{"gc_next":7249808,"memory_alloc":6035304,"memory_total":25774824,"rss":1314816}},"filebeat":{"events":{"active":20,"added":141,"done":121},"harvester":{"open_files":5,"running":5,"started":3}},"libbeat":{"config":{"module":{"running":2}},"output":{"events":{"acked":50,"active":10,"batches":5,"total":60},"read":{"bytes":8148},"write":{"bytes":126024}},"pipeline":{"clients":9,"events":{"active":20,"filtered":71,"published":70,"retry":3,"total":141},"queue":{"acked":50}}},"registrar":{"states":{"current":14,"update":121},"writes":75},"system":{"load":{"1":0.11,"15":0.08,"5":0.08,"norm":{"1":0.0138,"15":0.01,"5":0.01}}}}}}
My config:
filebeat.prospectors:
- type: log
  # Disabled as using the new docker module
  # Change to true to enable this prospector configuration.
  enabled: false
  # Paths that should be crawled and fetched. Glob based paths.
  paths:
    - /var/lib/docker/containers/*/*.log

- type: docker
  containers.ids:
    - '*'
  json.keys_under_root: true
  json.add_error_key: true
  json.message_key: message

processors:
- add_docker_metadata: ~

filebeat.config.modules:
  # Glob pattern for configuration loading
  path: ${path.config}/modules.d/*.yml
  # Set to true to enable config reloading
  reload.enabled: false
  # Period on which files under path should be checked for changes
  #reload.period: 10s

setup.template.settings:
  index.number_of_shards: 3
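In case it matters, modules.d/nginx.yml hasn't been touched from what the deb installed; with var.paths left commented out the module should autodetect the default Ubuntu log locations. This is the stock 6.2 file as I understand it (reproduced as an assumption, so treat it as approximate):

```yaml
- module: nginx
  # Access logs
  access:
    enabled: true
    # Set custom paths for the log files. If left commented out,
    # Filebeat will autodetect them based on the OS.
    #var.paths:

  # Error logs
  error:
    enabled: true
    #var.paths:
```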
I have also tried hardcoding the module glob to "/etc/filebeat/modules.d/*.yml" in case the path.config param was wrong (though, checking my service definition in init.d, it looked fine).
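Beyond hardcoding that path, the only other diagnostics I know of are Filebeat's built-in test subcommands (a sketch, assuming the deb's default config path and a 6.x binary; the guard is only there so it degrades gracefully if filebeat isn't on the PATH):

```shell
# Run Filebeat's built-in diagnostics if the binary is present
if command -v filebeat >/dev/null 2>&1; then
  filebeat test config -c /etc/filebeat/filebeat.yml   # does the YAML parse?
  filebeat test output -c /etc/filebeat/filebeat.yml   # can we reach Elastic Cloud?
  filebeat modules list                                 # is nginx listed under "Enabled:"?
else
  echo "filebeat not on PATH"
fi
```

Running the binary in the foreground with `filebeat -e -d "publish"` should also print every event as it is published, which I assume would show whether the nginx lines are being read but dropped somewhere.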
So my questions are:
- How can I test this further?
- Where should I be looking for errors?
- Am I doing the right thing sending logs directly to Elasticsearch (with Kibana just visualising them) instead of going through Logstash (which I've yet to look into)?
Appreciate any help you can give me.