NGINX harvester finds and loads log files, but no publish output

I've scoured the internet for the next step, but I'm just stumped: I can't get my NGINX logs into my Elastic Cloud-hosted Kibana instance. I'm running Ubuntu with the deb install of Filebeat (no Docker for Filebeat itself), all install locations are the defaults, and /var/log/filebeat shows no issues.

I have configured Filebeat to monitor System, NGINX, and Docker.
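
For context, the modules are enabled through files under modules.d. A stock 6.x /etc/filebeat/modules.d/nginx.yml looks roughly like this, with the log paths falling back to the OS defaults when var.paths is left unset:

- module: nginx
  # Access logs
  access:
    enabled: true
    # Uncomment to override the OS-default paths:
    #var.paths: ["/var/log/nginx/access.log*"]

  # Error logs
  error:
    enabled: true
    #var.paths: ["/var/log/nginx/error.log*"]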

Docker and system logs are coming through just fine and show up in the Discover panel under filebeat-*, but NGINX searches return 0 records.

Tailing the Filebeat logs I can see it successfully connect to Elastic Cloud, find the access and error logs, and attach harvesters, but watching the publish events, the NGINX data never appears.

Example access log entry from NGINX:
38.11.23.212 - - [04/May/2018:14:17:35 +0100] "GET /images/icons/favicon-32x32.png?v=2 HTTP/1.1" 200 1871 "https://www.mysite.com/en" "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/55.0.2883.0 Safari/537.36"

Partial output of Filebeat Service log (/var/log/filebeat/filebeat) upon restart:

2018-05-04T14:34:53.313+0100    INFO    log/prospector.go:111   Configured paths: [/var/log/messages* /var/log/syslog*]
2018-05-04T14:34:53.313+0100    INFO    cfgfile/reload.go:258   Starting 2 runners ...
2018-05-04T14:34:53.313+0100    INFO    elasticsearch/client.go:145     Elasticsearch url: https://3f9eb2f0a10df75d64fe38ea87.us-east-1.aws.found.io:443
2018-05-04T14:34:57.279+0100    INFO    elasticsearch/client.go:690     Connected to Elasticsearch version 6.2.4
2018-05-04T14:34:57.579+0100    INFO    elasticsearch/client.go:145     Elasticsearch url: https://3f9eb2f0a10df75d64fe38ea87.us-east-1.aws.found.io:443
2018-05-04T14:34:57.961+0100    INFO    elasticsearch/client.go:690     Connected to Elasticsearch version 6.2.4
2018-05-04T14:34:58.161+0100    INFO    cfgfile/reload.go:219   Loading of config files completed.
2018-05-04T14:34:58.162+0100    INFO    log/harvester.go:216    Harvester started for file: /var/log/syslog
2018-05-04T14:35:08.162+0100    INFO    log/harvester.go:216    Harvester started for file: /var/log/auth.log
2018-05-04T14:35:23.263+0100    INFO    [monitoring]    log/log.go:124  Non-zero metrics in the last 30s        {"monitoring": {"metrics": {"beat":{"cpu":{"system":{"ticks":40,"time":44},"total":{"ticks":250,"time":260,"value":250},"user":{"ticks":210,"time":216}},"info":{"ephemeral_id":"96a6c6c2-f447-467c-8555-2f5cf4191","uptime":{"ms":30016}},"memstats":{"gc_next":5751376,"memory_alloc":2941032,"memory_total":19559744,"rss":24596480}},"filebeat":{"events":{"added":31,"done":31},"harvester":{"open_files":2,"running":2,"started":2}},"libbeat":{"config":{"module":{"running":2,"starts":2},"reloads":1},"output":{"type":"elasticsearch"},"pipeline":{"clients":9,"events":{"active":0,"filtered":31,"total":31}}},"registrar":{"states":{"current":14,"update":31},"writes":31},"system":{"cpu":{"cores":8},"load":{"1":0,"15":0.07,"5":0.05,"norm":{"1":0,"15":0.0088,"5":0.0063}}}}}}
2018-05-04T14:35:27.582+0100    INFO    log/harvester.go:216    Harvester started for file: /var/log/nginx/access.log
2018-05-04T14:35:33.276+0100    INFO    log/harvester.go:216    Harvester started for file: /var/lib/docker/containers/c05c69485dc9c1b71a4f4ea608c62a012c99c3d52a75ceee3e7b70cf2a478/c05ccfc69485dc9c1b71a4f4ea608c62a012c99c3d52ceee3e7b70cf2a478-json.log
2018-05-04T14:35:34.688+0100    INFO    elasticsearch/client.go:690     Connected to Elasticsearch version 6.2.4
2018-05-04T14:35:34.787+0100    INFO    template/load.go:73     Template already exists and will not be overwritten.
2018-05-04T14:35:43.276+0100    INFO    log/harvester.go:216    Harvester started for file: /var/lib/docker/containers/87812af90758ac12805909b5c929160586039ec4eb3c03d870c34f61a1b/878122c6f8af90758ac12805909b5c929160039ec4eb3c03d870c34f61a1b-json.log
2018-05-04T14:35:53.263+0100    INFO    [monitoring]    log/log.go:124  Non-zero metrics in the last 30s        {"monitoring": {"metrics": {"beat":{"cpu":{"system":{"ticks":70,"time":80},"total":{"ticks":390,"time":400,"value":390},"user":{"ticks":320,"time":320}},"info":{"ephemeral_id":"96a6c6c2-f447-467c-8555-2f5cf4191","uptime":{"ms":60017}},"memstats":{"gc_next":7249808,"memory_alloc":6035304,"memory_total":25774824,"rss":1314816}},"filebeat":{"events":{"active":20,"added":141,"done":121},"harvester":{"open_files":5,"running":5,"started":3}},"libbeat":{"config":{"module":{"running":2}},"output":{"events":{"acked":50,"active":10,"batches":5,"total":60},"read":{"bytes":8148},"write":{"bytes":126024}},"pipeline":{"clients":9,"events":{"active":20,"filtered":71,"published":70,"retry":3,"total":141},"queue":{"acked":50}}},"registrar":{"states":{"current":14,"update":121},"writes":75},"system":{"load":{"1":0.11,"15":0.08,"5":0.08,"norm":{"1":0.0138,"15":0.01,"5":0.01}}}}}}

My config:

filebeat.prospectors:

- type: log
  # Disabled as using the new docker module
  # Change to true to enable this prospector configuration.
  enabled: false

  # Paths that should be crawled and fetched. Glob based paths.
  paths:
    - /var/lib/docker/containers/*/*.log

- type: docker
  containers.ids:
    - '*'
  json.keys_under_root: true
  json.add_error_key: true
  json.message_key: message
processors:
  - add_docker_metadata: ~

filebeat.config.modules:
  # Glob pattern for configuration loading
  path: ${path.config}/modules.d/*.yml

  # Set to true to enable config reloading
  reload.enabled: false

  # Period on which files under path should be checked for changes
  #reload.period: 10s

setup.template.settings:
  index.number_of_shards: 3

I have also tried hardcoding the module glob to "/etc/filebeat/modules.d/*.yml" in case the path.config param was wrong (though checking my service definition in init.d, it looked fine).
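
That is, something like:

filebeat.config.modules:
  path: /etc/filebeat/modules.d/*.yml
  reload.enabled: false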

So my questions are:

  • How can I test this further?
  • Where should I be looking for errors?
  • Am I doing the right thing posting logs directly to Kibana instead of Logstash (which I've yet to look into)?

Appreciate any help you can give me.

From the Kibana Dev Tools -> Console, what does this query return?

GET filebeat-*/_search?q=fileset.module:nginx&size=0

The first thing I would do with Filebeat is enable debug logging to see if it provides any more information. It will be quite verbose, but it should show just about everything that's happening. In your config you can add this and then restart:

logging.level: debug
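
If that turns out to be too verbose, the debug output can usually be narrowed with selectors as well (the names below are common subsystem selectors, but treat this as a sketch rather than an exhaustive list):

# Optional: narrow debug output to specific subsystems
logging.selectors: ["prospector", "harvester", "publish"]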

You are sending them directly to Elasticsearch, not Kibana, and this is completely normal. It sounds like you don't need Logstash at this point.
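
For reference, "direct to Elasticsearch" just means the output section of filebeat.yml points at the cluster, e.g. via the Elastic Cloud shorthand (placeholder values shown, not real credentials):

cloud.id: "my-deployment:bG9uZy1iYXNlNjQtc3RyaW5n"
cloud.auth: "elastic:changeme"

# ...which is shorthand for configuring the output directly:
#output.elasticsearch:
#  hosts: ["https://<cluster-id>.us-east-1.aws.found.io:443"]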

Thanks for the quick reply! I noticed that the query you gave me returned a very small number of records, which led me to my current thinking: I've introduced a config issue somewhere.

Setting the level to debug, I see a few messages like this:
2018-05-04T17:31:26.526+0100 DEBUG [add_docker_metadata] add_docker_metadata/add_docker_metadata.go:111 Error while extracting container ID from source path: index is out of range for field 'source'

So I disabled the docker prospector completely, and the nginx logs came flying in! Success!

I then re-enabled the basic docker prospector and everything still worked, but once I enabled the processor the errors started showing up again:

processors:
  - add_docker_metadata: ~

So the processor must have been parsing the nginx module's events too, failing to find the container ID it expects in the source path (which is what the debug error says), and stopping those events from being published. I did the following and now we're cooking on gas :smiley:

processors:
  - add_docker_metadata:
      when:
        contains:
          prospector.type: "docker"
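
An equals condition ought to work just as well here, since prospector.type is an exact value; contains is simply what I tried first:

processors:
  - add_docker_metadata:
      when:
        equals:
          prospector.type: "docker"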

The config that I copied verbatim (without really understanding it) to set up the docker metadata processor came from here: https://www.elastic.co/blog/brewing-in-beats-add-docker-metadata-to-logs

So, lesson learned, but I find it strange that a processor specifically for Docker would attempt to parse anything other than events that came from the docker prospector.

Thank you for your quick assistance, you were crucial to solving this!

I think it's a result of where you defined the processor in your config file. The processor was defined at the global level, which causes it to be applied to all outgoing events.

By indenting it two spaces, it becomes a child of the docker prospector and will only be applied to events generated by that prospector:

- type: docker
  containers.ids:
    - '*'
  json.keys_under_root: true
  json.add_error_key: true
  json.message_key: message
  processors:                      # <- Indented this line.
  - add_docker_metadata: ~

Thanks Andrew, you were right. I should have tried that first; I just didn't spot that option in the docs.

I will submit a pull request to the docs in case it helps someone in the future.
