Filebeat stops harvesting nginx logs [Solved]

Hi everyone,

I'm trying to use the ELK stack to analyze the Nginx access.log, but when I run Filebeat I only get a few hours of data before Filebeat enters what looks like an infinite loop, repeatedly logging:
"Harvester for file is still running ..."

Filebeat version

$ filebeat version

filebeat version 6.6.0 (amd64), libbeat 6.6.0

Operating System

$ cat /etc/issue

Ubuntu 16.04.4 LTS

Configuration

$ sudo cat /etc/filebeat/filebeat.yml

filebeat.inputs:
- type: log
  ignore_older: 48
  clean_inactive: 72
  clean_removed: true

  enabled: true

  paths:
    - /var/log/nginx/access.log

filebeat.config.modules:
  path: ${path.config}/modules.d/*.yml
  reload.enabled: false

setup.template.settings:
  index.number_of_shards: 3

setup.dashboards.enabled: false

setup.kibana:
  host: "10.10.0.134:8080"

output.logstash:
  hosts: ["10.10.0.128:5044"]

processors:
  - add_host_metadata: ~
  - add_cloud_metadata: ~
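
(Side note, in case it helps while debugging: the configuration file and the connection to the Logstash output can be sanity-checked with Filebeat's built-in test commands; paths as above, adjust to your install.)

$ sudo filebeat test config -c /etc/filebeat/filebeat.yml
$ sudo filebeat test output -c /etc/filebeat/filebeat.yml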

$ sudo cat /etc/filebeat/modules.d/nginx.yml

- module: nginx
  access:
    enabled: true
  error:
    enabled: false
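
(To double-check which modules and filesets are actually enabled, Filebeat can list them; on a standard package install this reads the modules.d directory shown above.)

$ sudo filebeat modules list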

Nginx logrotate

$ sudo cat /etc/logrotate.d/nginx

/var/log/nginx/*.log {
  daily
  missingok
  rotate 14
  compress
  delaycompress
  notifempty
  create 0640 www-data adm
  sharedscripts
  prerotate
    if [ -d /etc/logrotate.d/httpd-prerotate ]; then \
      run-parts /etc/logrotate.d/httpd-prerotate; \
    fi \
  endscript
  postrotate
    invoke-rc.d nginx rotate >/dev/null 2>&1
  endscript
}
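
(Since the rotated access.log.1 also appears in the harvester output below, it can help to simulate a rotation run; the -d flag is a dry run and does not rotate or change anything.)

$ sudo logrotate -d /etc/logrotate.d/nginx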

Debug info

$ sudo filebeat -e -d "*" -c /etc/filebeat/filebeat.yml -path.home /usr/share/filebeat -path.config /etc/filebeat -path.data /var/lib/filebeat

...
    "containerized": false,
    "name": "ip-10-10-2-214",
    "architecture": "x86_64",
    "os": {
      "family": "debian",
      "name": "Ubuntu",
      "codename": "xenial",
      "platform": "ubuntu",
      "version": "16.04.4 LTS (Xenial Xerus)"
    }
  },
  "beat": {
    "name": "ip-10-10-2-214",
    "hostname": "ip-10-10-2-214",
    "version": "6.6.0"
  },
  "source": "/var/log/nginx/access.log",
  "offset": 529532
}
DEBUG	[input]	input/input.go:152	Run input
DEBUG	[input]	log/input.go:174	Start next scan
DEBUG	[input]	log/input.go:404	Check file for harvesting: /var/log/nginx/access.log
DEBUG	[input]	log/input.go:494	Update existing file for harvesting: /var/log/nginx/access.log, offset: 529645
DEBUG	[input]	log/input.go:546	Harvester for file is still running: /var/log/nginx/access.log
DEBUG	[input]	log/input.go:195	input states cleaned up. Before: 1, After: 1, Pending: 1
DEBUG	[input]	input/input.go:152	Run input
DEBUG	[input]	log/input.go:174	Start next scan
DEBUG	[input]	log/input.go:255	Exclude file: /var/log/nginx/access.log.10.gz
                                    ...
DEBUG	[input]	log/input.go:255	Exclude file: /var/log/nginx/access.log.9.gz
DEBUG	[input]	log/input.go:404	Check file for harvesting: /var/log/nginx/access.log.1
DEBUG	[input]	log/input.go:494	Update existing file for harvesting: /var/log/nginx/access.log.1, offset: 245557
DEBUG	[input]	log/input.go:546	Harvester for file is still running: /var/log/nginx/access.log.1
DEBUG	[input]	log/input.go:404	Check file for harvesting: /var/log/nginx/access.log
DEBUG	[input]	log/input.go:494	Update existing file for harvesting: /var/log/nginx/access.log, offset: 263515
DEBUG	[input]	log/input.go:546	Harvester for file is still running: /var/log/nginx/access.log
DEBUG	[input]	log/input.go:195	input states cleaned up. Before: 2, After: 2, Pending: 0
DEBUG	[input]	input/input.go:152	Run input
DEBUG	[input]	log/input.go:174	Start next scan
DEBUG	[input]	log/input.go:404	Check file for harvesting: /var/log/nginx/access.log
DEBUG	[input]	log/input.go:494	Update existing file for harvesting: /var/log/nginx/access.log, offset: 529645
DEBUG	[input]	log/input.go:546	Harvester for file is still running: /var/log/nginx/access.log
DEBUG	[input]	log/input.go:195	input states cleaned up. Before: 1, After: 1, Pending: 1
DEBUG	[input]	input/input.go:152	Run input
DEBUG	[input]	log/input.go:174	Start next scan
DEBUG	[input]	log/input.go:255	Exclude file: /var/log/nginx/access.log.10.gz
                                    ...
DEBUG	[input]	log/input.go:255	Exclude file: /var/log/nginx/access.log.9.gz
DEBUG	[input]	log/input.go:404	Check file for harvesting: /var/log/nginx/access.log
DEBUG	[input]	log/input.go:494	Update existing file for harvesting: /var/log/nginx/access.log, offset: 263515
DEBUG	[input]	log/input.go:546	Harvester for file is still running: /var/log/nginx/access.log
DEBUG	[input]	log/input.go:404	Check file for harvesting: /var/log/nginx/access.log.1
DEBUG	[input]	log/input.go:494	Update existing file for harvesting: /var/log/nginx/access.log.1, offset: 245557
DEBUG	[input]	log/input.go:546	Harvester for file is still running: /var/log/nginx/access.log.1
DEBUG	[input]	log/input.go:195	input states cleaned up. Before: 2, After: 2, Pending: 0
INFO	[monitoring]	log/log.go:144	Non-zero metrics in the last 30s	{"monitoring": {"metrics": {"beat":{"cpu":{"system":{"ticks":400,"time":{"ms":400}},"total":{"ticks":1010,"time":{"ms":1016},"value":1010},"user":{"ticks":610,"time":{"ms":616}}},"handles":{"limit":{"hard":1048576,"soft":1024},"open":9},"info":{"ephemeral_id":"0ad99bff-ef64-4b7a-b488-db0c3635912a","uptime":{"ms":30013}},"memstats":{"gc_next":36515312,"memory_alloc":29067496,"memory_total":82606960,"rss":59797504}},"filebeat":{"events":{"active":4120,"added":4125,"done":5},"harvester":{"open_files":3,"running":3,"started":3}},"libbeat":{"config":{"module":{"running":0},"reloads":1},"output":{"events":{"active":2104,"batches":3,"total":2104},"read":{"bytes":30},"type":"logstash","write":{"bytes":244874}},"pipeline":{"clients":3,"events":{"active":4118,"filtered":6,"published":4116,"retry":1025,"total":4124}}},"registrar":{"states":{"current":1,"update":5},"writes":{"success":5,"total":5}},"system":{"cpu":{"cores":16},"load":{"1":4.35,"15":4.28,"5":4.44,"norm":{"1":0.2719,"15":0.2675,"5":0.2775}}}}}}
DEBUG	[input]	input/input.go:152	Run input
...

So what happens is that I receive a few hours of data from the log,


then, in the Filebeat log, an endless loop starts in which these messages keep repeating:

DEBUG	[input]	log/input.go:546	Harvester for file is still running: /var/log/nginx/access.log
DEBUG	[input]	log/input.go:195	input states cleaned up. Before: 2, After: 2, Pending: 0

Any suggestions for solving this problem would be appreciated.
Thanks in advance
Lucio

Hi everyone,
I finally solved it, and I'm sharing the solution in the hope that it will be useful to others.
Reading the Filebeat docs, I realized that the problem could be on the Logstash side, so I also analyzed its log, where I found:

FORBIDDEN/12/index read-only / allow delete (api)]

So I found the answer in this topic:

I think my problem is low storage. Just check your storage first: when disk space runs low, Elasticsearch automatically switches the indices to read-only mode.
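
(A quick way to confirm this situation, assuming Elasticsearch is reachable on localhost:9200 from its own host, adjust host and port to your setup: check free disk space, then the per-node allocation stats, which include disk.used, disk.avail and disk.percent.)

$ df -h
$ curl -s 'http://localhost:9200/_cat/allocation?v'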

So I solved my problem by increasing the disk space on the Elasticsearch host and then removing the read-only block from the indices by running this in the Kibana Dev Tools console:

PUT _settings
{
  "index": {
    "blocks": {
      "read_only_allow_delete": "false"
    }
  }
}
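
(The same call can also be made with curl directly against Elasticsearch; localhost:9200 is assumed here. Setting the value to null, as below, is an alternative that removes the block setting entirely instead of pinning it to false; once it is gone, the second command should return nothing.)

$ curl -s -H 'Content-Type: application/json' -X PUT 'http://localhost:9200/_settings' -d '{"index.blocks.read_only_allow_delete": null}'
$ curl -s 'http://localhost:9200/_settings?pretty' | grep read_only_allow_delete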
