Hi,
I'm seeing the following error a few seconds after starting Filebeat, and I can't figure out why.
...
2019-08-22T12:08:13.494Z INFO instance/beat.go:280 Setup Beat: filebeat; Version: 6.8.1
2019-08-22T12:08:13.498Z INFO [publisher] pipeline/module.go:110 Beat name: guia-app
2019-08-22T12:08:13.498Z INFO instance/beat.go:402 filebeat start running.
2019-08-22T12:08:13.498Z INFO registrar/registrar.go:134 Loading registrar data from /var/lib/filebeat/registry
2019-08-22T12:08:13.499Z INFO [monitoring] log/log.go:117 Starting metrics logging every 30s
2019-08-22T12:08:13.500Z INFO registrar/registrar.go:141 States Loaded from registrar: 1
2019-08-22T12:08:13.500Z WARN beater/filebeat.go:367 Filebeat is unable to load the Ingest Node pipelines for the configured modules because the Elasticsearch output is not configured/enabled. If you have already loaded the Ingest Node pipelines or are using Logstash pipelines, you can ignore this warning.
2019-08-22T12:08:13.500Z INFO crawler/crawler.go:72 Loading Inputs: 1
2019-08-22T12:08:13.503Z INFO log/input.go:148 Configured paths: [/guia_instances/homologacao/*.log]
2019-08-22T12:08:13.503Z INFO input/input.go:114 Starting input of type: log; ID: 9579166981946736038
2019-08-22T12:08:13.505Z INFO crawler/crawler.go:106 Loading and starting Inputs completed. Enabled inputs: 1
2019-08-22T12:08:13.506Z INFO add_cloud_metadata/add_cloud_metadata.go:345 add_cloud_metadata: hosting provider type detected as ec2, metadata={"availability_zone":"sa-east-1a","instance_id":"i-6db0bcee","machine_type":"r4.xlarge","provider":"ec2","region":"sa-east-1"}
2019-08-22T12:08:13.505Z INFO cfgfile/reload.go:150 Config reloader started
2019-08-22T12:08:13.508Z INFO cfgfile/reload.go:205 Loading of config files completed.
2019-08-22T12:08:23.507Z INFO log/harvester.go:255 Harvester started for file: /guia_instances/homologacao/server.log
fatal error: runtime: out of memory
runtime stack:...
And this is my current filebeat.yml file:
filebeat.inputs:
- type: log
  enabled: true
  tail_files: true
  paths:
    - /instances/testing/*.log
  exclude_lines: ['\"severity\":\"DEBUG\"']

filebeat.config.modules:
  path: ${path.config}/modules.d/*.yml
  reload.enabled: false

setup.template.settings:
  index.number_of_shards: 3

setup.kibana:

output.logstash:
  hosts: ["elk-server:5443"]
  ssl.certificate_authorities: ["/etc/filebeat/logstash-forwarder.crt"]

processors:
- add_host_metadata: ~
- add_cloud_metadata: ~
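Note that I have not set any of the memory-related options, so the defaults apply. As far as I understand, the relevant knobs would be something like the following (the values shown are what I believe the documented defaults to be, not settings from my config):

```yaml
# Internal event queue size (default, as I understand it)
queue.mem:
  events: 4096

# Per-input limits on the log input (defaults, as I understand them)
# max_bytes: maximum bytes read from a single line before truncation
# harvester_buffer_size: read buffer per harvester
max_bytes: 10485760
harvester_buffer_size: 16384
```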
Some additional information:
- Version: filebeat version 6.8.1 (amd64), libbeat 6.8.1
- OS: Ubuntu 14.04.5
- Log file:
  - No multi-line entries
  - The longest lines are about 0.5 MB
  - The file is currently 3.6 GB; it is truncated every once in a while, or when the application restarts
- I omitted the full runtime stack trace, but I can post it here if needed
- I tried adding more patterns to the exclude_lines option, to no avail
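For illustration, the exclude_lines variations I tried looked roughly like this (the extra severity value here is a hypothetical example, not an exact copy of my config):

```yaml
# Attempted variation: exclude more severities, not just DEBUG
exclude_lines: ['\"severity\":\"DEBUG\"', '\"severity\":\"TRACE\"']
```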
This is the free memory that I usually have (in MB):
             total       used       free     shared    buffers     cached
Mem:         30664      24993       5670          1        122        409
-/+ buffers/cache:      24461       6202
Swap:         4095       4054         41
Is there anything I'm doing wrong, or is roughly 5 GB of available memory simply not enough?
Thanks.