Filebeat crashing with out of memory

Hi, I'm trying to set up Beats for a new application, and Filebeat crashes after a few minutes of running, before any logs are even sent to Kafka.

The log file I'm reading from is pretty huge (32GB), and I think the problem is that Filebeat is attempting to load the whole file into memory. I can see the memory footprint of the filebeat process growing until it fails with:

fatal error: runtime: out of memory

runtime stack:
runtime.throw(0x16eacd5, 0x16)
	/usr/local/go/src/runtime/panic.go:616 +0x81
runtime.sysMap(0xcc696f0000, 0x223f80000, 0x20f4d00, 0x210e078)
	/usr/local/go/src/runtime/mem_linux.go:216 +0x20a
runtime.(*mheap).sysAlloc(0x20f4500, 0x223f80000, 0x0)
	/usr/local/go/src/runtime/malloc.go:470 +0xd4
runtime.(*mheap).grow(0x20f4500, 0x111fbe, 0x0)
	/usr/local/go/src/runtime/mheap.go:907 +0x60
runtime.(*mheap).allocSpanLocked(0x20f4500, 0x111fbe, 0x210e088, 0x7f8ee3ffed48)
	/usr/local/go/src/runtime/mheap.go:820 +0x301
runtime.(*mheap).alloc_m(0x20f4500, 0x111fbe, 0x9e0101, 0xc41d3fb7ff)
	/usr/local/go/src/runtime/mheap.go:686 +0x118
runtime.(*mheap).alloc.func1()
	/usr/local/go/src/runtime/mheap.go:753 +0x4d
runtime.(*mheap).alloc(0x20f4500, 0x111fbe, 0x7f8ee3000101, 0x7f8ee3ffee10)
	/usr/local/go/src/runtime/mheap.go:752 +0x8a
runtime.largeAlloc(0x223f7c000, 0x7f8ee3ff0100, 0x7ffc40dbda67)
	/usr/local/go/src/runtime/malloc.go:826 +0x94
runtime.mallocgc.func1()
	/usr/local/go/src/runtime/malloc.go:721 +0x46
runtime.systemstack(0x0)
	/usr/local/go/src/runtime/asm_amd64.s:409 +0x79
runtime.mstart()
	/usr/local/go/src/runtime/proc.go:1175

While I realise this is being caused by a huge logfile, does Beats have any intelligent handling for large files? I.e., read a 100MB buffer, send the logs, discard the buffer, then read the next set of lines?

@damianconnolly Filebeat can handle files of any size, and it does exactly what you describe: it reads the file in small chunks.
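
For reference, the log input has settings that bound how much memory a single read and a single event can use. These are real Filebeat options; the values shown are the documented defaults, and the path is just for illustration:

filebeat.inputs:
- type: log
  paths:
    - /var/log/app/app.log   # hypothetical path, for illustration
  # Per-read buffer each harvester uses when fetching the file (default 16 KiB).
  harvester_buffer_size: 16384
  # Upper bound on a single event; bytes beyond this are discarded (default 10 MiB).
  max_bytes: 10485760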

So let's try to debug this. Can you answer the following questions:

  • Can you share your configuration?
  • Are you using multiline? (See the example below for what that looks like.)
  • Are the events in the log file delimited by newlines, or is the file one single continuous line?
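
If you are using multiline, the configuration usually looks something like the sketch below; the pattern here is only an example, and yours would depend on your log format:

filebeat.inputs:
- type: log
  paths:
    - /var/log/app/app.log   # hypothetical path, for illustration
  # Treat any line that does NOT start with '[' as a continuation of the previous event.
  multiline.pattern: '^\['
  multiline.negate: true
  multiline.match: after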

Hi Pier, thanks for the quick response.

I don't use multiline (each line is a separate event), and the lines are around 200 characters long each.

I'm away from my laptop now, but I'll post my Filebeat config as soon as I'm back online.

Hi, here's my filebeat.yml:

filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /var/log/tomcat8/tc1/ppstatistics.log
  exclude_lines: ['^DEBUG']
  fields:
    type: gcprice
    env: prd
  fields_under_root: true

filebeat.config.modules:
  path: ${path.config}/modules.d/*.yml
  reload.enabled: false

logging.level: info
logging.to_files: true
logging.files:
  path: /var/log/filebeat
  name: filebeat.log
  keepfiles: 7
  permissions: 0644
  rotateeverybytes: 10485760

output.kafka:
  compression: gzip
  hosts:
    - lonstct01app8:9092
  max_message_bytes: 9999999
  topic: gcprice
  version: 0.10.0
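
One knob worth knowing about while debugging memory growth: events waiting on the Kafka output are buffered in Filebeat's internal memory queue, and that queue can be capped. The queue.mem settings below are real options; the values are illustrative, not a recommendation:

queue.mem:
  # Maximum number of events buffered in memory (default 4096).
  events: 2048
  # Publish a batch once this many events are buffered...
  flush.min_events: 512
  # ...or after this much time has passed, whichever comes first.
  flush.timeout: 1s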

Hi, I'm still getting this problem. The Filebeat service rarely lives for more than a few minutes, and it never successfully starts pushing events to Kafka.


This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.