Filebeat 6.3.0 is taking too much memory on Windows 2008 R2


I've set up a Filebeat service (version 6.3.0) running on Windows 2008 R2.
I've found that memory usage keeps going up.
Does Filebeat have a potential memory leak?

The log files are rotated. My configuration:


	events: 8192
	flush.min_events: 2048
	flush.timeout: 2s

	scan_frequency: 1
	close_eof: true
	harvester_buffer_size: 163840

	hosts: ["", "", ""]
	topic: 'FilebeatTest'
	reachable_only: false
	required_acks: 1
	compression: none
	max_message_bytes: 1000000
	work: 4
	bulk_max_size: 20480
	channel_buffer_size: 256000

Hello @Junble,

I did a quick dive into the code to see where these values come from.

Process Total is the cumulative memory allocated over time (Go's TotalAlloc):

	// TotalAlloc is cumulative bytes allocated for heap objects.
	// TotalAlloc increases as heap objects are allocated, but
	// unlike Alloc and HeapAlloc, it does not decrease when
	// objects are freed.

Note that it does not decrease when objects are freed.

Active is Alloc:

	// Alloc is bytes of allocated heap objects.
	// This is the same as HeapAlloc (see below).
	Alloc uint64

If you use a tool like Process Explorer, you will get a bit more information about the actual memory use of the process.

So I don't think there is a leak, but I do believe the value of "Process total" is a bit confusing.

Hello @pierhugues,

After a day of uptime, filebeat.exe:


@Junble This is a lot of physical memory.

Does the process crash at some point, or does the memory keep growing?

I am looking at your configuration and trying to understand where the memory goes.
I see several options that diverge from the default configuration and could increase memory usage. These options can improve performance, but at the price of higher memory usage.

I would like to see the memory usage with a default configuration so we have something to compare with.

events: this is the maximum number of events held in memory; the default is 4096, your value is 8192.

harvester_buffer_size: this is the read buffer size of the harvester; the default is 16384, your value is 163840.

output.kafka.bulk_max_size: this is the maximum bulk size; the default is 2048, your value is 20480.

channel_buffer_size: the number of messages buffered in the output pipeline per Kafka broker. The default is 256, your value is 256000.

channel_buffer_size in particular is a very high value compared to the default. Can you revert the options above to their default values and report the memory usage?
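For reference, a sketch of those options at their default values in filebeat.yml (the option paths are assumed from the 6.x configuration reference; keep everything else as it is):

```yaml
queue.mem:
  events: 4096                   # was 8192

filebeat.inputs:
- type: log
  harvester_buffer_size: 16384   # was 163840

output.kafka:
  bulk_max_size: 2048            # was 20480
  channel_buffer_size: 256       # was 256000
```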

Hi @pierhugues,
Thank you!
Sorry for my late reply; I have been sick these past few days.
There was no crash, but all of the memory was in use!

I'm going to run the other test as you suggested!

Hi, @pierhugues

With the default values, the result is the same:



pprof results:

With both inuse_space and alloc_objects, memory keeps going up!

1: go tool pprof --inuse_space

2: go tool pprof --alloc_objects

Hello @Junble, can you start Filebeat using the built-in HTTP profiler? The command will look like this:

	./filebeat -httpprof localhost:6060

Let it run for a good amount of time, then use your browser to go to the URL http://localhost:6060/debug/pprof/heap and attach the heap dump here. I will be able to dig into it.

We have an open PR for a goroutine leak, but I don't know if it's the same problem as yours.

After talking with @adrisr, it looks like it is this issue.

It should be fixed in the next release.

Hello @pierhugues,
Thank you very much!
Please pass my thanks along to the team members!
All the best; looking forward to the next release.

My test scenario: the producer keeps creating files.
One harvester is started for each file, so there are a lot of harvesters.

Here is what I did. I started Filebeat with the command: ./filebeat -httpprof localhost:6060
Then I used my browser to go to the URL http://localhost:6060/debug/pprof/heap,
but it showed unreadable characters. What do I need to install?

After a day of uptime, the result:

Hello @pierhugues @adrisr
After 31h of uptime, I got it. The heap.gz download URL:

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.