APM client drop queue if memory usage too big

I am using the npm package "elastic-apm-node" (APM agent) to log Koa server requests and MySQL queries.
The problem is that sometimes the APM server is unavailable.
When I run some big MySQL queries, the memory fills up quite quickly because the APM agent keeps all the logs in a queue.
Is there an option to drop the queue when memory usage gets too big?

By default the Node.js agent keeps transactions in an in-memory queue for up to 10 seconds before they are sent to the APM Server. If you're creating a lot of transactions in those 10 seconds, that can lead to a lot of objects being kept in memory, as you experienced.

There are two ways to configure when this queue is flushed and sent to the APM Server:

  • You can change the flushInterval config option to a lower number, e.g. 5 seconds
  • You can set a maximum limit on the number of transactions you wish to keep in the queue before flushing, using the maxQueueSize config option. By default there's no limit, but if you set it to, for instance, 100, then the queue will be flushed as soon as it contains 100 transactions, even if the flushInterval hasn't elapsed yet
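The two options above can be combined in the agent's start call. A minimal sketch (serviceName and serverUrl are placeholders; pick values that match your setup):

```javascript
// Combine both queue controls: whichever threshold is hit first
// triggers a flush to the APM Server.
require('elastic-apm-node').start({
	serviceName: 'my-app',
	serverUrl: 'http://localhost:8200',
	flushInterval: 5,   // flush every 5 seconds instead of the default 10
	maxQueueSize: 100   // ...or as soon as 100 transactions are queued
})
```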

While you're trying this, I'd also like to know more about the memory issues you're experiencing. That way we can make sure the agent is not consuming more memory than necessary.

  • Does your process run out of memory and crash?
  • Which command line arguments are you providing to the node runtime to control how much memory is available to the process?
  • How much memory does your application normally use, and how much does it consume with the agent enabled?
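To answer the last question, a quick way to compare runs with and without the agent is to snapshot the process's memory. A minimal sketch (the helper name is ours, not part of the agent API):

```javascript
// Snapshot the current process memory usage in MiB, using the
// built-in process.memoryUsage() API.
function memorySnapshot () {
	const toMiB = (bytes) => Math.round(bytes / 1024 / 1024)
	const { rss, heapUsed, heapTotal } = process.memoryUsage()
	return { rssMiB: toMiB(rss), heapUsedMiB: toMiB(heapUsed), heapTotalMiB: toMiB(heapTotal) }
}

// Log a snapshot once; call this periodically (e.g. via setInterval)
// to watch the queue grow while the APM Server is unreachable.
console.log(memorySnapshot())
```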


Today I did a few things:

  • I updated "elastic-apm-node" npm package from 0.12.0 to 1.0.2
  • started the APM Server so that it is running and successfully receiving data
  • configured APM agent with the two parameters you suggested:
let apm = require('elastic-apm-node').start({
	// Set required service name (allowed characters: a-z, A-Z, 0-9, -, _, and space)
	serviceName: 'My App',

	// Use if APM Server requires a token
	secretToken: '',

	// Set custom APM Server URL (default: http://localhost:8200)
	serverUrl: 'http://log.mydomain.com:8200',

	flushInterval: 5,
	maxQueueSize: 5,
})

But the memory leak still persists.

To answer your questions:

  • my process runs out of memory but does not crash (it goes up to 1.6 GiB). The only side effect is that API calls that require a little more memory start becoming unresponsive
  • I don't think I use any command line arguments to control how much memory is available to the process. I am using VS Code for debugging and the pm2 process manager in production. Both yield the same results
  • the application normally consumes about 50 - 200 MiB. With the APM agent enabled it goes up to 1.6 GiB and stays there

Thanks for investigating and the detailed response. I'll look into this and get back to you here as soon as I know more.


It has taken some time to pin down, but we think the issue is related to the APM Server being under a lot of load. If it can't keep up with the data being sent to it, it might grind to a halt, which causes newly opened sockets to never close. If the load continues, this will eat up all your memory.

We're working on a fix for this in the APM Server, and the next version of the Node.js agent will also have a serverTimeout option that defaults to 30 seconds.

In general we're also working on reducing the resources required to run the APM Server, but in any case, it's recommended to run multiple APM Servers behind a load balancer in high throughput environments. For more details see: https://www.elastic.co/guide/en/apm/server/6.2/high-availability.html

I hope this is an acceptable solution for now and we are really interested in hearing if this solves your problem.

We just released version 1.2.1 of the Node.js agent, which fixes a memory leak which I think might be the one you experienced. Please give it a try 🙂