Logstash HeapDumpOnOutOfMemoryError


I'm using Logstash to extract data from a database and send it to Elasticsearch.
Everything works fine; the data is processed and sent to Elasticsearch without loss.

The problem, however, is the load on the server.
I'm running four logstash.conf files on an AWS EC2 instance.
I checked a process viewer and found that the Logstash processes are eating too much memory.
Please refer to the following screenshot.

Any comment or feedback would be immensely helpful.


What does your configuration look like?

Thanks for the comment @magnusbaeck.

What do you mean by configuration?

If it's regarding logstash.yml, I haven't changed anything.
I installed Logstash 5.2.0, so it should be the default configuration.

I'm using Elastic Cloud (5.1.1), and as above, I haven't changed anything there.

Server (AWS EC2):

  • memory: 4 GB
  • CPU cores: 2

I'm attaching the result of the jvmtop command for your reference.



What do you mean by configuration?

Your Logstash configuration files (typically /etc/logstash/conf.d/*).

As I mentioned above, I haven't changed anything since the installation.

I've attached all my configuration files via Google Drive.



I'm asking for the (four?) files that you probably have in /etc/logstash/conf.d/*.

input {
    jdbc {
        jdbc_validate_connection => true
        jdbc_connection_string => "jdbc:oracle:thin:@HOST:PORT/SERVICE_NAME"
        jdbc_user => "USER_NAME"
        jdbc_password => "PASSWORD"
        jdbc_driver_library => "/Users/ojdbc7.jar"
        jdbc_driver_class => "Java::oracle.jdbc.driver.OracleDriver"
        statement => "SELECT * FROM TABLE" # the real statement is more complex
    }
}

filter {
    # I'm using mutate, date, and if conditionals here
}

output {
    elasticsearch {
        index => "INDEX"
        document_type => "TYPE"
        hosts => ["URL.ap-northeast-1.aws.found.io:9200"]  # Elastic Cloud
        user => "ID"
        password => "PASSWORD"
    }
}
All four configuration files share the basic outline shown above.
Please let me know if you need more information to tackle this problem.
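
Aside: if the memory pressure comes from pulling large result sets out of Oracle, the jdbc input has paging and fetch-size options that keep rows from accumulating in memory all at once. A minimal sketch (option names are from the logstash-input-jdbc plugin; the values are illustrative, not recommendations):

    input {
        jdbc {
            # ... connection settings as above ...
            jdbc_paging_enabled => true   # run the query in pages instead of one pass
            jdbc_page_size => 50000       # rows per page (illustrative)
            jdbc_fetch_size => 1000       # fetch-size hint passed to the JDBC driver
            statement => "SELECT * FROM TABLE"
        }
    }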



Hmm. Looking closer at the screenshot I'm not sure it's so alarming. It's using a lot of virtual address space, but not much is resident. Are we looking at different threads of the same JVM process or are you actually running dozens of Logstash processes?

Are we looking at different threads of the same JVM process or are you actually running dozens of Logstash processes?

I'm running 4 Logstash processes.

As I've mentioned, I have 4 Logstash conf files that look like the one I uploaded.
Then, on the server, I run the following commands to start them in the background.

nohup bin/logstash -f logstash1.conf &
nohup bin/logstash -f logstash2.conf &
nohup bin/logstash -f logstash3.conf &
nohup bin/logstash -f logstash4.conf &
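
Note that the four commands above start four separate JVMs, each reserving its own heap. A sketch of running everything in a single Logstash process instead (the directory path here is hypothetical): when given a directory, Logstash concatenates all config files in it into one pipeline, so events from every input pass through every filter and output unless you guard them with conditionals (e.g. on `type` or tags).

    # One JVM instead of four: point -f at the directory holding the conf files.
    nohup bin/logstash -f /path/to/conf/dir/ &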

But I don't think the number of Logstash processes matters much.
I checked heap memory while running only one Logstash process and still got the same error, 'HeapDumpOnOutOfMemoryError'.

I look forward to hearing from you.




If you're talking about the JVM, yes, each Logstash process is producing approximately 10 threads, thus provoking the 'HeapDumpOnOutOfMemoryError'.
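
For reference, each Logstash 5.x process sizes its heap from config/jvm.options, so lowering the maximum there reduces the per-process footprint. A sketch (5.x ships with a 1 GB maximum by default; the right value depends on your pipelines, so treat the number below as illustrative):

    # config/jvm.options (Logstash 5.x)
    -Xms256m
    -Xmx512m   # illustrative; the shipped default is -Xmx1g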

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.