I want to recreate a Java heap memory issue that is happening in production.

Testing on a Windows instance:
C:\monitoring\logstash\logstash-8.15.1\config\jvm.options
I changed the parameters below.

JVM configuration

-Xms represents the initial size of the total heap space.

-Xmx represents the maximum size of the total heap space.

jvm.options file

-Xms64m
-Xmx64m
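
To confirm that Logstash actually picked up the smaller heap, you can query its node stats API (assuming the API is listening on the default port 9600):

curl localhost:9600/_node/stats/jvm?pretty

The jvm.mem.heap_max_in_bytes field in the response should reflect the 64M limit.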

Below is the pipeline configuration for generating log events continuously:

input {
  generator {
    lines => [
      "Test log message 1: INFO - test event 12234 #123",
      "Test log message 2: WARN - Data batch #123 missing some fields",
      "Test log message 3: ERROR - Failed to process data batch #123"
    ]
    count => -1   # no fixed count: keep generating events continuously
  }
}

filter {
  mutate {
    add_field => { "generated_at" => "%{@timestamp}" }
  }
}

output {
  stdout {
    codec => "rubydebug"
  }
}
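
A pipeline configuration like this can be started on Windows from the Logstash home directory; the file name generator.conf below is just an example:

cd C:\monitoring\logstash\logstash-8.15.1
.\bin\logstash.bat -f .\config\generator.conf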

The generator ran for 8+ hours, ingesting 1000+ log events, but I could not recreate the issue.

I understand that I can increase the heap memory size, but I need approval from my team for that change.

Initially I had 1G as heap memory; to recreate the issue, I downsized it to 64M.

Please advise

If you want to recreate a memory leak from another environment, then you will have to know what is causing it. To do that, you will need heap dumps from the production JVM when it runs out of memory.
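
For example, the following standard HotSpot flags in jvm.options make the JVM write a .hprof heap dump when it hits an OutOfMemoryError (the dump path is only an illustration); the dump can then be analyzed in a tool such as Eclipse MAT:

-XX:+HeapDumpOnOutOfMemoryError
-XX:HeapDumpPath=C:\monitoring\heapdumps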

Just running Logstash with a generator input is unlikely to trigger any memory leaks.


Complete error from logstash-plain.log:

[2024-11-15T13:27:44,917][ERROR][logstash.outputs.elasticsearch][main][6a5094c9605622b56a8d911c2d0dd9dee27197919122ff10ebef00b8ff978a27] Encountered a retryable error (will retry with exponential backoff) {:code=>503, :url=>"https://*********************************:443/_bulk", :content_length=>4514}
[2024-11-15T19:35:45,263][FATAL][org.logstash.Logstash    ] uncaught error (in thread pool-348-thread-1)
java.lang.OutOfMemoryError: Java heap space
[2024-11-15T19:35:45,811][FATAL][org.logstash.Logstash    ][main][20d507e6459bc4bdb2ebc74148b31fcb57cc2469f5a0a520f54dcfd4176ccb85] uncaught error (in thread [main]<file)
java.lang.OutOfMemoryError: Java heap space



[2024-11-14T13:10:03,688][ERROR][logstash.outputs.elasticsearch][main][6a5094c9605622b56a8d911c2d0dd9dee27197919122ff10ebef00b8ff978a27] Encountered a retryable error (will retry with exponential backoff) {:code=>503, :url=>"*****************************", :content_length=>20052}
[2024-11-14T22:39:23,361][INFO ][logstash.javapipeline    ][main] Pipeline terminated {"pipeline.id"=>"main"}

Please advise: should I increase the Java heap memory or reduce the bulk index size?
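
For reference, the amount of in-flight data per batch is controlled by pipeline.batch.size in logstash.yml (the Logstash default is 125 events per batch); the values below are only an illustration of a smaller setting, not a recommendation:

pipeline.batch.size: 50
pipeline.workers: 2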