I'm using Logstash to extract data from a database and send it to Elasticsearch.
Everything works fine; data is well processed and sent to Elasticsearch without loss.
The problem, however, is the load on the server.
I'm running four logstash.conf files on an AWS EC2 instance.
I checked a process viewer and found that the Logstash processes are eating too much memory.
Please refer to the following screenshot.
Hmm. Looking closer at the screenshot I'm not sure it's so alarming. It's using a lot of virtual address space, but not much is resident. Are we looking at different threads of the same JVM process or are you actually running dozens of Logstash processes?
I'm running 4 Logstash processes.
As I've mentioned, I have 4 Logstash conf files that look like the one I uploaded.
Then on the server, I run the following command to start them in the background.
But I don't think the number of Logstash files matters much.
I checked heap memory while running only one Logstash instance but still got the same error, 'HeapDumpOnOutOfMemoryError' (i.e. the JVM ran out of heap and wrote a heap dump).
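For reference, here is a minimal sketch of how multiple config files are often launched as separate background processes; the exact command wasn't included above, so the paths, config names, and flags below are assumptions, not the original command:

```shell
# Hypothetical example -- not the exact command from the post.
# Each invocation starts a separate JVM, so memory use multiplies per config:
nohup bin/logstash -f /etc/logstash/conf.d/pipeline1.conf --path.data /var/lib/logstash/p1 &
nohup bin/logstash -f /etc/logstash/conf.d/pipeline2.conf --path.data /var/lib/logstash/p2 &
# ...one process (and one full JVM heap) per config file.
```

Note that running the configs this way means four full JVM heaps. Logstash's multiple-pipelines feature (defining each config as a pipeline in `pipelines.yml`) runs all of them inside a single JVM, which usually reduces total memory overhead.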
I look forward to hearing from you.
Best
Gee
------UPDATED------
If you're talking about the JVM, yes, each Logstash is producing approximately 10 threads, thus provoking 'HeapDumpOnOutOfMemoryError'.
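To confirm whether those are threads of one JVM or separate processes, and to compare resident vs. virtual memory, something like the following can help (standard Linux `ps`; the heap-sizing line assumes Logstash's `LS_JAVA_OPTS` environment variable, which appends settings to `config/jvm.options`):

```shell
# One row per JVM process: PID, thread count (NLWP), resident and virtual memory:
ps -o pid,nlwp,rss,vsz,comm -C java

# Try a single pipeline with an explicit, larger heap to see if the
# OutOfMemoryError goes away (sizes here are illustrative, not a recommendation):
LS_JAVA_OPTS="-Xms1g -Xmx1g" bin/logstash -f pipeline1.conf
```

High VSZ with modest RSS usually just reflects reserved virtual address space, which matches the earlier observation that the screenshot may be less alarming than it looks.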